I would just leave it at the default... There is a higher risk of running into problems by removing it than by leaving it at the default.
Long article on the matter: http://www.tweakhound.com/2011/10/10/the-windows-7-pagefile-and-running-without-one/
Disable it for a day and see if you run into any problems. Chances are, you will only see higher performance, since pageable memory will stay in RAM instead of being written out to the pagefile on disk.
I have had my pagefile disabled for 5 years or more and never ran into any problems. That's why I'm saying the requirement to have a pagefile may have been true for older applications back in the 90s and early 2000s, but I don't think it's the case anymore. I know there are a lot of cautious people around here, but is there any evidence that any current application requires a page file? If so, I would like to know which ones...
I don't use one. I have 8GB of RAM and haven't run into an OOM once in my life. A lot of programs will page when they don't need to, which isn't really a big deal at all, but... why let them? Besides, on an SSD I need every GB I can get.
Depends on the OS and system settings. On Windows, AFAIK, memory allocations will fail when there is no memory left. Some programs will not be able to handle that and will crash; others may attempt to exit more or less gracefully. Likewise on BSD and probably most other UNIXes.

On Linux, there are as usual a bunch of settings: vm.overcommit_memory, vm.overcommit_ratio, and vm.oom_kill_allocating_task.

- The default setting for overcommit_memory is 0, which does not mean "never overcommit," but rather "sometimes overcommit and sometimes don't, depending on circumstances." 1 means "always make memory allocations succeed, no matter what," and 2 paradoxically means "make allocations fail once the equivalent of all swap space plus X percent of RAM has been allocated."
- The "X percent of RAM" above is overcommit_ratio, which is 50 by default. Remember, the amount of space allocated (not necessarily actually used) before memory allocations start failing with vm.overcommit_memory=2 is that percentage of RAM plus all swap space.
- oom_kill_allocating_task pertains instead to the OOM killer, which is what normally kicks in when all memory and swap are used up. By default, the OOM killer tries to guess which programs are being hogs and kill those; in practice this often kills essential processes and results in a reboot being necessary. With oom_kill_allocating_task set to 1, the killer will skip the guesswork and only kill the task that itself triggered the OOM condition.

I would hazard a guess that, if you want any kind of predictable behavior on Linux with swap disabled, you should set vm.overcommit_memory=2 and vm.overcommit_ratio=100. But that is just a guess on my part. Don't assume Linux will run reliably with such settings; I haven't been able to test them at all! *ahem*

Anyway, if you're using Windows, just be aware that there is a possibility that some essential program, somewhere in its code, does not correctly handle a memory allocation failing.
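For what it's worth, that guess could be written as a sysctl fragment like this (hypothetical and completely untested, as noted above; apply at your own risk):

```
# /etc/sysctl.conf: hypothetical settings for running without swap
# (the guess from above; NOT tested)
vm.overcommit_memory = 2
vm.overcommit_ratio = 100
```

With those values, allocations should start failing once commitments reach 100% of RAM (plus zero swap), rather than the OOM killer kicking in later.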
If such a program runs out of memory, it will probably crash and lose all unsaved data. Your OS (probably) won't crash, though, unless you're running Linux with default settings.
Hmmm, this explains a lot. So basically, for people who don't do anything RAM-intensive on their computer, no page file is required. For those who have 8 GB of RAM or less and do memory-intensive work, that's when problems occur.