I have seen this since I started using the full version of pg1x-3.100: with execution protection enabled, when a new .exe is opened there is a chance (some do and others don't - why?) that it appears in the Windows Task Manager and occupies 52-60 KB, even though it is blocked from running. If a user made a batch file that tried to run a hundred disallowed .exe files, would system memory suffer and the system eventually crash? Can anyone help me find out what is really happening here? Thanks.
Hi optical, I am not quite sure that I understand your concern, but if you mean running a .exe that tries to start other .exe's, then ProcessGuard will alert on each new .exe as it tries to start, provided those .exe's are not on the security list with permission to run. The first "trigger" .exe would also need to be allowed. In addition, if malware had changed a .exe that is on the allowed list, then ProcessGuard will not allow the changed .exe to run without user permission. HTH Pilli
Thanks Pilli. I am sorry for my poor English. My concern is this: with execution protection turned on, I tried running a .exe that was not on PG's allowed list, and I saw that .exe's name appear in the Windows Task Manager, occupying about 60 KB. I tried the same file again and again, and each attempt occupied another 60 KB, and so on. I have not tried it yet, but I could imagine making a batch file that runs many such .exe files. Since cmd.exe is allowed to run, that batch file would run anyway, wouldn't it? You see what I mean? I hope I am not misunderstanding you. Thanks
Hi optical, I think I understand you now. Yes, as the .exe is started, a stub is shown in Task Manager, but this stub is not allowed to execute any code; it is in a halted state and cannot run. When you deny execution of the .exe, you will see a window pop up saying "Handle not valid" and the .exe stub is closed. Pilli
Hi optical, I took a stab at an experiment similar to what you described. I renamed five copies of a trivial program and instructed PG to "remember, always deny" execution of these new program files. Then I started a script that looped 200 times, trying to execute each program on each pass through the loop (5 programs * 200 passes = 1000 denials). I discovered that after these 1000 denials, Task Manager showed about 600K more memory in use than before the script started. I ran the same script six more times, and Task Manager then showed 3900K of additional memory usage. This might be a memory leak on the part of ProcessGuard, or it could be something else. After the holiday I can try to narrow down a few possibilities; perhaps someone else will have experiments to try as well. My testing was done on Windows 2000, SP4.
I have been seeing an increasing number of greyed-out processes in Process Explorer after my computer has been running for a while, and I recently realised that there was a correlation between those processes and my choosing to deny (once). After a little testing, I have found that the same thing happens when you have "block new and changed applications" enabled. While doing this I seem to have hit another minor bug: I managed to get two greyed-out entries from a single program execution. I'll put that in another thread if I can reproduce it. Something for after the holidays, I think.

In my case, on XP Pro SP2, each program is showing (in Process Explorer output):

- different virtual sizes: I can see 2372K, 4928K, 1200K (so it is presumably consuming some of the Windows paging file)
- smallish working set sizes: 64K, 52K, 32K
- private bytes of 524K, 3092K, 76K
- threads: 0
- handles: 0
- user objects: 0
- GDI objects: 0

Now, given that it is a pre-execute stub being invoked rather than the actual program, I'm not sure why the working set size and number of private bytes would vary for each one, unless something specific to the application is being initialised for the stub execution (prior to the allow/deny)....
Some additional information, in case anyone is interested in the basics of how this works. It took me a little while to remember where I saw this, but the "how-to" basics of creating a suspended process and loading an executable into it (including source code) can be found on the SIG^2 group's website. It is probably similar in nature to what DCS do for ProcessGuard and SLoader, but the nice thing is that the source is supplied. See here for the article, and from that page:

The reason I am mentioning this here is that the allocated virtual memory sizes I saw for the suspended processes were different, and I remembered point #6 from the description, which was that the code demonstrated allocating more memory (if necessary) to be able to load the target executable over the original one. So if I understood this correctly, ProcessGuard would only need to load (or create) a process using a stub executable (with a fixed size and hence fixed virtual memory requirements); then, if we choose to "allow", it could allocate the required memory and continue. I'm guessing that the method used in this code isn't generic enough (it possibly wouldn't work for non-relocatable code) and/or DCS are doing something different/extra in order to make things faster. Also, if it were easy, there would probably be more imitations out there...

This is interesting but probably not relevant, seeing as Pilli indicated that the PG suspended process is supposed to be killed when we press deny (or if "deny always" or "don't execute new/changed" was selected).

NB: The Sysinternals guys have released the 4th edition of their Windows Internals book, and this one covers XP as well - the only pity is that there is no searchable electronic copy this time :-(
Pilli, No probs, I figured if I was going to hang around and read the posts here I might as well contribute occasionally...