Discussion in 'sandboxing & virtualization' started by BlueZannetti, Dec 30, 2007.
For this task, I'm using VisualHash (the .NET-free version) and FileAlyzer
Light virtualization does have some analytic possibilities, but only for the expert. It gives the user a chance to install something of questionable origin and check it out. Perhaps they can check whether any of their other security software has been tampered with, rootkit detection utilities may be run, and so forth. Nothing is foolproof, but this gives another chance before deciding to install whatever it is for keeps. If it smells bad, just roll back.
At least right now, for the applications listed in the title, only ShadowUser Pro has that capability. It is slated to be developed for ShadowDefender over the next few months, but let's allow that a few bumps in the road may appear and call that a very tentative projection.
Given some of the weaknesses of ShadowUser Pro, I probably wouldn't use it as a platform for testing questionable content, so realistically, none of these options are really currently suited as a test platform.
I tend to view these products in a somewhat different light - if I'm unsure of what's ahead, jump into shadow mode and let the chips fall where they may. When I'm done, restart to jump back. Simple, painless, clean.
From your points above, it is unclear whether you found SafeSpace successful or not. Points 1 and 3 contradict each other: you say SectorEditor wouldn't load in SafeSpace, and then point 3 says that a permanent write was successful?
I was hoping you could clarify your points a bit, as I know that low level disk access is not possible in SafeSpace.
SafeSpace was added as a pure default installation.
The sector editor is not an automatically protected application. If I identified the sector editor as a SafeSpace protected application, it wouldn't launch.
Point 3 was the case where the sector editor was not an explicitly protected application, but the entire D:\ partition was set to be virtualized instead. I assumed that this would be a more typical usage scenario, and in keeping with some recent malware targeting. I attempted to and apparently successfully did perform low level sector edits of the D:\ MBR under this scenario. The changes appeared live and did survive a restart.
This result is basically identical to that obtained by both Returnil and ShadowDefender prior to some code tweaking by each of them. I assume SafeSpace is in the same position they were a short time ago - a very minor tweak and a potential little gap is closed. The specific editor used was Julie Lau's Sector Editor. However, if this result still sounds fishy to you, I can give it another whirl, although it might be better to see if you get the same result following the steps I did (fresh install, virtualize drive, perform edits).
SafeSpace is an application level sandbox, as opposed to Returnil and ShadowDefender, which are system wide. The difference is that SafeSpace protects against malicious activity only for applications which are running inside the sandbox.
By default, SafeSpace protects internet facing applications (web browsers and instant messengers), the most exposed entry points into a system. Any activity or exploits targeting those applications in the sandbox will be restricted. So, as an example, if you are hit with a drive-by which infects you with malware that intends to perform low level disk edits, it will fail because it is inside SafeSpace.
So although your test is perfectly valid for Returnil and ShadowDefender, it is out of context when you consider what SafeSpace is protecting you from.
Do you agree?
Short answer - sort of.
These types of challenge tests are easy to dream up in a way that some fairly nasty behavior can be inferred. In this case the inferred behavior would be potentially disastrous activity designed to render your system unworkable via corruption of the MBR or partition table. Now, according to my understanding, if this application had been downloaded during a SafeSpace session and launched, it should be launched in a sandboxed/protected state by SafeSpace - and the first example (failure to launch) will apply.
Now, some of the other settings, say virtualizing partition D:\, imply certain elements which don't seem to be quite achieved - at least against a launched console-based attack, which is what my example was. Are there other routes that this could play out? Not that I can think of, if the application works precisely as stated and no exceptional events occur.
Do I see this as an operational issue? At the moment, not really, although perhaps some different terminology should be employed to describe partition/drive virtualization since this implies systemic protection. Should a user be concerned that this is a gap? Personally, I don't think so.
Enter HIPS! My HIPS monitors the Windows command console and aborts its activity until I, the user, have first had time to review the SOURCE + TARGET and any other data of interest before granting permission to continue.
I am a HUGE proponent of HIPS because of the Windows internal code schematics involved in keeping close tabs on these often overlooked manifestations of potential forced intrusions.
In retrospect, a quality sandbox should contain any such activity originating from (in this case) the command console, be it safe or risky. But the underlying question is how far-reaching a disruption order could be once a set of preconceived commands is allowed to signal other areas of the operating system, even if sandboxed to the containment area. It seems it would have to be specially coded to jump out from the program itself, and that possibility, because we are speaking of another software program, is not impossible by any stretch.
Long View, the more posts I read here the more I am convinced that ditching realtime AV was the best decision I have made since I ditched realtime AS.
Some users here may decide to ditch resident AV, and that's fine if you prefer another solution that you think is more secure given your understanding of computer security.
Though I think for the average uneducated user, the blacklisting concept will still be the bread and butter for computer security.
Just a personal perspective here - but I really think it's less a question of understanding computer security - that can be such a general and vague topic - and more a question of how you would determine whether or not a given executable or scripting file in front of you has malicious intent? That's the crux of the question for any user, even ones with rather strident default deny execution restrictions since, naturally, you can make a deliberate choice to execute.
If presented with file setup.exe obtained either on download, from a friend, just looking through an old collection of downloads, etc., how would you make the determination that it's malicious?
Running it and observing that your system does not appear compromised can be somewhat dicey since this implicitly assumes that any malicious actions are executed rather quickly - there are plenty of examples that show this is a bad assumption. If you look purely at actions, well, a lot of times the actions are no different than those used by regular applications. The context and content is often different, but the basic actions are the same. Unless one is willing to personally pull apart the file, or severely restrict what is done on a computer, a resident AV provides a lot in the way of expert backed guidance in assessing any file obtained from unvalidated sources.
I believe it continues to go well beyond the average uneducated user, with some qualifications. Those qualifications include:
The specific concerns voiced do not apply to some scenarios (e.g. rigorous default deny with no unvalidated exceptions)
There are plenty of complementary approaches which yield the same end results under specific circumstances.
There's a Heisenberg Uncertainty type principle intrinsic to security - as security is heightened, the facile user experience is degraded. Realistic approaches recognize this and balance these two forces
Earlier this year, in April, I estimated that KAV/KIS would hit 400,000 signatures basically at the end of 2007. It turns out that I was rather conservative and off by ~ 100,000. As you can see in the figure below - which tries to assess malware growth rates by examining coverage provided by one of the comprehensive solutions - the past year (really since March) apparently has experienced another of the periodic accelerations in the growth of malware. The times in months listed on that figure are the doubling times for malware signatures. Specific values are different than some earlier figures due to specific region cut-off points applied, but the basic trending behavior remains unchanged.
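The doubling times quoted on that figure can be estimated from any two signature counts, assuming exponential growth. A minimal sketch - the counts below are illustrative round numbers, not Kaspersky's actual monthly data:

```python
import math

# Doubling time implied by growth from n_start to n_end over a period,
# assuming exponential growth: n_end = n_start * 2 ** (period / T).
def doubling_time(n_start: float, n_end: float, period_months: float) -> float:
    return period_months * math.log(2) / math.log(n_end / n_start)

# Illustrative: a database growing from 250,000 to 500,000 signatures
# in 11 months has an 11-month doubling time.
print(round(doubling_time(250_000, 500_000, 11), 1))  # → 11.0
```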
There does appear to have been an appreciable acceleration in the appearance of malware on the Internet since April 2007, which has obvious consequences for any trailing response measure - which any blacklist approach represents.
So what's it all mean - at least IMHO?
Most users need some mechanism to provide an independent verdict on the fidelity of downloaded content. Right now, the best mechanism to provide that assurance is via the use of an AV product. There are other approaches, but this remains the easiest to implement.
Second, the robust backup of this scheme is growing increasingly important. There are multiple solutions here as well, but the light virtualization approach provided by any of the subject programs - and some others - appear particularly robust and facile to implement at the moment.
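One lightweight complement to an AV verdict on a download's fidelity is comparing its cryptographic hash against a value published by the vendor - the same check tools like VisualHash perform interactively. A minimal sketch; the file name and vendor digest below are hypothetical placeholders:

```python
import hashlib

# Compute a file's SHA-256 digest in chunks, so large downloads
# don't need to fit in memory.
def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical usage - "setup.exe" and the vendor digest are placeholders:
# if sha256_of("setup.exe") != digest_published_by_vendor:
#     treat the file as suspect and investigate before running it
```

This only verifies that the file is the one the vendor published, of course - it says nothing about whether the vendor's file is itself clean.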
Well, I would do an on-demand scan before executing the file.
Personally, I am not questioning the usefulness of AV softs; I'm questioning the usefulness of realtime AV scanning.
There will be more and more viruses/spyware which can bypass SD/RVS/PS in 2008.
If you search this topic in Chinese programmer forums, you will find, from theory to code, that it will be a new fashion and interest for them. LOL
The PassDisk, Robo Dog, KillDisk, and CleanMBR, which came out at the end of 2007, are the pioneers. But it is bad news for these products.
You would think that curve might have spiked earlier and then tapered off by now, but it looks like it tracks the AVs' signature counts specifically.
Thanks Blue for the details btw and commentary.
The computer these days, especially the OS Powered By Windows!, has in my opinion been geared from the start to expand development all around the world and create developers with innovative thinking, concepts, then distributions, and so forth, I think (guessing here). That is likely why each security field tries to limit its expertise chiefly within its own respective specialty, and it's likely to remain that way, because think about it...
If and/or when AVs were to incorporate sandboxing/virtualization technologies into their traditional models, what exactly would that lead to? Mergers and sellouts by the droves? Some are already leaning in that direction with so-called SUITES, and look at firewalls with HIPS now, and vice versa.
I apologize if this, my own personal opinion, seems to steer a bit off course, but I mention it because of this: just as Blue has laid out from Kaspersky statistics, the malware curve continues to trend upward at an alarming rate with no real deviation to the contrary. So what alternatives do Windows security-aware users have to bridge the gap, so to speak, or at least complement and/or shore up their antivirus solution?
The handwriting is already on the wall: malware writers have definitely targeted the AV market most heavily, and obviously mean to put their positions at risk, IMO.
So it begs the question: are we soon going to witness another transition in the making here? Or will each expert security product vendor remain within their respective field and continue to offer basically the same model with slightly improved detection every new release, as the malware writers continue on their own quest to drive this curve up as far as they can push it?
Excellent point, which emphasizes the need for light virtualization apps.
If Kaspersky has 500,000 signatures now, what will happen if five years from now that number has increased twenty-fold?
The database just keeps getting bigger, because even if a virus hasn't been seen for a while, they can't take it out of the database because you can never know if it will return in the future!
As Long View stated in another thread, AVs are based on an outdated idea.
In the virus, trojan, and rootkit war, I chose to stop funding the mutual proliferation. I realized that I am always behind regardless of how much money I spend on programs, or the ones I get for free (AVs, ASs, and the like).
I chose Powershadow 2.6 and 2.8 (Greyware versions, yes) under WinXP. I liked the idea behind them in how they protect mainly for internet surfing and online poker. I feel it protects me from the insertions that occur while involved in these types of activities.
I now have a new laptop with WinVistaP. I can't use PS, so I tried Returnil free to test it out. I don't know if its protection scope is more limited than the pay-for version's, or if it is an insidious intentional design, but it would crash after a few weeks, right around 3 weeks to 1 month. This occurred on two different systems (both laptops, different manufacturers, same OS, WinVP), run by two separate users. It forced a recovery on both machines. On the Gateway, it asked to reinstall WinVP and left a .old version; on the Toshiba it just crashed, causing a reformat.
What I cannot determine is whether it is an infector or a conflict (unintentional or otherwise).
Are there any plans for PS migrating to Vista, or should I just return to XP?
(I am not a code level thinker, so Linux is a little limited for me, not to mention its lack of a Microsoft-style partnered support structure.)
P.S. The trend in the curve looks automated or mechanical. Maybe automated VTR generators and attacks growing in the unprotected unchecked areas of the computer world. Maybe something like virtualization would reduce this type of spread. Again, an on the surface perspective.
I would return to XP for so many reasons - Vista is still not out of beta in my view - and use PowerShadow, Returnil, DeepFreeze... whatever.
Yes, there will be more and more attacks, but so what? As the take-up of freeze programs grows, it is to be expected that the level of attacks will also grow. DeepFreeze has been attacked a number of times, initially failing, being fixed, and then attacked again.
Anyway, what are we supposed to do? Continue with freeze programs that will be attacked and then fixed, or go back to AV/AS, which will continue to take days if not weeks to get fixed, will continue to slow machines down, and will continue to produce 50 false positives for every real nasty (yes, I confess I made that stat up, but only to exemplify my point - not to delude in the way that most stats are used). Real-time AV/AS has had its day. I can see little point in running a program in real time which will stop X billion nasties that I am not going to be attacked by but lets through the latest and greatest. Even if the same argument is made against freeze programs, at least they do not slow machines down nor produce false positives.
Which is a quite valid distinction and point to make.
My own preference is to keep realtime scanning for the moment, but the trending in that curve has to be recognized as an absolute killer of this approach at some point - not from the perspective of being unable to keep up with new entries per se (which is a real and significant issue as well), but from the sheer logistics of rapidly performing the signature analysis and comparison - in other words, maintaining the function with a limited resource footprint. There are many ways to address this, but it requires more finesse and forethought by the day.
There's no doubt of that since we already have seen as much.
I suppose the open question is how many independent bypass schemes really exist that could conceivably allow compromise of these types of products and what are the requirements to allow that compromise to occur.
I'm no expert in this area, but I do view the underlying conceptual simplicity of the approach as a powerful trait. They really perform one discrete function and there are a finite number of ways one can place data on a HDD surface. We've seen a couple of challenges quickly addressed. It remains to be seen whether more sophisticated approaches emerge.
Context is also critical to appreciate. An internet cafe in China and a random home user present two very different scenarios. One provides unfettered physical access to the machine to allow compromise, the other doesn't.
I know you're just using reasonable numbers for effect, but with a 10.8 month doubling time, the database would be projected to increase 47 fold (=2^(Period/Doubling time)) in five years at just the current growth rate. Past history implies an acceleration will occur sometime in that period as well, so this could be a low estimate.
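That projection can be checked directly with the growth-factor formula quoted above (the 500,000 starting count is the figure from the earlier post):

```python
# Growth factor over a period at a given doubling time:
# factor = 2 ** (period / doubling_time)
def growth_factor(period_months: float, doubling_months: float) -> float:
    return 2 ** (period_months / doubling_months)

five_years = 12 * 5
factor = growth_factor(five_years, 10.8)
print(round(factor))            # → 47
print(round(500_000 * factor))  # projected database size at the current rate
```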
However, the period reflected by the curve is a fairly homogeneous one which roughly covers the release lifecycle of Win XP. Unknowns moving forward include:
The impact of Vista and the architectural changes of that OS on malware proliferation.
The re-emergence of Apple/OS-X as a mainstream alternative
Ready, low/no-cost Linux-based alternatives
Each of those factors represents a landscape shift, which unfortunately is likely to only lightly touch the current installed base of machines.
In a general sense, it's less an outdated idea and more one that has potential scalability issues in the current Windows OS environment.
Increasing connectivity certainly enlarges the pool of potential exposure as well as transmission rates. I already see part of the fallout of that at my ISP - they're much more aggressive (too aggressive in my estimation) in filtering email from some of these unchecked locales, rendering product support from vendors in those locations a hit-and-miss proposition.
My own vision is slightly different - although I believe your approach is very reasonable and has a lot to recommend it.
We've all seen a lot of blood spilled here in discussions involving AV detection differences of 0.X % without any real information on whether that 0.X % population of malware was a viable and significant threat to anyone. From the testers' perspective, digging down that deep is not a worthwhile expenditure of their resources, and even if they did dig deeper, quantifying viable and significant is not an easy task.
That said, the direction I'm going is as follows: in the tradeoff between a single AV which provides 99.XY% detection and a very light AV with potentially much lower global detection (but detection that covers the primary extant threats) augmented with light virtualization, does the AV/light-virtualization combination provide a preferred balance of performance traits?
My own experience is a qualified yes. I tend to think it's a somewhat germane point in that this type of exercise involves an active tradeoff in the performance of one dimension with coverage in another, which tends to run counter to a lot of the discussion here and elsewhere in which the absolute limits in performance are demanded from all dimensions.
There is no doubt that a price has to be paid for most things in life; there is usually a trade-off, and I can see the attraction of a light AV (if such exists) combined with virtualization being preferable to a heavy but effective AV.
Each person must do their own research and thinking.
In 1995/96 I started on dial-up and used Norton. Over the next few years I went through the SpywareBlasters and Spybots and Ad-Awares... and then one day I realised that I had never actually seen a virus and that the malware being reported was little more dangerous than the odd tracking cookie.
My security is listed in my sig. I do run on demand scans every so often and never find anything more dangerous than a false positive ( which I do report).
I am not recommending that everyone throws away their real-time AS/AV software, firewall, HIPS, whatever. I am saying that it is possible to live quite happily without them, and that any program installed on a machine needs to pay its way and not be just another layer of clothing - in case it gets cold.
To carry the clothing analogy a bit further, I see a lot of folks donning ski parkas for a walk on the beach in summer - you can do it, but it's probably not the best experience