Stormy weather for malware defenses

Discussion in 'other anti-malware software' started by ronjor, Mar 7, 2007.

Thread Status:
Not open for further replies.
  1. EASTER.2010

    EASTER.2010 Guest

    I see, point taken. Then my own suggestion would be to turn to Prevx1, which can take all those actions and make the decisions for you based on a community database they have well established, one that seems satisfactory enough that many users trust it completely.
     
  2. Pedro

    Pedro Registered Member

    Joined:
    Nov 2, 2006
    Posts:
    3,502
    Surely there are other interesting approaches as well: CH, SandboxIE/GeSWall/DefenseWall/etc.
    CH drops everything else and concentrates on behaviour: how to identify malware behaviour and block it. Sandboxes are a second firewall to me, and I keep one too.
    Prevx1 is more appealing to me, but surely there are other interesting solutions.
    Malware writers evolve, and so does the other side of the law.
     
  3. lucas1985

    lucas1985 Retired Moderator

    Joined:
    Nov 9, 2006
    Posts:
    4,047
    Location:
    France, May 1968
    Don't forget social engineering.
     
  4. duke1959

    duke1959 Very Frequent Poster

    Joined:
    Jul 21, 2006
    Posts:
    1,238
    I think that for an average user like myself, although I am armed with the knowledge obtained here, a router firewall combined with Firefox, an antivirus, Windows Firewall, and Cyberhawk would be all that is needed. CH is basically a set-and-forget program, with pop-ups, if there even were any, that are easy for an average user to understand. Of course, because of the knowledge obtained here, even as an average user behind a router SPI firewall, I still currently have AVG AV and FW, Spyware Terminator, and Cyberhawk installed. LOL.
     
  5. herbalist

    herbalist Guest

    I doubt that rate will remain constant if malware writers continue the attack described in the first post. If more malware writers start using this tactic, what would be the result? The vendors' coders have to put in more hours, or the vendors hire more coders to dismantle variants. Costs increase for users, and the databases PCs have to deal with grow larger. Either way, the user pays for the AV vendors' need to defend themselves.
    Consider this point from the article:
    Much of the malware we deal with comes from these botnets. In some ways, these botnets are the evolution of the original computer virus concept. In over-simplified terms:
    Virus infects computer, sends infected content to another where it's opened, replicates, and repeats.
    Malware-spreading botnets are doing almost exactly the same thing.
    Malware turns a computer into a botnet component, sends infected content to others where it's opened, turning each new PC into a botnet component, and repeats.
    Same concept, but much more efficient and productive. Like viruses on steroids. Nothing I've seen indicates that this trend will stop anytime soon. If anything, it will increase and raise the exponential growth rate even more.
    In other words, advances in computing power are consumed by the security apps designed to protect them, leaving the user with little if any gain for their money. I'd call that a steady loss of efficiency, a waste of processor power, and an unnecessary waste of electricity. When we demand more efficiency from everything else we use, why do we willingly accept the opposite with computer technology?
    I'm still running the same 98 box I've had all along. I can honestly say that it's faster and more stable now than it ever was. That said, I'm also at the limits of my hardware, especially the processor. With DSL, my processor is the limiting factor, which isn't surprising when it's a 366 MHz unit. New hardware would be nice. As much as I'd like it, I can't honestly say I need it, and when I do upgrade, the extra speed and processor power will be for me to use, not for some security app to waste, leaving me with nothing more than I had.
    That sounds good on paper, but I know of no realistic way to tell what the next attack vector may be. Go back a few years for a moment. If someone had told you that looking at a JPEG could infect your system, would you have believed it? How about PDF files? Flash files? The WMF exploit: who had a strategy that anticipated that coming? Since there's no way to know what's coming next, how do you decide what is likely? For myself, I work on the assumption that if something can be exploited, it will be.
    This assumes that such a rootkit gets detected. That's half the battle these days. It's also very much a factor of the user's skill and knowledge. Many of us here could win a battle with a rootkit, but the average user? Factor in the OS version. With 98, I don't have to fight that battle at all. With XP, the battle is there but can be won. Then there's Vista. Where that stands with removing rootkits remains to be seen.
    Regarding scanning downloaded content: if you use a download manager with integrated AV scanning, it's done for you once you set it up. That said, scanning downloaded material should be part of any security policy, especially if any executable content is involved.

    Yes, a certain amount of knowledge and some discipline are needed. How much depends greatly on which version of Windows you use. The traffic and application control aspects of this are much simpler on a DOS-based unit than with XP. I don't have to deal with services. The OS works fine without system components getting internet access. There are far fewer processes to control (or to be exploited). I'm limiting this to the software aspect of the policy. I'm definitely not recommending using a software firewall to replace a router or anything like that. Far from it. IMO, both are necessary, as routers are not impervious to attack. Even though they are hardware, they run their own software, which can have vulnerabilities of its own. The one supplied by my DSL service, for instance, is limited to an 8-character administrative password (my next investment: a better router).

    My security strategy starts with traffic control via a rule-based firewall. Only the software and system components that actually need internet access get it, and then only to where they need to connect. Apps requiring incoming connections are limited to the specific IPs they need, with the ports and protocols to be used specified. While some knowledge of basic internet function is necessary (IP address structure, basic protocol types, port numbers), writing such firewall rules is more a discipline issue. It's taking the time to look up the IP in the firewall alert to see who it belongs to, noting what ports and protocol it's using, then making rules specific to those, restarting the app, and doing it again, as many times as it takes. It's avoiding the "allow all" options for apps that don't need them, whether for IPs, ports, direction, etc. Why let your AV updater connect anywhere when it only needs access to a very few IPs?
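    As a rough illustration only (the app names and IPs below are invented, and no real firewall exposes exactly this interface), the default-deny rule matching described above might look like:

```python
# Hypothetical sketch of per-application firewall rule matching:
# each allowed app gets rules naming the exact remote IPs, ports,
# and protocol it may use; anything unmatched is blocked by default.

ALLOW_RULES = [
    # (application, remote_ip, remote_port, protocol) -- illustrative values
    ("av_updater.exe", "10.1.2.3", 80, "TCP"),
    ("av_updater.exe", "10.1.2.4", 80, "TCP"),
    ("browser.exe", "any", 8080, "TCP"),  # routed through a local proxy
]

def is_allowed(app, ip, port, proto):
    """Default-deny: permit only connections matched by an explicit rule."""
    for r_app, r_ip, r_port, r_proto in ALLOW_RULES:
        if (r_app == app
                and (r_ip == "any" or r_ip == ip)
                and r_port == port
                and r_proto == proto):
            return True
    return False
```

    The discipline herbalist describes is exactly the work of keeping that rule table tight: looking up each IP in the alert and adding it explicitly instead of reaching for "allow all".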

    Some apps, like the browser, can connect anyplace if the user allows it. Mine is routed through Proxomitron on a non-standard port. In some instances, a browser will want to use a non-standard port that just won't work through Proxomitron. Game sites are one example. On my box, I have a rule specific to that site's IP and the port the game server uses, avoiding an "allow all" rule.

    Proxomitron fills most of the content filtering role on my system. Other examples are the hosts file (filtering ad servers, malicious sites, etc.), NoScript, etc. Your browser settings have a lot of say here as well. Of the 3 basic control policies, this is probably the most complicated and takes the most time. By keeping the content filtering separate from the traffic and application control aspects, the user can choose what best suits their needs and skill. An app like Proxomitron can be intimidating when you start studying how the filters work, but there are some good sets freely available. Some filter sets allow whitelists of sites for different content, like a list of sites allowed to use JavaScript. Way too many options to cover here.
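    The hosts-file trick mentioned above can be sketched as follows; the domains are placeholders, not a real blocklist:

```python
# Sketch of hosts-file based content filtering: known ad or malware
# domains are mapped to 127.0.0.1 so lookups for them go nowhere.

HOSTS_ENTRIES = """
127.0.0.1  ads.example.com
127.0.0.1  malware.example.net
"""

def build_blocklist(hosts_text):
    """Collect every hostname the hosts file points at localhost."""
    blocked = set()
    for line in hosts_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "127.0.0.1":
            blocked.update(parts[1:])
    return blocked

def resolve(domain, blocked):
    # A blocked domain "resolves" to localhost; anything else passes through.
    return "127.0.0.1" if domain in blocked else "pass-through"
```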

    The control of applications and their activities can be anything from system policies to HIPS. On my box, it's SSM with a ruleset that has all parent-child settings, allowed hooks, etc. specified. IMO, the policy editor for 98 isn't a viable option. It's too easy to defeat with tactics malware has used for some time. It takes some knowledge, but more discipline, to specify each allowed parent and child, but the result is a security policy that doesn't allow risky behavior or unknown processes. It no longer matters if your AV doesn't recognize "malware app A". Unless you choose to allow it, it's not going to run or infect you. With SSM, for instance, if you run with the UI disconnected, the user won't even be asked to allow it. It's just blocked.
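    A hypothetical sketch of the parent-child idea, with invented process names and rules; a real SSM configuration covers much more (hooks, drivers, etc.):

```python
# Sketch of parent-child execution control: each program may only be
# started by the parents explicitly listed for it. Unknown programs,
# or known ones started by an unlisted parent, are silently blocked.

PARENT_RULES = {
    # child process   : parents allowed to launch it (illustrative)
    "browser.exe"     : {"explorer.exe"},
    "proxomitron.exe" : {"explorer.exe"},
    "regedit.exe"     : {"explorer.exe"},  # not by scripts or other apps
}

def launch_allowed(parent, child):
    """Default-deny; with the UI disconnected there is no prompt,
    a disallowed launch is simply blocked."""
    return child in PARENT_RULES and parent in PARENT_RULES[child]
```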

    This type of policy isn't for users who like installing new apps regularly. It isn't for users who don't know Windows Explorer from Internet Explorer. It's for those who know what's on their systems and have them equipped the way they want them. Once finished, your security apps and policies can protect you in most any situation. I'm not careful about where I browse. I don't have to be.
    Rick
     
  6. duke1959

    duke1959 Very Frequent Poster

    Joined:
    Jul 21, 2006
    Posts:
    1,238
    Wow, herbalist, that was very informative, but it still makes me want to ask if you agree at all with what a poster here in this forum says about simply using ProcessGuard Free for protection. If it can't execute, it can't infect. I am beginning to think that maybe this is true, and that just using PG Free with an AV and maybe Cyberhawk really is the best way to go, especially if you're behind a router firewall and using Firefox with NoScript.
     
  7. fcukdat

    fcukdat Registered Member

    Joined:
    Feb 20, 2005
    Posts:
    569
    Location:
    England,UK
    Who, me? :D

    Just to clarify, I am not saying to solely use PG, but to use it as the big iron up front. Again, this will suit someone who has a matured system setup and doesn't download new software all the time.

    Anti-exec should be the big iron up front for all mature systems, IMO.

    Ultimately, stuff like this boils down to the end user's ability and confidence in their arrangement. Putting it in slang: I need 2 firewalls personally, one between the web and my computer, and the other between executable code and the computer's memory. These 2 forms of firewalling offer the control that I require and have confidence in, but as always, what works for me might not be suitable for all :)

    I also use an IDS to patrol in between, but in all honesty I don't need it; I just have a soft spot for the cute little software *puppy* with added hosts protection (it saves manually editing the hosts file after some types of malware infection when in malware-hunting mode).
     
  8. duke1959

    duke1959 Very Frequent Poster

    Joined:
    Jul 21, 2006
    Posts:
    1,238
    Yeah, you, fcukdat. LOL. I have PG Free installed now with my AVG AV and FW, along with Spyware Terminator and Cyberhawk. Do you think I should get rid of ST? I feel CH gives me some rootkit protection and complements PG Free more, but I'm not sure. Just wondering.
     
  9. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    Actually, anyone who incorporated White List tactics as part of their security strategy.

    wmf zero day

    Right off hand, I can think of 7 people I know personally, and one in the forums: fcukdat, using ProcessGuard.

    [Edit: Ade, I just saw your post!]

    You don't have to know!

    From an article discussing White List from two years ago:

    There is no reason why anyone should be infected by an inadvertent mishap or zero-day exploit.

    Now, downloading/installing stuff is another situation, and requires different tactics. But that's another topic.

    regards,

    -rich

    ________________________________________________________________
    "Talking About Security Can Lead To Anxiety, Panic, And Dread...
    Or Cool Assessments, Common Sense And Practical Planning..."
    --Bruce Schneier
     
  10. fcukdat

    fcukdat Registered Member

    Joined:
    Feb 20, 2005
    Posts:
    569
    Location:
    England,UK
    Without going too OT, this is your decision entirely and comes down to what you feel confident with.

    I will say this: from my level of knowledge about malware rootkits (and thus protection against them), in order for a rootkit trojan to be loaded, code has to execute in the first place via the dropper file.

    So for me, driver loading is a non-issue for protection against rootkits, because the droppers are caught by the anti-exec function of PG Free.

    Again, this boils down to what folks know and are confident using :thumb:

    Good post, Rich :thumb:
     
  11. cprtech

    cprtech Registered Member

    Joined:
    Feb 26, 2006
    Posts:
    335
    Location:
    Canada
    Quotes from the article:

    "Every day, it has been a new set of subject lines and new tactics to get people to open these," Allysa Myers, virus research engineer for security software maker McAfee, said in an interview with SecurityFocus.

    "The program compromises systems by luring their users into opening the attachments of messages with subject lines regarding current news events, including violent storms in Europe--a characteristic that led to the program's naming."


    Just imagine the reduction in infections if more common sense were used in dealing with email attachments. I'm surprised this isn't brought up more in this thread.

    Yeah, I would agree :)
     
  12. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    herbalist,

    You may be correct at some point, but I do prefer to work with objective data if possible when it is available. The updated figure below is current as of today (3/10/2007). There are absolutely no indications of deviations from the trends established over the past couple of years, despite all the hand wringing over the past 6 months or so. Will things change at some point? I'm sure they will, but the doubling time could increase as well with changes in detection technology. I've provided estimated doubling times for each period since 2001 as well.

    I realize that; I was simply reacting to your use of the term unremovable - everything is removable - everything.

    ...and you've just lost the majority of users here. The "once you set it up" is where that happened.

    There is no need for a better one. Let's be realistic here. Disable remote administration, use 8 characters, and you're still concerned to the point of investing in a new router?

    What you do is fairly complicated. I realize that it's exceptionally safe. Let's just say that none of the machines I use go to those measures, and as far as I can determine, I am just as safe.

    It's great that you have an approach that you're comfortable with. However, extremely simple approaches do work quite ably as well.

    Blue
     

    Attached Files:

  13. BlueZannetti

    BlueZannetti Registered Member

    Joined:
    Oct 19, 2003
    Posts:
    6,590
    At least in my view...

    Blacklists (i.e. classical AV's):
    • If a file is flagged, a user has an unambiguous caution raised. It may be a false positive - which should always be recognized as a distinct possibility for a file that has been on a system for an extended period of time prior to the flag - but the alert is sounded and the alarm is unambiguous.
    • Unrecognized malware gets a pass with nary a whisper. This is a very time dependent issue, but a key one. No doubt about it, this is the Achilles heel of the blacklisting approach.
    • Experts schooled in the art are responsible for determining whether or not a file is malicious. A user can ignore the guidance, but the guidance is explicitly provided and based on a technically sound analysis, not a guess.
    Whitelists (i.e. AntiExecutable, process execution control applications, etc.):
    • Effectiveness can be strongly dependent on the implementation.
    • In some cases, Anti Executable for instance, the whitelisting proceeds from a "system state known good" assumption and really just controls future exposures. Validation of system cleanliness is absolutely required.
    • Process execution control applications (e.g. Process Guard, SSM, etc.) whitelist according to user input. Unfortunately, ordinary users have little in the way of an objective basis to render informed input. Often, the allow/block decision is nothing more than pure guess. If a system is prevalidated as clean by a blacklist based scan, and desired applications are given approval immediately after, it's not that different than, for example, AntiExecutable. The main difference in this specific case would be the activation barrier to add new applications, which is rather higher with AntiExecutable.
    • HIPS style whitelist approaches tend to be rather noisy immediately after installation as the base execution and communications profiling occurs and is approved by the user. If the user can get through this phase, great. However, I've seen all too many cases of alert fatigue with perfectly mundane operations being flagged as malicious, when all they are is an operation that is potentially malicious, but only if initiated by malware. Valid programs often perform the same operations.
    • Programs such as Prevx try to get around this with a hybrid community based approach tiered with known good/known bad/unknown states.
    • Personally, I think whitelists are best for reasonably static machines, which isn't an overly exclusive state. Most users are not constantly changing applications or trying out new downloaded applications.
    • Firewalling is whitelisting after a fashion, particularly with respect to application control. Again, good in principle, but how does a common user render an informed choice of whether to allow or block?

    If you were to ask me what I'd implement for as comprehensive coverage as I'd ever need for any circumstance, it would be along the lines of:
    • Classical AV, any decent one.
    • Lockdown of installation/running of new executables, either via OS policy management or a third-party application such as AntiExecutable/etc. The latter approach is operationally easier and can be failsafe for most users.
    • Software firewall focusing only on application-based control. That's the only filtering I'd do.
    • Router. Verify remote administration is disabled, change default password.
    That's it, done. This is pretty much an install, five minutes of configuration, and go approach. The only questions to be answered should be allow/block prompts from applications on first transit through the firewall - which for most users will be less than a couple of dozen prompts in all - and nothing in the way of complex configuration (unless that is desired by the user).
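    The lockdown step in that approach can be sketched roughly as follows, assuming a simple hash-based "known good" list (the class and method names are invented for illustration; real products like AntiExecutable work from a validated-clean baseline in their own way):

```python
# Minimal sketch of "known good" whitelist execution control: at lockdown
# time every executable on the (presumed clean) system is hashed, and
# afterwards only files whose hash appears in that list may run.

import hashlib

def file_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ExecutionWhitelist:
    def __init__(self):
        self.approved = set()

    def lock_down(self, executables):
        """Record hashes of everything present at the 'known good' baseline."""
        for data in executables:
            self.approved.add(file_hash(data))

    def may_execute(self, data: bytes) -> bool:
        # Default-deny: anything added or modified after lockdown is
        # blocked unless the user explicitly re-approves it.
        return file_hash(data) in self.approved
```

    Note this only controls future exposures, which is why the post stresses that validation of system cleanliness before lockdown is absolutely required.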

    I do believe one can get by perfectly well with less, however.

    Blue
     
  14. herbalist

    herbalist Guest

    I agree with the basic statement "If it can't execute, it can't infect" as long as the term "it" refers to a malicious process. Where this statement can run into trouble is when "it" is a legitimate process being used maliciously. Regedit is not a malicious process, but a script using it to delete the autostart entries for your security apps would definitely qualify as malicious usage of a legit app. If "it" also includes the malicious usage of legitimate applications, then the statement holds true.

    I don't know Process Guard well enough to know how well it controls the activities of the allowed processes as I prefer SSM. Even then, I wouldn't ask SSM to stand alone. I'd still want a firewall to prevent the internet content from reaching the application firewalling software.

    The combination of a router, NoScript, and either PG or Cyberhawk is a variation of a security policy using the 3 control rules. Personally, I like more control than that, especially of outbound traffic, but what you describe does serve all 3 functions to a degree. No matter which apps you use, the resulting protection is always a matter of degree. There is no perfect solution as long as Windows is the operating system. No matter what the setup, there's always some way to defeat it. Often, when the software and its configuration are strong, the user is the most vulnerable target, which is one of the reasons I suggest disconnecting the user interface on apps like SSM. The user doesn't get prompted into a potential mistake.

    I don't view a system from the perspective of PG or SSM being "up front". The firewall controls the traffic from the net. Basically, it's first in line and is responsible for keeping out all attacks that don't pass through permitted channels. By controlling traffic, it stands between the net and your application firewall or HIPS, preventing a direct attack on them. In turn, the HIPS prevents malicious apps from running and attacking the firewall. Apps like SSM can restart the firewall if some type of internet attack terminates it. By interlocking or "layering" the components, the strength of the package becomes more than the strength of its parts. That's what you strive to set up.
    Rick
     
  15. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    Superb analysis (as usual)

    That should be a sticky post somewhere as part of developing a good security strategy.

    -rich
     