"Trusted Web sites: Exploit tool of choice"

Discussion in 'malware problems & news' started by boonie, Sep 22, 2009.

Thread Status:
Not open for further replies.
  1. boonie

    boonie Registered Member

    Joined:
    Aug 5, 2007
    Posts:
    238
    Tech Republic's Michael Kassner on Websense Security Labs' latest report.

    Emphasis mine
    Full article here
     
  2. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    3,943
    Location:
    California
    From time to time, various Security Labs and Security Research organizations publish an impressive list of statistics revealing the sorry state of computer security. While the numbers increase, fundamentally nothing has changed.

    The Super Bowl website hack in 2007 was a stark realization that a "legitimate" website can be compromised. Ironically, it was Websense, the author of this report on trusted web sites, that broke the story:

    American Football Championship Shenanigans
    http://isc.sans.org/diary.html?storyid=2151
    Following this, compromises occurred on a tennis federation site and others. Whether it is just a few sites or 77% shouldn't matter, for to be protected from one such attack is to be protected from all. Yet the ranks of the uninformed grow.

    Earlier this year, a spate of SQL injection attacks caused much alarm. This is a method of injecting malicious code into a website through unsanitized database queries. Yet this same method was used in the Super Bowl website attack two years ago:

    Super Bowl Infection - Analysis of One Break-in
    2007-02-28,
    http://isc.sans.org/diary.html?storyid=2322
    You would think that webmasters and System Administrators would have learned from that experience (again, more than 2 years ago!). Evidently not. Why not?
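    The coding flaw behind these SQL injection attacks is easy to sketch. Below is a minimal illustration (hypothetical table and data; Python's sqlite3 stands in for the databases actually targeted): the vulnerable pattern concatenates user input into the query string, while the parameterized version treats the same input strictly as data.

```python
import sqlite3

# Hypothetical schema for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER, body TEXT)")
conn.execute("INSERT INTO news VALUES (1, 'headline')")

user_input = "1; DROP TABLE news --"  # attacker-controlled value

# Vulnerable pattern: user input concatenated straight into the SQL string.
# (Shown but not executed; this string smuggles a second statement.)
unsafe_query = "SELECT body FROM news WHERE id = " + user_input

# Safe pattern: a parameterized query treats the input strictly as data.
row = conn.execute("SELECT body FROM news WHERE id = ?", (user_input,)).fetchone()
print(row)  # None -- the malicious string matches no id
```

    Parameterized queries were a well-documented fix long before 2007, which is the point of the complaint above: the defense existed, and webmasters simply were not applying it.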

    As the sans.org Diary states, making reference to the course on "Incident Handling" that they teach:

    If webmasters are not keeping up with the latest exploits and information about their job, something is really remiss, and they are guilty of negligence.

    The report mentions Facebook and the danger of social network sites in general. I've never understood why these sites are thought to be any more or less dangerous than any other type of website. The same attack methods are used on both, the most common being

    1) a script that exploits a vulnerability in the IE browser or another application -- PDF, Flash, etc.

    2) a social engineering scenario where the user is tricked into installing a video codec or Flash update -- Koobface being a successful example.

    This report refers to a recent cross-site request forgery (CSRF) vulnerability. These surface from time to time, and usually have the same requirements for the exploit to succeed, as listed here:

    Facebook CSRF attack - Full Disclosure
    http://blog.quaji.com/2009/08/facebook-csrf-attack-full-disclosure.html
    How is this any different from regular surfing? It's just that the social networking sites often receive sensationalized attention, especially when celebrities' pages are compromised. Last year, some celebrity's page (whose name I've forgotten) served up an exploit that infected thousands, by some estimates. The exploit: an IE vulnerability, patched in 2006.
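    A CSRF attack like the Facebook one linked above succeeds because the victim's browser automatically attaches the session cookie to a request forged by the attacker's page. The standard defense is a per-session secret token that the forged page cannot read. A minimal sketch with hypothetical function names (this is not Facebook's actual mechanism):

```python
import hmac
import secrets

def issue_csrf_token(session_tokens, session_id):
    """Generate an unguessable token and remember it for this session."""
    token = secrets.token_hex(16)
    session_tokens[session_id] = token
    return token

def is_request_allowed(session_tokens, session_id, submitted_token):
    """A state-changing request must echo the session's token. A forged
    cross-site request carries the victim's cookie, but the attacker's
    page cannot read or guess the token it needs to submit."""
    expected = session_tokens.get(session_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected, submitted_token)

tokens = {}
t = issue_csrf_token(tokens, "alice-session")
print(is_request_allowed(tokens, "alice-session", t))          # True
print(is_request_allowed(tokens, "alice-session", "guessed"))  # False
```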

    How is not changing default privacy settings any different from not changing the default router password? The earlier DNS-changer exploit exposed that vulnerability. And the Conficker worm, with its built-in dictionary attack, was successful in exploiting computers on networks because of default and/or weak "Administrator" passwords.
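    Conficker's dictionary attack amounts to little more than looping a list of common passwords against the "Administrator" account. A toy model (the wordlist below is an illustrative sample, not Conficker's actual list):

```python
# Toy model of a dictionary attack against a weak account password.
COMMON_PASSWORDS = ["admin", "password", "123456", "letmein", "admin123"]

def dictionary_attack(check_login, username):
    """Try each candidate password; return the first one accepted, else None."""
    for guess in COMMON_PASSWORDS:
        if check_login(username, guess):
            return guess
    return None

# A machine left with a weak default password falls immediately...
weak = lambda user, pw: user == "Administrator" and pw == "admin123"
print(dictionary_attack(weak, "Administrator"))    # admin123

# ...while any password outside the wordlist survives this particular attack.
strong = lambda user, pw: user == "Administrator" and pw == "T7#kQz!9pLm2"
print(dictionary_attack(strong, "Administrator"))  # None
```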

    Same old stuff in different guises. How do you change user behavior?

    This Websense report states,

    Here is a description of gumblar:

    gumblar-cn-exploit
    http://blog.unmaskparasites.com/200...s-injected-script/comment-page-1/#comment-817
    One method of infecting the page:

    Security Experts Identify Causes Of Gumblar Attack
    http://www.spamfighter.com/News-12579-Security-Experts-Identify-Causes-Of-Gumblar-Attack.htm
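    Since Gumblar-style compromises append a script tag pointing at an attacker-controlled domain to otherwise legitimate pages, one crude check a webmaster could run is to scan pages for external script sources outside a known-good list. A sketch, with hypothetical domains and a deliberately simple heuristic:

```python
import re

# Domains this (hypothetical) site legitimately loads scripts from.
TRUSTED_HOSTS = {"www.example.com", "cdn.example.com"}

# Capture the host portion of any external <script src=...> reference.
SCRIPT_SRC = re.compile(r'<script[^>]+src=["\']https?://([^/"\']+)', re.IGNORECASE)

def suspicious_script_hosts(html):
    """Return external script hosts not on the trusted list."""
    return [h for h in SCRIPT_SRC.findall(html) if h not in TRUSTED_HOSTS]

page = '''<html><body>
<script src="https://cdn.example.com/site.js"></script>
<script src="http://evil-stats.example.net/x.php"></script>
</body></html>'''

print(suspicious_script_hosts(page))  # ['evil-stats.example.net']
```

    Because the injected scripts are typically obfuscated, an allow-list of known-good hosts catches them more reliably than trying to signature-match the payload itself.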
    At the user end, PDF exploits by email were noted in 2006, and sporadic web-based attacks were seen in late 2007. By 2008 they were quite common:

    Another Adobe PDF Exploit in the Wild
    February 11, 2008
    http://www.avertlabs.com/research/blog/index.php/2008/02/11/another-adobe-pdf-exploit-in-the-wild/
    Wouldn't you think, after a year and a half, that it would become well-known how easy it is for users to prevent the gumblar type of attack from succeeding? Not so. Very few articles explained the basic preventative measures for this.

    Well, the statistics listed in these reports would be better taken in context if accompanied by explanations that they merely represent an increase in well-known, tried-and-proven methods.

    The report concludes with the obligatory reference to prevention:

    If that is all that can be suggested, it's no wonder that reports with statistics lead many to a sense of futility, since many times exploits are in the wild before patches and updates are released. As one cynic remarked, "Well, if there were real preventative measures in place, there would be a decrease in the growth of attacks -- not as exciting to report as increases."

    Fortunately for us users, the Websense option is not the only one. Many compromises stem from faulty decisions on the user's part, showing a lack of policies and procedures. As a computer technician likes to say: "user error."

    The solution seems obvious, yet difficult to implement on a wide scale.


    ----
    rich
     
  3. boonie

    boonie Registered Member

    Joined:
    Aug 5, 2007
    Posts:
    238
    Thanks for the reply Rich, was hoping you would respond to this.

    One of the reasons I posted this, and emphasized one statistic, is that I agree it is indeed more of the same, but I find it disturbing nonetheless that the numbers are growing. Especially when there are, in most cases, "policies and procedures" that can be taken to mitigate attacks on the Net.

    I have read posts in many forums that claim: "If you avoid the 'dark side' of the Net you can avoid infection." This is not entirely true. Simple measures that should be taken are often neglected, giving criminals the chance to compromise machines through legitimate sites.

    I, and so many others here, too often see people who look at protection as piling on security apps or, worse, having an expired anti-virus and thinking they're protected. I find friends and co-workers using laughably simple passwords or none at all, and practicing habits that almost ensure infection.

    In addition to site exploits, we still read of corporate and even government networks being compromised through negligence, resulting in thousands of pieces of sensitive information being stolen. While there may be the rare case of infiltration by an ingenious hacker, it seems that the majority of attacks are more of the same type: exploiting vulnerabilities and weaknesses that should have been patched or hardened, or carelessness on the part of personnel.

    Not being a security expert, I am left scratching my head as to why this situation exists and continues to worsen. Is it cost? Laziness on the part of users, webmasters, network admins and developers? While the awareness of attacks and the dangers of the internet grows, and indeed is sensationalized, basic awareness and knowledge of protection, especially on the part of everyday users, remains in the Dark Ages. I've caught myself giving advice to (lecturing), and trying to persuade (nagging) people to take simple steps to protect themselves, and in most cases, I'm met with rolling eyes, or a blank stare. In frustration I often think, "...in that case you deserve whatever infection you get". Yet this attitude doesn't help with what is indeed a global issue.

    How, then, to pique the interest of the masses and educate them in basic security?

    "Same old stuff in different guises. How do you change user behavior?"

    Again, I'm no expert; I don't have an answer to this problem. I don't know if there is one. At this point, I just secure my own home network, help those that show real interest, and try not to worry so much about the rest of the world.
     
  4. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    3,943
    Location:
    California
    That's pretty much all we can do as users.

    At the server end, the negligence and incompetence need to be dealt with in a different way. If I were the CEO of an organization whose web site was compromised by SQL injection, I would order an outside incident response investigation. If the fault were negligence and lack of proper procedures, I would fire the System Administrator.

    How many web sites were compromised? Lots of Sys Admins out of work.

    It's easy for hackers to discover inroads to a server. There are automated tools that scan pages looking for ways to inject a script. Assuming the server's software is up to date, the problem is not the software, but rather lax procedures in coding.

    Suppose you leave your car engine running while you go into the Post Office to check your mailbox. It happens a lot, and thieves regularly patrol streets looking for an opportunity. Should you be surprised if your car is not there when you return?

    Is it the fault of the car?

    ----
    rich
     
  5. Lebowsky

    Lebowsky Registered Member

    Joined:
    Dec 3, 2004
    Posts:
    161
    Great posts Rich, thank you.
     
  6. Kees1958

    Kees1958 Registered Member

    Joined:
    Jul 8, 2006
    Posts:
    5,857
    Rmus maximilius revocantis malwarus :thumb:

    What happens when new HTML features increase the distributed-processing complexity of web applications even more?

    Red = makes it more complex to manage (thus protect)
    Green = makes it easier to protect
    ------+
    More webmasters/developers will bite the dust due to a lack of communication and knowledge/experience exchange, as Rich's post explicitly illustrates :doubt:
     