Security's New Reality: Assume The Worst

Discussion in 'other security issues & news' started by Hungry Man, Mar 15, 2012.

Thread Status:
Not open for further replies.
  1. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    A more fatalistic view that attackers have already infiltrated the organization presents a different way of looking at -- and marketing -- security

    http://www.darkreading.com/advanced.../security-s-new-reality-assume-the-worst.html

     
  2. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    That's tantamount to surrender. I guess it's easier to give up than it is to implement some reasonably intelligent defenses that prevent employees and personnel from providing ways in via stupidity, e.g., the HBGary attacks. It also seems that it's too much for them to take things that are really sensitive and make them inaccessible from the web.
     
  3. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Or they accept that those "reasonably intelligent defenses" don't hold up to a dedicated hacker.

    Any company keeps its R&D-type stuff off the network - you don't see Sony's next-gen product information getting leaked through hacks.
     
  4. dw426

    dw426 Registered Member

    Joined:
    Jan 3, 2007
    Posts:
    5,543
    No, but you see everything else of Sony's getting hacked :D I don't get how it's a "new reality" when people should have had that mindset all along. So many of these past "big news" breaches and attacks could have been so easily prevented, were it not for corporate number crunchers and lazy or unskilled IT. And really, with the media being what it is today, you don't need hackers to leak "next-gen" anything. It usually all comes out months ahead anyway.
     
  5. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    I think they're saying the mentality has been "prevent at all costs" and now it's "accept that the attacker will always be one step ahead."
     
  6. dw426

    dw426 Registered Member

    Joined:
    Jan 3, 2007
    Posts:
    5,543
    Get some actual hackers to be your go-to guys for security, instead of a traditional network admin like most companies do, and that may not stay the case. Traditional IT still has its place, it's a necessary part of business. However, people who think outside the box are needed to secure these corporations.

    Employ your own pen testers, go to colleges to look for the guy hopped up on Red Bull poring over lines of code that look like an alien language to you... that guy very likely knows a hell of a lot more than your entire IT department combined. The big problem is the bottom-line mentality. The even bigger problem is when that bottom line gets hurt far worse, financially and in terms of PR, because you cut corners.

    Hackers have not necessarily gotten smarter; they're using the same tools as the good guys. What has happened is that decades of unchanged security practices are now costing companies.
     
  7. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    My hacker friends and my researcher friends think about hacks completely differently. I'd say the IT guys really are the ones best fit for defending but it's very funny to see how differently they think about things.

    Honestly, you can do everything you've said - you can hire pen testers to point out weaknesses and get a team together to lock things down - and you will still get hacked if a hacker is motivated enough.

    It drives up the cost, but if there's a nice juicy payoff, or if you've just pissed them off, it'll happen and it's only a matter of time. You can spend tons of money on a 24/7 team of the elite watching each packet fly into the dozen firewalls, or you can accept that and work harder on what you do once they're in.
     
  8. dw426

    dw426 Registered Member

    Joined:
    Jan 3, 2007
    Posts:
    5,543
    I agree, if you're wanted bad enough, they'll get you. But come on, anything is better than the "standard" mode of operating that has been tried, and failed hard, for 20+ years.
     
  9. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    Right, just like they claim the hackers are using sophisticated methods, which turn out to be some idiot opening an e-mail attachment or being convinced to open a port in the firewall and let them in. It doesn't matter how good their IT people are if the other employees are able and allowed to do such stupid things.
    By their own admission, proper deployment and enforcement of existing defenses is sufficient against the vast majority of attacks. They're not interested in stopping the attacks. They're more interested in making money off the threat of attacks succeeding. Why would a potential customer for this stuff think that the next idea would be deployed any better than the previous? No defense can protect against internal stupidity. This is little more than marketing noise.
     
  10. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Wow, yeah, I bet that just sums up 100% of cases.

    This is not the case. At all. Exploits are used all the time for high profile cases.

    Glad you have it all figured out.

    I know of one attack that would have been very high profile if it had become public/ if they'd ever realized it. There was no social engineering there, he got in, took what he wanted, and got out. I doubt anyone even realized he was in there.

    I can tell you right now that, due to the nature of the place that was hacked, they were almost certainly using secure software. There was no social engineering used at all.
     
  11. emmjay

    emmjay Registered Member

    Joined:
    Jan 26, 2010
    Posts:
    1,605
    Location:
    Triassic
    It is a good thing that alternative methods and hopefully new tools are being considered. The existing security measures do a reasonable job, but I think a more innovative approach is needed. I worked in a large IT department (networking) and even though we had a pretty secure intranet, we still had security issues to deal with on a regular basis. Compliance with corporate IT rules was a condition of employment! Rules and compliance have to be there, but we also need a new approach to dealing with hacker threats.

    I had always hoped that we could attack the hacker with code that would disable their system as soon as they were identified. There are legal implications in doing this, but as long as we are looking at new ways of combating the problem, why not explore this avenue as well? Maybe in the future it will be possible to have code in a server or PC that would bounce the attacker's code right back at them, resulting in the malware only infecting the source. This may be easier to make legal.
     
  12. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    Read their own words.
     
  13. BrandiCandi

    BrandiCandi Guest

    Hmm, seems like a fool-proof way to market security. The sec team can't possibly fail!

    Seriously, I suppose this could be a good perspective to include in a security team. But it's vitally important that they not give up on prevention. There's no reason to make it easier on the bad guys. But I think sec pros who really know what they're doing already employ this: they install defenses and then they also look for breaches. No one should put defenses in place and then just assume they're keeping the bad guys out.

    Maybe this perspective will help sell security to clueless management.

    @Hungry Man & noone_particular - The 'vast majority' of attacks aren't specialized or tailored to a specific target. They cast a wide net looking for the obvious and common vulnerabilities (like generic malware or bot scanners looking for unsecured servers). But a targeted attack will use information gathering to determine what vulnerabilities are present, and then the bad guy will tailor an exploit specifically for the target. Sometimes the best exploits depend on human stupidity. But if a highly sensitive network doesn't have measures in place to prevent people from doing most of the stupid stuff, then the security team should be fired.
     
  14. Baserk

    Baserk Registered Member

    Joined:
    Apr 14, 2008
    Posts:
    1,321
    Location:
    AmstelodamUM
    Wouldn't you agree that in the case of an APT attack against, for instance, a defence contractor, where emails are sent to employees that are indistinguishable from the real deal and in which the 'HR Department asks to confirm the information in the attached Word doc', it still might be useful to have 'outgoing data sniffing' in place?
    Not as a substitute for other measures, nor to promote complacency, but as an extra.
    You can train staff all you want, but with a company having >1000 personnel, one of them is bound to fail once, given the above scenario.
    Perhaps you might ask how in the world such an email can arrive on the network and consider that to be the failure (and you wouldn't be completely wrong), but that's exactly what a whole APT team might have been working on as a means of entry.
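
    To make the 'outgoing data sniffing' idea concrete, here is a minimal, illustrative sketch in Python using the Scapy library. The interface name, internal subnet, whitelist, and byte threshold are all placeholder assumptions, not anything taken from the article or this thread.

```python
# Minimal egress-monitoring sketch (illustrative only).
# Assumes the Scapy library and capture privileges; the interface,
# whitelist, and 50 MB threshold are arbitrary placeholder values.
from collections import defaultdict
from scapy.all import sniff, IP

WHITELISTED_DESTS = {"10.0.0.5", "10.0.0.6"}   # hypothetical internal hosts
ALERT_THRESHOLD = 50 * 1024 * 1024              # bytes per destination

bytes_per_dest = defaultdict(int)

def inspect(pkt):
    """Tally outbound bytes per destination and flag unusual volumes."""
    if IP not in pkt:
        return
    dst = pkt[IP].dst
    if dst in WHITELISTED_DESTS:
        return
    bytes_per_dest[dst] += len(pkt)
    if bytes_per_dest[dst] > ALERT_THRESHOLD:
        print(f"ALERT: {bytes_per_dest[dst]} bytes sent to {dst} - possible exfiltration")

# Watch traffic leaving the internal network (placeholder subnet and interface).
sniff(iface="eth0",
      filter="src net 192.168.0.0/16 and not dst net 192.168.0.0/16",
      prn=inspect, store=0)
```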
     
  15. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    There are automated attacks that are hosted on webpages and try to get you with some Java vuln or whatever's hot at the moment. There are also dedicated attacks that don't necessarily go for the patched stuff.

    Automated attacks will obviously be more popular: you can hit more people, and in general more people will "see" the attack. Dedicated attacks will be less popular by definition - it's a single person or company being attacked, not hundreds of people being sprayed.

    Dedicated attacks also tend to have a much higher payoff.

    If you are a company with a bunch of servers, guess what you worry about? Not the Java exploit on some random hacked website - you worry about the hackers you've just pissed off, or about keeping your information safe from them for whatever reason.

    The idea is that you cannot protect yourself from them because no matter what you do the hacker is always one step ahead (this has been the idea for years; it's the idea behind sandboxing - something people on Wilders like very much).

    Social engineering doesn't have to come into it. If you think hackers haven't evolved you're dead wrong - they have and they're very very good at what they do. I suppose it's hard to see that since

    1) The good ones don't really get caught
    2) Most of the news is talking about SQL injections because those are so public

    Moving away from the mentality that says "OK, we're protected enough, no one's getting in" is a good thing. Accepting that hackers can break whatever you put up is a good thing. Now they can take steps to segregate their servers, line up more after-the-fact defenses, IDSes (these are usually very simple for servers, because otherwise you'd get thousands of FPs a day just from someone scanning your ports), and disaster response.
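
    As one concrete example of an "after-the-fact" defense, here is a bare-bones file-integrity check in Python: record a baseline of hashes, then diff the tree against it later. The watched directory and baseline filename are arbitrary placeholders, and a real deployment would keep the baseline off the monitored host.

```python
# Bare-bones file-integrity monitoring sketch (illustrative only).
# Run once with the "baseline" argument to record hashes, then run
# without arguments to report anything changed, added, or removed.
import hashlib
import json
import os
import sys

WATCHED_DIR = "/var/www"          # hypothetical directory to watch
BASELINE_FILE = "baseline.json"   # placeholder path for the stored baseline

def hash_tree(root):
    """Return {relative_path: sha256_hex} for every file under root."""
    digests = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digests[os.path.relpath(path, root)] = hashlib.sha256(fh.read()).hexdigest()
    return digests

if sys.argv[1:] == ["baseline"]:
    with open(BASELINE_FILE, "w") as fh:
        json.dump(hash_tree(WATCHED_DIR), fh)
else:
    with open(BASELINE_FILE) as fh:
        baseline = json.load(fh)
    current = hash_tree(WATCHED_DIR)
    for path in sorted(set(baseline) | set(current)):
        if baseline.get(path) != current.get(path):
            print(f"CHANGED, ADDED, OR REMOVED: {path}")
```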
     
  16. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    It never ceases to amaze me how some of the people here automatically jump on everything new and try to call it better, even after the marketer of this "new idea" admits that the proper implementation of conventional security tactics work, but "that is not a sexy thing to market".

    Baserk,
    Using the example you gave, and leaving aside how that e-mail got there, I'll ask this: why is that .doc being opened in an unprotected environment? That can be done in a virtual environment or sandbox, and done transparently. Even on my own system, I treat all files (not originating from my own PC) that can contain malicious code as untrusted and only open them in an isolated environment. As for the staff failing to follow that procedure, it should be automatically implemented, not a matter of choice. The staff shouldn't be physically able to do otherwise. That removes most user error/stupidity from the equation.
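
    As a rough illustration of enforcing that isolation rather than leaving it to choice, here is a hypothetical Python wrapper that a workstation image could register as the handler for risky document types, so files only ever open inside a sandbox. The firejail and LibreOffice commands are assumptions for the sketch; any container or VM launcher could stand in for them.

```python
# Hypothetical sandboxed-document-opener wrapper (illustrative only).
# Registering this script as the .doc/.pdf handler means staff cannot
# open such files outside isolation even if they try.
import subprocess
import sys

SANDBOX_CMD = ["firejail", "--private", "--net=none"]  # isolated, no network
VIEWER_CMD = ["libreoffice", "--view"]                 # read-only viewer

def open_untrusted(path):
    """Launch the viewer inside the sandbox; never open the file directly."""
    subprocess.run(SANDBOX_CMD + VIEWER_CMD + [path], check=False)

if __name__ == "__main__":
    for document in sys.argv[1:]:
        open_untrusted(document)
```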

    These companies need to get it through their corporate heads that the staff is part of the attack surface, as are those above them. Just like any application that can receive compromised files, malware, etc., they need to be treated as vulnerable and isolated from the core system. It's not just the staff or the IT department. This has to include those above that, starting with upper management.
     
  17. m00nbl00d

    m00nbl00d Registered Member

    Joined:
    Jan 4, 2009
    Posts:
    6,623
    I agree with you.

    In a strict environment, there should be no room for user "stupidity"... even if the user is "stupid". This includes the CEO, presidents, etc. Period.

    Like you, I always treat any file originating from the Internet as suspicious, even if it's sent by a friend or by someone I'm expecting it from. I don't know where the file was hosted or under what environment it was created before being sent to me.

    The source of the file may be clean (the user, the company, the service, etc), but the environment (the system) may not be so.

    The same is applied to a company environment. Anything should always be opened in a restricted environment.
     
  18. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    The article says that it works a large portion of the time. You're missing the point... entirely. You need to assume that even if typical tactics work 90% of the time, if you are under direct threat you will be in that 10%.

    It's accepting that you cannot stop a dedicated hacker from breaking in, and that you then have to take measures that assume they've already broken in.

    Some users jump on the new and some refuse to let go of the old.

    As I said in my previous post, automated attacks are by definition meant to spray as many targets as possible and will therefore hit more users, but they are also way easier to prevent. It's also true that targeted attacks have a higher payoff even if they take more work. So while targeted attacks are a minority of attacks (they take more time; you can't automate them and spray 1,000 users), they're very dangerous because the prize of an exploit is higher.

    This isn't even really a new idea... at all. Accepting that the attacker will break your system is something I've been hearing about for ages; I'm pretty sure I mentioned in another topic some time ago that one of my researcher friends likes to say the attacker will always know your system better than you do.
     
  19. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    More often than not, when a company is hacked, the security technology itself isn't the failure. The implementation is. This comes back to something I've tried to tell people for the longest time: start by building a detailed security policy, then select the apps, settings, and procedures to enforce it. Examine what each staff member needs to be able to do their job, then devise a way to fill those needs securely. That's how you eliminate the holes. It is the IT department's job. This article is another example of doing things backwards: taking an app or new idea, then trying to fit it to your system.

    I can't agree with this "assume you've lost" marketing noise. There's a big difference between assuming that you are vulnerable and assuming that you've already been hacked. Isolation is a very effective means of damage control that should be applied to everything from the user apps to the locks on the doors. There are all kinds of things these companies could do, but, as they said, "that is not a sexy thing to market". Their priority is obvious.
     
  20. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    You're talking about RBAC, which is a simple principle that doesn't scale well to huge companies.
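
    For readers unfamiliar with the term, a toy sketch of role-based access control in Python: users map to roles, roles map to permissions, and every action is checked against that mapping. The role and permission names are made-up examples.

```python
# Toy role-based access control (RBAC) sketch: users get roles, roles get
# permissions, and every action is checked against that mapping.
ROLE_PERMISSIONS = {
    "hr":        {"read_personnel_records"},
    "developer": {"read_source", "commit_source"},
    "it_admin":  {"read_source", "manage_servers"},
}

USER_ROLES = {
    "alice": {"developer"},
    "bob":   {"hr"},
}

def is_allowed(user, permission):
    """A user may act only if one of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

assert is_allowed("alice", "commit_source")
assert not is_allowed("bob", "manage_servers")
```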

    Not interested in arguing about "assuming you've lost." It isn't a game to be won or lost.

    If you want to believe it's marketing and nothing more that's fine by me. The article's there if you want to read more than the 10 words that you agree with.
     
  21. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    ROFL. Should I apply that to your endorsements of the cloud as a means to enhance failed technologies like AVs? Who won't let go of the old?
     
  22. Hungry Man

    Hungry Man Registered Member

    Joined:
    May 11, 2011
    Posts:
    9,146
    Well... one of us has a computer running 98... the other doesn't... so I feel like that should be fairly obvious.
     
  23. m00nbl00d

    m00nbl00d Registered Member

    Joined:
    Jan 4, 2009
    Posts:
    6,623
    I'm reading the article bit by bit... shame on me :oops: :D , so I'll just comment on something that caught my attention.

    Preventing phishing attacks - the typically used vector - is difficult.

    I suppose that if something looks like it's coming from somewhere legitimate, it gets harder, yes... but that doesn't mean you have to be literally stupid and fall for it, just like that.

    HBGary was an example of pure and plain stupidity. The first stupidity was using a third-party, closed-source CMS. They wanted to be unique and they saw the result. Then, passwords saved in plain text, if I remember correctly... Then some other moron gave the firewall's password to one of the attackers, because he thought he was getting e-mails from the real person... WTF o_O

    Come on... wouldn't you call the other person and confirm it? Please... pure and plain stupidity.

    Am I saying to live in the past when it comes to network security? No, I am not; but there's a difference when hacks happen due to stupidity.

    And yes, it's all about limiting the damage. That's how it should always have been. Nothing new here.
     
  24. Baserk

    Baserk Registered Member

    Joined:
    Apr 14, 2008
    Posts:
    1,321
    Location:
    AmstelodamUM
    Agreed, implementing these measures isn't difficult, and it certainly is possible to safeguard employees from errors/'stupid' actions.
    The million-dollar answer to your question, 'Why isn't it done?', I haven't got.
    It probably involves a mix of economic and psychological factors, coming from both upper management and (lower) staff.
    Don't get me wrong, I'm not saying that a (partially) isolated/virtualized working environment is 'unworkable', but trying to convince everyone but the IT dept. of this is something else.

    It's still somewhat funny to read; 'it is w0cky'! link
     
  25. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    Getting their personal information, internal memos, company secrets, etc splattered all over the internet tends to be a strong motivation to tighten things up. It is one of the stated goals of some parts of the antisec movement too, making some of these companies get their heads screwed on straight. Then again, we could take this "new idea" to its extreme. Assume they're able to access everything. So take everything sensitive completely off of the servers and shut the firewall off. After all, they're already in, right?

    HBGary's problem can be summed up in a single word, arrogance, at the top level. Not much different than what happened at Panda. If you're going to brag that you can take these guys down, you better be ready. They obviously weren't. That HBGary example is particularly amazing in that anyone would believe that such a request would have come through e-mail in the first place.

    OT and not relevant to businesses and their servers. That said, yes, I do use 98 (DOS too!), but not because I won't let go of the past. I use it because it fits my security policy and does all I need. If I were hooked on past technology, it wouldn't be default-deny, secured by HIPS, and running virtual guest systems.

    System age, new ideas vs old technology, etc. aside, the primary place we differ AFAICT is in what we expect from an OS and what it should be. For me, an OS is a platform for the user's apps and an interface between the user and the hardware. Beyond that, it should stay out of the way. The OS itself should be isolated from the attack surface (the internet in this case), not reliant on it. An OS shouldn't come with a giant-sized attack surface and then be bloated with more code to protect that attack surface. As long as MS keeps doing the opposite, I won't use it. You tend to trust MS and their "new technologies". I don't. Even if we throw all of this out of the equation, I'll reject their new systems on privacy concerns alone.
     