Food for thought: safe browsing and blocking scripts

Discussion in 'other anti-malware software' started by Windows_Security, Feb 10, 2015.

  1. I have a friend who is a security specialist (a reverse engineer of malware) who sometimes gives me tips and sometimes pulls my leg. He says amateurs often rely on fake security measures. He noticed I was using a third-party script blocker.

    He left me baffled and silenced, so maybe members can help me in the discussion below:


    "HE"
    Kees, you are using a (third-party) script blocker. You do realize that blocking third-party scripts is good practice, but that for security it only makes sense when you never allow exceptions?

    "ME"
    No, I sometimes allow exceptions. Why would this be fake security? I check the site on VirusTotal and determine which third-party scripts to allow. In the process I build my own block list of advertising-related sites that are not in the tracker lists. Then I reload the web page, and I am a happy and safe surfer.

    "HE"
    You can't check all the third-party domains before allowing them, because third-party scripts often invoke other scripts (or are obfuscated). So nine times out of ten the blocked-script list does not contain all the (third-party) scripts. You can check this yourself: go to a news site, look at the blocked third-party scripts, now allow third-party scripts, and you will see there are more!

    Bottom line
    When you allow a third-party script and it is unknown malware, you are out in the open, because most script blockers don't show all the scripts (some are invoked by others or are obfuscated).

    So unless you switch to a sandboxed/virtualised session first, your hand-picked, manually allowed scripts do not protect you in terms of security. Blocking (third-party) scripts does reduce website load times, though, so in terms of speed and privacy it helps (faster pages, less tracking), but it is fake security when you occasionally allow scripts for "safe" websites.
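
    To illustrate his point, here is my own hypothetical example (not his): a single allowed third-party script can pull in a script from a domain that never appeared in the blocker's list until you allowed it.

    Code:
    // hypothetical widget.js, served from the third-party CDN you just allowed;
    // once it runs, it injects a script from a domain you never vetted
    var s = document.createElement("script");
    s.src = "https://fourth-party.example.com/tracker.js"; // was never in the blocked list
    document.head.appendChild(s);

    Until widget.js actually runs, the blocker simply has nothing to show for fourth-party.example.com.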

    Any thoughts on this? :(
     
  2. zakazak

    zakazak Registered Member

    Joined:
    Sep 20, 2010
    Posts:
    529
    Yes, of course he is right. That's why I am constantly working on growing the "trusted scripts/sites" list in uBlock so that (in a perfect world) I never need to allow anything manually. And in case I have to, I will first run the website in an incognito tab, or in a completely different browser, e.g. a sandboxed Internet Explorer instead of Chrome.
     
  3. 142395

    142395 Guest

    Though allowing scripts for a domain often brings in many new scripts from other domains, aren't those still blocked in default-deny mode? So I can't see why it's fake security. As long as you do not allow those added scripts, you should be safe. And if you allow one of them that is actually dangerous, it's game over, but that's only natural and no surprise. The basic principle is to allow only domains which you know well and trust, and if one of those trusted domains is hacked, you're out of luck.

    Note: exceptions or whitelist entries should not be based on making a website work, but on trust.
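
    With uMatrix-style rules, for example (the hostnames are made up, and I'm quoting the "source destination type action" rule format from memory, so verify against your own dashboard):

    Code:
    * * * block
    * 1st-party * allow
    news.example.com cdn.example.net script allow

    Allowing cdn.example.net's script here does not allow whatever extra domains that script injects; each new domain still shows up blocked and would need its own explicit rule.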
     
  4. Assume you use the time-consuming, hand-picked one-by-one approach on a website with seven third-party scripts, of which you need to noop/allow two. Using the "allow one, allow another" approach, finding the right pair could take up to 21 tries in the worst case (7 choose 2 = 21 combinations). Look at commercial websites and how many third-party scripts they carry. Imagine the time it would take to find the right three out of ten scripts (10 choose 3 = 120 combinations).
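
    A quick sketch of the arithmetic, with a hypothetical choose() helper just to show where the counts come from:

    Code:
    // worst-case number of candidate subsets when k of n scripts must be allowed
    function choose(n, k) {
      var result = 1;
      for (var i = 1; i <= k; i++) result = result * (n - k + i) / i;
      return result;
    }
    choose(7, 2);  // 21 combinations for 2 out of 7
    choose(10, 3); // 120 combinations for 3 out of 10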

    So this one-by-one approach is very time consuming, and it only protects against known bad websites. Those known bad websites are probably already covered by the URL filter of your AV (which filters web traffic before your browser processes it). So its added protection is highly theoretical.

    You might be less out in the open (allowing one by one), but you are still on your own when you allow a third-party script from an unknown malware website (you would probably noop/allow a website that is not blacklisted yet), so it adds no protection beyond the browser and the security measures you already have in place.

    In case you use application virtualisation (instead of a VM), the "only benefit" of Sandboxie's sandbox over Chrome's sandbox in this scenario is that when you run into a Chrome exploit, SBIE's added protection might, with emphasis on might, block the exploit. Since I have anti-exploit protection in place as well, the security benefits of blocking third-party scripts are again highly theoretical.

    Hope this explains it.
     
  5. Well, there is a fair chance that the exploit kit triggered is smart enough not to use Chrome exploits. So running IE in SBIE won't change a thing, nor will running Chrome incognito (see Yuki's response). I have doubts about the effectiveness of the extra work you put into this.
     
  6. wat0114

    wat0114 Registered Member

    Joined:
    Aug 5, 2012
    Posts:
    4,065
    Location:
    Canada
    No matter which site I visit, I only need to allow a certain percentage of scripts to render it adequately. Maybe 60-70% are allowed, and of the 30-40% blocked, the majority are of a dubious nature, so it stands to reason I'm gaining some measure of security. It may not be perfect, but the chance of some script-initiated drive-by or whatever happening is still reduced. That's how I see it, right or wrong, anyway.

    I agree with this. However, uMatrix/uBlock, afaik, blocks the newly invoked scripts when you allow one that attempts to load them. The manual hand-picking of scripts to allow is a bit painstaking at times, but that's the price you pay for increased script-control security.
     
  7. Well, as a rule of thumb (according to my friend) it is probably an A-B-C division (well known in marketing and logistics as the 80-20 rule): blocking third-party scripts, you stop at least 60% of the unnecessary scripts from loading, save 30% of the traffic, and run into trouble on 10% of the sites.
     
  8. bo elam

    bo elam Registered Member

    Joined:
    Jun 15, 2010
    Posts:
    6,147
    Location:
    Nicaragua
    Blocking scripts makes sites load faster and look clean. That alone makes using NoScript worth it for me. I don't know how it is with other script blockers, but with NoScript you have a blacklist and a whitelist. Me, I find myself adding more to my blacklist than to the whitelist.

    My bookmarked sites are pretty much whitelisted for what's usually required, so when I open bookmarks, there's usually nothing new that has to be allowed. If I don't add a new bookmark that requires a domain to watch videos or whatever, I can go weeks without adding anything new to the whitelist. But I love adding domains to my blacklist, and I work on it all the time. If I find a new domain that is not required for anything I care about, sometimes I add it to the blacklist. Keeping up the blacklist is a good idea because blacklisted sites don't load even when you temporarily allow a page, which, for convenience, is something I might do when I land on a new site that I am only going to open once. Keeping up the blacklist makes temporarily allowing a page more secure.

    The main reasons I use NoScript are not security but to clean sites of disturbing objects, to get pages to load faster, and to block trackers. But I believe NoScript is indeed doing security. I have used Sandboxie and NoScript for the same amount of time, a little over six years, and to this day I have never seen anything that looks or acts like malware while browsing. Never. That is NoScript doing its thing by blocking domains that can potentially load malware. I haven't even seen SBIE block a suspicious exe trying to start, despite the SBIE restrictions being in place. That is NoScript doing security, IMO.

    Bo
     
  9. Mayahana

    Mayahana Banned

    Joined:
    Sep 13, 2014
    Posts:
    2,220
    All of this sounds like wasted time/effort/energy to me. Deploy an NGFW and endpoint security on the devices, along with an ad blocker, keep everything up to date, and then live your life peacefully. No need to break the internet for security greater than any mortal really needs. Just my opinion.
     
  10. tlu

    tlu Guest

    You should really try µMatrix. It comes with several huge hosts files, and all those domains are blacklisted, so it's hardly necessary to add domains manually. Aside from this, µMatrix is more powerful than NoScript and allows more fine-grained control over what the browser is doing, IMHO. It has scopes (global/domain-specific/site-specific) which NoScript doesn't have (unless you manually create complicated rules in ABE). And its matrix GUI is simply brilliant and makes it much more usable. I was a convinced NoScript user for many years, but after working with µMatrix I definitely won't go back. Let's hope that µMatrix will be available for Firefox before long (like µBlock).
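
    To give an idea of the scopes, here is an illustrative pair of rules (hostnames are just placeholders, and I'm quoting the rule format from memory): the first line is a global-scope rule that allows those scripts everywhere, the second is the site-specific equivalent that applies only while you are on example.com.

    Code:
    * ajax.googleapis.com script allow
    example.com ajax.googleapis.com script allow

    In NoScript you would have to approximate that second line with hand-written ABE rules.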
     
  11. bo elam

    bo elam Registered Member

    Joined:
    Jun 15, 2010
    Posts:
    6,147
    Location:
    Nicaragua
    You, of all people, with all the things that you do to keep your computers clean, how can you talk about wasted time, effort and energy? If I were doing all the things that you do, I wouldn't have any time left for the things that are worthwhile doing on the internet. You spend hours a day doing what you do. Meanwhile, I spend none:cool:.

    Bo
     
  12. bo elam

    bo elam Registered Member

    Joined:
    Jun 15, 2010
    Posts:
    6,147
    Location:
    Nicaragua
    Hi tlu. There is one problem with trying µMatrix: I don't use Chrome. Anyway, NoScript has been great for me. I am not the kind of person who changes a program that works great just for the sake of trying something else. It's kind of a conservative way of doing things, but that's how I do things in general.

    So, even if µMatrix came along for Firefox, I doubt I'll take down NoScript to try µMatrix. Too bad you stopped using NoScript; the idea of how to make good use of the blacklist, I got from reading your posts.

    Bo
     
  13. Mayahana

    Mayahana Banned

    Joined:
    Sep 13, 2014
    Posts:
    2,220
    Hours a day? Well, at work that's what I am paid to do. At home? Rarely more than "minutes" a day. Remember, the entire reason for deploying enterprise-grade hardware/software is so we don't need to invest time in micromanagement. I can run reports, script activities, and push upgrades with a single click, not to mention group policies, LDAP, and AD. Nobody would get anything done in the enterprise realm if we didn't have all of this. Sitting around endlessly tweaking individual systems is really not my thing at all. For example, sitting here at a desk, I can remotely check the status of my entire home with a few clicks, including each desktop's threat matrix. :thumb:
     
  14. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    @Mayahana: not enough expertise here to comment on UTMs specifically, but I have trouble believing they (or anything really) are all they're hyped as. I cannot think of a single thing I've seen in IT that actually lived up to the hype.

    Re configuration management for desktops/workstations, I have rather mixed feelings. My gut says that if you have to really lord it over your users, you have a people problem, not an IT problem. OTOH I've often wished Linux CM systems were more fully integrated and amenable to workstations.

    For my part, I'm still looking for a (Linux) CM system to use on my home network. So far I've not found anything suitable... mind you, the bloated inflexibility of the heavier Linux desktops is not helping.

    @Windows_Security: well, all security is about risk management when you get down to it, right? But yes, IMO script blocking relies more on statistics than e.g. mandatory access control does. I've seen NoScript save my skin on a few occasions, and fail to do so entirely on a few others. I'd still rather use it, if only for the performance benefits.

    As for the third-party script issue, I'd never heard that before but it makes sense. Blocking cross-domain requests from within a running script might not be so easy.

    As a very very tentative workaround, one might perhaps use Noscript alongside e.g. RequestPolicy, or some other method of outright blocking browser requests. The JS engine is part of the browser; if some JS makes a request to an external URL it should be blocked by RequestPolicy, assuming that Firefox's extension API actually allows that (which it might not, I really have no idea).
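
    Something along these lines is what I mean, sketched against Chrome's webRequest API because that's the one I half-know (the whitelist and the matching logic are hypothetical; whether Firefox's extension API allows the same is exactly the open question):

    Code:
    // background script of a hypothetical request-blocking extension;
    // the manifest needs "webRequest", "webRequestBlocking" and "<all_urls>"
    var allowedHosts = ["wilderssecurity.com", "example.com"]; // made-up whitelist

    chrome.webRequest.onBeforeRequest.addListener(
      function (details) {
        var host = new URL(details.url).hostname;
        var ok = allowedHosts.some(function (h) {
          return host === h || host.endsWith("." + h);
        });
        // cancel anything not whitelisted, no matter which script initiated it
        return { cancel: !ok };
      },
      { urls: ["<all_urls>"] },
      ["blocking"]
    );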

    I really think that is much more trouble than it's worth, though.

    Also it might be worth talking to e.g. gorhill (who develops uMatrix for Chrome), re the limits of its (and other blockers') capabilities.
     
  15. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    It shouldn't be that difficult to find out. My concern is that the way the NS and RP extensions interact could change with a browser update, and the behavior change might not be noticed for some time afterwards. The package that I use substitutes Proxomitron with the ProxBlox configuration, described here, in place of NoScript. The biggest difference is that javascript, iframes, etc. are processed before they reach the browser instead of being processed by the browser. This, and the fact that I use SeaMonkey in place of Firefox, may cause my setup to behave differently. Blocking or allowing scripts, iframes, and other components with Proxomitron directly affects the connection attempts displayed by RequestPolicy. Ideally, the Firefox, NoScript, and RequestPolicy package should behave the same way, but this needs to be verified, and re-verified after each browser update.

    IMO, web content filtering should be done ahead of the browser, not in the browser. As good as NoScript is, I feel the developer should create a free-standing version of it that functions like a local proxy and doesn't depend on the browser's APIs and the potential of them changing.
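
    As a toy illustration of filtering ahead of the browser, a bare-bones sketch (modern Node.js, plain HTTP only, nothing like Proxomitron's filter engine, and the naive regex is exactly its weak point):

    Code:
    // minimal local filtering proxy: strips <script> blocks from plain-HTTP
    // pages before the browser ever parses them
    var http = require("http");

    http.createServer(function (clientReq, clientRes) {
      var upstream = http.request(clientReq.url,
        { method: clientReq.method, headers: clientReq.headers },
        function (upRes) {
          var chunks = [];
          upRes.on("data", function (c) { chunks.push(c); });
          upRes.on("end", function () {
            var html = Buffer.concat(chunks).toString();
            // naive filter: obfuscated or split script tags will slip through
            html = html.replace(/<script[\s\S]*?<\/script>/gi, "<!-- filtered -->");
            var headers = Object.assign({}, upRes.headers);
            delete headers["content-length"]; // body length has changed
            clientRes.writeHead(upRes.statusCode, headers);
            clientRes.end(html);
          });
        });
      clientReq.pipe(upstream);
    }).listen(8080); // point the browser's HTTP proxy at localhost:8080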
     
  16. StillBorn

    StillBorn Registered Member

    Joined:
    Nov 19, 2014
    Posts:
    297
    "Woof, woof." So sorry dudes/dudettes... I know it's not even funny but I couldn't resist...
     
  17. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    This sounds great, but in practice it's hard to implement. The proxy would have to recognize every variety of obfuscated JS and HTML that the browser does, before it reaches the browser.

    I think something like RequestPolicy might be implemented as a local proxy more easily, though, since it filters connections in general rather than content.
     
  18. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    Hi,

    Would you cite an example of each?

    thanks,

    ----
    rich
     
  19. Sordid

    Sordid Registered Member

    Joined:
    Oct 25, 2011
    Posts:
    235
    One must consider that, historically, most exploit cascades/redirects have been reached via ads on legitimate "top sites" (Alexa).

    Script blockers like gorhill's allow you to default-deny pages (unlike Adblock) and to block by category, which makes them more robust than just blocking URLs/elements by blacklist.

    For the sites one visits most often, I would set up custom block rules (generally default-deny, unless the page is very dynamic, a la scrolldit.com). Otherwise, you fall back on the generic adblock subscriptions. Looking through my history, most of my usage is on the same 50-odd sites. I'd say most users have an even smaller common website pool.

    Therefore, with default-deny on custom sites, i.e. the sites you visit often, an attacker would have to exploit trusted servers. It makes nil difference whether they are first- or third-party: they are either trusted or not, and they are either exploited or not. Generally, one would not whitelist the most likely attack vector on a top site: ads.

    For coincidentally trusted sites that are unknown (legit links you've never visited), the attacker would likely have to inject via new ad servers/unblacklisted sites. Fanboy's list is pretty damn solid at blocking most ad servers, especially those used by top sites.

    As for pure exploit pages (direct malware links), it still offers protection, since critical sites like Gmail can be globally blocked except when you are on Gmail. It also allows malware domains to be blocked directly via lists or TLD filtering.
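
    In uMatrix-style rules that containment looks roughly like this (quoted from memory, so verify the syntax yourself):

    Code:
    * mail.google.com * block
    mail.google.com mail.google.com * allow

    Gmail's domain is blocked everywhere except within its own scope.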

    This takes low effort once configured, has low FPs, and gives good coverage against many attacks. Pair that with the fact that pages load faster and with less total overhead, and it's an easy decision for me. Why wouldn't you run a script blocker like uMatrix?
     
  20. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    If those connections are contained in javascript or delivered via iframes, a proxy that doesn't understand that content won't be able to filter the connections they contain. As a pure connection filter, I don't see how it could determine which connections are the original, which are 1st party, and which are 3rd party, especially if they're not all clear HTML.
    Both Proxomitron and Privoxy are capable of parsing both HTML and javascript. I haven't used Privoxy to any real degree, but Proxomitron handles javascript quite well; many of the filters used by both the Sidki filterset and the ProxBlox merge rely on that. Since Proxomitron itself is basically a search-and-replace engine, it seems to me that its only limitation is the filters: what they search for and what they replace it with.
     
  21. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    Saving my skin: the last one happened a while ago. IIRC it was a compromised website with a Windows exploit kit. I was on Linux anyway, but I guess that counts.

    Not saving my skin would mainly be this, though I'm still not sure what sort of attack was used.

    https://www.wilderssecurity.com/threads/some-advice-for-linux-users-re-security-procedures.372748/

    Short version:
    - opened up a terminal and reflexively ran ls
    - saw an unidentifiable binary executable in my home directory
    - looked at the process accounting history
    - noticed instances of GCC running unbidden in my account while I was away from keyboard

    I'm not sure this involved a browser attack, but I don't think it involved system services or the package manager - if whoever or whatever had root access, they probably would have used it, and stayed very well hidden. My suspicion though is that two other machines were in some way involved:
    a) The firewall, my erstwhile netbook, which I had stupidly set up with Debian instead of a dedicated firewall distro.
    b) A Windows 7 machine, which I had not been checking up on, and which turned out to be very obviously compromised.

    I can PM you more details on that if you want. I'm fairly sure my network is clean now, but this was definitely a learning experience.

     
  22. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    You're right, it wouldn't; my bad. I blame my clogged sinuses.

    Privoxy can block JS by file, and I think by script tags. I don't imagine it would do too well with obfuscated embedded scripts.

    Proxomitron I have no idea, but I'm somewhat dubious. Search-and-replace (e.g. regex or such) isn't enough for dealing with a format like HTML, because HTML tags can be nested indefinitely; and parsers of various sorts are subject to a lot of vulnerabilities.
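
    For example, a trivially split script tag (a classic trick, with a made-up URL) already defeats any literal search for "<script":

    Code:
    // the string "<script" never appears verbatim in the page source
    document.write("<scr" + "ipt src='//ads.example.net/x.js'></scr" + "ipt>");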
     
  23. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    You could always give it a try. The filtersets that I linked to in post 15 are good, and it will run on Linux via Wine. The ProxBlox merge file is impressive, IMO.

    Interesting that you mentioned Linux firewalls. I'm currently trying to sort out a problem with Smoothwall that makes no sense to me at all. At first I thought it was a RAM problem or a hard drive going bad. Now I'm not so sure. After running the same unit for years, I lost the ability to edit port-forwarding rules. Snort was shown as enabled but wasn't running. Most of the logs are empty. I lost access to the DMZ from the LAN. At one point, I'd swear that the red and green NICs exchanged places. On another occasion, two network cards were suddenly shown as unassigned. I ran Memtest for several hours, all good. I substituted another hard drive and reinstalled everything. In a very short time, I was right back where I started, with the same exact problems. I've reinstalled again and still can't access the DMZ from the LAN. Either it's a strange hardware failure or the hardware itself has been compromised. The failures are too specific to be a coincidence, especially the loss of almost all of the logs.
     
  24. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    You mentioned NoScript. I don't know much about NoScript, so I am curious: Were you redirected, or did you go directly to the site, not knowing it was compromised? How did you know the site had an Exploit Kit? Did NoScript pop up an alert?
    How did you determine that it was a failure of NoScript and not some other vulnerable point?

    thanks,

    ----
    rich
     
  25. Well, before explaining, let me stress that it is your time, so if it makes you feel better, go ahead.

    Running scripts means allowing code that is hosted on someone else's computer to run on your computer. When the integrity of that other computer is broken, allowing A and blocking B from that computer is irrelevant: you can't distinguish good from bad, and the reputation (blacklists) on which you based yesterday's decision can be different tomorrow.

    In terms of risk perception, allowing one or two scripts versus allowing all scripts is like playing Russian roulette versus standing in front of a firing squad. But let's be realistic: you first have to get into such a bullet-facing situation. When did you last play Russian roulette?

    Besides, the code executed by the script has to break out of the browser's sandbox first. Given the low chance of running into an exploit that breaks Chrome's built-in sandbox (or breaks Sandboxie protecting Firefox), the reality is that it is a lot of work for a minor risk reduction.

    Despite this fake-security argument, I still block third-party scripts, simply because you gain speed and privacy (as wat0114, Bo, Sordid and others explained). Only when I encounter a problem do I simply allow third-party scripts and trust my security measures.

    Thanks to everybody for sharing your thoughts and joining this discussion (and thanks for removing some off-topic posts) :thumb:
     