UMatrix extension usage guide

Discussion in 'other software & services' started by Mrkvonic, Jan 1, 2018.

  1. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,208
    New year, new challenges. Today, we have a tutorial explaining how to use uMatrix, a point-and-click, matrix-based privacy extension. It covers setup; the options (privacy settings, hosts files, rules and 1st-party script blocking); a basic overview and usage; the top-down domain hierarchy; the color code; permanent and temporary rules; other tips and tricks; and more. Enjoy.

    https://www.dedoimedo.com/computers/umatrix-guide.html


    Cheers,
    Mrk
     
  2. Brummelchen

    Brummelchen Registered Member

    Joined:
    Jan 3, 2009
    Posts:
    5,858
    I prefer to stick with the original -> the wiki:
    https://github.com/gorhill/uMatrix/wiki

    Btw it's not gold, it's a pale red. You need some adjustment for your colors :p

    I would say your essay is a quick overview, but I really miss the details. In particular, the "My rules" section is much more powerful than the guide explains.
    (If I weren't using uBlock, I would have carried my script-blocking options from uB over into uMatrix.)
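    For example (hostnames here are picked by me purely for illustration, and the exact syntax should be double-checked against the wiki's ruleset page), a couple of hand-written lines like

        * googletagmanager.com * block
        wilderssecurity.com ajax.googleapis.com script allow

    block one destination everywhere while allowing a specific script host only inside one site's scope - the kind of thing that is hard to see from the point-and-click matrix alone.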

    Btw the "privacy" tab is no longer present.

    What I also want to say is that I see uMatrix as a companion for uBlock. Currently, with the NoScript issues, a lot of people are calling for uMatrix as a replacement - maybe it is one, but in the first place uBlock should be in place, with uMatrix working as a companion.
     
  3. summerheat

    summerheat Registered Member

    Joined:
    May 16, 2015
    Posts:
    2,199
    Well, the guide is a nice help for beginners. However, it misses some important things. For example, it mentions the scope selector but does not explain the scope concept although this feature is one of the biggest advantages of uMatrix. And the excellent logger is not mentioned at all.
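    Just to sketch what I mean by scopes (a made-up example, syntax roughly as it appears in the My rules pane): the first field of every rule is the scope it applies to, so a line like

        dedoimedo.com disqus.com * allow

    allows Disqus only while you are on dedoimedo.com; in every other scope it stays blocked by the default deny. Switching the scope selector in the popup simply decides which scope your clicks get recorded under.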

    I don't understand the rationale of the paragraph in the guide which claims that the hosts files are neither necessary nor beneficial.
    Why is this neither necessary nor beneficial? It's true that all 3rd-party domains are blocked in uMatrix by default anyhow. But explicitly blacklisting those trackers/adservers/malware sites prevents a user from accidentally allowing them (as can happen in NoScript). At least the user has the chance to think twice before he/she allows such a site. Secondly, for non-blacklisted 3rd-party domains the css and image columns are allowed by default in order to avoid too much breakage - but for blacklisted sites everything is blocked. And this is exactly what I want: why shouldn't I block all network requests to a malware site or an adserver?
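    Roughly, if I remember the stock ruleset correctly, the defaults boil down to something like

        * * * block
        * * css allow
        * * frame block
        * * image allow
        * 1st-party * allow
        * 1st-party frame allow

    i.e. css/images are let through everywhere to limit breakage - whereas a hostname pulled in from the hosts files is blocked outright, css and images included.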
     
  4. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,208
    Because I believe manually blacklisting the internet is a futile exercise, and there's really no reason to check your local files to see whether an outgoing connection should be allowed or not. That's not how the Internet works. You should work under the premise that ANY site could be bad - and then figure out how to be fine regardless. That's far more effective, inclusive and strategic.
    Mrk
     
  5. Brummelchen

    Brummelchen Registered Member

    Joined:
    Jan 3, 2009
    Posts:
    5,858
    Hmm, the quote tells me that the hosts file is not necessary - now you say that it has a benefit - so which is it?

    Or did I misunderstand - do I have to configure uM that way (w/o hosts files), treating all sites as bad at first, and then whitelist them? Nah, that's rubbish and a lot of work.
     
  6. gorhill

    gorhill Guest

    Sure, but that doesn't make integrating these hosts files futile. Many users working in default-deny mode will often say "I allowed everything for the site to work" -- I see this commonly on NoScript forums. Well, when they do this with uMatrix, there is still some protection left. See soft-allow-all. Also, css/images are allowed by default in uMatrix -- resulting in less breakage -- and these hosts files are still useful to prevent connections to known servers which deserve to be blocked. At the end of the day though, it's for each to choose how to work, and these hosts files are useful for certain ways of working.
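    (An illustration with a made-up site, precedence recalled from memory: soft-allowing all in the popup amounts to a scope-wide rule such as

        example.com * * allow

    which does not override the more specific, hosts-derived blocks - a hostname that is on one of the hosts lists stays blocked unless the user explicitly allows that exact cell.)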

    The overhead you mentioned, did you actually measure it? If not, how do you know it's even worth mentioning as an issue?
     
    Last edited by a moderator: Jan 4, 2018
  7. Sordid

    Sordid Registered Member

    Joined:
    Oct 25, 2011
    Posts:
    235
    Yup, rubbish. Whitelisting the internet is a futile exercise!!!

    Two obvious problems:
    You WL wilders, then wilders becomes malicious.
    In trying to make sites work, you temporarily WL a malicious actor.

    The zinger: all that labor. I'd rather have malware than have to trudge through whitelisting every site I visit on an hourly basis. YMMV.

    I do a combo. I whitelist low-dynamic, third-party-reliant sites I visit often, and especially those which require security. Then I enforce blacklists for malware globally, and block ad sites/elements in the browser via uBlock etc.
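    In uMatrix terms (my own invented hostnames, not a prescription), that combo looks something like a handful of permanent scoped allows for the regulars, e.g.

        mybank.example ajax.googleapis.com script allow
        mybank.example static.mybank.example * allow

    while the global blacklists (hosts files in uM, filter lists in uBlock) keep doing the blanket malware/ad blocking everywhere else.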
     
  8. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,208
    We're discussing two different things.

    I am not talking about UM specifically - I am talking about the use of hosts files in general.

    As for the fact that there are sites that deserve to be blocked - possibly - but the solution should be "out there", not in your browser. Now, the issue with trust is that you will not be betrayed by someone you do not trust - it's not malware on bad sites that gets you, it's malware on good sites. That is the scenario that requires planning - and the real question is: if you do allow something, what then? I argued the exact same thing regarding Untrusted sites in NoScript.

    How to do this smartly - well, that's a separate topic.

    As for the overhead, it exists - whether it's humanly perceptible is beside the point. Yes, you can measure this. If you're interested, I can provide you with some detailed Linux-based testing results.

    Mrk
     
  9. gorhill

    gorhill Guest

    It's confusing: your guide specifically points at the hosts files as used by uMatrix - there is a picture of uMatrix's dashboard. Something worth mentioning which I forgot is that uMatrix enforces hosts resources differently than how an OS treats them: the subdomains of any domain in the hosts resources are also blocked by uMatrix.
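    Concretely (hostname invented for illustration), take a typical entry:

        0.0.0.0 ads.example.com

    In uMatrix this entry also covers tracker.ads.example.com and any other subdomain of it, whereas a plain OS hosts file only matches the exact hostname listed.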
     
  10. wat0114

    wat0114 Registered Member

    Joined:
    Aug 5, 2012
    Posts:
    4,063
    Location:
    Canada
    After ~ two years of use, I find I'm most comfortable blocking iframes only, along with the default hosts files settings. I believe gorhill calls this enhanced + easy mode? I'm only talking about uBlockO here, but I think the same applies to uMatrix. I just find it too time-consuming trying to fix broken sites whenever I block 3rd-party scripts in addition to the settings I've mentioned.

    It also astounds me how many members in this forum are piling on additional hosts files, apparently to augment the included ones in pursuit of their holy grail of blocking ads, when they're likely causing significant site breakage in exchange for blocking only a few more ads. In my experience, adding more hosts files = breakage of more sites.
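    If I've got uBlockO's dynamic filtering syntax right, that setup boils down to a single line in My rules on top of the default lists:

        * * 3p-frame block

    i.e. 3rd-party frames blocked globally, with everything else left to the filter lists.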
     
  11. summerheat

    summerheat Registered Member

    Joined:
    May 16, 2015
    Posts:
    2,199
    This ("the solution should be 'out there', not in your browser") sounds a bit nebulous. What could that be?

    We're not living in a perfect world. In principle, every site could contain malicious code. In that case you can only hope that the browser sandbox/site isolation/whatever mitigates the effects. But that is not a counter-argument against blacklisting well-known malicious and/or privacy-invading sites. It is considerably better than nothing. And many examples from the past tell us that it is mostly 3rd-party sites which cause the trouble we're talking about (e.g. malware hidden in ads).

    Please do so! I'm sure that not only @gorhill is interested in your findings. Until then, I doubt that such an overhead is significant. I'm pretty sure that blocking all network requests to adservers makes website loading a lot faster in most cases.
     
  12. summerheat

    summerheat Registered Member

    Joined:
    May 16, 2015
    Posts:
    2,199
    Yes, one can overdo the usage of hosts files. There are huge hosts files available with several hundred thousand entries which frequently cause site breakage because they are not properly maintained. However, the default selection in uM, or Steven Black's hosts file (which condenses and de-duplicates some of them), hardly ever causes trouble for me. And in any case they are very helpful if you - as gorhill wrote above - simply allow the 'all' cell in order to unbreak a site, as the known bad guys out there would still be blocked.
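    For anyone unfamiliar with the format: these lists are ordinary hosts files, lines like

        0.0.0.0 doubleclick.net
        0.0.0.0 adservice.example

    (the second entry is an invented placeholder) - uM only cares about the hostname part and, as noted above, also applies each entry to its subdomains.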

    Having said that, I'm not quite sure that Mrk sufficiently distinguishes between blocked and blacklisted sites.
     
  13. gorhill

    gorhill Guest

    Test case: loading the front page of wired[.]com after soft-allowing all -- necessary if we want uMatrix to fall back as much as possible onto the hosts resources for blocking purposes:

    [attached screenshot: a.png]

    Here is the profiling result for 5 page reloads: https://perfht.ml/2CJocSL

    "LiquidDict" is the data structure holding the 85,000 distinct domains from the hosts resources.

    The overhead of enforcing the 85,000+ distinct hosts resource entries was ~2ms after 5 reloads of the test page -- and the 2ms is decomposed into two instances of 1ms, because LiquidDict is invoked from two different code paths. Keep in mind the profiler's resolution is 1ms, which means these measurements must be read as two instances of 1ms ± 1ms. Essentially no-ops, given this is after 5 page reloads.

    Given this, I would say it's all worth it: considering that users often end up soft-allowing all in real-life usage, these hosts resources provide real and concrete protection, despite being no match for default-deny. It's nice for users to have this extra layer of protection when they forfeit the first one (default-deny).
     
  14. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,208
    Once again, the emphasis is not on the speed (or the magnitude of it) - it's on the whole concept. Hosts files = anti-virus signatures.
    I am opposed to any blacklisting method, everywhere, ever, and this is no different.
    So whether this provides an extra layer of protection - arguable.
    Mrk
     
  15. summerheat

    summerheat Registered Member

    Joined:
    May 16, 2015
    Posts:
    2,199
    Plus anti-privacy-invading. What's wrong with that?

    I still don't understand why. What's wrong with blacklisting domains which are known to be malicious or privacy-invading? I don't get it.
     
  16. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,208
    @summerheat, I do not want to have an ideological discussion. It's pointless - like talking about personal beliefs. You want to use the hosts file feature, go ahead. I have an alternative way of doing things. You don't have to like it, or vice versa. Either way it's fine.

    Just for your information, the words you used - privacy/malicious/etc. - are subjective/circumstantial. Hence we cannot argue about them. I prefer a different approach - if there's a compromise, what then? And if your backup fails, what then? DFF strategy.

    Cheers,
    Mrk
     
  17. summerheat

    summerheat Registered Member

    Joined:
    May 16, 2015
    Posts:
    2,199
    Sure. Perhaps your way of doing things is the better one. But you didn't bother to tell us what this way looks like. Until then, I cannot follow why you're opposed to blacklisting.
     
  18. Mrkvonic

    Mrkvonic Linux Systems Expert

    Joined:
    May 9, 2005
    Posts:
    10,208
    There's nothing to reproduce. It's simple:

    An item that is not on a blacklist turns out to be malicious (or whatever). You allow scripts, including that one.
    Somehow, browser security is penetrated. What do you do now?

    My approach is:

    Assume that any site could potentially be bad (even unintentionally).
    What mechanisms do you have in place to stop a potential exploit/problem from supposedly benign/trusted sources?
    This is where you start really implementing solutions that do not rely on good/bad classification but on legit/illegit code.
    This is where things like EMET come into play.

    But then, you assume that even EMET is insufficient, what next?
    Perhaps a limited user.

    Still not good enough - what next?
    How's your data - what if your computer burns down (call it whatever you want: accident, malware, coffee)?
    Do you have your data protected so you can resume normal operation within a reasonable time?
    Sufficient data retention, distribution, multiple copies, online/offline, additional protection, etc.

    There's no end to it, of course. You can end up a doomsday prepper with a machine gun in your lap.
    But you can find a convenient spot between probable risk and comfortable life.

    Mrk
     
  19. summerheat

    summerheat Registered Member

    Joined:
    May 16, 2015
    Posts:
    2,199
    I tried to answer that question before. And it's still unclear to me why known malicious sites should not be blacklisted, even considering all those imponderables you mentioned.

    Anyway, we're going round in circles. We obviously deal with that problem in different ways. So be it.
     