Interactive whitelisting DNS-based firewalls that can run in a gateway?

Discussion in 'other firewalls' started by Ulysses_, Feb 3, 2019.

  1. Ulysses_

    Ulysses_ Registered Member

    Joined:
    Jun 27, 2010
    Posts:
    266
    What does it matter whether IPFire runs in a VM or on a netbook? The compromised Windows client where gatekeeper runs can tell IPFire to open up.
     
  2. RioHN

    RioHN Registered Member

    Joined:
    Mar 14, 2017
    Posts:
    63
    Location:
    Here
    It matters because it removes attack vectors. A compromised system can't just "tell IPFire to open up". Unless gatekeeper is specifically targeted, the compromised Windows system would need to either exploit some flaw in the firewall or authenticate. The firewall doesn't trust the client. As I previously said, the likelihood of such an attack is low enough that I'm not concerned about it.

    We're getting into the realms of highly targeted attacks, in which case there's nothing stopping an attacker hacking your gateway and completely neutralising your DNS filter.
     
  3. Ulysses_

    Ulysses_ Registered Member

    Joined:
    Jun 27, 2010
    Posts:
    266
    The proposed gateway has an extremely limited attack surface by virtue of its extreme simplicity. By the way, did you say that you're authenticating by typing a password every time you say "yes" to a site?
     
  4. RioHN

    RioHN Registered Member

    Joined:
    Mar 14, 2017
    Posts:
    63
    Location:
    Here
    No.

    I had a look at DNS requests for various websites to give you an idea of the work involved with your future setup:

    I connected to 4 sites:

    Yahoo.com
    Arstechnica.com
    TheRegister.co.uk
    Microsoft.com

    Yahoo
    I scrolled to the bottom of the page then clicked one random story
    Total DNS requests: 109
    Total Unique domains: 68
    63 requests came within the first 6 seconds. 44 of these were unique

    Arstechnica.com
    Scrolled to the bottom of the page then clicked one random story
    Total DNS requests: 328
    Total Unique domains: 184
    259 requests came within the first 6 seconds. 146 of these were unique domains

    TheRegister.co.uk
    Scrolled to the bottom of the page then clicked one random story
    Total DNS requests: 99
    Total Unique domains: 59
    62 requests within the first 4 seconds. 34 of these were unique domains

    Microsoft.com
    Scrolled to the bottom of the page then clicked one random link
    Total DNS requests: 87
    Total Unique domains: 53
    29 requests within the first 2 seconds. 17 of these were unique domains.

    Totals:
    Total requests across the 4 sites: 622
    Total Unique domains requested across all 4 sites: 305

    With uBlock Origin the numbers come down dramatically on sites with ads:

    Yahoo
    Total DNS requests: 57
    Total Unique: 37

    Arstechnica
    Requests: 24
    Unique: 12

    TheRegister
    Requests: 32
    Unique: 15

    Microsoft
    Requests: 48
    Unique: 30

    Some initial thoughts, some I've already stated.

    • Even with an adblocker you're going to be spending way more time dealing with DNS requests than browsing. It's simply not fit for purpose.
    • DNS requests will time out before you have a chance to get to them, due to the volume queued.
    • Are you seriously considering typing "yes" 30 times just to get the Microsoft site fully working? (Likely refreshing it multiple times due to timeouts.)
    • You'll have no way to know the origin of a DNS request (browser? Windows? malware?) and therefore no way to know whether it should be allowed. This is especially true of CDNs, as previously discussed.
    • How do you intend to keep on top of your IPTables rules, of which you'll quickly have hundreds, when IPs are repurposed or changed?
    • What happens when you whitelist an IP? Do you still get prompted in future for the same domain? If not, what happens when an IP changes and a site is no longer accessible? Would changes to cached DNS entries automatically update IPTables rules? How?
    • Shared hosting allows for thousands of domains sharing a single IP. Your solution would allow access to every domain on a shared hosting server, for every application on the client, if you said yes to just one of those domains.
    • In a targeted attack situation, and due to the large volume of requests, it would likely be easy to trick you into allowing a malicious domain.

    Just some things to think about.
     
    Last edited: Jun 5, 2019
  5. Ulysses_

    Ulysses_ Registered Member

    Joined:
    Jun 27, 2010
    Posts:
    266
    Many names like youtube.com and s.yimg.com are common all over the place; you only whitelist them once. Also, *.yahoo.com, *.oath.com and *.youtube.com are families where you either trust all members of the family or you do not. You could still choose to allow some of *.microsoft.com and not all.

    Only once ever. And it can be just a press of the "y" key, not typing "yes - ENTER". Or "n" for "no more prompts".

    Thousands of rules are automatically generated whenever needed, once you've allowed *.google.com or whatever. Maybe rules can be automatically simplified too (1.2.3.0/24 instead of 1.2.3.0, 1.2.3.1, ... or whatever N-bit wildcard) for performance.

    You don't whitelist an IP, you whitelist a name (or a wildcard like *.yahoo.com) and automatically all IPs associated with it. I think when an IP in a cache is no longer accessible, a new DNS request is generated.
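    As an aside, stock dnsmasq can already do this name-to-IP mapping: its `ipset=` option adds every address resolved for a domain (and its subdomains) to a netfilter ipset, which a single iptables rule can then match. A minimal sketch of that approach, where the set name "allowed" and the FORWARD-chain placement are assumptions, not a tested gateway config:

```shell
# Create a set to hold whitelisted addresses (hash:net also accepts CIDRs,
# so aggregated ranges like 1.2.3.0/24 fit in the same set).
ipset create allowed hash:net

# In /etc/dnsmasq.conf: every IP resolved for yahoo.com or any of its
# subdomains is added to the "allowed" set automatically:
#   ipset=/yahoo.com/allowed
#   ipset=/oath.com/allowed

# One iptables rule covers the whole set, instead of hundreds of
# per-IP rules; everything else is rejected.
iptables -A FORWARD -m set --match-set allowed dst -j ACCEPT
iptables -A FORWARD -j REJECT
```

    This only handles the "allow" side; the interactive y/n prompt before a name reaches the config would still have to sit in front of dnsmasq.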

    Yes, but the malware would not know this, and would either attempt its own malicious IPs, which would fail, or do DNS lookups, which would fail. The chances of a hacker's malicious site being hosted at the same IP as one of my trusted sites are small. That's an issue with all IP-based firewalls.

    Why don't you list the unique names looked up when you visit yahoo.com with Adblock Plus installed. I'm sure they are only 5 or 10 if you write them as wildcards, possibly less for decent functionality without bells and whistles.
     
    Last edited: Jun 5, 2019
  6. RioHN

    RioHN Registered Member

    Joined:
    Mar 14, 2017
    Posts:
    63
    Location:
    Here
    You missed one of the most important security related issues:
    • You'll have no way to know the origin of a DNS request (browser? Windows? malware?) and therefore no way to know whether it should be allowed. This is especially true of CDNs, as previously discussed.

    Add to this:

    • DNS requests will time out before you have a chance to get to them, due to the volume queued.
    • You have no way to deal with connections made to IPs directly; they'll simply fail, without a DNS request.

    Regarding your points:

    That's once for one site. Think of long browsing sessions where you may be looking through many Google results and following hyperlinks from those pages: each time having to okay a few DNS entries, reload the page, allow a few more, and try to work out which are needed and which aren't. And what if you only want to add the rules temporarily, or for one process? It's not possible; all clients and all processes with internet access would be able to connect.

    I think I'm confused about what you're actually trying to achieve. On the one hand you mentioned strictly controlling DNS requests preventing Microsoft from uploading data, and now you're talking about blindly allowing root domains and all subdomains (potentially tens of thousands of addresses) without knowing their purpose.

    Forgetting for now the issues you may experience with such large rule lists, many big online services allow users to store files or create websites: Google Drive, Microsoft OneDrive and Amazon Cloud Drive, to name a few file storage services. If your PC was compromised, there would be numerous ways in which your files could be uploaded or malware downloaded without you knowing. Google File Cabinet and sites.google.com have been used in the past to spread malware.

    If you allow *.google.com in your script what happens? How are you retrieving all subdomains of google.com in this instance?

    Just to be clear, my point isn't that you should be worried about services like Google Drive, it's that this solution isn't the security panacea you seem to think it is, and it's impractical to use.
     
  7. Ulysses_

    Ulysses_ Registered Member

    Joined:
    Jun 27, 2010
    Posts:
    266
    There will be a whitelist of names and wildcards that will be parsed whenever a DNS request is intercepted and the DNS request will be allowed to complete if the name or a wildcard for it is in the whitelist or if it is added to the whitelist with a press of "y". Each DNS reply is intercepted too to extract the IP and make an iptables rule for it. So after a while you have 10 IP rules for the same name, uk.yahoo.com, 15 IP rules for guce.yahoo.com, etc.
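    The whitelist lookup described above can be sketched as a small shell function; an unquoted `case` pattern already handles both exact names and *.yahoo.com-style wildcards, so no extra parsing is needed. The function name and one-entry-per-line file format are assumptions for illustration:

```shell
#!/bin/sh
# whitelist_match NAME FILE
# FILE holds one entry per line: exact names (uk.yahoo.com) or
# wildcards (*.yahoo.com). Returns 0 if NAME is whitelisted.
whitelist_match() {
    name=$1
    file=$2
    while IFS= read -r pat; do
        [ -n "$pat" ] || continue
        # Unquoted $pat is treated as a shell glob, so "*.yahoo.com"
        # matches uk.yahoo.com, guce.yahoo.com, etc.
        case $name in
            $pat) return 0 ;;
        esac
    done < "$file"
    return 1
}
```

    A caller would run this on each intercepted query and fall through to the y/n prompt on a miss, appending the accepted name or wildcard to the file.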

    There is the current script posted here, which is in its infancy and not working yet. And there is another set of scripts I wrote 5 or 10 years ago that are fully functional, based on dnsmasq, dmesg and iptables, where you type DNS names/wildcards after you see some names on the screen that are blocked initially. You have to click Reload in the browser after you type all the names you want, and repeat if some more names are needed. :) That was for dealing with software suspected of being trojans, in VMs of course. It was developed to block ultrasurf's bizarre connections to *.mil etc., but was enlightening about other applications and operating systems too.

    The new solution aims to delay DNS requests until a swift decision is made with a "y" or "n" press for each name. So there's no need for Reload; the browser will just think ping times are terrible, unless you're really slow in pressing "y". Or you can say "y" to one name at a time, watch what Firefox shows, say "y" to another name only if the page is still incomplete, and so on, in the hope that some names and their associated content turn out not to be needed.
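    The single-keypress decision could look something like the sketch below, assuming bash for `read -n 1` (POSIX sh's `read` is line-based and would need Enter); the function name is hypothetical:

```shell
#!/bin/bash
# Sketch of the per-name decision: one keypress, no Enter required.
# "y" allows the name; any other key blocks it.
decide() {
    local key
    # -n 1 returns after a single character; -p shows the name asked about.
    read -r -n 1 -p "Allow $1? [y/n] " key
    echo
    [ "$key" = y ]
}
```

    The intercepting script would call this only on whitelist misses, so already-learned names never prompt again.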

    In practice there's a lot in common between complex sites; stuff like AdSense is ubiquitous, so you ok it once or reject it once, and the learning stays forever and benefits other sites too. With simple sites, only one name is to be ok'ed.

    If you can distinguish uk.yahoo.com from fr.yahoo.com in terms of trustworthiness, there's nothing to stop you from saying yes to uk.yahoo.com instead of *.yahoo.com. Probably a default to the observed uk.yahoo.com is needed and you also get the option to accept *.yahoo.com instead if you press "2" for "2 levels". With microsoft.com you would definitely choose specific names only, and as few as possible. Or none.

    That's trivial to add as a feature: it just needs to intercept all communications with unknown IPs and prompt likewise, allowing them to pass if whitelisted, and maybe showing whois or reverse DNS information about the IP to aid the decision.

    Data volume is no issue, socat has plenty of buffering capability with a certain option and so do unix pipes. The number of "y" presses can become an issue, but if you need to take your time to think, Reload is the least of your concerns.

    This is a good point. CDNs are not my thing; I never understood people who so thoughtlessly give their data away, so security is unlikely to be their thing either. Browser-generated DNS names are easy to tell: just hit Reload and here they come again.
     
    Last edited: Jun 5, 2019
  8. lucd

    lucd Registered Member

    Joined:
    Jan 30, 2018
    Posts:
    155
    Location:
    Poland
    My question might be a little provocative, but what's the benefit, if you can block IPs with simple software like PeerBlock? (I don't do torrenting, but it's useful for browsing.) You can even do geofencing (there are great updated lists, not just the old broken I-Blocklist ones, for example: https://netroar.com/BlockListsNotes.txt) and write your own rules in a text file, without the hassle of setting up VMs and wasting resources. The only problem is that PeerBlock is vulnerable to a buffer overflow, which could be mitigated with some anti-exploit tool. PeerBlock works great for IP blocking and temporary or permanent whitelisting, but ASN filtering is another matter.
    If we want maximum security, wouldn't it be better to just use pfSense, which has an IDS, in a VM? I mean, what's the point of a firewall without IDS/IPS (intrusion detection and prevention)? I can see the value of IP blocking, but there is a more complete competitor, pfSense. I would only run IPFire because testing software is fun.
     
    Last edited: Jun 11, 2019
  9. RioHN

    RioHN Registered Member

    Joined:
    Mar 14, 2017
    Posts:
    63
    Location:
    Here
    We're discussing solutions external to the Windows OS. In the event you had a rootkitted machine, how well would PeerBlock protect you from data exfiltration? When the blocking is done on the machine which is compromised, can you trust it?
    List-based IP blocking has its uses (I have PeerBlock installed currently), but IPs can change at any time and lists can quickly become unmanageable if not maintained.

    I have no loyalty to IPFire, it just happened to fit my needs at the time.

    I also believe this is all overkill for the most part. I've never had issues which required these setups. It is fun to play around with, though.
     