I use DNSCrypt if I am not on my VPN. Related: DNSCrypt Windows Service Manager, dnscrypt-proxy/dnscrypt-resolvers.csv, https://dnscrypt.eu, dnscrypt-winclient
I would like to add something here. I just did a ping to www.startpage.com followed by a ping to startpage.com. Prior to that I had this in my /etc/hosts:

212.121.101.10 startpage.com

Here was the result:

[xxx@yyy ~]$ ping -c 1 www.startpage.com
PING startpage.com (89.146.4.149) 56(84) bytes of data.
64 bytes from rt4bb146-89-149.routit.net (89.146.4.149): icmp_seq=1 ttl=47 time=768 ms

--- startpage.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 768.702/768.702/768.702/0.000 ms

[xxx@yyy ~]$ ping -c 1 startpage.com
PING startpage.com (212.121.101.10) 56(84) bytes of data.
64 bytes from startpage.com (212.121.101.10): icmp_seq=1 ttl=46 time=839 ms

--- startpage.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 839.221/839.221/839.221/0.000 ms

As we can see, startpage.com resolved to 212.121.101.10, the IP given in my hosts file, whereas www.startpage.com resolved to 89.146.4.149. So I think we need to add both www.startpage.com and startpage.com to the hosts file. We also need to add the host name for sites on shared hosting, where multiple websites sit on a single IP address, like this:

<ip-address> <domain-name> <host-name>
212.121.101.10 www.startpage.com startpage
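The behaviour above comes down to hosts-file lookups being exact string matches on the name, with no wildcards, so the apex domain and the www name each need their own line. A minimal sketch of that, using a throwaway copy of the entries (the IP is simply the one from the post, and /tmp/demo-hosts stands in for the real /etc/hosts):

```shell
# Illustrative hosts entries in a throwaway file (exact-match demo only;
# the real file is /etc/hosts and the IP below is just the one from the post):
cat > /tmp/demo-hosts <<'EOF'
212.121.101.10 startpage.com
212.121.101.10 www.startpage.com
EOF

# A hosts lookup is an exact match on the hostname field, which is why a
# 'startpage.com' line alone never covers 'www.startpage.com':
awk -v h="www.startpage.com" '$2 == h { print $1 }' /tmp/demo-hosts
# prints: 212.121.101.10
```

With only the startpage.com line present, the same lookup for www.startpage.com prints nothing, which matches the ping results above: the www name fell through to the regular resolver.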
I still haven't figured out which is worse: my ISP seeing my traffic, or a VPN. These VPNs are shady as ~ Snipped as per TOS ~ to me.
Or let a DNS caching program like Acrylic do the work for you: http://mayakron.altervista.org/wikibase/show.php?id=AcrylicHome. A GUI for the program is available here: http://dev.arqendra.net/index1.php#adpm
I'd like to add something worth mentioning here. While you're innocently surfing websites, connections are being established in the background. Advertising nodes are used not only for marketing purposes but for tracking as well; notorious ones include Google's 1e100.net, amazonaws.com and akamaitechnologies.com. Pull up your command prompt and type in (the filter is case-sensitive):

netstat -aop tcp 10 | findstr ESTABLISHED

You'll find out how noisy simple web surfing actually is. This is where a good understanding and implementation of firewall rules is imperative.
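On Linux the same check can be done without netstat at all, by reading the kernel's socket table directly. A minimal sketch, assuming a stock /proc filesystem (for a rolling view like the Windows command's 10-second refresh, something like `watch -n 10 'ss -tnp state established'` does the job if iproute2 is installed):

```shell
# Established TCP sockets straight from /proc (Linux only).
# Column 4 of /proc/net/tcp is the connection state; 01 means ESTABLISHED.
# Column 3 is the remote endpoint, encoded as little-endian hex IP:port.
awk 'NR > 1 && $4 == "01" { print "remote (hex):", $3 }' /proc/net/tcp
```

The addresses come out in hex here; `ss -tnp state established` prints them human-readable and also names the owning process, much as the -o flag does for netstat on Windows.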
Ad-blocking and anti-tracking extensions and the MVPS hosts file are all very good at blocking these without messing with firewalls. I use JavaScript blocking as well, with only vetted top-level domains allowed to run scripts, but that isn't for everyone. Lots of people say it "breaks the web". From my point of view, it puts the web back under your control and breaks only what you don't want.
This is very inefficient. They only allow three free searches per IP address, and even then it does not list all the subdomains. One can just use Wireshark with the dns display filter to see which subdomains DNS queries are being sent for, then drill/dig subdomain.com to get the IP address of the domain/subdomain.
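The capture-then-resolve workflow can be sketched as below. The tshark filter lines are assumptions that need Wireshark's CLI installed and capture privileges, and the domain names are placeholders; only the getent line is guaranteed to run anywhere with a standard resolver setup:

```shell
# Watch outgoing DNS queries while browsing (tshark = Wireshark's CLI;
# requires capture rights, so shown here as a reference, commented out):
#   tshark -i any -f "udp port 53" -Y "dns.flags.response == 0" \
#          -T fields -e dns.qry.name

# Then resolve any name that turns up, e.g. with dig (needs network):
#   dig +short subdomain.example.com

# getent works the same way and also consults /etc/hosts first;
# localhost is used here only as a name that resolves everywhere:
getent hosts localhost
```

The dns.flags.response == 0 filter keeps only queries (not answers), so the output is exactly the list of names your machine is asking about.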
Yep, the web started with plain old HTML markup. A lot of what is done with JavaScript is (a) associated with dreadful individual browser quirks and the workarounds for them, and (b) having to cater for old standards. If you could target only modern HTML5 and CSS3, then most user-facing elements could be script-free. Another part is genuine functionality. And the biggest, and most dangerous, part is advertising and tracking.

And then we come to the fact that browsers were supposed to be individual sandboxes limited to one site, not sharing with all and sundry; that sharing makes cross-site attacks easy. And the browser sandboxes are buggy and leaky, so we cannot trust that they will keep us safe from the JavaScript code: unsigned code which could come from pretty much anywhere and be changed at any time. Doesn't sound a good idea, really! So whitelisting at least the sources of code is a pretty sound policy.