What is a good non-cucked DNS?

I've been using Level3 but they have a really annoying "feature" that automatically takes you to their shitty search page when you enter an incorrect domain. The search page spergs if you block its tracking shit too.

Other urls found in this thread:

wiki.opennicproject.org/Tier2ServerConfig

OpenNIC

Why didn't I think of that? Thanks!

How trusty are these?

More than the rest.

I'd like to use this, but I'm looking for one that non-autistics can access; I'm not going to convince everyone to manually add DNS server IPs to their computers.

(I'm not OP)

Anyone know if namecheap is good? That's the one my buddy recommended to me.

TorDNS

TorDNS + dnsmasq to deal with the 500ms latency

Sure the exit node can fuck with the DNS, but it's a trade-off. The exit node doesn't have your IP; OpenNIC / L3 / etc. do.

For shit that matters you can just add the ip directly into the hosts file.

They may or may not have an agenda, whereas others definitely do. Also, objective pros:

Plus you get to enjoy the marvels of autisternet like end.chan. Holla Forums is always bitching about normies on muh internet; OpenNIC sounds like a great solution.


They're okay but don't they de-whoisguard your domain if you stop paying?


That was a bit annoying to set up but I think I got it to work. How can I test what DNS is being used for non-Tor traffic (should still use TorDNS even for clearnet)?

I fantasize that I am INTERNET DANGERMAN and I must have Elite DNS servers.

We just don't want to support globalist kikes.

DNSPort in torrc is specifically for DNS; you should be using that for the clearnet DNS traffic. Either run tor as root and have DNSPort sit on 53, or have dnsmasq sit on 53 and point it at some higher DNSPort.
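Something like this, roughly (the exact ports are just examples, check the tor and dnsmasq man pages):

Option A, torrc only (tor running as root so it can bind 53):
DNSPort 127.0.0.1:53

Option B, torrc:
DNSPort 9053
plus dnsmasq.conf:
port=53
no-resolv
listen-address=127.0.0.1
server=127.0.0.1#9053

Then point the system resolver at 127.0.0.1.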

If it's not clearnet traffic then you should be using the SOCKS port; you shouldn't be having any DNS leaks.

91.218.115.155
213.161.5.12
108.61.164.218
31.3.135.232
45.32.28.232
193.183.98.154

these are prob good, i use the top one

nice try Putin

any OpenNIC server with DNSCrypt.

any opennic server with dnscrypt

...

Any OpenNIC DNS server with DNSCrypt.

Gandi. They're not the cheapest registrar, but they don't fuck you over when you need to renew like Namecheap and GoDaddy do.


An alternative if you want it to be fast and anonymous is to run your own OpenNIC T2.


We do have an agenda: ending the ICANN monopoly. Most of us don't log queries and we're not interested in invading your privacy like all the other DNS providers do.

...

That was my goal. I set "DNSPort 9053" in torrc, and then:
no-resolv
port=9053
server=127.0.0.1#9053
listen-address=127.0.0.1

In dnsmasq.conf based on the Arch wiki.

I was expecting this to result in the local dnsmasq cache being tried first, with the external DNS only used when there's nothing cached. It seems to be working: the first "dig example.com" has a query time of 20-30 ms, but if I do it again it's 0.

However if I comment out my secondary OpenNIC IPs in resolv.conf and only leave "nameserver 127.0.0.1", then resolution breaks. Cached domains still resolve, but if I try a new domain I get an error about server not responding. But torify curl and tor-resolve still work. So it seems like dnsmasq is not talking to tor's DNS.

Yeah, how do I actually verify that? Wireshark, or is there a more standard way?


How hard is this?

That's something I can get behind. What I meant was an "anti-user and pro-corp/govt agenda"; sounds like OpenNIC is the opposite. At least taking things at face value.

ftp://FTP.INTERNIC.NET/domain/named.cache

Can you install the DNS server of your choice and edit a config file?
wiki.opennicproject.org/Tier2ServerConfig

Also, check netstat to see if dnsmasq and tor are listening on the correct ports, and check their logs for errors.
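Something like this should tell you (standard tools, nothing thread-specific; tcpdump needs root):

# is anything actually listening where you expect?
ss -lntup | grep -E ':(53|9053)'
# does a query against dnsmasq resolve, and how long does it take?
dig @127.0.0.1 example.com
# watch for plain DNS leaking out past loopback; this should stay quiet
tcpdump -ni any 'port 53 and not host 127.0.0.1'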

you have dnsmasq looping on itself
in dnsmasq.conf
port=53
resolve-file=
no-resolv
no-poll
listen-address=127.0.0.1
server=127.0.0.1#9053

DNSPort 9053 in torrc

make sure you restart tor after you change torrc
make sure you restart dnsmasq after you change dnsmasq.conf
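On a systemd distro that's just:

systemctl restart tor
systemctl restart dnsmasq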

Also, assuming you're using Arch, or netctl in general, to handle networking, you have to add

DNS=('127.0.0.1')
in your profile in /etc/netctl

otherwise it's going to take DNS settings from DHCP and put them in resolv.conf (which dnsmasq should ignore anyway with these settings), but it will still overwrite the file.

Also, once you've got dnsmasq working you shouldn't be using resolv.conf at all; all the DNS settings go into dnsmasq.conf.

Go back to reddipol, retard.

I bet you're that autistic ponyfag who was spamming the board a few days ago

Is there a DNS solution that doesn't involve trusting some random server? For instance, could I just download all the DNS records for the whole thing and do lookups locally?

As far as I understand, even if you run your own DNS server it still gets records from some other server, which you have to trust to not log your queries.

Wouldn't all the DNS records be like petabytes of data?

dunno, but storage is cheap.

Probably best to set up unbound for local longer-term caching, although I am interested in getting all of the records as well.
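A minimal sketch of that in unbound.conf (TTL values are made up, tune to taste):

server:
    interface: 127.0.0.1
    # hold answers at least an hour and at most a week, regardless of upstream TTLs
    cache-min-ttl: 3600
    cache-max-ttl: 604800
forward-zone:
    name: "."
    # forward to whatever you actually trust: an OpenNIC IP, dnscrypt-proxy, or tor's DNSPort
    forward-addr: 127.0.0.1@9053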

Ah, figures. I thought port was the port it communicates to tor on. I think you made a typo too: it should be "resolv-file", in case anyone else tries this.

Anyway, that solved the issue. My DNS queries now take 10x as long so I'm pretty sure I'm using tor now, I'll check traffic in more detail later to make sure. Thanks a bunch, would have taken me a long time to figure that out on my own. Sorry for turning it into the tech support thread.

Well, Manjaro, and the network manager applet. Yeah, I noticed it uses the ISP DNS from my router. Easily fixed by going into connection settings, IPv4 tab, switching "Automatic" slider to off under DNS and adding 127.0.0.1 as DNS server.

Although I think resolv.conf takes precedence over NM (e.g. if you edit resolv.conf while connected). The issue is that NM overwrites resolv.conf on connecting (so every boot)... You can disable that, but it's unnecessary hassle since your config bypasses resolv.conf anyway, like you said.

By the way, don't you disable the normal DNS in three ways? no-resolv, resolv-file, no-poll seem to be doing similar things, so just no-resolv should be enough in theory. Is it just to be safe?


For IPv4 there are 256^4, so about 4*10^9 addresses. If you store them in a string->IP hashmap, each IP is 4 bytes, so that's about 17 GB for the mappings. Plus the hashes of the domains and so on, but still pretty manageable. Maybe you could even compress it.

In reality not all IPs are used, and also many IPs have no domain. So it would be a lot less than that. Also you don't visit every site. Realistically "your own" DNS would only have the 1% of domains you resolve 99% of the time, and on the rare occasions that you visit a rare domain it will resolve that from an external DNS and cache it for later. Which is pretty similar to dnsmasq I think, except dnsmasq caches them after the fact instead of pre-fetching.

For IPv6 you do run into 1-ish PB. A bit high for a desktop but still doable... Though a bit inefficient just for DNS. Domains cover only a tiny, tiny fraction of IPv6 space, though, so in practice again you would use far less space.

This is not a big issue with an intelligent client side. Instead of just resolving from one server and trusting it, what the OS should be doing is resolve new domains from several independent DNSes and compare (that way hijacking is immediately detected). It should then save those records locally and go by that cache for most user activities. The cache is occasionally revalidated (but at random times not coincident with the user actually connecting to that server, so that name resolution doesn't leak browsing activity). Cached domains are also carefully watched, warning the user if a domain that has long been associated with a given set of IPs suddenly changes (maybe even looking at geolocation for fewer false positives), so they can manually review and decide whether it's just the server moving or fuckery. This way, even if somebody subverts all the DNSes out there, if you've accessed that domain before you are protected from their shenanigans.

I dunno, maybe dnsmasq already does this stuff if you examine the configs. I'm not that tinfoil yet though, just wanted to stop using shitty datamining botnet DNSes.

I use level3 sometimes too, if I do it's usually 4.2.2.2, it only does the search shit on rare occasions. OpenNIC is a better option as mentioned way earlier.

I don't know if they're part of OpenNIC but dnscrypt.pl is good, even though it's far away for North American users it still works well enough. Using DNSCrypt also gives some extra protection for DNS leaks for VPN users.

Yes, it's called run your own OpenNIC T2.


You just invented DNS. Though there's more than one record pointing to each domain, so for instance, .com is ~20GB. I'd estimate the whole thing is ~500GB, and it changes constantly.


I don't believe they are, but several OpenNIC T2s offer DNSCrypt, and the .bit TLD is a namecoin gateway on OpenNIC

I was just trying to do a back of the envelope calculation.

How do real DNSes store it though? I would have probably used a big database with lots of caching to RAM. For scaling, divide up the databases between different machines. Then hash the domain with very low entropy and map that to each DB.

Yeah, but I figured those are rare so they won't inflate numbers much.

Ah right, guess that's the real problem. Storage is cheap but you would be constantly querying other DNSes to stay up to date.

Is there any way to query 2 independent DNS servers, check that the URL resolves to the same IP, and only then connect you to said URL?

Generally, a flat zone file. I'm not 100% sure of the internal representation, but the binary wire data format, sorted with an index, would be extremely fast. Relational databases (i.e. SQL) are actually quite slow in comparison.

The majority of domains probably have 1-10 A records and an MX record at a minimum. Most will also have some AAAA records, maybe NS records and an SOA for delegation, and a good number will have all the bits like DKIM, SRV, etc. And that's not even counting DNSSEC.
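For reference, a zone in the standard text format looks roughly like this (made-up domain and documentation addresses):

example.org.      3600  IN  SOA   ns1.example.org. hostmaster.example.org. (
                                  2024010101 7200 3600 1209600 3600 )
example.org.      3600  IN  NS    ns1.example.org.
example.org.      3600  IN  A     203.0.113.10
example.org.      3600  IN  AAAA  2001:db8::10
example.org.      3600  IN  MX    10 mail.example.org.
www.example.org.  3600  IN  A     203.0.113.10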


Nothing existing that I know of, but you could trivially do that with any DNS library, or even dig and a wrapper script. Problem is that browsers have their own DNS clients, because reasons.
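The wrapper is basically this (a rough sketch; the two resolver IPs are just two of the OpenNIC ones posted earlier, swap in whichever pair you trust):

#!/bin/sh
# resolve the same name against two independent resolvers and compare
# note: round-robin / CDN domains can give legit mismatches, so this is noisy
domain="$1"
a=$(dig +short @91.218.115.155 "$domain" A | sort)
b=$(dig +short @193.183.98.154 "$domain" A | sort)
if [ "$a" = "$b" ]; then
    echo "$a"
else
    echo "mismatch for $domain, possible hijack" >&2
    printf 'resolver 1:\n%s\nresolver 2:\n%s\n' "$a" "$b" >&2
    exit 1
fi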

I'm really tempted to write one now.

It would be really easy to have it look at DNS requests, do the custom validation, and then output the results into the hosts file, although obviously this will have many practical issues.

It would be even better to put it between tor proxy and dnsmasq, but then you have to learn how DNSes actually work. Can't be that hard though, you just have to listen on one port and spit stuff out the other.

I'd be really surprised if this hasn't been done years ago, though. Seems like such an obvious thing.

Ya, pretty much. no-resolv should do it; the rest are just to be safe. You can also firewall off 53 to be safe; nothing should be going out on 53 anymore.
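e.g. with iptables (a sketch; only loopback gets to use 53, so local dnsmasq still works):

iptables -A OUTPUT -o lo -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o lo -p tcp --dport 53 -j ACCEPT
# anything else trying plain DNS to the outside gets rejected
iptables -A OUTPUT -p udp --dport 53 -j REJECT
iptables -A OUTPUT -p tcp --dport 53 -j REJECT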

ya def a typo thanks

This is the problem and I came to the same conclusions. Dnsmasq doesn't have that capability. The issue with hitting multiple DNS servers to compare is that you're defeating the purpose of TorDNS unless you pump those requests over a VPN. DNS uses UDP; tor must have specific code dealing with that to pump it over, so you can't just point DNS at a SOCKS port and proxy it normally with Tor.

You'd pretty much have to write your own little server that does this, but the other issue is the DNS TTL. Most of the time it's something like 600s (10 minutes), and that comes from whatever settings the person who owns the domain is using. You'd have to override or ignore this with your server for it to make sense, but then your DNS isn't going to pick up on IP address changes quickly, or at all if it's ignored.

You can change the TTL with dnsmasq but there are hardcoded limits. I forget what it is, but the devs specifically limited it; I think it's something like half an hour. To override that you'd have to download the source, change it, and recompile.
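The relevant dnsmasq.conf knobs, for reference (the floor on min-cache-ttl is the compile-time limit being talked about, so larger values get clamped):

# serve cached answers for at least this long, whatever the record's TTL says
min-cache-ttl=3600
# and optionally cap silly-long TTLs
max-cache-ttl=86400
# dnsmasq only caches 150 names by default, bump it
cache-size=10000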

Those clients should be going through your DNS anyway.

HTTPS is supposed to deal with trusting the output of the DNS, i.e. that the IP = domain name. This assumes you trust HTTPS. I think most browsers do a decent job of complaining if someone sends you a self-signed cert instead of the legit one, but I don't know a lot about HTTPS spoofing.

install new router firmware and force it through there
problem is, servers go down regularly enough that it'll need semi-regular maintenance

Not necessarily. You can make multiple TorDNS resolutions, maybe even changing relays in between. That way you'll get several results, from different relays. If one relay is bad, it will be a minority opinion. This makes TorDNS even slower (but with dnsmasq who cares) but you hardly compromise anonymity. It might facilitate attacks based on entry-exit timing though, since it would be a characteristic pattern of requests.

Also, if you're okay with slightly compromising your privacy, you can do these resolutions in the clearnet. First run normally for a few months and log all the domains you visit. Then resolve those domains, maybe a few random ones with similar alexa rank/tags to thwart fingerprinting, as a batch at the beginning of the week. Save those resolutions, and from then onwards, use TorDNS but compare the response to your local database. If they match, TorDNS is telling the truth. If they differ, either the server changed in the few days after the pre-fetch, or your Tor exit node just tried to attack you. This can be decided automatically (do additional DNS queries to verify) or prompting the user to investigate manually.

The idea is that knowing your IP tries to resolve a set of domains at 9 am every Monday is a lot less information than seeing a DNS query from your IP every time you go to a page. For one, with the weekly (or monthly or whatever) prefetch there's no guarantee that you will actually visit any of the domains in your batch request that week (and if you salt with decoys from Alexa, there's no guarantee that you ever visit them at all); second, the DNS only knows that you want to resolve that domain, but not when and how many times you visit it.
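The prefetch half is basically a cron job; a crude sketch (the paths are made up, and the resolver is just one of the OpenNIC IPs from earlier standing in for "some clearnet resolver"):

#!/bin/sh
# weekly: resolve every domain we've logged (plus decoys) over clearnet,
# and keep the answers as a local baseline to compare TorDNS responses against
while read -r domain; do
    dig +short @193.183.98.154 "$domain" A | sort | sed "s/^/$domain /"
done < /var/lib/dnsbaseline/domains.txt > /var/lib/dnsbaseline/baseline.txt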


You mean that if somebody tries to trick you with an incorrect resolution, you will still be alerted when the IP you got is unable to produce a valid SSL cert matching that domain? If so, yeah, good point. So long as you trust the certs, HTTPS is safe; only HTTP is an issue (but not running HTTPS everywhere is retarded anyway).

However, the cert is only tied to the domain by virtue of a CA, IIRC. If a CA is compromised then the hijacker can easily make a fake cert to go along with the DNS hijacking. Random Tor trolls wouldn't be able to do it (unless they are elite hackers), but states could.


You could have the router's DNS loop back to one of your own machines on the LAN. Then whatever complicated fault tolerance you need can be done on there with scripts, and you can SSH into it for easy maintenance.

I use pdnsd as a local DNS cache. It's just a caching proxy for my ISP's DNS. I'd rather do lookups with my ISP than some other enterprise.

I have not experienced problems with caching entries longer than their TTL; I think I've set it to a week or something. Not many services have domains whose DNS changes very often.

I have also set up entries for pdnsd to look up requests for TPB on TPB's own nameservers. It's a pretty neat and simple little DNS cache, not as complex as dnsmasq.
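If anyone wants to copy that, the server sections in pdnsd.conf look roughly like this (IPs are placeholders and the exact syntax is from memory, check the pdnsd docs):

server {
    label = "isp";
    ip = 192.0.2.1;          # your ISP's resolver
    timeout = 4;
}
server {
    label = "tpb";
    ip = 198.51.100.1;       # TPB's own nameserver
    policy = excluded;       # only use this server for the domains listed below
    include = ".thepiratebay.org.";
}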

I came.

The problem is that you then have to trust hundreds of signers, including a great many FONs, and only one has to be breached.
The other issue is that a forged certificate can be used for pinpoint attacks. If they can't get at the target's DNS server or play MITM (DNSCrypt to a foreign server, or TorDNS, solves this), they have to push their forgery to the world, and someone will probably notice.