[+] [-] audidude|9 years ago|reply
Well, at least it's pretty clear up front: no malloc() result checks, invalid format strings in printf() calls, and probably more.
(I don't mean to be disparaging, please don't take it that way, OP. We clearly need more/better tooling in this area in C.)
(Edited for clarity)
[+] [-] userbinator|9 years ago|reply
The last time I worked with the DNS protocol, I seem to remember that everything had a well-defined (and small) limit on its size, and it wasn't really necessary to use dynamic allocation at all. Strings are length-prefixed, not null-terminated, and the longest length fields are only 16 bits.
A look at the RFC shows the strict limits on size:
https://www.ietf.org/rfc/rfc1035.txt
In comparison to text protocols like HTTP, I see far fewer opportunities to make errors like buffer overflows with DNS. It's a simple and relatively easy-to-parse binary protocol.
[+] [-] tptacek|9 years ago|reply
[+] [-] technion|9 years ago|reply
I think the tooling we have nowadays is better than we give it credit for. Between clang's analyzer, ASan, Infer and tis-interpreter, I often find that when someone points out a vulnerability, I can get one of these tools to flag it.
Note I'm not suggesting this is enough to make C safe.
Edit: A quick demonstration, based on 30 seconds with valgrind: if I run ./bin/massdns --help, it performs a malloc() on line 595, then exits on line 605 without free()ing. And yes, I know this isn't really an issue in practice, but it demonstrates that the tools can see these issues.
[+] [-] blechschmidt|9 years ago|reply
[+] [-] blumentopf|9 years ago|reply
Ugh, so public resolvers are flooded with requests? Wouldn't it make much more sense to set up a local caching resolver like Unbound and feed your queries to it? It would be much more considerate towards the public resolvers and would also use less of your own bandwidth.
[+] [-] matt_wulfeck|9 years ago|reply
There are valid use cases for this. For example, web crawling needs to do millions of one-time resolutions, which completely negates the benefit of a cache.
[+] [-] textmode|9 years ago|reply
I believe a better description for this program would be a "stub resolver".
[+] [-] blechschmidt|9 years ago|reply
This is already discussed below and you are right; it was an issue with my perspective on the term "recursion". Unfortunately, I cannot change the HN title anymore. (Maybe some moderator can?) The GitHub project description has been changed.
[+] [-] blechschmidt|9 years ago|reply
Some of the resolvers might indeed ban or rate-limit you, and might even send abuse complaints to your ISP.
EDIT: You might have a look at the newly implemented --norecurse option.
[+] [-] 616c|9 years ago|reply
Out of curiosity, what is the difference, in a blurb? I was reading about people bemoaning the addition of systemd-resolved as the default stub resolver in Ubuntu 16.10. I get it, but do not have time for the full RFC.
https://lobste.rs/s/edrihm/sad_news_today_systemd_resolved_b...
http://seclists.org/oss-sec/2014/q4/595
(Please understand, I use systemd, just not resolved yet; this is not meant to start a tangential flame; I just realize even though I am one of those idiots who preaches better security and runs his own dnsmasq settings per recommendations of known bloggers, I need more detailed background on DNS.)
[+] [-] dmlittle|9 years ago|reply
What are the benefits of resolving DNS entries in a non-recursive manner? A recursive resolve means that the final resolution of the domain is cached by the DNS server that was originally queried, along with all the servers involved in resolving it. My understanding is that this behavior is ideal.
[+] [-] blechschmidt|9 years ago|reply
What I mean by non-recursive in this case is that the program does not perform recursion by itself; it relies on a list of DNS resolvers instead. When querying these resolvers, the recursion bit is actually set. However, I agree that the title might be misleading, but I am not sure how I could express it better.
[+] [-] unknown|9 years ago|reply
[deleted]
[+] [-] jedisct1|9 years ago|reply
For easy comparability, the request handling rates of tools like this are best described in terms of requests per second, not requests per hour. The hardware on which the benchmarks were performed also needs to be described in detail.
[+] [-] blechschmidt|9 years ago|reply
This is the line of code that pre-checks whether an incoming packet should be parsed at all. If it is not a response packet, or if the number of questions is not one, it is not parsed, simply because the packet parsing function expects a reply packet with exactly one question, namely the one that was queried for.
[+] [-] known|9 years ago|reply
[+] [-] unknown|9 years ago|reply
[deleted]
[+] [-] otterley|9 years ago|reply
Are you sure tinydns, which has existed for ages without security vulnerabilities, can't trivially handle 27k requests/second on modern hardware?
[+] [-] blechschmidt|9 years ago|reply
The tool has mainly been tested on a Hetzner EX41 server (Ubuntu, Intel® Core™ i7-6700, 32 GB RAM).