Today, Google made its DNS resolver service public (8.8.8.8 and 8.8.4.4 - the IP-address equivalent of 1-800-CALL-SAM).
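For anyone who wants to try it: on a typical Unix box with no local caching daemon, switching to Google's resolvers (8.8.8.8 and 8.8.4.4) is a one-file change:

```
# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4
```

(On most consumer setups you'd make the equivalent change in your router or OS network settings instead, since DHCP will overwrite this file.)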
From the <a href="http://code.google.com/speed/public-dns/docs/performance.html#provision">Google Public DNS docs</a>:
<blockquote>The complexity of the name selection problem makes it impossible to solve online, so we have separated the prefetch system into two components: a pipeline component, which runs as an external, offline, periodic process that selects the names to commit to the prefetch system; and a runtime component, that regularly resolves the selected names according to their TTL windows.

The pipeline component synthesizes the names from a variety of sources, mainly the Google web search index and recent Google Public DNS server logs, and ranks them according to benefit. Benefit is determined by a combination of various factors, including hit rate, the cost of a cache miss, and popularity. The pipeline then factors in the TTL cost; re-ranks all the records; and selects all the names from the top of the list until the cost budget is exhausted. The pipeline continuously updates the list of selected names. The runtime component continuously resolves the selected name records and refreshes them according to their TTL settings.</blockquote>
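That "select from the top of the list until the cost budget is exhausted" step is essentially a greedy knapsack pass: short-TTL names cost more to keep fresh, so benefit has to be weighed against refresh cost. Here's a minimal sketch in Python - the scoring numbers, field names, and cost model are my own invention for illustration, not Google's:

```python
# Greedy selection of names to prefetch: rank by benefit per unit of
# refresh cost, then take names from the top until the budget runs out.
# All benefit scores and TTLs below are made up for illustration.

def refresh_cost(ttl_seconds, period=86400):
    """Approximate upkeep cost: how many re-resolutions per day a
    record with this TTL forces on the runtime component."""
    return period / ttl_seconds

def select_names(candidates, budget):
    """candidates: list of (name, benefit, ttl_seconds) tuples.
    Returns the names to commit to the prefetch system."""
    ranked = sorted(
        candidates,
        key=lambda c: c[1] / refresh_cost(c[2]),
        reverse=True,
    )
    selected, spent = [], 0.0
    for name, benefit, ttl in ranked:
        cost = refresh_cost(ttl)
        if spent + cost > budget:
            continue  # too expensive to keep fresh within budget
        selected.append(name)
        spent += cost
    return selected

names = [
    ("google.com", 95.0, 300),      # popular, but short TTL = costly
    ("example.com", 10.0, 86400),   # unpopular, but nearly free to keep
    ("wikipedia.org", 80.0, 3600),
]
print(select_names(names, budget=30.0))  # → ['example.com', 'wikipedia.org']
```

The interesting consequence is visible even in this toy: a hugely popular name with a 5-minute TTL can lose out to a moderately popular one with a 1-hour TTL, because the budget is spent on re-resolutions, not on names.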
Google's biggest advantage is that they have, as usual, the baddest-ass database available of popular domain names - and thus of which names are worth caching and prefetching. This is something that low-tier DNS servers (traditionally the ones ISPs run for customer access, e.g. a Comcast subscriber gets DHCP-served some regional Comcast DNS server addresses) simply cannot do. Local DNS servers have a very small sample size for site popularity, and are completely disconnected from what other local DNS servers see as being popular.
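That disconnect is easy to picture: each regional resolver ranks names from its own small log, while a global operator can pool every log before ranking. A toy illustration in Python (the logs and resolver names are invented):

```python
from collections import Counter

# Query logs as seen by three isolated regional resolvers.
# Each one's popularity ranking is noisy because its sample is tiny.
logs = {
    "isp-east":  ["a.com", "a.com", "c.com"],
    "isp-west":  ["c.com", "c.com", "b.com"],
    "isp-south": ["a.com", "c.com", "b.com"],
}

# A single regional resolver's view of popularity:
local_view = Counter(logs["isp-east"])

# A global operator pools every log before ranking - which is what
# makes one shared prefetch list worthwhile.
global_view = Counter()
for queries in logs.values():
    global_view.update(queries)

print(local_view.most_common(1))   # east thinks a.com dominates
print(global_view.most_common(3))  # pooled view: c.com actually leads
```

The regional server's top pick disagrees with the pooled ranking even in three-site toy data; with real traffic the gap between a local sample and a global one is far larger.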
I do envy Google's attempt to keep it simple - OpenDNS is not keeping it simple at all, and gives users control over things they shouldn't have, e.g. creating arbitrarily-resolvable names and blacklisting certain DNS entries. Conceivably, Google engineers can figure out some great algorithms to dramatically improve efficiency and reduce the total traffic caused by superfluous DNS requests... but how about asking ISP nameservers to use Google's servers as a primary cache? Why go directly to users and have them tie themselves to it? This is a very bizarre strategy Google's employing.
While looking through OpenDNS's site, I came across <a href="http://blog.opendns.com/2009/11/11/opendns-matching-googles-free-airport-wifi-gift-with-free-services-for-the-holidays/">this hilarious little tidbit</a> - can it be any more obvious that OpenDNS is digging deep? "Hey network admins, you can use our free service, for free, but only during the holidays! Can I pleeeeease get authority over your DNS? Pleeeeease?"