IT Pro confession: I contributed to the DDOS attack against Spamhaus (theregister.co.uk)
79 points by esalazar on March 28, 2013 | 52 comments



"""Let's say that you leave your recursive server open to the internet. Now not only can you ask your DNS server for information about other DNS servers on the internet, so can anyone else. If someone asks your server "where is www.google.com" a whole bunch of times then your server starts flooding google.com's DNS servers. For every 1 byte of data sent to your DNS server 50 bytes of traffic end up directed at the target."""

This explanation is skipping a key component of a DNS reflection attack. When the attacker makes a DNS request, they spoof their source address so it is the address of the host they want to attack. Thus they send a small request to your DNS server, and your DNS server returns a large response not to them, but to the host they're attacking.


Why does it need to be recursive then?

Couldn't you perform the same attack by querying a whole bunch of authoritative name servers for zones they serve with forged source addresses?


The attack as specified by the article would be pretty ineffective as recursive DNS servers tend to have a cache. So only the first request would hit the target DNS server.

Also, you can definitely do this with authoritative servers only, which is why egress filtering by ISPs is the only permanent solution to the problem. However, there are way fewer authoritative DNS servers out there than open recursors, and they are better managed. So an attack using only authoritative servers would never grow to the scale this one has.


I'm not so sure about that. Even with recursion turned off, the server still responds with an error packet. While the error packet is small compared to an actual answer, hit enough DNS servers (authoritative or not) and you can still generate quite a bit of traffic.


I don't think egress filtering is either practical or the only solution to this problem. I believe there are other solutions, better ones that don't involve filtering traffic and threatening the open nature of the internet.


You got me confused for a minute. So I assume in an actual attack you'll rotate the domains being requested?


No, the attack does not work by overloading target DNS servers. There is no benefit in asking a recursive DNS server to make a DNS request for you in order to overload another DNS server; you could just make that request yourself (or with whatever botnet you're using).

The attack works by sending a recursive DNS server a request with a spoofed source IP. Namely, you make the recursive DNS server think your target is making the request. While a typical DNS query is a 64-byte UDP packet, a reply can be much, much larger (it can go well over 1KB).

So say you have a botnet with a total bandwidth of 1Gb/s. Each request you make (64 bytes) will result in, say, 1KB being sent by the DNS server to your target, although the server thinks it is sending it to you. That is a 16x amplification of the amount of data you are sending the target's way. So instead of flooding your target with 1Gb/s of data, you are flooding it with 16Gb/s of DNS replies.

The only permanent solution to this problem (though it is discussed elsewhere in this thread why this is impractical) is for all (or almost all) ISPs to do egress filtering. That is, they would drop all packets sent from their networks with a source IP that is not on their networks. This would make it impossible to fool a recursive DNS server into sending the reply to the wrong IP.
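
For illustration, a BCP38-style egress filter on a simple Linux border gateway might look roughly like this (the interface name and the 203.0.113.0/24 customer block are assumptions; real ISP edges would do this on their routers rather than with iptables):

    # drop any forwarded packet leaving via the upstream link whose source
    # address is not from our own address space (BCP38 / egress filtering)
    iptables -A FORWARD -o eth0 ! -s 203.0.113.0/24 -j DROP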

Since this is very hard to do (ISPs have zero incentive to do egress filtering, and we can't even locate the ones from whose networks these attacks are originating to shame them into doing it), the pursued solution is the easier one of locating and closing publicly open DNS recursors. This would still allow DNS amplification attacks using authoritative servers, but they would be much more limited in scope.


Thanks for the great explanation.

So if I understand correctly, the problems with the DNS amplification attack using only authoritative nameservers are:

a) You have to keep track of which name to request from which server

b) You can't optimize for a particularly large response

c) Operators of authoritative name servers are likely to be more sophisticated and therefore have egress filtering.

d) There aren't as many authoritative nameservers as open recursive servers (?)


All points correct except c). Egress filtering happens on the ISP side, there's nothing the DNS server can do once it gets a request with a spoofed source IP.

But since operators of authoritative name servers are more likely to be sophisticated, they could notice an ongoing attack and throttle the replies without negatively affecting anything else. In fact, that protection could be built into the server code: simply throttle consecutive replies to the same requester to a sane rate. There is no legitimate use case where the same client makes a huge number of consecutive requests to an authoritative server, since responses are normally cached. If that's done, an attacker wouldn't be able to coerce authoritative servers into flooding a target; they would just send replies at a slow rate (after an initial speedy response) and no significant amplification would occur.
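
That kind of throttling does exist in practice: BIND's Response Rate Limiting (RRL) implements roughly this idea. A minimal named.conf sketch (the numbers are assumptions, not recommendations):

    options {
        rate-limit {
            responses-per-second 5;   // cap identical responses per client net
            window 5;                 // seconds over which rates are averaged
        };
    };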

As you state in d) there are a lot of open recursive servers out there that are unlikely to be updated or managed by someone sophisticated enough to respond to attacks like this. Whereas this is less likely with authoritative servers.


You'd have to ask for different domain names from every DNS server, whereas when they're recursive you can ask every machine for the same name, one chosen for a particularly large result.


You should also note that DNSSEC is just as much of a problem as open resolvers.

A normal A record lookup results in 1-2x amplification

   $ dig www.ripe.net. in a | grep SIZE
   ;; MSG SIZE  rcvd: 46
Asking for DNSSEC records specifically yields a 10x+ amplification

   $ dig www.ripe.net. in RRSIG | grep SIZE
   ;; MSG SIZE  rcvd: 534
According to research by DJB[1], over 2000 DNSSEC-enabled zones provide >30x amplification for incoming UDP queries.

1. cr.yp.to/talks/2012.06.04/slides.pdf


But you can easily force regular DNS to give you high amplification. Just query a domain which gives you a larger response, such as multiple A records, MXes, SPF records, and so on. For example:

    $ dig google.com. ANY | grep SIZE
    ;; MSG SIZE  rcvd: 546
If you want more amplification than that gives, just host a zone yourself. The recursive resolver will hit your DNS server once, then send out replies from its cache.

There seems to be lots of fearmongering about DNSSEC amplification, but you can get just the same amount of amplification out of regular DNS, so it seems that fixing DNS amplification in other ways would be more effective than trying to avoid adopting DNSSEC.


> There seems to be lots of fearmongering about DNSSEC amplification

There is also a 300+ Gbps DDoS attack making use of it right now. This was foreseen as a huge amplification vector in a stateless protocol during the design phase, but was ignored. Now we get to reap the benefits of that decision.

Normal DNS responses don't often grow to the size of google.com/IN/ANY. You have a very limited number of authoritative sources to use (and hosting yourself creates a bottleneck as well as a path back to you). With widespread DNSSEC adoption every zone becomes a good amplification source, which nullifies the current best practices for mitigation (rate limiting responses per zone/source).

If DNSSEC adoption becomes the norm, open recursive resolvers are no longer the only problem, and going direct to authoritative servers becomes a viable attack vector.


  | There is also a 300+ Gbps DDoS attack
  | making use of it right now
I have yet to see anyone state authoritatively that DNSSEC is being used in this attack. Could you provide a reference for this?

If this attack right now is able to reach 30x amplification without DNSSEC, then what's the point of decrying DNSSEC amplification as a huge issue?

Other discussion: https://news.ycombinator.com/item?id=5451299


DNSSEC is the amplification in "DNS amplification attack." I personally run a (heavily rate limited) open resolver as a honey pot to observe these attacks in progress.

You can read CloudFlare's own explanation of how these attacks work http://blog.cloudflare.com/deep-inside-a-dns-amplification-d...


Unless I'm misunderstanding, the 'amplification' in 'DNS amplification attack' doesn't necessarily refer to DNSSEC. The idea is that you use x amount of bandwidth to send y amount of bandwidth at the target where y = kx, for some value of k that is significant enough to make it more worthwhile than just sending the traffic directly.

E.g. make a UDP DNS request to an open resolver with the source IP forged to be your target, then the response is sent to your target (rather than to the real source of the request).

My understanding is that the problem people have with DNSSEC in this regard is that the data returned in those responses increases by a lot (allowing for a 30x increase?). But if attackers are able to accomplish this without DNSSEC, then what's the point of talking about how horrible DNSSEC will make things in this regard?


So could some right thinking person scan the internet for open DNS resolvers and perform DDoS attacks against them using other open resolvers?


Yeah! Right Thinkers! DDoS all those awful Wrong Thinkers! Knock their bastard servers right off the internet! That's the solution!


It's actually quite an elegant solution to getting people to configure their servers correctly. Instead of their servers being a hazard to the wider internet, they become a hazard to each other.


Do you simply not care about the collateral damage, or do you feel the ends justify the means?


I think that was implied. However you're right that the author should have made that a little more clear.


Thanks, I was wondering about their explanation. Every DNS server I set up will temporarily cache the results of its recursive lookups, so I didn't get how this was going to work.


The post slug is the best part of this article by far: "i_accidentally_the_internet" Full current URL in case it gets updated: http://www.theregister.co.uk/2013/03/28/i_accidentally_the_i...


You'll like the opening image in this, then; #10 on HN home page right now: http://blog.tinfoilsecurity.com/building-a-browser-extension...


I'm familiar with the meme. The humor isn't in the meme, but in the usage by a professional journalism establishment that is unrelated to the title of the piece.


Right.


I have a couple of name servers I've inherited since starting my job. How would I go about testing these servers to see if they're set up correctly? (Obviously I'm not interested in the forged UDP packets side of things, only in making sure that recursive lookups are disabled.)


you could do an nslookup google.com <your.name.server.ip> from off network, or check your blocks on http://openresolverproject.org/
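
A dig version of the same check (the server IP is a placeholder for one of your inherited boxes): an open recursor will answer with the "ra" flag set in the response, while a properly locked-down server returns REFUSED or warns that recursion was requested but is not available.

    $ dig @your.name.server.ip google.com A | grep flags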


What's a simple way to confirm by testing that your DNS server isn't doing recursion?


The Open DNS Resolver Project has a list of 25 million open resolvers. You can query their database for your IP address or up to a /24. Their site also has information on how to reduce or eliminate the problem via a couple of options (RRL, BCP38). If you run a BIND resolver, consider switching to unbound. Part of this problem is rooted in BIND combining resolver and authoritative service in one daemon, which IMO "mis-educated" a lot of people.

http://openresolverproject.org/
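
If you do move the resolver side to unbound, the "don't recurse for strangers" part is a couple of lines in unbound.conf. A minimal sketch, assuming an internal network of 192.0.2.0/24 (addresses are placeholders):

    # unbound.conf: answer recursive queries only for the internal network
    server:
        interface: 192.0.2.1
        access-control: 192.0.2.0/24 allow
        access-control: 0.0.0.0/0 refuse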



fantastic, thanks!


I don't understand why it's necessary for the server to be open and have recursion enabled. I run a couple of authoritative name servers and have seen them used for amplification attacks. Sure, it's not as easy as querying every open recursive DNS server you can find for <single_domain_with_huge_sized_reply>.com, but there are still (literally) billions of unique hostnames on the internet which can be resolved "legitimately" via their authoritative name servers. There is no magical config option to prevent this; the only way to block this type of activity is to analyze traffic to find IPs that are repeatedly sending the same [spoofed] request.


Some have suggested that DNS move to TCP, but I don't think that's the right approach. The nature of DNS lends itself to connectionless, lightweight communication. That said, could the next iteration of DNS implement application-level handshaking?

The reason not to do this at layer 4 is that, in the several minutes I've spent pondering it, I think it could break lots of security devices that track connection state across lots of computers in a network. Make it some kind of application-level exchange:

  C -> S request  
  C <- S ack 
  C -> S yes  
  C <- S lots of data  
  done

  C -> S request  
  C <- S ack
  C -> S no  
  done


Unfortunately, round-trip time is still important, too. I suspect almost doubling the DNS request time could cause problems in some cases.


This is a very real issue. My home machine was an open recursor for a while too. I set up a dnsmasq installation and forgot to set an "except-interface" to restrict it to the internal network.

I even like to think I know this stuff well, but still got burned. I'm sure at the time my security analysis (if I even thought of the externally-facing issue) was "who cares if I expose a caching nameserver with no sensitive content to the rest of the internet?".
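
For anyone in the same boat, the fix is a couple of lines in dnsmasq.conf. A sketch with assumed interface names (eth0 facing the internet, eth1 internal):

    # answer DNS only on the internal interface, never on the WAN side
    interface=eth1
    except-interface=eth0
    bind-interfaces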


How disappointing. I thought it was going to be the story of a fed-up email admin breaking down and DoS'ing one of the scourges of the internet.

Blacklists are pure evil, and nothing will ever change my opinion of that. They cause far more problems than they solve. Granted, the problems are usually caused by idiot, over-zealous mail admins who block on merely being listed anywhere, rather than by weighted score.


Blacklists are the only reason e-mail is still usable.


I thought it was statistical filtering and crowd-sourced spam tagging (like Google's spam filter). I maintain a mail server for a client, and SpamAssassin (edit: and greylisting) works well enough without blacklists enabled. Throw in a couple of extra Bayesian filters via procmail, and you're doing about as well as Google does.
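
For reference, the procmail glue is only a few lines; this is essentially the stock recipe pattern from the SpamAssassin docs (the size guard and folder name are just conventions):

    :0fw: spamassassin.lock
    * < 256000
    | spamassassin

    :0:
    * ^X-Spam-Status: Yes
    probably-spam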


Greylists are murder on businesses that depend on receiving mail from new people.

I see Spamhaus as akin to the Microsoft monopoly in the '90s. If your interests are aligned with theirs, great. And for most people they do a great job. But there are lots of small businesses who get caught up and nearly crushed, because a listing on a blacklist can be murder for a business that depends on communicating with people over email.


Why are graylists that horrible? All it does is require the sending server to retry 5 minutes later; I don't see how that would have any impact on a business unless they are in the habit of being on the phone with new customers and asking them to send an email at the same time.


Assuming the sending server does that. Maybe it takes a few hours. Maybe it doesn't. Small businesses can be a mess, and you can't say "well, your customers suck" when the client complains about how greylisting is working for him.


I've seen many poorly written web form handlers that try to do SMTP themselves, and that clearly don't ever attempt to retry graylisted failures...


Why in the world would greylists be "murder" on businesses? We use 10 minute greylisting, and I occasionally check the logs and it does not seem to ever cause us to lose e-mails from anything but spammers.


I hate spam as much as anyone but blacklists have gotten out of hand.

I rented a server, and when I decided to use it for sending emails, I found out the IPs were blacklisted. I tried appealing to Microsoft and they claimed the IP was blacklisted after I rented it. This was ridiculous since I had only installed Postfix a few days earlier and had barely sent any emails out.

So I decided to relay all my emails to another server and haven't had any problems with it for a year except now I am stuck with a server with blacklisted IPs.

Some messages still randomly get blocked by Hotmail while Gmail happily accepts them. This whole email delivery problem is a mess, and the fact that people are paying to have someone else send their emails is proof of how badly email is failing.


I run a mail server with OpenBSD's spamd[1] with greylisting and it works well enough. Most spam is not sent by SMTP-compliant hosts. Blacklisting makes it particularly hard to recover IPs that have ever been compromised, and it unfairly hurts good hosts on otherwise untrustworthy networks (like some home ISPs).
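
For the curious, hooking spamd into pf is only a handful of lines; this sketch follows the example in the spamd(8) man page (post-4.7 pf syntax) and is not a complete ruleset:

    table <spamd-white> persist
    # send SMTP from unknown hosts to spamd for greylisting
    pass in on egress proto tcp to port smtp rdr-to 127.0.0.1 port spamd
    # hosts that retried correctly get whitelisted and reach the real MTA
    pass in on egress proto tcp from <spamd-white> to port smtp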

[1] http://www.openbsd.org/spamd/


Greylisting is what makes the biggest difference for us too.


Agreed. If I had known at the time that there was a DDoS against Spamhaus, I'd have probably joined in against the self-righteous pricks. Block my home server, will you?


As far as I know, Spamhaus doesn't block anything. They just maintain a list of IPs they see spam-like activity coming from. The actual blocking is implemented by whoever is using Spamhaus for their blacklist. Your beef is with the admins who block everything based on a single blacklist.


That's kinda like saying the MPAA ratings system is OK because my beef is really with large theater chains that refuse to carry non-rated films. Large, entrenched authorities who provide ratings about [insert noun here] have historically been a problem.


I believe your analogy is correct; I see no problem with it. Theaters do not have to follow the rating system. There are local theaters that will play unrated films, and film festivals that play unrated films. Likewise, while you can find some ESRB ratings on the Apple App Store, Apple doesn't really have to pay attention to those and is free to set its own standards. If a theater chain refuses to play a film because it is unrated, that's not the MPAA's fault. And I never thought I would be defending the MPAA...

Sure, large and entrenched authorities pose risks. There's not much that can be done about that as long as they're large and entrenched. The best way to ensure their power stays in check is to have their customers put pressure on them to clean up their act. And the end user (you and me) is not their customer. We are customers of the businesses and organizations implementing their block lists. I understand your frustration, and as a security professional I have my own beef with Spamhaus ratings, but the answer to that problem lies in comparing their ratings with those of other organizations and a bit of common sense.


One of my servers got exposed too; it was being queried for ripe.net.



