Dyn Analysis Summary of Friday October 21 Attack (dyn.com)
187 points by LVB on Oct 26, 2016 | 113 comments



Changes made by large companies that relied 100% on DYN:

* us-east-1.amazonaws.com: split between internal, UltraDNS, DYN

* spotify.com: all internal nameservers now

* reddit.com: all Route53 now

* github.com: all Route53 now

* netflix.com: all Route53 now

* paypal.com: split between UltraDNS and DYN

No changes made:

* twitter.com: 100% with DYN


All our eggs in that basket got crushed. Let's take this basket over here and put all our eggs in that instead. Facepalm


It's probably temporary? Move to R53, then figure out what's needed to manage records on two providers (if only for internal processes). Top engineering teams aren't going to knee-jerk this, right? Or did Dyn show some unfixable incompetence?


Is it? I thought GitHub, at least, split between Dyn and Route53 shortly after the Dyn outage started as a means of getting back online. Since then, they've dropped Dyn and are exclusively on Route53.

I don't think Dyn showed any incompetence; the parent poster was merely remarking on relying entirely on a single provider, who, if they get DDoS'd, takes your site down with them. (There was some previous discussion about splitting between providers, but some commenters noted that it was difficult, or at least non-trivial, to replicate records between two providers.)


The problem is that you need to find DNS providers that allow master and slave configurations of your DNS information. For example, Dyn can act as a master and UltraDNS can act as a slave; Route53, however, can be neither. With Route53, you are all in.
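For what it's worth, the slave side of such a setup is just a zone transfer. A minimal sketch, assuming dnspython is installed; the master nameserver and zone names are placeholders, and the master has to allow AXFR from your IP:

  # Pull a full copy of the zone from the master via AXFR, the way a
  # secondary provider would (placeholder names, illustration only).
  import dns.query
  import dns.zone

  zone = dns.zone.from_xfr(dns.query.xfr("ns1.master.example.net", "example.com"))

  for name, rdataset in zone.iterate_rdatasets():
      print(name, rdataset)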

Luckily for Route53 users, Route53's DNS surface is really large, and there is a really good chance that not even this attack could hurt it.


AXFR isn't the only way to sync records between providers. You just need a tool that speaks to the APIs of each provider and can sync between them that way. Heck, I had syncing in place at a startup between Route 53, DNS Made Easy, a pair of TinyDNS servers, and a git repo (which was our historical backup of changes) years ago. It was 300 lines of Python and 100 lines of shell. Albeit, we only had a few dozen or so records to manage, but this isn't rocket science.
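For illustration, a rough sketch of that kind of glue, assuming boto3 for the Route 53 side; the source-provider fetch and the hosted zone ID are placeholders, not any particular provider's API:

  # Sync records from a primary provider into Route 53 via the API.
  import boto3

  def fetch_source_records():
      # Placeholder: pull records from the primary provider's API
      # (Dyn, DNS Made Easy, a git repo of zone files, etc.).
      return [
          {"Name": "www.example.com.", "Type": "A", "TTL": 300,
           "ResourceRecords": [{"Value": "192.0.2.10"}]},
      ]

  def push_to_route53(zone_id, records):
      r53 = boto3.client("route53")
      changes = [{"Action": "UPSERT", "ResourceRecordSet": r} for r in records]
      r53.change_resource_record_sets(
          HostedZoneId=zone_id,
          ChangeBatch={"Comment": "sync from primary provider", "Changes": changes},
      )

  if __name__ == "__main__":
      push_to_route53("ZEXAMPLE123", fetch_source_records())  # hypothetical zone ID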

Aside: I came out of college as a sys admin with a CS degree and writing tools like this was par for the course. If devops folks aren't writing tools like this today, what are they doing?


Honestly: I think they are spending most of their time moving existing working infrastructure into containerized infrastructure and figuring out how to deploy their blog on k8s. They are working on learning libraries that abstract abstractions.


To be fair, Route 53 will split over other providers, so you should be good in theory. But yeah, if something specifically targets Route 53 then that could be the same problem.

At least that's my understanding anyway.


So we now run a split view with DYN and AWS. My biggest issue with AWS is, again, that they are a large attack surface, but also that they don't really play super nice with others and there's no DNSSEC.

We are currently evaluating the Netflix Denominator tool to spread DNS and sync our alternate providers.

My biggest problem during this outage was that I could not log in to my registrar and make changes to DNS directly - I had to log in to DYN and ADD Route 53; it was impossible to remove DYN completely. And that's how we ended up with a split view.

NOW if anyone can tell me of a competitor for the Traffic Director product that works on Port 25 I'll be happy to consider a migration. Cloudflare has something in the works, but I’d really just like a DNS provider with a virtual load balancer that can handle my 250qps at a reasonable price.


You could build your own DNS setup to replace Traffic Director: based upon the recursive server IP hitting you, send back responses that are closest to the user.

There is an RFC draft to pass information about the client subnet: https://tools.ietf.org/html/draft-ietf-dnsop-edns-client-sub...

Google DNS uses this to tell your name servers more about where the client is located, which then allows you to direct them to the nearest server.
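As a toy illustration of the routing logic only (not a full DNS server; the prefixes and PoP addresses below are made up):

  # Given the EDNS Client Subnet (or the resolver's IP), answer with the
  # nearest point of presence. Example values only.
  import ipaddress

  POP_BY_PREFIX = {
      ipaddress.ip_network("203.0.113.0/24"): "198.51.100.10",  # e.g. APAC PoP
      ipaddress.ip_network("192.0.2.0/24"):   "198.51.100.20",  # e.g. US-East PoP
  }
  DEFAULT_POP = "198.51.100.30"

  def pick_pop(client_subnet: str) -> str:
      client = ipaddress.ip_network(client_subnet, strict=False)
      for prefix, pop in POP_BY_PREFIX.items():
          if client.subnet_of(prefix):
              return pop
      return DEFAULT_POP

  print(pick_pop("203.0.113.55/32"))  # -> the APAC PoP address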


Minor correction w.r.t. Netflix: netflix.com itself is on Route53 but parts of the CDN appear to still be 100% on Dyn:

  $ dig +short ns nflxvideo.net
  ns1.p19.dynect.net.
  ns4.p19.dynect.net.
  ns2.p19.dynect.net.
  ns3.p19.dynect.net.


We use CloudFlare and after this my boss said "set us up with secondary DNS somewhere." Unfortunately, CloudFlare doesn't support being a primary DNS provider with NOTIFY messages. They are designed to handle the DDoS for us by proxying content. It's an interesting problem and I don't know whether to push back to CloudFlare or my boss. Anybody else running secondary DNS after this with CloudFlare?


(I work for CF)

Indeed, if you want the HTTP/HTTPS traffic to go through Cloudflare, the DNS must go through Cloudflare. There are generally two ways to set it up:

a) You move your DNS auth to Cloudflare and allow it to manage it.

b) You keep managing your domain yourself, and CNAME to Cloudflare. See: https://support.cloudflare.com/hc/en-us/articles/200168706-H...

What you should do depends on your setup and threat model. Do you fear DNS auth going down? Do you think your DNS will be a target? Do you use Cloudflare to hide your HTTP origin IP addresses?

For example, if you fear DNS auth going down, but you must use Cloudflare for HTTPS (say: for caching and SSL certs), then changing DNS off CF makes little sense. You already assume its stability by expecting it to work at the HTTP layer.

If you think you can be the target of a DNS attack, I'd say having multiple auth providers is unlikely to give you more mileage.

If you can afford disabling CF at the HTTP layer, exposing your HTTP origin IPs, and want to have two different DNS auth providers, fine, you can do the CNAME setup. But then you have three vendors to worry about, and problems with any of them can lead to trouble.


By the way, slightly off topic, but I was very frustrated with a Cloudflare sales guy who reached out to my customer during the outage and told him that we should switch to Cloudflare to be protected from DDoS.

It comes across a bit as gloating in the face of the attack on Dyn, and there's no reason to believe that Cloudflare's DNS would fare any better.


From the numbers that were published, it seems that Cloudflare would've probably handled the attack without outages. They have significantly more PoPs, especially in the regions that were attacked (Dyn has 2 in US-East and 8 in US, Cloudflare has 6 US-East and ~20 in US overall). I think it's unlikely that an attack of 1-2Tbps would've brought them down.

Answering DNS is not very costly, so if you have enough capacity to the servers, answering shouldn't be the bottleneck.

I agree that it's very bold to do that, but I'd trust them with handling DDOS more than most other providers.


You don't need to use CloudFlare DNS to route HTTP(S) to them. They would just strongly prefer that you did.


* spotify.com: all internal nameservers now

I don't know much about running nameservers, but moving to all internally hosted seems like an odd choice to me. Can anyone explain why that's a good move?


With only a modest simplification you can view security as ultimately just being a figure measured in dollars: "it costs an adversary $X to beat these countermeasures." Your goal in securing a system is not to push X to infinity, though that might be a reasonable goal (e.g. if you're a security researcher designing new crypto primitives). Instead your goal in engineering your company's security consists in evaluating the value $V of what you're securing, and then raising X until X > V. There are uncertainties in measuring X and V and in how attackers will view these tradeoffs and so forth, but it's nothing you can't account for by building in an engineering tolerance like X > 2V. The basic story remains.
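As a toy illustration of that rule (all numbers arbitrary):

  # Keep raising the attacker's cost X until it clears the value V being
  # protected, with some engineering margin.
  def secure_enough(attack_cost_x: float, asset_value_v: float, margin: float = 2.0) -> bool:
      return attack_cost_x > margin * asset_value_v

  print(secure_enough(attack_cost_x=500_000, asset_value_v=200_000))  # True: X > 2V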

Spotify simultaneously has large resources and offers a non-essential infrastructure service (music to listen to while you're doing something else). The V gained in DoSing them is very small. They got attacked anyway because they shared infrastructure with other companies, which pools the V together to create something much larger. Some attacker saw a case where V >> X and attacked it to great success until Dyn was able to bring up X again. During the interim, Spotify was down despite having V << X.

In short: Spotify probably can't do DNS better than Dyn, but they can do DNS better than the sort of people who have reason to attack them (presumably trolls, maybe some future hacktivist who doesn't like some business decisions they make, unscrupulous competitors). This attack was a wake-up call for them, "oh, if we're pooling with these other folks then we'll become targets of larger hacktivist attacks and state actors, who are not directly targeting us per se." Those attackers could presumably still take out Spotify's home-rolled DNS, but they have no real motivation to target Spotify in particular any more.


It lowers the attack surface. With companies like Dyn, they are affected even when someone is targeting other sites, while with internal DNS servers that are used only by themselves, they will be down only if someone is attacking them directly.

If someone is targeting them directly it doesn't matter much that DNS is up and running, their site is still down.


DNS is a rather simple service that was always meant to be run internally.

The question you should ask is why these companies used an external DNS provider in the first place.


So they don't waste cycles on something not part of their core business or competency? Pretty standard reasons to pay someone to solve a problem. I think what this really showed is that Dyn was not as competent at mitigation as people thought.


The implication of incompetence isn't really fair here. This attack was fairly unique, in that it had a sufficient quantity to be a quality of its own. It's unclear whether any DNS provider could have survived it, except by luck of not being chosen as the target.


>This attack was fairly unique, in that it had a sufficient quantity to be a quality of its own.

isn't that basically the definition of DoS?


Yeah, that's exactly why I asked. Seems like one of those things where it makes sense to me to outsource, but I don't really know if I'm right on that.


[I'll try to make it simple, ignoring edge cases and real world complexity]

You can't outsource DNS. It's one of the critical pieces of networking that must be present in every infrastructure.

The common DNS server is BIND. It's been there for 30 years; it's well known, manageable, and well understood. Sysadmins have to know it and manage it. It's especially critical for worldwide multi-site tech organizations.

There is no need for anything else. BIND can do everything and is the most flexible. Some of the alternatives lack some or most of the features (e.g. some types of DNS records).

You should assume that any organization is running its own DNS servers (ignoring the edge cases).

---

In practice, for large-scale operations, the DNS tree gets very complex.

What the websites changed was only the public DNS server for reddit.com or airbnb.com. That's only the tip of the iceberg. There is likely a very complex DNS setup underneath, including public domains, private domains, special internal domains, CDNs, per-datacenter and per-continent zones, etc., which could involve 10 different DNS services.

Who serves the top-level public domain is a detail. We should assume that the companies put in place whatever they could in little time to fix the ongoing issue.


> You can't outsource DNS. It's one of the critical pieces of networking that must be present in every infrastructure.

This is simply not true. For resolvers, you can use your ISP's DNS servers or a public resolver like Google DNS, OpenDNS, etc. For authoritative DNS there are plenty of hosted (outsourced) offerings like Route53, Dyn, Google Cloud DNS, etc.

This may not work for sufficiently complex organizations, but in my ~20 person SaaS company we have zero DNS servers and it works just fine. We use our ISP's resolvers for client lookups, and Google Cloud DNS for authoritative DNS.


As I said, it's a simplification. I really don't (and can't) get into a long explanation here about how to run a complex DNS infrastructure spanning multiple continents and datacenters ^^

Thing is, you have to run your own DNS from the moment you want your own DNS names. Good for you if a simple external DNS service is enough, but a single 20-person office is not comparable to what the websites mentioned are operating.


The approximate difficulty of running a DNS server is the same as running a static HTML web server. Low difficulty.


Until the new self-hosted DNS server becomes the victim of a DDoS attack.


If you think nobody will have much motive to run a very sophisticated / expensive attack on you specifically (e.g. Spotify), then self-hosted is great. You won't be taken out as collateral damage when they're targeting someone else.


If Spotify's networks are all down, what good would a functioning DNS do?

(And I know it's not that simple, but that's probably the basic reasoning behind it.)


> If Spotify's networks are all down, what good would a functioning DNS do?

Email would still work. You can't receive email if the sending server can't look up your MX records. Since spotify.com uses Google Apps, their email would survive a total network outage if they used third-party DNS.
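You can check that dependency yourself; a quick sketch assuming dnspython 2.x is installed:

  # If this MX lookup fails, sending servers can't deliver mail to the domain.
  import dns.resolver

  for rdata in dns.resolver.resolve("spotify.com", "MX"):
      print(rdata.preference, rdata.exchange)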


Spotify clients can just go to the IP address?


Perhaps their "internal" nameservers are just vanity nameservers hosted by someone else.


I wondered that too, but I whois'd the nameserver IPs and they're all owned by Spotify.


Zone data that I have access to shows that they lost roughly 1,500 domains on the 23rd, 250 on the 24th, and 155 on the 25th.

On the previous three Saturdays they lost between 40 and 60 domains.


Twitter is making many changes including adding a secondary provider - the work is ongoing but should be out soon.


I think Heroku is switching to two DNS companies after this as well.

https://status.heroku.com/incidents/965

"This outage exposed a critical weakness in our DNS hosting configuration. We are taking immediate steps to add additional DNS providers. This should allow us to avoid impact in the future, provided that at least one of our DNS providers is operational."


What happens if you have a DDOS on Route53? I'm sure they can handle the attack, but do you have to pay for the requests? Or are there clauses that they drop the fees if the requests were malicious? If not, the financial risk could easily outweigh the benefits of availability for smaller companies.


I'm in the process of looking for a secondary DNS provider for a client, but because they rely heavily on geolocation load balancing it's not simple... I wonder if anyone has other recommendations besides UltraDNS for a good slave?


Alright, what the fuck?

Shame on reddit, github, and netflix for learning literally fuck-all from this.


One thing I don't understand about this attack:

Virtually everyone is behind NAT these days, often multiple layers of NAT. So how does the botnet manage to telnet or ssh into these set top boxes or lightbulbs or whatever? When I want to SSH into my home computer I have to go through elaborate maneuvers to get it to work.


I believe UPnP is commonly used by many of the IoT devices that end up in botnets.


As I understand it, UPnP allows a device to negotiate a port mapping with a NAT box. But the developer of the IoT device would have to specifically want to map port 22 or 23. It's not something that would happen automatically.

So are you saying that these IoT device makers not only hard-coded root usernames and passwords into their devices, but deliberately set up UPnP mappings to those ports?

That looks malicious rather than negligent. Am I missing something?


Your understanding is the same as mine. I don't think that's necessarily malicious, though. I also don't think it would have to be 22/23 - do we know that SSH or Telnet was the attack vector? Thinking about general IoT devices, even if it were SSH this time, it could well be a web UI with RCE next time.


A large number of the devices were manufactured by XiongMai Technologies. Judge for yourself whether the backdoor was put in place intentionally. https://krebsonsecurity.com/2016/10/iot-device-maker-vows-pr...


I've heard a lot of routers accidentally or negligently allow UPnP on the WAN interface.


That's pretty horrifying, but if true it answers my question about how this can affect so many devices. It sounds like, whether because they were the directly hacked devices or because of this UPnP flaw, home routers are the main villain here, not the smart lightbulbs and webcams the news media seemed to want to blame (though the latter may well have been secondary villains).


I'm surprised there are UPnP implementations that even allow mapping to specific, low-numbered ports.

(Well, not surprised as much as disappointed.)


And so? There is more than one way to spread a botnet.

One particular method is DNS rebinding: they attack your web browser, then your browser passes the attack to the device inside the network.

Another means is 'the weakest link': an insecure, compromised device (even if it's just a user account on that device) scans your local network behind the firewall and passes information to a command-and-control server, which then downloads and passes exploits to the devices it discovers on your network.


So I may be flat-out wrong about this, but I believe the clients, once infected, phone home, which (mostly) removes the need to worry about NAT, as the router will forward the server's connections to the correct machine once it sees that a prior outgoing connection has been made.


I think the question is about the initial infection.


I commented this in another post, but at least for the affected surveillance cameras and recorders, the firmware for these devices is built on BusyBox. The manufacturers either opened up telnet or left telnet open in these firmwares until about 3 years ago, when it started getting exploited.

The common characteristics of the hacked surveillance devices are that they are running older firmware with an exploitable telnet feature and that the default credentials are still intact. There are a few vectors (UPnP, the HTTP API, directory traversal) that can be used to bypass authorization. Throw in a global directory of these devices (shodan.io) and you have the ability to search for devices with specific firmwares, run every attack vector to compromise these systems, and, once connected, have them do _whatever_you_want_.


There are still a lot of those on the public network, either via UPnP or directly exposed. Shodan (https://www.shodan.io/search?query=port%3A23) says: Total results: 3,957,271.

And that's only on the default port.


> Virtually everyone is behind NAT these days, often multiple layers of NAT. So how does the botnet manage to telnet or ssh into these set top boxes or lightbulbs or whatever?

An infected Windows desktop can scan the local network and infect IoT devices.


With more uptake of IPv6, do people still use NAT? I thought NAT was just for IPv4?


There are still many corporations and ISPs that haven't configured or don't support IPv6 yet.


A good chunk of the infected devices are shitty NAT routers that had a backdoor...


Routers shipping by default with permissive UPnP settings?


Probably UPnP?


Dyn aside, xTbps DDOS is the new norm.

Expect more of this in the near future; single-source infrastructure is becoming a huge liability (not that it wasn't before). I wonder what impact it will have on SLAs when cloud service providers are taken down - will they honor their SLAs, or inject DDoS clauses into them to shield themselves? You won't see many standing up to multi-Tbps attacks, at least for the moment.


Doesn't that also provide a huge incentive for cloud offerings to sell DDOS-resistance products and services? Isn't that a huge market already with gigantic margins?


OVH throws their DDoS mitigation system in for free when you host with them.


How does their system work and how effective is it?


It's built on FPGA and it's very effective.

https://www.ovh.com/ca/en/anti-ddos/


Game servers that are hosted on OVH get DDOSed daily and they drop like flies.

I and a lot of friends have had servers killed, both when renting a VPS/dedicated server and when renting a dedicated "game server".

And all in all with considerably smaller botnets, like the ones you can rent for a few dollars per hour.

If you are running a public server, you learn quite quickly these days that if you permaban a cheater or just some annoying kid, you should expect to be DDoSed.


Not sure DDoS clauses are that unreasonable, especially when attackers might be hitting your DNS provider because you're the real target.


Can anyone explain why they keep referring to this as a complex attack? From the article it seems to be a simple volumetric attack. They mention that it uses UDP and TCP on port 53 - nothing complex about that...

Am I missing something here? It wasn't an L7 attack (or was it?), so why keep referring to it as complex?


I think the market's expectation is that a DNS provider is prepared for a DDoS of any size, but not necessarily any level of complexity, so that's a lot of incentive to talk up the complexity of the attack.

What's described in this incident report is totally within the capabilities of a single individual with public knowledge, though. If they could have proven otherwise, they probably would have (unless that somehow conflicted with their criminal investigation).


Much of the complexity is likely in building and managing such a large botnet.

There also aren't a lot of details here on the exact nature of the traffic. They say it was hard to distinguish between legitimate traffic and the malicious traffic, so the botnet is at least rotating its requests through lists of customers hosted with them. That isn't complex, but it is forward-thinking: if the botnet were making non-stop requests for just a few domains, that would be a strong signal to start filtering traffic, first internally, then by pushing ISPs to block it upstream.


It's the alternative to "state actor haxored us". Marketing BS.


There are two different things here. Please correct me if I'm wrong on this:

1. Device backdoors left port 23 (telnet) open, which was used to take over the IoT devices.

2. The IoT devices then attacked through port 53 (DNS).


I wonder what 1.2 Tbps would do if pointed at an ELB or an AZ in AWS. Someone must have tried at some point, but I've never heard of any widespread outages caused by DDoS on AWS. Is it just that the bandwidth available to an AWS datacenter is that much bigger than what Dyn has?


It's safe to assume that 1.2Tbps isn't a big deal for Google/MSFT/AWS/Cloudflare/Akamai/Yahoo/Verizon/etc.

DNS is normally a low-bandwidth protocol so if you only provide DNS services, needing to purchase 1000x your normal bandwidth to handle these bursts would be miserable. If a DNS provider were also providing video services (Vimeo/Twitch/etc), then a 1.2Tbps increase in traffic could be easily absorbed.


You would think that, yet Akamai dropped Krebs over half that.


I've read a few comments elsewhere that the attack on the 21st may have been an element of a state-actor MITM attack. What's the expert opinion?

[p.s.] The question is precisely this: is it possible that a DDoS attack on DNS can be used to enable or mask a MITM attack?


"State actor" today is an abused term, because it helps the victim look less bad, the government will back it up, because it paves way for new regulations and there's no way to prove one way or the other. Especially when it comes to DDoS.


A new article was just published saying that it was done by script kiddies from hackforums.


This is insane how big these things are now: "There have been some reports of a magnitude in the 1.2 Tbps range; at this time we are unable to verify that claim."


They mention 100k participating devices/IPs - that would mean an average of 12 Mbps upload. Sounds high, but plausible? [ed: I actually think it's kind of sad that most users aren't on symmetric gigabit links yet... But in this context I guess it's a blessing of sorts... ]


"They mention 100k participating devices/IPs - that would mean an average of 12 Mbps upload"

It doesn't have to be. An attack can be highly asymmetric, such as an amplification/reflection attack. I am not saying this is what happened in the Dyn case, but rather addressing your comment about the average. Just an example: have hosts in the botnet send queries to non-Dyn open resolvers on the internet, and in those queries spoof the source IPs of Dyn's DNS servers. Now all of those open resolvers on the internet start sending return traffic to Dyn. If those spoofed requests ask for EDNS0, the responses could be up to 4K. Compare that to the size of the query packet and you have a lot of leverage.
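Back-of-the-envelope, using the numbers above (the ~60-byte query size is an assumption):

  # Rough amplification factor for the reflection scenario described above.
  query_bytes = 60        # approximate size of a small spoofed DNS query
  response_bytes = 4096   # EDNS0 response size mentioned above

  print(f"amplification ~ {response_bytes / query_bytes:.0f}x")  # ~68x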


We already know Mirai has been able to reach over 1 Tbps, and we know Mirai was at least one of the cannons hitting Dyn. So 1.2 Tbps is definitely plausible. Mirai has decreased in size to a degree due to more public awareness, but it's still massive.


Way more than 100k bots lol.


In the case of a DNS amplification attack (like this one), a malicious host can cause much more traffic than it can itself supply.


Can anyone recommend a good place to learn more about this field of network security? I'd really appreciate it!


maybe this is something for you -> https://cybersecuritybase.github.io/introduction/


How exactly was the attack mitigated when, in the same sentence, they confess it STOPPED _on its own_?


Mitigated means "make less severe, serious, or painful.". In other words, they took steps that stopped the attack from effecting legitimate traffic. Once the attacker sees that the attack is no longer effective, the attack stops. After all, it's a denial of service attack. If the service is no longer being denied, then what is the point of continuing the attack.


To cost the company significant amounts of money. Bandwidth in those volumes isn't cheap.


I think a lot of botnet operators are also worried about their bots being "discovered" during sustained bandwidth attacks. Your average home user isn't going to notice their internet being weirdly slow for a couple of hours every few weeks, but a sustained DDoS will have people noticing and potentially discovering / cleaning their devices. ISPs might also take notice of the sustained outbound traffic volume, identify a pattern and contact customers.


Wouldn't it be costing the attacker some amount of money running the C&C server?

I'd think at some point if the attacker isn't getting the same results from the attack he'll wait for them to scale back then strike again.

Is my thinking right here?


No. Each individual attacker is actually an infected "bot" of some kind - in this case, Internet of Things devices all over the world. Your "smart TV" or "smart refrigerator" all contributed to this, over YOUR bandwidth.

The average broadband user has at least 10 Mbps up. If your smart appliances all started sending at least 10 Mbps up... you only need a million smart TVs to start causing damage.

I'm guessing there's 100M smart TVs in the USA? Each sending 10 Mb/s up? There's 1GB/s of traffic. Multiply this by the next 50 nations who have smart TVs and bandwidth to spare...

Then make it multiple devices in a home. Then make it every smart device on the planet, using SOME bandwidth... it gets painful fast, and it's free for the attackers, since the poor sap with a hacked smart TV is doing the work.


> Your "smart TV" or "Smart refridgerator" all contributed to this, over YOUR bandwdith.

Makes me wonder when we'll start seeing reports of people getting hit with data overage charges because of hacked smart devices. I only have 4 Mbps up, but if an infected device used all of that 24/7 it would chew through my Comcast 1TB monthly cap.


That math doesn't seem right. 100M devices sending 10Mbps is 1,000,000,000Mbps = 1,000Tbps, not 1GB/s.

For an attack of 1.2Tbps, you only need 120k devices at 10Mbps.
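Spelled out, with the same numbers as above:

  # The arithmetic behind the correction.
  devices = 100_000_000          # 100M devices
  per_device_mbps = 10
  total_tbps = devices * per_device_mbps / 1_000_000   # Mbps -> Tbps
  print(total_tbps)              # 1000.0 Tbps, not 1 GB/s

  attack_tbps = 1.2
  print(attack_tbps * 1_000_000 / per_device_mbps)     # 120000 devices needed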


The attacker is still spending resources giving out instructions to the bot army.

Unless they are using a setup like @tomschlick mentioned or some P2P thing.


Sending out a message to 1 million compromised devices takes almost nothing. Also, if you've compromised 1 million devices, chances are you also compromised a linux server on a fast host for command and control.


> Wouldn't it be costing the attacker some amount of money running the C&C server?

Not necessarily. They could configure the C&C to run off of something like pastebin/gist where they just input a list of IPs to attack and the botnet checks a specific url every X minutes for new instructions.


And even if it's a more complicated scheme, there's nothing to suggest the attacker has to run C&C on a host for which he's paying. I scrubbed a lot of IRC daemons off legacy shared-hosting boxes, back in my admin days.


That's very cool. Wasn't aware of this technique.


Exactly my point. There was no 'DNS is slow but finally works now' moment; there was a total DNS blackout until the attack stopped.


mit·i·gate (verb): make less severe, serious, or painful.

Don't think they said they solved the problem. They just made it less bad. Some DNS queries got through, but not all.


Not smart of them to have a product guy write this. It would have been better delivered (even if ghost-written) by someone in engineering. Their customers are not necessarily people who want to hear technical claims from Product.


There is a reason for this: engineering had nothing positive to say. Even this announcement admits the attack stopped when it stopped; there wasn't anything technical they did that fixed the situation, unless by mitigation they meant paying a ransom and that was why the attack stopped on its own.


Any word anywhere on motive? Or are we still left to speculate?


OT: Is my eyesight going away, or is the font unreadably light-coloured?


The color is #555, so it's not terrible.

I think the font-weight of 300 is the bigger culprit here. If you pop open the Chrome dev tools and change font-weight to normal, everything becomes much easier to read.


No, the font weight and color are horrible.


My company only just noticed our font renders terribly for one of our newer web apps - all the developers had Macs and didn't notice an egregious difference when rendered on lower quality displays. We changed it immediately. I feel empathy for this particular development oversight among small teams.


Yeah, a content font shouldn't be at 300. It's way too thin at that font size.


Also OT: Anyone else think the Dyn logo is reminiscent of the old Symantec logo? I actually had to go look to see if there was a relationship there (also discovering that the logo was considerably different than I remembered).


Not terrible. Just mostly terrible.

For Chrome there is a "High Contrast" add-in to normalize shiat font color selection.


> We began to see elevated bandwidth against our Managed DNS platform in the Asia Pacific, South America, Eastern Europe, and US-West regions that presented in a way typically associated with a DDoS attack. As we initiated our incident response protocols, the attack vector abruptly changed, honing in on our points of presence in the US-East region

Interesting. Much of the reporting on the day of suggested that the attack was felt exclusively in the United States, but this says otherwise.


Sounds like someone found a way to target one server with all devices worldwide. Shouldn't that be impossible with anycast? Or did they reveal an IP address that was just referring to US-East?



