What would legislation look like for DDoS on an ISP level?
You make DDoS mitigation sound easy, but most of what I would call successful attacks use traffic that looks legitimate at the ISP level and is relatively low bandwidth. Attackers succeed by locking up some aspect of their victim's architecture.
Most websites are not prepared for large fluctuations relative to their normal traffic, even though those spikes look like a drop in the bucket at the ISP level. I don't blame websites for this, because mitigation at that level can be expensive.
I think legislation for something like this would be a mess, because the problem isn't simple and it already has a technical solution.
The vast majority of DDoS these days is due to IP spoofing, which is only possible when networks don't do ingress filtering. Ingress filtering is a simple and 100% effective countermeasure against IP spoofing; the problem is that everyone on the internet needs to do it...
And something like 90%+ of IP endpoints already have it; that last remaining percentage is a real bitch though.
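For the curious, here's a rough sketch in Python of what ingress filtering (BCP38) amounts to; the prefix and addresses are made-up examples, and real deployments do this in router config rather than application code. The edge only forwards packets whose source address falls inside the prefix it actually delegated to that customer.

```python
# Minimal sketch of the BCP38 / ingress-filtering idea (hypothetical prefix
# and addresses): drop any packet claiming a source address that was never
# assigned downstream of this access port.
from ipaddress import ip_address, ip_network

CUSTOMER_PREFIX = ip_network("203.0.113.0/24")  # prefix delegated to this port

def should_forward(src_ip: str) -> bool:
    """Forward only if the source address belongs to the customer's prefix."""
    return ip_address(src_ip) in CUSTOMER_PREFIX

print(should_forward("203.0.113.42"))  # True  - legitimate customer address
print(should_forward("198.51.100.7"))  # False - spoofed source, dropped at the edge
```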
>The vast majority of DDoS these days is due to IP spoofing
Um, no. Amplification attacks and direct attacks from compromised hosts, such as IoT devices, make up the majority of the traffic. BCP38 at the edge also doesn't protect against a device spoofing the address of someone else in its own subnet/routing block.
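To illustrate that last point with the same toy setup as above (addresses are invented): a prefix-level filter at the edge still happily forwards a packet whose spoofed source is another host inside the same block.

```python
# Same hypothetical prefix as before: a prefix-level BCP38 check can't tell
# the attacker's real address apart from a neighbour's address in the same /24,
# so intra-block spoofing passes straight through.
from ipaddress import ip_address, ip_network

CUSTOMER_PREFIX = ip_network("203.0.113.0/24")

attacker_real_ip = "203.0.113.42"
neighbour_ip = "203.0.113.99"  # another host in the same block

print(ip_address(attacker_real_ip) in CUSTOMER_PREFIX)  # True
print(ip_address(neighbour_ip) in CUSTOMER_PREFIX)      # True - spoofable
```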
How often do those kinds of attacks occur? I'm actually curious. From my research, it's more common for DDoS traffic to be inter-subnet rather than intra-subnet. If that's the case, why not make Comcast prevent spoofing within its own network? They're in a good position to know which IPs are legit.
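Conceptually that's per-port source validation (similar in spirit to what switch vendors call IP source guard). A hedged sketch, with invented port names and addresses: the access device binds each port to the exact address the ISP assigned it, rather than just the enclosing prefix.

```python
# Sketch of strict per-customer source validation (hypothetical bindings):
# each access port only gets to send from the address(es) the ISP handed it,
# so even a neighbour's address in the same block can't be spoofed.
from ipaddress import ip_address

PORT_BINDINGS = {
    "port-17": {ip_address("203.0.113.42")},  # lease assigned to this customer
    "port-18": {ip_address("203.0.113.99")},
}

def should_forward(port: str, src_ip: str) -> bool:
    """Forward only if the source matches an address bound to that port."""
    return ip_address(src_ip) in PORT_BINDINGS.get(port, set())

print(should_forward("port-17", "203.0.113.42"))  # True  - matches its own lease
print(should_forward("port-17", "203.0.113.99"))  # False - neighbour's address, dropped
```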
If I'm correct, we're going to need this sooner rather than later. As IoT gets rolled out (I had to remove my new AC from the WiFi, since it was easier to let the installer hook it up than to explain why it's a bad idea), we need someone to be accountable for security. IoT manufacturers should be on the hook, but so too should the network admins.