
> but using scrypt or bcrypt server-side at effective difficulty levels will either destroy your L1/L2 cache or exceed your CPU budget.

Perhaps this is the real issue: lack of sufficient budget for CPUs that can reliably get the job done is a reflection on the priority organizations place on information security.




For an attacker, a great way to cripple an organization that uses scrypt/bcrypt is simply to attack logins. If each login takes 1 second of work, then it's easy to overwhelm any system using a host of bots on distributed hosting providers. Say an attacker can hit with a mere 1 req/s from 100 hosts (very easy with AWS/DigitalOcean/Hetzner/etc.); that's 100 CPU-seconds of work per second just for logins.

Comparatively, if they hit with 100 req/s for normal front-end data, caches can easily handle that. So really, adding these types of login checks to any publicly available login page ends up creating an on-demand DDOS target.
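The back-of-the-envelope math here is simple enough to sketch (a hypothetical helper, not anyone's actual capacity model):

```python
def attack_cpu_load(hosts: int, req_per_sec_per_host: float, work_factor_secs: float) -> float:
    """CPU-seconds of hashing work an attack generates per wall-clock second."""
    return hosts * req_per_sec_per_host * work_factor_secs

# 100 hosts at 1 req/s against a 1-second work factor:
print(attack_cpu_load(100, 1.0, 1.0))  # → 100.0 CPU-seconds of work every second
```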

From the password-cracking side, the asymmetry comes from the fact that an organization has to have X amount of server resources to service Y number of requests, but an attacker can utilize the same X amount of server resources to do just one thing: break the exact password they are looking for. This is further exacerbated by the fact that, for an organization, those X server resources generally also need to do more than just service logins (like actually serve content or API data).
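For reference, Python's standard library exposes scrypt directly, so you can see how the work factor dials per-login cost up for defender and attacker alike (a rough timing sketch; the parameters are illustrative, and real code needs a random per-user salt):

```python
import hashlib
import time

def timed_scrypt(password: bytes, n: int) -> float:
    """Hash once with scrypt at CPU/memory cost factor n; return elapsed seconds."""
    salt = b'demo-only-fixed-salt'  # real systems must use a random per-user salt
    start = time.perf_counter()
    hashlib.scrypt(password, salt=salt, n=n, r=8, p=1, maxmem=2**26)
    return time.perf_counter() - start

# Doubling n roughly doubles the cost, for the server and the cracker alike:
for n in (2**12, 2**13, 2**14):
    print(f"n={n}: {timed_scrypt(b'hunter2', n):.3f}s")
```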


If someone can't easily figure out how to create per-IP login attempt throttles, I suspect their application has far more gaping holes in it.
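A minimal per-IP throttle really is only a few lines; this is a hypothetical fixed-window sketch (in-memory only, so it wouldn't survive restarts or span multiple app servers):

```python
import time
from collections import defaultdict

class LoginThrottle:
    """Allow at most max_attempts login attempts per IP per window_secs."""

    def __init__(self, max_attempts=5, window_secs=3600.0):
        self.max_attempts = max_attempts
        self.window_secs = window_secs
        self._attempts = defaultdict(list)  # ip -> timestamps of recent attempts

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        # Drop attempts that have aged out of the window, then check the count.
        recent = [t for t in self._attempts[ip] if now - t < self.window_secs]
        allowed = len(recent) < self.max_attempts
        if allowed:
            recent.append(now)
        self._attempts[ip] = recent
        return allowed

throttle = LoginThrottle(max_attempts=5, window_secs=3600)
print([throttle.allow("203.0.113.7", now=0.0) for _ in range(6)])
# → [True, True, True, True, True, False]
```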


> a mere 1 req/s from 100 hosts

I specifically set a request number that would be below most thresholds.

That said, a one-time request from 100 hosts would still use 100 CPU-seconds of work. Other than preemptively blocking hosts (such as all of AWS), there is no way a "per-IP attempt throttle" is going to catch a single request from each of 100 different hosts.

If you had a server maxed out with 8 Intel E7-8870V2 CPUs (15 cores × 8 CPUs = 120 cores, $32k in CPUs alone), set for a normal "1 second" work factor, then someone just DoS'd you for most of a second. On a more reasonable 8-core server, that DoS would last 12.5 seconds (actual CPU time, not counting the fact that the requests are stacked on top of each other). And on a dual-core system that's almost 2 minutes.

If there were any other part of a website that could be DoS'd for so long with so few requests, most people would also suggest the application had "far more gaping holes in it." Moxie's point above is that the advice presented is to just put bcrypt/scrypt/PBKDF2 in place, with no advice at all on how to deal with these issues that come up, or even that they exist. Thus systems end up misconfigured, or work factors get relaxed to the point where only a false sense of security is gained.


> I specifically set a request number that would be below most thresholds.

No legitimate user will be logging in once per second. If you're specifically throttling logins and don't set it higher, you're so incompetent you shouldn't be writing production code in the first place.

> That said, a one-time request from 100 hosts would still use 100 CPU-seconds of work. Other than preemptively blocking hosts (such as all of AWS), there is no way that a "per-ip attempt throttle" is going to catch a single request from 100 different hosts.

So you've successfully DDoS'd an application for the few seconds that it takes a couple dozen cores to chew through 100 CPU seconds. Um, congratulations? I don't think there's a reward for "most pathetic DDoS attempt in history", but it should go to this.

> And on a dual core system that's almost 2 minutes.

You're very confused. When we say "1 CPU second", we mean one second on a single CPU core. 100 CPU seconds on a dual-core system takes ~50 real seconds. Your hypothetical 120-core server would theoretically be capable of processing nearly 120 logins per second at a 1-second work factor. bcrypt et al. are not multi-threaded.
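The wall-clock arithmetic, spelled out (assuming each hash fully occupies one core, which matches how bcrypt/scrypt behave, and perfect load spreading):

```python
def ddos_duration(cpu_seconds, cores):
    """Wall-clock seconds for `cores` cores to clear `cpu_seconds`
    of single-threaded hashing work."""
    return cpu_seconds / cores

# 100 requests at a 1-CPU-second work factor:
print(ddos_duration(100, 2))    # → 50.0 (dual-core: ~50s of real time, not 2 minutes)
print(ddos_duration(100, 120))  # 120-core box: well under a second
```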


A site I know allows 5 login attempts per hour. That seems plenty for legitimate purposes. I've never heard anyone complain.


But that limit doesn't help if they keep hitting the service with a list of known emails and bogus passwords, 5 times per email per host.


Sorry for late reply. It was 5 attempts per hour per ip. Not per email.


There are botnets with hundreds of thousands of hosts in them. There are proxies and NATs with hundreds or thousands of users behind them. How do you balance those out?


> There are botnets with hundreds of thousands of hosts in them.

Which have a wide array of other mechanisms by which to DDoS your application. If that level of force is being directed at you, you need professional DDoS mitigation assistance. The CPU time cost of your login mechanism is immaterial.

> There are proxies and NATs with hundreds or thousands of users behind them.

You do not need to throttle an IP if it is not the source of an attack. But it remains inevitable that a serious DDoS will sometimes break legitimate users, even with intelligent mitigation strategies. Welcome to the real world, it ain't pretty!


> Which have a wide array of other mechanisms by which to DDoS your application.

Why use any of those other mechanisms, which might require a few thousand hosts, when a method exists where a hundred hosts can do just as much damage by getting the server to punch itself in the face?


Because a method does not exist where a hundred hosts can do just as much damage. It is utterly trivial to detect and block anomalous login activity from 100 hosts.



