Password Cracking with 8x Nvidia GTX 1080 Ti GPUs (servethehome.com)
207 points by EvgeniyZh on June 13, 2017 | 109 comments



In 2010 I built an 8-GPU machine[1] (4 dual-GPU AMD HD5970) and wrote an MD5 bruteforcer (then faster than hashcat), doing 28.6 billion password hashes/sec, then 33.1 billion after a software optimization:

http://blog.zorinaq.com/whitepixel-breaks-286-billion-passwo...

It's interesting to note that 6.5 years later a single GPU like the Nvidia 1080 Ti can match the whole 2010 machine (32 billion hashes/sec). This is a doubling of speed every ~2 years. Moore's Law is still alive and kicking (contrary to what many claim)!

[1] Incidentally posting this machine on HN is how I got pointed to Bitcoin thanks to the reply of a HN user :) https://news.ycombinator.com/item?id=2003888
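For the curious, here's the doubling-time arithmetic as a quick sketch (assuming one 1080 Ti roughly equals the eight GPU dies of the 2010 machine):

    import math

    per_device_speedup = 8   # one 1080 Ti ~ the whole 8-die 2010 machine
    years = 6.5
    print(years / math.log2(per_device_speedup))  # ~2.17 years per doubling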


> Moore's Law is still alive and kicking (contrary to what many claim)!

That statement is so often misunderstood, in multiple ways.

First off, Moore's Law isn't technically about performance increases. It's about the doubling of transistors on the same die area every 2 years. We still got that on CPUs until very recently, even though CPU performance stopped doubling every 2 years about 15 years ago. But now even the transistor-count doubling doesn't happen every 2 years for CPUs. If it did, we'd all have 32- or 64-core laptops by now. And even then the performance wouldn't be anywhere close to 64x (compared to a single core).

The second misunderstanding, which exists in your post, too, is that most people refer to Moore's Law dying when they talk about CPUs (especially if we discuss performance).

However, GPUs have indeed kept roughly doubling their performance every 2 years, also until very recently. Now it's more like a 30% improvement per year, or 70% every 2 years, which is still far better than CPUs.

And the reason is that GPU performance pretty much scales with the number of cores. If you have 4,000 CUDA cores in 2017 at 150W TDP and a $1,000 price, then in 2019 you'll have, say, 7,000 CUDA cores at the same 150W TDP and $1,000 price, and the performance will be 70% greater. It doesn't work like that for CPUs, and it hasn't for about 15 years.


On a GPU more transistors = more compute units = more performance. Hence my over-simplification of Moore's Law.

I strongly disagree that the rate of CPU perf improvement slowed down "15 years ago". What a laughable statement. You have to look beyond core count to gauge performance: microarchitectural improvements, new instruction sets (SSE, AVX), bigger caches, etc. certainly still help keep the pace. Have a look: https://www.hpcwire.com/2015/11/20/top500/ In particular: https://6lli539m39y3hpkelqsm3c2fg-wpengine.netdna-ssl.com/wp...

Also, perhaps you're confused by the fact that the average wattage of CPUs sold to consumers is dropping. If you project the expected performance of a 2017 CPU according to Moore's Law from a 60-70 watt NetBurst Pentium 4 of 2001, then you should look at today's 60-70 watt CPUs, not at the modest 10-20 watt parts that seem to be quite popular these days.


> And the reason for this is because GPUs pretty much scale with number of cores.

> It doesn't work like that for CPUs, and it hasn't for like 15 years.

It's more the workload than anything - CPUs scale very well with the right tasks. Nobody gives GPUs the "wrong tasks" that don't scale.


> Moore's Law isn't technically about ...

Moore's law is fuzzy and has been for a while. I doubt even Moore has the authority to say what it's about anymore.

When one person says "Moore's law", what are the chances the listener is thinking of the same thing as the speaker? For some words the listener hears exactly what the speaker intended, but I don't think that's true of these words anymore.

Performance growth is still exponential; the practical upshot of Moore's Law is still real.


> incidentally posting this machine on HN is how I got pointed to Bitcoin thanks to the reply of a HN user :)

Mining in 2010 with such a machine. That must have yielded a considerable ROI.


Someone in that thread calculated 50 coins per 26 minutes. Assuming they didn't sell any, that would be worth $250k/hour today...


I kick myself because in 2010 I had 2 HD5970s and was all set up for mining, then got bored and did something else.


> Moore's Law is still alive and kicking (contrary to what many claim)!

In GPUs, which is why tech that can take advantage of parallel processing such as Deep Learning, AR or coin mining has a bright future, whereas the rest is slowly falling into standardized oblivion with low-paying jobs.


A different way to look at it might be that there are fewer applications that can benefit from higher speed but can't be parallelized. A 2x speedup on your CPU gives you what, exactly? And at 64GB of RAM, I don't need 128GB unless I'm moving into HPC territory. Consumer PC hardware is good enough, and everything new people want to see has to do with flashy graphics in low-power small-form-factor devices (or people want decisions made for them in what turn out to be computationally intensive ways).

As you point out, graphics, deep learning, AR, and others can benefit from increased computational power. Therefore, we're seeing Moore's Law (which didn't start with Moore) appear where that increased compute ability can be most readily applied.

Meanwhile, when an exciting new technology appears, there is always a small group of people with the required expertise. With IT, young brains are able to learn these new skills incredibly quickly, so markets respond very quickly. From what I've seen, there's usually an overshooting effect where people think "there's a future in X" for longer than they should. Maybe school is to blame. But basically, you have a ton of new entrants to the market, and they fill demand, but demand levels off sooner or later, and that leads to the low-paying oblivion you're talking about.

I point all this out because there's nothing magic about deep learning, AR, etc. They may not be high paying niches for long. And already, the most-reliable/highest-paying opportunities seem to be working in small groups serving defined markets.


"At a current market value of about USD$0.22 per coin" wow


The regret I feel having given away all of my 100 bitcoins at that price point will forever haunt me.


You can get over that feeling very quickly by reminding yourself to focus on what you might be missing out on today that you don't want to regret in years from now.


Or, alternatively, by remembering the efficient market hypothesis. And going for ice cream with someone you like.


That would be good consolation, but the EMH doesn't really apply to emerging markets, due to the large information gap about the market. So if you were part of the informed population and thought it was a good investment but didn't follow through, it's still pretty hard not to feel like you made a costly mistake.


It's annoying when people try to dull the pain of regret with statements like that. You're right, but it's not helpful in practice.

Bitcoin made anyone who bought even just a few dollars of it in 2009-2010 and held rich, and turned those who bought $1000s of it in 2011 into millionaires. This may be the greatest gain of any tradeable instrument in the history of civilization. You can't find those kinds of returns anywhere else, short of investing in the next Uber or Facebook, but those were private ventures and, unlike Bitcoin, not open to the public.

Selling 1000 books on Amazon for $8, although entirely possible for ordinary people to do, nets about as much as the piddly 3 bitcoins that any fool could have bought in 2012. Goes to show how capital beats labor. To get rich, look for things that will go up and put your money in them early. Easier said than done, but the returns are astronomical when you get it right.


Ethereum.


If only I had taken my own advice :(


How would you cash out on a huge BTC gain? What would happen if you sold say $500,000 worth at an exchange, then had them wire that to your bank account? Let's say no account in your name ever had a monthly deposit total greater than $10,000 and suddenly that shows up. Would you be under investigation by several organizations? Would you be flagged for audit every year for the rest of your life?


Your bank might be worried, but if you use a 'reputable' exchange that does KYC/AML, e.g. GDAX or Gemini, it should be alright. You might want to space out the bank transfers.

As for the tax man, declare the capital gains and you'll be right as rain.


The huge mistake you really made was not taking those two dollars and buying a winning Powerball ticket. Had you known then what you know now, losing some cash on Bitcoin would have been the least of your lost opportunities. ;)


Seconded. The bitcoin to USD ratio is like 1 to 2700 right now

http://www.xe.com/currencycharts/?from=XBT&to=USD


What's caused the price to rise so much the past few months?


Acceptance by major parties, and it's even speculated some governments (Japan, I believe), was one factor. More generally: being able to spend it in more places, as well as people seeing the potential of trading in it.


It's also being used to hide cash in China.

http://www.cnbc.com/2017/05/23/jeff-gundlach-has-a-theory-on...


IMO, this happened because of the Ethereum ICO fever. While there are a number of exchanges that allow you to trade USD -> BTC, there aren't as many that allow trading USD -> ETH. So there are people who might have traded USD -> BTC -> ETH, and this increased demand for BTC.


You can check this thesis pretty easily: https://coinmarketcap.com/currencies/volume/24-hour/

As you can see, ETH <-> Fiat pairings have more volume than BTC -> ETH right now.


Ethereum yes, but other factors too:

Bitcoin is a form of safety and asset diversification in a world of economic uncertainty. Wealthy foreigners like bitcoin because in many instances it is safer than keeping money in a bank.


Foreigners relative to which country?


Speculation. FOMO.


Am I correct in assuming that floating point operations would stress these machines even more? In other words, if the hash computation somehow required lots of floating point calculations, would the time taken to crack be much longer?


There aren't any widely used hashes that require floating point computations. Besides, floating point instructions are usually slower.


True on a CPU, but on a GPU floating point is as fast, or usually faster.


We all know MD5 is broken, but looking at it from purely a brute-force perspective:

If you look at a US English keyboard, you've generally got 47 unique character keys. Let's double it and say there are 100 different characters you can type just using the character keys and shift.

This machine could brute-force crack any 6 character password in under 4 seconds, any 7 character password in just over 6 minutes, and any 8 character password in just over 10 hours.

And that's using the least efficient method known.
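A minimal sketch of that arithmetic, assuming a 100-symbol alphabet and the ~2.8e11 MD5 hashes/sec implied by the 10-hour figure above:

    ALPHABET = 100  # rounded-up count of typeable characters
    RATE = 2.8e11   # assumed MD5 hashes/sec for the 8x 1080 Ti rig

    for length in (6, 7, 8):
        seconds = ALPHABET ** length / RATE
        print(f"{length} chars: {seconds:,.0f} s (~{seconds / 3600:.1f} h)")
    # 6 chars: 4 s; 7 chars: 357 s (~0.1 h); 8 chars: 35,714 s (~9.9 h)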


As a non-infosec guy, could someone shed more light on the implications for end users?

I get that the combination of password reuse, short passwords and the fact that some services store passwords in plain text or as MD5 hashes makes it easy to break into accounts once a single service is compromised.

So my takeaway is not to use longer passwords, but to use a password manager and have unique passwords for every service. My current setup is 8 character passwords for online services (easier to occasionally type in manually).

Am I running a risk by not using 12 character passwords?


In order of importance:

1) Don't use a really bad password like 'password'.

This one is the most important because it might allow an attacker to compromise your accounts online--that is without compromising the site itself.

2) Use a different password for each site.

This one is important because you don't want a compromise of smallvillelittleleague.org, which stores its passwords in plaintext, to mean that an attacker now has access to your banking accounts.

3) Use 2-factor on high importance / risk websites.

4) Use very strong passwords everywhere (i.e. long randomly generated).

If you've done 1-3 above, the scenario where a very strong password gives concrete benefit over a medium-strength one is fairly narrow. It requires that the attacker get a website's password hashes, that the hash used be fairly weak, but that the website not be totally owned (because if it was, then there's no additional benefit to having your site-specific password).

All IMO of course.


> 4) Use very strong passwords everywhere (i.e. long randomly generated).

You can also go the route of using passwords like:

   MyEmailIsFromGmail!
   or
   HackerNews?MoreLikeSlackerNews


Note that HackerNews?MoreLikeSlackerNews has much less entropy than j-9yh`qw#j54-JIR$


Sure, but having to type that in will cause an aneurysm.


That's why password managers exist


I'm personally convinced 8 chars is now too short to be safe, and I suspect real attacks are generally much faster than 8 hours for a password of that length.

Using a password manager to generate random passwords you get a way to be impervious to dictionary attacks, in addition to being able to generate and manage longer passwords. I'm generally using 20 char passwords, and I'd turn it up further if there weren't so many stupid websites that limit the max length of passwords to 20 characters.

For passwords I need to type, especially if I need them occasionally on a touch screen tablet, I'll use a long all-lowercase letters password. Some of them have a 'make pronounceable' option as well that gives random syllables and makes typing a 20 char password easier than typing an 8 char password of completely random characters. 20 chars of lowercase alpha is a lot more secure than 8 chars of mixed-case alphanumeric and punctuation.
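A quick entropy sketch backs that up (95 being the usual printable-ASCII count):

    import math

    # entropy of a uniformly random password = length * log2(alphabet size)
    print(20 * math.log2(26))  # 20 lowercase letters: ~94 bits
    print(8 * math.log2(95))   # 8 printable-ASCII chars: ~53 bits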


Yeah... I feel that if you're limiting input, the limit should at LEAST be 64-100 characters or more. And since you're hashing anyway, I wouldn't worry too much about limits (other than practical hashing times, creation complexity requirements, etc).

The other side is to use a fairly expensive hash, plus methods to mitigate/reduce use of the login system as a DDoS vector... having the system and database used for authentication separate from your actual application is a good start, as is exponential backoff on bad passwords by IP and username.

Moving to a separate "auth" domain that returns a signed or encrypted token, and keeping that in isolation, means too many auth requests at once won't stop your other processes from running. Adding an exponential and random wait before returning from a failed login is another measure. Keeping track of IP/user requests in an N-minute window is also helpful.

Token re-auth may be on the auth domain or the actual service domain, so that can be different.
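A minimal sketch of that exponential-plus-random delay on failed logins (hypothetical names; a real system would keep the counters in shared storage such as Redis, not a process-local dict):

    import random
    import time
    from collections import defaultdict

    failures = defaultdict(int)  # failed-attempt count per (ip, username)

    def on_failed_login(ip, username):
        key = (ip, username)
        failures[key] += 1
        delay = min(2 ** failures[key], 60)       # exponential, capped at 60s
        time.sleep(delay + random.uniform(0, 1))  # random jitter

    def on_successful_login(ip, username):
        failures.pop((ip, username), None)        # reset the counter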


Agreed that long passwords are generally better.

For online services (e.g., HackerNews), what's the scenario where an attacker cracks an 8 character password in 8 hours? I assume the attacker would need to download a copy of the service's password store and in that case the service has been hacked to a degree that the attacker won't need to crack passwords anymore.


Password managers are good, but long passwords plus password managers are better. If we assume MD5, then an 8-character password is not secure if someone is targeting you, as it would take ~10 hours to crack. If someone has an entire database of, say, 50,000 users, all with 8-character passwords, then cracking every password would take 57 years (50,000 × 10 hours ≈ 500,000 hours ≈ 57 years), so you may or may not be among the unlucky few at the top of the list. Also, as GPUs get more powerful, that time will come down further and you will be more at risk. Personally I use longer-than-12-character passwords with a password manager; the inconvenience of having to type one in manually once or twice is not really much greater than with an 8- or 12-character password.


That 57 years assumes they're storing their passwords MD5-hashed with a salt (hah, I'm sure they thought of that if they're using MD5), and that the attacker uses the least efficient method possible to crack the passwords.


It also assumes only a single system... not a cluster of several hundred or thousands. It also assumes no weaknesses in the algorithm itself that could be exploited.


That's assuming you're only dealing with cracking a hashed version, or you have unlimited instant opportunities to "get it right"


That's correct, but the context of this discussion is cracking hashes locally.

Note that salts don't necessarily slow down brute-force cracking of a specific hash if the salt is known and understood.


As with most really high electrical loads, if you can operate it remotely, do so. There are places in North America near major hydroelectric dams with electricity that costs $0.03 to $0.045 per kWh. Running something like this, plus the cooling needed for a 3kW thermal load, at California electricity prices is a good way to burn money.


Quebec and Manitoba have rates as low as $0.03 CAD/kWh (~$0.023 USD), or about $0.07 USD/hr for this GPGPU system.

Servethehome is including colocation rental costs (which also means colocation power cost, typically 3x utility markup) in their cost-per-hour estimate here.
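The arithmetic behind that $0.07/hr, sketched out (3.2kW is the wall draw another commenter below cites for a system like this):

    POWER_KW = 3.2       # approximate draw at the wall
    USD_PER_KWH = 0.023  # ~0.03 CAD/kWh converted

    print(POWER_KW * USD_PER_KWH)             # ~$0.074 per hour
    print(POWER_KW * USD_PER_KWH * 24 * 365)  # ~$645 per year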


Quebec hydro is a huge reason for the location of the OVH datacenter there. Margins are thin in the massive bulk hosting/colo/dedicated-server business.

http://www.datacenterknowledge.com/archives/2013/01/17/ovhs-...


Actually, they buy supply from the hydro dam right beside them; at 125,000 volts, they have their own substations.

There is also a windfarm about 10 minutes from there, and they also have access to the regular hydro quebec power grid.

That location in Beauharnois was very strategic... cheap land, the hydro dam right beside it, near the main Hydro-Québec transmission line for the US interconnect... the only thing they were missing was fiber, but they brought it in.


99% of the electricity in Quebec is from hydroelectric dams. This is how the costs are kept so low, and you don't really have to worry about resulting greenhouse gas emissions.


Semi-sorta. It did involve flooding huge amounts of Quebec, so there are consequences and greenhouse gas emissions due to rotting trees, but it's a different sort of equation.


Depending on when the land was flooded, some areas may be good candidates for underwater logging now or in the future. Apparently in some places this is a great source of wood for high-quality musical instruments, because there aren't many old hardwood trees available anymore.

https://en.wikipedia.org/wiki/Underwater_logging


The Hydro Quebec plants are near the arctic. Some of them are near the tundra line (where trees suddenly stop growing and lichen takes over).

Don't expect much rotting that far north; it will take centuries to even start. Don't expect underwater logging either. This is the boreal forest, not the Rockies'. It's even past the northern limit for boreal cedar, the only mildly valuable tree you might find. Beyond that, you get young evergreens and pines; they burn before they get old and grow slowly due to the climate.


If you compare X vs 1/10,000th X then the emissions from the second option are effectively meaningless.


Ah, apparently it's a much more significant factor in warmer climates: https://www.internationalrivers.org/campaigns/reservoir-emis...


That's kind of an apples-to-baseball comparison. Methane becomes vastly more important over extremely short time frames, but with a ~12-year half-life its long-term contribution is mostly as a carbon source.

Further, tiny dams produce very little power. So, if you play with the numbers you can get extremely different results.

On the other hand, if you look at the annual 13,100 GWh from https://en.wikipedia.org/wiki/W._A._C._Bennett_Dam times the 49 years it's been in operation, that's the equivalent of (1,000,000 kg CO2 / GWh from coal) × 13,100 × 49 ≈ 707,573,630 short tons of CO2, which is vastly larger than all the biomass in the lake to start with. Further, you can log the land before flooding to minimize the total biomass flooded.


British Columbia is also 92% hydro, but more expensive at $0.08 to $0.12 CAD/kWh.


Someone password cracking with 8 GTX 1080's isn't likely worried about the electricity costs associated with said cracking.


If you anticipate full load and include cooling, the electricity costs more than the GPU hardware within a single year - so yes, even (and especially!) if you're buying top-end gear, electricity costs matter; it's more expensive than the shiny stuff.


True, but people use similar hardware configurations for all sorts of things. On a larger scale, if you have ten systems, each drawing 3.2kW as measured by a watt meter on the AC side of the power supply, you want to treat operating costs as a significant part of the budget.


It was mentioned in the article though.


People tend to forget about "the flyover states," but here in Chicago we get ~55% of our power from renewable and nuclear sources (that figure is from 2011 [0]), and have access to similar pricing as a result.

[0]: http://chicagoloopster.medill.northwestern.edu/2012/01/27/so...


I have some 4U servers, and I found that 4x in a rack is already too dense: the temperatures get too high with consumer-level cards that need active cooling.

On the other hand, in CA more rack space was relatively cheap.


> The GTX 1080 Ti is the go-to value card for deep learning at the moment.

Ah, that explains a lot. I recall seeing that AMD cards are being used for most of the Bitcoin / Ethereum stuff right now, so I thought it was odd to see team green used in this case. IIRC, AMD cards have faster integer performance but are slower at floating point than the NVidia cards. Password cracking is primarily integer-based, however, soooooooo...

But since they're actually building a deep learning platform (and, while waiting for other stuff... using it to quickly test a password cracking setup), these benchmarks make sense. Plus, it's an interesting datapoint in any case.

I do wonder what the AMD cards would do, however - whether they'd be more efficient, or faster... (or hell: maybe conventional wisdom is wrong right now and NVidia is faster)


The major difference between AMD and Nvidia for mining/hashing and other integer algorithms is that if the algorithm contains a 32-bit 'ROR' (rotate right), AMD will do it in one tick whereas an Nvidia card needs 3 ticks.

That single low level detail meant that AMD destroyed Nvidia when it came to computing hashes quickly because the innermost loops of those algorithms contain a ROR.

Since then the situation has changed: Nvidia has improved their performance, bitcoin mining has moved almost entirely to ASICs, and for other algorithms AMD still seems to come out ahead of Nvidia.

For deep learning the situation is reversed; there it is almost entirely Nvidia. A major reason is that Nvidia put substantial effort into low-level libraries that get the maximum out of their hardware for deep learning applications.
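For reference, the operation in question, sketched in Python: a 32-bit rotate is two shifts plus an OR, which is why hardware without a native rotate instruction needs ~3 operations where AMD needed one:

    def ror32(x, n):
        """Rotate the 32-bit value x right by n bits."""
        return ((x >> n) | (x << (32 - n))) & 0xFFFFFFFF

    assert ror32(0x00000001, 1) == 0x80000000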


I think (not sure) that AMD is faster per $, not faster for mining specifically? So when mining bitcoin it makes more sense to buy the fastest-per-$ cards?


You need to factor in performance per watt, and Nvidia is sometimes better on that front. Anyway, with current bitcoin and ethereum prices, good luck finding any high-end Nvidia or AMD GPU - miners are buying everything.


Originally ATi (right in the middle of being acquired by AMD) was killing nVidia in the bitcoin mining arena. People were buying 5770s left and right for the crazy hashrate they were getting.


Can this brute force a smart fridge?


Probably, but Jian Yang won't be happy.


Did you forget the passcode to your beer?


It's a reference to the TV series Silicon Valley (season 4).


    bcrypt: 21 kH/s
    scrypt: 750 kH/s

The author didn't mention the work factor so I don't know how comparable the results are. But I thought the merit of scrypt over bcrypt was that it was memory hard, i.e. hard to run on a GPU. It doesn't seem to be the case.


The thing about bcrypt is that despite using a small data structure in memory (kilobytes), it reads, writes, and rewrites a lot of data into it (megabytes, at common work factors). And that data structure is still just a bit too big to fit many parallel instances of bcrypt in GPU L2 caches. So it is effectively "memory hard" on GPUs.

Contrast this with scrypt which writes memory (typically megabytes) only once, and a byte is read, on average, only once.

bcrypt and scrypt are both, at the moment, memory hard on GPUs, but for different reasons: bcrypt reads/writes a small buffer many times; scrypt reads/writes a large buffer once.

Don't place too much importance on comparing the hashes/sec of these 2 algorithms in this specific benchmark. The configurable work factors influence speed a lot.

One day L2 caches will be big enough and we will see a sudden massive cracking speed improvement for bcrypt. This isn't going to happen anytime soon for scrypt.


So one is memory capacity bound, and one is memory bandwidth bound.


The notion of scrypt being "memory hard" dates to 2009. That's when GPUs had just started supporting 1GiB of RAM [1]; 256MiB and 512MiB models were still commonplace. The 1080 Ti has 11x that [2]; it's hard to even buy a GPU with less than 4GiB now.

Scrypt's memory cost is 128 bytes × N_cost × r_blockSizeFactor [3]. The "standard" parameters N = 16384, r = 8 result in 16MiB of memory per instance.

On a 512MiB GPU, that was only 32 instances in parallel. On a modern 1080 Ti (with 11GiB), that can be 704 instances (the math is sketched below the footnotes). Now we're cooking with gas. Moore's Law, as it happens.

[1] http://www.anandtech.com/show/2744/2

[2] https://www.evga.com/products/productlist.aspx?type=0&family...

[3] https://stackoverflow.com/questions/11126315/what-are-optima...
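A sketch of that memory math using Python's stdlib scrypt (hashlib.scrypt needs OpenSSL 1.1+; the 128 × N × r formula is the point):

    import hashlib

    N, r, p = 16384, 8, 1       # the "standard" scrypt parameters
    mem = 128 * N * r           # bytes per instance
    print(mem // 2**20, "MiB")  # 16 MiB

    for vram in (512 * 2**20, 11 * 2**30):  # old 512MiB card vs a 1080 Ti
        print(vram // mem, "parallel instances")  # 32, then 704

    # the call itself, for completeness
    key = hashlib.scrypt(b"hunter2", salt=b"salt", n=N, r=r, p=p)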


That's because scrypt is using the wrong kind of memory hardness. The really hard problem in practice is memory load latency (assuming a large memory with random access). We essentially haven't made much progress on that front in decades, but have papered over it with ever-increasing levels of cache. If the access pattern is effectively random, the caches won't help. Base the crypto on that!


> The author didn't mention the work factor

It's not really the author; it's the Hashcat benchmark, which they've just run straight on the system. There's the exact same problem with e.g. PBKDF2:

    Hashtype: PBKDF2-HMAC-SHA256
    […]
    Speed.Dev.#*…..: 14417.6 kH/s

    Hashtype: Django (PBKDF2-SHA256)
    […]
    Speed.Dev.#*…..: 729.6 kH/s
or

    Hashtype: PBKDF2-HMAC-SHA512
    […]
    Speed.Dev.#*…..: 4974.7 kH/s

    Hashtype: OSX v10.8+
    […]
    Speed.Dev.#*…..: 147.2 kH/s
(it's my understanding that current OSX uses PBKDF2-SHA512)
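The gap is mostly just the iteration count, which is easy to see with the stdlib. A sketch (the two counts are illustrative assumptions, not what hashcat's benchmark uses; Django 1.11's default was 36,000 iterations):

    import hashlib
    import time

    def hashes_per_sec(iterations, n=20):
        start = time.perf_counter()
        for _ in range(n):
            hashlib.pbkdf2_hmac("sha256", b"hunter2", b"salt", iterations)
        return n / (time.perf_counter() - start)

    for it in (1000, 36000):  # a low generic count vs Django 1.11's default
        print(f"{it} iterations: {hashes_per_sec(it):.1f} H/s")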


I wonder what the numbers are for Argon2



I once used my PC for the same thing; the result was OK. Based on raw performance, my GTX 960 GPU is 4.5 times slower than a single 1080 Ti, and thus 36 times slower than 8x 1080 Tis.

My GPU was able to try a 20GB password list in a couple of minutes. That was the largest password list I’ve found on the internets.

However, neither system can brute-force even a short 8-character password. If the password is lower+upper+digits, that's (26*2+10)^8 ≈ 2.18E+14 passwords. This number is way out of range even for a PC with 8x 1080 Ti.

Those GPUs are only useful in a very small number of corner cases.


> My GPU was able to try a 20GB password list in a couple of minutes. That was the largest password list I’ve found on the internets.

Password matching is the low-hanging fruit; I usually run those on a regular server.

Where you need the large GPU workloads is in applying rules to these lists and in generating variations. There is also work in generating the rule files themselves.

The good rule sets like dive[0] have 1.2M+ rules[1]. Run that against your 20GB list and you'll quickly see why you need many GPUs :) (an example invocation is sketched below the footnotes)

[0] https://hashcat.net/wiki/doku.php?id=rule_based_attack

[1] You don't need that many rules; it's very rapidly diminishing returns after ~60% cracked and thousands of rules.
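For the curious, a rule-based run looks roughly like this (a sketch: -m 0 selects raw MD5, -a 0 a straight dictionary attack, and dive.rule ships in hashcat's rules/ directory):

    hashcat -m 0 -a 0 hashes.txt wordlist.txt -r rules/dive.rule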


2.18e14 is nothing. 8x 1080 Ti can crack that in less than 15 minutes (NTLM, MD4, MD5).


This page says 8x 1080 Ti can do 1.3e10 hashes/sec for NTLM:

https://gist.github.com/epixoip/a83d38f412b4737e99bbef804a27...

To test 2.18e14 passwords, you'd need about 4.5 hours.

Add a single extra character to that password and the time will become too long regardless on your hardware.


No, it says 334.0 GH/s (3.3e11). But they actually do 441.4 GH/s (4.4e11) according to this HN submission, using the latest hashcat.

I can understand why you were doubtful if you were off by 33x.


Also, the OP didn't mention the use of symbols. Cracking even an 8-character password that has symbols in it becomes impractical to even consider.


Process of elimination restricts character and symbol sets, generally narrowing your set of possible combinations greatly.

The best way to crack a password isn't to brute-force it first, it's to first analyze who made the password, and the password system, to narrow down all possibilities before you try brute-forcing.

Example: if a person is American, you can pretty much assume they're restricted to the typical US keyboard and its symbols, for 90+% of the population. Very few people know of ALT codes or Unicode or even the character map, even in IT. That narrows your symbol subset down dramatically. The password system truncates after 12 characters and has a minimum of 8? You already know you don't need to try anything with more than 12 characters, and you can start your cracking at 8 characters and ignore anything shorter. That eliminates a whole slew of brute-forcing, as you've now narrowed down the password range.

All it takes is a little thinking. Man can make it, man can break it, there is simply no exception.


I believe the poster upthread already considered only restricted characters (upper + lower + digits), so the difficulty they stated is what remains after your analysis.

> Man can make it, man can break it, there is simply no exception.

Nice platitude, but this is simply not true.


"Nice platitude, but this is simply not true."

You got an example of anything man has made that man has not broken?

"I believe the poster upthread already considered only restricted characters (upper + lower + digits), so the difficulty they stated is what remains after your analysis."

No it's not, because they didn't think of things like password truncation (which my bank annoyingly does) and various other things.

I tested it. It took me almost an hour to crack my chosen mixed-character + symbol 15-character password with a GTX 970, applying the few rules I stated above. Howsecureismypassword.net says it would take a computer 16 BILLION years to crack.

My point very firmly stands.


Rightly said. Even after we narrow down all the possibilities, there are still some common symbols that usually get used in passwords (%, $, @, etc.)


I wonder what happens to all of the old cards that are replaced every generation? Would be nice to snag a couple of those off of eBay.


They're out there, and unbelievably cheap. Just wait for the inevitable death of GPU mining again.


Would you happen to know the model name of yesterday's deep learning GPU?


Interesting to see real-world figures for bcrypt. Also interesting that 7Zip's hashing algorithm seems extremely resilient (I assume it works on the same kind of difficulty / work factor that bcrypt uses, but with a higher factor).


Slightly tangential but:

7zip amazes me. It hasn't been regularly updated for years, and still comes out on top of most benchmarks. And yet I find that many, if not most, people on Windows use WinRAR.

It's good software, easy to install, easy to use, and it doesn't nag the user with warnings about licensing. I don't quite understand how WinRAR got popular in the first place with that kind of competition.


When I was first learning computers, I assumed WinRAR and WinZip were official programs simply because of their names--that is, that they owned the file formats. If others make the same mistake, this might explain 7z's market disadvantage.


7zip seems to love losing file associations, or it simply fails when I double-click a .rar or .7z file, saying the file type isn't recognized (never mind that I just made that very archive with 7zip!)


I've never had file associations working with 7z on any machine. :/


I think you have to run 7zip as admin when setting the associations.


Had the same reaction... RAR similarly has good numbers.

Would love to see numbers for Argon2 :)


There's a Computerphile video about this.

https://www.youtube.com/watch?v=7U-RbOKanYs


As a dev, you should take a look at the difference in hashes/s between the different algorithms. Some are measured in GH/s, some in MH/s, some only in kH/s. E.g. MD5 is way faster than SHA256. So if you only use a strong hash for ensuring data integrity in the absence of adversaries, and performance matters, you may prefer MD5 even these days.


There's no good reason to use md5, period. If performance matters and security doesn't, use something like mmhash3. If security matters and speed doesn't, use something like SHA-512 (optionally truncated to 256 bits). If both matter, use something like siphash if a prf will do or blake2 if you need a true, high-speed cryptographic hash.

(Actually, besides portability and interoperability, there's probably no good reason to use SHA-anything over blake2. Although if you are working in environments that provide hardware crypto support (Intel SHA Extensions on Atom, now also supported on Ryzen so maybe we'll see it on the desktop, too), SHA becomes faster than blake2 and you should use that instead.)

If at any point you find yourself in a situation where your hash being computed too fast poses a security risk, that should be a HUGE warning. Hashes should be fast, cryptographic or otherwise. If you need a "slow hash", you are probably looking for a key derivation algorithm, not a hash, and should be looking at scrypt, bcrypt, or even PBKDF2.


I have not looked deeply at mmhash3 (I guess it means murmurhash3), but Wikipedia says:

[...] When using 128-bits, the x86 and x64 versions do not produce the same values [...]

which makes it unsuitable for cross-platform applications. I was talking about a use case where, from a security standpoint, you could also choose CRC32, but want more collision resistance. How does blake2 performance compare to MD5?
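You can get a rough answer from the stdlib itself; a sketch (exact numbers vary by CPU, though on 64-bit machines blake2b is typically in MD5's ballpark or faster):

    import hashlib
    import time

    data = b"x" * (16 * 2**20)  # 16 MiB of input

    for name in ("md5", "blake2b", "sha256"):
        start = time.perf_counter()
        hashlib.new(name, data).digest()
        elapsed = time.perf_counter() - start
        print(f"{name}: {16 / elapsed:.0f} MiB/s")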


The wikipedia page might be a bit misleading - there's a 128-bit murmur3 that uses 32-bit math (works well on most every processor), and a 128-bit murmur3 that uses 64-bit math (much faster on 64-bit processors, much slower on 32-bit ones)

-Austin, Murmur author.


Ah, so there are mm3-128-32 and mm3-128-64.

That makes it actually a viable alternative to MD5.



