Hashcat 6.0 (hashcat.net)
275 points by miles on June 17, 2020 | 53 comments



That's ~5 commits a DAY on average since the last release a year ago, primarily from 29 contributors.

That is a rate of development that bests most paid teams that I know of.

I am very impressed. How do you manage so much commitment for an open source project?


Security researchers / developers are employed by companies and organizations that have an interest in this technology, e.g. law enforcement, secret services.

I suspect most serious / active open source projects have a number of paid developers like that. TBH, they need it if their scale goes beyond a small library / utility.


hashcat is also one of those tools that is both 1. a “core” that can be used in other software; while also being 2. in a class of software that benefits heavily from network effects (i.e. when someone contributes new algorithms to it, everyone gets just a little further in cracking the “mystery hashes” they have lying around.)

Because hash reversing has property #2, the landscape of hash-reversing software is virtually guaranteed to look like an oligopoly: people use the tools with the most algorithms, and so contribute to those, and so “the rich get richer.”

But hashcat having property #1 means that there’s no political reason (e.g. your enterprise wanting to ship something with its own branded GUI on it) that prevents you from using hashcat, and so no reason for anyone to create their own new full-stack hash-reversing system when hashcat already exists to be used within such software.

Effectively, these properties are the same thing that made ffmpeg the “winner” in its own space, as discussed yesterday (https://news.ycombinator.com/item?id=23540704).


The NSA also has lots of grants that result in professors bullying their students into contributing to projects like these.

The fact that this is MIT and not GPL makes this project all sorts of bad. MIT has its place, and I have several personal open projects under it, but I can't imagine a case where society would be better off because a company can build closed hash-cracking tools on top of open source work.


A key to merging many commits (PRs, CLs, etc.) is making small commits. It's not just parceling the same amount of work into more pieces: code review time grows super-linearly with the amount of code to review, so smaller changes genuinely get reviewed and merged faster.


When passion drives your work rather than money.


Non ethical hackers can get paid non ethically.


Maybe don't make unsubstantiated claims without evidence.


it may be suggestive, but it's still a fact


The bare fact is not an answer to the question. It's a non sequitur.

The clear implication that actually answers the question, that these non ethical hackers are responsible for the commitment to the project, is an unsubstantiated claim.


A fact used to imply completely unsubstantiated facts.


How can something be 'suggestive' and 'a fact' at the same time?


By being true, and suggesting something else.

Obviously non ethical hackers can get paid non ethically. Anyone can get paid non-ethically after all. It's a fact that's so trivial it's useless if you just take it for its factual content. What makes it interesting is the implication that people use hashcat for unethical (and/or illegal) activities and use the proceeds from that to pay for their time improving hashcat. The comment doesn't state that, but it's what we are all thinking when reading it.


Prove it


The grandest addition that I can see is the various WPA/WPA2 changes. Not only did it get ~13% faster with this release, but PBKDF2 and PMK support has been added too. CUDA support is obviously a godsend as well.

Fantastic piece of software. Authors: thank you for your hard work!


Hashcat has long been a user of OpenCL. I wondered whether that was because CUDA really wasn't much better for this application, but this release puts that to rest:

> One of the biggest advantages of CUDA compared to OpenCL is the full use of shared memory ... This and other optimizations are the reason we improved the performance of bcrypt by 46.90%.

Also interesting that they specifically call out CUDA on ARM devices like the Jetson Nano and Xavier. I suspect that the GPU in the Nano is better than the MX150 in my laptop.


Not a hashcat dev, but my understanding is that historically AMD has had "more raw power" GPUs than Nvidia, but has also historically suffered from worse driver and game support. Even through Nvidia's 1000 series, AMD cards were usually still the most used cards for crypto mining (which is basically hashing).

As CUDA only really runs on Nvidia hardware, it makes sense that they might be motivated to be as compatible as possible.


Congrats on the release! I was following the development progress on GitHub.

I'm pretty excited about the "Plugin Interface". I think we can call this refactor effort a success story: simpler code + improved performance + more testing.

It's amazing that they've added this tutorial for adding a new algorithm: https://github.com/hashcat/hashcat/blob/master/docs/hashcat-... (previously this information was scattered across PRs).

Thanks atom and all the other contributors!


Is there a Hashcat-as-a-Service or is everyone just renting out EC2 GPU instances by the hour?


Most hashes can be cracked with onlinehashcrack.com -- it's free if the password is under 8 characters and something like $5 if it's not. You can submit as many as you want, and if they don't crack it, it's free.


Is there something similar for Ethereum presale wallet hashes?

I have a wallet of which I know enough of the password to reduce the space to < 10 chars that need to be guessed.


Check out the configuration for a mask attack [0]. You could create a custom character set with the portion that you know and then brute-force the rest (a rough sketch is below). You could then rent a p2.16xlarge [1] from AWS at about $15 per hour. If you know how much coin is in there you can do a cost/benefit analysis.

[0] https://hashcat.net/wiki/doku.php?id=mask_attack#custom_char... [1] https://aws.amazon.com/ec2/instance-types/p2/
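
Something along these lines, as a minimal sketch: the hash mode, the known prefix "Hunter", and the charset are all illustrative, so check hashcat's example_hashes wiki page for the right -m value for your wallet format and size the mask to the characters you're actually missing.

  # -a 3 = mask attack, -1 = custom charset (lowercase letters + digits here)
  # 16300 should be the Ethereum pre-sale wallet mode in recent hashcat versions; verify against your build
  # "Hunter" stands in for the known part of the password; each ?1 is one unknown character
  hashcat -a 3 -m 16300 wallet.hash -1 ?l?d 'Hunter?1?1?1?1'
  # add --increment if you're not sure exactly how many characters are missing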


Thanks! That looks fairly simple, I'll try to set it up on my machine first but even a p2 instance will be worth it in this case.



Forge by Inferno Systems wraps hashcat with a workflow more conducive to non-technical people. That currently requires on-site but I believe they have a cloud offering planned: https://inferno-systems.com/forge/index.html


I personally choose not to send client hashes into untrusted environments. I’d maybe consider EC2 with client permission, but not other dodgy services.


Any way to use Metal instead of OpenCL now?


AFAIK, the main contributors are all on Linux (or similar) systems, so you're unlikely to see Metal support out-of-the-box. Although, with the new backend/plugin interface added in this release, outside contributors can add it themselves if they really need it.


What's the benefit of Metal in this use case? Are there any noticeable speedups in other brute forcing tools that switched to Apple's proprietary API?

Given that OpenCL works on every decent modern platform and GPU brand I doubt much effort will be put into Metal unless someone familiar with the API and willing to put in the extra work joins the team of maintainers or creates a fork.


>What's the benefit of Metal in this use case?

Continued usage on macOS, if you care about that kind of thing, since Apple has deprecated OpenCL support.


TIL. That's awful, but then again I'd expect nothing less from Apple. It's a miracle they even supported open standards in the first place.

As long as Apple keeps OpenCL around, even if it's deprecated, these tools should still work. I'd expect that only the announcement of complete removal of OpenCL support would be enough to actually make hashcat put in the extra effort of writing a special Apple backend like that. Maybe they're generous or bored and do it before that, but I wouldn't expect them to in the near future.


It's not such a miracle - all companies like standards until they have sufficiently many apps on their platform - then they switch to proprietary to prevent app portability to competing platforms.


All new major features look like incredible additions, and OTOH do not seem to water down what the software is supposed to do in the first place. I can only applaud the contributors for their dedication. This really looks amazing.


Anyone know why Java's object hash is even on the list given how small it is? It's not even meant to be cryptographically secure.


Because hashcat doesn't really reject PRs for new hash algorithms, at least to the extent of my knowledge, so long as the code quality is decent. Or in other words, "Why not?"


How many hashes per second can a high end GPU do?


I ran it recently on my 1080ti:

  Session..........: hashcat
  Status...........: Exhausted
  Hash.Type........: MS Office 2010
  Hash.Target......: $office$201010000012816*[removed]
  Time.Started.....: Sat Apr 18 09:05:24 2020 (3 mins, 35 secs)
  Time.Estimated...: Sat Apr 18 09:08:59 2020 (0 secs)
  Guess.Base.......: File (merged.txt)
  Guess.Queue......: 1/1 (100.00%)
  Speed.#1.........: 92589 H/s (2.67ms) @ Accel:256 Loops:128 Thr:64 Vec:1
  Recovered........: 0/1 (0.00%) Digests, 0/1 (0.00%) Salts
  Progress.........: 19922208/19922208 (100.00%)
  Rejected.........: 0/19922208 (0.00%)
  Restore.Point....: 19922208/19922208 (100.00%)
  Restore.Sub.#1...: Salt:0 Amplifier:0-1 Iteration:99968-100000
  Candidates.#1....:
  Hardware.Mon.#1..: Temp: 74c Fan: 55% Util: 89% Core:1949MHz Mem:5508MHz Bus:8

  Started: Sat Apr 18 09:05:07 2020
  Stopped: Sat Apr 18 09:09:00 2020


Is that saying ~92k hashes per second? What's the MS Office 2010 hash type?


I think that is what it is saying! The hash type is AES-128 with SHA-1 hash stretching x100,000[0][1]. The Office 2010 hashmode is 9200[2].

At work, someone of importance wanted access to a password-protected file from an employee who had left. I ran it through several wordlists to demonstrate that an attempt was made, and shared the cost/time required for 100% recovery. I never cracked it, and the cost/time analysis was enough to make them say "oh well"!

[0] https://en.m.wikipedia.org/wiki/Microsoft_Office_password_pr... [1] https://en.m.wikipedia.org/wiki/Key_stretching [2] https://hashcat.net/wiki/doku.php?id=example_hashes
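
For reference, a run like the one above boils down to a plain dictionary attack against mode 9200. A minimal sketch, with the hash file name as a placeholder and merged.txt being the wordlist from the output above:

  # -a 0 = straight/dictionary attack, -m 9200 = MS Office 2010
  hashcat -a 0 -m 9200 office.hash merged.txt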


You can find benchmarks on Google, e.g. https://gist.github.com/binary1985/c8153c8ec44595fdabbf03157...

75 gigahashes per second for NTLM.


That's v5.0.0, so not really applicable when considering the new performance improvements in version 6 or CUDA support.


That depends heavily on which hash algo: cheap GPUs can rip through MD5s but expensive ones will still take forever on bcrypt with a high work factor. Hashcat 6 beta hit 100GH/s for NTLM on a 2080 TI, though: https://twitter.com/hashcat/status/1095807014079512579?lang=...
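
If you want numbers for your own card, hashcat's built-in benchmark mode is the quickest way to see the gap between fast and slow hashes. A sketch (mode numbers per the hashcat wiki; exact figures vary by GPU, driver and version):

  hashcat -b -m 1000   # NTLM: tens of GH/s on a high-end GPU
  hashcat -b -m 3200   # bcrypt: orders of magnitude slower per guess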


This is why you should use scrypt, or some other memory-hard algorithm, rather than bcrypt to generate password hashes.


So what's the difference between hashcat and johntheripper?

Any reason to use one over the other?


I had to check to see whether John the Ripper is still maintained. As a matter of fact, 1.9.0 was released last year, four years after 1.8.0:

https://www.openwall.com/lists/announce/2019/05/14/1

The release notes mention that CUDA support was dropped, but that 88 formats out of 407 have OpenCL support.

A few formats also have support for the ZTEX 1.15y, a now-discontinued FPGA-based board popular for crypto mining, which is something I don't think Hashcat has. Here's an article I found on that topic:

https://medium.com/@ScatteredSecrets/bcrypt-password-crackin...

Edit: the two HN submissions for JtR 1.9.0 got no comments, but this Slashdot post does have some comments from a maintainer:

https://it.slashdot.org/story/19/05/18/1841245/new-john-the-...


Hashcat has far better performance; unsure if JtR supports ‘rules’ (e.g. add a ‘1!’ suffix to each word - hashcat has many rules like this).

Hashcat is a PITA for quick jobs, so JtR is better if you’re teaching a class of unequipped students.

I’ve never had hashcat work out of the box at all: either there are driver issues, or it exits because it overheats my graphics card. When it is running, it is very, very good though.


I’ve used johntheripper for wordlists and hashcat for brute-forcing, but the ethos might have changed.


Hashcat supports wordlists with large rule sets - search for the ‘best64’ rule set, for instance.
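
For context, a rule file is just a list of mangling operations applied to every candidate word; the ‘1!’ suffix mentioned upthread is a one-line rule. A minimal sketch with made-up file names (rule syntax per hashcat's rule-based attack docs):

  # suffix.rule: $X appends the character X, c capitalizes the first letter
  $1 $!
  c $1 $!

  # apply the rules on top of a wordlist (-a 0 = dictionary attack, -m 1000 = NTLM)
  hashcat -a 0 -m 1000 hashes.txt words.txt -r suffix.rule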


Awesome, thank you!


Hashcat has massively better scalability and support for GPU acceleration


1800 commits since the last release - that's not exactly "continuous delivery" ;)


The smiley indicates that you're joking, but in case you're not: I don't see any commitment from their side to doing continuous delivery, and it's not the best way for ALL projects to do development anyway. Most web startups seem to default to it these days, but that doesn't mean it's a MUST for all types of application building.


For those that care, it is though - they can do their own builds off the master or develop branch. You're confusing continuous delivery with continuous deployment. Delivery means that your software CAN be deployed at any given point, which is ensured by e.g. a test suite and other checks before something is merged.



