Security researchers / developers are employed by companies and organizations that have an interest in this technology, e.g. law enforcement, secret services.
I suspect most serious / active open source projects have a number of paid-for developers like that. TBH, they need it once their scale grows beyond a small library / utility.
hashcat is also one of those tools that is both 1. a “core” that can be used in other software, and 2. in a class of software that benefits heavily from network effects (i.e. when someone contributes new algorithms to it, everyone gets just a little further in cracking the “mystery hashes” they have lying around).
Hash reversing having property #2 virtually guarantees that the landscape of hash-reversing software will look like an oligopoly: people will use the tools with the most algorithms, and so contribute to those, and so “the rich get richer.”
But hashcat having property #1 means that there’s no political reason (e.g. your enterprise wanting to ship something with its own branded GUI on it) not to use hashcat, and so no reason for anyone to create their own new full-stack hash-reversing system when hashcat already exists to be used within such software.
The NSA also has lots of grants that result in professors bullying their students into contributing to projects like these.
The fact that this is MIT and not GPL makes this project all sorts of bad. MIT has its place and I have several personal open source projects under it, but I can't imagine a case where society is better off because a company has closed-source hash crackers built on top of open source work.
A key to merging many commits (PRs, CLs, etc.) is making small commits. It's not just parceling the same amount of work into more pieces: code review time grows super-linearly with the amount of code under review.
The bare fact is not an answer to the question. It's a non sequitur.
The clear implication that actually answers the question, that these non-ethical hackers are responsible for the commitment to the project, is an unsubstantiated claim.
Obviously non-ethical hackers can get paid non-ethically; anyone can get paid non-ethically, after all. It's a fact so trivial that it's useless if you take it only for its factual content. What makes it interesting is the implication that people use hashcat for unethical (and/or illegal) activities and use the proceeds to pay for their time improving hashcat. The comment doesn't state that, but it's what we're all thinking when reading it.
The grandest addition that I can see is the various WPA/WPA2 changes. Not only did WPA/WPA2 cracking get ~13% faster with this release, but PBKDF2 and PMK support have been added too. And CUDA support is obviously a godsend from above as well.
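For anyone wanting to try the new modes, a rough sketch (file names here are placeholders; the .hc22000 input comes from converting a capture with something like hcxpcapngtool):

    # WPA-PBKDF2-PMKID+EAPOL (mode 22000, new in 6.0.0):
    # runs the full PBKDF2 derivation for every candidate
    hashcat -m 22000 -a 0 capture.hc22000 wordlist.txt

    # mode 22001 (WPA-PMK-PMKID+EAPOL) is the same idea but takes
    # precomputed PMKs as candidates, skipping the PBKDF2 step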
Fantastic piece of software. Authors: thank you for your hard work!
Hashcat has long been a user of OpenCL. I wondered whether that was because CUDA really wasn't much better for this application, but this release puts that to rest:
> One of the biggest advantages of CUDA compared to OpenCL is the full use of shared memory ... This and other optimizations are the reason we improved the performance of bcrypt by 46.90%.
Also interesting that they specifically call out CUDA on ARM devices like the Jetson Nano and Xavier. I suspect that the GPU in the Nano is better than the MX150 in my laptop.
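If you're curious which backend your build picked up, 6.0.0 renamed the device options to be backend-agnostic (flag names per the release notes; device numbers vary by machine):

    # list all detected compute devices (CUDA and OpenCL)
    hashcat -I

    # pin a run to a specific backend device, e.g. device #1
    hashcat -m 0 -a 0 -d 1 hashes.txt wordlist.txt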
Not a hashcat dev, but my understanding is that historically AMD has had "more raw power" GPUs than Nvidia, while suffering from worse drivers and game implementations. Even through Nvidia's 1000 series, AMD cards were usually still the most used cards for crypto mining (which is basically hash computation).
As CUDA only really runs on Nvidia hardware, it makes sense that they might be motivated to be as compatible as possible.
Congrats on the release! I was following the development progress on GitHub.
I'm pretty excited about "Plugin Interface".
I think we can count this refactor effort as a success story: simpler code + improved performance + more testing.
Most hashes can be cracked with onlinehashcrack.com -- it's free if the password is under 8 characters and something like $5 if it's not. You can submit as many as you want, and if they don't crack one, it's free.
Check out the configuration for a masked attack [0]. You could create a custom character set with the portion that you know and then brute the rest. You could then rent a p2.16xlarge [1] from AWS at about $15 per hour. If you know how much coin is in there you can do a cost/benefit analysis.
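To make that concrete, a minimal sketch of such a mask attack, assuming a Bitcoin/Litecoin wallet (-m 11300; the hash would come from something like bitcoin2john.py) and a made-up remembered prefix "Hodl":

    # -1 defines custom charset 1 (lowercase + digits); literal text in
    # the mask is fixed, so this tries "Hodl" + 4 unknown characters
    hashcat -m 11300 -a 3 wallet.hash -1 ?l?d 'Hodl?1?1?1?1'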
Forge by Inferno Systems wraps hashcat in a workflow more conducive to non-technical people. That currently requires an on-site deployment, but I believe they have a cloud offering planned: https://inferno-systems.com/forge/index.html
I personally choose not to send client hashes into untrusted environments. I’d maybe consider EC2 with client permission, but not other dodgy services.
AFAIK, the main contributors are all on Linux (or similar) systems, so you're unlikely to see Metal support out of the box. Although, with the new backend/plugin interface added in this release, outside contributors can add it themselves if they really need it.
What's the benefit of Metal in this use case? Are there any noticeable speedups in other brute forcing tools that switched to Apple's proprietary API?
Given that OpenCL works on every decent modern platform and GPU brand I doubt much effort will be put into Metal unless someone familiar with the API and willing to put in the extra work joins the team of maintainers or creates a fork.
TIL. That's awful, but then again I'd expect nothing less from Apple. It's a miracle they even supported open standards in the first place.
As long as Apple keeps OpenCL around, even if it's deprecated, these tools should still work. I'd expect that only the announcement of complete removal of OpenCL support would be enough to actually make hashcat put in the extra effort of writing a special Apple backend like that. Maybe they're generous or bored and do it before that, but I wouldn't expect them to in the near future.
It's not such a miracle - all companies like standards until they have sufficiently many apps on their platform - then they switch to proprietary to prevent app portability to competing platforms.
All the new major features look like incredible additions, and at the same time they don't seem to water down what the software is supposed to do in the first place.
I can only applaud the contributors for their dedication. This really looks amazing.
Because hashcat doesn't really reject PRs adding hash algorithms, at least to my knowledge, so long as the code quality is decent. Or in other words, "why not?"
I think that is what it is saying! The hash type is AES-128 with SHA-1 hash stretching x100,000[0][1]. The Office 2010 hashmode is 9500[2].
At work, someone of importance wanted access to a password-protected file from an employee who had left. I ran it through several wordlists to demonstrate an attempt was made, and shared the cost/time required for 100% recovery. Never cracked it, and the cost/time analysis was enough to make them say "oh well"!
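For anyone in the same situation, the usual flow is to extract the hash from the document first and then run wordlists against it. A rough sketch (office2john.py is from the John the Ripper jumbo tools; file names are placeholders):

    # extract the $office$... hash from the document
    python office2john.py protected.docx > office.hash
    # (hashcat may want the leading "filename:" field stripped)

    # Office 2010 is hash mode 9500; -a 0 is a straight wordlist attack
    hashcat -m 9500 -a 0 office.hash rockyou.txt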
That depends heavily on which hash algo: cheap GPUs can rip through MD5s but expensive ones will still take forever on bcrypt with a high work factor. Hashcat 6 beta hit 100GH/s for NTLM on a 2080 TI, though: https://twitter.com/hashcat/status/1095807014079512579?lang=...
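You can see the spread on your own hardware with hashcat's benchmark mode (mode numbers per the hashcat docs: 0 = MD5, 1000 = NTLM, 3200 = bcrypt):

    # fast unsalted hashes: billions of guesses per second on a decent GPU
    hashcat -b -m 0
    hashcat -b -m 1000

    # deliberately slow KDF: orders of magnitude fewer guesses per second
    hashcat -b -m 3200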
The release notes (John the Ripper's) mention that CUDA support was dropped there, but that 88 formats out of 407 have OpenCL support.
A few formats also have support for the ZTEX 1.15y, a now-discontinued FPGA-based board popular for crypto mining, which is something I don't think Hashcat has. Here's an article I found on that topic:
Hashcat has far better performance; unsure if jtr supports ‘rules’ (e.g. add a ‘1!’ suffix to each word; hashcat has many rules like this).
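For reference, that exact example as a hashcat rule: $X appends the character X, so $1$! appends "1!". A minimal sketch (the rule file name is made up):

    # create a rule file that appends '1' then '!' to every candidate
    echo '$1$!' > append1bang.rule

    # apply it during a straight wordlist attack
    hashcat -m 0 -a 0 hashes.txt wordlist.txt -r append1bang.rule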
Hashcat is a PITA for quick jobs, so JTR is better if you're teaching a class of unequipped students.
I’ve never had hashcat work out of the box at all: either there are driver issues, or it exits because it overheats my GFX. When it is running it is very, very good though.
The smiley indicates that you're joking, but in case you're not: I don't see any commitment from their side to doing continuous delivery, nor is it the best way for ALL projects to do development. Most web startups seem to default to it these days, but that doesn't mean it's a MUST for all types of application building.
For those that care, it is, though: they can do their own builds off the master or develop branch. You're confusing continuous delivery with continuous deployment. Delivery means that your software CAN be deployed at any given point, which is ensured by e.g. a test suite and other checks before something is merged.
That is a rate of development that bests most paid teams that I know of.
I am very impressed. How do you manage so much commitment for an open source project?