Ask HN: JsCrack, an experiment in distributed computing
5 points by jamess on Dec 18, 2008 | 3 comments
For reasons relating to a bit of reverse engineering I was doing today (er, yesterday now... sleep schedule isn't going too well), I got to thinking about how long it would take to crack a single DES key on commodity hardware today. While I was thinking, I hit on a novel idea: distributed computing using JavaScript, piggybacking on unsuspecting browsers.

It occurred to me that a site like YouTube, if it parcelled out the DES key space to visiting browsers in leisurely 10,000-key increments, small enough to check in the background while watching a one-minute video without making a dent in the CPU, could crack a DES key in something like three years in the average case (assuming zero audience growth).
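
Back-of-envelope, that estimate hangs together if you assume a few billion work units served per day. A minimal sketch of the arithmetic (the daily view count here is a number I made up for illustration, not a real YouTube figure):

    // Rough check of the estimate above. VIEWS_PER_DAY is an assumption,
    // not a real traffic figure.
    var KEYSPACE = Math.pow(2, 56);   // total single-DES keys
    var AVG_KEYS = KEYSPACE / 2;      // expected keys tried before a hit
    var KEYS_PER_VISIT = 10000;       // one work unit per page view
    var VIEWS_PER_DAY = 3e9;          // assumed audience

    var days = AVG_KEYS / (KEYS_PER_VISIT * VIEWS_PER_DAY);
    console.log((days / 365).toFixed(1) + " years");  // prints "3.3 years"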

I'm going to have to think about how to manage the key space so that the server doesn't chew more CPU handing out work than I would spend checking the keys personally. Still, I think I'm going to implement this little experiment in distributed computing and see if I can attract people to paste some JavaScript onto their personal blogs and suchlike.
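
One cheap way to do the bookkeeping: hand out the key space strictly in order, so the server only needs an incrementing counter plus a list of units that timed out. Here's a rough sketch of the client half; the example.org URLs and the tryKeys() routine are placeholders I've invented, not a working service:

    // Client half of the protocol. URLs and tryKeys() are hypothetical.
    function fetchWorkUnit(callback) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", "http://example.org/work", true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          callback(JSON.parse(xhr.responseText)); // {start: ..., count: 10000}
        }
      };
      xhr.send(null);
    }

    function reportResult(unit, foundKey) {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", "http://example.org/result", true);
      xhr.setRequestHeader("Content-Type", "application/json");
      xhr.send(JSON.stringify({ start: unit.start, found: foundKey }));
    }

    fetchWorkUnit(function (unit) {
      reportResult(unit, tryKeys(unit.start, unit.count)); // tryKeys: the DES check
    });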

Anyone interested in joining in, either as a developer or eventual cracker?




It's been done (but for SETI, not password cracking):

http://ajaxian.com/archives/massively-parallel-crowd-sourced...

However, this requires that the client have Google Gears installed, which basically no one does. There's no reason you couldn't do it in the main thread, as long as the task were easily split up into multiple chunks and you were careful not to use too much CPU time.
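
For the "don't hog the CPU" part, the usual trick without Gears is to do a small slice of work and then yield with setTimeout. Something like this sketch, where tryKey() stands in for the actual DES test:

    // Cooperative chunking in the main thread: check a few keys,
    // then give the browser ~50 ms to breathe. tryKey() is a placeholder.
    function runChunked(start, end, chunkSize) {
      var i = start;
      (function step() {
        var stop = Math.min(i + chunkSize, end);
        for (; i < stop; i++) {
          tryKey(i); // hypothetical: test one candidate key
        }
        if (i < end) setTimeout(step, 50);
      })();
    }

    runChunked(0, 10000, 250); // 40 slices of 250 keys each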

The hard part is getting it widely distributed. Big traffic sites like YouTube would never do something like this. If you're unscrupulous, you could inject the JavaScript into any site you can hack (WordPress blogs, etc...)

A MapReduce-type implementation would be very, very cool. Make it generic, such that you could submit bits of JavaScript code, the URLs of the data to operate on, and the location to post the results to.
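
The client side of that could be almost trivially small. A sketch, with the job-spec fields invented for illustration (and yes, eval()ing code the server hands you is exactly as dangerous as it looks):

    // Generic job runner: the server sends {code, dataUrl, resultUrl};
    // the client evals the map function, runs it on the fetched data,
    // and posts the result back. Field names are made up.
    function get(url, cb) {
      var xhr = new XMLHttpRequest();
      xhr.open("GET", url, true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) cb(xhr.responseText);
      };
      xhr.send(null);
    }

    function post(url, body) {
      var xhr = new XMLHttpRequest();
      xhr.open("POST", url, true);
      xhr.send(body);
    }

    function runJob(job) {
      var mapFn = eval("(" + job.code + ")"); // e.g. "function (data) { ... }"
      get(job.dataUrl, function (data) {
        post(job.resultUrl, JSON.stringify(mapFn(data)));
      });
    }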

It would surprise me if botnet owners hadn't already done something like this. Of course they can run whatever native code they want.


Hah, blast. Just when you think you have an interesting new idea... Not using Google Gears would certainly be a good idea, though.


It would still be an interesting project. Especially if it were a generic MapReduce type thing.

It's crazy to think how much power a major site like Google could harness if they actually did this. If I were them I'd do it, but make it opt-in, of course.

If it weren't for the same-origin policy, they could make a distributed web crawler that loads web pages, parses them, and returns a nice clean list of words and links for the real heavy-duty processing. They would need some assurance that clients didn't tamper with the data. Perhaps several clients could process the same pages, and they could check that the results match before accepting the data.
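
The matching could be as simple as refusing to accept a page's result until k independent clients have returned identical output. A server-side sketch (data structures invented for illustration):

    // Accept a result for a URL only once k clients agree on it.
    var pending = {}; // url -> {counts: {...}, results: {...}}

    function submit(url, result, k) {
      var key = JSON.stringify(result); // stand-in for a real digest
      var entry = pending[url] || (pending[url] = { counts: {}, results: {} });
      entry.counts[key] = (entry.counts[key] || 0) + 1;
      entry.results[key] = result;
      if (entry.counts[key] >= k) { // k matching copies: verified
        delete pending[url];
        return entry.results[key];
      }
      return null; // still waiting for agreement
    }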



