
> AI winters are a result of a massive disparity between the expectations of the general public and the reality of where the technology currently sits.

I think they also happen when the best ideas in the field run into the brick wall of insufficiently developed computer technology. I remember writing code for a perceptron in the '90s on an 8-bit system with 64 KB of RAM - it's laughable.
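For reference, the whole algorithm is only a few lines; a minimal Python sketch (on that 8-bit machine this would have been fixed-point integer math, and the OR-gate data here is just an illustrative stand-in):

    # Perceptron learning an OR gate: nudge weights toward each misclassified example.
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(20):  # a few epochs suffice for linearly separable data
        for (x1, x2), target in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred  # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err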

But right now compute power and data storage seem plentiful, so rumors of the current wave's demise appear exaggerated.




I wonder, though, what will happen with the demise of Moore's law... can we simply go with increased parallelism? How much can that scale?
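Amdahl's law already gives a rough ceiling. A back-of-the-envelope sketch, where the 5% serial fraction is an assumed figure, not a measurement:

    # Amdahl's law: speedup = 1 / (serial + parallel / n)
    serial = 0.05  # assumed fraction of the work that cannot be parallelized
    for n in (2, 8, 64, 1024):
        speedup = 1 / (serial + (1 - serial) / n)
        print(n, round(speedup, 1))  # tops out near 1 / serial = 20x, however many cores

So with even a small serial fraction, piling on cores hits diminishing returns fast.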


That part will be harder than we can imagine.

Most of the software world will have to move to something like Haskell or another functional language. As of now, the bulk (almost all) of our people are trained to program in C-based languages.

It won't be easy. There will be a renewed high demand for software jobs.


I don't think Haskell/FP is a solution either... Even if it allows some beautiful, straightforward parallelization in Spark for typical cases, more advanced cases become convoluted, require explicit caching, and degrade performance significantly unless some nasty hacks are involved (resembling the cut operator in Prolog). I guess the bleeding edge will always be difficult, and one should not restrict their choices to a single paradigm.
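For the typical case it really is that clean, and the explicit caching is exactly the manual step I mean. A minimal pyspark sketch (the workload is made up):

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "demo")
    nums = sc.parallelize(range(1_000_000))
    squares = nums.map(lambda x: x * x).cache()  # without .cache(), both actions recompute the map
    total = squares.reduce(lambda a, b: a + b)   # first action materializes the RDD
    evens = squares.filter(lambda x: x % 2 == 0).count()  # second action reuses the cached data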


I wish GPUs were 1000x faster... Then I could do some crazy magic with Deep Learning instead of waiting weeks for training to be finished...


That's more a matter of budget than anything else. If your problem is valuable enough, spending the money to finish in a short time frame rather than waiting for weeks can be well worth the investment.


I cannot fit a cluster of GPUs into a phone where I could make magic happen real-time though :(


Hm. Offload the job to a remote cluster? Or is comms then the limiting factor?


It won't give us that snappy feeling; imagine learning things in milliseconds and immediately displaying them on your phone.
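Back-of-the-envelope (every number here is an assumption for a typical mobile link, not a measurement):

    # Round-trip budget for offloading a single real-time step to a remote cluster
    rtt_ms = 50            # assumed mobile round-trip time
    payload_mb = 2         # assumed input size, e.g. one camera frame
    uplink_mbps = 20       # assumed uplink throughput in megabits/s
    transfer_ms = payload_mb * 8 / uplink_mbps * 1000
    print(rtt_ms + transfer_ms)  # ~850 ms spent on comms before the cluster does any work

That's two orders of magnitude away from "milliseconds", so yes, comms is the limiting factor.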


Jeez. That would be faster than protein-and-water-based systems, which up until now have still been the fastest learners.


Somebody is working on photonics-based ML: http://www.lighton.io/our-technology



