Hacker News

Past progress is evidence for future progress.





Might be an indicator, but it isn't evidence.

Not exactly. If you focus on a single technology, you tend to see rapid improvement followed by slower progress.

Sometimes this is masked by people spending more due to the industry becoming more important, but it tends to be obvious over the longer term.


That's probably what every self-driving car company thought ~10 years ago, when everything was moving so fast for them. Now it doesn't seem like we're getting close to a solution.

Surely this time it's going to be different, AGI is just around the corner. /s


Would you have predicted, in the summer of 2022, that a GPT-4-level conversational agent would be possible within the next 5 years? People have tried to build one for the past 60 years and failed. How is this time not different?

On a side note, I find this type of critique of what the future of tech might look like the most uninteresting one. Since tech by nature inspires people about the future, all tech gets hyped up. All you have to do then is pick any tech, point out that people have been wrong before, and ask how likely it is that this time is different.


Unfortunately, I don't see the relevance of that argument. If you consider GPT-4 to be a breakthrough -- then sure, single breakthroughs happen, I am not arguing with that. Actually, the same thing happened with self-driving: I don't think many people expected Tesla to release FSD publicly back then.

Now, a chain of breakthroughs happening in a small timeframe? Good luck with that.


We have seen multiple massive AI breakthroughs in the last few years.

Which ones are you referring to?

Just to make it clear, I see only one breakthrough [0]. Everything that happened afterwards is just the application of that breakthrough to different training sets, different domains, etc.

[0]: https://en.wikipedia.org/wiki/Attention_Is_All_You_Need
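For context, the core idea of that paper [0] is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V. A minimal NumPy sketch of just that operation (shapes and names here are illustrative, not taken from any reference implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the core Transformer operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity logits
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ V                            # weighted average of value vectors

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries, dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8): one output vector per query
```

The full Transformer adds multi-head projections, residual connections, and feed-forward layers around this, but the attention step above is the piece everything since has reused.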


Autoregressive language models, the discovery of the Chinchilla scaling law, MoEs, supervised fine-tuning, RLHF, whatever was used to create OpenAI o1, diffusion models, AlphaGo, AlphaFold, AlphaGeometry, AlphaProof.
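On the Chinchilla point, for readers unfamiliar with it: the paper fit a parametric loss of the form L(N, D) = E + A/N^alpha + B/D^beta in parameters N and training tokens D, which implies compute-optimal training uses roughly 20 tokens per parameter. A rough sketch using the published fit (constants are the paper's approximate fitted values, quoted from memory, so treat them as illustrative):

```python
def chinchilla_loss(N, D, E=1.69, A=406.4, B=410.7, alpha=0.34, beta=0.28):
    """Fitted parametric loss: both more parameters (N) and more training
    tokens (D) reduce loss, each with diminishing returns."""
    return E + A / N**alpha + B / D**beta

# Chinchilla itself: ~70B parameters trained on ~1.4T tokens (~20 tokens/param).
print(chinchilla_loss(70e9, 1.4e12))
```

The practical upshot the paper drew is that earlier large models were undertrained for their size: at a fixed compute budget, loss drops faster by adding tokens than by adding parameters once D/N falls much below ~20.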

Those are the same breakthrough applied to different domains; I don't see them as distinct. We will need a new breakthrough, not more applications of the same solution to new things.

If you wake up from a coma and see the headline "Today Waymo has rolled out a nationwide robotaxi service", what year do you infer that it is?


