
Why are you so certain that the flaws will be fixed? There seems to be a giant leap between a machine spewing words based on probability and an actual deep understanding of the code it's supposed to write.



Because AI is following a predictable trend of exponentially increasing task length at a 50% success rate: https://x.com/METR_Evals/status/1902384481111322929

If we were at the top of an s-curve, the recent samples on that curve would be below the trend line, not above it.
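To make that concrete, here's a minimal sketch of the check being described, using made-up, hypothetical task-length numbers (not METR's actual data): fit an exponential trend to the earlier points, then see whether the most recent points land above or below it.

  import math

  # Hypothetical (months_elapsed, task_length_minutes) pairs -- illustrative only,
  # NOT METR's actual measurements.
  data = [(0, 4), (6, 8), (12, 15), (18, 32), (24, 59), (30, 130)]

  # Fit log(task_length) = a + b * t on all but the last two points
  # (log-linear least squares, i.e. an exponential trend).
  fit = data[:-2]
  n = len(fit)
  xs = [t for t, _ in fit]
  ys = [math.log(y) for _, y in fit]
  xbar, ybar = sum(xs) / n, sum(ys) / n
  b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
  a = ybar - b * xbar

  # If progress were saturating (top of an s-curve), the held-out recent points
  # would sit below the extrapolated exponential; otherwise at or above it.
  for t, y in data[-2:]:
      predicted = math.exp(a + b * t)
      status = "below trend" if y < predicted else "at/above trend"
      print(f"t={t}mo: actual={y}min, trend predicts={predicted:.0f}min, {status}")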


A "machine spewing words based on probability" is an implementation detail. I'm not making a grandiose prediction about the future. All I'm saying is that these machines are improving super fast.

I'm also struck by the superficiality of analysis like "oh it's just probabilities" from so many devs; might as well say "it's magnets".


In my comment I was questioning the certainty that those fundamental flaws will be fixed. I'm one of those people who don't believe that iterating on LLMs will make that giant leap.

You can call it an implementation detail, but it's like how both a wheel and a wing can carry you over some distance, yet the difference between them is staggering. A wheel will never send you flying (normally).


Until this knowledge is widespread, a lot of devs better hold on to their current good jobs.





