People have been rewarded in the past for being skeptical, and have learned this trick to the detriment of their own intellect.
It is a very easy trap on the internet to confuse skepticism with expertise, and HN is one of the worst offenders: most of the comments that feign skepticism barely understand the topic they are commenting on anymore, and are simply generating skeptical words because that is what has been rewarded in the past.
There is also unsubstantiated hype, which is not helpful either.
Occasionally someone has actually read a few hundred fundamental ML papers and can give a genuinely educated response, but that is quite rare. Those people typically don't feign skepticism; rather, they notice that there are noteworthy improvements provided by meta-RL, RLHF, etc.
I work in vision. Go look up the ImageNet leaderboard. Compare AlexNet's results with the top result today: the trend is a log line. The top contending architectures still include CNNs trained with backprop; they've just had a decade of tricks applied to eke out incremental improvements. The transformer-based vision models aren't much better.
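Roughly, the numbers I have in mind fit a log curve. Here's a quick sketch; the top-1 figures are approximate values from memory, so treat them as illustrative rather than authoritative leaderboard data:

    import numpy as np

    # Rough ImageNet top-1 accuracies, from memory -- illustrative,
    # not an authoritative leaderboard dump.
    years = np.array([2012, 2014, 2016, 2018, 2020, 2022])
    top1 = np.array([63.3, 74.4, 80.0, 84.5, 88.5, 90.9])  # percent

    # Fit top1 ~ a * ln(years since 2011) + b: the "log line".
    a, b = np.polyfit(np.log(years - 2011), top1, 1)
    print(f"top1 ≈ {a:.1f} * ln(year - 2011) + {b:.1f}")

    # Extrapolate: gains flatten to a few points per decade.
    for yr in (2024, 2030):
        print(yr, round(a * np.log(yr - 2011) + b, 1))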
Talk to any machine learning expert and they'll tell you the math and fundamentals haven't really changed since the 90s; we've just gotten better at scaling. Transformers came onto the scene half a decade ago and we could scale them much better than CNNs, but like today's CNNs, they've hit the point of diminishing returns.
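If you want to see how little the core math has changed, here's a minimal sketch of the loop that was already standard in the 90s (forward pass, backprop via the chain rule, plain SGD) on toy XOR; the layer sizes and learning rate are arbitrary choices for illustration:

    import numpy as np

    # 90s-era training loop: forward pass, backprop, plain SGD.
    rng = np.random.default_rng(0)

    # Toy data: XOR, the classic non-linearly-separable example.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    lr = 0.5

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: chain rule on squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Plain SGD -- the same update rule as in the 90s.
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

    print(out.round(2))  # should end up near [[0], [1], [1], [0]]

Everything since then, CNNs and transformers included, layers scaling tricks on top of that same loop.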
Maybe look at actual data instead of being dismissive of differing opinions.
Interestingly enough, you can have linear or even exponential curves on the way from 0 to 100. And you completely ignored the point that the basic building blocks and algorithms are more or less the same. I think I'm done discussing this with non-experts.
I find it delightfully ironic that you just accused me of not having an intelligent take on AI, just faking one.
That out of the way: the very term AI has been applied to automatic computation since its inception, and the current hype wave is nothing but marketing for software engineering done the hard way. You get one good chatbot by turning eight years of the internet into 1 TB of parameters in memory, it costs nearly a million dollars a day to run, and... it can regurgitate semi-coherent prose. Wow. Talk about hype.
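A back-of-the-envelope check on that figure, assuming a GPT-3-scale model of 175B parameters (my assumption; pick your favorite frontier model):

    # Check on "1 TB of parameters", assuming 175e9 parameters.
    params = 175e9
    for name, bytes_per in (("fp32", 4), ("fp16", 2)):
        print(f"{name}: {params * bytes_per / 1e12:.2f} TB")
    # fp32 gives 0.70 TB, fp16 gives 0.35 TB -- "about a terabyte"
    # is the right order of magnitude for the weights alone.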
I'm not skeptical of AI to sound smart. Hell, what do I get out of some random anonymous account I use to read a blog aggregator? I am deeply skeptical of people hard-selling some shiny new compute silver bullet that will supposedly do away with all the nasty complexity. Because it won't. We were warned about that nearly 60 years ago.
Since you don't know squat about my background, maybe you're the one slinging snark around here.