> it's not at all clear that they're really more than a glorified autocorrect.
I use LLMs regularly in my job - many times a day - and I suspect you haven't used them much if you think this.
They're relevant to the singularity discussion though, because they already give a taste of what superhuman intelligence could look like.
ChatGPT, for example, is objectively superhuman in many ways, despite its significant limitations. Once systems like this are more integrated with the outside world and able to learn more directly from feedback, we'll get an even bigger leap forward.
Dismissing this as "glorified autocorrect" is extremely far off base.
> ChatGPT, for example, is objectively superhuman in many ways
But so is a car.
ChatGPT is mostly superhuman in its ability to draw upon enormous numbers of existing sources. It's not superhuman, or even human, in terms of logic, reasoning, or inventing something truly new.