
It's quite obvious that these LLMs are approaching, and encroaching on, human intelligence. It's strange to see people remain in denial. They clearly aren't fully there yet, but two things must be noted:

   1. In certain instances, LLMs already produce output superior to a human's.

   2. There is a clear trendline of improvement in AI over the past decade, from voice recognition in Alexa to DALL-E to ChatGPT. The logical projection of this trendline points to the likely, perhaps inescapable, conclusion that if AI is not superior now, it will be in the future.
There is a huge irrational denial of the above logical deduction. I think it's because ChatGPT hit us too suddenly. It's as if I saw a flying saucer and told you about it: your first reaction would be disbelief, even if I produced logical evidence for it.

I mean, the GP you replied to knows what the guy is talking about; he just doesn't want to admit it.




Agreed. There is a phenomenon that I haven’t found a good name for, which I first observed in self-driving cars: “AI made a mistake that only a really dumb human could make, therefore AI is really dumb”.

If you imagine a spider chart of capabilities, it’s certain that AI will be super-human on average before it is super-human on every dimension, so even when it can replace 50% of current jobs it’s likely to have its own “cognitive biases” that seem dumb to us. I think this reaction is itself a cognitive bias on our part (pattern matching instead of proper probability weighting; the conjunction fallacy, maybe).

I regret the snark in my post, but I find the “pretend not to understand someone’s clear point” rhetorical device obnoxious. I am aware of a few reasonable arguments against Hinton’s position (I don’t happen to agree with them), but they require more finesse to construct.



