Hacker News

The problems with LLMs are numerous, but what's really wild to me is that even as they get better at fairly trivial tasks, the advertising gets more and more out of hand. These machines don't think, and they don't understand, but people like the CEO of OpenAI allude to them doing just that, obviously so the hype can make them money.



> These machines don't think

And submarines don't swim.


And it would be bad for a submarine salesman to approach people who think swimming is something very special and try to convince them that submarines do swim.


Why would that be bad? A submarine salesman convincing you that his submarine "swims" doesn't change the set of missions a submarine might be suitable for. It makes no practical difference. There's no point where you get the submarine and it meets all the advertised specs, does everything you needed a submarine for, but you're unsatisfied with it anyway because you now realize that the word "swim" is reserved for living creatures.

And more to the point, nobody believes that "it thinks" is sufficient qualification for a job when hiring a human, so why would it be different when buying a machine? Whether or not the machine "thinks" doesn't address the question of whether or not the machine is capable of doing the jobs you want it to do. Anybody who neglects to evaluate the functional capability of the machine is simply a fool.


> but you're unsatisfied with it anyway because you now realize that the word "swim" is reserved for living creatures.

There are swimming robots.[0][1] Swimming is qualitatively different from what submarines do. The exception is helical flagella, not robots.

[0]: https://robot.cfp.co.ir/en/robots/swimming

[1]: https://www.robotswim.com/?lang=English


> The exception is helical flagella, not robots.

Don't you think they're an exception because they're alive? If seals had propellers, we'd still say they swim. Squids use jet propulsion and we still say they swim; do jet skis also swim? Somehow not.


> These machines don't think, and they don't understand

But they do solve many tasks correctly, even problems with multiple steps and new tasks for which they got no specific training. They can combine skills in new ways on demand. Call it what you want.


They don't. Solve tasks, I mean. There's not a single task you can throw at them and rely on the answer.

Could they solve tasks? Potentially. But how would we ever know that we could trust them?

With humans, we not only have millennia of collective experience when it comes to assigning tasks, judging the results, and finding bullshitters; we can also retrain a human on the spot and be confident they won't immediately forget something important because of that retraining.

If we ever let a model make important decisions, I'd imagine we'd want to certify it beforehand. But that excludes improvements and feedback: the certified software had better not change. Of course, a feedback loop could involve recertification, but that means the certification process itself needs to be cheap.

And all that doesn't even take into account the generalized interface: how can we make sure that a model is aware of its narrow purpose and doesn't respond to tasks outside of that purpose?

I think all these problems could eventually be overcome, but I don't see much effort put into such a framework to actually make models solve tasks.


> Also, we can retrain a human on the spot and be confident they won't immediately forget something important over that retraining.

I don’t have millennia, but my more than 3 decades of experience interacting with human beings tell me this is not nearly as reliable as you make it seem.


There is no guarantee that a human would solve the task correctly. Therefore, according to your logic, we can say that humans do not solve tasks either.

To claim that only humans can accurately solve a task using words and wisdom is to give humans too much credit. They are not that lofty, sacred, or absolute.


Could be the sign of the next AI winter.


I don't think you understand, mate.


Do you believe that machines cannot think or understand? That is very racist. It is terribly racist to refuse to recognize the possibilities of beings that are different from you.

If you are an LLM, I will grant you that claim.



