
Discussions of consciousness and AI are broadly confused. People, especially scientists, are unfamiliar with philosophy of mind and with what philosophers currently think. For an introduction to some of the best thinking on the subject, see this interview with Andres Gomez Emilsson of QRI: https://www.youtube.com/watch?v=xJzBjBo24g8.

For something more 'mainstream,' but still reaching, see this interview with Philip Goff: https://www.youtube.com/watch?v=D_f26tSubi4

The good news is we're starting to get a handle on these questions. We're a lot further along than we were when I studied philosophy of mind in school 15 years ago.

As far as I can see at the moment, LLMs will never be conscious in any way resembling an organism, because symbolic machines are a very different kind of thing from nervous systems. John Searle, broadly, framed the issue correctly in the '80s, and the standard critiques are wrong.

As far as impact goes, LLMs don't need to be conscious to completely transform society, in both good and bad ways. For the best thinking on that, see Tristan Harris and Aza Raskin's latest: https://vimeo.com/809258916/92b420d98a




> John Searle, broadly, framed the issue correctly in the '80s, and the standard critiques are wrong.

The standard critiques are not wrong, IMNSHO. Searle's Chinese Room is facile mind-poison. It is an unfalsifiable hypothesis.

What if I could simulate physics down to the molecular level, including simulating a human brain? Would that be conscious? If not, why not?

And if I ran that simulation (a bit slowly, granted) by having the man from the Chinese Room execute it by hand, painstakingly following its instruction code, would the fact that the simulation is implemented by someone who is, unrelatedly, conscious himself have any bearing on the scenario?

Searle's argument here is "Not Even Wrong".


I recommend watching the interview with Andres. When we adopt a defensible model of consciousness, it becomes clear that simulations of things are different from the things being simulated, and as such we cannot expect them to be conscious in the same way.

I also once thought Searle was 'not even wrong'. You need to go down the rabbit hole.



