
> it's not that impressive. They use the agent's internal states (LSTM cells, attention outputs, etc.) to predict whether it is early in the episode, or whether the agent is holding an object.

That seems like a decent definition of awareness to me. The agent has learned to encode information about time and its body in its internal state, which then influences its decisions. How else would you define awareness? Qualia or something?
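For concreteness, the probing setup described in the quote can be sketched as a linear probe: record the agent's hidden states, then train a simple classifier to predict a property like "is holding an object" from them. This is a minimal illustration with synthetic data, not the paper's actual method; all names and dimensions here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical recorded data: 1000 timesteps of a 32-dim LSTM hidden state.
# We synthesize states in which one hidden unit weakly encodes "is holding".
n, d = 1000, 32
holding = rng.integers(0, 2, size=n)    # binary label per timestep
states = rng.normal(size=(n, d))
states[:, 0] += 2.0 * holding           # label leaks into hidden unit 0

# Linear probe: logistic regression trained by plain gradient descent.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(states @ w + b)))  # predicted P(holding)
    w -= 0.5 * (states.T @ (p - holding)) / n
    b -= 0.5 * np.mean(p - holding)

# If the probe beats chance (0.5), the state demonstrably encodes the label.
p = 1.0 / (1.0 + np.exp(-(states @ w + b)))
acc = np.mean((p > 0.5) == holding)
print(f"probe accuracy: {acc:.2f}")
```

Above-chance probe accuracy is exactly the kind of evidence the quoted claim rests on: the information is linearly decodable from the agent's internal state, whether or not one wants to call that "awareness".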




By that definition wouldn't a regular RNN or LSTM also possess awareness?


I think it would be perfectly reasonable to describe any RNN as being "aware" of information that it learned and then used to make a decision.

"Possess awareness" seems like loaded language though, evoking consciousness. In that direction I'd just quote Dijkstra: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."


Ooh, that’s a great quote.

I’d say that it’s no less interesting, either.



