> it's not that impressive. They use the agent's internal states (LSTM cells, attention outputs, etc.) to predict whether it is early in the episode, or whether the agent is holding an object.
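(For anyone curious what that probing amounts to in practice, here's a minimal sketch: train a linear classifier on logged hidden states against per-step labels, and check whether it beats chance. Everything below — `hidden_states`, `is_holding`, the dimensions — is a hypothetical stand-in, not the paper's actual setup.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-ins for logged data: (n_steps, d) LSTM states and a
# per-step boolean label ("is the agent holding an object?").
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(5000, 256))
is_holding = rng.integers(0, 2, size=5000)

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, is_holding, test_size=0.2, random_state=0)

# A linear probe: if it beats chance on held-out steps, the
# internal state linearly encodes the labeled information.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {probe.score(X_test, y_test):.2f}")
```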
That seems like a decent definition of awareness to me. The agent has learned to encode information about time and its body in its internal state, which then influences its decisions. How else would you define awareness? Qualia or something?
I think it would be perfectly reasonable to describe any RNN as being "aware" of information that it learned and then used to make a decision.
"Possess awareness" seems like loaded language though, evoking consciousness. In that direction I'd just quote Dijkstra: "The question of whether a computer can think is no more interesting than the question of whether a submarine can swim."