Basically it means having "agency." What most people are looking for when they think of "intelligence" is not the ability to master specific tasks, but the ability to choose which tasks to perform of one's own "free will," ultimately leading to behavior that humans find novel and feel they can connect with.
What makes you think a mouse (71M neurons) has agency/free will? Does a cockroach (1M neurons), a fruit fly (250K), or a jellyfish (5K) have agency? I don't think we're gonna get far by relying on a phenomenon that we can't clearly define or even (externally) observe.
Indeed. There are many, many examples suggesting that human beings lack agency as well. Why do addiction, obesity, crimes of passion, etc. exist?
Without the baggage of the limbic system and dopamine-seeking behaviors, it's quite easy to argue that an artificial intelligence is potentially capable of even greater degrees of agency than humans.
But what does it mean in the context of a mouse? The mouse isn't using its free will to decide whether to become a computer programmer or a doctor; it's responding to stimuli and environment. If an AI is trained to mimic the responses of a mouse, is that intelligent?
Agency in the context of a machine seems purposefully defined to be impossible to reach: its decisions are always somehow tied back to how it was programmed to react.
The mouse reacts to stimuli and environment in a qualitatively different way than our programs do. It does continuous and essentially free-form learning of the environment around it, and engages in what looks to us like dynamic formulation and achievement of goals. In the "AI" we have today, the learning is very shallow (despite the "deep learning" buzzword), it's usually neither free-form nor continuous, and the goals are set in stone.
The goals for a mouse are also set in stone and simple: maximize brain dopamine. Almost everything a mouse does can be described in terms of maximizing that 'reward', and that can lead to a host of emergent behaviors.
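To make that concrete, here's a minimal sketch of what "one fixed scalar reward driving all the behavior" looks like in code: a toy tabular Q-learning agent in a made-up gridworld. The grid, the "food" reward, and all the constants are invented for illustration, not anything a real mouse (or OpenAI Five) actually uses.

    import random

    # Toy illustration: one fixed scalar "reward" (a stand-in for dopamine),
    # and a table-based Q-learning agent whose varied behavior all falls out
    # of maximizing that single number.

    GRID = 5                       # 5x5 world
    FOOD = (4, 4)                  # the only rewarding location
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

    def step(state, action):
        x, y = state
        dx, dy = action
        nx = min(max(x + dx, 0), GRID - 1)
        ny = min(max(y + dy, 0), GRID - 1)
        new_state = (nx, ny)
        reward = 1.0 if new_state == FOOD else 0.0   # the single fixed "drive"
        return new_state, reward

    Q = {}                         # (state, action) -> estimated value
    alpha, gamma, eps = 0.1, 0.9, 0.1

    for episode in range(500):
        state = (0, 0)
        for _ in range(50):
            # epsilon-greedy: mostly exploit what's been learned, sometimes explore
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
            new_state, reward = step(state, action)
            best_next = max(Q.get((new_state, a), 0.0) for a in ACTIONS)
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = new_state
            if state == FOOD:
                break

The only objective ever given is that single reward = 1.0 at the food square; the routes the agent settles on are purely what falls out of maximizing it, which is the sense in which a fixed goal can still produce varied-looking behavior.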
I don't see much difference between that and OpenAI's engine: https://openai.com/five/. Watch some of those games and you'll definitely see the same dynamic formulation and complex decision-making, none of which was directly programmed.
You are applying what you know about the learning process in current AI. But if you simply observed the behavior of a real mouse and an AI mouse, especially if the latter was trained to mimic the behavior of the former, could you tell that they react in a completely different way?
If your AI mouse behaved like the real mouse for a couple of hours of observation, I'd conclude you've done a good job.
I'm not trying to make a Chinese room argument (which I don't buy), implying there's some hidden "spark" needed. I'm just saying that currently existing "AI" programs are pretty far from a mouse brain, both in individual capabilities and in the way they're deployed together (i.e. they're not). For instance, deep learning is to mouse brains what a sensor/DSP stack is to a processor. We seem to be making progress in higher-level processing of inputs, but what's lacking is the "meat" that would turn it into a set of behaviors giving rise to a thinking entity.
I put agency in quotes because it's really a convincing illusion of agency that we're going for. In the end, I agree with those making the point that even we don't have free will.
Ultimately it just has to be able to convince humans that "wow, there's an actual thinking and learning 'being' in there."
Well, in these definitions of intelligence, what one often ends up with is some combination of "deal robustly with its environment" and a bunch of categories defined in terms of each other. That's not to say categories/qualities/terms like "agency", "free will", "feel they can connect with", and "find novel" are unimportant. It's just that people using these terms mostly couldn't give mathematically/computationally exact definitions of them. And that matters for any complete modeling of these things.