Not the GP, but some examples are: (1) how do you get a computer to have a human-like train of thought? (2) how do you get a computer to acquire new concepts (e.g. "debt", "global warming", "weed") and then reason about them correctly, without any reprogramming? (3) automated acquisition of common sense through experience (e.g. "if you pour water on the floor you will get a puddle") (4) deep natural language understanding (i.e. how do you make a chatbot that really understands, and isn't just a thin illusion of understanding).
(4) is an interesting question. Unfortunately it's much harder to answer than it is to ask. For instance, do people really understand, rather than just providing a thin illusion of understanding? What does it actually mean to understand something? Can you make a test that distinguishes arbitrary systems which truly understand from those which merely provide a thin illusion of understanding?
This is a real problem for physics teachers - you want to find out if the students understand a concept:
Ask them to state it - they recite the memorized textbook definition.
Ask them to apply it to a specified problem - they scan the problem for variable values and search a formula list for one that contains those variables (a purely mechanical strategy; see the sketch below).
Ask them to explain why their answer is correct - they produce a grammatically correct explanation by plucking phrases from the problem description and linking them to the answer with "so" or "because".
It feels like they don't understand, but they can actually get a long way (i.e. not fail) like that - it's certainly human-level understanding, even if it's not what the smartest of us are capable of.
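To make the point concrete, here is a toy sketch (in Python, with an invented formula list and problem) of that "scan for variables, look up a matching formula" strategy - it gets the right number with no model of the physics at all:

    # A toy illustration of the mechanical strategy described above:
    # no understanding of the physics, just matching known variable
    # names against a formula list. Formulas and values are made up.
    formulas = {
        ("v", "u", "a", "t"): lambda u, a, t: u + a * t,               # v = u + a*t
        ("s", "u", "t", "a"): lambda u, t, a: u * t + 0.5 * a * t**2,  # s = u*t + a*t^2/2
        ("F", "m", "a"):      lambda m, a: m * a,                      # F = m*a
    }

    def plug_and_chug(knowns, want):
        """Pick the first formula whose unknown is `want` and whose other
        variables are all present in `knowns`, then plug the numbers in."""
        for variables, fn in formulas.items():
            unknown, *inputs = variables
            if unknown == want and all(v in knowns for v in inputs):
                return fn(*(knowns[v] for v in inputs))
        return None

    # "A car starts at 3 m/s and accelerates at 2 m/s^2 for 5 s. Find v."
    print(plug_and_chug({"u": 3, "a": 2, "t": 5}, want="v"))  # 13.0

Whether a student (or a program) doing this "understands" kinematics is exactly the question at issue.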
Personally, I think understanding is a continuum, from special-case memorization at the bottom up to being able to link a concept with many other concepts at the top. There's no bright line between "truly understands" and "illusion".