"Generalize" does need to be made more specific in order to make sense. In current ML work, "generalizability" means something different from what it traditionally meant in AI work.
One way to look at generalizability is this: problem domains typically come with sets of axioms. Generalizability is the ability of a solution approach to work across different domains whose axiom sets don't fully overlap. The wider the difference between the two domains' axiom sets, the more difficult (and the more impressive) the generalization.
Solving two problems within the same domain, where the axiom sets necessarily overlap 100%, is not generalization at all.
The reason the axioms underlying a domain matter is that they (in part) determine which heuristics work and how they should be applied. And that's the sort of generalizability missing from current work: some systems can make pre-programmed choices about which heuristics to apply depending on the problem at hand (driverless cars are doing this now). This is the "bag of tricks" approach. But the tricks don't typically morph in how they are applied, or in the end they're trying to accomplish when they're applied.
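To make the distinction concrete, here's a toy sketch in Python. Everything in it (the `Domain` class, the axiom strings, the `TRICKS` table) is invented for illustration and isn't drawn from any real system; axiom overlap is modeled as plain set similarity, and the "bag of tricks" is a fixed lookup that never changes how a heuristic is applied or what it is applied for.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Domain:
    name: str
    axioms: frozenset  # the assumptions the domain is built on

def axiom_overlap(a: Domain, b: Domain) -> float:
    """Jaccard similarity of two axiom sets: 1.0 means identical assumptions
    (no generalization involved); values near 0 mean the domains share almost
    nothing, so working across them is the hard, impressive kind of transfer."""
    union = a.axioms | b.axioms
    return len(a.axioms & b.axioms) / len(union) if union else 1.0

# "Bag of tricks": a fixed, pre-programmed mapping from domain to heuristic.
# The choice of trick varies with the domain, but neither how the trick is
# applied nor the goal it serves ever changes.
TRICKS = {
    "highway_driving": "keep_lane_and_following_distance",
    "parking_lot": "slow_creep_and_scan",
}

def pick_heuristic(domain: Domain) -> str:
    # On any domain outside the pre-built list, it fails rather than adapting.
    return TRICKS.get(domain.name, "no_trick_available")

if __name__ == "__main__":
    highway = Domain("highway_driving",
                     frozenset({"lanes_are_marked", "traffic_flows_one_way", "speeds_are_high"}))
    parking = Domain("parking_lot",
                     frozenset({"speeds_are_low", "pedestrians_everywhere"}))
    print(f"axiom overlap: {axiom_overlap(highway, parking):.2f}")  # low overlap -> hard generalization
    print(pick_heuristic(highway), "/", pick_heuristic(parking))
```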
I have pondered this and have decided what my standards are. I realize I don't have the authority to set those standards for everyone.
GAI, to me, is when a machine can be given a problem and then, without prompting, decide which data to consume in order to learn how to solve it. It could be told to optimize an automotive design for a 5% efficiency increase without any loss of safety features and while keeping performance the same, and then go out and figure out what data it needs to learn from so that it can solve that task. It would assemble and process that data and then come up with the answer, which might simply be that the goal is impossible with current tech, along with what would be needed and how to get there.
That's rather verbose, and I'm absolutely not the person who gets to define it. But when someone says AI, that is how I think of it, and even more so when they say general AI.
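If I had to sketch that loop in code, it would look something like the following. Every name in it (`Goal`, `identify_needed_data`, `acquire`, and the rest) is a placeholder I'm making up to show the shape of the loop; nothing like this exists today, and the stubs stand in for what would be the actual hard parts.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    objective: str           # e.g. "raise efficiency by 5%"
    constraints: list[str]   # e.g. ["no loss of safety features", "same performance"]

# Placeholder stubs so the sketch runs; each of these is the actual hard part.
def identify_needed_data(goal):
    return ["materials data", "crash-test results", "drivetrain specs"]

def acquire(needed):
    return {name: [] for name in needed}

def learn(dataset):
    return {"knowledge": dataset}

def attempt_solution(model, goal):
    return None  # pretend no solution was found

def explain_gap(model, goal):
    return f"Not achievable yet for '{goal.objective}'; here is what is missing and how to get there..."

def solve(goal: Goal) -> str:
    """Hypothetical GAI loop: the system, not the programmer, decides what data
    it needs, goes and gets it, learns from it, and only then answers."""
    needed = identify_needed_data(goal)   # the system chooses what to learn from
    dataset = acquire(needed)             # assembles that data on its own
    model = learn(dataset)                # processes it into usable knowledge
    answer = attempt_solution(model, goal)
    if answer is None:
        # "Impossible with current tech, and here's what's needed" counts as an answer.
        return explain_gap(model, goal)
    return answer

if __name__ == "__main__":
    g = Goal("optimize an automotive design for a 5% efficiency increase",
             ["no loss of safety features", "performance stays the same"])
    print(solve(g))
```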