> If the model does not have the capability to transform into a different arbitrary space for every separate sentence, reasoning is not possible.
You're making an authoritative statement about how reasoning works based on conjecture. I'd remind you that this field is under active study.
The "function" that sufficiently approximates the phenomenon of higher-order reasoning may well collapse into a compact representation, one that need not exhaustively span the space of all "sentence representations".
You can convince yourself that it is far from clear your premise is a necessary condition.
Our brains can reason. Are you proposing that their mechanism of action is "transformation into a different arbitrary space for every sensory experience X"?
I mean, maybe? But what if there exists some higher-order space that generalizes well and captures the basis vectors of these seemingly orthogonal spaces? That could equally be the case. Which is why, in truth, we don't yet know, and asserting your view as correct is suspect.
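To make the "shared basis" possibility concrete, here's a toy numpy sketch (all names, dimensions, and the rank-4 setup are illustrative assumptions, not anything claimed in this thread). It builds two families of vectors that look like unrelated representation spaces, then checks how many directions are actually needed to span both:

```python
# Hypothetical sketch: two "task-specific" representation spaces that
# nevertheless share one compact common basis. Dimensions are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

# A hidden low-dimensional basis (rank 4) in a 64-dim ambient space.
shared_basis = rng.standard_normal((64, 4))

# Two seemingly unrelated families of "sentence representations":
# each is a different random mixture of the SAME shared basis.
space_a = shared_basis @ rng.standard_normal((4, 200))
space_b = shared_basis @ rng.standard_normal((4, 200))

# Stack both families and ask how many directions are really needed.
combined = np.hstack([space_a, space_b])
singular_values = np.linalg.svd(combined, compute_uv=False)
effective_rank = int(np.sum(singular_values > 1e-8))

print(effective_rank)  # 4: one compact basis spans both "spaces"
```

The point of the toy: a model needn't carry a separate arbitrary space per sentence if many of those spaces are low-rank mixtures of a common one. Whether real learned representations behave this way is exactly the open question.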
I hope you now see why this claim is an overstatement and is missing some qualifiers.
Well, you need to be able to simulate what happens in the problem domain in order to arrive at a conclusion.
There isn't really any other feasible option.
This is what humans do: we visualize and predict the outcomes of actions within a system. And if our imagination is not enough, we can outsource the reasoning to the outside world by doing experiments or using tools like geometry (drawing in the sand, drawing on paper).
It is impossible to arrive at a conclusion without "doing the work", unless you've seen the result before and can recite it.
Therefore, you need to be able to model the problem domain in order to solve the problem.
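A minimal sketch of what "doing the work" looks like in code (the two-jug puzzle is my illustrative choice, not something from this thread): the answer is reached by simulating every legal action in a model of the domain, not by reciting a memorized result.

```python
# Hypothetical sketch: answer a question by simulating the problem
# domain (breadth-first search over jug states) rather than recall.
from collections import deque

def can_measure(cap_a, cap_b, target):
    """Return True if `target` liters can be measured with two jugs."""
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            return True
        pour_ab = min(a, cap_b - b)  # how much fits when pouring a -> b
        pour_ba = min(b, cap_a - a)  # how much fits when pouring b -> a
        # Every legal action is simulated, not recalled.
        moves = [
            (cap_a, b), (a, cap_b),              # fill a jug
            (0, b), (a, 0),                      # empty a jug
            (a - pour_ab, b + pour_ab),          # pour a into b
            (a + pour_ba, b - pour_ba),          # pour b into a
        ]
        for state in moves:
            if state not in seen:
                seen.add(state)
                queue.append(state)
    return False

print(can_measure(3, 5, 4))  # True: the simulation finds a path
```

The model of the domain (the transition rules) is what makes the conclusion reachable at all; with 2- and 4-liter jugs, the same simulation correctly reports that 3 liters is impossible.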