Hacker News

That makes sense given the analogy to an LLM, which doesn't 'think' via the individual tokens it produces, or even their sum, but in between, in the hidden layers. The output - the actualization of the token - is simply the form of communication used to convey the 'thought'.

I guess you could say it's an imperfect analogy, since the LLM must feed its generated tokens back into itself in order to 'think' over time beyond a single token. But I would argue that's simply because we enforce that communication method.
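The feedback loop described above can be sketched in a few lines. This is a deliberately toy, hypothetical model (the functions `toy_model` and `generate` and the tiny vocabulary are illustrative inventions, not a real LLM): the point is only that the internal state is richer than the single token emitted, yet only that token is fed back in at the next step.

```python
# Toy sketch of an autoregressive loop: the model's internal state
# (`hidden`) carries more information than the one token it emits,
# but only the emitted token survives into the next step's input.
# Everything here is an illustrative stand-in, not a real LLM.

def toy_model(tokens):
    """Map a token sequence to a (hidden_state, logits) pair.

    The hidden state is a richer summary than any single output token:
    here it tracks both the running sum and the last token seen.
    """
    hidden = (sum(tokens), tokens[-1])  # the internal 'thought'
    # Logits over a 4-token vocabulary, derived from the hidden state.
    logits = [-abs((hidden[0] + hidden[1]) % 4 - t) for t in range(4)]
    return hidden, logits

def generate(prompt, steps):
    """Greedy autoregressive decoding: append the argmax token each step."""
    tokens = list(prompt)
    for _ in range(steps):
        hidden, logits = toy_model(tokens)
        next_token = max(range(len(logits)), key=logits.__getitem__)
        tokens.append(next_token)  # only the token is fed back;
                                   # `hidden` is discarded each step
    return tokens

print(generate([1, 2], 3))
```

Note that `hidden` is thrown away at every iteration: whatever 'thinking' the model does between layers must be squeezed through the single sampled token, which is the enforced-communication-method point above.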

One does not need language to hold the concept that objects fall to the ground when let go. Babies learn this without language. They have no words for the particular concepts such as "object", "fall", or "ground", but the concepts are learned all the same.

For thought, all there really is are concepts, or Forms in Platonic terms: in our case, learned patterns in the world which correspond to particular sense perceptions. Imagination, the latent space between the forms, is our initial (and, I'd argue, only) mode of thinking. These forms, encoding both the logical and the sensory, which are inseparably bound, can be strung together to form thought. They obey a kind of logic, affording the capacity to extract meaning from the world.

It is an embodied logic, not boolean algebra. The logic is grounded in the ground, the earth underneath your feet. It is grounded in the sun, in its heat and light. It is grounded in the logos, the intelligible unfolding of the universe. The intelligibility is afforded by its own nature: it is built up out of real patterns, patterns all the way down. But the patterns must also appear in our sense perceptions in order to be 'seen' and thus known.

As such, logic of this sort is based on the learned patterns of how the world appears to us, which make it intelligible and afford our ability to live within it. Sensory experience is mapped into grounded logic and vice versa, so that we experience the unfolding of the logic in imagination, and the affective state changes it produces are fed back into the logic. Emotions, then, seem necessary in order to 'react' to sense perceptions, so that the logic is grounded in caring about what happens, as a mother cares for a child. Otherwise there is only appearance, which has no qualia in and of itself, and the patterns discoverable in the world have no ground.

Determining whether appearances require this ground is probably what will determine if we can have AGI or not. We'd like to think computation alone will get us there, or even embodied cognition, but without the transcendentals, its dimensionality of knowing will always exclude this ground. It cannot know which patterns are most relevant or salient, so it cannot perform relevance realization and overcome combinatorial explosion. So AI cannot be a temporal embodied agent, since the logos of the world is incomprehensible except through itself, of which we Beings are made. We rely on lower-order logic, such as math, for our AI; it is how we make sense of the real patterns we see in the world, but it is an abstraction over the thing itself. So a machine is confined by boolean algebra, while we are confined by the logos itself. Sounds nice, anyhow.



