I like the different P(x) breakdowns, although I think there's something inconsistent in how they are used: a high value of P(correct) in Creative means "screwing up is impossible," but in Planning it means "screwing up is unacceptable."
Perhaps Planning was supposed to have a different term, like Cost(correct)?
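To make the distinction concrete, here's a minimal sketch (the function and the numbers are mine, not from the article): expected loss factors into P(error) × Cost(error), so "screwing up is impossible" and "screwing up is unacceptable" become two separate knobs instead of one overloaded P(correct).

```python
# Hypothetical sketch: separate the two axes the article seems to
# overload into P(correct) -- how likely an error is vs. what it costs.

def expected_loss(p_error: float, cost_of_error: float) -> float:
    """Expected loss of using an AI on a task: P(error) * Cost(error)."""
    return p_error * cost_of_error

# Creative work: errors are common but cheap (a bad draft gets discarded).
creative = expected_loss(p_error=0.4, cost_of_error=1.0)

# Planning: errors may be rarer but are expensive (a wrong plan gets executed).
planning = expected_loss(p_error=0.05, cost_of_error=100.0)

print(f"creative: {creative:.2f}, planning: {planning:.2f}")
# creative: 0.40, planning: 5.00 -- the same P(correct) tells very
# different stories once cost enters the picture.
```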
> Current AI models have no inner thoughts, ideas, experiences or emotions, at least not in a way I recognize them
I'm conscious of repeating myself, but one of the biggest AI illusions being cast right now involves confusion between character and author. We humans are presented a story with a clever relatable robot character, and are encouraged to impute all of those inferred fictional qualities onto the real-world algorithm.
> AI is bad, yes, but bad AI is still useful. Therefore, bad AI is here to stay, and we must deal with it.
IDK having lived through what computers could do in the 90s, current capabilities don't seem at all bad to me...
But I get what he's saying. It's not so much about whether it is good or bad, but how useful it is. By looking at my ever-growing AI bills (ChatGPT pro, Anthropic API costs from constantly testing, developing, and using RA.Aid and running other agents, etc.,) AI most definitely is useful, at least for me.
Darwinian evolution, or should I say Turingian evolution, running at roughly a millionfold evolution rate, will create fields of dead, superseded AI dross: silicon dead leaves, as it were.
This race is geometric, and the waysides are ready.