That's a good point. In my head I was considering stuff like chess, where even though it took a long time to reach superhuman performance on computers, the issue was mainly compute. People basically knew how to do it algorithmically well before then (tree search with pruning).
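(To be concrete about what I mean by "knew how to do it algorithmically": the textbook approach is minimax search with alpha-beta pruning. A minimal sketch below, purely illustrative - `evaluate` and `children` are hypothetical stand-ins for a real engine's position evaluation and move generation.)

```python
# Toy minimax with alpha-beta pruning. The algorithm was well understood
# long before hardware made it superhuman at chess; `evaluate` and
# `children` are hypothetical stand-ins for a real engine's evaluation
# function and move generator.

def alphabeta(node, depth, alpha, beta, maximizing, evaluate, children):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)  # static evaluation at the search horizon
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, evaluate, children))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cut-off: the opponent won't allow this line
                break
        return value
    else:
        value = float("inf")
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, evaluate, children))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cut-off
                break
        return value
```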
I guess the underlying issue with my argument is that we really have no idea how large the search space is for finding AGI, so applying something like Bayes' theorem (which is basically my argument) tells you more about my priors than reality.
That said, we know that human AGI was a result of an optimisation process (natural selection), and we have rudimentary generic optimisers these days (deep neural nets), so you could argue we've narrowed the search space a lot since the days of symbolic/tree search AI.
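(For what I mean by a "generic optimiser", here's a deliberately toy sketch of the common shape - score a candidate, estimate a gradient, adjust, repeat. The finite-difference gradient and quadratic objective are just for illustration; real deep nets obviously use backprop and far richer objectives.)

```python
# Toy "generic optimiser": gradient descent on an arbitrary objective,
# using finite differences. Purely illustrative - the objective below is
# a made-up quadratic, not anything to do with AGI or real deep nets.

def numerical_grad(f, x, eps=1e-6):
    base = f(x)
    grad = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        grad.append((f(bumped) - base) / eps)
    return grad

def minimise(f, x, lr=0.1, steps=200):
    for _ in range(steps):
        g = numerical_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Example: the optimiser knows nothing about the objective's structure,
# yet still finds the minimum of this quadratic bowl at (3, -1).
objective = lambda x: (x[0] - 3) ** 2 + (x[1] + 1) ** 2
print(minimise(objective, [0.0, 0.0]))  # ~[3.0, -1.0]
```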
> we know that human AGI was a result of an optimisation process (natural selection)
I don't think this is obviously correct.
Three things:
1) Many actions we think of as "intelligence" are just short-cuts based on heuristics.
2) While there's probably an argument that problem solving is selected for, it's not clear to me how far this goes at all. There's little evidence that smarter people end up in more powerful positions, for example. It seems like perhaps there's a cut-off beyond which intelligence is just a side effect of the problem-solving ability that is actually useful.
3) Perhaps humans individually aren't (very?) intelligent and it is only a society of humans that are.
(also perhaps human GI? Nothing artificial about it.)
> no idea how large the search space is for finding AGI, so applying something like Bayes' theorem (which is basically my argument) tells you more about my priors than reality.
There are plenty of imaginable forms of intelligence that are often ignored in these conversations. One in common use is "an intelligent footballer", which in sport describes someone who can read a game well. There are other, non-human examples too (dolphins, crows, parrots, etc.).
And then in the world of speculative fiction there's a range of different types of intelligence. Vernor Vinge wrote about intelligences whose motivations people couldn't comprehend (and Vinge is generally credited with the concept of the singularity). More recently, Peter Watts's Blindsight contemplates the separation of intelligence and sentience.
Basically I don't think your expression of Bayes' theorem had nearly enough possibilities in it.
> While there's probably an argument that problem solving is selected for, it's not clear to me how far this goes at all. There's little evidence that smarter people end up in more powerful positions, for example.
Evolution hasn't had enough time to adapt us to our newfangled lifestyle of the last few hundred years, or the last few thousand for that matter. And anyway, in the modern world people are not generally competing on things that affect survival, but rather on cultural factors that affect the number of children we have.
Humans and most (all?) intelligent animals are generalists, which is why we need a big brain and intelligence - to rapidly adapt to a wide variety of ever-changing circumstances. Non-generalists such as herbivores and crocodiles don't need intelligence and therefore don't have it.
The main thing that we need to survive & thrive as generalists - and what evolution has evidently selected for - is the ability to predict, so that we can plan ahead and utilize past experience. Where will the food be, where will the water be in a drought, etc. I think active reasoning (not just LLM-like prediction/recall) would also play a large role in survival, and presumably parts of our brain have evolved specifically to support that, even if the CEO probably got his job based more on height/looks and golf handicap.
I strongly agree that predictive and planning ability is very important - things like agriculture rely on it, and it must have been under selection by that point.
But the point has been made elsewhere that humans developed large brains long (1.5M years?) before agriculture, and for a long time the only apparent benefit was fire and flint tools.
The causal link here isn't well understood - there are other species with large brains that haven't developed these skills. So it's not clear exactly which facets of intelligence are selected for.
> also perhaps human GI? Nothing artificial about it.
Lol, thanks, that's quite funny. I should spend less time on the internet.
> While there's probably an argument that problem solving is selected for, it's not clear to me how far this goes at all.
Yeah, I meant something much more lowbrow, which is that _humans_, with all of our properties (including GI), are a result of natural selection. I'm not claiming GI was selected for specifically, but either way it certainly occurred at least as a side effect. So we know optimisation can work.
> There are plenty of imaginable forms of intelligence that are often ignored during these conversations.
I completely agree! I wish there were more discussion of intelligence in the broad sense in these threads. Even if you insist on sticking to humans, it's pretty clear that something like a company or a government operates very intelligently in its own environment (business or politics), well beyond the influence of its individual constituents.
> Basically I don't think your expression of Bayes' theorem had nearly enough possibilities in it.
Another issue with Bayes in general is that you have a fixed probability space in mind when you use it, right? I can use Bayes to optimise my beliefs against a fixed ontology, but it says nothing about how or when to update the ontology itself.
And no doubt my ontology is lacking when it comes to (A)GI...
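(To make the "fixed ontology" point concrete, a toy sketch of my own - not anything rigorous: Bayes' rule only redistributes probability mass over the hypotheses you enumerated up front, so anything you didn't think to list stays at zero regardless of the evidence.)

```python
# Toy Bayesian update over a fixed hypothesis space. The hypotheses and
# likelihood numbers are made up for illustration; the structural point is
# that any hypothesis not enumerated up front implicitly has prior 0 and
# can never gain mass, however the evidence turns out.

priors = {
    "AGI needs radically new ideas": 0.5,
    "AGI is mostly a matter of scale": 0.5,
    # ...anything not listed here is effectively assigned probability 0.
}

# P(evidence | hypothesis) for some observed evidence, e.g.
# "scaling up models kept improving benchmark scores" (made-up numbers).
likelihoods = {
    "AGI needs radically new ideas": 0.3,
    "AGI is mostly a matter of scale": 0.7,
}

def bayes_update(priors, likelihoods):
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

print(bayes_update(priors, likelihoods))
# Mass shifts between the listed hypotheses, and only those.
```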