
> I've always wondered: why should recursive intelligence improvement be possible?

That it's possible has already been proven [1]. Obviously there are limits to self-improvement, but we ourselves are limited in similar ways, i.e. we are very likely simply finite-state automata.

[1] https://en.wikipedia.org/wiki/G%C3%B6del_machine




> Though theoretically possible, no full implementation has existed before.

And it's called a Goedel Machine because it has to prove, within an initial fixed logical language, that its improved program will be globally optimal. Rice's Theorem says this task is impossible in general (provided we stay within the deterministic, logical setting), and Chaitin's incompleteness theorem implies it gets harder even in specific cases: the longer the successor programs we search through (to incorporate knowledge that makes the machine smarter), the greater the chance of hitting a candidate complex enough that we can't prove its optimality with our existing Goedel Machine program.
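To make the search concrete, here's a toy sketch in Python. Everything here (check_proof, enumerate_proofs, the bounds) is a made-up stand-in, not Schmidhuber's actual construction: the real machine enumerates proofs in a fixed axiomatic system and self-rewrites only once it has *proved* that the rewrite increases expected utility.

    def check_proof(proof, claim):
        # Stub checker for the machine's initial logic. Chaitin's point:
        # for sufficiently complex successor programs, no proof of a
        # claim like this is reachable from the initial axioms.
        return False

    def enumerate_proofs(max_len):
        for k in range(max_len):
            yield f"proof_{k}"

    def goedel_machine(program, max_candidates=1000, max_proof_len=100):
        for n in range(max_candidates):
            rewrite = f"candidate_{n}"  # successor programs, by length
            claim = f"switching to {rewrite} raises expected utility"
            if any(check_proof(p, claim)
                   for p in enumerate_proofs(max_proof_len)):
                program = rewrite       # provably-good self-modification
        return program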

Again, that's within the paradigm of deterministic logics, where provability is semi-decidable. Real brains don't stick to that paradigm (they're natively probabilistic), and thus AGI probably won't stay within that paradigm either.


Exactly, you have to tack on quite a few "provisos" in order to argue it's impossible. There are in fact many ways to circumvent the incompleteness on which every theorem you've quoted depends. Goedel machines have actually been built [1][2], so their existence isn't just theoretical.

[1] http://people.idsia.ch/~juergen/agi2011bas.pdf

[2] http://people.idsia.ch/~juergen/selfreflection.pdf

Edit: to see another way forward, consider alternate/finitistic arithmetics [3], which are neither stronger nor weaker than Peano arithmetic but which do not exhibit incompleteness. Arguably, Goedel's theorems go through because of the inherent infinities introduced by the successor axiom; in an arithmetic like Andrew Boucher's system F (not to be confused with System F in programming language theory), there is no successor axiom, so we cannot assume the set of numbers is infinite, and yet we can enjoy full induction without difficulty (a rough gloss of the contrast is sketched below). There is still a lot of surprising territory to explore in the foundations of mathematics.

[3] https://golem.ph.utexas.edu/category/2011/10/weak_systems_of...
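Roughly, and this is my gloss rather than Boucher's exact axiomatization, the contrast turns on the totality of successor:

    % Peano arithmetic asserts successor is total, which forces an
    % infinite domain:
    \forall x\, \exists y\; (y = S(x))
    % A Boucher-style finitistic system keeps induction but drops this
    % totality claim, so "every number has a successor" (and with it
    % "there are infinitely many numbers") is no longer a theorem.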


> Exactly, you have to tack on quite a few "provisos" in order to argue it's impossible.

I wouldn't say it's strictly impossible; I would say the limit is environmental, not nonexistent. At a certain point, the available signals from the environment are as precise as they're going to get, the mind is as informed as it's going to get, and no further improvement is possible without a whole complicated raft of new experiences.

Kinda like how science often hits the point where we can't actually shift into a new Kuhnian paradigm by pure deduction, but actually have to do expensive, difficult experiments.


Interesting that you say "provided we stay within the deterministic, logical setting", because Paul Christiano's work shows that if we move to the probabilistic domain (the real world), it becomes possible.
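If I recall the Christiano et al. result ("Definability of Truth in Probabilistic Logic") correctly, the trick is that a coherent probability assignment P can satisfy a reflection schema that no deterministic truth predicate can, for every sentence φ and rationals a < b:

    a < \mathbb{P}(\varphi) < b \;\Longrightarrow\;
      \mathbb{P}\bigl(a < \mathbb{P}(\ulcorner \varphi \urcorner) < b\bigr) = 1

The strict inequalities leave an epsilon of slack at the boundary, which is what lets the construction dodge the liar-style diagonalization that sinks the deterministic version.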


Really fascinating stuff. I guess an AGI has to skirt the boundary between the purely rational (i.e. evaluating statements of propositional logic) and the irrational (i.e. intuition).


What do you think intuition is, mechanically?


Probably a hidden Markov process [1]?

[1] https://en.wikipedia.org/wiki/Hidden_Markov_model
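Something in this spirit, say, with made-up numbers: the hidden state is never observed, yet the forward algorithm still scores how plausible a sequence of observations is.

    import numpy as np

    start = np.array([0.6, 0.4])       # P(initial hidden state)
    trans = np.array([[0.7, 0.3],      # P(next state | current state)
                      [0.4, 0.6]])
    emit  = np.array([[0.9, 0.1],      # P(observation | hidden state)
                      [0.2, 0.8]])

    def likelihood(observations):
        # Forward algorithm: P(observations), marginalized over all
        # possible hidden-state paths.
        alpha = start * emit[:, observations[0]]
        for obs in observations[1:]:
            alpha = (alpha @ trans) * emit[:, obs]
        return alpha.sum()

    print(likelihood([0, 1, 1, 0]))    # probability of this sequence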


The point is more that intuition isn't something opposed to rationality; rather, it is a decidedly deterministic logical process that you merely lack conscious insight into.


I hadn't ever thought of it that way. I guess it's a false dichotomy.

I've been reading up on deep belief networks and Restricted Boltzmann machines. Using these tools definitely reflects a deterministic logical process; there just happens to be hidden state involved.
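For instance, a toy RBM update with one step of contrastive divergence (biases omitted, all numbers arbitrary) is a completely mechanical rule even though the hidden units are sampled:

    import numpy as np

    rng = np.random.default_rng(0)
    n_visible, n_hidden, lr = 6, 3, 0.1
    W = rng.normal(0, 0.1, (n_visible, n_hidden))

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step(v0):
        # CD-1: sample hidden units, reconstruct visibles, re-infer
        # hiddens, and nudge W toward the data statistics.
        ph0 = sigmoid(v0 @ W)                            # P(h=1 | v0)
        h0 = (rng.random(n_hidden) < ph0).astype(float)
        pv1 = sigmoid(h0 @ W.T)                          # P(v=1 | h0)
        v1 = (rng.random(n_visible) < pv1).astype(float)
        ph1 = sigmoid(v1 @ W)                            # P(h=1 | v1)
        return lr * (np.outer(v0, ph0) - np.outer(v1, ph1))

    v = rng.integers(0, 2, n_visible).astype(float)
    W += cd1_step(v)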



