The answer to the question of which technique will lead to AI depends on who you ask.
Geoffrey Hinton and Jürgen Schmidhuber are convinced that neural nets trained with backprop will likely lead to AI with minimal additional fixed-function structure. Schmidhuber's intuition is that recurrent neural networks (RNNs) are capable of universal computation, so given sufficient computational resources the RNN can learn any task, including generalization, goal and action selection and everything else that we associate with intelligence. Hinton's intuition is that RNNs essentially accumulate representations in their hidden state vectors that are very similar to human thoughts ("thought vectors"). Hinton thinks that thought vectors will naturally lead to the kind of reasoning that humans are capable of. In his view the human brain is basically a reinforcement-modulated recurrent network of stacks of autoencoders (needed for unsupervised representation learning).
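As a concrete aside, this is roughly what "accumulating representations in a hidden state" looks like in code. The sketch below is just a random, untrained vanilla RNN cell in NumPy; the dimensions and weights are made up for illustration and are not anyone's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 8-dimensional inputs, 16-dimensional hidden state.
input_dim, hidden_dim = 8, 16
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden -> hidden (recurrence)
b_h = np.zeros(hidden_dim)

def rnn_step(h_prev, x):
    """One vanilla RNN update: the new hidden state mixes the current
    input with everything summarized in the previous hidden state."""
    return np.tanh(W_xh @ x + W_hh @ h_prev + b_h)

# Run a short input sequence through the cell; the final hidden state is the
# "thought vector"-style summary of everything the network has seen so far.
h = np.zeros(hidden_dim)
for x in rng.normal(size=(5, input_dim)):
    h = rnn_step(h, x)
print(h.shape)  # (16,)
```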
Pedro Domingos sees candidate universal learning algorithms that may lead to AI in all major schools of AI: (1) Symbolists have inverse deduction (finding a general rule for a set of observations), (2) Connectionists have backprop and RNNs (see above), (3) Evolutionists have genetic programming (improvement of programs via selection, mutation and cross-over), (4) Bayesians have incremental integration of evidence using Bayes' theorem, e.g. dynamic Bayes networks, and finally (5) Analogizers have kernel machines and algorithms to compare things and create new concepts.
Domingos's hypothesis is basically that only a combination of several of these approaches will lead to AI.
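To make (4) a bit more concrete, here is a minimal sketch of incremental evidence integration via Bayes' theorem; the hypothesis and the likelihood numbers are invented purely for illustration:

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(H | evidence) given the prior P(H) and the two likelihoods."""
    numerator = likelihood_if_true * prior
    evidence = numerator + likelihood_if_false * (1.0 - prior)
    return numerator / evidence

# Start from a 50/50 prior on some hypothesis H and fold in three observations
# one at a time, each of which is twice as likely if H is true than if it is false.
p = 0.5
for _ in range(3):
    p = bayes_update(p, likelihood_if_true=0.8, likelihood_if_false=0.4)
    print(round(p, 3))  # 0.667, 0.8, 0.889
```

Dynamic Bayes networks do essentially this, just over many variables with conditional dependencies and a time dimension.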
It's also possible that all these approaches (1-5) are different viewpoints on some more fundamental, underlying concept/description of intelligence and will converge at some point (as happened with Turing machines and the lambda calculus). This would be very interesting.
Is there a proof that generalization or goal and action selection or other things associated with intelligence are actually computable?
If the Strong Free Will Theorem (SFWT) is right, there are distinct limits on the ability of computational devices to simulate our physics.
For one, anything a computing machine does "on its own" will be a function of the past history of the universe (deterministic), whereas the SFWT says that elementary particles are free to act without regard to the entirety of that history.
Edit: even a one-word reply conveys so much more than an anonymous downmod.
I didn't downvote you, but I suppose the criticism would be that our machines are built from the same elementary particles that a brain is, so why would our machines suddenly be so deterministic as to disallow strong AI while a brain is not?
Furthermore, an AI does not compute a physics model any more than a brain does, so the criticism does not apply.
But the theory of the machines is entirely deterministic. Any non-deterministic behavior of a machine would be considered an error under any of the approaches being discussed here.
The point is that the universe can do things that functions cannot. If your only tool is functions (the only thing algorithms can compute), then you will necessarily be unable to handle all possibilities (at which point, I question your intelligence).
Functions routinely use random number generators. Pretty much all AI/Machine Learning techniques use non-determinism as part of their design.
But even so, it's confusing to phrase intelligence in terms of non-determinism. It's easy to come up with a non-deterministic answer to an arbitrary question. It's hard to come up with a correct answer to an arbitrary question. If unpredictability is a component of sound reasoning, it's because we humans are so bad at reasoning.
Pseudo-random generators, for sure, which are entirely deterministic. It would not even matter if you had a source of truly random numbers: given the same sequence of numbers, an algorithm will return the same result. That is what it means to be a function.
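To illustrate the point, here is a toy Python sketch: with the seed fixed, a "random" computation is an ordinary function of its inputs.

```python
import random

def noisy_sum(values, seed):
    """Add 'random' noise to each value; with a fixed seed the whole
    computation is deterministic."""
    rng = random.Random(seed)
    return sum(v + rng.gauss(0, 1) for v in values)

a = noisy_sum([1.0, 2.0, 3.0], seed=42)
b = noisy_sum([1.0, 2.0, 3.0], seed=42)
assert a == b  # same seed -> same sequence of numbers -> same result
```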
The problem is not that "my answer must be 'unpredictable'", but that "the actual answer may not be computable" (and so no algorithm may ever derive it).
I did not downvote (and you're clear now), but your post is not a relevant argument. The determinism that the SFWT argues against is that of certain hidden-variable theories of quantum mechanics. It states that if humans are free to choose particular configurations for an experiment measuring this or that spin, then, bounded by relativity and experimentally verified aspects of quantum mechanics, the behavior of the particles cannot depend on the past history of the universe. The main characters are the particles; the people are incidental.
> "Our argument combines the well-known consequence of relativity theory, that the time order of space-like separated events is not absolute, with the EPR paradox discovered by Einstein, Podolsky, and Rosen in 1935, and the Kochen-Specker Paradox of 1967"
So as far as I can tell, it takes for granted the humans' ability to choose the configurations freely, which, though suspect in and of itself, doesn't matter so much to their argument, since it's not really an argument for free will; it's a discussion of how inherent non-determinism is to quantum mechanics.
> "To be precise, we mean that the choice an experimenter makes is not a function of the past."
> "We have supposed that the experimenters’ choices of directions from the Peres configuration are totally free and independent."
> "It is the experimenters’ free will that allows the free and independent choices of x, y, z, and w ."
It is actually, if anything, in favor of no distinction between humans and computers (more precisely, it is not dependent on humans, only on a "free chooser"), as they argue that though the humans can be replaced by pseudo-random number generators, the generators need to be chosen by something with "free choice" so as to escape objections by pedants that the PRNG's path was set at the beginning of time.
> The humans who choose x, y, z, and w may of course be replaced by a computer program containing a pseudo-random number generator.
> "However, as we remark in [1], free will would still be needed to choose the random number generator, since a determined determinist could maintain that this choice was fixed from the dawn of time."
There is nothing whatsoever in the paper that stops an AI from having whatever ability to choose freely humans have. The way you're using determinism is more akin to precision and reliability: the human brain has tolerances, but it too requires some amount of reliability to function correctly, even if not as much as computers do. In performing its tasks, though the brain is tolerant of noise and stochasticity, it still requires that those tasks happen in a very specific way. Besides, the paper is not an argument for randomness or stochasticity.
> "In the present state of knowledge, it is certainly beyond our capabilities to understand the connection between the free decisions of particles and humans, but the free will of neither of these is accounted for by mere randomness."
If an AI is an algorithm, then it will be unable to produce "answers" that match what we observe. That is the relevance. One would need to show a counterexample to the theorem to ignore it.
> There is nothing whatsoever in the paper that stops an AI from having whatever ability to choose freely humans have.
There is if an AI is dependent on deterministic methods. I agree that AI is not a well-defined term, but all proposals I have seen are algorithms, which are entirely deterministic. This is entirely at odds with the human conception of free choice. An algorithm will always produce the same choice given the same input. Any other behavior is an error.
The SFWT says that observations can be made that cannot be replicated through deterministic means, which would seem (I agree there is a very slight leap in logic here) to rule out any AI from ever being able to understand at least some aspects of our reality (and also reveals them to be simple, logical machines, with no choice).
Algorithms are not by definition deterministic, which seems to be one of your key points. Probabilistic algorithms exist. They may or may not be used in machine learning currently, but they do exist.
Can you provide an example? All probabilistic algorithms I have seen rely on a pseudo-random generator or on an external source of numbers. I have argued elsewhere in these comments that both cases may be considered deterministic.
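For what it's worth, here is the kind of probabilistic algorithm usually meant, a toy Monte Carlo estimate of pi in Python; note that once the generator is seeded, it is again an ordinary deterministic function of that seed, which is essentially the argument above:

```python
import random

def estimate_pi(n_samples, seed=None):
    """Monte Carlo estimate of pi: sample points in the unit square and
    count how many land inside the quarter circle."""
    rng = random.Random(seed)
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(n_samples)
    )
    return 4.0 * inside / n_samples

# Unseeded, two runs will generally differ; with the same seed they match
# exactly, i.e. the "probabilistic" algorithm is a function of its seed.
print(estimate_pi(100_000))
print(estimate_pi(100_000, seed=7) == estimate_pi(100_000, seed=7))  # True
```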