I can just say that Deutsch is another victim of what being good at theoretical physics tends to do to one's mind. The amount of intellectual hubris and arrogance we can develop is staggering. Sheldon Cooper is not really a parody; it's what many theoretical physicists actually think (but are too socially adapted to say out loud), as in: "Penny - I'm a physicist. I have a working knowledge of the entire universe and everything it contains."

Deutsch may think he knows things about "the mind", but he shows in this article that, like many who like to speculate wildly about the mind, he's basing his claims mostly on armchair reasoning about how he feels about certain phenomena. The things he says about behaviourism indicate that he doesn't really know what he's talking about. Or he does, but is extremely bad at communicating his ideas.

It would just take too long to comment on each and every empty claim this article makes. It's mostly a rehash of the same old boring arguments about how special we are, this time because of our "creativity". Yawn, there is nothing special about "creativity"; it's an active area of research, has been for decades, and it's actually very boring. He does make a few good points, like the one that quantum phenomena do not give rise to mental processes, but that is trivial in the sense that it's obvious to anyone with some knowledge of quantum mechanics on one hand, and of what neurons are and how they work on the other.
This is perhaps also why physicists can be dangerous in finance. They have all the math skills, but they can also have a little too much faith in those skills.
EDIT: I linked to XKCD here... cuz never miss an opportunity for xkcd! Also, I just finished reading the article, so I'll add my comments on the content.
The parent post (for this comment) contains a rant about physicists but has no specific rebuttal of the article.
David Deutsch's arguments seem very sensible to me, in a completely non-mystical sort of way. I feel he asks questions pointing to the heart of AI. Regarding moving goalposts and stuff, you might like to read Freeman Dyson's (excellent and insightful) article on 'Birds and frogs' where he comments on such things. Deutsch is approaching the problem like a bird, typical AI researchers are approaching the problem in a frog-like manner.
And oh, to state possible biases on record: I'm a physicist :-)
I actually catch myself sometimes saying or thinking the tooltip text. :)
But what Deutsch writes here takes a somewhat opposite approach to the problem of how to be annoying--he's trying to make a non-physics research area seem fundamentally more complicated than it likely is. I'm willing to say it borders on mysticism, and the only reason I can think of for such behaviour in an otherwise fine scientist is a strong emotional attachment to some romantic idea of what it is to be a human being. There is zero (AFAIK) theoretical or empirical evidence for his grand claims about creativity, and he's playing the old game in AI: goalpost moving. Whenever AI does something that was previously thought to be achievable only by humans, the critics immediately jump in and say: "Ah, but that's not really intelligent, is it?"
> The parent post (for this comment) contains a rant about physicists but has no specific rebuttal of the article.
It's not a rant about physicists (I am one, for the record), it's just what I've noticed--whenever there's an otherwise good scientist giving misguided proclamations on how some research field other than his should proceed, it's always the physicist. Maybe I'm biased in that I take extra notice of what physicists say, but anyway, it's anecdotal and not the main point. What matters is: whoever you are, if you're going to seriously opine on a subject, beyond general methodological remarks (which are extremely important), you'd better have in-depth knowledge of the subject matter. Example claim: "behaviourism is abandoned by mainstream psychology". I'll avoid discussing what he means by "mainstream" here (there could be all sorts of ugly dragons lurking behind this statement), and just say that this is false. The term behaviourism is not used anymore (it's not useful anymore to use it), but a great deal of behavioural theory and results is still alive and well in the psychology of cognition.
His central point, that "creativity" is the key to AGI, is meaningless the way he talks about it. Creative thinking is not a mysterious process; there is a lot of empirical work dealing with it. He further postulates that we need philosophical (of all things) breakthroughs. I can't help but think this is some sort of Chinese room style argument all over again. Having stated, multiple times, that philosophical/epistemological breakthroughs are needed, he then speculates that maybe a better understanding of the genetic differences between humans and other higher primates holds the key. It's all over the place. It's hard to give specific rebuttals to extremely vague ideas.
Thanks for the pointer to Dyson's article, found it and stashed it for later reading.
> And oh, to state possible biases on record: I'm a physicist :-)
Have you read Chesterton's The Man Who Was Thursday? This discussion is starting to feel like it, just do a %s/anarchists/physicists/g :)
> He does make a few good points, like the one that quantum phenomena do not give rise to mental processes, but that is trivial in the sense it's obvious to anyone with some knowledge of quantum mechanics on one hand, and what neurons are and how they work on the other.
This didn't prevent Penrose (surprise, another physicist) from establishing his little cult with Quantum Theory of Mind.
Right, I remembered him after I posted. I don't know what to think of that. He obviously knows quantum theory. He can open any undergraduate textbook on neurophysiology and read all about how neurons work. So is he merely crazy, filling some emotional void religion might have filled in the old days, or something else? I don't know.
Consciousness and free will seem mysterious, so he posits the only mystery he knows in the universe at that scale. Predictable human bias not unique to physicists.
I am not sure we positively know every single detail there is to know about neurons, but we know an excruciating amount of detail. At the level of a single neuron, its physiology and its connections to neighbouring neurons, we know enough to say that it has absolutely nothing to do with QM. There are no mysterious phenomena here, nothing outside biochemistry and physiology.
Could not agree more. He presents a lot of statements as if they were proven facts, when they are really speculation and opinion, which is the opposite of an objective scientific argument. And he seems to fundamentally misunderstand machine learning.
"only a tiny component of thinking is about prediction at all"
Here I really struggle to follow his argument. It seems he is saying that thinking is not prediction, but how does he know that? As I see it, we don't know what thinking is, so is he then saying that thinking does not feel like prediction, and therefore it cannot be prediction? It is well known that we are only aware of our very highest level of thought. What goes on below the surface may very well be very much about prediction without us knowing about it. When we think about "the world" we must have some internal representation of the configuration of objects that we think about. In my opinion this is implicitly a kind of prediction, because we create (at least mostly) configurations that are plausible, if not true (counterfactual, but possible).
It seems to me that creativity is mostly about finding good approximate solutions to hard (e.g. NP-complete) problems. How we do it is so far unknown, but it seems that it is not, in principle, so very special. The explanation for the calendar he speaks about is actually not very complicated inference at all, and is probably within reach of artificial systems. All a system has to do is generalize '19' as belonging to a number category, and then apply the predictive properties of how numbers work. Not all prediction is naively predicting that the world is as it was before. If change was predictable in the past, then change can also be predicted into the future. Prediction is difficult, but that does not make it less important.
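To make that concrete, here is a minimal, purely illustrative sketch (hypothetical code, not a claim about how any real predictor works): once the year is represented as a number rather than as a fixed '19' prefix plus two digits, the rollover to '20' is just ordinary extrapolation.

```python
# Toy illustration: a predictor that treats '19' as an immutable prefix fails
# at the rollover, while one that generalizes the year to a number does not.

def predict_next_prefix_rule(observed_years):
    """Naive rule: the first two digits never change, only the last two count up."""
    last = observed_years[-1]
    return last[:2] + f"{(int(last[2:]) + 1) % 100:02d}"

def predict_next_numeric(observed_years):
    """Generalized rule: the whole year is a number; increment it."""
    return str(int(observed_years[-1]) + 1)

history = [str(y) for y in range(1950, 2000)]   # decades of '19..' observations
print(predict_next_prefix_rule(history))  # '1900' -- the prefix rule breaks down
print(predict_next_numeric(history))      # '2000' -- numeric generalization succeeds
```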
I think the predictive framework is a fairly universal one. As a thought experiment, you could feed a predictor all the sensory experience humans have ever had, and provided this predictor works in some sense optimally, it could answer whatever question at least as well as any person.
Likewise, we can already simulate fairly large, biochemically accurate neural networks, and it seems that scaling up computing power would lead to an AGI.
The problem with the previous models is their inefficiency. It does seem to indicate a lack of understanding of the fundamentals -- i.e. you can brute-force essentially every well-defined problem in CS, yet we don't declare those problems solved modulo some amount of progress in computing power.
You criticize the author for making unfounded assumptions about the nature of cognitive processes and follow up with your own equally unfounded assumption about the nature of cognitive processes. I do agree with you, but it is still bad form.
What do you mean is unfounded? My suggestion for what creativity might be? I guess I was unclear: I don't mean to say I know what creativity is. What I do mean to ask is whether creativity is something more than just solving optimization problems. To me that way of thinking about it explains many of my own experiences (a-ha moments, unknown time to completion of a thinking process, thinking going on in the background, etc.) using a formulation that I understand. Anyone is free to prove it wrong.
I think there's a strong argument to be made that the answer to this question depends on exactly what you mean by "reason" and "predict".
If you mean "predict" in the same sense as it's used in the phrase "predictive model", then the answer could be a resounding no - there's a hypothesis that the brain implements just one basic learning algorithm, and everything else is just emergent phenomena coming out of that.
I think there is a difference, but my view is that it is impossible to reason without sufficiently accurate prediction, and that the concepts we use for reasoning are largely formed based on how predictive they are. Consider playing a game where two players throw dice, and the one with the larger number wins. There is no predictability between turns, so there is no point in reasoning far into the future, unlike in chess, where the positions change slowly, one piece per turn and in predictable patterns. In chess you still don't know what the other player will do, but you can reason about it, because there is a level of predictability.
I would define reasoning as answering a question or optimization problem, e.g. What is my best move? What is a good move? This can be formalized as calculating some kind of score for a possible action, whereas prediction is just about assigning a probability value to an event.
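A minimal sketch of the distinction drawn here, with hypothetical names and made-up numbers: prediction assigns probabilities to outcomes, while "reasoning" in this sense uses those probabilities to score candidate actions against a goal.

```python
# Toy example: prediction = probability of an outcome; reasoning = scoring
# candidate actions by expected payoff under those predictions.

def predict_win_probability(action):
    """Stand-in for a predictive model (made-up numbers)."""
    return {"aggressive": 0.40, "defensive": 0.55, "random": 0.50}[action]

def score(action, payoff_win=1.0, payoff_loss=-1.0):
    """Expected payoff of an action, given the prediction and a goal (winning)."""
    p = predict_win_probability(action)
    return p * payoff_win + (1 - p) * payoff_loss

best_move = max(["aggressive", "defensive", "random"], key=score)
print(best_move)  # 'defensive' -- highest expected payoff under this toy model
```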
Bad example. I can predict the dice game's problem space. E.g. for a single normal die (1-6), I can predict that I'll never observe a 7 [P(7) = 0%]. I can also predict the probability distribution. E.g. for two dice thrown together, the probability of throwing a sum of 7 is 1/6, whereas the probability of throwing snake eyes is only 1/36. Yes, the game's output is random. But what you're talking about is control, not prediction. If random meant "impossible to make predictions about", then statistics wouldn't exist.
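For what it's worth, the figures quoted here check out by brute enumeration:

```python
# Enumerate the dice outcomes to confirm the probabilities mentioned above.
from fractions import Fraction
from itertools import product

one_die = range(1, 7)
print(Fraction(sum(1 for d in one_die if d == 7), 6))                 # 0 -> P(7) = 0%

two_dice = list(product(one_die, one_die))                            # 36 equally likely outcomes
print(Fraction(sum(1 for a, b in two_dice if a + b == 7), 36))        # 1/6
print(Fraction(sum(1 for a, b in two_dice if (a, b) == (1, 1)), 36))  # 1/36 (snake eyes)
```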
Regarding winning, there's no way to improve one's chances (besides cheating?). But merely knowing that the game is predicated solely on chance can be useful for other applications. E.g. realizing that the lottery is a scam.
Personally, I'm in the camp that says reasoning is instrumental to prediction.
> I would define reasoning as answering a question or optimization problem, e.g. What is my best move? What is a good move? This can be formalized as calculating some kind of score for a possible action, whereas prediction is just about assigning a probability value to an event.
So would it be fair to summarize this as "reason is prediction with a goal"?
How on earth is "Bayesianism a form of behaviorism"? And what is it with the cult of Popper? Popperism isn't "underrated"; it's considered the gold standard of epistemology despite being a vague philosophical theory full of holes. Contrast that with Solomonoff induction, which has a rigorous grounding and which offers hypothesis testing and rejection as a trivial subcase.
> Contrast that with Solomonoff induction which has a rigorous grounding and which offers hypothesis testing and rejection as a trivial subcase.
I may be wrong, but I still haven't found anyone who could convince me that Hume's argument about induction is not valid. From here (which was linked from the Solomonoff induction Wikipedia page): http://en.wikipedia.org/wiki/Occam%27s_razor#Practical_consi...
> The pragmatist may go on, as David Hume did on the topic of induction, that there is no satisfying alternative to granting this premise. Though one may claim that Occam's razor is invalid as a premise helping to regulate theories, putting this doubt into practice would mean doubting whether every step forward will result in locomotion or a nuclear explosion. In other words still: "What's the alternative?"
Again, and this is also my opinion, I think we're afraid to admit that either we don't know anything at all, or that most of the things that we "know" are based on intrinsic faith, with the accent on the word "faith". Granted, this is at least a 2,500-year-old problem, dating back to the ancient Greeks.
How is Bayesianism not a form of behaviorism? The essay devotes a paragraph to explaining this point, although Deutsch assumes the reader is familiar with behaviorism and with the probabilistic methods used in AI.
Had he said that Bayesian decision theory with optimization of utility functions etc. was behaviorism, then I would agree, but Bayesian formulations can be used for many other things as well. At least one can propose very complex internal models, which behaviorists were criticized for not considering.
Agreed. Besides, behaviorism was abandoned in psychology because it was crazy, not because there's something wrong with reinforcement learning.
It was treating the human mind as a black box, trying to pretend we have no insight into human thoughts. It was trying to avoid anthropomorphizing people, as Morgenbesser put it. In fact, quite the opposite, I'd attribute this blatant idiocy to Popperism. After all, introspection isn't testable or falsifiable in a Popperian sense. There's no room for the concept of models in Popperianism. There are just big black-box ideas that you test and that are falsified or not.
There's a certain amount of twaddle in Deutsch's arguments in general but it's quite hard work wading through the argument to see what's wrong. An iffy statement of his here is "the field of ‘artificial general intelligence’ or AGI — has made no progress whatever during the entire six decades of its existence." Which is tricky to totally dispute because of his vagueness as to what AGI is. But let's assume he means thinking like a human. Well fair enough no computer thinks like a human yet but a lot of progress has been made in that direction such as Watson in verbal processing and the Google cars in geospatial awareness. So the argument is kind of misleading. I find his arguments in physics similar - basically not much good but written in a form that is hard work to dispute.
> So, why is it still conventional wisdom that we get our theories by induction?
I thought the other conventional wisdom is that we get our theories by building mental models. I throw a ball, and I can either remember how it flew in the past and use that to predict, or I can build a mental model from first principles to find out what will happen. Maybe run a short 5-second idealized simulation in my head and I can predict what might happen. Well, then I might realize that there was wind and that my original model was too idealized.
Another example: someone on HN (or an article posted to HN) talked about how people would predict what happens to a pen dropped on the moon, and why astronauts stood firmly planted on the moon's surface. It was interesting how many people got that wrong. And the difference was that some seemed to have been reasoning by analogy or had encoded faulty facts (gravity only happens on Earth), some reasoned from past experiences (they have heavy boots), and some who had the correct understanding of first principles could, so to speak, simulate and understand what would happen -- the pen would drop to the moon's surface. The moon has some gravity. So do other planets. And so on.
Now most of these are still based on some principles that we hold true and that are "like the past". Gravity is like yesterday's, the fact that wind affects objects in flight is like yesterday's, the speed of light is like yesterday's. There needs to be a wide set of known and predictable rules about the world, and then deriving or simulating the future can be done.
There can be something in between, maybe if I built a mental model just recently it gets memoized and instead of re-running it I just remember that it is like the "past one" but with a small tweak.
Another mechanism is the "seeker of inconsistencies": something that understands that certain facts together don't make sense. So if I first thought the pencil would float away on the moon and then thought that astronauts are planted firmly on its surface, that would, so to speak, raise an exception and say "aha, something is not consistent, pay more attention to this one".
But that is how my brain works. Other brains might work differently. I can inspect my own thought process to some degree, but I cannot get into the head of someone else and "see" how they think about things. Do they visualize, or do they see words and formulas instead and plug values in? Are some completely unaware of how they think, with little self-reflection?
I am papering over things and being too simplistic here, and I don't necessarily disagree with the article; that was just my reaction when I read that paragraph.
> I thought the other conventional wisdom is that we get our theories by building mental models. I throw a ball, and I can either remember how it flew in the past and use that to predict, or I can build a mental model from first principles to find out what will happen. Maybe run a short 5-second idealized simulation in my head and I can predict what might happen. Well, then I might realize that there was wind and that my original model was too idealized.
My current understanding of how sports skills are developed is closer to what you first said, than the bit about 'running a simulation'.
The pro ball players have practised enough that they've seen the ball coming in at this angle, at that angle, and with this wind condition and that wind condition. And so they 'remember' where the ball is going to land, and they remember how to move to get there.
Our brains are not, as far as I know, running any kind of recognizable simulation like we would with a computer.
"Our brains are not, as far as I know, running any kind of recognizable simulation like we would with a computer."
One way to implement the 'remembering' of the ball position is to run a forward model of the trajectory in your head.
Your brain has specialized hardware for running forward models (short-term simulations) of physical systems. You need these to control your body precisely in the face of delays in the feedback from your limbs.
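A forward model in this sense is just a short-term simulation you step ahead of reality. A toy sketch (illustrative only; no claim that the brain computes anything like this explicitly):

```python
# Toy 'forward model': integrate a thrown ball's motion forward in time to
# predict where it will land, before the real ball gets there.

def predict_landing_distance(x, y, vx, vy, g=9.81, dt=0.01):
    """Simple Euler integration of projectile motion until the ball hits the ground."""
    while y >= 0.0:
        x += vx * dt          # step position forward
        vy -= g * dt          # gravity slows the upward velocity
        y += vy * dt
    return x

# Ball released 2 m up, moving 8 m/s forward and 6 m/s upward:
print(round(predict_landing_distance(x=0.0, y=2.0, vx=8.0, vy=6.0), 2))
```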
A nice story - it's a just-so story, but a nice one - is that we repurpose this simulation hardware to simulate systems we observe outside our bodies. Expert ball-catchers have a very good simulation, beginners have a bad one. If you're already an expert, you can get even better by running your simulation off-line: this may be why experts can practice by visualizing their golf swing or swim stroke but newbies can not improve much that way.
There is some physical evidence that the off-line model stuff is real, as when you imagine yourself walking, or watch someone else walking, some of the same parts of your motor cortex are active as when you are actually walking.
My very favourite part of this explanation trail is as follows: animals got very good at modelling the behaviour of other animals, in order to predict what they will do next. Eventually, they/we started re-using the same systems to understand and explain our own behaviour into the future. This is (at least part of) what we call consciousness.
Now, you may or may not buy this, but it's rather elegant as a hypothesis.
> Our brains are not, as far as I know, running any kind of recognizable simulation like we would with a computer.
By this logic, a professional ball player should miss any ball thrown at an unfamiliar angle, and amateurs should only be able to catch balls thrown to them at specific angles and velocities. It doesn't take much to build an integrator or differentiator out of electronics, so with [big number] neurons available to the task, I'd be surprised if we weren't running some kind of predictive analog simulation.
> And so they 'remember' where the ball is going to land, and they remember how to move to get there.
Do you have a mechanism in mind for how this 'remembering' takes place, if not via a simulation encoded by DNA being marked, proteins being folded, and/or synapses being grown in response to successes and failures over time?
Interesting. I do visualize in my head how kinetic energy depletes as it goes higher and potential energy increases. It is like a bubble that shrinks. I see motion vectors and how they change.
Another example is a conflict between 3 people: I kind of see a graph of relationships with different colored arrows and how they change.
> Interesting. I do visualize in my head how kinetic energy depletes as it goes higher and potential energy increases. It is like a bubble that shrinks. I see motion vectors and how they change.
I do similar sorts of things... it is excellent that you have trained yourself to think that way. But for time critical tasks such as catching a fly ball, that doesn't work so well. We have lots of different processes we use for different situations.
Why is the mental model building not seen as a special case of induction? One needs to generalize (i.e. interpolate/extrapolate) from the known instances. The mechanism may be just "replaying" known sequences, but there is still a measure of induction, because one needs to decide which simulation applies to the ball I'm seeing right now (and that what I am seeing is a ball, and that intuitive physics models of balls apply).
> Why is the mental model building not seen as a special case of induction? One needs to generalize (i.e. interpolate/extrapolate) from the known instances.
Maybe you are right. Now, the problem is that my explanation just pushes the problem down, because now I am saying: "I don't know how this model simulator works. I throw a bunch of objects or concepts in a box and they evolve according to some rules, and I can see where things will end up in the future." But I don't have a fine-grained understanding of how the simulator is built, or whether it has frames or steps that execute, and how it combines the rules underneath. That is perhaps in the unconscious. So I haven't solved the problem, but just exposed the highest level of reasoning that I am aware of. There is some assembly running that I am not aware of.
> Despite this long record of failure, AGI must be possible. And that is because of a deep property of the laws of physics, namely the universality of computation.
This actually doesn't follow. It is logically possible that general intelligence is the result of a non-physical process (or at least a process outside any conception of known physics). It could be, as philosophers have put it, a homunculus that interfaces with but is not a part of the physical brain.
Or take the brain-in-the-vat hypothesis. It could be that intelligence/consciousness is the result of some process that can't be modeled within the physics of the simulation, and can only be injected from "outside" the system. This already happens, actually; there are demonstrably intelligent entities in World of Warcraft (the human players) but that doesn't imply that the computer physics of the WOW game engine are sufficient to model intelligence.
If these were true, it would mean that atoms in our brain violate physics as we know it. This source of intelligence would be detectable, because it is significant enough for neurons to detect it.
The hidden premise of the quote is that we have observed brains enough to be confident that they are running on physics.
The current theories of physics do not even begin to offer an explanation for "subjective experience"/consciousness, which definitely has everything to do with the brain.
It's not 100% proven that "souls" don't exist, but I would bet money that intelligence can be explained by the laws of physics just like every other scientific mystery. At one time people thought life was basically magic and too complicated to understand.
"or that increased computer power will bring it forth (as if someone had already written an AGI program but it takes a year to utter each sentence)."
No, 1000 years would be more like it, if you add in the years it takes to learn language.
Here's my very quick and dirty calculation:
A rough estimate of brain data throughput would be 100 billion neurons x 200 firings per second x 1,000 connections each = 20,000,000,000,000,000 bits of info transmitted per second. 20 million gigabits (20 petabits) of information move around your brain every second.
The biggest/fastest numbers I can find for an IC are going to be about 3 billion transistors x 8 GHz x 3 (connections per transistor) = 6,480,000,000,000 = 6.5 Terabits per second.
20 petabits -vs- 6.5 Terabits, that's roughly 3 orders of magnitude difference. If someone wrote a true AGI today, it may well still take 1000 years to do what humans can learn and accomplish in one year (assuming moving bits of data is the key to AGI).
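Taking those two throughput figures at face value (both are this comment's own back-of-the-envelope assumptions, not measured values), the ratio does come out at roughly three orders of magnitude:

```python
# Ratio of the two rough throughput estimates quoted above.
import math

brain_bits_per_s = 100e9 * 200 * 1000   # neurons * firings/s * connections = 2e16 bit/s
chip_bits_per_s = 6.48e12               # the IC figure as quoted above

ratio = brain_bits_per_s / chip_bits_per_s
print(f"{ratio:.0f}x, i.e. about 10^{math.log10(ratio):.1f}")   # ~3086x, ~10^3.5
```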
There's something even more fundamental to his own thought process than creativity, and it's odd that he doesn't see it -- namely, "Why does David Deutsch care about creating an AGI?".
Why is there a drive towards creativity / creation at all? So, creativity is still merely a simulation of a top level human function--you could probably program something to be creative, but how about /wanting/ to be creative? How about not wanting to be creative? How about being obstinate, or bored?
I think you could get pretty close to general intelligence by modeling three properties in a computer program. Each of these is difficult, but not outside the realm of possibility.
- Awareness of environment, including self. It would need to be able to re-program itself.
- Ability to form intention. It would need to be able to form objectives and utilize the first property to accomplish them. Also, information gathered from the environment should be able to inform this.
- A creative function, constantly running, that combines information from the first two inputs to form and re-form concepts.
I think that a program with these three aspects, running constantly, could eventually form opinions and act intelligently. The intention function could delve into the creative function and form intentions to refactor the mental processes and make them more efficient.
It could have a representation of its own creative function in its environment, allowing the intention function to explore its own creativity as an environment. Connections between the three aspects would grow in this manner, and the program could become deeply introspective.
It would probably take a long time for it to be able to do anything like human-like thought. We could help the process along by restricting its environment and giving it 'games' to play. If it's sufficiently fluid in creating concepts, it could then reuse them for different games.
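Purely as a sketch of the structure described above (hypothetical names, toy behaviour, obviously nowhere near a real design), the three properties might be wired together roughly like this:

```python
# Toy loop wiring together the three proposed properties: awareness of the
# environment (including self), intention formation, and a constantly running
# creative function that recombines what the other two produce.
import random

class ToyAgent:
    def __init__(self, environment):
        self.environment = environment          # external world plus a view of itself
        self.concepts = ["explore", "rest"]     # starting repertoire
        self.goal = None

    def perceive(self):
        # Property 1: awareness of environment, including self.
        return {"world": self.environment, "self": list(self.concepts)}

    def form_intention(self, observation):
        # Property 2: form an objective, informed by what was perceived.
        self.goal = random.choice(observation["self"])

    def create(self, observation):
        # Property 3: recombine information from the first two into new concepts.
        self.concepts.append(f"{self.goal}+{random.choice(observation['self'])}")

    def step(self):
        obs = self.perceive()
        self.form_intention(obs)
        self.create(obs)

agent = ToyAgent(environment={"games": ["counting", "matching"]})
for _ in range(5):
    agent.step()
print(agent.concepts)   # grows a tangle of recombined 'concepts' over time
```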
The difficulty I'm having with this article is that many people, when discussing intelligence, invoke some specific capability, X; for Deutsch, X seems to be "creativity", for Penrose, X may or may not be the ability to grok the truth or falsehood of an arbitrary statement as if one has the personal cellphone number of God. Then, they go on to assert "computers (or whatever) cannot (currently) do X, therefore we need to create a grand theory of X to create Artificial (General) Intelligence" (or "computers cannot do X, therefore A(G)I is doomed").
Unfortunately, they never seem to provide proof that X exists, or is true. Sure, you can provide a plethora of examples of creativity, but the plural of "anecdote" is not "data", the plural of "data" isn't really "proof", and I don't understand creativity well enough to tell why what I do commonly (or even occasionally) counts as creative while a forward-chaining formal logic thingy printing out a novel, true formula doesn't.
Ultimately, that is why I like the Turing test: I don't have to understand all sorts of magical X's; all I have to do is give you the benefit of the doubt, without imposing a bunch of truly arbitrary conditions on what you do.
>Unfortunately, they never seem to provide proof that X exists, or is true
I think we can forgive Deutsch for this, because he's claiming that creativity is part of an unsolved philosophical problem. Which means we don't know how to think about it yet. We'll know when we do know how to think about it, because there will suddenly arise answerable questions, proofs, definitions, and whatnot.
Unfortunately, the Turing test can't cut through the philosophical wrangling because, for example, imitating a human being successfully is not the same thing as evidence of thinking. Without an explanatory theory we wouldn't even know how to interpret the relevant evidence.
Has anyone ever tried using the first law of thermodynamics to prove that AGI is impossible? It's somewhat dependent on what one considers "intelligent", of course. But let me take a stab at it.
Say AGI is a computer machine and/or algorithm that's capable of "creatively" building a smarter version of itself. Then (here's the proof): if we did build a machine that was capable of building a smarter version of itself, that machine would technically be a perpetual motion machine, and therefore a violation of the first fundamental law of thermodynamics: "In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced." (Rudolf Clausius, 1850)
Or assuming that AGI is just "in the software", the heat produced by the computation would continually increase, as an ever more complicated algorithm/computation is formulated, and therefore violate the first law of thermodynamics -- again. Assuming a more "intelligent" computation consumes more heat/energy.
>observing on thousands of consecutive occasions that on calendars the first two digits of the year were ‘19’. I never observed a single exception until, one day, they started being ‘20’...
>...it is simply not true that knowledge comes from extrapolating repeated observations. Nor is it true that ‘the future is like the past’, in any sense that one could detect in advance without already knowing the explanation.
This is obviously untrue. You could train a machine learning algorithm to count easily just by showing it examples of numbers. This is how we teach human children to count, by showing them examples. Not giving them some magical "explanation".
And he goes on to base the rest of his argument on this point.
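As a minimal, hedged illustration of that claim (toy code, not a claim about how children or any particular ML system actually learn): fit the simplest possible model on "number -> next number" examples and it extrapolates past anything it was shown, with no "explanation" supplied.

```python
# Fit a trivial least-squares line on successor examples 1..18 and check that
# it extrapolates correctly to values it has never seen (19 -> 20, 1999 -> 2000).

xs = list(range(1, 19))           # observed numbers
ys = [x + 1 for x in xs]          # ...and their successors

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope * 19 + intercept))    # 20   -- beyond the training range
print(round(slope * 1999 + intercept))  # 2000 -- the 'calendar rollover' case
```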
>Some people are wondering whether we should welcome our new robot overlords. Some hope to learn how we can rig their programming to make them constitutionally unable to harm humans (as in Isaac Asimov’s ‘laws of robotics’), or to prevent them from acquiring the theory that the universe should be converted into paper clips (as imagined by Nick Bostrom). None of these are the real problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive — economically, intellectually or whatever — as most people; and that such a person could do enormous harm were he to turn his powers to evil instead of good.
Einstein's brain was only slightly different than any other human's, even the dumbest village idiot. Humans themselves are only slightly different than chimpanzees.
So yes, an AI with a brain of entirely different architecture, running on computers millions of times faster and several times larger than a human brain, would indeed be pretty concerning.
> The very laws of physics imply that artificial intelligence must be possible.
What if the chemical reactions that our intelligence depends on somehow allow for computations using real numbers? We wouldn’t be able to reproduce these processes with our means of computation then.
"‘Smartest machine on Earth’, the PBS documentary series Nova called it, and characterised its function as ‘mimicking the human thought process with software.’ But that is precisely what it does not do."
I heard a grad student who did some work on the Watson project say basically the same thing.
So evolution "works" in the sense that anything that doesn't work doesn't stick around to show its face. There isn't any intention behind it, yet we people, who clearly have intention (and I'm not even going to stand for an argument about whether or not free will exists. You take that shit outside with the rest of the garbage), are a product of it.
But we're kind of long past the point where just any old random slurry of chemicals is going to get means-tested in the great arena of life. Life is not finding the right set of chemicals to combine, life is a specific set of chemicals and the right orientations of different copies of those chemicals.
So I think we're at a point where binary code, the instructions to run on the processor, is akin to chemicals in the physical world. We try to treat them like DNA, but you can't just toss a bunch of chemicals in a bucket at random and expect life to come out. 100 billion times out of 100 billion times, random chemicals in a bucket make you nothing close to life. Ultimately, chemicals are at the core, but they aren't sufficient. The right chemicals are needed, and they interact in such a way that infinite variation is the result.
And DNA is a code--a deterministic, exceedingly discrete code. Yet somehow (hand waving), from it arises the non-deterministic, comparatively infinite variability of human behavior. So in that sense, I don't think he's necessarily correct that AGI is a "different type of program than we've ever programmed."
But all that is just to create a system that is intelligent; it's not intelligent itself. It's a road, not a destination. Living things, on the other hand, have goals and try to achieve them. Not just have goals, but generate goals. They create their own notions of what to do and how to do it and why the doing of it is important.
You touch fire, your hand recoils, because in you is a system for detecting potential damage and the understanding that damage is not something you want on your docket. Computer touches fire, computer recoils, because in it is a system for detecting potential damage and YOUR understanding that damage is not good. The computer didn't conclude on its own that damage was bad. It never had the sense that it existed. And this isn't even an "intelligent" response, this one is merely instinct.
So I think the big, missing question in AGI is, "what could a computer want?" We could program a computer to have certain goals, but that is not the same thing as a computer sitting around and saying, "hey, you know what? Let's go to the beach this weekend." We foist our own goals on the computer and instruct it on how to understand those goals, and are disappointed when it fails to get the point of the goals at all and sits there blinking at us. How can you ever hope to have an intelligent computer if it isn't intelligent on its own terms?
IDK, I am rambling. Would you be intelligent or have any hope of becoming intelligent if you didn't create your own designs on your future, couldn't perceive anything to test your actions against your desires for the future, and had no means of your own to ever come about correcting these issues? It just seems like the only way AGI will happen is through something incredibly simple that allows a computer to put its own parts together, see the result, and arbitrarily evaluate it.
> Unfortunately, what we know about epistemology is contained largely in the work of the philosopher Karl Popper and is almost universally underrated and misunderstood (even — or perhaps especially — by philosophers). For example, it is still taken for granted by almost every authority that knowledge consists of justified, true beliefs
-------------------
The Gettier problem has been well known in philosophy since the '60s. Probably the most famous example would be barn façades, first showing up in the mid '70s. It's taught in undergraduate courses. It's the second bullet point in the Stanford Encyclopedia of Philosophy's Epistemology article.
-------------------
> How could I have ‘extrapolated’ that there would be such a sharp departure from an unbroken pattern of experiences, and that a never-yet-observed process (the 17,000-year interval) would follow?
-------------------
One rather assumes your experience with numbers has included 20 following 19 before and you extrapolated the rules from that and similar experiences with assigning numbers to things. You were, after all, previously told that it's how years worked - and doubtless you've lived through at least one year changing.
-------------------
> Even in the hard sciences, these guesses have no foundations and don’t need justification. Why? Because genuine knowledge, though by definition it does contain truth, almost always contains error as well. So it is not ‘true’ in the sense studied in mathematics and logic. Thinking consists of criticising and correcting partially true guesses with the intention of locating and eliminating the errors and misconceptions in them, not generating or justifying extrapolations from sense data.
-------------------
Be that as it may, some guesses are better than others, generally the guesses that are based on more data. Lock someone in a room for 19 years, don't talk to them beyond the basic social interaction required for them to develop language, and then ask them to guess what an atom bomb is: they're not going to get very far. Even guessing what an atom is, or a bomb, they'd be hopelessly outside the context of their experience.
Guesses, in so far as they're meaningful, have foundations. Those foundations limit what you can guess about and have any practical chance of refining towards truth with more evidence. You can't start off knowing nothing and then go to nuclear weapons in one step. You have to make smaller guesses, based on what you know.
And much as the author might criticise Bayesian probability, it is bound up with the idea that probability is based on dividing the search space among the explanations we can come up with for a thing and then weighting them by the evidence.