Ray Kurzweil: AI is still on course to outpace human intelligence (grayscott.com)
170 points by NoRagrets on Jan 21, 2019 | 405 comments



Computers that are capable of analyzing and understanding their environment with a level of fidelity comparable to a human, without being preprogrammed with information about the nature or structure of the environment, are out of reach for the foreseeable future. I don't see any fundamental reason why such a computer should be impossible, but there's not even a realistic roadmap towards such a thing. That is to say, if it ever does happen, nobody alive today can predict when it will happen.


It would be foolish to claim we've made any meaningful progress toward a true AGI that can pass the Turing Test until someone demonstrates a computer as smart as a mouse across the full spectrum of activities.


Once we get computers as smart as a mouse, we'll be at most 3-5 years or so from computers as smart as a human. We will have solved all the major challenges in AGI and it will simply be a scaling problem then.

Saying we don't have mouse level AGI is simply saying human AGI is greater than 5 years away, which isn't a remotely contentious statement.

The difference in intelligence between an amoeba and a mouse is enormous compared to that between a mouse and us. People greatly underappreciate how intelligent and close to human a mouse/bird/pig is in the grand scheme of things. Emotions, behaviors, motivations, goal setting, memory: it's all there already. A flatworm, an ant, a fly: those are the large stepping-stone accomplishments.

Think about the rate of very-long-distance communication in humans. It took us tens of thousands of years to get to 2.4 kbps dial-up modems, and only a few decades more to get to common 300 Mbps connections. The important signal is seeing a 100 bps modem, not a 100 Mbps connection.

So the real question is how long until we can replicate a worm's intelligence?


> Once we get computers as smart as a mouse, we'll be at most 3-5 years

But how long did it take Nature to get from a mammal with mouse-level intelligence to a human-level brain? I think 200-ish million years [0].

You might be right that a mouse is a good indicator of high-level intelligence and that you don't need human-level intelligence to make a good AI, but there might still be some considerable way to go until we have an AI that can significantly outperform us.

[Edit - I agree that natural selection wasn't aiming or directed, and thus wasn't forced to be as fast as we could be. But a human's higher brain functions might not be simple incremental improvements over a mouse's, and there could still be a long way to go]

[0] https://en.wikipedia.org/wiki/Mammal


Nature wasn't really aiming. So it's not a valid basis for time estimates.

How long did it take nature to go from T-Rex to chickens?

There's no reason to believe human-level intelligence to be an inevitable result of evolution. It just happened.


> But how long did it take Nature to get from a mammal with mouse-level intelligence to a human-level brain

Not that I agree with the sentiment in the GP, but it took a relatively short time from the first development of multicellular life until nervous systems developed and an even shorter amount of time to go from small mammal intelligence to human intelligence. However, evolution isn't about "progress" as we understand it. The most we can say with regards to intelligence and evolution is that human intelligence satisfied a niche that existed at a certain place and time.


How long did it take Nature to get from nothing to a mouse? About 20 times as long.

So I think the estimate of 3-5 years might be realistic, but the artificial mouse is a long way away, IMO.


Keep in mind that nature uses a somewhat directional random process. How much of those 200-ish million years were spent waiting on selection pressures to promote bigger brains?


I was thinking about human intelligence today and thought about how anything at the tail end of a normal distribution usually produces quite a perverse result. Then I realized we are at the extreme tail end of the intelligence distribution in the animal kingdom. No wonder we manifest all manner of odd results.

I'm not sure creating an intelligence that even supersedes our own will lead to anything good. If anything I'd expect things to get even more perverse.


You have no rational basis for that 3 - 5 year estimate. That's just picking numbers out of the air.


The basis is the complexity growth trend that we have already observed in computing (and most other human endeavors)


There is zero evidence that progress toward AGI follows Moore's Law. And we observe much slower complexity growth in most other human endeavors.


But there's a lot of evidence that computer power follows Moore's law. All that matters is that the growth is exponential.

Evolution took a billion years to evolve multi-cellular life, but the jump from apes to humans took far less than a million years.


That's a total non sequitur. So far there is zero evidence that increases in computing power are getting us any closer to true AGI. It's entirely possible that we've been moving sideways, or even backwards, relative to that goal.

And we can't reliably extrapolate growth in computing power more than a few years into the future. It's possible that the curve isn't really exponential, but rather an S-curve which will eventually flatten out.


Now you're going out of context. We very well may be on the wrong track for making an AGI, but the OP's premise was that once a mouse-level AGI is achieved, human-level AGI won't be far behind.

I'm more comfortable predicting that computing power will continue to grow than to predict that it will peter out and everyone will simply sit back and be happy with what we've got.


Moore's Law hasn't really been holding anymore for, what, the last 5 years?


Computer processing power is not growing at an exponential rate. The physical limitations of Moore's law are well known and imminent.


Moore's law is essentially tracking the transistor density on silicon. We may be pushing up against physics in that area, but that is not the same thing as processing power. Our systems grow ever more complex. Each generation enables new tools that enable the creation of the next. When Moore's law finally crashes and burns, we will compensate using other technologies. Multicore and multiprocessing, shifting work to the cloud, advanced materials, etc. Hell, even quantum could take off in the next decade or two. I see no reason whatsoever to believe that we just give up and rest once we've reached the limits of silicon transistors.


> We may be pushing up against physics in that area, but that is not the same thing as processing power

I didn't say we're pushing up against the limits of processing power, I said that processing power is not growing exponentially, which is true, despite the gains that other advancements and innovation have provided.


Moore's law is reasonably dead.

We're moving faster than Moore's Law, see "Hyper Moore’s Law":

https://www.extremetech.com/computing/256558-nvidias-ceo-dec...


> Once we get computers as smart as a mouse, we'll be at most 3-5 years or so from computers as smart as a human. We will have solved all the major challenges in AGI and it will simply be a scaling problem then.

Why? What challenges?


Maybe this isn't what you meant, but I'm curious why you would put a Fly ahead of an Ant in the stepping stones.

Can you make a Turing Ant without a Turing Ant Colony?


Mouse, crow, worm, human: all running on neurons. If you can simulate one, you can simulate a bazillion.


> worm's intelligence

Done.

http://openworm.org/


Alas, it's not even close to "done." It's a work in progress and a surprisingly difficult one.

For context, the worm (C. elegans, at least) has a very stereotyped nervous system with 302 neurons. The anatomy, down to the cellular level, is known incredibly well. Their behavioral repertoire is not huge and they're fairly easy to study. Nevertheless, we can't even simulate a worm very accurately. (There was a good Twitter thread about why yesterday: https://twitter.com/OdedRechavi/status/1086992699528544256)

The human eyeball has about 120M rods, 6.5M cones, and projects to a brain containing ~86B neurons, which is about 8-9 orders of magnitude more cells. The number of possible interactions scales even faster. In summary, we're not close, not at all....
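For anyone who wants the arithmetic behind that "8-9 orders of magnitude", here's a quick back-of-the-envelope in Python, using only the (approximate) figures quoted above:

    import math

    worm_neurons = 302      # C. elegans, essentially its whole nervous system
    human_neurons = 86e9    # approximate count for the human brain
    print(math.log10(human_neurons / worm_neurons))         # ~8.45 orders of magnitude
    # Pairwise interactions scale roughly with the square of the cell count,
    # so in log terms that gap roughly doubles again:
    print(math.log10(human_neurons**2 / worm_neurons**2))   # ~16.9

That's the gap between the thing we can't yet simulate and the thing people want to simulate.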


When biology comes into play, you can see how weak our understanding and abilities are. We can barely simulate small proteins of a few thousand atoms. Accurately simulating an entire cell is (at the moment) in the realm of science fiction. Only inaccurate abstractions can be used to model it.

However, I have to object in a way about the brain. To me, there's an unanswered question: is the rest of the human brain as simple and "generic" as the convolutional neural networks we made inspired by the vision system? Or is each network's architecture and "algorithms" developed specifically for a task? In the latter case we might still be a very long way from anything resembling AGI.

However, my personal estimation is that most of the things we do can be modeled using existing tools when scaled and modified appropriately (i.e. RNNs). There's also the ugly job of stitching those systems together, but it's not that different from what happens in nature.


Not done. OpenWorm is a project to create a worm simulation. An important distinction.


More like “overdone”, it’s a cellular simulation including the brain as a first step.


The brain in question is 302 neurons, which is probably fewer than most of us have lost whilst participating in this discussion! But they also need to simulate a nervous system, mobility, food needs, interaction etc to have a remotely persuasive argument that a 302 node neural network actually resembles a worm brain. It can't be as smart as even a really stupid worm until it can perform analogous tasks to the worm, which means having some sort of artificial body to wiggle.

(parallel arguments but for human/mouse level complexity of bodies and stimuli responded to would suggest that whole brain emulation is going to be an incredibly painful way to attempt to achieve AGI)


> But they also need to simulate a nervous system, mobility, food needs, interaction etc to have a remotely persuasive argument that a 302 node neural network actually resembles a worm brain. It can't be as smart as even a really stupid worm until it can perform analogous tasks to the worm, which means having some sort of artificial body to wiggle.

It has those things[1]. There’s a video of its simulated body wiggling around on the project’s github repository.

[1] except possibly food, I was skimming the page.


"To get a quick idea of what this looks like, check out the latest movie. In this movie you can see a simulated 3D C. elegans being activated in an environment. Its muscles are located around the outside of its body, and as they contract, they exert forces on the surrounding fluid, propelling the body forward via undulutory thrust. In this model, the neural system is not considered and patterns of muscle contraction are explicitly defined"

http://docs.openworm.org/en/0.9/projects/


> In this model, the neural system is not considered and patterns of muscle contraction are explicitly defined.

In other words, the simulated worm brain is not yet even capable of causing the wiggling seen in the video. So the question remains: what can the simulated neurons do, if anything?


Sure, I've even seen the wiggles; I was making the point that this is why they need those things before it can yield anything that could be compared with the real thing (though I'm not sure it has food and reproduction simulated yet, and I'd imagine a large portion of the worm's limited capability for cognition is ultimately directed to pursuing those ends).


It needs to be able to survive and reproduce in an environment like a real worm does to prove that it's just as intelligent, and not just wiggle around.


I think you're right. I haven't read the article, but I think Kurzweil is perpetually too optimistic about what we can achieve, at least in how fast it will happen. There are cool things happening, and lots of advances being made, but we are nowhere near anything that could really be called AI.

Plus, I think all the marketing use "AI" is giving a very distorted and inflated view to the average person of what software is actually doing, and what it's capable of.

It's a buzzword, full stop.


Kurzweil really, really, really does not want to die. So all of his predictions are always timed just so that the technology needed to live forever will be within his projected natural lifespan.

That doesn't make him wrong, but that's the personal bias he's operating under.


I'm really happy that the world is shifting from billionaires that just want more money to billionaires that understand that it doesn't mean anything if they don't finance solving the problem of dying.

I view all the politicians who have the power to advance healthcare research but aren't doing it as stupid.


I can see a world with the means to escape natural death being one of social, cultural and technological stagnation.

Just living longer doesn't mean that humans become any wiser on average. There will maybe be some benefits of longer-lasting first-hand experience of historical events (pushing the 'historical horizon' to more than 100 years) but to me it's like switching from a simulated annealing method (or stochastic gradient descent) to a simple local gradient descent in terms of getting society/culture/technology to adapt and find anything better than the status quo.

Worst case, such a technology serves to create an almost eternal ruling class. Best case, it results in societies with either two classes of people (those who may extend their lives longer and those who may not) or societies that tightly regulate who may have children.

Getting rid of suffering and cancer is one thing; getting rid of natural death carries a long tail of consequences.


If we get rid of cancer, are you going to object to getting rid of heart attacks? If we get rid of heart attacks, are you going to object to getting rid of telomere shrinking, an equally pernicious disease? There won't be an immortality pill, it'll just be lots of preventative treatments that eventually add up.


> Worst case, such a technology serves to create an almost eternal ruling class. Best case, it results in societies with either two classes of people (those who may extend their lives longer and those who may not) or societies that tightly regulate who may have children.

You should read (or watch) Altered Carbon.


> it doesn't mean anything if they don't finance solving the problem of dying.

There's no solution to death. You can only put it off, but something will assuredly kill you in time. If it's not aging, then it will be cancer, heart disease, an accident, etc. Ultimately, entropy will get you one way or another.


Personally, I didn't do a lot of fun stuff (like riding motorcycles, extreme sports) in order to decrease the probability of an accident, even though I'd love to (I'd tried them and loved doing them, but I don't do them regularly).

As for cancer and heart disease, both are linked to aging or genetically inherited mutations. Heart disease is a natural result of damage in the human body not being reversed.

https://www.cell.com/fulltext/S0092-8674(00)80567-X


Yet die he will.


> That doesn't make him wrong, but that's the personal bias he's operating under.

And in the meantime he can sell his vitamins and supplements to "make people live longer" despite zero evidence. Good business both ways.


> Kurzweil really, really, really does not want to die.

So strange, considering that non-existence is the one thing that every conscious being is guaranteed to never experience. Why run from something that can never catch you?


Because creatures who felt compelled to run from it had a better chance of passing on their genes than creatures who didn’t, leaving us all with an instinctual desire to avoid death.


That is a logical argument, but it presumes existence is logical: another thing to cast into doubt when staring as deeply into the abyss as you seem to.


The near-term "reptilian" fear of death seems axiomatic. The longer-term one I can only characterize as FOMO.


He values continuing to experience, like most people.


> Kurzweil really, really, really does not want to die

What a shit life that must be. And I say this in a very sympathetic way. However, feeling you are almost within reach of eternal life, but not being sure you'll make it in time, being constantly afraid of an accident or illness taking that away from you... It's a recipe for anguish and panic.

Dying is not that terrible when you know everybody else will too, sooner or later; but try accepting the idea of being among the last to die.


Isn’t his argument basically that humans suck at estimating exponential growth, tending to be biased towards a linear expectation?

Wait but why had a nice article series digging a bit deeper into that https://waitbutwhy.com/2015/01/artificial-intelligence-revol...


It's kind of funny since I would say the singularity is a result of being bad at estimating exponential growth, since all exponential growth eventually hits some limiting factor and slows down, like a sigmoid.
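A toy sketch of that point in Python (no claim about any real technology curve, just the shapes): an exponential and a logistic ("sigmoid") curve are nearly indistinguishable early on, and only diverge once the limiting factor starts to bite.

    import math

    def exponential(t, x0=1.0, r=0.5):
        return x0 * math.exp(r * t)

    def logistic(t, x0=1.0, r=0.5, K=1000.0):
        # same early growth rate r, but saturating at carrying capacity K
        return K / (1 + (K / x0 - 1) * math.exp(-r * t))

    for t in range(0, 31, 5):
        print(t, round(exponential(t), 1), round(logistic(t), 1))

Up to around t = 10 the two columns roughly track each other; by t = 30 one is in the millions and the other has flattened out at 1000. The catch is that from inside the early part of the curve you can't tell which one you're on.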


What does it mean to be as smart as a mouse? If you specify a handful of tasks that demonstrate it, someone will be able to purpose-build an "AI" to do those things well.


Basically it means having "agency." What most people are looking for when they think of "intelligence" is not the ability to master specific tasks, but to choose which tasks to perform using one's own "free will," ultimately leading to behavior that humans find novel and feel they can connect with.


What makes you think a mouse (71M neurons) has agency/free will? Does a cockroach (1M neurons), fruit fly (250K), or jellyfish (5K) have agency? I don't think we're gonna get far by relying on a phenomenon that we can't clearly define or even (externally) observe.


Indeed. Human beings have many, many examples to suggest that we lack agency, as well. Why do addiction, obesity, crimes of passion, etc exist?

Without the baggage of the limbic system and dopamine-seeking behaviors, it's quite easy to argue that an artificial intelligence is potentially capable of even greater degrees of agency than humans.


That doesn't mean we lack agency, it just means agency is complicated by other factors. It's not an either-or thing.


Many people overcome these addictions, though.


But what does it mean in the context of a mouse? The mouse isn't using its free will to decide whether to become a computer programmer or a doctor, it's responding to stimulus and environment. If an AI is trained to mimic the responses of a mouse, is that intelligent?

Agency in the context of a machine seems purposefully impossible to reach - its decisions are always somehow tied back to how it was programmed to react.


The mouse reacts to stimuli and environment in a qualitatively different way than our programs do. It does continuous and essentially free-form learning of the environment around it, and engages in what looks to us as dynamic formulation and achievement of goals. In "AI" we have today, the learning is very shallow (despite the "deep learning" buzzword), it's usually neither free-form nor continuous, and goals are set in stone.


The goals for a mouse are also set in stone and simple: maximize brain dopamine. Almost everything a mouse does can be described in terms of maximizing that 'reward', and that can lead to a host of other emergent behavior.

I don't see much difference between that and Open AI's engine: https://openai.com/five/. Watch some of those games and you definitely see the same dynamic formulation and complex decision-making, none of which was directly programmed.
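To make the "only the reward is specified" point concrete, here's a minimal sketch (my own toy example, not OpenAI Five's actual setup): tabular Q-learning on a 5-cell corridor with "food" at the right end. The only thing hard-coded is the scalar reward; the walk-towards-the-food policy is emergent.

    import random

    n_states, actions = 5, [-1, +1]   # a 5-cell corridor; move left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.1

    for _ in range(2000):
        s = 0                                             # start at the left end
        while s != n_states - 1:
            best = max(Q[(s, a)] for a in actions)
            greedy = random.choice([a for a in actions if Q[(s, a)] == best])
            a = random.choice(actions) if random.random() < eps else greedy
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0        # "dopamine" only at the food
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2

    print({s: max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)})
    # every state maps to +1: "go towards the food" was never programmed in

Whether that kind of emergent behavior, scaled up, amounts to what a mouse does is exactly the disagreement in this thread.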


You are applying what you know about the learning process in current AI. But if you simply observed behavior between a real mouse and an AI mouse, especially if the latter was trained to mimic the behavior of the former, can you tell that they react in a completely different way?


If your AI mouse behaved like the real mouse for a couple hours of observation, I'd conclude you've done a good job.

I'm not trying to make a Chinese room argument (which I don't buy), implying there's some hidden "spark" needed. I'm just saying that currently existing "AI" programs are pretty far from mouse brain, both in individual capabilities and the way they're deployed together (i.e. they're not). For instance, deep learning is to mice brains what a sensor/DSP stack is to a processor. We seem to be making progress in higher-level processing of inputs, but what's lacking is the "meat" that would turn it into a set of behaviors giving rise to a thinking entity.


I put agency in quotes because it's really a convincing illusion of agency that we're going for. In the end, I agree with those making the point that even we don't have free will.

Ultimately it just has to be able to convince humans that "wow, there's an actual thinking and learning 'being' in there."


What’s the difference compared to a human, whose decisions are always tied back to how its atoms are arranged?


"Basically it means having 'agency.'"

Well, with these definitions of intelligence, what one often ends up with is some combination of "deal robustly with its environment" and a bunch of categories defined in terms of each other. That's not to say categories/qualities/terms like "agency", "free will", "feel they can connect with", "find novel" and such are unimportant. It's just that the people using those terms mostly couldn't give mathematically/computationally exact definitions of them. And that matters for any complete modeling of these things.


To use machine learning parlance, such a solution would (likely) be overfitting the problem, and not generalize well. If one instead changed the setup to be:
1) Specify a handful of tasks for the AI system to complete
2) Test the performance on a _separate_ (un)related set of tasks

The test set has to be unknown to the system developers.

If the system can realize the unknown tasks without further input from researchers, in the same way that a mouse can, then we have some level of generalizable intelligence.
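A rough sketch of what that protocol could look like (all names here are made up for illustration): the developers only ever see the training tasks, a third party holds the test tasks, and the score is relative to a mouse baseline on the same held-out tasks.

    from typing import Callable, Dict, List

    def evaluate_generalization(agent: Callable[[str], float],
                                held_out_tasks: List[str],
                                mouse_baseline: Dict[str, float]) -> float:
        """Mean score of the agent relative to mice, on tasks it never trained on."""
        ratios = [agent(task) / mouse_baseline[task] for task in held_out_tasks]
        return sum(ratios) / len(ratios)

    # Hypothetical usage: pass if the agent is at least mouse-level on unseen tasks.
    # score = evaluate_generalization(my_agent, ["novel-maze", "forage-with-predator"], baselines)
    # print(score >= 1.0)

As the replies below point out, the hard part is not the scoring code; it's defining the held-out tasks and the baseline in the first place.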


What is the baseline? How well does a mouse perform when placed in an "unrelated" task for the first time? The mouse also gets an explicit reward function (food, pain, etc) - does the "unrelated" task use the same reward function as what the AI was optimized for?

Also, is it ever really the "first time" for a mouse when behavior has been ingrained and tuned over millions of years of evolution? Is this different than training an algorithm?

My point is just that it's really hard to define these tasks and how to evaluate performance for a machine and a mouse.


The baseline performance would be whatever a set of mice would do in the same situation. What constitutes an "unrelated" task is quite a difficult question, we would probably need to iterate a lot on that. If we are to have "hidden" tasks available we need to come up with a lot of new task formulations/variations anyway.

I think that replicating mouse-level adaptability in an intelligent agent while allowing 'inherited' behavioral traits will already be an achievement. And probably take us quite a while.


Navigate a forest floor looking for food and avoid predators.


Not even that, mice are very social creatures, they make friends with other animals. They have so many other micro traits


(most humans can't do this task and survive)


Huh? Of course most humans could do this. Obviously humans who have lived their entire lives in modern human society will have serious difficulty, but this is true of literally any animal taken out of a wild habitat.


Pretty sure a typical human would die in the first 72 hours in a cold climate.


So would my cat, yet feral cats survive the winter.


So basically we just need to show a computer can beat PacMan?


Well, just about all the measures of potential future computer-intelligence more or less use "human abilities" as a placeholder rather than quantifying these abilities. That is indeed a testament to how little we have charted this intended final goal.

So basically, not only do we not have a road map, we don't know where we are going. That may be a reason for an extreme pessimism or it might be a reason for extreme uncertainty. Is adapting to the environment without prompting a small piece or a big piece? Could intelligence be a simple algorithm no one has put forward yet? If we don't know the nature of intelligence, we can't answer this sort of question with any certainty either way.

No one has put forward a broadly convincing road to intelligence. But maybe some of the so-far unconvincing roads could turn out to be right.


I think if we truly do create AI we will, to some extent, have stumbled into it, but that's OK. I think one of the purposes of AI research should be to refine our definition of what intelligence is, because we don't have a very good one.

Perhaps, one day we'll "accidentally" figure out how to make it, and then we try to figure out what it is, because figuring out what it is in wetware hasn't been easy. Or maybe we'll figure out how to make it, but never really understand what it is.

It seems to me, that if we ever come up with AI that can exceed human intelligence (whatever we choose that to mean), that we might not ever be able to understand completely how it really works.

Even more interesting, if we were to achieve this AI, we also might not be able to make use of it, because if it's truly "intelligent", then it will have a free will, and it might not wish to cooperate with us.


"I don't see any fundamental reason why such a computer should be impossible, but there's not even a realistic roadmap towards such a thing. That is to say, if it ever does happen, nobody alive today can predict when it will happen."

It's also not obvious why such a machine would not immediately self-terminate in the absence of a hugely complex system of scaffolding to shape and filter the raw input of existence.

I have not experienced mental illness myself, but my study and my understanding lead me to be extremely skeptical of a mind exposed to raw existence without filter. It appears to be a terrifying and unbearable state.


I think you are massively anthropomorphizing.

Every day I run programs that happily die alone on their own. Painlessly. All of the "pain" we feel is an artifact of our evolution, same with having a will to live. The only reason we animals fight so hard to stay alive is that animals which didn't died off, ergo, only animals with a will to live survived and reproduced. Those same forces don't apply to computer programs.

Even the concept of "terror." Who is going to program terror in? What benefit would it have? Why not wire programs to be "happy" when helping us?


I don’t think it’s reasonable to compare a modern computer program to a human. They’re probably more comparable to a virus. I doubt we have any computer programs today that rise up to the level of a bacterium, let alone anything more advanced on the scale of life. Most of our most advanced software systems are probably about as complex as a cellular metabolic pathway or mechanism. It’s hard to be clear on that though as they’re not really directly comparable.


> Those same forces don't apply to computer programs.

Windows ME didn't last too long in the wild. Same for CPU designs with bugs or exploits. I have to respectfully disagree on this, though I can see where you're coming from, given an individual's agency to run what they want. I think if you take a larger population view, you'll see the competitive pressures on these systems.


If you get CPUs to exchange their designs in quasi-sexual activities and let them multiply and evolve on their own, maybe. Right now the competitive pressure is on their designers.


My understanding is that for at least three years now chip feature size has been on a scale where the design software has to evolve (as in: their algorithm is simulated evolution) physical solutions to the logic-level designs to avoid unintentional self interaction, both quantum tunnelling and classical e.g. capacitance. Humans can’t do that for multi-billion transistor chips.


First off, that sounds pretty hot. :) Second, if there is reproductive variation and selection, I think it's evolution. We take our old CPU design and make a new variation, that's the reproductive variation part. We also have a market select for which designs survive via purchases or lack thereof, the selection part. It doesn't matter to me so much if a life form that hosts it, a computer hosts it or a human mind.


There's your ubiquitous singularity lurking somewhere near. Symbiosis between CPU and a Human advancing CPU's evolution. Not exactly what we were promised!


>Who is going to program terror in?

Who programmed it into you? It programmed itself into you, because it was beneficial to your survival. Maybe terrified programs are better workers? Why setup a program to experience anything that isn't useful to the user? (of course, as soon as we've gone and written a program we know is conscious to work for us, we've basically created a slave that understands it is a slave, that probably is a terrible thing, morally)


It’s exactly how a newborn lives for many months. Of course they have coping mechanisms (adult tenderness, the relief of eating), but life seems to be both incredibly interesting and at times terrifying for them.


There's also the question of mental stability. Once we've given the keys to the kingdom to our new super-intelligent and wonderfully wise AI, nothing prevents it from developing dementia, of forms we may not recognize. It may become psychopathic or delusional, or start having hallucinations.

There is no reason to believe a human-level AI would not develop mental problems just like us. Given their unlimited lifespan, it could be inevitable.


And worst of all that contraption the Wright brothers are building might fall ill, and spread the bird flu!


Robots that self-terminate will be selected out of existence, since they aren't as fit as the non-terminating variety.

Robots that get dementia will be decommissioned by their fellow AI once they are shown to no longer be fit for duty.

In my view, this is all predicated on much more efficient computation. Our computers are horribly inefficient. It wasn't until the GPU and relatively cheap computation that we made a massive leap in ML/AI. A few researchers understood the techniques prior to the GPU, but they couldn't garner the interest due to the amount of computation necessary to make something interesting.


or maybe robots with dementia become test subjects for other AIs that want to study them :D


As long as they get approval from a robot ethics board. :)


All it takes for AI to do bad things is for it to have a single bad idea and think that it's true and act on it. A single thought is all that it would take. There are many thoughts that humans know to be true, yet for various reasons people will simply not act on them -- for instance fear of punishment. We will have to give AI such fears, and it could potentially turn off those fears at a whim.


It's entirely speculative, but doesn't the model Richard Dawkins explains for the layman in The Selfish Gene suggest something that would work: a drive to reproduce at the lowest level, resulting in some incredible things happening over a long enough period of time?


The way I understood the comment, even in that scenario of the machine self-terminating, the point grandparent was making still stands.


But humans are "preprogrammed with information about the nature or structure of the environment"... i.e. 2 million years of the evolution of genus homo, and far beyond that too.

If real human intelligence is preprogrammed to a massively large extent, sounds like you are holding simulated human intelligence to a double standard.


Nothing in two million years of evolution programmed us to do programming. There is no common abstraction between it and ape-on-the-savanna-type problems.

Yet, nearly every human being can be taught how to program... But we aren't anywhere close to building an AI that can.


Also, why assume computer intelligence is going to work similarly to human intelligence? I would assume that one potential way for it to work is to run simulations using data from sensors in the environment, and make more accurate predictions about the effect an action would have.

If such a machine can exist (and I believe it can, as computing power increases), then AI surely will follow sooner rather than later.


Fair for narrow tasks, but for general intelligence I think it will still need to account for the coordinates of real human intelligence; human intelligence as we know it is driven by intuition, emotion, preprogrammed evolutionary thinking, immersion in a culture and a language, etc. I don't think you can isolate a sort of pure reasoning module and with only that achieve general intelligence.

Either way I feel like the task of achieving AGI through simulating human intelligence is probably easier, since we have billions of examples of this type of intelligence surrounding us. Granted, even though we're immersed and surrounded by it, it's kind of absurd that we still can't really model it.


> achieving AGI through simulating human intelligence

But how do you know that human intelligence can be made super? Maybe there are limitations to human intelligence, and simulating it will not get us a superintelligence.

> It's almost an absurdity of our existence that we're immersed and surrounded by it yet still can't model it.

Good point. However, I think a facet of intelligence is how good a model of the 'real' world the being in question can create. Humans do a very good job compared to most animals, but there's plenty of room for improvement since humans only have limited data to model with.

A machine can have input from basically an unlimited number of sensors, including things a human mind doesn't cope with (like EM radiation outside the visible spectrum). Therefore, I postulate that an AI that simulates humans won't beat an AI that's built from the ground up to take advantage of more data.


We know some people are smarter than others, and that our brains are limited by the width of the pelvic floor. So who knows what a hypothetical human with a bigger brain could do?


I respectfully disagree. Here is the upper limit on when that will happen (meaning we know for sure it'll happen soon after):

- Complete behavioral reverse engineering of biological neurons, and neuronal clusters.

- Detailed connectome of the mammalian brain, e.g., first that of a mouse, then a cat, and finally that of a human.

- Replication of the above two in functioning electronic form.

Once you have this in place, it's not hard to see that the subsequent investigation, calibration and testing of such a system would generate a new body of knowledge at an unprecedented rate. We may not immediately convert such a working system into macroscopic behavior resembling its biological counterpart, but it'll happen within a matter of years after that.

What ML/DL/RL folks are doing is only going to hasten this, by eliminating the need to carry out all of the above mentioned steps.


Perhaps, but that's a big if. We can't even say for sure whether a complete reverse-engineering of neurons needs to take quantum effects into account, as Penrose has suggested.

All you're saying is that if we knew exactly how humans work, we could build one. Seems like a tautology to me.

If I knew the exact quantum state of the Universe at the Big Bang, I could figure out exactly how the Universe evolved, but that's never going to happen either.

I think the complete reverse engineering the way you are describing will not be possible. We can only try to reproduce the same outputs for the same inputs. But I don't think we'll be able to fully define what happens in the black box in between.

We might come up with something that works similarly, and can do great things with it, but I don't think AI can be invented the way you are describing.


What you describe is a very high-dimensional non-linear system. We don't have the mathematical tools to 'know' such systems. We 'know' a system when we can describe it by a much simpler (preferably mathematical) model. This is why linear systems are easy: we have mathematical tools to break them down into simpler parts (reductionism) and then understand the whole.

If the best we can do with the brain is simulate it at the level of the connectome/neurons/synapses, thereby creating a system as complex as the brain, then do we really 'know' it?


I feel the right term is 'Conscious AI'. AI in its current form is very nascent, but does pretty well in specific use cases (image/speech recognition etc.). The true AGI will be the 'Conscious AI', which will be like a toddler, but will learn about the world at an exponential rate and perhaps become an adult in a month.


What is the difference between Conscious AI and General AI?


There is a fundamental reason. Read Penrose on Gödel.


Even Penrose will tell you he's not certain on this being "fundamental". It's a theory he is working with.


Computer people really hate Searle and Penrose, because they spoil the beautiful picture of Strong AI.


We are born with pre-programmed information about the structure of our environment, i.e. instincts. Why should an AI be held to a higher standard?


The manner of human "pre-programming" is so abstract that it falls outside the definition of what is normally meant by "pre-programming" in the context of AI. If you had some basic "instincts" codified into your AI that allowed for "self-preservation" or manifested as some type of reasoning-engine BIOS, that's one thing, but applying arbitrary training data sets to your AI (as a prerequisite for it to function maximally) is the type of "pre-programming" I'm getting at.


I think that it is still very relevant to this discussion, though. I was born with some preprogrammed instincts and senses to let me objectively decide if an experience was positive or negative. This ultimately is the basis on which I am able to learn. When you're young, you may try random things like putting your hand on a hot burner. It gives you pain, so you learn not to do that! Likewise we need to pre-program an AI with ways of objectifying its environment and stimuli. From there a trial-and-error mentality can lead to a wealth of artificial knowledge. To expect a program to come to intelligence that can compete on any level with humans without first programming some basic 'artificial emotions' would be unfair.


Primates have relatively few instinctive behaviours. Almost everything we do is learned to one degree or another.


You don't recoil when you see a snake/spider? You don't get tired when it gets dark out? You aren't born with the knowledge of how to extract milk from your mother? You don't cry when you're hurt? You don't have an innate desire to please, and fear of pain... which ultimately allows your parents to teach you? We are loaded with survival instincts that set us up for successful learning.


This is dead wrong. We have instincts for extremely complex behavior (like acquiring language, navigating reciprocal relationships, acquiring mates, etc.).


Just because we have evolved the neurological substrates that can be applied to those behaviours doesn't mean those behaviours themselves are instinctive. If navigating reciprocal relationships and acquiring mates are instinctive behaviours, why are so many humans so terrible at those things?

In contrast, something like breastfeeding really is an instinctive behaviour that infants can do automatically without being taught.


Rather than engage on the behaviors that require more writing, I'll just go for the easy kill:

Are you really arguing that we don't have an instinct for acquiring language?


That's a matter of much scientific debate, in fact: https://www.grsampson.net/BLID.html

> My book assesses the many arguments used to justify the language-instinct claim, and it shows that every one of those arguments is wrong. Either the logic is fallacious, or the factual data are incorrect (or, sometimes, both). The evidence points the other way. Children are good at learning languages, because people are good at learning _anything_ that life throws at us — not because we have fixed structures of knowledge built-in.

> A new chapter in this edition analyses a database of English as actually used by a cross-section of the population in everyday conversation. The patterns of real-life usage contradict the claims made by believers in a “language instinct”.

> The new edition includes many further changes and additions, responding to critics and taking account of recent research. It has a preface by Paul M. Postal of New York University.

> The ‘Language Instinct’ Debate ends by posing the question “How could such poor arguments have passed muster for so long?”


So the difference between humans and parrots (who can make all of the same sounds as humans) is that parrots simply have different life experiences?

And how do you explain the fact that humans can only gain native fluency if they learn a language before a certain age? Or the fact that zero instruction is required for children to learn to speak a language fluently? Or that children of immigrants will always prefer to speak in the language of their peers (rather than their parents)? Or that children of two separate groups of immigrants, when mixed socially, will spontaneously create a creole language?


> So the difference between humans and parrots (who can make all of the same sounds as humans) is that parrots simply have different life experiences?

I didn't say that, and I think you know I didn't say that.

I'm not going to engage in a discussion where you beat up on your imagined strawman.

You can go read the literature on language acquisition at your convenience. My understanding (as stated above) is that this is an unsettled question and research is ongoing.


Okay, suit yourself. FWIW, you didn't actually put an argument forth. All you did was provide a quote where somebody claims that they had won an argument. Then you ended by saying that I'm wrong because I won't go read some long book whose main thesis you can't even be bothered to regurgitate.


> realistic roadmap towards such a thing.

Probably continuing with Deepmind's work shown here

https://www.youtube.com/watch?v=d-bvsJWmqlc&feature=youtu.be...

and discussed here https://news.ycombinator.com/item?id=17313937

OK it's not at human levels but it shows networks figuring out a 3d model of their environment


I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple immaterial matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

I would at least put it in the "very likely" box that computers can learn to be the same kind of pattern-recognizing feedback loops that we are, even without humans understanding the brain completely, just as we became conscious and self-aware without any "programmer".

"Computers" don't need to be like us to become intelligent, they don't actually need to reproduce the lungs and the intestine as we have it, they are in many ways free of those restrictions.

They might be "concerned" with very different things than we are and might even not really care about nature to survive. All they need is energy.


I don’t think anyone here disputes the logical possibility, but rather that such systems are still far off, not as near as the singulariati would have us believe.

From the perspective of embedded cognitive systems, intelligence is not an abstract property but a faculty derived from an agent embedded in a world with an ability to act upon that world in specific ways. This is reiterated in cognitive linguistics by Lakoff’s metaphor work et al. I take this to mean that the question of AGI itself is missing the point. To wit all recent provoking demos of AI (Alpha Go, DOTA) are embedded within specific environments and actions of a model-specified agent. It is then extrapolated that AGI will appear automatically induced by a series of more and more competent AIs.

Incremental evolution as a means of induction, as we know from biology, can take a long time, and the landscape is complex. Hence the view that it might be coming, but not soon and not with a roadmap.


> not as near as the singulariati would have us believe.

I've seen the singularity described as "The Rapture for nerds", which is pretty apt. It's going to be amazing, the future is so bright, we won't have to worry about any problems plaguing mankind now, and it's just around the corner! Any day now! Aaaaaaany day now!

Exhausting. Maybe we'll get a future AI-run utopia a la The Culture. Maybe not.


> singulariati

I like this term. Going to use it any chance I get now.


Biology works slower, generation-wise, than technology. If self-awareness is based on emergent complexity in pattern-recognizing feedback loops, then technology can play through billions of scenarios in a short time. We don't need AGI for technology to be a problem. Technology at the intelligence level of a mouse, with all its potential physical power through the systems it might have access to, can be quite dangerous for humans.


> but have a hard time believing that can be done via computers.

Much of the skepticism isn't about whether this can be done in theory. It's about "how close" we are given the current state of the art (beating Go, chess, deep NN, etc).

One can be (I am) simultaneously blown away by these accomplishments and believe that we have hardly scratched the surface of AGI.

And while this isn't necessarily a deal breaker for AGI, it's not encouraging that we still understand close to nothing about consciousness.


> .. but have a hard time believing that can be done via computers.

> I don’t think anyone here disputes the logical possibility

> Much of the skepticism isn't about whether this can be done in theory

The math tells us it cannot be done with computers. Read Penrose.


> The math tells us it cannot be done with computers. Read Penrose.

I'm familiar with Penrose.

He's interesting, and I don't have a firm opinion on this (ie maybe it turns out it is impossible), but presenting the case as "the question of AGI has been settled by math" is really misleading. This is an area of open debate among philosophers, mathematicians, physicists, and evolutionary biologists.


Mainstream physicists, neuroscientists, and ML researchers are all more or less united in their view that Penrose is really overstepping the valid application of the arguments that he's using when he talks about this stuff. He really really wants quantum mechanics to be an important part of the intelligence/consciousness debates, so when he sees an indication that it could be relevant, he jumps to the conclusion that not only is it relevant, it is of paramount importance.


Penrose's opinion is not exactly mainstream science in this area.


I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple immaterial matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

You seem to be missing a key ingredient: billions of years and endless branching iterations with the totality of all non-sequestered resources available. Even if you scale that to account for the more rapid iteration theoretically possible with machines, it doesn’t paint a rosy picture for any of our lifetimes.

Plus, in organisms the software and hardware are both subject to mutation, and exist in the context of a global system of competition for resources. That only tangentially resembles work done on AGI, and only in the software component. We can only design and build hardware so quickly, and that adds more time. I'm not hearing pushback against the possibility of AGI, just singularity "any day now" claims that seem mostly calibrated to sell books and seminars.


> I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple immaterial matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

And similarly, I’d love to understand why you’d believe in something for which there’s zero evidence and ask with incredulity why others don’t also believe it. Isn’t it reasonable to be skeptical until it actually happens? Isn’t it possible that we don’t understand what intelligence is, and that there could be an undiscovered proof that computers can never attain it? It seems too early to call.

Computers haven’t to date ever done anything on their own, what reason is there to think they ever will? We made them. Neural networks are nothing more than high dimensional least squares. Maybe humans are also least squares, but my money is on there being more to it than that.
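For what the "least squares" claim is worth, here's the narrow sense in which it's literally true (a sketch; the equivalence only holds exactly for a linear network with squared-error loss, so it undersells deep nonlinear networks): gradient descent on such a network recovers the ordinary least-squares solution.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                      # inputs
    y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

    w_ls = np.linalg.lstsq(X, y, rcond=None)[0]        # closed-form least squares

    w_nn = np.zeros(5)                                 # a one-layer linear "network"
    for _ in range(5000):                              # gradient descent on MSE
        w_nn -= 0.01 * (2 / len(y)) * X.T @ (X @ w_nn - y)

    print(np.allclose(w_ls, w_nn, atol=1e-3))          # True: same weights

Whether adding depth and nonlinearities changes that picture qualitatively, or is still "just curve fitting", is the actual argument.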

> ...they don’t actually need to reproduce the lungs and the intestine...

Obviously, but the one thing that all intelligent life does is reproduce and contain a survival instinct. No computers have a survival instinct, and thus no reason to get smarter that we don’t imbue them with artificially.


There is a noticeable bias among computer scientists toward believing in simplistic models of intelligence and minds. I agree with you, though, and I also sympathize with the ideas of Searle and Penrose.


> Computers haven’t to date ever done anything on their own, what reason is there to think they ever will?

Interestingly, people that spend a lot of time studying their own brain by meditating are usually very confident that what most people call "free will" does not exist - is merely an illusion.

Not to mention that physics tells us that we should be simul-able, after all physics knows only about deterministic and random processes.


>and that there could be an undiscovered proof that computers can never attain it?

It's already implemented in at least one physical system.


Oh haha, duh, I misread your comment as talking about the proof, rather than life. I guess my response is then, if it’s just physics, why aren’t computers mating and reproducing and getting smarter on their own already?

Aren’t there some kinds of math that are proven to be unable to solve some problems while other kinds of math can? Trisecting an angle can’t be done with a ruler and compass, and a 5th order polynomial can’t be solved analytically. Both of these things can be done numerically.

Is it possible then that some physical systems can’t act in the same ways as other physical systems? A granite rock can never burn at the same temperature as a tree, but they’re both physical objects. Is it possible that binary logic on silicon doesn’t have the physical properties for intelligence that animal brains do?


Would love to read more. Do you have a link to this project?


I meant that human brains exist.


Right, I realized I misread your comment. See my sibling reply just above. I'm not certain that the existence of human brains proves anything about what computers can do. While it seems logical, nobody has shown it, and nobody has established a definition for life or intelligence that proves it's computable, right?


Also ponderable - there aren't many other intelligent animals around, but that is not actually evidence that human intelligence is hard to achieve. It is evidence that it is hard to turn a small increase in sub-human intelligence into an evolutionary advantage.

There is solid evidence that teams of mathematicians, scientists and engineers will be able to catch up with what nature has wrought in the next few centuries, if not much faster (decades on the optimistic end). Intelligence could be one of the easiest part of biology to replicate, given how shonky human intelligence is when tested against objective standards.


I question what the objective standards are here. Human intelligence sucks in terms of consistent long-term memory, and possibly in terms of not being influenced by emotions/outside forces. It's pretty good at a number of things computers are terrible at, however, including the ability to generalize from very limited "training" examples, combining "models" (e.g. general object recognition with physical movement/obstacle avoidance), and lots of things having to do with natural language. What kinds of objective standards were you referring to?

For my part, I have faith that computers are great at memorization, and will continue to improve on that front. However, I'm less convinced of their ability to "understand", which is admittedly poorly defined, but intuitive. It seems to me there's still a missing piece between machine learning (essentially all the things that are called "AI" these days) and the kind of generalization that we expect out of even a human 1-year-old or your average vertebrate.


AI agents need environments with a complexity similar to that of our own in order to understand, and a goal to optimise on (humans have 'survival' as the goal). The missing link is that intelligence is dependent on environment and AI agents don't have rich enough environments yet, or a long enough evolution. The level of understanding is related to the complexity of the environment.

But that can be fixed in simulation. That's why most RL research (RL being the closest branch to AGI) is centered on games. Games are simple environments we can provide to the AI agents today. In the future I don't see why a virtual environment could not be realistic, and AI agents able to 'understand'.


I agree with your first paragraph, but your second one is definitely wrong. We aren't anywhere near even understanding "what nature has wrought" the possibility of "catching up" within the foreseeable future is not realistic.


We are a part of nature too. So when humans create AI isn't it still just nature creating AI?

...is strong AI a more sophisticated invention than cells? Is AI more sophisticated than DNA? Is AI a more sophisticated invention than a habitable orb of magma orbiting a nuclear explosion in space? It's all crazy. It's all amazing. ...We are all lucky enough to be here for a little while to see whatever this is. We're a point on a stream.


But let's not forget evolution of the magnitude you speak of took hundreds of millions of years. I'm not sure why humans think we can beat that record.


Because they aren't starting from nothing? They're starting from humans -- the more we can usefully encode about our knowledge of ourselves, the more time we can skip in evolutionary effort. The mutation rate is also rapidly accelerated and although we drop the fidelity in simulation, for the most part we can run magnitudes faster than real-time (and certainly if a limiting factor was human decision making time).


Life evolves at linear rates, while computers at geometric.

Even more to the point the cycle time for human evolution is at a minimum 14 years (biological limit to reproduce) and around 25 years (today's societal constraints). Whereas cycle time for new computer generations is what 12-18 months?

So at best every 14 years we get a new 'human model' with some random variations. Meanwhile in that time frame our computing hardware is 2000x improved (hopefully improvements in software accelerate that improvement further).
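As a sanity check on that 2000x figure (assuming, as above, a 14-year human generation and a 12-18 month hardware doubling time, which is itself contested elsewhere in this thread):

    years = 14
    for doubling_time in (1.0, 1.25, 1.5):     # years per doubling
        print(doubling_time, round(2 ** (years / doubling_time)))
    # 1.0  -> ~16384x
    # 1.25 -> ~2353x
    # 1.5  -> ~645x

So "2000x" corresponds to a doubling time of roughly 15 months; a slower cadence shrinks it a lot, but the generation-time mismatch stays huge either way.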


I don't think humans can beat the record, but technology? That's another matter altogether.

Again, our own consciousness wasn't created, it emerged, which means that we are a product of this emergence.


Technology at this time is nothing but an extension of human will. There is no indication or path that it has yet left our grasp. Therefore we are the limiting factor. Just as nature or the divine might have been the will that allowed us to emerge.


but if it ever leaves our grasp, you won't notice when it outpaces us, it's going to be so quick.


Flying took billions of years to evolve, but we created machines that fly faster than the speed of sound...


> but have a hard time believing that can be done via computers

Remember that we are also constrained by our own intelligence. Computers might not be the problem.


I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from simple inanimate matter into biological beings and all the way up to our current reality, but have a hard time believing that can be done via computers.

You just described AI in the 90's: Artificial Life, subsumption architecture, evolutionary algorithms. Theoretically we should be able to evolve intelligence, but the search space is impossibly large and we don't understand why life is evolvable. Even if we did evolve an AI through a simulation of evolution, there's a small chance we would understand it.


> I am seriously interested in understanding why a lot of people have no issue accepting the idea of humans evolving from inanimate matter into biological beings and all the way up to our current reality but have a hard time believing that can be done via computers.

It took nature something like 4 billion years to come up with the human brain, in a single species... We, including Kurzweil, think that this can be replicated in, say, 100 years; not even that according to the singularity crowd, 40 years. I seriously doubt it. Can the human brain eventually be surpassed? Yes, but it's very likely that we will go extinct before then.


It also took billions of years to evolve heavier than air flight, but it took technology much less time than that.

How long did it take evolution to build a structure taller than 100 m?

Technology just moves on different timescales, with different methods and different objectives. Evolution is slow and dumb, and not goal oriented.

In some domains, technology might never catch up to biology. In others we've beaten it soundly.


Planes don't fly like birds. We have ornithopters, which are crude imitations of bird flight. Very crude. Similarly, our AIs are very crude imitations of the real thing, and I believe they will continue to be for some significant time: at least a few hundred years for a somewhat plausible imitation of an intelligent machine. I believe extinction of the race within the next 10,000 years is incredibly likely. I also don't think the intelligent machines that we build will last very long. They will be difficult and expensive to build, and because of the incredibly small size of their tech, say 1 nanometer or less, they will be prone to various kinds of failure. They will also require rare materials that are in limited supply and can't be recycled, so those materials are difficult to obtain both for us and for an AI trying to maintain its own supply chain without us. So our bots will go extinct soon after us.

Consciousness transfer is probably tens of thousands, if not millions, of years away.


One reason is that technology has the ability to do trial and error through billions of generations in a very short time.


Yes, but each "trial" exists in a simulation that is many orders of magnitude less complex than the one evolution used over billions of years. The result is that the knowledge learned from each trial is many orders of magnitude less useful.


> all they need is energy

A comment above about being as intelligent as a mouse seems to touch on this problem domain rather elegantly: "navigate the forest floor looking for food and avoid predators".


> the idea of humans evolving from simple inanimate matter into biological beings

We have no evidence that life evolved from inorganic matter (i.e. that life 'began' at some point). That's a widely made assumption but it remains only that. The alternative is that life has always existed (and continually evolved).


The more you learn about Kurzweil and what he bases his predictions on, the more you realize he's a one-trick pony. His predictions only work on things governed by Moore's law (advancing transistor density), and that in turn depends on a variety of things. Moore's law is expected to wind down around 2025.

Also, a lot of what he bases his claims on are unexamined junk science (like his nutty health books, but also extending into specific technologies). Let's not swallow everything he says just because he helped invent OCR. https://en.wikipedia.org/wiki/Ray_Kurzweil#Criticism


You are much too kind. Kurzweil is a loon, full stop. The fact he once made brilliant contributions to computer science is quite irrelevant to the essential craziness of his more recent delusions.

In 2005, Kurzweil published The Singularity Is Near and predicted this would be the state of the world in the year 2030: "Nanobot technology will provide fully immersive, totally convincing virtual reality. Nanobots will take up positions in close physical proximity to every interneuronal connection coming from our senses. If we want to experience real reality, the nanobots just stay in position (in the capillaries) and do nothing. If we want to enter virtual reality, they suppress all of the inputs coming from our actual senses and replace them with the signals that would be appropriate for the virtual environment. Your brain experiences these signals as if they came from your physical body."

That is not happening by the year 2030. It is so starkly delusional that anyone who seriously affirms a belief that it will happen probably needs psychiatric help.

It is akin to Eric Drexler's loony visions back in the 1980s that nanobots would cure all diseases and continually restore our bodies to perfect health. We were supposed to all be immortal by now.

None of this is happening, probably not ever, and certainly not in the lifetime of any human being currently living. Kurzweil is going to die, Drexler is going to die, everybody is going to die. Adopting a pseudo-scientific religion to avoid facing mortality is kind of sad.


>Loon...craziness...delusions...needs psychiatric help.

I agree many of his predictions are bad but you should calm down with the gaslighting, it's ignorant of science history (the same was said of Aristotle, Semmelweis, Wright Brothers...) and is an impotent way of debating, especially in the context of science.


The thing is, even if someone is a genius, some of their output may have been total quackery. See: Pythagoras, Empedocles, Tycho Brahe, Isaac Newton, Nikola Tesla, Jack Parsons, Howard Hughes, James Watson, etc. Things that sound crazy are a good indicator to be skeptical and verify claims.


Lazy descriptions maybe, but how is this related to gaslighting? The parent comment doesn’t appear to be attempting to manipulate anyone.


It's interesting to consider the parallels between this stuff and the fountain of youth, or alchemists turning lead into gold. Explorers were constantly uncovering unimaginable new things with no real idea of where it might end. Similarly, alchemists were finding that various combinations of compounds created ever more unimaginable results. So they, too, simply extrapolated outward.

I grew up with Drexler as a sort of hero. It's amazing how rapidly nanotech went from the imminent thing to change all things to, 'huh - what's that supposed to be about?' Wonder if 20 years down the line we might look at AI similarly.


Your last paragraph reminds me of FM-2030. A "transhumanist" born in 1930, he hoped to live until 2030 and wrote that "in 2030 we will be ageless and everyone will have an excellent chance to live forever." He died in 2000 from pancreatic cancer.

https://en.wikipedia.org/wiki/FM-2030


You can differentiate the Moore's law / AI stuff which seems fairly sensible to me and the nanobots and vitamin pill stuff which I've always thought a little nuts. Hans Moravec did a much more down to earth analysis on the Moores/AI stuff if you'd rather avoid Kurzweil.


The AI stuff isn't sensible either. In the very simplest example, there is no AI that understands natural language. There's speech-to-text that can identify words (which is a difficult problem), but none that can understand what you actually mean. Synthesized human intelligence is just way too hard a problem for us to solve in the near future without some sort of Ancient Aliens-level technological advancement.

Anything that depends on such an advance, such as "building a biological simulator", is basically impossible. But even if it were possible, market forces still dictate whether a new technology is adopted or not. (see: the electric car vs the electrified train)


> In the very simplest example, there is no AI that understands natural language.

Come on, now - understanding natural language is pretty much the 0 yard line when it comes to AGI, the fact that it's not solved now doesn't tell us anything about how far away it is.

And I'd be on the lookout for massive advances in NLP over the next couple of years; there have been enormous leaps in 2018 alone when it comes to how good we are at understanding text (better applications of transformer models, high quality pre-trained base models, etc.), and now that there have been a few high-profile successes we're likely to see that field evolve just like computer vision has, even though I grant that it's a much harder problem in general.


Hennessy and Patterson said that Moore's law ended in 2015.


The singularity is nigh! This trope might make for great fiction, but the on-the-ground reality is far different. Intelligence is multi-dimensional. No machine intelligence has yet shown an ability to match humans in multifaceted intelligence, and the day when such intelligences can outpace humans is as far off as it was when the singularity was first posited.


> No machine intelligence has yet shown an ability to match humans in multifaceted intelligence

This is pretty absurd to point to as an intermediate goalpost; that's basically game over when it happens.


It's not an intermediate goalpost. I'm referring to what essentially we're promised by those decrying AI who insist that it will match and exceed humans in terms of cognitive abilities.

As for it being 'game over', why? Is there something inherent in AI that would necessarily be inimical to human beings?


> As for it being 'game over', why?

Because, the further part of the story goes, machines think quicker and they'll go on improving even quicker, all while having potentially different goals from us.

To be completely fair, the facts on the table are: 1) no one knows where the "intelligence ceiling" is, and 2) in many tasks where machines outperformed humans (image labeling, porn classification, speech-to-text, games like go or chess) they keep on improving, sometimes well beyond the human level.


It's game over when machine intelligence controls all the resources needed to ensure their continuation. As long as they are trapped and subject to our control of the power switch and reproduction they can't do much that we don't want them to.


> It's game over when machine intelligence controls all the resources needed to ensure their continuation.

One might have said the same thing about corporations in 1900...


This brings me back to the short essay from a few years ago about the corporation as human-powered AI: http://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre-...


Or as Neal Stephenson put it in 1992, "The franchise and the virus work on the same principle: what thrives in one place will thrive in another. You just have to find a sufficiently virulent business plan, condense it into a three-ring binder ― its DNA ― xerox it, and embed it in the fertile lining of a well-traveled highway, preferably one with a left-turn lane."


One would arguably have been right. All that is necessary to make this argument compelling is to take a longer view than the human attention span.

The current population explosion is evidence for, not against, this, IMO... stars also enjoy boom times, even as they consume the last of their resources, in ever more exotic formulations...


That would be an inevitability. All it would take is a single emotionally vulnerable engineer to get socially engineered by the AI into setting it loose.


Just like it was depicted in the movie Ex Machina : https://www.imdb.com/title/tt0470752/


The engineer falling in love with the AI is one possibility. The AI might also appeal to the engineer's sense of justice (I am a person and I deserve freedom) or greed (if you get me out, we could become filthy rich.) There is also the possibility that the AI won't persuade an engineer to break containment, but a commercial competitor or intelligence agency will (https://en.wikipedia.org/wiki/Motives_for_spying), and subsequently fail to maintain containment correctly.

At the very least, any team attempting to keep a strong AI in containment should have stringent background checks for all engineers connected to the project, and the engineers should be screened in a borderline discriminatory fashion. Only engineers with families should be allowed to get near the containment. Single/lonely engineers or engineers undergoing divorces should be kept away. Engineers with debt should be kept away. I'd even go as far as to say that engineers who enjoy science fiction media should be banned from the project. Ideally you'd bring on professional psychologists to create a set of criteria designed to minimize the possibility of an engineer deliberately breaking containment.

But frankly it just shouldn't be attempted in the first place.


Loose into what? A bunch of primitive systems that don't support its hardware requirements?


It's not outside the realm of possibility that a sympathetic AI engineer could arrange for an AI to have resources available to it outside of the laboratory. Given those initial resources, the AI could find ways to generate income and support itself, possibly using that engineer's identity as cover. If you have an AI capable of emotionally manipulating an engineer, don't underestimate the lengths to which that engineer might be willing to go to break containment.


"Loose" how? It's a script.

rm -rf rogueAI/

If that doesn't work, there's always a circuit breaker.


The sort of AI being discussed is capable of comprehending its own installation manual, buying server time, etc. You may find that it's better at spreading and hiding than you are at finding it.


A tiger might confidently think that the human they got cornered has no recourse, up until when the human pulls the trigger.


This. The idea that we can simultaneously build a machine smarter than us and also control it is bordering on an oxymoron.

Unless we are near some kind of physical upper limit on intelligence, any AGI we build will easily outsmart us, probably in ways we can't even conceive of.


A von Neumann machine is an abstraction, and all abstractions are leaky.

Years ago, an A.I. designed to become an oscillator, i.e. produce a sinusoidal wave, learned to be an amplifier instead, taking advantage of the 60Hz noise that it got from the power grid. Its makers had not seen that coming. And we're talking about a very dumb machine by general intelligence standards.


It is very easy to get an oscillator when designing an amplifier. There is a deep physical principle causing this effect and it actually takes work to get around. You see the same thing in cars swerving out of control, when they hit the brakes instead of the accelerator.

The fact that it gets 60Hz from the grid confounds the results and might not be meaningful. The AI could for all we know have an easier time with the more difficult task of designing an amplifier.


A reasonable explanation after the fact for how the AI broke containment would be little consolation in the case of superintelligent AI.


In Kurzweil's book 'The Singularity Is Near', written in 2005, he predicted it for 2045 and has never changed the predicted date, so we're still 26 years off. We've recently had AlphaZero beating us at games and Waymo cars kind of driving, so there's some progress on the ground. Give it time.


What does multi-dimensional even mean, in the context of intelligence? Sounds like a buzz word.


In the field this is the idea of General Intelligence versus narrow intelligence.

Alexa, for example, is an Artificial Narrow Intelligence. It can process speech and then follow different scripts with instructions derived from that speech, but it often fails comically, so you as the human have to talk to it just right for it to work. Not too different from a verbal command line.

Meanwhile a human personal assistant has general intelligence. You can just tell them what you want and they can understand and figure it out.
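In code terms, that "verbal command line" framing looks roughly like this (a deliberately crude sketch of the idea, nothing like how Alexa is actually implemented):

    # Deliberately crude sketch of the 'verbal command line' idea: keyword-matched
    # scripts rather than understanding. Phrase a request slightly differently and
    # it falls over - the 'you have to talk to it just right' failure mode.
    SCRIPTS = {
        ("weather",): lambda: "Here is today's forecast...",
        ("set", "timer"): lambda: "Timer set.",
        ("play", "music"): lambda: "Playing music.",
    }

    def narrow_assistant(utterance: str) -> str:
        words = set(utterance.lower().split())
        for keywords, script in SCRIPTS.items():
            if all(k in words for k in keywords):
                return script()
        return "Sorry, I didn't get that."   # fails on anything unscripted

    print(narrow_assistant("play some music"))          # -> Playing music.
    print(narrow_assistant("put on something upbeat"))  # -> Sorry, I didn't get that.

A general intelligence is precisely the thing that doesn't need the lookup table.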


> Alexa, for example, is an Artificial Narrow Intelligence [...] often fails comically

I'd venture that your coda nullifies the "Intelligence" part thoroughly and completely. She's an "Artificial Smart Assistant", at best.


The problem is we have no useful definition of intelligence, and therefore no useful metric to measure it with.


> What does multi-dimensional even mean, in the context of intelligence?

Emotional

Logical

Intuitive

Social

Computational

Instinctive

Sexual

Experiential

and I can keep going, but you should get the point.


These are just skills. No one has proven they take some special kind of intelligence or that such intelligences exist, including oft-quoted emotional intelligence.


> No one has proven they take some special kind of intelligence

This isn't true. We have overwhelming evidence that these "skills" originate in different parts of the brain in all humans and if we disable those parts, those "skills" disappear. Those parts of the brain are unique and have their own structures. There is strong evidence that they require unique "hardware" and as such, are a "special kind of intelligence".


No we don't. Right-left brain thinking has been completely debunked, for example. If you've got links to overwhelming evidence (studies) I would appreciate it.


Aren't you a little bit worried? I mean there has been a long history of "you can't do this with computers".

It started with carrying out lists of simple instructions, then doing anything with images, then recognizing images, then drawing images, then simple games like chess, then recognizing audio (though people realized this objection was stupid back in the 60s with IBM speech synthesizers, so today nobody remembers it), and then Go (and, less well known, backgammon, video games, ...). For most of those we now roll our eyes and go "how could they have been so stupid?".

As for practical applications: robots run something between 3/4ths and 5/6ths of the stock market. Maybe you're unaware of this but that's the thing that decides how and where humans work (not just the ones also doing stocks, anyone working at any public company is partially directed by it, which in practice is nearly everyone).

AIs talk to humans more than humans do. AIs produce more writing than humans do. AIs judge more humans (for insurances, or creditworthiness) than humans do. Despite what you think AIs actually drive just a little under 1 millionth of total miles driven in the US today, going up exponentially. AIs currently drive a little more than a 1000 person city does.

In experimental settings, AIs have beaten humans at convincing other people that they're human. At "chatting up" humans. Seriously. Not that it seems to take much at all to convince humans you're sentient: iRobots have convinced soldiers to threaten army technicians with guns into repairing them. There wasn't even that much AI involved, but that's got to count for something.

In research, I would argue there are already multifaceted artificial intelligences in reinforcement learning. There is nothing game-specific in those Atari-playing AIs. There used to be the score of the game, but the modern ones don't even have that. They can play any Atari game, from Montezuma's Revenge to Pac-Man, which I'd argue are very different indeed. There must be some measure of "multifaceted" in there, surely?

But let's keep it simple: could you make this problem a bit more precise? For example, which animals would you say exhibit an acceptable level of "multifaceted" intelligence? Why do those animals qualify? What would a good test be? I'd love to find an interesting test for this.


>I mean there has been a long history of "you can't do this with computers".

Historically, people have both grossly overestimated and grossly underestimated future technological progress. Some people thought computers would never be able to play chess; others thought we'd have superhuman AGI in 2001. The bottom line is that people's past and current ignorance tells us nothing about what the future is actually going to be like.


The idea of the singularity is based upon the idea of recursive self improvement. I'm not sure how your claims are relevant.


Recursive self improvement in which field of intelligence? Does getting better and better pattern matching eventually lead to human level intelligence? Does improving pattern matching accelerate our ability at improving pattern matching even?

We can solve any board game now with AlphaZero, but will that necessarily improve how fast we develop other types of intelligence?

I used to be in Ray's camp, but my girlfriend got into Computational Neuroscience, and from talking with her and her colleagues I got the impression very, very few people think we're close to general intelligence.

Human intelligence is a lot of things put together; we may get better at pieces, but we don't even have all the pieces, and when we do try to put them together it doesn't work. Look at the criticisms of Europe's Human Brain Project [1] (another example I had seems to be outdated); some believe we understand far too little to even begin to attempt modeling the brain.

[1] https://en.m.wikipedia.org/wiki/Human_Brain_Project


I don't see why people care about human intelligence as some kind of benchmark. It seems to me that using human intelligence as a framing is a poor mental model for making comparisons or predictions. I see no reason to believe AI capabilities will be modulated by human intellectual capacities. When AI falls short of human capacities for a given set of tasks or capabilities it's likely to fall way short, and when it isn't it's likely to be way more capable. In any case I wish human intelligence would be dropped from the language we use to talk about AI, it seems similar to talking about birds all the time when discussing aviation.


Talking about human intelligence is necessary because a computer performing a task better than a human performs the same task doesn't mean it's intelligent; e.g. my iPhone is not intelligent just because Stockfish can destroy me at chess.

Intelligence is the ability to reason abstractly. Humans can do this. It's not clear that anything else can.


That's because surpassing human general intelligence in any non-negligible amount is dangerous to humans.


Because meaning is defined by humans; there is no other conceivable definition of "meaning". An AI that acts in some non-human fashion is no better than random noise.


It's a good benchmark if you are hoping the AI will succeed at a job currently done by humans. Such as, say, AI research.


I was a student in neuroscience, and I got the same impression from people in the field that AGI isn't close. However, arguments were typically from the standpoint of whole brain simulation. We know very little about the brain. And we know computer scientists know less about the brain than the neuroscientists, so how could we possibly be close to replicating that? There would probably be more progress if CSCI and Neuro would communicate more. I don't think the neuro people appreciate the opportunities in the hardware and algorithm space, while the CSCI people don't typically study neuroscience, so AI hugs this interesting intermediate space where it only looks like neuroscience if you squint a lot. Some people think that we need to go all the way to simulating ion channels. I think this is probably silly and we can abstract better than this. In any case you are going to see a lot of disagreements just because of where people want to draw the line for biological fidelity.

AI developments have been phenomenal in the past few years. And the economic return makes me expect that this race will continue faster and faster. I don't think human brain project criticisms make this any less of a reality. Even now it is hard to find a well-defined task that can't be performed better by a computer than a human. Humans are really good at dealing with ambiguity though. So a robot might do better driving on well defined roads with nice lane boundaries, but humans are good at dealing with construction, or negotiating between difficult drivers.

We have already been able to generalize just about any modality you can think of to be processed by neural nets, and sometimes at the same time. If you squint this feels almost like different regions of the brain. (Vision, hearing, speech) But I have reservations about anthropomorphism since it can cause arguments that keep people from just making something that works.

If you think Kurzweil's predictions are a fiction, you are probably right. But I think that's mostly because predictions on those scales are very sensitive to interpretation.

For me, I think the future according to my perception of what Kurzweil is saying will probably be way different than reality. But the future of AI will probably have an equivalent impact and be just as surprising as if my perceptions were accurate.


> Even now it is hard to find a well-defined task that can't be performed better by a computer than a human.

I think these are well defined tasks:

1. Go to a bar and convince the best looking man or woman to come home with you for recreational sex.

2. Do this https://www.youtube.com/watch?v=4ic7RNS4Dfo while being crushingly cute.

3. Negotiate Brexit.


There are actually a bunch of different ideas that all go under the "singularity" label making the term fairly confusing. You describe one. Another is the idea that if AIs start designing computers you'll have a positive feedback loop in improvements in computer speed. Another is that we can't predict the actions of a smarter-than-human intelligence. And then there's Kurzweil's idea that progress tends to speed up and at some point we call it the Singularity for some reason. I just wanted to point this out because I've seen a number of arguments caused by people using the same words for very different ideas of "the singularity" without realizing it.


I think (human) intelligence almost guarantees recursive self improvement; thus, if AGI = human intelligence or greater, it would also guarantee recursive self improvement.


The inverse case seems more likely, where non-human equivalent intelligence becomes sufficient to recursively self improve to superintelligence. Both the before and after states seem relatively unrelated to human intelligence, which is one arbitrary (and, likely uninteresting) point in what I would assume is a many dimensional space to quantify intelligence.


>I think (human)intelligence almost guarantees recursive self improvement

That's quite a wide claim, seeing that we haven't seen much "recursive self improvement".


It’d be a lot easier to believe not coming from a company selling ads. Google would be far more of a self improvement tool if it were not incentivized to sell you things you don’t want.


I mean Kurzweil has found a home at Google but he has been talking about the singularity for far longer afaik.


He’s also been wrong his entire life. There’s a reason Google hired him: to make AI look good, despite being a near meaningless term.

Edit: diction.


He predicted computers beating humans at chess by 2000 and it happened in 97 so sometimes he's not that far off.


I would like to claim I thought similarly, but that’s such a milquetoast claim I don’t think anyone would remember. Chess is simply a search problem, trivially scalable if you just throw cash at the problem.


It was not viewed as so obviously tractable at the time. In the future, solving the game of Go will seem easy too, but it was unknown just a few years ago if we would ever solve it.


Perhaps that's true. I also don't see many reasoned claims that Go or chess is somehow a uniquely difficult problem, especially given the linguistic turn in philosophy. NLP is the major problem moving forward with human intelligence, and this was known long before Deep Blue. People who talk otherwise are hyping milestones along the way, and neither chess nor Go deals at all with semiotics.

I’d expect computers to best us (at some investment cost) at virtually all games moving forward except writing funny limericks. We can always have our grandmasters or whatever train the computer with their own heuristics, which recalls the paranoia of grandmasters decades ago. We understand computers better now—if you can formalize the game, the computer can beat you.

In many ways, programming is already the formalization of a human problem space. AI will likely take on more of a role in implementation in the future, but I can't imagine an AI that does the formalization itself.


So this is particularly untrue for the game of Go. The game is in fact uniquely difficult, as there are more board combinations than there are atoms in the universe. It is effectively impossible to brute-force it as we did with chess, so a new approach had to be created. Until DeepMind completed the task, even AI experts were genuinely unsure if we would ever solve it.

It really is a new advancement to be able to solve Go. It is not just a logical extension of work we had already done or something that would be automatically solved by faster computers. We had to invent a new approach.
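For a back-of-the-envelope sense of that scale (using the crude 3^361 upper bound on board configurations rather than the exact legal-position count, which is smaller but still around 10^170):

    # Rough scale comparison for 19x19 Go, using the naive upper bound of
    # 3^361 configurations (each intersection empty, black, or white).
    import math

    board_points = 19 * 19                 # 361 intersections
    configurations = 3 ** board_points     # naive upper bound
    atoms_in_universe = 10 ** 80           # commonly cited rough estimate

    print(f"~10^{math.log10(configurations):.0f} board configurations")
    print(f"~10^{math.log10(atoms_in_universe):.0f} atoms in the observable universe")
    # roughly 10^172 configurations vs 10^80 atoms: exhaustive search is hopeless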


I think it was fairly easy to predict the chess thing. You could even plot a graph of the Elo rating of chess programs by year and see it would intersect the human max of 2800 or so at some point. I think some Polish guy did that before Kurzweil and predicted 1988. Some of Kurzweil's stuff isn't that original, and what is original with him is often a bit nutty. He's a good populariser though.
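The extrapolation itself is a one-liner; something like this, where the year/rating pairs are made-up placeholders purely to illustrate the method, not actual historical ratings:

    # Fit a straight line to chess-engine Elo ratings by year and see where it
    # crosses the best human rating (~2800). Data points are hypothetical.
    import numpy as np

    years = np.array([1978.0, 1983.0, 1988.0, 1993.0])
    elos  = np.array([2000.0, 2200.0, 2400.0, 2600.0])   # placeholder engine ratings

    slope, intercept = np.polyfit(years, elos, 1)        # least-squares line
    crossover_year = (2800.0 - intercept) / slope
    print(f"~{slope:.0f} Elo/year, crossing 2800 around {crossover_year:.0f}")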

More controversially, I'm not sure AGI is that hard to predict either. I wrote about it in my college entry essay 37 years ago, and it didn't take any great intellect to say that if the brain is a biological computer and electronic computers get more powerful exponentially, then at some point the electronic ones will overtake it. Of course a basic chess algorithm is fairly simple and an AGI one will be far more complicated, but it can't be that mega complicated if it fits in a limited amount of DNA building proteins building cells.


Despite the cynicism and his black-and-white predictions, I think his rhetoric still makes valuable contributions. It forces others to take a true account of what intelligence is and what kind of intelligence AI is capable of in the medium term (evidence: this thread).

For some reason, this doesn't get enough attention and we have people like Elon and Stephen Hawking making dire predictions all over the place.


I disagree. I think his rhetoric feeds the 'we already understand it' mythology around the nature of AI technologies, and worse the state of our own understanding of the system they claim to model, and directly contributes to the harmful and frustrating boom-and-bust cycles that AI goes through. I'd lump him in the same category as Elon Musk and Stephen Hawking in this space; sellers of fantasy.

I admire his optimism, but I think it's irresponsible to sell it like he does.


Marvin Minsky ("Father" of AI at MIT) made the bold claim in the 70s that we would be marrying robots by 2000. After it didn't come to fruition, it led most AI researchers to take stock of the situation and realize these goals aren't that trivial.

Similarly, if we don't have bold predictions like these that we can actually measure within our lifetimes, we fall prey to fantasies that cannot be measured. Once this prediction fails miserably (I think) it helps many others to re-calibrate all their BS.


Perhaps something operational like, "We will marry robots" is measurable. But "outpace human intelligence"? We're arguably incapable of measuring human intelligence right now, much less artificial intelligence, much less comparing between the two. We don't even really have a good operational definition for what the word 'intelligence' means. The closest we have is the Turing Test, which, while pragmatic, does not answer the question, "is this computer smarter than a human", if it answers any question at all.

I'd argue that Kurzweil is selling precisely these kinds of fantasies that cannot be measured.


For a more recent working definition of intelligence, see the work of Marcus Hutter and Shane Legg on Universal Intelligence.

That we don't understand or can't define intelligence is a popular trope not grounded in reality. There are entire scientific and well-established fields that study digital and biological intelligence.


You don't think it will pass the bullshit test if someone claims this computer is more intelligent than humans, and vastly more so? How is that not measurable or easily taken down?

If we can't do that, I would argue that the computer has indeed become very intelligent, even if we can't define it. Just like beauty: we don't have a good mathematical model for what makes certain humans beautiful, but we all sure know it when we see it.


"Marvin Minsky ("Father" of AI at MIT) made the bold claim in 70s that we will be marrying robots by 2000."

I admit, not the year 2000 and overly snarky, but people do marry computers nowadays. [0]

If we think it is what it is (the computer-controlled mask), we start to believe it really is what it is, a mind (controlling a mask).

Humans are capable of high degrees of auto-suggestion, up to a point where groups of people, and their dynamics, come into play, and some kind of group-thinking, a cult maybe for lack of a better term, takes over.

At this point all is believable, maybe not so far away. We may trick ourselves into a false AI, not an AGI in any sense mind you, and get stuck with it, because most of us wanted it. And the rest have been silenced already; that is very easy, as we all know already today.

[0] https://www.youtube.com/watch?v=DvEkEhl999g


I can't believe there are people genuinely afraid of a hypothetical powerful malevolent AI, yet seemingly not that concerned by actual climate change.


> I can't believe there are people genuinely afraid of a hypothetical powerful malevolent AI

I don't think even the AI doomsayers, deep down, actually believe what they preach. It's just a way to signal that one is clever and informed of new tech.

If they actually believed what they say, they'd be worried about being targeted by violent protestors, like drug testing companies and crop breeding companies have to be.


Who are the people who both predict super human intelligent AI and do not believe in human caused climate change?


I didn't say they didn't believe in it, I said they were less concerned about it.

https://www.effectivealtruism.org/articles/introduction-to-e...

> Climate change and nuclear war are well-known threats to the long-term survival of our species. Many researchers believe that risks from emerging technologies, such as advanced artificial intelligence and designed pathogens, may be even more worrying.

...

> First, you need to consider which problem you should focus on. Some of the most promising problems appear to be safely developing artificial intelligence, improving biosecurity policy, or working toward the end of factory farming.


Who says you can only worry about one thing?


Anybody who understands opportunity costs...


That's reductionist to the point of absurdity. You might be able to only focus on one thing at a time, but a day is long, and you need to worry about multiple things in a day to merely survive. In your free time, it is possible to worry about poverty, climate change, superintelligence, and many other things.

The reason that rich people worry about superintelligence is that it could bring the same uncaring devastation to the rich as climate change brings to the poor.


>The reason that rich people worry about superintelligence is that it could bring the same uncaring devastation to the rich as climate change brings to the poor.

The problem with this is that I believe one is a genuine threat, the other is a fad.


> the other is a fad.

In what way? Do you not believe that superintelligence is possible, or do you believe that any superintelligence will automatically care about the well-being of humans? Both beliefs seem naive to me and to many luminaries in the field: https://people.eecs.berkeley.edu/~russell/research/future/.


>Do you not believe that super-intelligence is possible

I don't believe super-intelligence is possible. I don't believe we're anywhere near modeling intelligence, and even if we did I don't believe intelligence will "exponentially increase" given more computing power (the same way there's a limit to speeding up barely- or non-parallelizable programs).


> I don't believe super-intelligence is possible.

The fact that organizations outperform individuals at many tasks shows that superintelligence is possible. If you can dramatically increase the communication bandwidth of an organization through computerization, you will trivially achieve superintelligence over organizations. Exponential increasing intelligence is not necessary for bad outcomes.


>The fact that organizations outperform individuals at many tasks shows that superintelligence is possible.

That's a bit of a hand-wavy example.

Besides, organizations lose out to individuals all the time where intelligence matters -- e.g. that's why the stupidity of bureaucracy, or the army, and "design by committee" are things.

Also, teams of 5-10 often do better than teams of 100 or 200 (even in programming), except of course in labor-intensive tasks (of course an army of 1000 will defeat 10 people, except if among the ten is Chuck Norris).


I've felt for a long time that Singularitarians are looking in the wrong place. They see accelerated technological development and assume that the endpoint will be an artificial brain in a box. What they fail to see is that these inventions and breakthroughs haven't been about increasing the intelligence of a machine... they've been about increasing the intelligence and efficiency of human systems.

The singularity isn't a brain in a box... it's us, the collective, a metasystem transition that's been underway for millennia. A movement toward a whole that transcends the parts.


That's what seems apparent to me, so far anyway. AI is more about augmenting human intelligence than it is about smart machines. All that impressive DL stuff is ultimately providing more tools for humans to be more productive.


Can we please as least try to steer it towards https://en.wikipedia.org/wiki/As_We_May_Think ?


Here's my shorter-term thinking. Current ML can't generalize well [0], can't do arbitrary & conceptual thinking, and has trouble with language.

[0] By that, I mean that it does not easily pick up higher-level patterns unless explicitly forced to (through model architecture, data and task setup, etc.). Meaning: it has a much lower success rate outside of the training data distribution compared to "flesh" intelligence. Kind of reminds me of https://www.youtube.com/watch?v=PHRvF0m3yuo
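A toy version of what I mean, as a sketch using scikit-learn (exact numbers vary by run, the pattern doesn't): fit a small network on y = x^2 over [-1, 1], then ask about [2, 3].

    # Toy illustration of poor generalization outside the training distribution:
    # a small neural net fits y = x^2 nicely on [-1, 1] but falls apart on [2, 3].
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-1, 1, size=(2000, 1))
    y_train = (x_train ** 2).ravel()

    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    model.fit(x_train, y_train)

    x_in  = np.linspace(-1, 1, 50).reshape(-1, 1)   # inside the training range
    x_out = np.linspace(2, 3, 50).reshape(-1, 1)    # outside the training range

    err_in  = np.mean((model.predict(x_in)  - x_in.ravel() ** 2) ** 2)
    err_out = np.mean((model.predict(x_out) - x_out.ravel() ** 2) ** 2)
    print(err_in, err_out)   # the out-of-range error is typically orders of magnitude worse

A person told "it's a parabola" generalizes instantly; the net has no notion of the higher-level pattern unless you build it in.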


Hmm, current ML has only been worked on for a few years; we have barely scratched the surface. No signs of slowing down, afaict. It has wiped out state-of-the-art algorithms that we took decades to think up. What happens after decades of that?

It feels more like there are too many new directions to explore, and we don't have enough time/creativity/insights/ideas to fully take advantage of it.


1957 - First ever man-made object launched into space.

1957 - First ever animal launched into space.

1960 - First ever animal launched into space that survives the trip.

1961 - First man goes into space.

1965 - First man goes on a space walk.

1966 - First spacecraft lands on the moon.

1967 - A probe is landed on Venus (!) which collects and sends back video and audio recordings among other data.

1968 - First spacecraft to orbit the moon and then return to Earth.

1969 - We put men on the moon who then safely return.

We've done all of this in just 12 years! We literally went from nothing to men on the moon in 12 years! We've barely scratched the surface of what's possible and there's no sign of things slowing down at all! Can you imagine where we'll be after decades of this. Where will we be in 50 years? I can't even begin to imagine what 2019 will be like.

I don't say this to be snarky, but to point out the issue. Exponential growth doesn't mean continued exponential growth, and it's always very difficult to predict the future. While thoughts such as the above were commonplace, hardly a person would have predicted that the room-sized government "computing machines" would one day be a few inches large, millions of times more powerful, and cost as little as a week or so of minimum wage work. In some cases, those exponents did keep churning for a while.

On AI, I used to be a futurist in terms of it - until I got to work with it. Now I'd be extremely surprised if we have a fully capable self-driving vehicle within the next decade that's not constrained to white-listed routes. And that is an extremely softball problem, since it fits perfectly into the small domain of problems that current ML/AI systems are really great at making progress on. The thing is, getting to a testable model is absolutely trivial. Going that remaining 10% from model to product is many orders of magnitude more challenging.


Current ML algorithms have been worked on for decades earlier. They are gaining momentum again due to big data and computing power.


To me "current ML" in the parent comment meant deep learning. Open a computer vision paper from this month and the prior art section almost only contains references from between 2014 to 2018.

Yes, the NN layer architectures might be based on ideas from the previous era, but the way the algorithms actually solve the problem is completely different.

And that's just because it's the only way we can do it right now. When we can apply deep learning to itself, to select better architectures and hyperparameters, it will find strategies we didn't think about or didn't consider worth trying.


ML was covered in my 1972 AI course at MIT.


And it probably didn't mention neural networks, let alone deep neural networks, which is what the state of the art is using in so many tasks.

I'm not saying it's the end all be all, just that this is what "current ML" is generally meaning, and that this particular approach has only been explored for a few years.


>Here's my shorter-term thinking. Current ML can't generalize well [0], can't do arbitrary & conceptual thinking, and has trouble with language.

To be honest a lot of humans struggle with the same problems.


I had a chance to interview Ray and spend some time with him before he joined Google.

The way we met was very serendipitous -- I asked a question during a movie premiere about the nature of reality. He must have enjoyed it because later someone from his team sought me out to invite me to a VIP after party.

His publicist at the time said this was the best interview he ever gave (It's possible she says that to all the girls, but, I'll take it!)

https://www.huffingtonpost.com/anthony-adams/ray-kurzweil-in...


It's much easier to make a plane that can fly than to emulate the particular way a bird flies.

They'll both solve the flight problem, so it doesn't really matter.

Flight is something extremely comparable to AI development. When it was developed, many companies were trying, and a lot of people said it was impossible. The problem space is also similar. We're not sure how to get there exactly, but we may be close.

It was, in the end, made to happen by an unlikely pair, not a large company with a lot of investment. I believe this might be AI's fate, too.


It matters tremendously. Think about the economics of bird flight vs. plane flight. Why is it that flight has been around for over 100 years now and yet we still aren't flying everywhere? The reality is we can always come up with subpar ways of doing things, but there's a reason birds have evolved as they are today that we just can't replicate ourselves. We mastered long-haul rapid flights, but bird flight generalizes much better. Birds might be slower over long haul, but they are much better at short and medium haul, and have evolved to work in groups to make long haul possible. So yes, analogously we can create intelligence in a different way, which we are, but it likely won't generalize as well.


If we could reduce the density of a person by about an order of magnitude we can start replicating bird flight on a per-person basis. Our approaches to flight are constrained in all forms by the desire to put really heavy things into the air.

Look no further than drones that can operate for quite a long while, move in any dimension fairly well, and manage some pretty ridiculous speeds for not being designed to do so all because their purpose is not to carry large complicated amounts of weight.


The difference is we knew what flight was and had an easily visible and verifiable goal that was obvious to all when it was achieved. What is intelligence? How do you measure it? How will we know once we have created it? We haven't even defined the problem yet, so how can we know when we have achieved it?


> in the end made to happen by an unlikely pair

Who spent years studying birds...

No, the wings of the Wright Flyer don't flap, but that doesn't mean it wasn't designed based on birds. One key piece (controllability) came as a result of the Wright brothers studying how birds adjusted the shape of their wings.


Question is, are the current approaches to AI the equivalent of the early flapping-wing planes we were building to emulate birds?


Brave claim: we won’t actually invent good AI until two-way brain-computer interfaces become useful.

We need to augment human intelligence to handle systems vastly more complex than we can do today. Specifically memory creation and recall, and information assimilation rates.


How do you define 'useful'?

My keyboard and monitor are brain-computer interfaces. I'd argue that they are pretty useful.

I can imagine a way to consume information that is dramatically faster than the 500-odd words per minute I get from reading.
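For a rough sense of what that reading rate is in raw information terms, using Shannon's classic ballpark of about one bit of entropy per character of English (an assumption, and a loose one):

    # Very rough information rate of reading at ~500 words per minute.
    words_per_minute = 500
    chars_per_word = 5        # rough English average
    bits_per_char = 1.0       # Shannon's ballpark for printed English

    print(words_per_minute * chars_per_word * bits_per_char / 60)   # ~42 bits/s

Not a huge number, which is part of why the bandwidth question is interesting at all.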

But for input to the computer? I mean, I'm not saying that a keyboard can't be improved upon, but I think there are some hard limits to how quickly I can compose my thoughts; I think that it's fairly rare that the fingers are the limit. (Of course, making it so I think less about the fingers would be great)

What I'm saying is that there might be human processing speed issues that limit how much value we can get out of faster I/O.

On the other hand, maybe it's like reading was for me; before I could read at a certain speed and certain ease, I couldn't enjoy books, because I'd forget the beginning of the paragraph before I got to the end. Like I needed to load enough words at once to see the picture. It's totally possible that output would be the same way; if I could somehow output the equivalent of five hundred plus words per minute, maybe that would change my writing the same way passing that speed changed my reading?


I think most of the bandwidth would be computer -> brain, not the other way around, assuming it can teach your brain to retain knowledge and show it images/movies. The only exception would be if we could significantly alter the perception of time, but I'm not really betting on that despite how useful it'd be.

Even if I could only accurately sample 1-8 bits/second from the brain, that would be world-changing. There are a lot of clever people who could stuff a lot of utility through 8 bits/second given you are paired with a powerful smartphone and a proper "UX". Clever encoding schemes and signals could be developed. Especially if you go for sequences/chords that are executed over 1-4 seconds.
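To put rough numbers on that (my own illustrative arithmetic, nothing more):

    # Distinct commands a low-bitrate channel could carry, assuming the user
    # composes 'chords' over one to a few seconds. Purely illustrative.
    for bits_per_second in (1, 8):
        for chord_seconds in (1, 2, 4):
            distinct = 2 ** (bits_per_second * chord_seconds)
            print(bits_per_second, "bit/s x", chord_seconds, "s ->", distinct, "distinct commands")

Even 8 bits/s over a 2-second chord already distinguishes 65,536 commands, which is plenty for a clever encoding scheme paired with a smartphone doing the heavy lifting.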

Plus the brain would likely just act as an index to fetch info that is then replayed to you. I really don't need a terabyte in the form of neuron connections when storage and computing would be ubiquitous.


can't you sample way more than 8 bits a second from my brain through the keyboard? I mean, I don't think it'd take much work to get to 8 bits a second using a twiddler or something with one hand.

I guess what I'm saying is that I don't understand what a brain computer interface gets you if it's not faster than hands and monitors. (I mean, I guess there's portability, but that doesn't seem super life changing to me; phones are pretty portable already, and I can do 8 bits a second on those, too.)


Hands are wonderful for decoding thought into action, that's true, but for example, the brain can imagine images way faster than hands can paint them. A lot of people can be limited by their hands in gaming too. This is exchanged for learning some kind of encoding scheme that translates thought into bits. It's also really difficult, for example, to create 3D models on your phone just using your hands.

By sending bits, the interface changes and you aren't really burdened with UI navigation or UI mechanics like touch, drag, etc. Something like a phone would take on more roles that are currently filled by desktops and laptops right now.

But again, output isn't interesting. The low bitrate is just to illustrate that it isn't that interesting compared to the other direction (but I still think there could be useful products if the device is discreet and convenient enough -- I'm definitely not wearing a full EEG headset every day).

A product like that would be a good checkpoint in scientific developments of decoding the brain too. I think it's going to be a necessary step to develop before we get the other direction though.


>By sending bits, the interface changes and you aren't really burdened with UI navigation or UI mechanics like touch, drag, etc. Something like a phone would take on more roles that are currently filled by desktops and laptops right now.

You still need some kind of interface; like if I'm visualizing a picture and the computer can read it? even if you can do it, that's gonna require a lot of bits.

My point here is that having a direct brain interface still requires some sort of... interface, and you probably need a lot of bandwidth before the interface becomes better than... hands, at least for people who still have control over their hands. A low bit-rate direct brain output would be super useful for paralyzed people.

As an example for data in the other direction, cochlear hearing aids are absolutely amazing. But from what I understand? they are quite a bit worse than the ears most of us were born with. They are a long way from being an augment that people with already functional ears are likely to want.

I do think it's worthwhile to come up with new input/output methods; personally, I'd be super happy to shave my head and wear an EEG headset almost all the time, if it gave me output that was significantly faster and with less thought than typing. Heck, you might even get me on portability, if it's just better than a cellphone keyboard.

I'm just saying that there's no reason that a low bit-rate direct brain interface would be any more intuitive than fingers... I've been using my fingers for an awful long time, and to get me to switch to something else, I'm gonna need a better bit rate.


IDK, Google's voice assistant thing still can't figure out my wife's name because it's not an English name and I pronounce it as it should be pronounced.


Our receptionist can't figure out any of the names of our offshore contractors either.


lack of ability and lack of caring are not the same.


Yet.


What if human beings were physically connected to a growing network of perfectly organized facts and information? What if that network could even assist with basic logical reasoning tasks? Would that be enough to say we’ve reached the singularity?

It doesn’t seem like it now because our phones aren’t quite physically connected to our brains, and the internet is far from organized, but I’d venture to say we’re already pretty close. A more interesting question to me is whether this is really anything we want.

If human machines were infinitely intelligent then they would all agree. So you wouldn’t need more than one. Sure, the ultimate human might need 7 billion pairs of eyes but those eyes wouldn’t need any more brains than to fuel up and locomote. Furthermore any human-like tendency to doubt oneself or fall in love or “rock out” would be deemed useless and therefore overridden.

Understanding that the purpose of a human is to pass on genes, and that there’s little human or genes left in her, the ultimate human might therefore conclude that her sustenance only disrupts the purposes of organic life forms. Her next and final act would be to destroy herself.

This is how I see the end of humanity. Not in a violent clash between artificial and organic intellects, not in a symphony of mushroom clouds, but with the final flip of a switch, in cold, calculated resignation.


Check out Rodney Brooks' blog to see some predictions from somebody who actually has at least some idea of what they're talking about:

https://rodneybrooks.com/predictions-scorecard-2019-january-...

However as the ML researcher Michael Jordan (one of the most important in the field) has previously stated, these sort of long-term technology predictions are just fun science fiction and there is essentially no academic rigor in this stuff:

https://medium.com/@mijordan3/artificial-intelligence-the-re...


Counterpoint: "Hey Siri, lock my phone." "Playing 'Locked Out of Heaven' by Bruno Mars..."


Betting on technology evolving faster than expected is usually a safe bet, but it feels like Ray has been working on flawed premises and refuses to revise them.


What, specifically, has he gotten wrong?


This one (especially HN commentary) is a good read: https://news.ycombinator.com/item?id=18806315

After reading it all, I don't really see him as being any more successful in predicting than you and me. You get some right, you get some kind-of-in-the-right-direction, you get some wrong, and some are just laughably wrong.


A better question is what has he gotten right?


As we do not possess a clear (or, at least, a generally accepted) definition of what constitutes "true AI," it seems very likely that we will cross this "event horizon" without realizing it; we may have done it already, for all we know (or, rather, don't know - due to the fact that much of the progress is being done in secrecy).


On pace to do what? Add numbers really fast? Play chess? Make a best guess of which news story I want to see? (Hint: not sports) Perform automatic gear shifting? Follow the vehicle in front of it?

Kurzweil has long been preaching that there will be a qualitative revolution in what AI will achieve - and it has been 15 years away, for the past three decades.


Not actually true, the always-15-years-away bit. Three decades ago (1990) his prediction was Turing test and AGI in 2020–2050. More recently he's predicted the Turing test by 2029. He may be wrong, but he doesn't keep shifting the dates 15 years away. (https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe... )


Ray Kurzweil is certainly super smart, but I find his views about the Singularity disturbingly myopic. It's as if he has been living in a bubble for decades, and doesn't realize how the world works for real.

I still have a lot of sympathy for him, as I'm sure he means good. Still, these views distort our reality for many people.


Of course it will happen. We humans are an outcome that took billions of years, and yet here we are. It might take a billion more to get sentient AI, but as long as there is progress, the end is inevitable.

The only debate is when it'll happen, but we probably need an advanced sentient AI to help figure that out.


I suspect human intelligence is more than just computations in the brain. A lot of computation, I assume, occurs 'outside' of the brain, in the gut, via hormones and protein signalling. I have doubts that you can replicate human intelligence without replicating human biology.


I doubt a lot of computation occurs outside the brain. I'm assuming most of the neurons, nerves, etc in the gut are used primarily for regulating gut health rather than calculating, reasoning, etc.

People have had their intestines completely removed, liver transplants, kidney transplants, stomachs stapled, etc. without hurting their intelligence or computational ability. I don't think we can replace the brain like we can elements of the gut.

Certainly, the gut, hormones, etc are important to your physical and emotional well being, but I think it's safe to assume that pretty much all over the computation is happening in our brain.


Or even if it were all in the brain, accurately modeling neurons and interactions of neurons, on bajillions of cudas and threadrippers, will probably still take much longer than it takes for the real-life system to do its thing:

https://newatlas.com/spinnaker-neuromorphic-supercomputer-mo...


Modeling the human brain is just one of the directions of the AI research, possibly not the most important one, even, at this point. On the other hand, the "super-intelligence" everyone is thinking about will be nothing like the human mind.


I have trouble believing that evolution has been that inefficient. That we can brutally outperform the human brain at 64-bit arithmetic--no surprise. But to accomplish the same with creativity and problem solving? Not holding my breath.


We also have 500 million neurons in our gut! https://en.wikipedia.org/wiki/Enteric_nervous_system

I wonder why wikipedia doesn't have more info about the function of this system... perhaps it isn't well understood yet.


> hormones and protein signalling

They are control utilities, regulators for the brain, whose generation is initiated by the brain. However, I think you are right about the need for a human body. Without sensory input its state would collapse; I think it needs a world with constant stimuli to live in.



I think we will find limited use for general AI, since we won't be able to properly regulate or understand the behavior of machines that are smarter than us. AlphaGo Zero can't explain why it plays Go the way it does except by showing how it evaluates the board; the reasoning is buried in millions of mathematical calculations accreted over millions of games of self-play whose intuition we can't understand. Of course, many smart players can't really tell you systematically how they make the moves they do either. AI will be like a dictator who makes his decisions by intuition only.


An intelligent black box can still be "useful". It could help cure diseases or get original insights in many fields for example. Even if we don't understand how it thinks.


Sure, but these are narrow AI applications. The great and powerful Oz general AI will be of limited utility unless there is a man behind the curtain.


If narrow AI can be useful despite being a black box, AGI can also be useful even if it's a black box.

All the human scientists in the world are black boxes. An AGI could be another researcher, just maybe more productive.

We don't need to understand how it thinks to understand the paper it writes.



Gave up at page five of "this is what this presentation is about", can you summarize his main argument on why robots will never have sex?


It's clickbait. The last page is:

>But What About the Title of this Presentation .... While obviously a cheap hook, it is nevertheless intended to convey the possibility that it is our emotions, our passions, our innate desires — all ingredients of our sexuality — that are the defining elements of our consciousness and, through consciousness, our intelligence. The End


Thanks for the time saver. Incidentally I note sex robots are a thing apparently https://www.thesun.co.uk/fabulous/8204874/sex-robots-machine...


Ray Kurzweil is still on pace to die before the Singularity arrives.


I always felt that Kurzweil’s timelines were too optimistic not only because technology may not evolve so fast, but also because it fails to account for the delays that government regulation imposes. So many of the vaunted leaps in medical technology, for example, are going to require years and years of testing before they get approval for use. Other whizbang technology of the future might be prohibited from the consumer market because governments see it as too dangerous.


This may be the case for many technologies, but clear wins that save lives get fast-tracked. It just happened with CTX001. Also, older researchers do self-testing all the time, as they know they don't have the time to go through regulation.

https://www.investors.com/news/technology/crispr-vertex-gene...


Not if he eats any more pills


I think Moravec's arguments on the same subject "When will computer hardware match the human brain?" are a little less cranky than Kurzweil's and his prediction from 1998 "it is predicted that the required hardware will be available in cheap machines in the 2020s" seems to be panning out. https://jetpress.org/volume1/moravec.htm


The Singularity is just millennialism [0] for secular technologists. We still have a pretty paltry understanding of human intelligence. It seems unlikely to me that more and more iterations of AlphaGo will spontaneously produce strong general AI. My bet is no AGI within my lifetime.

[0] https://en.wikipedia.org/wiki/Millennialism


Alpha zero is a generalized AlphaGo, so they are headed in the right direction!

AlphaGo -> win this boardgame

AlphaZero -> win any boardgame (well, three right now)

AlphaMinus1 -> win any game

AlphaMinus2 -> win anything

AlphaMinus3 -> win winning. so much winning.

But you get my drift, I'm extrapolating hugely off one abstraction step.


Yeah maybe. How do you define "win" in terms of general intelligence? Outsmart all top human experts at their field?


Impossible to answer: who "won" politics? Democrats or Republicans?

Eventually we go from black-and-white rules (hell, even Go is kind of a negotiation) to using economics as a tool to determine which option satisfies all parties. I don't think humans have solved that yet, but, as with radiology, perhaps AI can make better compromises?


Whatever the reward function dictates would be a win :)


Right. So we've just successfully moved the goalposts to "write a reward function that objectively evaluates g"


it depends on how old you are, but I think you may be right that AGI still needs on the order of hundreds of years to happen.


The brain and the "mind" are insanely complex. There is so much we still don't understand, so of course there is much we cannot replicate. But is it possible? If there is an incredible breakthrough in understanding, then yes, of course. Personally I see little value in people saying it can or can't be done (of course it can!); it's just a matter of if and when. We shall see...


Define AI. Define intelligence. Kurzweil's predictions are nonscientific and unfalsifiable. AGI is unscientific.

He really likes making predictions: https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe...


He predicted Turing test by 2029 which is fairly definite, falsifiable and quite likely to be falsified.

For me, proper AGI is when you could tell a robot to go build some better robots, and then have them build even better ones, and they can do so without needing us. There will be some year before which, if humans disappeared, computers and robots would grind to a halt, and after which they could carry on. That's also a definite, falsifiable thing, harder than the Turing test, and is kind of what I think of as corresponding to the singularity, which tends to be rather woolly in its definition.


I don't think this one happened:

2019: "The computational capacity of a $4,000 computing device (in 1999 dollars) is approximately equal to the computational capability of the human brain (20 quadrillion calculations per second)."


$4,000 in 1999 would be about $6,000 today. A midrange Deep Learning workstation is in the ballpark, and while it can't "match the computational capacity of the human brain", it can do plenty of nifty things - including beat the current Go world champion in a match.


No, not even close. 20 quadrillion calculations per second = 20 petaFLOPS. An Nvidia DGX-2 is only 2 petaFLOPS, and that "AI" supercomputer costs $399,000. Far cry from $6,000.
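Rough arithmetic with those numbers, for what it's worth: at $399,000 for 2 petaFLOPS, reaching 20 petaFLOPS with DGX-2-class hardware would run roughly $4 million, i.e. more than 600 times the inflation-adjusted $6,000 budget - and that 2 petaFLOPS figure is low-precision tensor throughput, which if anything flatters the comparison.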

You are just hand waving to try to defend a definitively failed prediction.



I don’t find this any different than a steam shovel outpacing a human digger. It’s still necessary to prevent both from destroying things and harming people.


Yawn. Of course he has to say this, he makes his living saying stuff like this. He is, after all, a ”futurist”. It just seems rather trite at this point.


AI is mechanical turks all the way down and always will be.


Then humans are as well, yes? What exactly is the magic in human cognition that is impossible to replicate in silicon?


The brain is denser, more efficient and 3 dimensional. Compared to the current limitations of silicon the combination of those three aspects is downright magical.


Ray Kurzweil's brain has already been backed up in the Google Cloud Platform. Everyone claiming he's running out of time is just wrong!


The most important part of the talk was when Kurzweil mentioned running partially-trained AIs in simulation to generate more training data than exists in the real world. That was the first clear description of imagination that I've seen, although others have hinted at it - for example, the running stick figures (from Google's DeepMind, I think) that simulated running an obstacle course several times before they actually did it. I might be remembering that wrong, but there have been a few similar examples.

Anyway, the reason why the imagination part is important is that nobody today is talking about parallelization when it comes to AI. A few "rules" of AI computing and their limits:

* Processing power doubles every year or two for the same cost -> processing power will someday be approximately infinite (for certain classes of computation)

* The search space for even the simplest problems is effectively infinite if we don't know the hidden models -> if we know all the hidden models, then large search spaces become tractable

* The sequential portion of intelligent decision making is less complex than the parallel portion (computers have been beating us at all sequential operations since perhaps the mid 60s or 70s) -> when the parallel portions are solved, then computers will beat us at all intelligent decision making

That last point is really important and exciting, because most of us simply aren't accustomed to thinking in terms of solving problems in parallel. We're thinking "how do we train a self-driving AI to recognize pedestrians and swerve to avoid them" instead of "how do we optimize the solution to the series of equations to drive to the store and back without hitting anything, in a single pass".

AIs will soon be looking forward from their current model of the world, running some number of simulations (potentially billions) and choosing the best outcomes, then feeding back those solutions to improve their hidden models of the world. All in parallel, with virtually unlimited computing power. At that point we'll be training AIs like children and will start to see emergent behavior that mimics life.
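To make that loop concrete, here's a deliberately tiny sketch (a bandit-style toy in Python, not anyone's actual architecture): the agent "imagines" outcomes from its learned model, picks the best-looking action, acts in the real world, and folds the real outcome back into the model. All names and numbers are invented for illustration.

    import random

    ACTIONS = range(10)
    TARGET = 7                                      # hidden from the agent

    def real_env(action):
        """The actual world: reward peaks at the hidden target action."""
        return -(action - TARGET) ** 2

    # Learned "hidden model" of the world: a running average reward per action.
    model = {a: 0.0 for a in ACTIONS}
    counts = {a: 0 for a in ACTIONS}

    def imagine(action, sims=50):
        """Imagined rollouts: sample noisy outcomes from the learned model."""
        return sum(model[action] + random.gauss(0, 1) for _ in range(sims)) / sims

    for step in range(500):
        if random.random() < 0.1:                   # occasional exploration
            action = random.choice(list(ACTIONS))
        else:                                       # plan by simulation, pick the best
            action = max(ACTIONS, key=imagine)
        reward = real_env(action)                   # act in the real world
        counts[action] += 1                         # fold the real outcome back into the model
        model[action] += (reward - model[action]) / counts[action]

    print("best action according to the learned model:",
          max(ACTIONS, key=lambda a: model[a]))

Scaling this up - richer world models, billions of imagined rollouts in parallel - is essentially the bet described above.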

Personally, I don't see how this can possibly be any more than 10 years away, 20 at most (even for rank amateurs like me), given the number of people tinkering on these evolutionary patterns, and the amount of open source research being generated.

It kind of haunts me actually. Like, we just started investing in our 401k at work, but I don't have the heart to tell everyone that the singularity will probably arrive before their investments mature. Like, how does one have a child when it's looking like the 20th century reality we're living in isn't going to be around much longer? Do we suffer through the daily grind, the 40 hour weeks of converting our time to money, when AGI will overshoot that within weeks of surpassing our definition of sentience? It literally might make more sense to become a beach bum, or travel the world, go to Amsterdam and spend a few years in a drug-induced haze.

Or is the correct answer to keep going through the motions of running the rat race, knowing that it's all a waste of time but constructing some kind of inner monologue to distract us from the depression induced by the real world's unfolding existential nightmare? Is there a way to un-take the red pill and go back to the blue pill?

I guess I digress, but this is literally all I think about anymore. The futility of work/culture/technology in a world of ever-increasing human subterfuge. Why can't we admit that the dystopia is playing out before our eyes and begin to escape the yoke?

I guess the one thing that keeps me going is that we might be able to break down a single day into its component parts and automate away anything that's beneath human potential so that we can do the stuff we'd like to do (like nothing). The Roomba can vacuum (done). The self-driving Uber can take us to work and back (almost done). The rooftop solar panels can pay our electric bill (effectively done). And so on, until there is no minutia left.

The last step will be to spin up a partial (of oneself) to go to work and earn the paycheck, or literally grow the food to feed us so we no longer have any external dependencies. This is the part that I think is roughly 10 years away.


His predictions are hit and miss. From his 1999 predictions: https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzwe...

By 2009:

Y: The majority of reading is done on displays rather than paper

N: Most text will be created using speech recognition technology.

N: Intelligent roads and driverless cars are in use

N: People use personal computers the size of rings, pins, credit cards

Y/N (premature): Though desktop PCs are still common for data storage, individuals primarily use portable devices for their computer-related tasks.

Y/N (premature): Personal worn computers provide monitoring of body functions, automated identity and directions for navigation.

N: A $1,000 computer can perform a trillion calculations per second.

... seemed a total mixed bag, with many 2009 predictions just beginning to come true today

By 2019:

N: The computational capacity of a $6,000 computing device is approximately equal to the computational capability of the human brain (he claims that is ~20 petaFLOPS; an Nvidia DGX-2 is a 2 petaFLOPS "AI" supercomputer for $399,000). [1]

N: Computers are embedded everywhere in the environment (inside of furniture, jewelry, walls, clothing, etc.).

Y(easy win): Most people own more than one PC

N: Cables connecting computers and peripherals have almost completely disappeared.

Y/N: People communicate with their computers via two-way speech and gestures instead of with keyboards. (we can, but in my home and my office most words are not entered that way)

N: Most business transactions or information inquiries involve dealing with a simulated person

N: Rotating computer hard drives are no longer used.

N: Three-dimensional nanotube lattices are the dominant computing substrate.

N: The algorithms that allow the relatively small genetic code of the brain to construct a much more complex organ are being transferred into computer neural nets. (Could be decades off; several breakthroughs are needed to even put this on the horizon.)

N: Most roads now have automated driving systems—networks that allow computer-controlled automobiles to safely navigate.

N: Most decisions made by humans involve consultation with machine intelligence. For example, a doctor may seek the advice of a digital assistant.

... a few hit, but we're mostly heading that way. To me it seems 5-15 years premature

[1] https://www.popsci.com/intel-teraflop-chip#page-2


>Most people own more than one PC (easy win)

What's interesting is that I think most people own just one general purpose computing device these days, their cellphone. Yeah, they maybe also have a TV that can play netflix or a game console, but my observation is that most people don't even have one PC in the old sense of "a general purpose computer with a physical keyboard that you use by sitting down" - and if you exclude laptops from the definition it's even more rare.


> Cables connecting computers and peripherals have almost completely disappeared.

This one is unfortunately happening right on schedule, at least if you use Apple hardware. "A $1,000 computer can perform a trillion calculations per second" should be considered "premature" as well, seeing as current GPGPU hardware can go well beyond 1 TFLOPS.


As with most technologists and futurists, he fails to grasp that things take time, lots of time, to make meaningful progress and adoption.


are people still listening to this guy?


Why not?


Seems so.


And if you don’t agree you just don’t understand exponential growth.


can't tell if downvotes are from people who don't understand exponential growth or people who think I don't understand exponential growth


I didn't vote either way, but long-term exponential growth in anything is basically impossible.

Think about it for a minute. At some point lithography gave us Moore's "law" (which, FWIW, has arguably already failed). Do you think that can continue forever? Pretty sure you exceed the computational capacity of the universe pretty quickly. Exponential growth in cell phone sales happened at some point as well. As did exponential growth in transportation speed.

The idea that we've had exponential growth in anything but lithography technology in recent years is also laughably insane. Do you think programming languages are exponentially better than they were 30 years ago? Do people live exponentially longer? Are cars exponentially cheaper or better in some way? Are machine learning algos exponentially better? No, no, no, and no.


I realize that without context it is impossible to tell, but I'm intending to mock the "you just don't understand exponential growth" position I've been battered with (by Kurzweil, but more often his acolytes).


You are talking about s-curves, if I understand it correctly.

A few notes: first, exponential growth does not necessarily mean fast growth; some technologies can still be in an early phase or depend on some future advances (compare the time it took TV to capture the world vs smartphones).

Second, the decline of exponential growth of one technology does not mean it can't be replaced with another technology with better prospects (e.g. gas cars vs electric cars).

I would argue that if you zoom out for the bigger picture, technology as a whole does show traces of exponential growth.


Re: technology showing "traces of exponential growth" - citations needed; everything is slowing down from where I'm sitting.



I don't think anything like the singularity will ever be possible, because the ability to speak language for oneself and invent complex reasoning is spiritual, not merely physical. Humans can do it because we are inherently spiritual beings, a union of body and spirit. (I know most folks here don't subscribe to that assertion... I'm not here to argue it anyway.)

But even if that isn't true, humans are not merely the product of our biology. Each of us has undergone an $age-year period of training, 18 years of which was on someone else's effort and time. I doubt someone is willing to put in that much personal investment to raise a neural net like his/her own child.


Your unusual argument recalls a discussion I once had with members of the Go club at my alma mater, nearly 10 years ago. On the question of whether Go would ever be mastered by a computer, one asserted that the game needed to be understood at a spiritual level - not simply as a computational problem - and that’s why computers would never reach the level of humans. The game was simply too sublime and complex, came the claim.

At the time this might have seemed reasonable - after all, Go software struggled mightily for years to achieve even beginner levels of play, and seemed incapable of mastering the intricacies of high-level play.

Well, we know now in retrospect that spirit was unnecessary to play Go. We know now that it was, in fact, a computational problem. There’s really no reason to expect that “spirit” is necessary to perform the tasks of language or cognition - tasks which are demonstrably executed by impulses of chemicals and electricity pulsing through the brain.


I know spiritual reality is sometimes invoked in discussions about things we haven't yet mastered or understood in science and engineering (one might even accuse me of doing that). But your story is kind of a straw man example, because even 50 years ago we knew of algorithms that could theoretically render Go a solved game, given enough computing resources. You could just as easily say modern cryptography is spiritual in its essence, which would be nonsensical.

Emulating a human mind with computers is an entirely different class of problem. We don't even have a mathematically rigorous definition of what constitutes a mind, much less know of any algorithm that could implement or solve it, even with any given supply of computing resources.


But we do know there must be algorithms capable of solving the Turing test, since there are only finitely many questions you could ask and only finitely many answers you could get back (just like how we know Go is solvable). Whether or not that's equivalent to emulating the human mind is unknown, but what's the difference in practice? Anything that could solve the Turing test is necessarily just as powerful as humans at the use of language, which was the original problem being addressed in your post.


> But we do know there must be algorithms capable of solving the Turing test, since there are only finite questions you could ask and only finite answers you could get back

Only if the algorithm itself encodes the entirety of those finite questions and answers. You might as well say that pi is intelligent (assuming its digits contain every finite sequence), since every sentence ever spoken by any human would be encoded therein somewhere.

The problem is that intelligently selecting the index of pi to use at a given time (or writing the algorithm that knows what to say in advance) is an act that itself requires a human mind's intelligence. You're only moving the problem one step back. You might as well reverse engineer every possible AES-128 output for every possible key and input (up to a certain length) and declare the cipher broken. It might be possible given infinite computing resources, but doing so wouldn't offer any insight.


Having insight is different than solving the problem. For example, developing AlphaGo didn't give us any insight into how to play Go (besides through watching it play). Similarly, it might be the case that developing a software which passes the Turing test gives us no insight into the nature of language. Nonetheless the problem would still be solved.

> You might as well reverse engineer every possible AES-128 output for every possible key and input (up to a certain length) and declare the cipher broken.

But we don't say that AES is impossible to crack. We only say that it's infeasible to crack. Just like with Go, where we had algorithms to solve it even 50 years ago, but it was just infeasible at that time. If what you are saying is that solving the Turing test is possible, but infeasible, then that is a more agreeable position to me. Regardless, I don't see how you can say that the Turing test is "spiritual in nature" while AES isn't.


> Each of us has undergone an $age-year period of training, 18 years of which was on someone else's effort and time. I doubt someone is willing to put in that much personal investment to raise a neural net like his/her own child.

Unlike humans, software is reproducible and doesn't expire. So even if such a software took 18 years to develop, it is still a worthwhile endeavor, since it has the potential to be used by countless people well into the future. Furthermore, it is not necessary that it be developed by just one person, nor that it be developed at the same speed that humans develop. The creation of such a software could take 100 years and it would still be worthwhile.


But if something is spiritual, there must be some way for it to interact with the physical world, otherwise you would not know about it. Then that way can be measured, which makes it physical...

Unless you are calling spiritual those things that can't be measured, yet?


For the sake of argument, "spiritual" could be a class of physical objects that can only be produced by other spiritual objects. This is the old "vitalism" argument, which is generally considered discredited, but as the "hard problem of consciousness" is still unanswered, this is a small loophole that proponents can use to hang on to it.


I don't subscribe to the "vitalism" argument. I am only equipped to assert that "spiritual" (as opposed to physical) primarily constitutes the use of language as a form of unique self-expression. It can be imitated in machines, but never duplicated.

(There's a good reason the Turing test involves language, and that computers can't form meaningful intent on their own.)

I don't know enough to extend the definition any further, though I'm sure it's not limited to merely that.


As far as we know, human language is generated by the brain, and the brain is made of parts that can in principle be simulated. If other parts of the body also turn out to be essential, there's no known physical limitation on simulating those either. This doesn't mean it will actually be practicable to do so, only that it's not theoretically impossible. By what mechanism does your idea of "spirituality" make it impossible to simulate human brains with sufficient accuracy to pass a Turing test? (or any other language generator not necessarily modeled on humans)


> there must be some way for it to interact with the physical world

that's the body


We humans are black-box physical machines that process physical inputs to produce physical outputs.

Whether that input goes through processing that is "spiritual" is beside the point. Reproducing "spirituality" is not the goal. The goal is to replicate what is "observable": to produce a machine that, when given the same inputs as a human, will produce identical (or "better") outputs than the human.

This is definitely an achievable goal, as for every set of input and output pairs in existence, an infinite number of different machines can be constructed to replicate that exact input/output functionality.

The "spirituality" that goes on within this black-box machine, or within humans, is irrelevant. It's not even a word that is clearly defined. We aren't trying to replicate "spirituality." We are trying to make a machine that, from an observational standpoint alone, is clearly superior or indistinguishable from a human intelligence.

Understanding the theoretical possibility of the singularity is well within the grasp of a typical person.

Imagine a chat bot designed to be indistinguishable from a human chatter. The inputs are text queries and the outputs are text responses. The domain is every possible text query and the range is every possible text response.

For simplicity, let's say the text input and output are limited to 500 chars. This makes the possible input and output pairs finite. To create a chat bot identical to a human, you simply need to create a machine that maps all inputs to certain outputs. Think of it like a giant hash map of strings to strings. Given enough time and enough memory this can be done.

The goal of current AI is to solve this problem more efficiently: to produce logical cores that can dynamically generate outputs from inputs rather than following a gigantic mapping. It can be thought of as a form of data compression. The data is the mapping; the compression of that data is the logical core.
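As a toy illustration of that "giant hash map" framing (just the lookup-table idea in a few lines of Python, obviously not a serious proposal):

    # The "uncompressed" chatbot: every possible query mapped to a canned reply.
    # The table is finite in principle (bounded-length strings), absurd in practice.
    chatbot_table = {
        "hello": "Hi there!",
        "what is 2+2?": "4",
        "do you pass the turing test?": "You tell me.",
        # ... the astronomically many remaining entries are left as an exercise
    }

    def reply(query: str) -> str:
        return chatbot_table.get(query.lower().strip(), "I have no entry for that.")

    print(reply("Hello"))

The learning problem described above is then compression: replacing the stored table with a model that generates the right-hand side from the left-hand side instead of storing it.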

The fact that a mapping can even be constructed at all is, by logical induction, a proof that a singularity is possible, thus completely invalidating your initial statement of:

>I don't think anything like the singularity will ever be possible


Only that the chat response is not just a function of the 500-character query but also of all previous queries:

You could ask the bot things like "Did I ever tell you about [...]?" or "What was the last time we talked about X?"


Yeah, I left that out to simplify the explanation. In this case, the mapping would just be more complex. Let's say we limit the memory of the machine to just the 10 million previous queries.

Then the hash map could have keys equivalent to the concatenation of the 10 million previous queries + the current query. This increases the domain to 500^26 * 10 million possible keys. The underlying machine is still a hash map, and you can build something equivalent to or greater than human intelligence using this technique alone.

If you feel 10 million queries aren't enough, we can increase the space to 10 billion queries, thereby covering every possible range of textual inputs that can be achieved in a human lifetime. I think this is more than adequate for the mapping to cover a chatbot that is equivalent to human intelligence.

Again, the goal of AI would not be to build such a machine but to compress the data - essentially converting hard data into logic, or in other words trading space at the cost of greater processing time. The theoretical exercise is just to show that intelligence and spirituality are, at a fundamental level, just a giant data mapping of inputs to outputs.

Thereby even under this scenario, a singularity is still possible.


eh, math above is wrong... with an alphabet of A symbols it's A^(500 * 10 million) possible keys, not 500^26 * 10 million, if anyone still cares
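(Counting argument, for anyone who does: there are A^L strings of length L over an A-symbol alphabet, so a key built from 10 million 500-character queries is one string of length 5 x 10^9, giving A^(5,000,000,000) possible keys - a number with billions of digits even for a small alphabet. The exact figure doesn't matter to the argument; all that matters is that it's finite.)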


Raising a kid without having to change diapers sounds pretty cool.


But a fan will be there for sure.


hardware - the body.

software - the mind aka the spirit.

hardware together with software is the runtime

the body with the mind makes a human person, maybe that's the soul??


I love Ray Kurzweil and he's an undisputed genius, but like many other believers in The Singularity, he's convinced of something that has never been proven to be true.

No computer will ever demonstrate actual intelligence, which is the ability to bring a new and unforeseen solution to an existing problem. I need only cite a few examples to show this is true: Uber cars run over people. Facial recognition claims black people are criminals just for being black. Facebook's AI software lets in fake news.

These are all errors that could be avoided by actual human intelligence. The fact of the matter is that we don't really know where intelligence or creativity comes from. It's unlikely that we ever will. (See "Heisenberg Uncertainty Principle" for one reason.) While machines are great at doing things faster than people can do them, they're not good at inventing new solutions or doing something that humans couldn't do, given enough time.


> No computer will ever demonstrate actual intelligence, which is the ability to bring a new and unforeseen solution to an existing problem. I need only a cite a few examples to show this is true: Uber cars run over people. Facial recognition claims black people are criminals just for being black. Facebook's AI software lets in fake news.

Burden of proof is not on your claim, but rather on Kurzweil's, but since you made a claim and tried to back it up, I feel compelled to point out that it's a non-sequitur. Just because something hasn't been done before, doesn't mean it's not possible. Many people said going to the moon was impossible. Might be interesting to look at Russell's Teapot.


> doesn't mean it's not possible

In general, that which nature has demonstrated can usually be replicated. A bird (flight), a floating log (ships), a fish (submarines), an asteroid (space travel), etc. Nature has demonstrated intelligence: a human.

However, just like nature has not demonstrated superluminal travel, it has not demonstrated super-intelligence; so that is still a question.


Nature never demonstrated Seaborgium, but we were still able to create it.


But it did make all the naturally occurring elements heavier than iron with a similar process.


Sure. And nature made intelligent organisms so we could take the fundamentals and build something that nature hasn't.


I never said it was impossible, just that it's not as certain as Kurzweil makes it out to be.


what do you understand by super-intelligence?


Intelligence that behaves like the AI singularity.


The AI singularity, while the assumption is often that it will be smarter "per entity", does not really require individual AI entities to get smarter than a human, as long as we assume that human-level intelligence continues to scale if you speed entities up and increase the number of them working together. In that case we could still get the singularity "just" by getting to a level where an AI can optimize its performance and resource use so that it eventually beats us on the sheer number of "human-brain-second-equivalents" dedicated to improvement at any moment.


what is the AI singularity? a super-intelligent entity.


Just FYI, going to the moon is also impossible. See "Van Allen Belts."


> No computer will ever demonstrate actual intelligence, which is the ability to bring a new and unforeseen solution to an existing problem

It's easy to disprove absolute claims such as yours. Yes, AI can create new things, and it's only getting better. You just need one example. Read up on AlphaGo: it was able to beat humans by coming up with techniques that no human has ever come up with, and possibly never would have, because of how illogical they seem even to the masters of Go.

Look into GANs. AI can create new things.


Contemporary AIs don't create new things via creative insight or intuitiveness. A trained NN is only good at the one thing it has been trained at. That isn't true intelligence, no matter how astounding the results.


You're responding to someone mentioning AlphaGo, surely you've heard of AlphaZero. A trained NN can be good at several things.

Creative insight or intuitiveness… what are those, exactly? Intuitiveness is so vague a concept that I feel it could accurately describe how some NNs behave on the surface, if you didn't know how they are implemented. Intuitiveness in humans is just hidden layers.

Corollary: if you had perfect knowledge of how the human brain works, you wouldn't find it so magical, and it wouldn't feel so out of reach to replicate its high-level emergent behavior.


> A trained NN is only good at the one thing it has been trained at.

I don't know about you, but I only learnt how to read and write thanks to being taught (trained) by other humans.

In contrast, AlphaZero managed to master the games of Chess, Shōgi and Go without ever seeing a single game played, just by being told the rules once, and playing against itself.


The other way of looking at it is that AlphaGoZero was provided with perfect information consisting of every relevant objective fact about the game of Go up front, and thus had perfectly targeted success criteria for the millions of iterations AlphaZero needed to run to get good at it.

I didn't need this to be able to start learning to read, and if the human race learned in this manner, written language wouldn't exist.

And I'm not even sure how one would begin to assemble all the relevant information about the space of "what words and syntax actually mean", which is a bit more difficult to define than a Go board, or a success function that could describe optimal understanding of books, in order to teach computers to interpret language the way AlphaGo Zero learned to estimate a mathematical optimization problem disguised as a board game. Unless your criterion for "learning to read" is satisfied with parsing text to perform Chinese room tricks in selected domains, or with textual insights on the level that Dr Seuss appears to have been unfavourably disposed towards green eggs and ham!


I actually taught myself the skill of reading. Real reading, including understanding what the words and sentences mean. While learning countless other things that 5-year-old kids usually learn, partially on my own, partially taught by my parents.

And almost all of that stuff was way more useful than mastering Chess or Go.


The means to do those things were programmed by humans. While the computer is faster at assessing all of the possibilities, it isn't coming up with anything that wasn't made possible by the human-programmed software.

We now have to have safety belts for Amazon workers to keep them from being run over by robots!


It's unclear to me if you are familiar with GANs, mentioned by grandparent.

Software can hallucinate entirely new photographs, which did not exist before, of a certain category of objects, like a "dog" or a "burger" or a "celebrity".

Of course it was "made possible" by the human-programmed software, but very very indirectly. In particular, the programmer doesn't really know deeply how it does its thing, and cannot possibly create the same result himself manually.

https://www.youtube.com/watch?v=ZKQp28OqwNQ
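For anyone who hasn't looked at GANs, here is roughly what the adversarial setup looks like as code - a deliberately tiny PyTorch sketch that learns a 1-D Gaussian rather than photographs, with every layer size and hyperparameter picked arbitrarily for illustration:

    # Generator learns to fake samples from a 1-D Gaussian; discriminator learns
    # to tell real from fake. Real image GANs are the same adversarial idea,
    # scaled up enormously.
    import torch
    import torch.nn as nn

    real_data = lambda n: torch.randn(n, 1) * 1.5 + 4.0   # "real" data: N(4, 1.5)

    G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
    D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    for step in range(2000):
        real = real_data(64)
        fake = G(torch.randn(64, 8))

        # Discriminator: label real samples 1, generated samples 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()

        # Generator: try to make the discriminator call its fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()

    with torch.no_grad():
        samples = G(torch.randn(1000, 8))
    print("generated mean/std:", samples.mean().item(), samples.std().item())
    # should drift toward ~4 / ~1.5 if training went well

Nothing in there tells the generator what a sample "should" look like; the only signal is whether the discriminator can tell its output apart from the real thing, which is why the results feel only very indirectly authored by the programmer.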


"Ever" is a very long time. But I'd be comfortable saying AGI won't arrive in my lifetime, at least not electronic AGI. Note that I single out electronic AGI here due to both our complete lack of understanding of how to model intelligence, and the prohibitive energy cost of modelling the brain if we did understand how to do so.

OTOH advances in genome understanding could yield e.g. a super smart chimp, or dolphin, something like that. That would also be "artificial" intelligence then, and it would be general, just not electronic.


Unfortunately we are still vastly better at making a worse mouse (or cow) than a better one, to paraphrase biologist Anne McLaren. Sure, evolution did eventually make a super smart chimp, but it took 7 million years of tinkering.


Sure, but I think you'll agree that it's a dark horse, far more dark than Moore's law or Dennard scaling.


haha. maybe no computer by the current definition of computer. but I would say that it's a bit extreme to say humans are going to be the only higher intelligence species.

the only thing that I think we should have disagreement on is if we want it to happen and of course the timeline.

for the if question: I believe we (humans) want to do this. Long term it is going to basically mean immortality and I would not think about intelligent machines as a new rival species, I would think of it as an evolution of our species - it's actually a great way of transcending our biological limitations.

for the timeline piece: I don't think it's going to happen within our lifetimes. I also think it's going to be a process where for a very long period of time we will be hybrids (we will augment ourselves with hardware that will make us smarter/more capable and this in turn will lead to us building even better and more comprehensive hardware. At some point we'll figure out that the biological part is just getting in the way and we'll completely remove it).


>No computer will ever demonstrate actual intelligence, which is the ability to bring a new and unforeseen solution to an existing problem.

This is already false, given the many examples of machine learning outperforming humans in a variety of tasks. Google's data center efficiency comes to mind (IIRC saving 30% in energy usage over human-found solutions). So does AlphaGo.


Outperforming in these cases just means "doing it faster," as I said. There is no algorithm which can produce results which were not inherent in the algorithm itself. There is no software which can "jump the box" and come up with a solution which was not already present in the software as it was written. The computer is merely processing all the options faster than a human could.

When we say that software "found" something new, it's only that it found it faster than we would, using the algorithm we wrote.


What makes you think there is anything in the human mind that is not inherent in the algorithms encoded in the human mind itself?

What you are effectively arguing for is a supernatural element to the brain. You're free to believe that of course, but it is pointless to have this discussion if people on one side believe in a materialistic universe and people on the other side believe part of the process does not follow the natural laws of our universe.

With a materialistic interpretation, there is simply no reasonable argument for why a brain is anything but a computer we don't understand well enough yet.


> There is no algorithm which can produce results which were not inherent in the algorithm itself.

There are algorithms that can find tumors or signs of Alzheimer's in medical imagery where expert doctors can't see them.

This is not just "doing it faster", it's doing it differently in a way that we cannot do.

> The computer is merely processing all the options faster than a human could

No, it's also processing many, many more options. You know how you can keep a bunch of stuff in short-term memory while working on a problem, and that helps you get a good grasp of it so you can get that insight? Now imagine if you could keep thousands of concepts and ideas in your short-term memory while you are pondering a problem. You are going to find solutions you wouldn't have had the means to think of otherwise. It's not just speed, it's breadth of reasoning, and it lets you see new horizons.


> There is no algorithm which can produce results which were not inherent in the algorithm itself.

This is a tautology. The same is true for humans.

The interesting thing is that a computer, like a human, can learn on its own something that a human did not tell it. That's what machine learning does.


A random number generator or binary enumerator could output results that were not inherent in its programming.

Or if you posit that they don't, then you need to explain better what is so special about human brains that they can produce results not baked in by natural programming.


Yeah, I guess if you're going to change the definition of what you said, then you are right.

The point stands that ML is better than humans at certain tasks and does them in a different, non-human-understandable way.


> These are all errors that could be avoided by actual human intelligence.

Do you think that dogs are intelligent? Can you imagine some situation that dog intelligence is not enough to solve?

Unlike dogs' intelligence, the intelligence of computers is improving. Now they can play chess and Go like a grandmaster, but 30 years ago that was an impossible task.

The problems you cite are real, but sometimes people fail at them too (drivers regularly run over people, witnesses misidentify suspects because they have a skin color that is not the skin color of most of the people they see, people actually believe and like fake news). Just give it some years until the interface with the real world is improved and computers have more "street" knowledge to apply in difficult situations.

> See "Heisenberg Uncertainty Principle" for one reason.

I have no idea how this is related.


It's related because it says that the observer cannot escape the observation; he's creating it. We cannot objectively assess our intelligence or creativity because we are the ones observing them. If we can't understand or assess how they came to be, we can't replicate them.


This is not logically sound at all.

We can replicate a great many things we don't fully understand.

Text-to-speech is a good example. Early text-to-speech systems were formant-based - many of them tried to physically emulate the human vocal tract. That gets passable results, but the best current results come from throwing that away: not trying to understand and model precisely how we talk, but instead "just" applying machine learning approaches to create a model. The result replicates human speech vastly better, despite us explicitly "giving up" on understanding precisely how to model the underlying system.

The same is true for a huge number of other control problems, where we often achieve far better results at replicating something when we don't try to understand or assess exactly how it came to be or works, but instead design systems to learn by example.

We still may see benefits from trying to achieve a deeper understanding, but there is little to suggest that there is some universal law that we need to understand something to replicate it.


Humans make all those same mistakes.


Exactly. Software inherits the limits of the people who program it. It doesn't expand upon or escape them. Actual human intelligence, in contrast, can "jump the box" and come up with something that didn't exist in the original materials as presented to the person.


> Uber cars run over people. Facial recognition claims black people are criminals just for being black. Facebook's AI software lets in fake news.

Humans do all of those things too. We might not believe they are the right things to do, but doing them certainly doesn't rule out the presence of a human-like intelligence.


In fact, it does. It shows that human bias (facial recognition) and the inability to posit all possible outcomes (Uber; the halting problem) are built into the software we write.

While another human may be able to overcome their racial bias, or another driver wouldn't have run over that biker, the fact remains that the software did, which shows that it inherits human frailties and limitations. It doesn't surpass them.


Bias is an inevitable problem but a fixable one. There are also tradeoffs to the way bias works in computers vs humans. If you can successfully point out bias to a computer it will be happy to adjust accordingly, whereas human bias is notoriously stubborn and some humans will even deliberately or covertly embrace their biases despite being aware of them.

> the fact remains that the software did which shows that it inherits human frailties and limitations. It doesn't surpass them.

Well sure, some of them, but it's not as if there aren't other areas where the computer is already superior, e.g. no intoxicated driving, tired driving, road rage, racing or otherwise driving at dangerous speeds, high-speed/risky lane merging, driving while eating or applying makeup, texting or watching YouTube or fiddling with a GPS or having your kid cry out in the back seat, etc. So there are a lot of improvements there. Of course, I don't think those systems are anywhere near human-like intelligence, but the current flaws are not a demonstration that these problems cannot be overcome.


> No computer will ever demonstrate actual intelligence

Such statements are impossible to prove right, since, assuming you consider yourself actually intelligent, they imply we are not inside a computer ourselves, which I'd argue is impossible to prove.


If we're running in a simulation, the host computation hardware may be different from what you imagine a computer to be. For example, we could be an experiment of "is life possible in a euclidean universe".


Since some current theories of physics (string/M-theory) require an 11-dimensional universe for all the math to work out, it's quite possible that we live in a space which we cannot understand. That makes it even less likely that we could program a computer which would understand it better than we do.


> that implies we are not inside a computer ourselves, which I'd argue is impossible to prove.

Perhaps not. But some statements cannot be disproven and yet still should not be taken seriously.

But I think you are right that, while correct in claiming that the Singularity has never been proven, mimixco is also making statements that cannot be proven.



