The Singularity Is Further Than It Appears (2015) (rameznaam.com)
105 points by _qc3o on Dec 29, 2016 | 133 comments



> It’s wrong because most real-world problems don’t scale linearly. In the real world, the interesting problems are much much harder than that.

And this is what has been nagging me about the Singularity and its associated predictions. Exponential growth in our problem-solving capabilities is explosive when the problem-space is linear. But what happens when the problem-space itself grows exponentially with each iteration? Then we're back to linear progress.
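
A toy sketch of that worry, with made-up growth rates (my own illustration, not the article's):

    # Capability doubles every step, but the difficulty of each successive
    # milestone also doubles, so milestones arrive at a constant rate:
    # exponential capability over an exponential problem space = linear progress.
    def milestones_reached(t, base=2.0):
        capability = base ** t                 # exponential problem-solving power
        n = 0
        while base ** (n + 1) <= capability:   # milestone n+1 costs base**(n+1)
            n += 1
        return n

    for t in range(1, 6):
        print(t, milestones_reached(t))        # -> 1 1, 2 2, 3 3, 4 4, 5 5: linear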

Futurism, that is, thinking about the future of AI and technology, is still important. Long-term thinking, planning, and prediction are always good. But progress will probably be much slower than anticipated.


>Exponential growth in our problem-solving capabilities is explosive when the problem-space is linear. But what happens when the problem-space itself grows exponentially with each iteration? Then we're back to linear progress.

I've always wondered: why should recursive intelligence improvement be possible? That is, if you're an agent, you're nominally searching for improved versions of your source-code to self-improve. That's obviously going to be a discrete, combinatorially large search space. Why should each search space have constant or falling entropy, conditional on the agent's existing source code and knowledge?
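
Some back-of-the-envelope numbers on how large that search space is (my own illustration; the evaluation rate is an arbitrary assumption):

    # The space of candidate successor programs of length n bits is 2**n.
    # Even for tiny programs, unguided search is hopeless without knowledge
    # that prunes the space.
    n_bits = 100                       # a very small "successor program"
    candidates = 2 ** n_bits
    evals_per_second = 1e9             # assumed: a billion candidate evaluations/sec
    seconds_per_year = 3.15e7

    years = candidates / evals_per_second / seconds_per_year
    print(f"{candidates:.2e} candidates -> ~{years:.1e} years of brute force")
    # ~1.27e+30 candidates -> ~4e+13 years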

I don't really think "intelligence" can drive the entropy (or the compute-time) of any given search-problem to zero just by existing, so it seems like it should need to draw some resource other than memory space and energy from its environment. This resource is probably information: you would need some knowledge of the world to improve yourself. Successive improvements would then require successively better, finer-grained understandings of the world.

That makes sense, but implies that self-improvement becomes more difficult with each round you attempt, since you've already conditioned on most of the environmental information you can get. Entropy can't go to zero, precision can't go to infinity.


> I've always wondered: why should recursive intelligence improvement be possible?

That it's possible has already been proven [1]. Obviously there are limits to self-improvement, but we are ourselves limited in similar ways, i.e. we are very likely simply finite state automatons.

[1] https://en.wikipedia.org/wiki/G%C3%B6del_machine


>Though theoretically possible, no full implementation has existed before.

And it's called a Goedel Machine because it has to prove, within an initial fixed logical language, that its improved program will be globally optimal. Rice's Theorem says this task is impossible in general (provided we stay within the deterministic, logical setting), and Chaitin's Incompleteness Theorem implies it should be increasingly impossible even in specific cases: the longer the successor program we search for (to incorporate knowledge that makes it smarter), the greater the chance of hitting a successor candidate complex enough that we can't prove its optimality with our existing Goedel Machine program.

Again, that's within the paradigm of deterministic logics in which proof-checking is semi-decidable. Real brains don't stick within that paradigm (they're natively probabilistic), and thus AGI probably won't stay within that paradigm either.


Exactly, you have to tack on quite a few "provisos" in order to argue it's impossible. There are in fact many ways to circumvent the incompleteness on which every theorem you've cited depends. Goedel machines have actually been built [1], so their existence isn't just theoretical.

[1] http://people.idsia.ch/~juergen/agi2011bas.pdf

[2] http://people.idsia.ch/~juergen/selfreflection.pdf

Edit: to see another way forward, consider alternate/finitistic arithmetics [3] which are neither stronger nor weaker than Peano arithmetic, but which do not exhibit incompleteness. Arguably, Goedel's theorems pass due to the inherent infinities of the successor axiom, but with an arithmetic like Andrew Boucher's system F (not to be confused with system F in programming language theory), no successor axiom exists, and so we cannot assume that the set of numbers is infinite, and we can enjoy full induction without difficulty. There is still a lot of surprising territory to explore in the foundations of mathematics.

[3] https://golem.ph.utexas.edu/category/2011/10/weak_systems_of...


>Exactly, you have to tack on quite a few "provisos" in order to argue it's impossible.

I wouldn't say it's strictly impossible. I would say the limit is environmental, rather than no limit existing. At a certain point, the available signals from the environment are as precise as they're going to get, the mind is as informed as it's going to get, and no further improvements are possible without a whole complicated raft of new experiences.

Kinda like how science often hits the point where we can't actually shift into a new Kuhnian paradigm by pure deduction, but actually have to do expensive, difficult experiments.


Interesting that you say "provided we stay within the deterministic, logical setting" because Paul Christiano's work shows that if we move to the probabilistic domain (the real world) it becomes possible.


Really fascinating stuff. I guess an AGI has to skirt the boundary between the purely rational (i.e. evaluating statements of propositional logic) and the irrational (i.e. intuition).


What do you think intuition is, mechanically?


Probably a hidden Markov process[1]?

[1] https://en.wikipedia.org/wiki/Hidden_Markov_model


The point is more that intuition isn't something opposed to rationality; rather, it is a decidedly deterministic logical process that you merely lack conscious insight into.


I hadn't ever thought of it that way. I guess it's a false dichotomy.

I've been reading up on deep belief networks and Restricted Boltzmann machines. Using these tools definitely reflects a deterministic logical process; there just happens to be hidden state involved.
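
For the curious, here's a minimal sketch of what that hidden state looks like in an RBM (numpy only; the weights are random placeholders, not a trained model):

    import numpy as np

    # Minimal RBM inference step: given visible units v, the hidden-unit
    # activation probabilities are a fixed, deterministic function of v and
    # the parameters (W, b). Stochasticity only enters if you then sample
    # hidden states from those probabilities.
    rng = np.random.default_rng(0)
    n_visible, n_hidden = 6, 3
    W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # placeholder weights
    b = np.zeros(n_hidden)                                 # hidden biases

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    v = np.array([1, 0, 1, 1, 0, 0], dtype=float)  # a visible input vector
    h_prob = sigmoid(v @ W + b)                # deterministic given v, W, b
    h_sample = rng.random(n_hidden) < h_prob   # optional stochastic sample
    print(h_prob, h_sample)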


I've always wondered: why should recursive intelligence improvement be possible?

Why does intelligence improvement have to be recursive improvement to the intelligence "source code"?

Why can't it simply involve adding more hardware?

Maybe the resulting intelligence of putting a million pieces of human-level hardware next to each other would only be at the level of a million human beings working in close cooperation with perfect communication ability. But that seems pretty powerful.

That might not be perfectly omnipotent, but it seems like it would be pretty effective, in the fashion that humans are effective and constantly improving themselves: slowly on average, and quickly in the best instances.


>Why can't it simply involve adding more hardware?

Because adding linear amounts of hardware won't give you linear increases in performance. Almost no machine learning algorithms are O(n) in asymptotic complexity. Actually, I can't think of one that is.
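
One way to see it, assuming perfect parallel scaling and a simple n^k cost model (an illustrative sketch, not a benchmark):

    # If an algorithm costs ~n**k, then `factor` times more compute only lets
    # you handle a problem about factor**(1/k) times larger.
    def problem_size_multiplier(compute_factor, k):
        return compute_factor ** (1.0 / k)

    for k in (1, 2, 3):
        print(f"O(n^{k}): 100x hardware -> "
              f"{problem_size_multiplier(100, k):.1f}x bigger problem")
    # O(n^1): 100.0x, O(n^2): 10.0x, O(n^3): 4.6x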


Aren't humans already doing this with supercomputers and the internet overall? We can already have a million pieces of hardware communicating. What makes this any more powerful for an AI or uploaded mind? We're already doing it. So either the recursive improvement is already underway, with no need for AGI or uploaded minds, or future AGIs and uploaded minds won't do any better than we are.


Think about it differently.

The hard problems are going to become easier and easier to solve because they themselves consist of simpler problems. What technology is really good at is sharing its findings across the entire network. What one entity learns, every other entity now has access to.

The mistake in dismissing this line of thinking is in believing that complex problems are irreducibly complex; they are not. They are simply layers of ever simpler problems.

That's how we came to create the complexity and the problems that arose from it.


> The mistake in dismissing this line of thinking is in believing that complex problems are irreducibly complex; they are not.

This is almost certainly false. The current state of the art in complexity theory is that some complex problems are irreducible. The expectation is that P != NP, and that means there are irreducible problems. Assuming that P = NP is almost, but not quite, like assuming perpetual motion.


Sure, but that's not what we are discussing here. We are discussing things in everyday life, not mathematical complexity.

The claim was that the interesting problems are much more complex and thus can't be solved easily by machine learning. My point is that most of these complex problems aren't really anything but simpler problems that gain complexity through the connections between them, not by being irreducible.

Any engineer will tell you that's exactly how you solve complex problems: by breaking them into smaller ones.


The way we model the world is with mathematical models, and the complexity bounds on those mathematical models are actually statements about the irreducibility of those problems to simpler ones.

In fact, the model is a lower-bound approximation to the real thing, and if the approximation already has a certain complexity, then the real thing will be much worse. That is the whole point of having a model: you approximate reality with something simpler and hope the solution to the simpler model problem is close enough to the solution of the real problem.

So what real world problems are you talking about? I'd say much of engineering and science is coming up with these solvable models for real world phenomena and so far no one has cracked the nut on coming up with a simple solution to problems that have complicated models. You might argue AGI is not one of those problems but that's a different argument.


I was responding to this.

"In the real world, the interesting problems are much much harder than that."

In one of the parents.


We can conceptualize problems as discrete units to be solved and enumerated. But reality is dynamic and relational, and all problems are ultimately interrelated somehow. It's true that discrete problem units are smaller and simpler than the whole. But this increases the overall work needed.

An example is sequencing the genome. This problem was solved much more quickly than anticipated due to advances in computer processing power. But it also revealed a huge new problem space of epigenetics. Complexity is emergent, not static and hierarchical. Progress is a tree, not a ladder.


It is an emergent combination of simpler problems. My point (claim) is that a lot of the simpler problems are the same.

It's only in how, and to what extent, they are combined that they take on varying complexity.

So once you solve some of those simple problems in one problem space, you can solve the same level of problems in other places.

In other words, you don't have to start over with every single problem. I am not saying it's always like that, but I believe it happens often enough to make the singularity a realistic thing to plan for.


An important aspect that the author, in my opinion, doesn't touch on is attention. Most of us humans are capable of paying attention to a _single_ task for only a short time, perhaps an hour or so. Now take a computer. A computer, given sufficient electricity, can pay attention to such a task until its hardware fails. Imagine if someone such as Einstein or Schroedinger were capable of paying attention to a single task (such as unifying physical theories) without needing food, water, waste release, sleep, or a social life. Also, the task is poorly defined: increasing an A.I.'s intelligence isn't a single task; it can be achieved by, for example, creating faster hardware, increasing efficiency, or optimizing software, as well as by much higher-level tasks.


The interesting part of all this is: how would a machine know how to stop and take a shower?

Right now if I notice one of my programs is doing something silly I come along and stop it, find out why it didn't work and start it again.

What if the AI gets stuck on a (silly?) problem for an eternity; who will be there to help it out? Obviously I'm making some generalisations about how The Singularity could potentially work. But maybe only being able to spend a certain amount of time on something is an important feature of our brains?


That's an underlying motivation for AI research and development. If you have something that can improve simply through uptime then computational hardware will make it happen.

Frankly, what I'd like is a 1000-foot-high overview of all current types of AI technology, i.e. what exists and what it's useful for solving. I think it's all just training neural nets on large datasets, though. I can't help thinking there's a lot more potential than that.


This seems to reflect a fundamental misunderstanding of cognition: that only the conscious part matters. In fact we don't really understand how the brain works on problems, but there is plenty of evidence for continuous background processing: ever heard of someone getting an idea in the shower? Ever had the answer to a question pop into your head while you were doing something else?


The point stands that one could probably reduce the need for many daily actions if attention and willpower were unlimited. Even if you get an idea in the shower, you're still "wasting" time by not being able to write a paper on it immediately, having to take time to dry and clothe yourself first.

Whether this provides a few percentage points or an order of magnitude advantage is a different question.


What will replace Moore's law as the engine of exponential growth? The progress in the last 30 years in computing will not be replicated in the next 30 years for the simple reason that we are hitting hard physical limits of atomic scale and thermodynamics.


Right now chips are mostly two dimensional. Some of the structures might have three dimensions, but the general layout of the chip is two dimensional.

Going into the third dimension would allow you to cram way more transistors into a small volume of space. Once we run out of improvements along the first two dimensions, up might be the only way to go to keep transistor growth going for the future.


There are thermal issues with die stacking. Where does the heat go?

This is a general problem in solid state physics. Power density is already a huge problem.


Getting the heat out is an engineering problem. If nature could solve it (the brain is 3D), we will find a way too.


But we haven't yet, and saying "keep doing the thing that increases power density" when we know that currently doesn't work doesn't really address the main complaint.

No one knows how the brain computes and the current hardware models are very poor approximations as the article outlines. Stacking more silicon is probably not the way to do it.


What does not work? Flash is already 3D. DRAM is rapidly becoming 3D. High end FPGA chips have been 3D for a couple of years. Going vertical is the obvious way to extend Moore's Law, and the main reason why we don't see more of it today is that until recently it was easier to shrink transistors, so that's what people did. Now that's changing, and people will solve engineering problems related to 3D just like they solved engineering problems related to transistor shrinking.

We might not know exactly how the brain performs some of these tasks, but we are pretty sure neurons are arranged in 3D structures, and heat dissipation is not a problem.


> we are pretty sure neurons are arranged in 3D structures, and heat dissipation is not a problem.

Actually, heat dissipation is a HUGE problem for biological structures. Heat exhaustion is a really easy thing to have happen to you.

Biological organisms operate well only in a very narrow temperature range. The brain is a gigantic heat producing organ coupled to an enormous heat dissipating organ--skin.


Flash and DRAM only have localized activity so heat isn't so much an issue - yet.

There is actually a limit study, based on fundamental physics, showing that the amount of heat you can remove from a solid per unit time is bounded. Below a certain volume, overheating is unavoidable. We're very close to that limit.


Could be biological: actual neurons; or using protein folding to do computation.

Meanwhile, 28nm remains the cheapest process per transistor - the first time smaller nodes have been more expensive (due mainly to heat). In the short term, we might go for larger dies, clocked lower, to get more computation for less power. Perhaps all that wasted hardware will get used more efficiently (e.g. Metal and Vulkan).


> 2) There’s a huge lack of incentive

This is THE big one for me. In the classic paper-clip-maximizer-gone-wrong example, there's no fathomable reason why generalized AI is necessary for such a specific, mundane task. But the organizations that have the (enormous) resources to develop such an intelligence are almost entirely focused on small numbers of tightly scoped problems.

It's very difficult to see any marginal ROI for any organization whose end goal is not some form of world domination.


>It's very difficult to see any marginal ROI for any organization whose end goal is not some form of world domination.

Of course, that just means you'll see AGI efforts from organizations whose end goal is world domination. In fact, in general, half the people I've ever seen or heard using the term "AGI" were basically wannabe supervillains.


Every successful supervillain starts out as a wannabe supervillain.


I was talking about you. And since you've now got a whole organization and funding and other people working for you, you seem reasonably competent at it.


You should consider what this sort of exchange does to the reputation of MIRI...


There's one huge incentive I can relate to: a product for the n percent of lonely men in each generation who lack the social skills or status, or who otherwise cannot attract a mate. And afterwards, the men who would find a perfect and sentient companion to be superior to dating and relationships. I'd bet that logs of Siri's queries support this idea quite well.


Well if a general AI already exists, wouldn't you rather use that than develop your own AI just to maximize paper clips? If general AI exists it only makes sense to apply it wherever possible (excluding security, etc) to see if it can find any optimizations.

It sounds like a viable business plan and good incentive to me. Google builds gAI, sells it as a service to solve any problem.


I really think you are underestimating the impact that GAI will have on the planet. Anybody with it will become the most powerful person on Earth ever.

Honestly, I think we are naive in thinking that GAI will just become another service. From our perspective it will be like inventing magic. Ask it what you want and it will give it to you.

The future will truly be stranger than fiction once GAI arrives. It will be like nothing we ever imagined.


I'm unconvinced that GAI is possible in practice, and only marginally more convinced it's possible in theory.

Consider how buggy most software is. Consider that we have nothing approaching a general theory of computation. Consider that we cannot measure the quality of any given non-trivial piece of software. Consider that we don't even know what "quality" means in this context.

Consider there are minor mathematical issues - the halting problem, P != NP, the Incompleteness Theorems - which strongly suggest that all possible symbolic systems must be either incomplete or lacking rigour.

Consider that the thinking around AI is an unholy mess of millenarianism, techno-evangelism, philosophical confusion between unrelated concepts (learning, personality, emotional drives, sentience, self-improvement, general intelligence, "magical" post-human physics...) and wishful thinking.

I'm completely happy that we can build AIs for specific domains that perform better in those domains than humans do - because we already have.

I'm fairly sure we can build learning machines that can analyse a domain and provide a practically useful overview of it to a much deeper and broader extent than we do already.

I'm also fairly sure that process has practical and mathematical limits which are analogous to the transistor size limit for Moore's Law, and that the nature of symbolic computation itself makes it the most difficult of all domains to master.

My guess is there won't ever be a post-singularity god-AI. But there will be AIs that appear smarter than all humans in specific areas. And that these AIs will continue to be dumb slave machines which do what we tell them to, because domain mastery is completely orthogonal to any requirement for sentience, personality, or motivation.


>>My guess is there won't ever be a post-singularity god-AI

The singularity already happened, in meat space, i.e. humans. Now those same humans are trying to create AIs smarter than themselves. We still have not accomplished it, but we will. All of this looks self-evident from my point of view.

Remember, people used to think that humans would never fly. But why would they think that? There was plenty of evidence that flight was possible. Evolution invented flight, then we did the same. Evolution invented GAI in meat space, and now it's our turn to do it. This is a virtual certainty, unless we destroy ourselves.


> The singularity already happened, in meat space, i.e. Humans.

The singularity also happened in net space--the Internet.

For all of its faults, the Internet is really effective external species memory. All of our current AI relies on the Internet as its "storage".


> smarter than all humans in specific areas

What if that area happens to be persuasion?


Why would a GAI make me more powerful than the president of the US or the CEO of a company like Google? They already have massive amounts of computing and brain resources at their disposal. As the author mentioned in the article, super intelligences already exist in the form of large organizations.


Assuming you are the only one that controls the GAI, then yes, you will be more powerful than the entire planet. The government and Google are powerful because they have all those brains working together to solve problems and accomplish tasks. But a single GAI can outdo all of them in any task.

A GAI will be smarter than the entire human race combined. It really does not matter how much computing power you have if it is not GAI.


Lots of companies are trying to build a self-driving car. I've yet to hear of anyone working on a car smart enough to say "fuck off, I don't feel like driving today." And as you point out, I don't know why anyone would.


> It's very difficult to see any marginal ROI for any organization whose end goal is not some form of world domination.

That still leaves a ton of organizations!


Heh, true! I think it's really more a question of which of those organizations' shareholders are willing to stomach the investment.


Gwern's response to the complexity arguments: https://www.gwern.net/Complexity%20vs%20AI


Great counter post.

An off-the-top-of-my-head summary of some key points:

- Diminishing returns aren't necessarily fatal. Even if it takes 10^6 times more computation to double intelligence, we may very well build a machine powerful enough.

- Units of computing per dollar, in contrast to Moore's law, continue to double consistently and show no sign of slowing down.

- Computational complexity is a theoretical model that makes certain assumptions that don't always matter in the real world, such as optimality and worst-case performance. Approximate solutions can be far cheaper and plenty sufficient (see the toy sketch at the end of this comment). Average-case performance is often more important.

- Impassable barriers can sometimes be entirely avoided by solving the root problem in a different way. Self-driving cars don't need human-level vision; they just need a way of sensing the immediate environment, which LIDAR provides despite being technically inferior to the human eye.

Of course none of this says we will achieve strong AI or a singularity. It is primarily a response to the type of arguments found in the linked article above.
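
As a toy illustration of the approximation point (my own sketch, not from gwern's post): exact TSP by brute force costs O(n!), while a greedy nearest-neighbour tour costs O(n^2) and is usually good enough.

    import itertools, math, random

    # Toy TSP: brute-force optimum is O(n!) while a nearest-neighbour heuristic
    # is O(n^2) -- dramatically cheaper, and typically within a modest factor
    # of optimal. (Random points; n kept small so brute force finishes.)
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(9)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def tour_length(order):
        return sum(dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
                   for i in range(len(order)))

    # Exact: try every permutation with city 0 fixed (8! = 40320 tours here).
    best = min(tour_length((0,) + p)
               for p in itertools.permutations(range(1, len(pts))))

    # Approximate: greedy nearest neighbour starting from city 0.
    unvisited, order = set(range(1, len(pts))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(pts[order[-1]], pts[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    greedy = tour_length(order)

    print(f"optimal {best:.3f}  greedy {greedy:.3f}  ratio {greedy / best:.2f}")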


Nicely captures a strong set of counter arguments.


It's curious why people try to build arguments for linear or even worse growth. The arguments are so odd, it seems to me that they are more about the wishes of the author than an attempt to provide a counterargument. Perhaps it's just a misunderstanding of the theory?

Why is this argument about AI? The idea of a technological Singularity has nothing to do with AI. In fact, the theory is pretty explicit about the fact that the details of future technology are unknowable, and are generally never predicted correctly. The theory attempts to explain a global evolutionary model in which global evolution continues regardless of the speed of localized evolutionary systems, like, say, biological evolution. Local effects of any particular technology's growth are not predicted; moreover, they are said to be unpredictable.

Even if the OP were successful at arguing that AI does not work like every other technological system, that is not salient to the ETA of the Singularity.

The point is not that CPUs double in transistor density every 18 months, and therefore Skynet. The point is that evolution was exponential, human knowledge was exponential, electrical technology was exponential, digital technology was exponential, biological technology was exponential, and in aggregate exponential growth across all technologies is consistent and predictable. Even as one technology's growth or usefulness tapers off, others supplant it.

If you arbitrarily pick something, say vacuum tubes or the printing press, and create some sort of argument that it doesn't in and of itself experience exponential growth, you may succeed in your argument, but you haven't said anything.


Kurzweil's point is the following: "who cares?" Even if it takes one thousand years, it's a blink in the history of humanity and of life.


If that was Kurzweil's point, he wouldn't have made such specific predictions, nor would he have discussed bringing his father back to life digitally, or put himself on a regimen of supplements to stick around long enough.


Actually, Kurzweil cares. He keeps saying that according to his "laws", Singularity will happen by 2029.


Even more than that, Kurzweil boldly predicts that nanobots will cure virtually all disease within ~15-25 years (in the 2030s at the earliest, but no later than the 2040s).

I certainly hope this prediction is true. But I think it has more to do with the fact that Kurzweil will be creeping towards the end of his life (gauged by average U.S. life expectancy) in the next 15 years.


Yeah, it's the old joke about futurists: When will humans become immortal? And every one of them gives an answer just short of their own expected demise.


> humans become immortal

The obligatory note: true immortality is unattainable in principle without indestructibility (which is hard to imagine even theoretically). Otherwise people will continue to die, en masse, from various unnatural causes. My guess as to the half-life of a human in the best of circumstances is just a couple of hundred years...
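
A quick way to see where a number like that comes from (illustrative accident rates, not actuarial data):

    import math

    # An ageless person facing a constant annual probability p of unnatural
    # death survives exponentially; the half-life is ln(2) / p.
    for p in (0.001, 0.003, 0.005):   # assumed annual unnatural-death rates
        print(f"p = {p:.3f}/yr -> half-life ~ {math.log(2) / p:.0f} years")
    # ~693, ~231, and ~139 years respectively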


Just make backups.


Not all of them, but the others are signed up for cryonics.


Actually, Kurzweil has always estimated the singularity at 2045. It's passing the Turing test that he estimated for 2029. Not that he really defines the singularity.


Mad respect to Mez, but I was disappointed by something here. He devotes a very thoughtful paragraph to the algorithmic complexity of intelligence augmentation, then cites Intel achieving n^2 improvements in transistor density in linear time as being non-transcendental.

He's correct, but the reason is obscured: Moore's Law appears to be following a logistic curve, and is leveling out as we speak. If it weren't, the compounding interest of quadratic transistor increases over linear time could well lead to a (relatively) hard takeoff at the point where a single chip contains a human brain's worth of calculating ability. Granted, that concept (a brain's worth of computation) is hand-wavey and poorly defined: but the point is that if year 20n's Intel processor has one brain's worth, then year 20n+1's has two, 20n+2's has four, and so on.


And all the CPUs in the world have 20 billion brains' worth. Despite the large number of transistors, they're still not organized in a way that makes them intelligent, no matter how many times you multiply them.


>Granted that concept (a brain's worth of computation) is hand-wavey and poorly defined

It's not that bad. There are two measures: first, functional equivalence in devices using optimal algorithms - things like Siri and self-driving cars - and second, simulating a copy of the neurons in the brain, as in the Blue Brain project.

The numbers are roughly 100 teraflops for the first case, about 1000x that for the second.

Moore's law gets all the press coverage, but the more interesting thing is that the amount of computing per dollar keeps doubling; it did so long before Moore and probably will for a while yet. https://ourworldindata.org/wp-content/uploads/2013/05/Calcul...


Moore's Law is important, but it's not more important than the emerging complexity of connected networks and devices, IMO.


People often opine that groups of humans are more intelligent than any human in the group. In terms of raw processing power, sure. But in any way analogous to AGI, it would seem not: for that, it would have to be the case that the collection of humans can accomplish something in fewer human-hours than the smartest of them could alone. That might be the case in carefully designed situations, but generally it doesn't seem to be.


The essay talks about Intel. Intel the company could design its next generation chip much much faster than its smartest member could alone.


Groups of humans are culturally smarter than any individual human because culture persists. So the smartest human can invent something useful, and other humans can then expand and elaborate on the invention.

Specific groups of humans - committees, office groups, political groups, etc - may well be dumber than the human average.

For cultural intelligence you need specific processes that generate, share, and try to maximise collective intelligence. Most groups lack those processes, and without them humans reduce to a confused and milling herd which is easily led by charismatic individuals - who are not necessarily the best and brightest.

Edit to add: processes to maximise collective human intelligence (like science, and persistent education) are exactly analogous to the processes that are claimed to be needed for GAI. Humanity as a whole is already an evolving GAI which has completely outperformed the limited potential of individual humans.


He writes:

"Nothing about neuroscience, computation, or philosophy prevents it. Thinking is an emergent property of activity in networks of matter. Minds are what brains – just matter – do. Mind can be done in other substrates."

Yet the rest of his post is obsessed with "building" minds or AGI and tries to extrapolate based on that premise. There is a school of thought that views AGI as an emergent cybernetic process, "a metasystem-transition" [1]. This has roots all the way back to the concept of "Noosphere" that comes from Teilhard/Vernadsky [2]. Even if one does not feel aligned with such ideas, it is intellectually dishonest to posit AGI solely as the product of directed human engineering.

It is far more likely, in my view, that AGI will be a Black Swan event, and therefore all attempts to place it on a time scale are fraught with peril.

[1] https://en.wikipedia.org/wiki/Metasystem_transition

[2] https://en.wikipedia.org/wiki/Noosphere


There are also schools of philosophy where minds are not just matter.


None of which ever explain why computers can't do the same whatever-it-is as flesh can.


Because they claim that the mind is not just flesh.

I mean, if you want to disagree, feel free. But at least understand what the claim is that you're disagreeing with.


No, I got that.

But why can't computers be not just silicon? If natural selection can blunder into exploiting unusual physics, why can't we do so deliberately?


One more time: The claim is that minds are not just physics. Claiming that we could use unusual physics doesn't address the issue.

The idea that the physical universe is all that exists is so deeply ingrained that it's really hard to get people to even see that they're thinking inside a box...


"Physics" is our word for "The rules that the universe follows".

So what are you proposing, exactly? That it doesn't follow rules?


That there is more that exists than the physical universe.

In short, God. If God exists, if God is someone rather than something (a "he" rather than an "it"), and if God made humans in his image, then human personality can be real. It can be more than just a property of the atoms that make us up.

This may sound like a bunch of mystic woo. But I argue that it is the only thing that explains our observations of ourselves. No matter how much our theories say that we are just matter, that our personality is just the impersonal plus complexity, we still live - cannot avoid living - as if we were persons, not just machines. Why is that? I assert that our experience of personality is evidence that materialistic theories are inadequate.


These philosophies are irreconcilable with physical law.


If there were a physical law that proved that there is only physical law, you would have a valid argument. What there is, instead, is an assumption. A presupposition. If you will, a philosophical stance.


I'll stick to debating science, not religion, thank you.


You can't help it. You made a religious claim when you said, "These philosophies are irreconcilable with physical law." (Or at least a philosophical claim, but philosophy and religion fundamentally answer the same questions, though they do so in different terms.)

But you can't help it in a wider sense. You seem to have a view that science tells you the real truth about reality, but that is not a scientifically-provable view. It's a philosophical or religious (in the wider sense) view.


It's a good essay. The term singularity in the AI context has always bugged me as being ill defined. I think the interesting point will be when intelligent machines can run things and build other intelligent machines such that, if all the humans disappeared, they would keep going. That doesn't mean there needs to be a sudden, division-by-zero-style increase, or 'sentience' of a kind that keeps the philosophers happy; just that the robots can survive, reproduce and evolve without us. It would be a big change in history though.


Have people had success in efforts to train grounded deep learning systems across a variety of tasks in simulated environments? Have they had success in transferring that learning to new tasks?


The corporation example always kills me because it's a terrible metaphor.

A corporation optimizes for shareholder value, market cap or some other X related to business/market goals. They do not optimize for global intelligence capability. They do not benchmark the company based on quantitative capacity to meet or beat human capabilities across the spectrum of activities. Corporations focus on one or a handful of market specific metrics where they meet or beat their competitors. Full stop.

You could argue that it would be in the company's interest to focus on general corporate capabilities, in theory giving them a major advantage in the market, but they functionally don't do that, and I would argue can't because that's not what they are designed to do. I think the only major company that might be doing something close is Alphabet, and even they are hamstrung by it.

What I do agree with is this part: Lack of incentives means very little strong AI work is happening

Which is my primary frustration with the field. Most people don't even want to discuss it, let alone try and specifically work on it (even if it means working on subsets which could help lead to it).

I think there needs to be a philosophical MOVEMENT to create AGI. I think it will take that to get there in a short horizon. I think it will happen regardless, but without evangelical AGI proponents it's going to take a lot longer.


>What I do agree with is this part: Lack of incentives means very little strong AI work is happening

Really? OpenAI is a thing. Numenta has been a lovely scam for VCs for years. I found an "AGI" company just yesterday that actually publishes peer-reviewed research (http://www.maluuba.com/research/). There's another one that's kinda cranky but sent a guy to give a talk at MIT CSAIL last October or so (http://www.vicarious.com/).

Seems like it's a space where you can pitch yourself as "AGI", but you have to get your revenue from solving real-world machine learning problems. That seems pretty appropriate to me: you have to solve someone's problem to have a business.


OpenAI isn't trying to create AGI. They are trying to create "Safe AI" (an impossible oxymoron, IMO) and to democratize the tools that we think might lead to AGI, so that AGI is not all in one entity's hands and is safe. Demis is the only one so far that I have seen actually say they want to create AGI - which is why I gave a hat tip to Alphabet in my original post.

Numenta, OpenCog et al... all of their founders want to create AGI, but they aren't evangelical - they are focused on creating companies which can move progress forward on NAI. Which, by the way, makes perfect sense; that's exactly what I do, because it's impossible to fund AGI for its own sake. In fact I NEVER talk to investors about how our goal is in the AGI space. For a million different reasons.

> Seems like it's a space where you can pitch yourself as "AGI", but you have to get your revenue from solving real-world machine learning problems.

In fact it's worse. If you pitch yourself as AGI you'll get 99% of investors to walk away immediately.

No, what I am saying we are not hearing is this:

"The purpose of humanity is to build an Artificial General Intelligence, it's the thing that we should all dedicate our lives to because it's the offspring of humanity and the most important thing any of us will ever contribute to."

Nobody is out there beating that drum yet, and it's a shame.


>Nobody is out there beating that drum yet, and it's a shame.

Nobody is beating that drum because it comes off as religious crazy-talk. I see zero reason why it should be our purpose to create any kind of metaphorical species-offspring, especially when you yourself say that any concept of making it safe for us to share a planet with is oxymoronic. That sounds like saying, "our purpose is to build an apocalypse and die for its greater glory". Again, that comes off as religious crazy-talk of the suicidal kind.

It seems to me there are entirely sane reasons to work on advanced machine learning and increasingly general AI. That's just not at all one of them.


Yea, well I think that's what it's gonna take.


In which case I'd rather not have AGI at all.


Why? What else are we here for?


To enjoy life? What makes you think there is some grand goal to it all?


> What makes you think there is some grand goal to it all?

Well nothing, we get to determine it. According to what you write, your assumption is that the "grand goal" is to "enjoy life."

I say, that's probably not it - or rather I don't think so (based on my experience/research, general revealed preferences for humans writ large).

So my proposition is that our "grand goal" should be to build AGI.


Jesus, man, step outside once in a while.


I'm curious what assumptions you are making about me.


You appear to be in the grip of a religious zeal to fulfil the destiny of mankind.

Such a narrow view of life is a tragic waste, regardless of what you think that destiny is. I expressed my frustration with the narrowness of your perspective in a metaphorical way. I am not literally assuming that you don't go outside.


I can't reconcile these two concepts: "We evolved due to random chance, with no guiding hand", and "The purpose of humanity is..."

But you pretty much have to believe the first (our intelligence is just neurons, there is no magic behind the curtain) in order to believe in AGI at all.


Existentialism says that because the universe is meaningless, you need to create your own meaning. That applies to humanity as well as it does to an individual.

Right now, humanity has no "goal", just as is currently the case with AGI - we haven't figured out a seed goal for AGI yet. In my view we should agree that the meaning for humanity is to create our successor: AGI. This fits into the evolutionary progression, only it becomes self-directed.


But by your own argument, that's just a meaning that you made up. Why should the rest of us view it as having any validity whatsoever? Why should the human race decide that that goal is the one they should have, as opposed to, for example, destroying all life on earth? (That's the problem with existentialism - you can pick anything as your meaning.)


Well that's my task...to sell it so that people will buy it.


Things can have a purpose without a god being there to give it.


They can? The way "purpose" was used in the original post, there just about has to be someone there to intend.


Wait, what? Numenta is a scam? This is the first I'm hearing of it. Could you please expand on that assertion? By the way, one of the founders [1] of Vicarious was also a founder of Numenta.

[1] https://en.wikipedia.org/wiki/Dileep_George


>Numenta is a scam?

In the sense that they take large amounts of money from VCs, don't ship a product, don't make a profit, and then clean up their books and start another company eventually, yeah. OTOH, if you're one of their investors, you might have a better look at their business model than me, so I could well be completely wrong. They've never gone public, so what would I know?


What VCs? As far as I know, Numenta was funded by its founder, Jeff Hawkins, with his own money.

More importantly, they never really cared about making a profit. Their main goal has always been reverse engineering the neocortex. And regarding "shipping a product": the production-ready [0] and research [1] codebases are both open source and being actively worked on.

Next time please refrain from making bold claims about companies you know very little about.

[0] https://github.com/numenta/nupic

[1] https://github.com/numenta/nupic.research


Hmmm. Seeing as they've been quiet for a long time my impression was that they simply stopped being viable or relevant, not with a bang, but with a whimper.


Using a measurable quantity N as a proxy for singularity progress and/or status is valid; Vinge himself did it in a Long Now talk (power generation per person, IIRC). So in some sense asset accumulation per entity or organization could be a similar measure. Not saying this author does a particularly good job of it, or that it's a good measure, however.

http://longnow.org/seminars/02007/feb/15/what-if-the-singula...


Yes I'm familiar with the proxy case and I think it's a worthless metaphor. I explained my reasoning in my comment but the whole point is that goal direction is the major difference.


hmm, I figured the Singularity was just small, and it was fusion that was far away.


The industrially complex game that is the military dictates that, in the end, most guns will point at one target.

The singularity has already picked up that flag and waved it.


that bit he's bolded isn't key, is it?

the singularity isn't "ai making new ai"

the singularity is ai solving problems that we can't - "greater than human intelligence."

which has basically already arrived, albeit bounded such that we still have ultimate control over what problems we direct ai to solve.


He wrote this about a year before AlphaGo beat Lee Sedol... which happened 10 years before anyone expected. The singularity in the mirror could be much closer than it appears, and everything he writes tells me he knows nothing about AI.

This piece is full of sloppy thinking as well as obsolete. Calling corporations superhuman AIs doesn't clarify the problem; it introduces oranges to a discussion of apples. And even in this irrelevant tangent, he is wrong. As we so often see in government and the private sector, many of us can be dumber than a few of us. Collective decision making has pernicious emergent properties, which means we should consider many corporations as subhuman AIs.

> The most successful and profitable AI in the world is almost certainly Google Search.

This, too, is false. Parts of Google ads might qualify as the most lucrative. But other parts of Google outside search, notably DeepMind, are much more successfully pushing AI forward. Autonomous cars and drones are two very successful examples of tech using AI.

The fact that he even brings up Jeopardy Watson in a discussion of AI shows that he knows little about the state of the art, which is light years ahead of IBM's question-answer system.

Ethical issues will not prevent nation-states and corporations from continuing to pursue the AI arms race.

And there are huge incentives to be the one to get this right. Which is why enormous investments in AI are being made by governments and the private sector alike. Google's DeepMind is going to more than double from 400 to 1000 people, half of whom are AI researchers. DeepMind is obviously a research powerhouse, and that investment alone must cost hundreds of millions of dollars beyond the acquisition price of 400M pounds.

AI advances hand in hand with hardware capacity. Distributed computing and faster chips will continue to progress, and pull AI along with them. A breakthrough in quantum computing will entail a huge step-wise leap in computing power and therefore AI. So progress will be non-linear, but not in the sense he thinks.


Rodney Brooks and Marvin Minsky, pioneers in the fields of robotics and AI, don't think we're anywhere close to general purpose AI. Minsky doesn't think we've made much progress in that area in the last several decades. The things you mention were worked on in the 60s (leaving aside Quantum Computing, which is probably a red herring for AI).


Marvin Minsky is dead, so you shouldn't refer to him in the present tense. He is unable to have opinions about current events. Secondly, Minsky was skeptical about neural nets, and he was ultimately proven wrong. Even great minds make mistakes. In the 1960s, we did not have the confluence of big data, much faster hardware, and certain algorithmic advances that make current deep learning performance possible. So what you say is partially false. We had some of the ideas in the 60s, but we were missing certain conditions necessary to support and prove them out. Now we're not missing that, and AI progress has greatly accelerated.


I don't see how alphago changes the content of this essay in any way.


They beat Go 10 years before anyone predicted. The point is that AI is accelerating in its progress.


He addresses this specifically in the article, using deep blue instead of alphago. Game playing AI is not progress towards AGI.


He does not address it in the article. As with most of his points, that one is irrelevant. What's important here is that AI is beating more and more complex games such that, directionally, it will eventually be able to play (cough) the game of life, or some important subset of that game, which will be close to AGI. Chess is computable and AI beat it decades ago. Go is not fully computable, and AI won at it recently. It solved a much more complex problem. What's more, Deep Blue has zero to do with the boom in AI that is under way. AlphaGo does. It is part of a wave of recent progress of which the author apparently knows little, because he focuses on Watson and other phenomena that are not central to what's important in AI now.

What would constitute progress toward AGI if not the ability to solve more and more complex problems? Winning at Go involved high-performance machine vision, and I think we'll all admit that vision is an important part of how an intelligence will operate in the world. It also involved reinforcement, or goal-oriented, learning, another crucial strategy for an eventual AGI.


What year do you think the singularity will happen?


Nobody knows exactly. So any precise estimate is guaranteed to be wrong. But people smarter than me, and deeply involved with current research, have said they think strong AI could happen in 10 years or so. Others think it's much further away -- and that's probably the consensus view among respected researchers. It's been 20 years away for the last 80 years, right? /s


Obviously nobody actually knows, I just find it interesting to ask people for their prediction.

> It's been 20 years away for the last 80 years, right?

Well this is how I feel, so I like asking people to make a prediction so there is some public record people can look back at.


This guy has no idea what he's talking about, and the original post of which the linked article is an update was published in 2014. That's a lifetime in AI research, which is moving very fast. Real advances are happening monthly. The people closest to those advances in AI, at DeepMind for example, are moving the field forward quickly and can see strong AI on the horizon. Compute will determine how quickly we get there, but new chips and hardware are coming onto the market that will speed this along.


You should address actual points instead of appealing to authority and ad-hominem attacks.


Let me help you out by responding as an authority to the parent.

I worked at DeepMind as an AI researcher. I can guarantee you will not see "AGI" or "strong AI" in your or your children's lifetimes. It's fun to believe, it gives us something to look forward to, talk about with friends, and discuss on HN. But in reality, we are so far from artificial general intelligence, even with an exponential curve, it will take us 100 more years. The current deep learning era (or more aptly named, pattern recognition) will last another 5-10 years, at best. Then another winter will come.


I can assure you that some of your former colleagues disagree with you.


What on Earth do you think you know, and how on Earth do you think you know it?


Most people actually doing AI research are much more conservative in their estimates and expectations of AGI. They understand very clearly how many of the current AI "breakthroughs" really just barely nudge the ball forward (market-speak aside). Do you use a virtual assistant -- OK Google, Alexa, Siri, etc.? Has your experience with those assistants consistently improved, or do they regress in annoying ways that make them seem obviously ignorant of basic facts, previously known facts, or common sense?


Actually, opinions about AGI are quite mixed. People like Ng are more conservative; people like Schmidhuber are more aggressive in their predictions; both are eminent researchers.

RE: Siri and Alexa - Industry deployments of AI are a lagging indicator of what AI can do. AI is moving at two speeds: research and business/consumer applications. The research is moving faster than the apps.

AlphaGo and many other fundamental papers to come out in the last two years have done more than nudge the ball forward. They constitute significant steps, and bundled together they are an even greater achievement.


There is too much wrong with this post to address all his points. It's just more noise in an already noisy space. To be clear, I didn't attack the author. I simply asserted that he is speaking from ignorance as evidenced by his statements.

But I did raise a major objection, which is simply that his post is obsolete. It might have been reasonable to say what he said in 2014/2015. It isn't as we enter 2017.

Other people in this thread have already rightly stated that corporations are not superhuman AIs. This is true. Strong AI will be one or more models that operate as a coherent whole and be composed of machines, not humans. Corporations are mostly wetware. That's why they're not "artificial".

He claims there is a lack of incentive. That is not true. There is a well-funded escalation of research among large organizations that is well known and reported on, to the tune of hundreds of millions of dollars.


>> Strong AI will be one or more models that operate as a coherent whole

Why think that, though? Why not suppose that Strong AI would be distributed? Why suppose it would have a singular sort of intelligence? Isn't that basically a pre-internet way of thinking about AI? Maybe Super AI will be diffused across the environment, not having any sort of singular intent.



