When I was a young person in the 1970s reading about technology, space travel was the focus of futurism and predictions. I read a lot about the "accelerating rate of progress" in space technologies, and many experts confidently predicted permanent lunar colonies around 1990 and luxurious Martian and orbital space-colony living after 2000. The reasoning was more or less the same as that of the "AI singularists": if you look at the "curve of progress" in space tech from WWII to 1970 and extrapolate from it, we ought to have the full "Star Trek" technologies by now.
However, as we know, the pace of progress in space travel and settlement technology slowed down and almost stagnated, despite a lot of very smart people who thought rapid space colonization was a sure, inevitable thing.
I think the same goes for extrapolations from our current computing technology - as exciting as the blue-sky scenarios are, and as fervently as I hope that brain uploading and similar goals will be achieved, I think the barriers to computational transcendence of the current limits of our bodies will probably be at least as challenging as the barriers to permanent off-earth colonization.
The same situation existed for life extension technologies in the 1970s. People were far more optimistic than the science merited. Here's an explanation as to why things are different now for that field, and you can perhaps draw the parallels for the other parts of the SMILE (space migration, intelligence increase, life extension) equation:
But in the 1970s, getting proto-SENS to work in mice would not have been a billion-dollar, ten-year proposal as is presently the case. It would have required a far greater initiative, one to rival the space program or the formation of the NIH. In 2009 we live in an age of biotechnology and life science capabilities that is far, far removed from the rudiments of 1970. Work that today can be achieved by a single postgrad in a few months for a few tens of thousands of dollars would have required entire dedicated laboratories, years of labor, and tens of millions of dollars in the 1970s.
The advocates of 1970 who predicted engineered longevity in their lifetimes were wrong for two reasons: a) the technology of that period, while advancing rapidly, would remain inadequate to the task for decades; and b) there was no set of events likely to lead to the necessary (enormous) outlay of resources required to make significant progress in life-extending medicine.
Another reason, I think, is that life-science applications are inherently difficult for ethical reasons. It's one thing to risk the lives of a few people sent to space once in a blue moon, and another to deliberately put patients' lives at risk in experiments. Experimenting with computers, on the other hand, is harmless.
In sci-fi stories of a libertarian bent, ethics were a little more mutable in pursuit of the ultimate goals. Such externalities were often avoided or ignored.
As Kurzweil writes in the article, this is exactly one of the arguments against the singularity that he brings up and discusses in his book.
Lots of people argued that if space exploration or car engineering kept growing at an exponential pace, we'd have flying cars and space colonies. They didn't explain why the growth should continue to be exponential rather than end up as an S-curve.
In "The Singularity is Near" Kurzweil argues and shows why computation and reverse engineering of the human brain are subject to exponential growth. Short answer: there are lots of competing technologies and researchers that have different ways of ensuring that current computational advances continue well into the future - we are far from reaching any physical laws that will stop us.
I'm a huge believer in the transformative power of technology, and AI in particular. I don't see, though, why AI is so fundamentally different from nanomachines, "fully programmable" genetic engineering, and "free energy" technologies in either feasibility or potential impact on humanity.
You might say my problem with the singularity talk is that I think we have ALREADY gone through "technological singularity" using the current, rather flaky definition. We already have experienced technology-driven change which has made the future completely unpredictable from the past. The integrated circuit has already revolutionized the nature of human life. The claim that the future changes will be orders-of-magnitude more important in some way seems difficult to measure in an objective way, which is part of why the topic has a bit of a "code smell" of pseudoscience.
Once I really start thinking about how technology has changed the human condition, it seems to me that the invention of both spoken and written language is actually the TRUE "singularity": human civilization is radically different from the way other animals live, and it is the ability to transmit and preserve information that is the most important enabling technology.
Even if I can upload my brain into a computer, isn't that just an upgrade to the capability I already have, which is to preserve my brain contents by writing and transmitting that information into the future?
tl;dr - Humanity has already experienced fundamentally transformative technological change. We will certainly experience more in the future. Is there any objective way to measure the impact of technologies, and are any potential future changes actually "more important" than those we have already experienced?
Again, I really recommend that you read The Singularity is Near before arguing against it (actually not sure if you are arguing against it anymore). But one of the topics there is a long argument on how the pace of change is increasing.
(E.g., compare the difference in life between someone born in the 13th century and someone born in the 14th century - not much of a difference. But compare how life has changed in just the last 50 years.)
Imagine that you are playing a strategy game, and that there is an upgrade that makes all other upgrades go 1000x faster, and cost 1/1000 as much. Surely you would want to get this one as soon as possible?
Imagine a silicon brain that we create that is 10% more capable than our brain. It's nothing astounding, but it's smarter. And it has access to all knowledge within seconds. So this brain is able to actively comb through that information and look at its own design, and being 10% smarter, it is able to design a more efficient AI. That AI, now smarter still, will be able to do the same, and so on. It's not unlike what we do now: you couldn't design and build a modern computer without a modern computer. But the AI will be able to do it better, and faster. So it's not about a potential advancement that changes the world a little. It's about the possibility of being able to jump human knowledge forward by years or even centuries in a very short period of time.
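For what it's worth, the compounding in that argument is easy to sketch. A toy model with entirely made-up numbers, just to show the shape of the claim:

```python
# Toy model of the recursive self-improvement argument above. Every number
# here is made up purely to show the shape of the claim, not a prediction.

capability = 1.1        # assumed: the first AI is 10% more capable than us
speedup_per_gen = 1.1   # assumed: each generation also designs 10% faster
months_per_gen = 12.0   # assumed time for the first redesign

elapsed = 0.0
for generation in range(1, 11):
    elapsed += months_per_gen            # current AI spends this long designing
    capability *= 1.1                    # ...a successor 10% smarter than itself
    months_per_gen /= speedup_per_gen    # which in turn redesigns itself faster
    print(f"gen {generation:2d}: capability x{capability:4.2f} "
          f"after {elapsed:5.1f} months")
```

Obviously the real question is whether each 10% gain stays as easy for the smarter successor as the last one was; the toy model simply assumes it does.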
The comparison to space travel is silly. I'm your age, and I remember clearly what happened. Don't you? We kids were so excited about getting closer and closer to a man walking on the moon. The closer we got, the more exciting it was. Finally, we watched it happen. Woohoo! We did it! And then we all lost interest. It was done. It wasn't entertaining anymore and it wasn't worth any money to anybody, so we got on with our real lives and did things that WERE fun and profitable.
It was a government program built for reasons other than economic, so it didn't grow organically as an economic system serving and adapting to the needs of customers. When the non-economic reason went away, the growth rate dropped to near zero.
Compare that to information technology. Is that one centralized federal program divorced from economic reality? No, it's growing organically, serving countless different markets demanding countless different things. Machines can do more every year, and the more they can do, the more diverse the demands for them to do even more, the more people are willing to pay, the more people work on better IT, the more investment flows in, and so on.
Will there be a singularity? No, we're not all converging on a single point in technology or time. We already have domains where machines are orders of magnitude better than people, and others where the reverse is true, and the machines are sliding past us gradually. There isn't one "general intelligence" model, either. People do some cognitive activities so poorly that they wouldn't be considered intelligent at all if we didn't use ourselves as the benchmark for general intelligence. We'll find ways to improve machines in specific domains, and other ways to generalize their intelligence across various categories of domains, one step at a time, without ever crossing any visible "general intelligence" threshold.
The process has been underway for a long time. We'll eventually reproduce whatever Nature has done, and much that it hasn't, if we don't kill ourselves first.
How soon? How soon what? A gradual change is already underway and will just continue. Maybe at some point augmented humans will notice that they can't think of any specific thing an unaugmented human can do better than a pure machine, but others will debate it and the augmented people and machines will keep improving until they all think it was sometime in the past.
Okay, how soon will it be possible to upload my mind to a long-lived substrate so that I won't disappear like an unsaved term paper when the power goes off? Probably not long after my power goes off....
You are missing a fundamental difference. There is no profit in reaching the stars. Besides that, AI faces no barriers that come close to the physics and energy requirements of space travel. Unlike a warp drive, which we don't even know is possible, the human brain is an actual physical device that exists; it simply uses chemicals as its logic gates and transistors. We both run on electricity.
I'm a biologist by schooling, and a programmer by occupation. So I understand the science on both sides, and it's not a matter of if, it's a matter of when. And it's coming a lot sooner than people think.
This is an indicator that you hardly understand the science behind it. We barely have any understanding of how the brain works, so even if the singularity is possible, it is very, very far away.
A broad understanding of how the brain functions is not necessary (and would not be sufficient) to replicate it. To make a model of a brain that runs on a computer, we only need to be able to replicate its basic building blocks and then copy the structure of a brain over.
The basic technologies to do this already exist, and are presently used to reverse engineer microchips (among other things). What you do is first freeze a brain, and then carefully slice a few micrometers off the top. Then record all the connections between neurons in the top layer. Slice another layer off and continue.
This would take a very long time, and cost hundreds of billions. It would not, however, require any advancement in technology over our present level. The key technology we presently lack is a computing substrate sufficient to run the simulated brain.
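For concreteness, here is roughly the kind of data structure such a scan would produce and the crudest imaginable way to "run" it. This is purely a toy sketch of my own with made-up numbers; real neurons are far richer than threshold units, and a real connectome is many orders of magnitude larger.

```python
import random

# Toy sketch of "copy the structure over": a connectome stored as a weighted
# adjacency list, driving a crude leaky integrate-and-fire update.
# Sizes and constants are invented for illustration only.

N_NEURONS = 1000            # a human brain has on the order of 86 billion
random.seed(0)

# connectome[i] is a list of (target neuron, synaptic weight) pairs, standing
# in for the wiring recovered from the sliced-and-imaged tissue.
connectome = {
    i: [(random.randrange(N_NEURONS), random.uniform(-1.0, 1.0))
        for _ in range(20)]
    for i in range(N_NEURONS)
}

THRESHOLD, LEAK = 1.0, 0.9
potential = [0.0] * N_NEURONS

def step(external_input):
    """One crude simulation tick: leak, integrate input, fire, propagate."""
    global potential
    spikes = [i for i, v in enumerate(potential) if v >= THRESHOLD]
    nxt = [v * LEAK for v in potential]
    for i in spikes:
        nxt[i] = 0.0                          # reset a neuron after it fires
        for target, weight in connectome[i]:
            nxt[target] += weight             # propagate the spike
    for i, current in external_input.items():
        nxt[i] += current                     # injected "sensory" input
    potential = nxt
    return spikes

# Drive a handful of input neurons and watch activity spread through the graph.
for t in range(50):
    fired = step({i: 0.6 for i in range(10)})
    if t % 10 == 0:
        print(f"t={t:2d}: {len(fired):4d} neurons fired")
```

The hard part this glosses over is whether wiring and weights alone capture enough of what a neuron actually does.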
That's one approach, but I don't think there's any evidence that you would end up with a working brain at the end, let alone a copy of the individual you started with. And good luck finding a volunteer. :)
The brain's functionality is not just due to the physical connections between neurons; it depends on chemical and electrical signaling as well, operating on various simultaneous levels (that is, the frequency of firing as well as the strength of firing of neurons matters). A frozen brain isn't working, and you need to copy all that stuff too, and the technology to do that doesn't exist. We can barely measure it, let alone reproduce it.
I don't think you can say that a thorough understanding of all those multiple levels of complexity in the brain just isn't necessary. I really don't think it is that easy.
Someone might have said the same thing about molecular biology before Watson and Crick. Neuroscience has been growing by leaps and bounds in the past decade, and, exactly because it is still in its infancy, there is very significant low-hanging fruit to be discovered as visualization and stimulation techniques improve and are employed at scale. It actually does look like we will be able to figure out how our brain works in a decade (decades?). Once that is figured out, we already have the computing framework to do massive simulations of the brain.
Allen himself is funding a brain research institute, so one wonders why he's pessimistic about the future of neuroscience.
How we work is completely irrelevant to reaching a singularity. These arguments are like claims that we will need only a few computers as large as cities to handle our needs, or like the decision to write OS/2 in Intel assembly.
When we have a GitHub of standardized computations, and deep learning (or evolutionary) algorithms that pull it all together with no human intervention, we will have the singularity. Then it can try to figure us out, or do something useful instead.
It won't look magical, it won't take a huge market force, it won't answer your religious questions or guess what you are thinking... But it will change how we approach and automate solutions to what are currently intellectual problems. Reality is always mundane.
Well, that statement was a lot more accurate even just a few years ago. But we're starting to get a much better grasp on it. In the past 10 years or so we have learned more about how the brain functions than in all of prior human history. It's true, we're not yet experts, but you're assuming that we need to understand entirely how the human brain works in order to build something that supersedes it in capability, and this is simply untrue. I don't need to know anything about how a gasoline engine works to build a faster and more efficient electric motor. In truth, such thinking may even get in the way of my ability to do so. The singularity isn't about recreating a human brain in chips; it's about the point when the chips are able to process more than our brains are capable of. What forms that takes, and what its consequences are, is simply unforeseeable. Simple math proves it's coming.
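To spell out that "simple math": it's just an extrapolation, and it's only as good as its inputs. A sketch with ballpark, Kurzweil-style figures, every constant of which is an assumption rather than a measurement:

```python
# The "simple math" spelled out. Both estimates below are ballpark assumptions
# that are themselves contested; treat every constant as an input to argue
# about, not a measurement.

BRAIN_OPS_PER_SEC = 1e16        # one commonly cited rough estimate
MACHINE_OPS_PER_SEC = 1e13      # assumed starting point for one machine
DOUBLING_PERIOD_YEARS = 1.5     # assumed Moore's-law-style doubling time

years = 0.0
ops = MACHINE_OPS_PER_SEC
while ops < BRAIN_OPS_PER_SEC:
    ops *= 2
    years += DOUBLING_PERIOD_YEARS

# log2(1e16 / 1e13) is about 10 doublings, so ~15 years with these inputs.
print(f"Raw ops/sec parity in roughly {years:.0f} years, given these assumptions.")
```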
I'm sorry, I mean no disrespect, but being a programmer by occupation doesn't really mean you understand what it would take to bring about the sort of rapture predicted by Kurzweil.
Last night I read several posts on HN by people calling themselves "hackers" or "programmers" because they played with WordPress and learned a little JavaScript on YouTube. This is not to be elitist, but I think claiming expertise on this topic requires a little more insight than what passes for being a "programmer" these days.
And you're somewhat wrong, I think. There is no inherent reason that traveling "to the stars" has to be unprofitable. In fact, many of the colonization predictions the parent discussed had some sort of capitalist, profit-driven motive behind them. The pace of "accelerating progress" could produce a cheap means of such travel. I don't know that speculative physics about faster-than-light travel (warping space-time, etc.) is any more unrealistic than some of these prognostications about AI and spiritual transcendence, whatever that is.
But honestly, responding to this post sort of reminds me of responding to Way of the Master types anyway. :)
And it's not just a matter of profit. In fact, research toward superhuman intelligence has to hijack the ordo cognoscendi and jump to the front of the line, because with superhuman intelligence humans can leave the rest of science to machines (hopefully).
There's stupendous profit in every small incremental step toward it. AI research in general has proven to be unbelievably profitable -- even when you fail at your goals (like most AI research to date), every minor partial result can probably be turned into a multimillion-dollar business.
Before the AI winter, there was a lot of criticism of how the US government poured money into projects that fizzled out. One of the results built from the ashes of those DARPA projects was the Dynamic Analysis and Replanning Tool (DART), used to optimize and schedule transportation of supplies and personnel.
From Wikipedia: Introduced in 1991, DART had by 1995 offset the monetary equivalent of all funds DARPA had channeled into AI research for the previous 30 years combined.
Imagine having a computer that would have the same level of cognition as a human. I think most people would disagree with you on that not being valuable.
True, if everybody has access to strong AI. I doubt that is the way it will play out, at least at the beginning. Whoever invents it first will probably hoard it as an advantage over everybody else. The technology will eventually spread to everybody else, but by that time our economy will probably have adapted.
You seem a little unclear on why it's called the singularity. It means a point so drastically different that we cannot predict what the world will look like after it happens. However, should a planet still exist on the day after this occurs, you can pretty much guarantee that the company that controls this technology will make Apple, Google, and Microsoft look like technological infants. Well, unless it is Google, which seems to be the play they are making by hiring RK.
Also, World of Warcraft is virtualized, and everything in that game is potentially infinitely abundant. And yet, it still seems to pull in billions. So...
Yes, but my point is essentially that talking about profits in a post-singularity world is probably about as sensible as talking about Mao as the CEO of China. After the singularity we will likely have a different socio-economic system, and profits will probably mean about as much as a noble rank does today.
As I've said before, I think Kurzweil's focus on a singular A.I. is misguided... he's missed the target but hit the tree. Much of the exponential growth in technological progress (from the printing press to the Internet) has to do with improving the efficiency of information flow between individuals. I'm thinking that at some point this flow may approach neuronal levels of complexity and speed, at which point the collective intelligence of society (or groups in society) will tip over into something resembling a multi-human organism. It's hard to grasp now, but I doubt that whatever the Singularity ends up being will be easy to grasp at this pre-Singularity stage.
Arguing about brains in boxes just seems a little myopic to me.
> And while the translation of the genome into a brain is not straightforward, the brain cannot have more design information than the genome.
We know this is mostly true, but not entirely true. While the genome is probably the biggest repository of information driving brain development, there is no hard-and-fast law saying it's the only one. It's easy to see how a bit of information external to the genome, like "maternal alcohol consumption and lead ingestion are bad," can affect brain development.
And besides that, the information content of a design is not related to the number of unique structures the design describes. By analogy, the output of `print i for i in [0...100000]` contains 100000 unique lines, but the information content is bounded by the program that generated it.
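A quick way to see this concretely (my own toy illustration in Python): the expanded output compresses enormously, and the program that generates it is shorter still.

```python
import zlib

# Toy illustration (my own): a large, highly structured output carries little
# information beyond the short program that generates it, and a general-purpose
# compressor makes that visible.

output = "\n".join(str(i) for i in range(100000))   # ~575 KiB of digits
program = "print('\\n'.join(str(i) for i in range(100000)))"

print(f"raw output:         {len(output):>8} bytes")
print(f"compressed output:  {len(zlib.compress(output.encode(), 9)):>8} bytes")
print(f"generating program: {len(program):>8} bytes")
```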
Oddly enough, Kurzweil actually misquoted Allen there, and his refutation does apply quite well to Allen's original statement, which was that each neural structure has been individually refined by evolution. Evolution does, in fact, operate solely on the genome (plus or minus some epigenetic factors here and there), and it really is impossible for it to tune that many structures individually.
Mitch Kapor and Kurzweil have a $20,000 bet on whether or not any "machine intelligence" will have passed the Turing Test by 2029. Kurzweil, obviously, is on the "for" side. Kapor's taking the "against" side stems from his experience as a software developer. I think he has a point: consider how incredibly difficult it is to give any machine a set of instructions that will get it to do EXACTLY what you want it to do. The promises of new languages, frameworks, and methodologies have all failed to magically make software easy and bug-free. At some level, a machine intelligence is going to need some sort of software. We like to call the game engines we play against "AI"s, but are they really intelligent? In the end they are just rules engines. It IS fun to dream, and those dreams can certainly lead to technological advances and progress, but as someone who spends much of my days attempting to coerce a machine into understanding what I need it to do, I also share Kapor's skepticism. I think Kapor himself said it best: "In the end, I think Ray is smarter and more capable than any machine is going to be."
Perhaps you don't need perfectly designed code, just the right kind of selection pressure? Consider the spaghetti that is our DNA or brain structure. YAGNI design might well be a poor fit for the fractal redundancy necessary for truly complex systems.
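To illustrate what I mean by selection pressure doing the design work, here's a minimal genetic-algorithm toy of my own (it has nothing to do with how real genomes or brains evolved; the target and parameters are invented):

```python
import random

# Minimal sketch of "selection pressure instead of careful design": evolve
# random bit strings toward a target purely by mutation and selection, with
# no hand-written logic about how to get there.

random.seed(1)
GENOME_LEN, POP_SIZE, MUTATION_RATE = 64, 50, 0.02
TARGET = [1] * GENOME_LEN                      # stands in for "whatever works"

def fitness(genome):
    return sum(1 for a, b in zip(genome, TARGET) if a == b)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(201):
    population.sort(key=fitness, reverse=True)
    if generation % 50 == 0:
        print(f"gen {generation:3d}: best fitness {fitness(population[0])}/{GENOME_LEN}")
    parents = population[:10]                  # selection pressure
    children = [
        [1 - bit if random.random() < MUTATION_RATE else bit  # random mutation
         for bit in random.choice(parents)]
        for _ in range(POP_SIZE - len(parents))
    ]
    population = parents + children            # elitism keeps the best so far
```

No line of that code "knows" the answer; the structure falls out of mutation plus a fitness function.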
I am aware that there are already some programs capable of "learning," and you could certainly say that they select the correct paths based on trial and error and "remember" in order to build faster and more accurate responses. I don't pretend to understand how human brains are wired, but I don't really view DNA as "spaghetti." It is code: a four-letter alphabet read in triplet codons, integrated into a system complete with an interpreter and built-in copying, error-checking, and correction. We humans have yet to design something that works so efficiently. We still suffer from errors in the code, though, such as the ones behind cystic fibrosis, Down syndrome, and sickle-cell anemia.
I suppose it is really the brain one would seek to emulate if they were trying to create some form of true AI.
Yes, it's dangerous to draw on examples whose workings we don't understand. But all the evidence seems to indicate that brains and DNA don't pay much attention to parsimony or micro-optimization. Or high-level architecture. Or modularity. Nature just does what works.
The essence of the Watson story: it beat the humans because it was faster on the trigger finger. Same story as John Henry and the steam-powered hammer. As for how it acquired its "knowledge"... by "reading" documents... yes, an ocean is bigger than a teacup. But water can't swim. Call me when a general-purpose machine is motivated to mull over all that knowledge and comes up with original, testable answers to long-standing questions.
One aspect of 'the rapture of the nerds' that I find most interesting is that it would drag parts of metaphysics into the realm of science: a simulation of the neuronal connections is (rather likely) just a computational problem, while explaining a working AGI would be quite a challenge for dualistic ideas.
We will never achieve artificial intelligence unless we create a program that can differentiate good from evil, pleasure from pain, positive from negative.
The basic building blocks of life.
If we just keep making machines that are faster and faster at calculating formulas and storing knowledge, we will have exactly that: a giant calculator.
I definitely think you are on to something. The attempts at artificial intelligence I am aware of all consist of some sort of optimization, so trying to find a good thing to optimize seems a very reasonable thing to try.
(This is just my speculation, so take it with a grain of salt)
Here I think it is reasonable to look to human motivation. Maybe by making an agent that optimizes what a human brain optimizes, we could see similar behaviour?
A reasonable start is Maslow's hierarchy of needs.
1. Biological and physiological needs. For an embodied AI, this could correspond to integrity checks coming up valid, battery charging, servicing.
2. Safety needs. I think these emerge from prediction+physiological needs.
3. After that we have social needs. This one is a little bit tricky. Maybe we could put in a hard-coded facial expression detector?
4. Esteem needs. Social+prediction
5. Cognitive needs. I have no idea how this could be implemented
6. Aesthetic needs. I think these are pretty much hard-coded in humans, but are quite complex. Coding this will be ugly (irony)
7. Self-actualization???
Now, from 1 and 3 it is reasonable to suppose (provided the optimizer is good enough) that we could train the AI the way one trains a dog. You give a command, the AI obeys, you smile at it or pet it (-> reward).
If it does something bad, you punish it. (A toy sketch of this drive-plus-reward setup follows below.)
In order for the optimization procedure not to take an unreasonably long time, I think it is important that the initial state has some instincts.
Make a sound if the battery is low. Pay attention to sounds that are speech-like.
Giving it something akin to filial imprinting could also be a good idea.
Extensive research on the neural basis of motivation should be prioritized, in my opinion.
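To make the drive-plus-reward idea slightly more concrete, here is the crudest possible sketch; every drive name, weight, and effect below is invented for illustration, and tuning them is precisely the hard part:

```python
from dataclasses import dataclass

# Toy sketch of the drive-plus-reward idea above (my speculation made concrete):
# the agent's only job is to pick the action that maximizes a weighted sum of
# its "needs". All drives, weights, and action effects are invented.

@dataclass
class State:
    battery: float        # 0..1, drive 1: physiological
    damage: float         # 0..1, drive 2: safety (predicted harm)
    owner_smiling: bool   # crude stand-in for drive 3: social approval

def reward(state: State) -> float:
    # The weights are made up; choosing them well is the hard part.
    return (2.0 * state.battery
            - 3.0 * state.damage
            + 1.0 * (1.0 if state.owner_smiling else 0.0))

ACTIONS = {
    # hypothetical effects of each action on the drives
    "recharge":        State(battery=0.9, damage=0.0, owner_smiling=False),
    "obey_command":    State(battery=0.4, damage=0.1, owner_smiling=True),
    "knock_over_vase": State(battery=0.4, damage=0.3, owner_smiling=False),
}

best = max(ACTIONS, key=lambda a: reward(ACTIONS[a]))
print("chosen action:", best)   # "recharge" with these made-up numbers
```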
Good and evil are subjective, and therefore not "basic building blocks of life".
Pleasure and pain require emotion, something most researchers don't assume an AI will have.
What do you mean by "positive and negative"? Like, positive and negative numbers? Good and bad business decisions? We already have computers that can do that.
You're talking about what it means to be human, not what it means to be intelligent. You can't judge good from evil if you're not intelligent, so AI's first goal should be just that: intelligence, not morals.