So You Want to Save the World (lesswrong.com)
112 points by inetsee on Dec 28, 2011 | 40 comments



Helping fund development of ways to cure aging is also a thing that falls under saving the world, I think.

Interestingly, there's a great deal of overlap between this arm of the AI community and the most forward-looking longevity engineers. They go to the same conferences, seek funds from the same philanthropists, and so forth. (Aubrey de Grey, biogerontologist and now SENS Foundation CSO, even used to be an AI developer, back in the day when "AI" meant "incrementally better expert systems".) See this, for example:

http://www.fightaging.org/archives/2010/08/artificial-intell...

The penumbra of people, funding sources, and networks surrounding the Singularity Institute and Less Wrong is much the same as the one surrounding the SENS Foundation and Methuselah Foundation - both of which are running their year-end pledge drives, by the way.

http://sens.org/node/2543

https://www.mfoundation.org/donate


Developing a way to cure aging and saving the world are generally very different things, depending on how you define saving the world. The concept of saving implies saving from something. If we have a concept of the world, what should it be saved from? And are we an essential part of the world? Does the world end if we all die?


Mathematicians, computer scientists, and philosophers are going to save the world?

As I read it, a group of researchers is creating philosophy around a branch of thought in order to begin to define the problem. That's all good, but proper scientific research starts once a well-defined problem is synthesized into a testable hypothesis.

The gap between thought exploration and observable hypotheses can be called philosophy. However, when this gap is paired with an assumed purpose (absent a testable or confirmed hypothesis), it is usually best described as religion.

Why say 'save the world' in order to solicit donations? You really don't know enough about the problem to presuppose that any philosophy you are generating will create testable, observable knowledge, or, in turn, that the findings of those observations would even come close to 'saving the world', or provide any measurable benefit to the world - you just don't know.

Why not just say you are a group of researchers trying to better define potential problems generated from AI-related extrinsic risks?

Why do I care? Well, I think the SIAI et al. folks are really bright, and their work produces a lot of valuable insight about applying theories like Solomonoff induction, information theory, Bayesian inference, etc. to theories of economics and intelligence. There is a lot of value here. However, the religious-like beliefs of the group just come off as strange to most people and, frankly, unscientific. I'd like to see these ideas propagate, get fleshed out, and evolve. However, when I mention my interest/projects in AI/semantics to the average Silicon Valley engineer/entrepreneur/investor, you would be surprised how often I hear something like 'not like those singularity people, right? Those guys are like a weird cult.'


There is a clear hypothesis: that a sufficiently advanced AI will manipulate most humans toward its own goals, and even its creators won't be able to stop it once it gets going. So the stakes really are "saving the world".

This hypothesis isn't easy to test. That doesn't make it unscientific -- cosmology and evolutionary theories are also hard to test.


That's not a hypothesis in the scientific sense; that is scientifically-based speculation. http://en.wikipedia.org/wiki/Scientific_method

btw, does SIAI, et al., actually align to a common speculation that logically unfolds into the notion of 'saving the world'? Could you send me a link? The best I could find is this PDF - http://singinst.org/upload/artificial-intelligence-risk.pdf - pages 18-19 give an overview of the argument.

I believe the paper is intended to explore possible risks - not to state scientific hypotheses. Is it science that a super-intelligence will emerge at all? Is it science to say what the timing of the onset of super-intelligence will be? How about the transition speed of the onset? Is it proven or accepted that this notion of 'manipulation of human goals' is meaningfully defined, or even a negative thing? Is it proven that Friendly AI would prevent any set of bad scenarios from occurring? No, of course not. Each of these points is discussed as speculation in a general discussion of AI risks.

To take a stacked set of speculations, and say you are 'saving the world' - I don't know how you get there without religion.


I don't think they really claim that. I think they'd rather say "a singularity is plausible and its effect would likely be of very high magnitude, so even marginally improving the kind of singularity that might come about has expected value on the level of saving the world."


The future is absurdly interesting; I'm so grateful for my love of technology. I actually feel sorry for those who do not understand it.

Since I'm just 21, the future is naturally hugely important to me. I've already started to save money for body/brain updates and longevity treatments. I'll gladly admit that it's more important to me than saving for a house/car or my pension. I'd rather live in a crappy apartment with a crappy car for the next few decades and be able to afford to live 150-200 years (when immortality surely will be an option for the rich, which I plan to be) than spend all my money on a house/car and die a natural death at 80-100 years old.

That being said, it's important to live life to the max now instead of delaying it in the hope of "eternal" life, which some longevity extremists unfortunately do. I'd rather die young with a life truly lived than die at 100 having spent a life avoiding all potential dangers.

I'm positive and hopeful about the future. We'll experience a lot of hiccups, like huge societal and environmental changes, but overall it'll work itself out.

I don't understand the "evil AI" issue though. We'll eventually reach a point where it'll become difficult to separate humans from robots. Humans will become more technological while robots will become more biological, and we will eventually converge.

Research has shown that AI needs emotions in order to be truly useful, or else it won't have a way to decide what's important or right, leaving it decisionless like people with a damaged amygdala. I don't see a logical reason why artificial emotional "beings" would favor the future of purely technological "beings" over humans/cyborgs - they don't have the same evolutionary drive to advance their own species that natural life, including humans, has.

The real issue won't be man versus machine; it'll be superhumans versus poor/technologically conservative humans. Some people still won't have access to clean water while others will enhance themselves to degrees barely imaginable today.

I'm not saying this issue shouldn't be worked on, but I don't think it's worth fearing the future over.


The issue isn't "evil AI". Very few people in the transhumanist/AI movement are concerned about evil AIs. The issue is that unless we are very careful to engineer an AI that cares about human values, it's going to be completely indifferent to them. Indifference is the enemy; evil is not.

To illustrate, consider the African elephant. After it reaches 40 to 60 years of age, its teeth fall out and it slowly dies in agony from starvation. This is not because nature is evil or because natural selection is evil. This sort of thing happens because nature is completely indifferent to human values. An AI, just like a process such as natural selection, is simply not going to care about humans unless we get it right. And since we have only one attempt to get it right, the stakes are absurdly high.

Anyway, the guys at LessWrong have put a great deal of thought into these issues. If you're interested in this sort of thing, check out the sequences[1]. It's a few million words, but it has a terrific ratio of insight to text.

[1] http://commonsenseatheism.com/?p=12774


Sure, "Evil AI" was just a quick and sloppy way to describe an AI acting detrimental to humans, I didn't actually mean evil. Sorry, my mistake.

The thing is - in order for a true AI to act independently it has to have a purpose. Humans act the way we do because we want to have some sort of positive experience and want to avoid negative experiences. If we couldn't experience anything either positive or negative, we wouldn't have a reason or motivation to do anything; we would just be. At that point we might as well be static, dead objects. For AIs to act intelligently and independently, and not just as algorithms solving a single task, they need some sort of purpose/goal to reach. An independent AI can't be indifferent - it needs a basis for making decisions.

I don't see how the purpose we give the AI could be detrimental to humans without severe negligence. In addition, for an AI to be useful to humans it needs to understand humans. We obviously create AIs to serve us, and in order to serve us independently, without needing manual input of tasks (which would just make it an advanced computer), it needs to understand us.


Let's say that we tell the AI to eliminate malaria.

So it incinerates the biosphere. Now we don't have any malaria, but we also don't have any humans.

The AI would have done exactly what we asked it to do, but not what we wanted it to do. For any reasonable request, you need to specify a ridiculous amount of background information about what is and isn't acceptable. Any simple list you create will probably be missing something, and we'll be miserable as a result of its exclusion.
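
To make that concrete, here is a toy sketch (purely illustrative: hypothetical plans and made-up numbers, not anyone's actual proposal) of an objective that counts only malaria cases and nothing else:

  # Toy illustration of a misspecified objective (hypothetical plans, made-up numbers).
  # The objective counts only malaria cases, so the "best" plan is the one that also
  # eliminates the humans -- exactly what was asked for, not what was wanted.
  candidate_plans = {
      "distribute bed nets":   {"malaria_cases": 1000, "humans_alive": 7_000_000_000},
      "fund vaccine research": {"malaria_cases": 100,  "humans_alive": 7_000_000_000},
      "incinerate biosphere":  {"malaria_cases": 0,    "humans_alive": 0},
  }

  def naive_objective(outcome):
      return -outcome["malaria_cases"]  # nothing here about keeping humans alive

  best = max(candidate_plans, key=lambda p: naive_objective(candidate_plans[p]))
  print(best)  # -> "incinerate biosphere"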


> I didn't actually mean evil.

I think "evil" is exactly the right word. An entity with goals which are incompatible with your goals (in the sense that both of them cannot be reached) can be described as literally evil. That's exactly what "evil" means.


I agree with what you're saying, but since the nature of your objection is semantic...it depends on what one means by evil. I'd say killing me to make paperclips from my atoms is evil.


The concern is over how to eliminate a certain class of bugs from our software as it becomes increasingly complex to the point that no human understands it. If you ask your robot to get you a glass of milk, and it finds you are out, you don't want it to rob the local grocery store, killing anyone who tries to stop it. Fixing such bugs by enumerating things not to do isn't viable -- we have to get machines that share our values or stick to Oracle type machines that just don't care about the physical world.


That's the thing - I don't agree with your premise that humans and robots/AI would be as separate as you frame it. "Vanilla" humans may not understand the AI, but the people working with this stuff would surely be vastly enhanced humans, cyborgs.

I agree that we have to create AIs that share our values. However, I don't understand how or why we would not, or could not. We obviously create AIs to serve us, and in order to serve us independently, without needing manual input of tasks (which would just make it an advanced computer), an AI needs to understand us.

I simply don't understand how the default AI would be detrimental to humans, what purpose would such an AI serve and why would we create it?


The mind design space is HUGE. (http://lesswrong.com/lw/rm/the_design_space_of_mindsingenera...)

The "default AI" is a program that we build. That's all we know. Most programs that we build do not properly represent human values. If they don't properly take those into account, then we lose things we care about. The AI that "optimizes our supply chain for paper clips" will, in the limiting case, consume humans and the environment and the earth and the sun in order to produce as many paperclips as possible and distribute them as widely as possible. A "default AI" will not care about its survival or the survival of its creators.


> I've already started to save money for body/brain updates and longevity treatments. I'll gladly admit that it's more important to me than saving for a house/car or my pension. I'd rather live in a crappy apartment with a crappy car for the next few decades and be able to afford to live 150-200 years (when immortality surely will be an option for the rich, which I plan to be) than spend all my money on a house/car and die a natural death at 80-100 years old.

That is IMHO a bad decision. I was also thinking about this way of life for some time. The big issue here is that you are betting your wealth in old age (house, pension, ...) against some possibly available life-extension tool.

From my point of view, the rate of technological growth has slightly decreased over the last decade. It is already 2012, but we still do not have flying cars (as was predicted twenty years ago), we have not been able to beat HIV, we still do not understand how the human brain works, etc.

So, my bet is that this life-extension medicine will exist in a usable and working form in maybe 50 years. By that time you will already be 71, or maybe even older. Do you really want to have a long life at that age? No, I bet not. For us (I am 28) it is already too late. And I do not want to live 200 years at an age of 80 or so. I would be a crappy old man, and psychologically fragile, since there would be so many young people around me living a much longer "young" life than I did. I don't want to imagine how that feels.


> I don't see a logical reason why artificial emotional "beings" would favor the future of purely technological "beings" over humans/cyborgs - they don't have the same evolutionary drive to advance their own species that natural life, including humans, has.

Wouldn't that depend on how these "artificial beings" are designed in the first place? They might not necessarily favor a robot-dominated world, but they're not necessarily going to coexist peacefully with human beings, either. Whether they serve everyone's interests or whether they'll need to have their rogue asses kicked by Will Smith ultimately depends on what kind of intelligence and emotions are programmed into them. Even if they are expected to learn on their own, the design of the learning algorithm (as well as the stimuli they're initially exposed to) will have a significant impact on the outcome.

In particular, human philosophical conjectures will inevitably make their way into the design of artificial beings. From abstract concepts in metaphysics and epistemology to the most practical parts of value theory, human philosophy pervades every "intuitive" assumption that we make on a day-to-day basis. But I have yet to see a philosophical position that would be safe if extrapolated to its logical conclusions by hyper-intelligent beings. So there is definitely cause for concern, in addition to well-known issues of inequality.


"Evil" is a label for a particular subset of human behaviors, which are a subset of all possible minds. Thus it is said:

> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

The main point of the Friendly AI movement is that the vast majority of possible (stable) minds will not work towards a universe that humans would enjoy living in. An AI that values what we value is a stupendously narrow target - particularly since we can't precisely define our values yet.

If you're interested in this stuff, there's a lot more on LessWrong. Here's a teaser: http://lesswrong.com/lw/ld/the_hidden_complexity_of_wishes/


"Evil" is a label for a particular subset of human behaviors, which are a subset of all possible minds.

I think it's far more useful to define "evil" to be about goals which are fundamentally incompatible with your own. Not only does this cover every human example, but it matches the intuitive feel for what "evil" means even if the entity in question is not human, and it leaves systems which have no internal representation of goals, like hurricanes, as morally neutral, no matter how dangerous or damaging.


> Research has shown that AI needs emotions in order to be truly useful or else it won't have a way to decide what's important or right, leaving it decisionless like people with a damaged amygdala.

Research has shown what, exactly?


If you want to save the world from existential threats, you might also consider donating to the B612 Foundation. It's a group of ex-astronauts working to fly a spacecraft to find and deflect asteroids on a collision course with Earth. http://b612foundation.org/


I voted for asteroids as the most statistically feared existential risk in one of the LW surveys and was shocked when the results came back:

"Of possible existential risks, the most feared was a bioengineered pandemic, which got 194 votes (17.8%) - a natural pandemic got 89 (8.2%), making pandemics the overwhelming leader. Unfriendly AI followed with 180 votes (16.5%), then nuclear war with 151 (13.9%), ecological collapse with 145 votes (12.3%), economic/political collapse with 134 votes (12.3%), and asteroids and nanotech bringing up the rear with 46 votes each (4.2%)."


I think the reasoning is that an AI singularity is more likely than not within the next 2 centuries. Within that window, people can differ about whether they think nuclear war, nanotech, or bioengineered pandemics are likely. But the risk of a catastrophic asteroid strike is basically known, and small. (1 km asteroids hit the earth roughly every 500k years.) So, depending on your assumptions, asteroid deflection might be the most cost-effective existential risk to mitigate, but it shouldn't be the most statistically feared.
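
As a rough back-of-the-envelope sketch of that claim (using only the assumed once-per-500k-years figure above, not a real risk model):

  # Back-of-the-envelope: probability of a 1 km impact, assuming one
  # roughly every 500,000 years (the figure cited above).
  impact_interval_years = 500_000
  annual_probability = 1 / impact_interval_years        # ~2.0e-06 per year
  per_century = 1 - (1 - annual_probability) ** 100     # ~0.02% per century
  print(f"annual:      {annual_probability:.1e}")
  print(f"per century: {per_century:.2%}")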


Interesting and quite dense post. Singularity-related articles and worries about building "good" AI make me a bit uneasy when the performance of machines on many important tasks, e.g. object recognition, is dismal. So we have quite a way to go.

The usual rebuttal to the above is the "intelligence explosion" argument: once AIs can improve themselves, their progress will be exponential. As a tl;dr for the post, consider the Dewey summary of the argument:

  1. Hardware and software are improving, there are no signs that we will stop this, 
  2. and human biology and biases indicate that we are far below the upper limit on intelligence. 
  3. Economic arguments indicate that most AIs would act to become more intelligent. 
  4. Therefore, intelligence explosion is very likely. 
If you think about it, only (1) is a fact; the rest are assertions that are open to debate. For example, in (2), how do we define the "upper limit on intelligence" when even defining intelligence is problematic? Currently the human brain is the most complex and intelligent object we know of. For (3), I don't know what sort of "economic arguments" are meant, but as we all know, becoming "more intelligent" is not a simple hill-climbing process; in fact it's not clear how to go about it at all. Is Watson more intelligent than a person?

If you think about these points for some time, you will find that the scenario of AIs designing more intelligent AIs actually has very little plausibility, let alone being highly probable.


"Singularity related articles and worries about "good" AI building makes me a bit uneasy when the performance of machines on many important tasks, e.g. object recognition, is dismal. So we have quite a way to go."

Yudkowsky (2008), "AI as a Negative and Positive Factor in Global Risk" http://singinst.org/upload/artificial-intelligence-risk.pdf addresses this in section 7 (though it's really worth it to read the whole paper): "The first moral is that confusing the speed of AI research with the speed of a real AI once built is like confusing the speed of physics research with the speed of nuclear reactions. It mixes up the map with the territory."

===============

"in (2) how do we define the "upper limits of intelligence", when even defining intelligence is problematic."

Given the huge number of reproducible cognitive biases humans are known to exhibit (http://wiki.lesswrong.com/wiki/Bias), it seems very unlikely that humans are optimally intelligent. (One way to define intelligence, from the Omohundro paper below: 'We define “intelligent systems” to be those which choose their own actions to achieve specified goals using limited resources', so "more intelligent" means better at achieving those goals using limited resources.)

===============

"(3), I don't know what sort of "economic arguments" are meant but as we all know becoming "more intelligent" is not a simple hill climbing process, in fact it's not clear how to go about it."

Omohundro (2011), "Rationally Shaped Artificial Intelligence" makes the case for #3: http://selfawaresystems.files.wordpress.com/2011/10/rational... (summary: by "more intelligent" we mean "more capable of achieving goals," so any AI which has goals will act to become more intelligent in order to more effectively achieve those goals).


Thanks for a great reply! I think my bias (and I don't think I'm alone in this) is that when I think of intelligence I tend to think of tasks that the human brain has evolved to perform, like linguistic and visual analysis. It may be argued that these tasks form a small subspace of the "intelligence space", but, boy, is the human brain good at solving them! That doesn't mean we are optimal; however, for many AI tasks the brain operates on such a vastly different performance level that comparisons with machine AI seem out of place. Even systems like Watson, which were created with many hundreds of man-years of effort, can best humans only in narrow-domain tasks.

Now, it might be argued that some sort of paradigm shift will occur as machines get more intelligent (e.g. similar to how results from special relativity and quantum theory were impossible to predict from within Newtonian-constrained thinking), and some effects that we cannot predict now will prevail, like the massive increase in intelligence in humans that caused/was caused by the advent of language.


The singularity won't happen through electronic AI, but through biological means. Advances in biotech will outpace AI. Our sequencing technology is advancing at an exponential rate, faster than Moore's law.

One way of creating AI is through imitation: we study and understand the brain and build neural network chips. But why build the system from the ground up when we can just hack an already intelligent system, our brain? Evolution has already created a cheap and very efficient system for intelligence; it's far harder to build one from the ground up.

Once we reach immortality and biological brain upgrades, AI won't be a threat. Humans are inherently selfish; we will spend more effort on self-improvement to stay competitive. The new humans will be the super-intelligent beings, and our greed is the mechanism for the singularity. But that greed could also be hacked.

If a system capable of intelligence is built, why would it make self-improvements, or do anything at all? If a human had absolutely no drive, no sex hormones, no hunger, would it think about anything? Would it be intelligent?

The intelligent systems we build will be for humans' selfish gains and will be used to improve humans. That means humans will improve alongside the system, or humans will be part of the system. What's really scary is if the system is built with all the drive and motivation built in...


This future sounds like the present to me.

The underlying worry in this article is that intelligence begets more intelligence, and that the new forms of intelligence might not have the same morals/goals as the author. Beneath this worry is the idea that, due to its exponential growth, intelligence and thus power quickly concentrate. Any thing or life form that is not directly supporting whatever goals this new intelligence has is at great existential risk. We hope that these goals include benevolence toward other, less powerful forms of life.

This all sounds reasonable; however, they are held back by the idea that AI must look like a really smart human remotely controlling a robot/computer interface. The reality is that there is a lot of intelligence that looks nothing like us. The 'intelligence' of a market would be one basic example.

If we take this more abstract/literal view of artificially/human-created intelligence, then most of what the article worries about is already upon us. Some people have so much wealth that we might as well consider them augmented. In many cases their interface to their augmented intelligence is another human, or many humans, but that does not take away from their power.

  they have hoards of wealth at their disposal
  wealth is power and can currently be exchanged for both human and computer intelligence
  that purchased intelligence is then used to further increase the wealth/power/intelligence of the agent (billionaire/corporation)
  NOTE: The richest 0.000000044% of the world's population have the same amount of wealth as the bottom 8.8%. http://www.stwr.org/poverty-inequality/key-facts.html
The result is that the rich/powerful/intelligent get more so, and everyone else must hope that they can somehow serve the few or that the few will be benevolent.

If you'd rather avoid the rich/99% memes, try to describe our current market to someone from 1000 AD, and it sure sounds like an artificial intelligence:

It grows. No single person controls it. It controls how resources and human labor are allocated in such a way as to optimize its own growth. There are cases where people tried to take over the job of the market, and they almost always failed (or always failed, depending on who you ask). Sure sounds like artificially created intelligence to me.

Either way - augmented billionaire, ultra-powerful cultural meme, or human-like intelligence in a machine - he seems to imply that our best bet is to teach and instill morals and values in our 'children', in the hope that they will be benevolent.

######

I think that creating decentralization and diversity is more important than moral/value education, especially if you take the view that we are already in the midst of it all.

One strong measure of the health of an ecological system is its diversity. This is partially because a monoculture is risky - it has a single point of failure. An ecosystem which is diverse is also an ecosystem which has the resources and stability to explore and learn. This is required if an ecosystem is going to survive change, as ours is now.


"This all sounds reasonable, however they are held back by the idea that AI must look like a really smart human that is remotely controlling a robot/computer interface."

I think you misunderstood their position. The only assumption is that the AI must have some goal, and the point is that, if there's no term for humans in that goal, then we will likely be destroyed, not due to malice from any superintelligence, but indifference -- we are made out of resources that the AI could use to achieve its goals.

I think you're vastly underestimating how much better than us at achieving goals a superintelligence would be. A superintelligence would be a lot more powerful than any augmented billionaire, in the same way that a human is more powerful than the head of any wolf pack. The worry isn't about slightly augmented humans or slightly superhuman intelligence.

See these two papers for good arguments as to why human-level AI will likely result in an "intelligence explosion":

Yudkowsky (2008), "AI as a Negative and Positive Factor in Global Risk": http://singinst.org/upload/artificial-intelligence-risk.pdf

Chalmers (2010), "The Singularity: A Philosophical Analysis": http://consc.net/papers/singularityjcs.pdf


Thanks for the comment and links. I had only ever read 'science fiction' like Iain Banks and Charles Stross's Accelerando, but never any of the 'non-fiction'. The linked articles were interesting.

I understand the risk of an AI that wants the mass of the entire solar system for itself and quickly becomes a matrioshka brain. However, I question the idea that we aren't already there. I would argue that an augmented billionaire, or better yet a market economy, compares to someone in the poorest 5% of the world just as a modern human compares to a wolf pack. A billionaire can decide not just to go to space, but to build an industry out of it. The poor can't find enough food. That is a huge difference. Is there any research into quantifying these types of differences?

Sure a human level computer AI gets 'free' speed doubling every 18 months, but so does the intelligence that surrounds an augmented human. Just look at how much more intelligence we have available to us now as compared to 20 years ago.

I agree that there will be/is an intelligence explosion. My point is simply that we are already in the midst of it, or that there is no single instant to point at and say 'that is when the singularity started.'

I think that this is an important point to make because it changes the framing of the question from "How can we survive abstract superintelligence explosion of the unknown future" to "How can we survive the existing intelligence explosion." From "How can we teach the superintelligence we are bound to create to be nice" to "How can we convince the existing superintelligence to be nice" and/or "what changes in our social/governmental/memetic structure should we change to survive the explosion we are experiencing?"

Also, if we view ourselves as already being in the intelligence explosion, we can look at how existing superintelligences treat other, less intelligent beings to see where our culture is likely to head as the explosion continues. If we don't like how superintelligences treat lesser intelligences now, then maybe we should figure out why, and how to change it.

The framing that the article provides sounds about as silly as a pack of wolves discussing tactics they will use to make sure their new human creations focus all of their energy on catching rabbits; so I tried to come at it from an angle that has a hint of pragmatism and practicality.


I guess the issue is how likely you think a hard takeoff/FOOM scenario is: http://wiki.lesswrong.com/wiki/Intelligence_explosion

"Sure a human level computer AI gets 'free' speed doubling every 18 months, but so does the intelligence that surrounds an augmented human. Just look at how much more intelligence we have available to us now as compared to 20 years ago."

A motivated human-level AI could get free speed doubling a lot more quickly than every 18 months -- it could acquire more resources that already exist, it could increase the speed of progress in hardware improvements, and, perhaps most importantly, it might be able to improve itself to achieve a qualitatively superhuman intelligence rather than just quantitatively. Sections 3: "Underestimating the power of intelligence" and 7: "Rates of intelligence increase" of the Yudkowsky paper I linked before, "AI as a Negative and Positive Factor in Global Risk", address this well. http://singinst.org/upload/artificial-intelligence-risk.pdf

I think it's a mistake to compare an AI superintelligence to anything on earth currently, like market forces. An AI superintelligence could probably improve itself a lot faster than any market could.


Thank you.

I think I have been confusing the various types of singularity here. http://yudkowsky.net/singularity/schools

I have been mixing up the intelligence explosion with accelerating technological progress.

I'll have to give all of this a bit more thought.

Thanks again


The first thing I thought of when I saw this article was the Marathon trilogy (and Durandal's rampancy) by Bungie (pre-Microsoft acquisition). http://en.wikipedia.org/wiki/Marathon_Trilogy#Rampancy

(sorry I just nostalgia'd myself :( )


Man is dead.


Rapture for nerds. Don't waste your money, or your time.



Rationality suggests incorporating new ideas into existing ones. Where are these people changing their projections based on the slowing increase in computing power (i.e. the second derivative of computing power)?


The intelligence explosion discussed in the original LessWrong post is a separate issue from exponential growth, only loosely related, although folks like Ray Kurzweil constantly conflate the two.

Please don't dismiss all ideas about AI, intelligence explosions, and superintelligence (e.g. http://singinst.org/upload/artificial-intelligence-risk.pdf ) just because some related concepts may be misguided.

(Not saying that you are being dismissive, just pointing out a distinction that you or others may not be aware of.)


There are far more issues on that side of the singularity idea than simple questions of computational power. There are fundamental limitations on information, as a direct result of quantum mechanics, which limit how accurately you can, say, predict the weather without directly controlling it.

So, a super-intelligent Jupiter brain simply can't predict* the temperature of every cubic centimeter to 5 decimal places 10 years from now, regardless of what computational power it's given or what measurements it takes. And, since people's behavior is influenced by the weather, it again can't model human behavior to the nth degree. That's just one example, but you really can't predict the stock market accurately over long time frames for the same reasons, etc.

*again ignoring more direct influences.
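
A toy illustration of the same point (using the chaotic logistic map as a stand-in, not a weather model): two starting states that agree to ten decimal places end up completely different after a few dozen steps, so once measurement precision is capped, extra computing power doesn't extend the forecast much.

  # Sensitivity to initial conditions in the logistic map (a stand-in, not a weather model):
  # a 1e-10 difference in the starting state grows to an order-1 difference
  # within ~50 iterations, no matter how precisely the iterations are computed.
  def iterate(x, steps, r=4.0):
      for _ in range(steps):
          x = r * x * (1 - x)
      return x

  a, b = 0.3, 0.3 + 1e-10
  for steps in (10, 30, 50):
      print(steps, abs(iterate(a, steps) - iterate(b, steps)))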

PS: Now, QM might not be correct, but assuming a worldview without evidence is the realm of religion, not reason.


I don't think anybody from the SIAI is disagreeing with you here. I don't think the kind of prediction you're talking about is part of the intelligence explosion hypothesis. A superintelligence doesn't need literal omniscience, or even anything close to it, in order to be much, much more effective than humans at achieving goals.



