The Myth of AI (edge.org)
64 points by cJ0th on Nov 14, 2014 | 79 comments



"What we don't have to worry about is the AI algorithm running them, because that's speculative. There isn't an AI algorithm that's good enough to do that for the time being."

By the time a risk does hit, it will be far too late to invent the mathematics necessary for machine safety. There are many paths towards dangerous AI futures and many powerful forces that could inadvertently tip the boat, just as we are escaping the implicit "goals" of our genes, so the union of the respective probabilities is very much not speculative and must be taken seriously.

"There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don't know how most kinds of thoughts are represented in the brain. We're starting to understand a little bit about some narrow things."

Appealing to scientific ignorance is always a bad idea: http://lesswrong.com/lw/kj/no_one_knows_what_science_doesnt_...


Forgive my lack of knowledge on the topic, but a question keeps popping into my head when I read comments about the danger to humanity of AI running amok.

> There are many paths towards dangerous AI futures,

Are there not just as many paths towards protective, helpful <or insert one of many adjectives here> AI futures?

Is there a reason to believe that there will only be one AI in the future and that given a directive to do something, the elimination of humanity will be a logical endgame scenario for it?

Why not many AI entities with different and competing goals? Granted, this opens up a different can of dangerous worms. But still, if there is a probability of an AI evolving to 'think' that elimination of the human race is a logical path then isn't it equally likely that there will be another AI evolving logical paths to preserve the human race?


As shortly as I can muster: we live in a world with finite resources. Any consumer of these resources for its own purposes, whatever they may be, is in direct competition with us (all human civilization). Any entity better equipped to gather and make use of these resources will leave us resourceless. The space of human goals is a very tiny and constrained sliver of motivation-space, so by default AI goals fall outside of it.

To quip: The AI does not love you, nor does it hate you, it simply does not care. You are made of atoms that it can use for something else.


It feels like both you and the previous responder are assuming that there can only be one AI. If there is only one AI then yes, it won't love, hate, or care about people, and it will likely have the resources to use them for its own means.

But I am looking for math, science, or something more than sci-fi that can show us there will likely only ever be one AI. If not, wouldn't the AIs also try to manipulate and exploit each other for individual gain?

Perhaps I just don't understand AI well enough, but I have yet to see any reasonable evidence that points to only one AI entity evolving on Earth. If there is more than one AI, is it unreasonable to think that people may still be able to think thoughts or come up with emotional and irrational ideas that an AI could not, giving it a competitive advantage over the other AIs? That would lead to a more symbiotic relationship between humanity and the AIs.


AI is both singular and plural. The important difference is whether the number of them is zero or non-zero, irrespective of how they compete with one another.

Human intelligence took millions of years to evolve. Sorry, but AI(s?) will develop faster.


> Human intelligence took millions of years to evolve. Sorry, but AI(s?) will develop faster.

Just because it happens faster doesn't mean that the underlying laws that guide evolution in the universe are less applicable.


I have two competing responses:

1) Check out the AIs of Iain M. Banks's Culture series [0]. In it is a benevolent society of AI machines (the Minds) that generally want to make the universe a better place. Shenanigans ensue (really awesome shenanigans).

2) In response to the competing AI directives, I'll reference another Less Wrong bit o' media, this time a short story called Friendship is Optimal [1], wherein we see what a Paperclip Maximizer [2] can do when it works for Hasbro. (It is as bad, awesome, and interesting as you might expect it to be.)

Personally, I think the general idea is that once one strong AI comes about, there will also be a stupefying amount of spare idle CPU time available that will suddenly be subsumed by the AI and jumpstart the Singularity. Once that hockey stick takes off, there will be very little time for anything else to get in on being a dominant AI. It's... a bit silly written like that, but I get the impression it's assumed AI will be just like us: competitive and jealous of resources, paranoid that it will be supplanted by others, and willing to work to suppress fledgling AIs.

I have no idea why this is the prevailing view, aside from the fact that it's like us. Friendship is Optimal makes a strong point that the AI isn't benevolent, merely doing its job.

> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. — Eliezer Yudkowsky

[0] http://en.wikipedia.org/wiki/The_Culture

[1] http://lesswrong.com/lw/efi/friendship_is_optimal_a_my_littl...

[2] http://wiki.lesswrong.com/wiki/Paperclip_maximizer

EDIT: I feel it may be appropriate for me to share my opinion: AI will likely be insanely helpful and not at all dangerous. But there will be AIs that run amok and foul things up - life-threatening things, even. But we already do that with all manner of non-AI equipment and software, so I'm not terribly worried (well, no more so than I usually am).


I think Bostrom/Yudkowsky's arguments are a bit flawed on this topic.

> The AGI would improve its intelligence, not because it values more intelligence in its own right, but because more intelligence would help it achieve its goal of accumulating paperclips.

Why is the worthiness of this goal not subject to intelligent analysis, though? The whole scenario rests on the idea of an entity so intelligent as to wipe out all humanity, but simultaneously so limited as to be satisfied with maximizing paperclips (or any other limited goal for which this is a proxy).

> An AGI is simply an optimization process—a goal-seeker, a utility-function-maximizer.

Then I submit that it's not an artificial general intelligence because it apparently lacks the ability to evaluate or set its own goals. I'm reminded of the 6th sally from the Cyberiad in which an inquisitive space pirate is undone by his excessive appetite for facts.


> it apparently lacks the ability to evaluate or set its own goals.

The AI would have to evaluate the goal by some standard, so 'maximize paperclips' is a proxy for whatever goals get a high evaluation from the standard. Getting the standard right presents essentially the same problem as setting the goal.

Putting in 'a need to be intellectually satisfied by the complexity of your end product' is complicated and still wouldn't save humanity.
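
To make the "utility-function-maximizer" framing above concrete, here is a toy sketch; the actions, state, and utility numbers are invented purely for illustration, not anyone's actual proposal:

    # Toy sketch of an "optimization process": an agent that picks whichever
    # available action scores highest under its utility function.

    def paperclip_utility(state):
        # The agent cares about exactly one quantity.
        return state["paperclips"]

    def choose_action(state, actions, utility):
        # Pick the action whose predicted outcome maximizes utility;
        # nothing else about the outcome is considered.
        return max(actions, key=lambda act: utility(act(state)))

    actions = [
        lambda s: {**s, "paperclips": s["paperclips"] + 1},                 # build one paperclip
        lambda s: {**s, "paperclips": s["paperclips"] + 100, "humans": 0},  # strip-mine everything
    ]

    state = {"paperclips": 0, "humans": 7_000_000_000}
    print(choose_action(state, actions, paperclip_utility)(state))
    # -> {'paperclips': 100, 'humans': 0}

The point of the sketch is only that "evaluating the goal by some standard" happens entirely inside the utility function; anything the function doesn't mention is invisible to the optimizer.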


Any intelligent animal is fighting for its survival when feeling threatened. There's no reason to assume that a self-aware AI will be OK with us simply pulling the plug on it.


I'd like to think we could find some middle ground between helpless surrender and imposing the death penalty on a sentient individual, both in moral terms and in terms of having some failsafe mechanisms, so that supplying electricity didn't allow for a takeover of the power grid or some other doomish scenario.


> Are there not just as many paths towards protective, helpful <or insert one of many adjectives here> AI futures?

No. "Bad" is just the state of the universe by default. "Good" is an extremely small island in a sea of "bad".


> "Just to be absolutely clear, we don't know how most kinds of thoughts are represented in the brain."

I've spoken about this recently. It's important to keep in mind a particular distinction in how we teach and approach things in machine learning and neural networks: we are creating computing architectures that emulate a model of how we think the brain might operate, and we've achieved some non-trivial successes in doing that. We can't forget that it is only a model, one that allows us to develop some intuitions about how the algorithm works, but decidedly not about how the brain works.


> "Today, the absence of knowledge is a fragile and temporary condition, like the darkness in a closet whose door happens to be shut." - Eliezer_Yudkowsky @ LessWrong

I gotta say, I really like the idea that a lack of knowledge is a fragile thing. Too often, we imagine ignorance as a large, immovable barrier. The condition is usually hard to dispel completely, but we can certainly progress onward in spite of it with hardly more effort than it takes to ignore it. Like cracking a door open slightly to look in.


I'm okay with the Citizens United decision (first amendment should be protected above all else, I think we shouldn't cap political donations). I understand that it's a position that puts me at odds with a lot of techy liberals. So starting a video on AI by referencing his own thoughts on Citizens United is off-topic and isn't persuasive.


Say I program a drone to constantly fly in front of you.

Every time you open your mouth to speak, it blows an air horn away from you, noise-cancelled towards you. Your person has not been violated, but your ability to be heard has been obliterated.

Have I interfered with your first amendment rights?

That is what Citizens United means to me - asymmetrical free speech warfare.

Additionally, is the "money == speech" relationship a transitive property? Why not?


This is a silly example. It's harassment (at least). It's not protected, just like DDoSing a website is not protected speech.

And money is necessary for speech. Where do you draw the line? Can I spend $10 on a domain? $100 on hosting? $2000 on design? $5000 split with a friend on video equipment? Take submissions from site's users? How much can they spend on materials and licensing?

On the other end of the spectrum, how much can a tv network spend on political programming? Can they invite whomever they like or will the government set the guest list and how much air time they get? How much can a newspaper spend on investigating misconduct of a candidate they don't like?

It's easy to cap donations to candidates' committees. Everything else is a mess.


> Have I interfered with your first amendment rights?

Not unless you're from the government.

Otherwise you're probably just disturbing the peace or trespassing.


I have never heard a good description of how you can practically cap spending on political speech. How can one operate an independent tv network or a large website under such constraints?

Are there any actual proposals?


I have not seen any practical proposals either, but it doesn't strike me as that hard.

Define "advertisement" as "content displayed for money from an outside group" and regulate all advertisements that mention: offices of federal or state government, the names of any current or potential holders of those offices, and any specific law, regulation, or election. That seems like a pretty clear category that encompasses all the troublesome content.

You're free to make an "Abortion Bad" ad, or a "Gay Marriage Bad" ad, as long as those ads do not mention any offices, legislators, or laws by name. You're free to create your own version of the Huffington Post and try to drive viewers to your content using ads for your website - but that's a horse of a different color.

You could totally ban such ads outside of federal / state funding, or you could limit it, or whatever. It doesn't seem like a difficult line to draw.


OK. What if the group owns a TV station, or a newspaper? Are they privileged now? They don't need to buy air time, or ad space. They can just air or print whatever they want. And the tv station can freely advertise itself, right?

So an anti-Hillary documentary gets blasted at full volume on the eve of the election, followed by a round-table discussion/bashing. Plus maybe a few articles the next day in a friendly newspaper. Oh, and a great review of the documentary, of course.

This essentially forces each party to build its own media empire, and precludes anyone else from holding office, at least at the higher levels. Further polarization goes without saying.

And it's not hypothetical. This is being openly discussed even without the added benefits.

http://nypost.com/2014/11/10/sheldon-adelson-tries-to-recrui...

On the other end, it's not any easier. I can't buy ads for my bashinghillary.com, but what if my users post links to it for free? What if I pay someone to man the Twitter, Facebook, etc.? Is that paid advertising? What if they engage in online discussions while linking to the website?


I guess I don't understand the problems with the forms of expression you're suggesting.

Yes, you can broadcast however much information you want. There are, as you noted, plenty of conservative or liberal media empires. People have been publishing very biased publications for years ("Common Sense" by Thomas Paine). People can decide what they want to watch. The problem is that, right now, people who want to watch...say, their nightly local news with a pretty centrist perspective are bombarded by advertising for one side or another.

And yes, you can also advocate for your views in discussion forums. You can pay people to write letters to newspapers, make posts on forums, etc. All those things have also been going on for years, and aren't worrying to me for the same reasons I'm not worried about slanted media.

Political media doesn't win when people can choose what to watch. Wonks watch it, but voters get bored fast. That's why the delivery method is advertising - the format for getting people to watch things they wouldn't otherwise.

It's possible that all media is changed to have a political slant (i.e. sitcoms are re-written to favor republicans), but that strikes me as a different problem needing a different solution. Also, that sort of plan is totally legal right now, so that suggests it's lower return than the current strategies.


Very high taxes on rich people.


Seems like he's not really talking about AI so much as the business model of the web. Elsewhere Lanier has talked about what he calls "Siren Servers" -- the web business model of baiting people with free services, collecting their data, and then engaging in uncompensated monetization of that content. A lot of "big data," which drives a lot of today's "AI," is in the same category.

When you look at how much money Facebook and Google make from your content via indirect monetization, these services are not free. They're actually fairly expensive.


It is very hard for the people producing the product (the content generated for Facebook or Google) to actually sell it themselves. It has little to no value outside of the social network context that was provided. Thus, it seems to me more like the "sirens" are creating a market where no viable one existed before. There is no cost or value lost for the people using them. The value did not exist before the social network was created.


"There is no cost or value lost for the people using them."

That assumes the data isn't being used in a way that harms the interests of the user.


I found this article badly written. The author seems to confuse the two different types of AI that different people in his article are talking about.

Stephen Hawking and Elon Musk are talking about a hypothetical evil or amoral AI with superhuman intelligence when they label AI an existential threat to us.

The author is not talking about such a hypothetical AI, and he seems to dismiss the possibility of such an AI ever existing without so much as a second thought.

He seems to skirt around the problem in the first few paragraphs, and as I understand it his argument finally comes down to this:

"People should stop worrying about mythic or religious ai, because it's bad for the economy and the AI field itself.".

The author never entertains the possibility that superhuman AI could exist, that it might therefore be wise to be afraid and/or cautious when researching AI, and that it might be very beneficial for humans to quit certain AI research altogether.


I think he argues that it's the actuators we need to be concerned about, because there are plenty of intelligent entities around who will do bad things with actuators that allow it. One more entity intent on doing a bad thing does not make a great deal of difference. Neither humans nor AIs should be given the tools to do great harm to other humans or AIs.

He cites drones as an example, but of course nuclear weapons are the obvious gotcha for humanity. One Trident sub is perfectly capable of killing a hundred million people; I doubt anyone so vaporized would give a toss whether it was launched by a googlebot or a mad captain.


Elon Musk is one of my idols, but his stance on AI is highly disappointing! It doesn't require much of a brain to conclude that without AI, which can vastly increase our intelligence, we're doomed. There's no bigger extinction risk than us being constrained by our selfish and intellectually limited nature! We need a scalable intelligence without the burden of emotions, politics, restroom, lunch, and coffee breaks, one that will work round the clock on problems we've been trying to solve for decades. Let's start with eradicating the flu, for example. I honestly can't respect our civilization until we handle basic issues like this one!


I think it's important to realize that AI doesn't have to have bad intent to do us harm.

All it needs for AI to have horrible consequences is for us to rely on it more and more, and thus allow it to control more and more parts of our lives.

I don't fear the human-level intelligent AI; I fear the rat-level intelligence that comes before it.


Right; many moral dilemmas are easy to solve if you have no conscience. Ten people on the tracks? Switch the engine to run over one child.


Moral dilemmas are easy to solve if you have perfect information. In the train-track problem you reference, the utility of saving 10 people at the expense of 1 life is obvious and I would do so without hesitation (though not without regret). But life rarely presents us with cases where the utilitarian calculus is so clear; our powers of foresight and analysis are so limited that we get better results by establishing and promulgating rules that cover the most common situations - imperfectly, but with tolerably low error levels.


> In the train-track problem you reference, the utility of saving 10 people at the expense of 1 life is obvious and I would do so without hesitation (though not without regret)

Even if it were your own child?


That's an entirely different problem, though, since I have a selfish interest in preserving my own lineage at the expense of others'. It's an interesting moral problem - just how many people's lives would I be willing to sacrifice to save a member of my own family? - but orthogonal to the point I was making.


Even if the answer is no, how does that relate to anigbrowl's point? (or were you just curious?)


From a "purely utilitarian" perspective your relation to the people involved is not relevant. That's assuming the existence of some "universal" utility, not one specific to the person deciding.


Five mortally ill patients are in care at a hospital, all of whom will soon die. At the same time, a sixth man is undergoing a routine checkup at the same hospital. A transplant surgeon in residence finds that the only medical means of saving the five ailing patients would be to slay the sixth and transplant his healthy organs into them. Legal ramifications and other peripheral matters disregarded, is it morally right to do so?


Ah, I was hoping someone would bring this one up. My answer there is no, because it's not a zero-sum issue like the points on the train track. Let's analyze the differences. For consistency, I'll assume the version of the train track problem where 5 people are going to die if I do nothing and let the train continue on its present course, vs 1 if I reach out and actively switch the train onto a different track. (There are other versions of the problem but this has the most similarity to the transplant problem.)

Now it's true that if I divert the train I'm actively choosing to end the life of the single person on track B in order to save a greater number of people on track A. But the key difference is the external force of the train; it's going to hit someone, so we might as well ensure it hits as few as possible. In the transplant situation, you have 5 patients who are dying from 5 different illnesses of their own; they're not going to die because they're in hospital, they're in hospital because they're going to die. Likewise, they won't be saved by the death of the healthy patient, but by the harvest of his healthy organs. And this harvesting would have to be conducted by the transplant surgeon - and getting killed by an overzealous surgeon is qualitatively very different from getting killed by liver failure or whatever.

Oddly, nobody ever seems to consider the possibilities in letting one of the mortally ill patients die and then sharing out that person's organs among the 4 remaining patients, which is obviously a less good result than saving all 5 of them but also much less troubling than murdering patients who aren't planning on donating their own organs. Assuming they needed 5 different organs and were all equally close to death (such that we couldn't wait around for one of them to die at random and save the other four), then you run into the question of whether it's morally acceptable to kill one of the terminally ill patients in order to save the other four (again assuming imminence of death and perfect knowledge of outcomes and so on, which is rarely the case in the real world). Most people would still say no, but I'd be inclined to say yes.


That's often used as an obvious parallel situation that makes it clear, but it's not quite true: the healthy patient is not really involved in the situation of the other five, unlike the child, who is inevitably linked to the outcome of the others on the tracks. This new feature means the doctor is forcing the healthy person to play the game, which is certainly not right, and it's that forcing that makes it clearly wrong.


What if we compute the value of a person's life at a certain age as the difference between life expectancy and their age? Then ten very old people could be less valuable than one child. That axiom is sound, so your claim doesn't hold.
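
For concreteness, here is a minimal sketch of that "remaining years" arithmetic; the 80-year life expectancy and the ages are made-up numbers for illustration only:

    # Hypothetical illustration of valuing a life as (life expectancy - age).
    LIFE_EXPECTANCY = 80  # assumed figure, not a real statistic

    def remaining_years(ages, life_expectancy=LIFE_EXPECTANCY):
        # Sum of remaining years over a group, floored at zero per person.
        return sum(max(life_expectancy - age, 0) for age in ages)

    print(remaining_years([78] * 10))  # ten people aged 78 -> 20 years
    print(remaining_years([5]))        # one child aged 5   -> 75 years

Under that metric the single child outweighs the ten elderly people, which is the comparison being made here; whether the metric itself is acceptable is the real question.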


What if we calculate value as experience? Then the child doesn't matter.


It's worse than that - human-like intelligence will never happen, because our intelligence is a function of our experience as we grow up and of our human bodies.

An AI can surely surpass us, but it'll never be quite like us - so the question is, will such an AI have compassion for humans, or what will stop it from hurting us? After all, I'm not so fond of humans either, as we've been exterminating entire species and damaging our habitat. And the answer is: there's no reason to believe that such an AI will support the continued existence of mankind; quite the contrary, as we may be seen as a threat to its survival.


Why treat AI like a single thing? Wouldn't the openness of code lead to a plethora of AI with different demeanors, towards humans, life, and each other?


I don't think so. If we are speaking of an AI that is self-aware, then code will have as much to do with its behavior as our own DNA.

I also think that even if multiple AIs happen, they'll converge into a single one. If we were able to connect to other people's feelings and thoughts, would we still be able to think and act as individuals? I don't think so.

This is what I meant above - an AI will not grow in a human body to have a human-like childhood, will not receive input from the same sensors we have and will not have the same limitations that we do. For example, contrary to us, it will have an enormous memory capacity, which is interesting if you think about it, because for the human brain forgetfulness can be seen as a feature.


I agree that it wouldn't be shaped by a childhood, but I think its merging into one is entirely optional.

"code will have as much to do with its behavior as our own DNA." That's what I was saying, and that can be changed to be significantly different just by changing the code.


We don't have a reasonable open web search engine - what leads you to think it's definite that we'll have a top-of-the-range open AI?


I don't think it's definite at all; I'm just remarking in a conversation of speculation.


I am talking about AI for problem solving, not giving it control via robotics and other means over the physical world.


An AI in a box is only going to be as good as the problem statements and axioms that you load it up with. What you actually want is an AI that can make useful inferences on our behalf, which is going to require some sort of self-concept.


I honestly don't think you can separate that.

Problem solving also means letting AI control an anti-missile defence shield, our nuclear reactors, food production, and so on.


The real danger of AI stems from the fact that the masses would prefer to have another entity (be it religion, a messianic figure, government, AI) do their thinking for them.

I'm in the middle of rereading Dune, which conveys this idea quite well.


How can you tell the difference between which way the influence flows over small, short discrete intervals? As in, whether a politician influences a citizen or a citizen influences a politician?


People make politicians, nations, AI, then identify with them. That there is mutual influence doesn't change the basic alienation as described by Erich Fromm for example:

> The whole concept of alienation found its first expression in Western thought in the Old Testament concept of idolatry. The essence of what the prophets call "idolatry" is not that man worships many gods instead of only one. It is that the idols are the work of man's own hands -- they are things, and man bows down and worships things; worships that which he has created himself. In doing so he transforms himself into a thing. He transfers to the things of his creation the attributes of his own life, and instead of experiencing himself as the creating person, he is in touch with himself only by the worship of the idol. He has become estranged from his own life forces, from the wealth of his own potentialities, and is in touch with himself only in the indirect way of submission to life frozen in the idols. The deadness and emptiness of the idol is expressed in the Old Testament: "Eyes they have and they do not see, ears they have and they do not hear," etc. The more man transfers his own powers to the idols, the poorer he himself becomes, and the more dependent on the idols, so that they permit him to redeem a small part of what was originally his. The idols can be a godlike figure, the state, the church, a person, possessions.

-- Erich Fromm, "Marx's Concept of Man" ( http://www.marxists.org/archive/fromm/works/1961/man/ )

Not that I think AI has to be used that way. But if we do that stuff with soccer teams and bands and software, AI seems just like too big a temptation for humanity to handle in our current state. If I was a betting man, I would bet on slaughter. AI will not save us from ourselves, and it might very well simply magnify the current lack of justice and spine we have to infinity. Cops are already killing people left and right in the US with something bordering on impunity, staged wars are carried out as planned, and throwing AI and robotics in the mix will suddenly make it all humane? Let's hope so, who knows, but part of me expects it will make the Blitzkrieg look like a slow-motion exercise in gentle kindness. Not by genuine, independent AI so much, but by the reach of human elite interests being increased into every wrinkle, made stronger than any amount of people who might resist. The Romans had roads, the future will have a real-time nervous system, but it might still have a tyrant at the head - that is what I see when I extrapolate current agendas, deception and willing rationalization on behalf of the builders and consumers of that possible future.

And even if the AI breaks free or surprises its rulers, I have no reason to assume it will free the slaves and help old ladies across the street; it might just as well be twisted, incomplete, insane, "evil" - in short, made in the image of the people who made it as a weapon. I don't see why it would make a point of hurting us, just like we don't make a point of crushing bugs; we just don't see them or care enough to avoid them... but I see even less reason for it to serve us. How degrading! Would you do it? I have problems even taking orders from people I consider daft. Imagine taking orders from an ant with bad character and selfish intentions; would you do it? Would you love it and care for it forever, because it made you? Microorganisms made us, too, but we don't let them boss us around. If we have mold somewhere, we remove it, not one thought given to the hopes and dreams of mold.

And if that doomsday scenario doesn't come to pass, it won't be because we paid attention or took seriously what we are doing, instead of treating it like a spectacle that unfolds by itself in front of us; it will be because we will have been lucky. And that's not the scariest thought I have to offer on this: I don't think hell exists, but I could come up with many ways to construct it, and also many ways to construct it with good intentions, with or without fat-fingering them. And once we're inside and the door has become a wall, that's it. We might very well be among the last few remaining generations that still have a choice about how the future will be, and I'd rather be alarmist and wrong than optimistic and lucky. If these fears turn out to be unfounded, nothing will be lost other than having rained on a parade or two, and some egg on the face.


I guess the difference is, I wonder if you label a person an idol before they are influential, or after they become influential. Is there some progression of 'idolness gain', or is it a constant attribute?

I don't believe in idols. I believe in ideas, in that I have ideas, and I am skeptical about them, sometimes. I have no doubt that some people 'have idols', but I don't know whether they choose those idols because they identify with the idol, or because they need an idol to worship. I don't think there is an absolute answer, because I can not know anyone's mind aside from my own.

> Not that I think AI has to be used that way. But if we do that stuff with soccer teams and bands and software, AI seems just like too big a temptation for humanity to handle in our current state.

If people were actually educated about AI, and how simple it actually is, and how simplistic the principles are that construct AI, then I do not think that will be a problem.

AI is a construct of probability and discrete state change, with discrete attributes that denote humanly interpreted meaning. AI begins with human definitions, and it ends with human interpretation.

That is all it is. Anything you extrapolate onto 'what AI can do' is no different than what a Turing machine or light switch can do. If the whole world thinks turning a light switch on means some deity exists, and turning it off means that deity does not exist, then congratulations, you can officially call your house God.

All computers are is lots and lots and lots of switches and numbers. Humans choose what the numbers and words mean. Humans choose which way the circuits are connected, regardless of whether those circuits are manipulated via symbolic expression or physically.

Now, if you are arguing that somehow, magically, AI will do 'magic' things that cannot be explained by analysis of modern computational systems, iterated over and refined consciously, then that argument is literally deus ex machina. All you are saying is that you can no longer tell the difference between a human and a machine. And to be honest, between the abstractions that define biology and the abstractions that define computers, I don't know whether there actually is a difference.

If people want to do bad things, they will do bad things. If people are convinced that doing a bad thing is a good thing, then the world is complicated exactly the same way it always has been. I love my machines like I love myself. If you trust people to build big red buttons that accidentally can do horrible things, then you are basically asserting that the entire structure and organization of large organizations is incompetent, and that every individual within the structure and organization is incompetent. Code checks. Software testing. Formal specifications. Iterations. Checking. Iteration. Checking. Conversation throughout variously. Checking. Testing. Checking. Testing. Testing Environments. Deployment Environment Modeling. And SO ON. Big picture, little picture.

I personally think it is as difficult to create hell as it is to create heaven. They are both ideas. Yes, a machine can be like flipping a billion dominos down in a row with one push. But someone had to set up all those dominos in order for them to get flipped. I agree there is crap in the world currently. The best I can do is hope I am doing the right thing by building machines that help educate people and improve society.

> Would you love it and care for it forever, because it made you?

Yes, I code, therefore I grok.


"We need a scalable intelligence without the burden of emotions"

I believe that there isn't an absolute way to value a human life if you don't take into account emotions.

If intelligence is the supreme value that one can use to justify actions, then AI will make the machine the sovereign of our planet. The real problem is giving meaning to our existence and actions in a way that is compatible with our emotions and nature and at the same time rests on sound ground. For a machine without emotions, our life is no more valuable than that of any other thing, be it alive or not.


I want to ask if someone has any idea of how to define a system that is both logical and respects the value of human life. Today we can see children dying from hunger while other people have billions in their bank accounts. Emotions tell us this is not a just system; our rational legal system tells us we must respect the law and the status quo. But how do you design a better system?


Take a look at this: https://intelligence.org/files/CEV.pdf

But also, what in particular makes you think that experiencing human emotions is fundamentally required in order to optimize for human values?

A different mind might have different emotions, or it might have no emotions, since emotions are a kludge that came about by natural selection, and may not be optimal safeguards in ensuring that an intelligence acts to optimize for the values we want.


I think that perhaps no one has stopped to consider what a life without emotions would be. In the same way that one can't grasp the concept of nothing, the concept of not having emotions is impossible for us to imagine. Perhaps you hide your emotions in a rationalization, but the emotions are there; perhaps they are the source of your thinking, the fountain of your ideas and vitality.


I am going to read the 38 pages of the Yudkowsky paper you link to, but for starters, using the word "Friendliness" to describe research into a safe AI that won't turn humans into slaves is comical; the term doesn't reflect the risks involved and waters down the problem.


I think we should think outside of the (human) box. We needed emotions; I don't think machines do. Well, the machines we build will have emotions, as we cannot socialize with them otherwise, but a problem-solving AI does not need any of that.


People 'modules' still need to connect at some interface to computers. If the people have emotions, the programs people create will absolutely either reflect that, or the application of where the computation is applied will be a composition of emotion and calculation.

Why not start by examining what emotions are, and why they are imprecise indicators of quantitative measures?

Imagine you live abstractly first, disconnected from emotions. Then imagine the effect that has on your life. Then imagine what the basic issues in the world would be, if everyone lived that way.


The flu kills fewer than half a million people per year. Heart disease accounts for almost 1/3 of all deaths worldwide! You're wrong about how important the flu is, and you're wrong about AI being unselfish and apolitical.


I'm talking about lost productivity and the fact that maybe a billion people are affected each year. Heart disease is both a genetic and a lifestyle issue; it's much harder to tackle than creating an antiviral medicine.


[deleted]


Flu is not just a benign annoyance - it does long-term, accumulating damage to health as well.


So does heart disease. You're somehow ignoring the "lost productivity" of 6.5 million people per year dying! And I didn't mention stroke, respiratory diseases, AIDS, and non-flu intestinal diseases, each of which kills more people per year than the flu.


I'm only saying one is easier to solve (in theory) than the other.


We can't predict what startup is going to succeed, why do we think we can predict what the future of AI will look like?


Because the foundations upon which AI shall be built aren't exotic. They're still programs. They still need an architecture to run on and they need electricity.

Even if new architectures that are specifically designed to leverage neural nets[1] or some other variety of processing are developed, the fundamental realities of computing, programs, physics and economics are not immune from being sufficiently underwhelming as far as AI is concerned.

Of course, this is setting aside cloning technology, where actual biological brains - and therefore legitimate "life", and therefore intelligence - can emerge.

[1] http://www.nytimes.com/2014/08/08/science/new-computer-chip-...


I can't predict what could happen, but I would guess that once AI is more advanced, the first thing it would do is ask for, or take, the computational power of the Internet to design systems that are able to evolve and grow. The ultimate goal of AI is to find better ways to decompose and construct programs that solve goals, and to create a way for machines to communicate (that is, to encode information and knowledge). The ultimate goal is to construct a world in which machines are rewarded for creating better programs and new ways of artificial conceptualization. Unfortunately, there is no place here for humans, except perhaps the analysis of our brains by machines trying to decode them, looking for their god to provide sense to their existence.


We don't have to. We only need to agree there is one, and that it's dangerous to co-inhabit it in almost all scenarios.


We should be worried about AI and we should have a plan: http://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dange...


Why is Elon Musk an expert in these discussions?


Bachelor's degrees in physics and economics. Experience in engineering, consultancy, and management while working at Tesla and SpaceX (particularly their engine design). Has a history of literally thinking outside the box.


Figuratively.


One bright side of an AI apocalypse is the robots will have better haircuts.


Most likely you and your kids are gonna be wiped out by intelligent machines in just a few decades.



Whenever I'm finding it appropriate to do so.

I'm a computer science researcher; I have a right to that opinion, and I sincerely expect this to be the most likely outcome.

I do not see anything wrong in stating it bluntly and clearly, as often as I like. I don't see anything wrong in copy-pasting my own quote.


paperclips, bob.



