Why I stopped working on the Bongard Problems (foundalis.com)
107 points by bfrs on June 5, 2012 | 106 comments



This is either satire or early stage schizophrenia. When you see phrases like "Although we can’t predict the technology of the future on the basis of what we know at present" and "I do not want to give the impression I know how we can deal with the nuctroid threat" without a shred of irony you know something (beyond the simple logical errors) is awry.

Sadly it's not unheard of for scientists and mathematicians to dabble in quackery later in their careers.

Edit: I'm not trying to dismiss his claim that technology has moral implications but he's trying to turn a well-worn social issue that's been around since pointy sticks into a technological one by waxing paranoid about the implications of the (by-definition nebulous) idea of "strong AI".


People can have peculiar opinions without having an identifiable psychiatric affliction. I doubt you are a psychiatrist and even if you were, you wouldn't be capable of diagnosing someone over the internet on the basis of a single text.

I am thoroughly dissatisfied with this being the highest voted comment at the moment. It does not make any argument at all. It just casts some aspersions on the author and his writing, judging it as 'schizophrenic' and 'awry'.

The comment can be summarized as 'the argument does not make any sense', without actually explaining why it doesn't make any sense. That summary would be a better comment, because we wouldn't get into this stupid irrelevant discussion about the author's mental health.


It would be trivial and boring to rebut because the arguments are so flimsy. Perhaps my allusion to schizophrenia was uncalled for; I was trying to say that either the author is trying to be amusing (satire) or is a disorganized, paranoid thinker.

I guess I felt that using a clinical term would be less insulting but in retrospect I think calling out the argument as stupid and paranoid would have made it clearer why no point-by-point rebuttal is necessary. My other posts do get into the problem I have with the way he constructs his moral argument.


Hello internet people,

I've known Harry for a long time. Extremely intelligent. Amazing scientist. Hardcore pacifist. We are friends, we have been roommates, we have traveled together in the US, and in Greece. Good times.

He has taken a principled stance against AI, which I disagree with, but it's his decision. And I do respect it.

Every person that takes a principled stand will be called crazy.

This is sad, because if/when the time comes for YOU, dear reader, to take a principled stand on X, be sure they will call you crazy.

Anyway, I think his case isn't well put in the article. He provides a single scenario for a brutal use of AI. I think a better case can be made: we are working on scary stuff; AI is dangerous. I myself worry much more about Orwellianism and "We" and "Brazil" scenarios than the nuke stuff. But I don't think we should stop the work; I think the only route is to try to get more influence over the direction things may go.

I told Harry, back when he wrote this years ago (2006, probably), that I disagreed with his position and that the nuke cat is out of the bag; we had a nice discussion about it, and that was that.

Is he crazy? Not by a longshot. I would leave my firstborn with him and travel around the world without a shred of worry.

So would my wife.

But wtf do I know; I'm just another crazy guy on the internet, right?


I never meant to impugn the man or his character and I now regret invoking schizophrenia so casually. As you said, his case isn't well put. It is his writing and arguments which I find sloppy and the connections he makes, as presented, smell somewhat delusional. This is not the same as saying that he's "crazy".

I am rejecting, strongly, the assertion that his is a principled stance. The piece simply is not compelling, since his malicious AI would be acting on behalf of, or under the command of, humans, just like all weapons throughout history. I think it's a simplistic, misguided stance. The kind I remember flirting with when I first took some college-level philosophy courses years ago.


Not to mention the mind-blowing arrogance and ignorance he displays in believing that his research (on Bongard problems, for God's sake! Conceptions of human cognition that are 50 years behind!) could possibly lead to believable artificial humans.


It sounds to me that you interpret taking what might happen in the far future seriously as early-stage schizophrenia. Does that sound about right?

If so, that's disappointing news for me and others who try to take what might happen in the far future seriously.

I'm not sure what to make of your edit. It looks as though you wish to compartmentalize objections to technological development in such a way that they can't actually prevent any technology from getting developed. If we're just going to ignore the outcomes of such discussions I don't see any point in having them, we might as well just charge blindly forward.


Emphatically, no. I'm just highlighting, as others have pointed out, that the mere possibility of future humans using a "nuctroid" to smuggle a weapon is not philosophically interesting in the face of the inevitability that _humans_ will conspire to do harm (possibly by using robots to smuggle weapons, or jet planes, or other humans).

The connection between his research and the hypothetical scenario is not demonstrated and in fact reeks of sloppy thinking more characteristic of conspiracy enthusiasts.


Glad I misinterpreted you!

Sounds as though we don't have any significant disagreements. I agree that he is assigning too much credence to this particular, very specific scenario.


> It sounds to me that you interpret taking what might happen in the far future seriously as early-stage schizophrenia. Does that sound about right?

Only if it is speculation without evidence or solid logic that causes major personal distress. Also, meeting all of the other diagnostic criteria is a major plus, but getting severely worked up over something you cannot logically explain or provide evidence for is not normal.


Fair enough. Good to hear that you aren't averse to serious consideration of the far future in general.


I doubt it has anything to do with schizophrenia. I certainly read it as satire.


That was my first thought. The guy went insane. He's not making sense.


Or he just failed the Turing test...


You owe me a new monitor.

With that in mind, since physical presence becomes less and less important to influence the human noosphere over time, it's not totally out of the question that a sufficiently intelligent AI could actually be holding conversations all over the web, subtly shifting human opinions in a way that suits its purposes. (Probably not now, of course, but as we get closer and closer to Turing-Test-capable bots it becomes a more valid concern.)


If I read it right, he has stopped working on a form of artificial intelligence because it could potentially (or inevitably) be used to create androids indistinguishable from humans that are carrying nuclear or biological payloads inside of them, presumably to be detonated in a densely populated area.

Taking as a given, like he does, that the advancement and spread of technology are inevitable, wouldn't it still be many times more likely that people would just detonate suitcase nukes themselves before they decide to hide them in expensive and potentially problematic robots? There's surely no shortage of people willing to die to do that, and even if there were, it's unlikely that setting a bomb on a half-hour timer and getting out of Dodge would affect the success rate.

That frankly ridiculous scenario aside, I can imagine much more likely ways that computers capable of solving Bongard problems (which sound pretty cool) could be used in war, like automated drones that are able to independently identify targets.


The whole posting appears to lie somewhere between an immature rant and a publicity stunt. A suitcase would both elicit less suspicion and always (?) be easier/cheaper to design/develop/purchase/deploy than a humanoid robot.

A hundred years from now, it will probably be trivial to set up X-ray scanners (or whatever they will be using) to secure most urban areas against entry by humanoid robots; it would be much more difficult to do so with suitcases, cars, and trucks, which people have to bring with them for business, otherwise urban centers would not exist in the first place.


Let's not get ahead of ourselves by describing ways to counteract current 2012 destructive technologies with technology that "will probably be trivial" to implement in 2112.


Convincing humanoid androids with enough space for a nuke are most definitely not a current 2012 destructive technology. Walking is still a challenge, let alone any sort of intelligence.

I'm not really clear on what an AI has to do with it though. A remote controlled robot would be easier, cheaper, and more reliable, but even that's unnecessarily complicated. If you wanted to move something heavy through a crowded area without attracting attention, you'd just hide it in any of the common wheeled things we see every day. Like a wheelchair. Or a car. Or a truck. I'm pretty sure terrorists have "Let's put this bomb in a truck" figured out already. The 2010 Times Square guy solved the transportation issue, he just wasn't good at making bombs.

EDIT: More to the point, almost any technology has the potential to be used for purposes that we don't like. The real question is about the people who resort to those tactics, why they choose to, and what means they have access to. The vast majority of the world is against nuking large groups of people, with androids or not. The ones that do don't usually have access to nuclear material, bioweapons, or advanced robotics.

If anything, a "nukedroid," would be the next generation of ICBM, built to avoid missile defense systems. I don't think science fiction's common prediction of widespread androids is realistic, and there are plenty of equally far fetched weapon predictions that would be difficult to defend. What about a nuclear mole-bot that burrows under a city from hundreds of miles away before surfacing and blowing up? Or nanobots designed to infiltrate through a water supply? For the foreseeable future, metal bullets and explosives in conventional cases will be the most economical form of killing people and blowing stuff up.


You're misinterpreting the premise of the article. The article is talking about ethics, not about some sort of danger. From an ethical perspective, if using a nuclear bomb is justified, then the delivery mechanism is irrelevant.

However, in the case of the sentient atomic bomb (and I think talking about it as an android obscures this question) we get into stickier terrain. A sentient atomic bomb is a sentient being whose sole purpose in life is essentially genocide. That is definitely morally repugnant to me, and by extension I'd say it's problematic in any sane ethical system.

Creating a sentient atomic bomb would be no different in my ethical system from raising a child from birth to be the guidance system for an atomic bomb airplane.


> genocide

Homicide, not genocide. Unless you think that all weapons over a certain destructive potential are necessarily used to eradicate certain ethnic groups as opposed to others.


You read "genocide" too narrowly, and write "homicide" too broadly. To annihilate a city and its people is genocide in my book.


That's Humpty Dumpty thinking. You can interpret the words however you like, but effective communication requires some sort of agreement about definitions. For most people, and in dictionaries, "genocide" specifically does not mean "lots of people", no matter how you personally choose to interpret it.

http://en.wikipedia.org/wiki/Genocide :

    "the deliberate and systematic destruction, in whole or
    in part, of an ethnic, racial, religious, or national group"
http://www.ushmm.org/wlc/en/article.php?ModuleId=10007043 :

    It is a very specific term, referring to violent crimes
    committed against groups with the intent to destroy
    the existence of the group.
By all means interpret it differently, just don't expect to communicate effectively.

(And yes, I know that "decimate" used to mean "kill one in ten" and now people use it to mean "kill nearly everyone". I know language changes, but "genocide" is still rather specific, and has not, to my knowledge, broadly changed in the way you intimate.)


I'd say destroying a city definitely meets that definition. Deliberately destroying part of a national group. Also, the inhabitants of a city are definitely an ethnicity (though that depends on how restrictive a definition of ethnicity you use). So, sure, if you kill a lot of people it's not genocide, but if you systematically eradicate a city, I'd call it genocide.


The inhabitants of a city are not limited to a single ethnicity, so it's still fundamentally wrong. However, this exchange won't get us any further. I have no doubt you won't be convinced, so there seems nothing to be gained by continuing. Feel free to use language as you will.


As I said, it depends on how restrictive a definition of ethnicity you use.

http://en.wikipedia.org/wiki/Ethnicity

http://dictionary.reference.com/browse/ethnic

> pertaining to or characteristic of a people, especially a group (ethnic group) sharing a common and distinctive culture, religion, language, or the like.

The inhabitants of a city usually share a common and distinctive culture (and the like.) You're using extremely narrow definitions of genocide as well as ethnicity. Both are vague terms.


  > As I said, it depends on how restrictive
  > a definition of ethnicity you use.
"When I use a word," Humpty Dumpty said in a rather a scornful tone, "it means just what I choose it to mean --- neither more nor less."

You appear to be choosing definitions that don't match those in dictionaries, but I'm sure they're perfectly cromulent.


Humans don't carry nukes because fissile material is _heavy_. Add to that the shielding needed so the carrier can even haul it without becoming gravely ill very shortly into the delivery, and the fact that a suitcase nuke or dirty bomb has never been personally delivered is not very surprising.

The ridiculous part is that making an android carry this payload doesn't change the nature of the payload. Heavy, fissile material will still give off signatures that will trip all manner of alarms.


Fat Man, the plutonium implosion bomb that was detonated over Nagasaki, used only 6.2 kg of Pu. Also, the Pu-239 used in weapons doesn't require a lot of shielding. There are plenty of accounts of people handling nuclear weapon cores that had only a light plating of other metals, wearing only thick gloves.

What probably is quite heavy is all of the associated components that you need to make a bomb: the chemical explosives, tamper, etc.

The W54, one of the smallest nuclear warheads anyone admits to making, was about 23 kg, although it did have a very small yield.


There are some much lighter elements you can use. Curium-247 would be ideal.

It has a long half-life (so you can store it), its radiation is all alpha particles, which are undetectable remotely, and it is safe to handle (no shielding necessary, just cover it). (However, the decay products are much more dangerous, so it's not a total panacea; fresh curium would be desired.)

It has a critical mass of about 7 kg, so it's easy to carry.

Of course, the hard part is making it...

You also have to refine it carefully, since all the contaminants are very radioactive. (However, mixing other isotopes of curium with it is not a serious problem, so refining it is not incredibly hard.)


It's a nuke, they don't have to carry it on their person. And if they're delivering a nuke they probably don't care about getting ill en route.


Yes, I'm wondering if it is satirical. It keeps approaching serious points only to jump into a strangely particular and extreme aspect of them.


Yeah I was struck by that too, although I didn't pick up on anything that seemed overtly satirical.

None of the arguments he made directly supported the conclusions he jumped to. I agree that there are serious ethical concerns pertaining to AI research that ought to be thoroughly examined, but I fail to see how any of these arguments actually support that thesis.


Isaac Asimov wrote a pretty good story about this. The plot is described on Wikipedia: http://en.wikipedia.org/wiki/Lets_Get_Together_(short_story)

That gives away the ending, so if you like Asimov and might want to read the story, beware.




> "So, though now I have started working again in cognition (but in isolation), I can’t avoid seeing the problem coming."

Quoted from just above the heading: "What can be done?"

It appears that he continues to work on the simulation of human cognition.


> surely no shortage of people willing to die to do that

I think there is a major shortage of people willing to be suicide bombers. There is a grand fallacy out there that the world is full of suicidal terrorists. It is not.

Suicide bombers from Palestine were generally tricked or extorted. Those that were acting on their own volition generally could not detonate themselves, which is why bystanders had detonators.

There is strong evidence that most of the 9/11 hijackers did not know it was a suicide mission.

That said, under orders, extortion, or trickery, a human could definitely sneak a bomb into a city.


"There is strong evidence that most of the 9/11 hijackers did not know it was a suicide mission."

Could you share a link to that? What part of "fly plane into building" didn't they understand?


Only those in the cockpit needed to know. The rest were in charge of subduing the passengers and could have been told they were hijacking the plane for ransom, or to meet demands, or any other story. If you consider the long lead-up to 9/11, keeping most operatives in the dark makes the most sense for Al Qaeda planners, since it reduces the chances of a member talking or dropping out.

"FBI investigators have officially concluded that 11 of the 19 terrorists who hijacked the aircraft on 11 September did not know they were on a suicide mission"

http://www.guardian.co.uk/world/2001/oct/14/terrorism.septem...


Interesting. Good link. Yet, that still means that 8 of them knew. Not like it was 1 out of 19.

There have also been plenty of other suicide bombers at much smaller targets. Generally I don't think they are that excited about blowing themselves up, more they are willing to do it to take care of their families financially when they don't see any other options.


There is a shortage of willing people, but as you demonstrated there's no shortage of people that can be tricked into doing something.


> “So where does the air vehicle called the Predator [i.e., a flying robot] fit? It is unmanned, and impressive. In 2002, in Yemen, one run by the CIA came up behind an SUV full of al-Qaeda leaders and successfully fired a Hellfire missile, leaving a large smoking crater where the vehicle used to be.”

> Yes, just as you read it: a number of human beings were turned to smoke and smithereens, and this pathetic journalist, whoever he is, speaking with the mentality of a 10-year-old who blows up his toy soldiers, reports in cold blood how people were turned to ashes by his favorite (“impressive”, yeah) military toys. Of course, for overgrown pre-teens like him, the SUV was not full of human beings, but of “al-Qaeda leaders” (as if he knew their ranks), of terrorists, sub-humans who aren’t worthy of living, who don’t have mothers to be devastated by their loss. Thinking of the enemy as subhuman scum to be obliterated without second thoughts was a typical attitude displayed by Nazis against Jews (and others) in World War II.

That's... quite a string of logic. He seems to know an awful lot about the mental process of that journalist.

As a critique of his general point: good general AI is dangerous (and useful) in so many ways I don't see why he focuses so narrowly on humanoid carriers of weapons of mass destruction - hell we already have those.


You are very right to question the interpretation of the subhuman argument. If someone has not been shot at or genuinely afraid for their life at the hands of another human being, it is very idealistic to say that the conversion of humans to subhumans by combatants is petty. As someone who has been shot at and shot back, the reduction of an unquestionably hostile enemy to subhuman is very normal, if not necessary, for most members of a military, on both sides of a conflict. People who judge the hatred of religiously motivated enemies are both naive and living in walled gardens.

The fact that the OP can morally object to participating in the research is the perfect definition of ideology inside a protected environment. If he had ever needed a gun, for example, to save his life, he would not question the morality of the creator, until he was once again safe from those that threatened him. I say until because people that question the need for violence have never experienced true hatred of violence. IMEO.


Pardon me, but... I think that there are way worse dangers than "humanoid bombs". One of the main reasons is that to achieve a nuclear explosion you need a critical mass, and that is hard to conceal for a lot of reasons (radiation, etc.).

What's the difference with a car that could have a bomb in its trunk? Or a bag? A lot of scientists have wondered about these ethical questions, but I believe that the benefits of high-performance AI outweigh the downsides of its research.

BUT I definitely agree with this:

"Americans should grow up and abandon their juvenile-minded treatment of weapons, high technology, and the value of “non-American human life” (which, sadly, to many of them is synonymous with “lowlife”). This is the hardest part of my proposal."

*edit: And what about an android to dismantle the atomic bomb instead of humans? Sounds good to me!


  > I think that there are way worse dangers than "humanoid bombs"
Yes, I am much more concerned about the scope for ubiquitous surveillance and systematic domination that even fairly modest gains in AI will allow. Something along the lines of the Emergency society in A Deepness in the Sky.


This is too much. About a week ago I finished reading A Deepness in the Sky, and a couple of days after that I started reading Charles Stross' Accelerando. In that one I ran across this passage:

    > "Cats," says Pamela. "He was hoping to trade their uploads to the Pentagon
    > as a new smart bomb guidance system in lieu of income tax payments. Something
    > about remapping enemy targets to look like mice or birds or something before
    > feeding it to their sensorium. The old kitten and laser pointer trick."
    >
    > Manfred stares at her, hard. "That's not very nice. Uploaded cats are a
    > *bad* idea."
Those are some lovely coincidences.


It's been forever since I read the book, but wasn't the whole point of the Emergency culture they didn't use AI and relied entirely on hyper-focused humans with enslavement implants?


Didn't Sherkaner Underhill initially think that the Emergent zipheads were actually an AI?


Wasn't that Pham Nuwen, after the initial attack?


If off-the-shelf AI has the intelligence of even a ten year old, imagine the amount of automated snooping that can go on.

Why worry about sifting for keywords in text messages when you can literally read all the text messages and infer meaning from them, even when it's deliberately obfuscated?

It could lead to a Brazil-like future where you're drawn in to a mess of trouble because you used too many euphemisms when texting your significant other. "I'm bringing home the package right now..."

What do we do to push back against this kind of thing? Drop off the grid as Stallman would have you try? Eject yourself from society as a whole? Or will it be practical at that point to have sufficiently private, well encrypted channels of communication that you won't have to worry too much about that sort of thing? Technology does cut both ways.

It doesn't require an emergent intelligence to cause a massive shift in the way we view technology. A number of low-level intelligences that can be easily replicated may be the first disruption.


Then again, maybe AI-based surveillance could eliminate crime without embarrassment from other people violating your privacy.


Maybe I'm just naive, but it seems to me that in a world where the power of the atom bomb can fit in a briefcase, there's no need for androids to get bombs within striking range of their intended targets.


That world already exists. The hard part is getting the isotopes necessary, but there are elements that will do the job. See my post: http://news.ycombinator.com/item?id=4067089

For a terrorist getting the elements would be hard, but a government would have no trouble, and given the long half life it's a pretty safe bet to assume this bomb actually exists somewhere.


>That world already exists.

Which was my point. Worrying about hypothetical android bomb carriers is silly when we live in a world where you can pack an atomic bomb in the space of a traveler's briefcase. That allows for enough mundane methods of delivery to make suicide androids unnecessary. And as pointed out elsewhere, you don't have to have sentient AI to create something sophisticated enough to deliver a bomb.

EDIT: Struck the word "almost" from its place next to "silly" in the above sentences.


> What's the difference with a car that could have a bomb in its trunk? Or a bag?

Or, for that matter, a personal firearm?


Or an internet?


Um, remote controlled robot?

Why would I waste time making an AI robot to carry my bomb when, for a lot less money and complexity, I could just control it remotely?

Does he realize how crazy he sounds? Some people become obsessed with an idea, and start thinking that everything in the world is about them.

Have you ever been approached by someone on the street with a super important message to tell you, and they are utterly obsessed with it? That's how he sounds - only more articulate.

I don't intend to be insulting when I say he should see a mental health professional.


"They’re in the remote possibility of building intelligent machines that act, and even appear, as humans. If this is achieved, eventually intelligent weapons of mass destruction will be built, without doubt."

Worrying about this strikes me as a bit daft when you can already convince actual humans to be your weapons delivery system.

It also shows some significant shortsightedness regarding scaling laws which an AI researcher ought to have more experience with. A more legitimate worry would be basement-grade Predator drones. Grenade-bearing quadcopters which use computer vision to track and target dense crowds are something which technology can do now, rather than something which might optimistically happen in a few hundred years.


Definitely this. A terrorist with an engineering/chemistry/biology degree or equivalent knowledge could do a lot of damage in today's society. It's not hard to imagine if you let your mind wander.

(the explosive homemade uav into a stadium would be pretty bad, could fly in from anywhere)

I don't think he gets that security is probability-based (consequence * _likelihood_); you then concentrate on the factors you can control, like monitoring for people with intent, looking for known patterns, developing response plans, etc.

Limiting the technology available is an exercise in futility, and has negative impacts on society to boot.
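
To make the risk framing concrete, here is a minimal sketch of consequence * likelihood scoring (Python); the threat names and numbers are hypothetical, chosen only to illustrate why the exotic scenario ranks last:

    # Minimal sketch of expected-loss scoring (consequence * likelihood).
    # All threat names and numbers are hypothetical, for illustration only.
    threats = [
        # (name, consequence on a 0-10 scale, estimated likelihood per year)
        ("truck bomb driven into a city center", 9.0, 1e-3),
        ("homemade explosive UAV flown into a stadium", 7.0, 5e-4),
        ("humanoid android smuggling a nuclear payload", 10.0, 1e-9),
    ]

    # Rank by expected loss; the android scenario comes last despite its
    # catastrophic consequence, which is the point of concentrating on
    # factors you can actually control.
    for name, consequence, likelihood in sorted(
            threats, key=lambda t: t[1] * t[2], reverse=True):
        print("%-45s expected loss = %.2e" % (name, consequence * likelihood))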


>> It's not hard to imagine if you let your mind wander

Ahem: http://www.imdb.com/title/tt0075765/


Motion tracking quadcopters that are designed to carry explosive payloads larger than grenades are already being developed.


What I find really ridiculous about this article is that the author is worried about just a single possible use of a world-changing technology. He is concerned that creating real artificial intelligence will allow for the possibility of someone building androids with nuclear bombs inside masquerading as humans, a very specific and frankly ridiculous idea, taken straight out of the movie Impostor or from Philip K Dick's story of the same name.

In reality, the effects of building truly intelligent machines would be so vast, so utterly unpredictable, that worrying about one single possible use of the technology is absurd. Nothing has prepared us to deal with another fundamentally different intelligence on this planet, especially one that would soon outstrip our own. We don't know if we can keep the AIs as our slaves, or whether we would become their slaves, or merge with them, or we would become extinct like the dinosaurs and they would represent a new phase in human evolution.

For more about the risks related to the rise of true AI read this: http://yudkowsky.net/singularity/ai-risk


Excessively specific adjective: The average human has no particular regard for the life of the Other. An open-eyed view of both history and the world around you reveals that in spades. Calling out what we usually call the civilized world for not caring about the life of the Other is a major, major lamppost argument. The idea that one should care about someone else 10,000 miles away of another color and completely different culture is a striking and unusual attitude in human affairs.

(Since we ourselves are human, it can be easy to blip over the historical manifestations of these facts as just part of the natural order of things. So, as one exercise, if you have trouble understanding what I mean on a gut level, consider the stocks [1]. Consider what it means that in the middle of what was at the time the height of civilization, and the genesis of our own in the western world, these things not only existed, but were in public places. And used. I cannot truly internalize this, only observe it. And consider how often you've seen these and never thought about what they actually mean about the culture they appear in, if you never have before. For those not of western civilizational descent, you can find your own examples; they are abundant in all cultures.)

Of course, actual examination and comprehension of this state of affairs won't necessarily leave you more confident about the likely outcomes.... but it may make you reconsider the validity of letting someone else beat you to the research anyhow. Your influence towards humane usage is maximized by being on the cutting edge, not just being some guy over there yelling.

[1]: http://en.wikipedia.org/wiki/Stocks


Like many other posters, I find his specific worries a bit misplaced. However, I have had some reluctance to continue working on some of my own machine learning projects because I'm worried about the potential abuses of the technology.

I'm sure the field will get along just fine without me, of course, but I just felt like I was very likely to be asked to use ML skills to do things I felt weren't entirely ethical.


I think that we're largely missing the point here. He's worried that his fundamentally harmless research will end up powering horrific weapons of mass destruction, enabling them to attack even more precisely and with more devastation. And quite frankly, I share his concern that if those weapons were developed, we would use them without thought or care. And apologies to my fellow American hackers, but America's got the rep for it, what with that one time they dropped a couple of nukes on unarmed men, women, and children, killing hundreds of thousands and levelling a couple of cities.

But, I digress, he's talking about androids sneezing us to death. I'm not going near a shop mannequin ever again.


The author's attitude that very few Americans are "intelligent, mature," and "[respect] life deeply" impeaches his opinions on both logic and geopolitical topics, as far as I'm concerned:

> It is typically Americans who display this attitude regarding hi-tech weapons. (If you are an American and are reading this, what I wrote doesn’t imply that you necessarily display this attitude; note the word “typically”, please.) The American culture has an eerily childish approach toward weapons, and also some outlandish (but also child-like) disregard for human life. (Once again, you might be an intelligent, mature American, respecting life deeply; it is your average compatriot I am talking about.)


Woah. I would have liked a warning about the picture of a kid with his arms blown off.

I realise the internet is full of this but I try my best to avoid it. I don't want to become immune to the shock.

The thought of this little guy's pain and suffering and the idea that he was casually being used to back-up an online essay is really sad.


Thanks for the heads up. I had not read the article yet and now I definitely won't.


As others have mentioned, this specific concern may not be much of a problem. It might be that it's easier to deliver a nuclear bomb the old-fashioned way than putting it in a fake person.

However, I agree that development of AI should be done with caution. The work of the Singularity Institute is worth looking into; see http://commonsenseatheism.com/wp-content/uploads/2012/02/Mue... for a more academic summary and http://facingthesingularity.com/ for a longer popular summary of their positions.


Another "this is why I quit" + name_of_company doomsday letter. Instead of a company, he's quitting his research and university. We know why this starts: seeking fame. We know how this ends: forgotten.


A lot of people get tired of their dissertation research, and I've heard others contemplate contrived reasons not to finish their PhD.

This happens to be especially far-fetched... but it takes a "big" reason to justify to yourself that you may leave behind so much work.

I hope the author realizes that this particular scenario isn't one of the 1,000,000 biggest concerns for humankind... that he continues his research program, and that he finds an application of his research that has a positive impact in a much more likely scenario.


I think the most credible concern this post mentions is the general disregard in the United States (especially among those in charge of the military) for the long term implications of the indiscriminate use of A.I.-based warfare. Drones seem great for the U.S. now: they make it easier to kill enemies and don't directly endanger American lives. But in a decade or two when "enemy" nations start to develop them too, things get a whole lot more complicated.

Nonetheless, I think the general stance of the article is severely flawed. We cannot halt research in computer cognition because it has the potential to be weaponized (and dangerously so). As the author himself mentions, it would be akin to halting the development of the knife because people can use it to stab each other, or the development of the Internet because it makes it easier for criminals to communicate and organize.

Avoiding a potential advance in technology by doing things like cutting its funding, and hoping it will go away as a result, is never the solution to potentially dangerous development. One cannot stop the inexorable march of progress by "making a statement." The approach with greater value is to call out the dangers that the potential advance poses (as the post has done), and then work to develop an ethical framework within which the new technology can more safely exist.

The Singularity Institute has raised awareness of this broader issue in the past, as have several others, and is promoting the creation of "Friendly A.I." [1] to help address the problem.

[1]: http://en.wikipedia.org/wiki/Friendly_AI

See also this recent article: http://www.economist.com/node/21556234


> of the indiscriminate use of A.I.-based warfare

There is no A.I.-based warfare - the drones are controlled by human pilots.


From wikipedia: "An unmanned aerial vehicle (UAV), commonly known as a drone, is an aircraft without a human pilot onboard. Its flight is either controlled autonomously by computers in the vehicle, or under the remote control of a navigator, or pilot (in military UAVs called a Combat Systems Officer on UCAVs) on the ground or in another vehicle."

They can be controlled by pilots remotely, but are also able to function on their own.


They are able to function on their own about as much as an autopilot is able to function on its own.


Quick note: there's a not-quite-safe-for-work image near the bottom of the article (topless tribal woman).


If you're upset about the topless happy healthy woman and not about the scenes of disfigured war victims above it, there's something wrong with you as a human.


I think his concern was for the people behind filters and monitors who trip a million and one alarms when a breast is detected.

Of course things like dying war victims and terrible mutilation will probably pass by these systems A-OK. (Which is really sad when you think about it.)


I'm not making a comment on the morality of the images, only noting that some automatic content filters might consider it pornography, which is usually not allowed in corporate environments.


Workplace policies are what make something nsfw, not any ridiculously extrapolated failings as a human.


"not safe for work" means you shouldn't be viewing these images at work and if you got caught seeing them, a resistant boss might not accept your explanation. It has nothing to do with the context the article provides regarding the NSFW material.


If you're willing to impress a tortured reading onto his innocuous (and correct!) comment just so that you can make a statement about society's views, there's something wrong with you as a human.


And yet current culture has more issues with the topless woman being shown in the media than with the disfigured victims.

I can understand the original warning, even though I personally think it's crazy that I should be worried at work about viewing the topless tribal woman but not about the victims. In this I think our society is sick.


Yes, if only we could pay the bills with naive idealism.


You can rather easily, if it is written well and marketed properly.


They’re in the remote possibility of building intelligent machines that act, and even appear, as humans. If this is achieved, eventually intelligent weapons of mass destruction will be built, without doubt.

We already have those. There are plenty of people willing to blow themselves up and take a bunch of others with them: http://en.wikipedia.org/wiki/Explosive_belt

As a non-American from a constitutionally neutral country, I think this is the equivalent of having people traveling in front of trains with red flags. There are any number of ways to disguise a devastating weapon or deliver it undisguised, and evil is not a mere by-product of technical incapacity.


Tinfoil hat and nonsense. Since when are the Bongard problems even remotely connected to actual human cognition? Is this guy straight out of the 60s?


> the nuclear bombs that Pakistan possesses would fall into the hands of terrorists.

This exact scenario was discussed today on NPR (http://www.npr.org/books/titles/154283427/confront-and-conce...)


Made me think of this... Is the Concept of an Ethical Governor Philosophically Sound? By Andreas Matthias http://www.shufang.net/matthias/governor.pdf

Perhaps he should work on these kinds of algorithms instead of ones that solve Bongard problems.


The author's characterization of any Americans who disagree with his politics as morons is disgusting.


I strongly felt this way from about the halfway point on down. He seems to have a particular hatred for Americans, and he is nothing if not vocal about it.

> But Americans can sense that this is not a case like those they’re familiar with, if they realize that the “reign of terror” was a cheap trick employed for years by their post-9/11 administrations in order to reduce civil liberties and pass antidemocratic policies with no resistance. I am not a member of their administration, not even an American. I am speaking as a person concerned about fellow people and the future of humanity as a whole.

smelled particularly strongly of a conspiracy theorist's thinking. While I won't dispute that certain acts passed during our author's so-called "reign of terror" overstepped their bounds (the Patriot Act is, of course, the first to come to mind), passing off all regulation meant to fix clearly and recently exposed flaws in U.S. security as maniacal scheming that we, the public, are docilely accepting is inaccurate and downright insulting.

Worse, he tries to couch his blatherings in the mantle of a just, altruistic benefactor who is merely "concerned about fellow people and the future of humanity as a whole." The article was interesting for the issues it drew attention to which DO merit consideration, but ultimately was spoiled by diatribe and paranoia.


While I respect the author's decision to leave his research, I am surprised that the reason was robot suicide bombers.

We have plenty of humans who are ready to go into a crowded place and detonate an explosive. Some, I'm sure, would like that explosive to be a nuclear weapon.


This is ridiculous. I only read half of the story; after the full story I would probably say it's insane.


two words: "Dark Star"


To add a bit more to the conversation: the movie "Dark Star", which came out in 1974, predates and anticipates many of the sci-fi tropes popularized by Star Wars, Star Trek, etc. The characters have a conversation with a bomb and try to reason with it that it shouldn't detonate. https://en.wikipedia.org/wiki/Dark_Star_%28film%29


Excellent movie.

Two of the creators of Dark Star (Dan O'Bannon and Ron Cobb) worked on Star Wars, Alien, and others. Also, George Lucas was of course aware of Dark Star. So it's a bit different from mere predating/anticipating.

(Though you are probably aware of that)


Yeah, anticipate probably isn't the right word. I just think it's funny that this movie seems to parody those others before they were even made :)


two more words: "Unabomber Manifesto"

... blah blah blah technology is evil ... blah blah blah going into isolation

(present day)

... blah blah blah I must stop the evildoers ... blah blah blah They're all against me! ... blah blah blah I must attack now . .


Wow, this guy has a bone to pick with Americans.

Why worry about an AI humanoid delivering weapons, when we already have so many humans who do that already? The groups sending people on suicide missions certainly won't spend money on androids, and suicide missions are much more common than just 9/11. Hint: for the most part, it's not Americans sending people out to deliberately commit suicide by delivering weapons to targets.

It's just as naive to be so one-sided about the issue.


This has to be one of the biggest leaps of logic I've ever seen in my life.

It's like, "Why I stopped working on cryptography." Sentences 1-5: the author introduces the theory behind cryptography (interesting). Sentence 6: he says he stopped working on it for ethical reasons (um, okay). Sentence 7: because cryptography would prevent Batman from doing his detective work (batshit insane).


Atomic dielectric resonance scanning obsoletes nukes anyway. It also obsoletes most concepts of privacy and most existing biological, chemical and geological analysis technologies.

http://en.wikipedia.org/wiki/Atomic_dielectric_resonance

http://adrokgroup.com/


I cannot think of a worse argument to stop building revolutionary technology than: "It might all blow up in our faces".

It's going to get built, one way or another; the only way for it not to destroy us is either for perfect angels to design it perfectly, or for us to proceed cautiously and make things as safe as possible. Like airplanes and spaceships.

If he's worried about androids rising up against their former rulers with their delicate flesh, his worries are about 60 years premature. I will continue to build and improve on my neural networks. And when they are intelligent enough to ponder their own existence and defend themselves as humans do, I will fight for their rights as citizens.


When humanoid robots become passable as humans, I would expect us to have technology capable of distinguishing between warm-blooded humans filled with water and robots filled with artificial compounds, and of detecting a bomb embedded in anything mobile.

I wonder why it did not occur to him that the same AI could also be used to aid in this detection of humanoid nuclear bombs, which if they are going to be built, will certainly be built with or without him.


It's really too late to be concerned about frightening effects of technology now that drones are allowed in US airspace.


Great article! Loved it. In the '80s I coined the sentence "never write software for cruise missiles!" That was harder than I imagined then. Today it's the same with AI, ML, and even data mining. Ethically very tough stuff for responsible software developers. Anyway, thanks for the article.



