
That's the real danger of AI: not that it wants to kill us all, but that it kills us all simply by accident.



My prediction is that one day someone will say "Entertain me more than anyone has been entertained before". And they will end up dead.


Couldn't be much worse than the mess we're making by ourselves.


No no no no.

These self-deprecating arguments can only come from people who lack a basic understanding of how good we have it, especially in the Western world, and how fragile our societies really are. Seriously, just look at Ukraine or Syria.

It can very much be worse by orders of magnitude. Just imagine this benevolent AI screwing up the electric grid or food production. Not to mention getting access to weapons, especially nuclear.

The water you drink, the food you eat, the energy that keeps you warm and mobile were made available to you by other people and by complex systems you (probably) don't fully understand. It can all easily be taken from you. And most people (incl. myself) will have a very hard time surviving. So, no, it can definitely get a lot worse.


The problem with that argument is: how would AI get into a position to control these things? That's somehow always left out. There are brakes in our society that surely have faults, but most governments are on the slow-moving side of things due to these very brakes. An AI won't suddenly be replacing elected officials to make any sort of decision.

Sure, one might argue that a "smart enough singularity-level AI" could manipulate people to achieve its goals, but I don't really see that as feasible. Can intelligence really have all that much more "depth" to it? The most intelligent people on Earth are probably doing some obscure PhD research on minimal government subsidies; the people in control have very little intersection with them.


My point was that we don't need an AI to make a mess of things; we already have the ability to do that ourselves. With the way we're going on climate change - a very likely collapse of society and a path to extinction - it seems pretty obvious to me that an AI couldn't do much worse (though it might do it much faster and more efficiently, which would probably be a net benefit for the Earth).

Basically, everything you point out I agree with, except replace "benevolent AI" with "myopic human".


Climate change is not a path to societal collapse and extinction in most scenarios that scientists consider even remotely likely.

AI (or other future technologies) really can be a path to extinction.


If you find the idea of climate change leading to societal collapse far-fetched, you must surely find the concept of a rogue AI even more so - the science on climate is much more solid and certain (and I don't think the science says the scenarios that lead to societal collapse are as unlikely as you think).

I also think it says more about our own human failings than about the true risks of general AI that we imagine it more likely to go rogue and kill us all than to be more adept, more benevolent, and more capable at managing the complexity of society than our own feeble attempts.


You have really nailed the ChatGPT argument format there!


Well said


World population has never been higher.

From a species' perspective, you are fantastically wrong.


Sure, if you believe the measure of success is the global total mass of the organism. Also, I'm not talking about the peak we're at right now; I'm talking about the cliff we're running towards.


To be honest, that's the real danger of humans too. We now have numerous possible ways we might just wipe ourselves out by mistake (climate change passing some feedback-loop tipping point, forever chemicals making all mammals sterile, nukes; CFCs were a pretty good candidate back in the day too).


I wouldn't blame the AI in this case, I would blame whoever put an AI in charge of critical systems.


Issue: Eradication of humanity.

Cause: PEBKAC

Solution: Eradication of humanity.


I've always thought that "user error" would be what ultimately ends the world as we know it...


Whoops, I thought I was running the nuclear weapons simulation on staging, my bad!



I mean, San Francisco wants to give its police killer robots. There's no shortage of the "whoever" in positions of influence.



