[flagged] Yoshua Bengio: Humanity faces a 'catastrophic' future if we don't regulate AI (livescience.com)
20 points by hdvr 11 months ago | 18 comments


Many people instinctively assume that researchers in AI are the best qualified to comment on whether the current line of research is likely to pose existential risks. If you assume that, then the situation sure seems dire, with many of the most prominent researchers warning of the oncoming apocalypse.

However, there's another possible interpretation: AI researchers chose to pursue their field because they believed true artificial intelligence was possible and wanted to pursue it. They steeped themselves in the literature of superintelligence and set that as the target, which primes them to buy into their own field's hype.

This is their life's work: of course they believe in it. We should be careful not to take their expertise in deep learning's foundations as evidence that they have expertise in where it will end—no one can know that yet, and they're the people who chose this path of research because they believed in it before anyone else did.


They do have particular insight into the trajectory of capabilities; less insight into how the newfound capabilities will interact with the world; and no particular insight into values.

The usual failure mode with experts is that they overstate how important their field is and demand money to fund further progress in their work. It's at least slightly interesting that this particular group of experts aren't self-aggrandizing in an attempt to make more money and get to develop their research interests further. Instead, they're arguing that their research needs to be frozen and severely regulated.

In my opinion, the existential risk of AI is very real. The problem is that policymakers lack any real understanding of it, and most regulations will focus on mostly benign things (what if end users do something naughty with it?) rather than the central existential risk (ew, that's nerdy and alarmist). You can see that dynamic with Newsom.

We'll end up with the worst of all worlds, where regulations prevent valuable small-scale use cases that improve our lives, but do nothing at all to prevent existential risk. Maybe even making the existential risk worse, though that's more speculative.


Many people arguing that AI risk is real have big monetary incentives. Some are asking for money to study safety and influence the regulatory bodies. Others gain money because believing in AI superintelligence makes their AI startup look like a great investment. The true believers like Bengio are a smaller subset.


> Others gain money because believing in AI superintelligence makes their AI startup look like a great investment.

And still others are hoping that the fear will lead policy makers to build up a regulatory moat that they'll be able to navigate but their up-and-coming competitors won't.


It remains highly interesting, and indicative of something, that researchers in other fields focus almost entirely on "give us money to improve our technology," not "give us money to research how to fix our technology, and pause it in the meantime."


> The solution is a middle ground, where we talk to the Chinese and we come to an understanding that's in our mutual interest in avoiding major catastrophes. We sign treaties and we work on verification technologies so we can trust each other that we're not doing anything dangerous.

Veritasium has a video on how developing the fast Fourier transform a few years earlier might have prevented the nuclear arms race between the US and the Soviets (by allowing independent verification of how many nuclear tests were done): https://www.youtube.com/watch?v=nmgFG7PUHfo
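For anyone curious about the mechanism: the verification idea is to run Fourier analysis on seismograms and compare how the energy is distributed across frequencies, since underground explosions and earthquakes leave different spectral signatures. A rough sketch of that kind of computation with NumPy; the synthetic signal, sample rate, and frequency bands below are purely illustrative assumptions, not real seismological discrimination criteria:

    import numpy as np

    # Synthetic "seismogram": a low-frequency component plus a sharper
    # high-frequency burst, with some noise (values are made up).
    fs = 100.0                     # sample rate in Hz (assumed)
    t = np.arange(0, 60, 1 / fs)   # 60 seconds of data
    signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 8.0 * t)
    signal += 0.1 * np.random.randn(t.size)

    # The FFT makes this an O(n log n) computation instead of O(n^2),
    # which is what made routine processing of seismic records practical.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
    power = np.abs(spectrum) ** 2

    # Compare energy in a low band vs. a high band; explosions tend to put
    # relatively more energy into higher frequencies than earthquakes do.
    low = power[(freqs >= 0.5) & (freqs < 4.0)].sum()
    high = power[(freqs >= 4.0) & (freqs < 16.0)].sum()
    print(f"high/low band energy ratio: {high / low:.2f}")

The point of the video is that without a fast transform, this kind of analysis was too expensive to do at scale, so the superpowers couldn't independently verify each other's test counts.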

What technology can monitor the development of AI in a foreign country?


As a European I'm more worried that we will over-regulate it and drive away the next generation of tech companies too.


Worry not, we are already doing that! GDPR + the AI Act is almost the definition of an unstable regulatory regime, compounded by the fact that the specific implementation of those is up to the member states.

I don't have weird AI fears and I'll be shocked if LLMs end up being more than "it helps with writing code sometimes, kind of", but there are potentially great use cases to investigate and I don't see what all of this regulation is getting us.


Can we regulate China's and the USA's secret AI programs? Nuclear non-proliferation treaties work because the nuclear industry is harder to hide, and it is expensive to produce and maintain nukes you will probably never need anyway. Once we are able to negotiate with China and the USA to stop those programs, then we can entertain the idea of regulating AI research.

I wish we could regulate based on evidence and data and not on feelings of fear. We can regulate some uses of AI based on facts, like not allowing "smart dudes" to sell AI therapists, medics, and lawyers as commercial products, because where there is greed there will be less concern for safety, and human supervision will not exist.


This headline can be shortened to "Humanity faces a 'catastrophic' future". It is quite hard to take a serious look at the future and not see catastrophe; there are so many threats, and the monkeys responsible for dealing with them are barely capable of managing a good day. We're still facing regular 1%-style risks of nuclear war. Eventually one of those trips.

We're better off forging ahead. The regulators, if they get involved, will do what they do and freeze the status quo. The status quo isn't sustainable; we can't afford that in the long term. Even if we do accidentally replace ourselves with robots, at least the robots have a better chance of behaving like civilised, rational beings.


Solve both problems at the same time: put AI in charge of the nuclear weapons.


Can someone please explain why my post got flagged? Did I overlook something in the posting guidelines, perhaps?


I could offer an answer to your question but it would also get flagged :)


Because users flagged it.

That pushes the question to "why did users flag it?", but it's not hard to guess. A dedicated minority roll their eyes at discussions of AI safety. On top of that, although the article itself is sound, it's targeted at non-technical people, and the domain it's hosted on isn't well regarded (trending banner on top: "What happens when you hold in a fart?").


Ok, thank you for pointing that out.


Unregulated North Korean AI will then rule the world.


Ah yes, the libertarian utopia of North Korea


Who is Yoshua Bengio and why is he LARPing as Nostradamus?



