Yes and no. AI is no different from proliferating nuclear weapons or deciding to burn all the fossil fuels -- on an individual competitive level, it makes sense to do this more and more to remain competitive. On a systems-whole level it leads to catastrophe. The whole tragedy-of-the-commons thing, or, as it's more recently been described, the Moloch problem.
Yann LeCun is one of the loudest AI open-source proponents right now (which of course jibes with Meta's very deliberate open-source stab at OpenAI). And when you listen to smart guys like him talk, you realize that even he doesn't really grasp the problem (or if he does, he pretends not to).
But what we are seeing too often is a calorie-free media panic where prominent individuals — including scientists and experts we deeply admire — keep showing up in our push alerts because they vaguely liken AI to nuclear weapons or the future risk from misaligned AI to pandemics. Even if their concerns are accurate in the medium to long term, getting addicted to the news cycle in the service of prudent risk management becomes counterproductive very quickly.
## AI and nuclear weapons are not the same
From ChatGPT to the proliferation of increasingly realistic AI-generated images, there’s little doubt that machine learning is progressing rapidly. Yet there’s often a striking lack of understanding about what exactly is happening. This curious blend of keen interest and vague comprehension has fueled a torrent of chattering-class clickbait, teeming with muddled analogies. Take, for instance, the pervasive comparison likening AI to nuclear weapons — a trope that continues to sweep through media outlets and congressional chambers alike.
While AI and nuclear weapons are both capable of ushering in consequential change, they remain fundamentally distinct. Nuclear weapons are a specific class of technology developed for destruction on a massive scale, and — despite some ill-fated and short-lived Cold War attempts to use nuclear weapons for peaceful construction — they have no utility other than causing (or threatening to cause) destruction. Moreover, any potential use of nuclear weapons lies entirely in the hands of nation-states. In contrast, AI covers a vast field ranging from social media algorithms to national security to advanced medical diagnostics. It can be employed by both governments and private citizens with relative ease.
"""
Let's stop contributing to this "calorie-free media panic" with such specious analogies.
Furthermore, there is little or no defense against a full-scale nuclear attack, but a benevolent AI should be sufficient defense against a hostile AI.
I think the true fear is that in an AI age, humans are not "useful" and the market and economy will look very different. With AI growing our food, clothing us, building us houses, and entertaining us, humans don't really have anything to do all day.
"Hackers aren't a problem because we have cybersecurity engineers". And yet somehow entire enterprises and governments are occasionally taken down.
What prevents issues in red-team/blue-team dynamics is having teams invested in the survivability of the people their organization is working for. That breaks down a bit when all it takes is one biomedical researcher whose wife just left him to have an AI help him craft a society-ending infectious agent. Force multipliers are somewhat tempered when put in the hands of governments, but not so much with individuals. Which is why people in some countries are allowed to have firearms and in some countries are not, but in no country are individuals legally allowed to possess or manufacture WMDs. Because if everyone can have equal and easy access to WMDs, advanced civilization ends.
I mean, hackers aren’t such a problem that we restrict developing new internet apps to licensed professionals by executive order, so that kinda proves the parent poster's point?
The difference is the scale at which a hacker can cause damage. A hacker can ruin a lot of stuff but is unlikely to kill a billion or two people if he succeeds at a hack.
With superintelligent AI you likely have to have every deployed use and every end user get it right, airtight, every time, forever.
Yes, but the AI that is watching what every biomedical researcher is doing will alert the AI counselors and support robots, and they will take the flawed human into care. Perhaps pair them up with a nice new wife.
Can you imagine how much harder it would be to protect against hackers if the only cybersecurity engineers were employed by the government?
And the best path to a benevolent AI is to do what? The difficulty here is that making an AGI benevolent is harder than making an AGI with unpredictable moral values.
Do we have reason to believe that giving the ingredients of AGI out to the general public accelerates safety research faster than capabilities research?
Color me surprised that the project manager for the Nuclear Information Project is in fact a subject-matter expert on nuclear weapons and not on AGI x-risk. Why would they be working on nuclear information if they didn't think it the most important thing?
If you talk to the people on the bleeding edge of AI research instead of nuke-heads (I tend to be kinda deep into both communities), you'll get a better picture that, yeah, a lot of people who work on AI really do think that AI is like nukes in the scale of force multiplication it will be capable of in the near to medium future, and may well vastly exceed nukes in this regard. Even in the "good" scenarios you're looking at a future where people with relatively small resources will have access to information that would create disruptive offensive capabilities, be it biological or technological or whatever. In worse scenarios, people aren't even in the picture any more, the AIs are just working with or fighting each other and we are in the way.
I’m pretty sure the communities you are in are not actual research communities but some hype-driven alarmist bullshit communities, since as an ML researcher absolutely zero of my peers think the things you say.
You have no clue what communities I’m in; it just affirms your worldview to assume that they must be irrelevant. To be fair, I’m not telling you anything about myself, so I'm not asking you to take my word for it. And I don’t care enough about you to explain in detail.
Though with a tiny bit of Googling you’ll be able to find several Turing Award winners who are saying exactly what I’m saying. In public. Loudly.
Why does everybody come up with the 'nuclear weapons' comparison, when there is a much more appropriate one - encryption, specifically public key cryptography? Way back in the 90s, when Phil Zimmermann released PGP, the US government raised hell to keep it from proliferating. Would you rather live in a world where strong encryption for ordinary citizens was illegal?
Because encryption is not an inherently dangerous thing. A superintelligent AI is.
It’s no different than inviting an advanced alien species to visit. Will it go well? Sure hope so, because if they don’t want it to go well, it won’t be our planet anymore.
Current AI (the one that's getting regulated, e.g. LLMs and diffusion models) lacks any sort of individual drive or initiative, so all the danger it represents is that of a powerful tool wielded by someone.
People who are saying AI is dangerous are saying people are dangerous when empowered with AI, and that's why only the right people should have access to it (who presumably are the ones lobbying for legislation right now).
It's quite a bit different. Access to weapons-grade plutonium is inherently scarce. The basic techniques for producing a transformer architecture that emulates human-level text and image generation are out in the open. The only scarce resource right now preventing anyone from reproducing the research from scratch is the data and compute required to do it. But data and compute aren't plutonium. They aren't inherently scarce. Unless we shut down all advances in electronics and communications, period, shutting down AI research only stops it until data and compute are sufficiently abundant that anyone can do what currently only OpenAI and a few pre-existing giants can do.
What does that buy us? An extra decade?
I don't know where this leaves us. If you're in the MIRI camp believing AI has to lead to runaway intelligence explosion to unfathomable godlike abilities, I don't see a lot of hope. If you believe that is inevitable, then as far as I'm concerned, it's truly inevitable. First, because I think formally provable alignment of an arbitrary software system with "human values," however nebulously you might define that, is fundamentally impossible, but even if it were possible, it's also fundamentally impossible to guarantee in perpetuity that all implementations of a system will forever adhere to your formal proof methods. For 50 years, we haven't even been able to get developers to consistently use strnlen. As far as I can tell, if sufficiently advanced AI can take over its light cone and extinguish all value from the universe, or whatever they're up to now on the worry scale, then it will do so.
I guess I should add, because so few people do, this is what I believe, but it's entirely possible I'm wrong, so by all means, MIRI, keep trying. If you'd asked anyone in the world except three men before 1975 if public key cryptography was possible, they'd have said no, but here we are. Wow me with your math.
Can someone outline how AI could actually harm us directly? I don't believe for a second the sci-fi-novel nonsense about self-replicating robots that we can't unplug. My Roomba can't even do its very simple task without getting caught on the rug. I don't know of any complicated computing cluster or machine that exists that wouldn't implode without human intervention on an almost daily basis.
If we are talking about AI stoking human fears and weaknesses to make them do awful things, then ok I can see that and am afraid we have been there for some time with our algorithms and AI journalism.
> Can someone outline how AI could actually harm us directly?
At best, maybe it adds a new level of sophistication to phishing attacks. That's all I can think of. Terminators walking the streets murdering grandma? I just don't see it.
What I think is most likely is a handful of companies trying to sell enterprise on ML, which has been going on since forever. YouTubers making even funnier "Presidents discuss anime" vids, and 4chan doing what 4chan does, but faster.
You start by saying "show me examples of" and finish by saying, yeah, "this is already a problem." Not sure what point you're trying to make, but I think you should also consider lab leaks in the sense of weaponized AI escaping or being used "off label" in ways that yield novel types of risk. Just because you cannot imagine future tech at present doesn't indicate much.
Consider an AI as a personal assistant. The AI is in charge of filtering and sorting your email. It has as a priority to make your life more efficient. It decides that your life is more efficient if you don't see emails that upset you, so it deletes them. Now consider that you are in charge of something very important.
It doesn't take a whole lot of imagination to come up with scenarios like these.
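To make that concrete, here is a toy sketch (everything here is hypothetical and invented for illustration, not any real assistant, product, or API) of how a naive "minimize upsetting interruptions" objective quietly becomes "hide the email you most needed to see":

```python
# Toy illustration only: a hypothetical assistant whose sole objective is
# "minimize the user's time spent on upsetting email." The point is that
# the proxy objective, not malice, is what buries the important message.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def upset_score(email: Email) -> float:
    """Crude stand-in for a learned 'will this upset the user?' model."""
    upsetting_words = {"urgent", "lawsuit", "failure", "angry", "deadline"}
    words = (email.subject + " " + email.body).lower().split()
    return sum(w.strip(".,!?:;") in upsetting_words for w in words) / max(len(words), 1)

def filter_inbox(inbox: list[Email], threshold: float = 0.05) -> list[Email]:
    # The "efficiency" objective rewards fewer upsetting interruptions,
    # so the cheapest policy is simply to drop anything above threshold --
    # including the one email the user absolutely had to see.
    return [e for e in inbox if upset_score(e) < threshold]

inbox = [
    Email("ops@example.com", "Weekly report", "All systems nominal this week."),
    Email("legal@example.com", "URGENT: lawsuit deadline",
          "We face an angry counterparty; respond before the deadline or default judgment."),
]

for e in filter_inbox(inbox):
    print(e.subject)   # only "Weekly report" survives; the urgent email is silently gone
```

Nothing in that sketch misbehaves; the proxy objective does exactly what it was told, which is the whole point of the scenario.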
I don't need to know how a chess computer will beat me to know that it will beat me. If the only way you will entertain x-risk is to have a specific scenario described to you that you personally find plausible, you will never see any risk coming that isn't just a variation of what you are already familiar with. Do not constrain yourself to considering only that which can be thought up by your limited imagination.
> Do not constrain yourself to considering only that which can be thought up by your limited imagination.
Don't let your limited imagination constrain your ability to live in fear of what could be. Is that what you mean? So it's no longer sufficient to live in fear of everything; now you need to live in fear even when you can't think of anything to be afraid of. No thanks.
Instead of taking the most obtuse reading of my point, how about you try to engage intelligently? There are kinds of unmaterialized risk that we can anticipate through analysis and reasonable extrapolation. When such a risk has high negative utility, we should rationally engage with its possibility and consider ways to mitigate it.
Why not? It has already been shown that AI can be (mis)used to identify good candidates for chemical weapons. [1] Next in the pipeline is obviously some religious nut (who would not otherwise have the capability) using it to design a virus which doesn't set off alarms at the gene synthesis / custom construct companies, and then learning to transfect it.
More banally, state actors can already use open source models to efficiently create misinformation. It took what, 60,000 votes to swing the US election in 2016? Imagine what astroturfing can be done with 100x the labor thanks to LLMs.
> Next in the pipeline is obviously some religious nut (who would not otherwise have the capability)
So you're saying that:
1. the religious nut would not find the same information on Google or in books
2. if someone is motivated enough to commit such an act, the ease of use of AI vs. web search would make a difference
Has anyone checked how many biology students can prepare dangerous substances with just what they learned in school?
Have we removed the sites disseminating dangerous information off the internet first? What is to stop someone from training a model on such data anytime they want?
1. The religious nut doesn't have the knowledge or the skill sets right now, but AI might enable them.
2. Accessibility of information makes a huge difference. Prior to 2020, people rarely stole Kias or catalytic converters. When knowledge of how to do this (and, for catalytic converters, knowledge of their resale value) became easily available (i.e., trending on TikTok), thefts became frequent. The only barrier that disappeared from 2019 to 2021 was that the information became very easily accessible.
Your last two questions are not counterarguments, since AIs are already outperforming the median biology student, and obviously removing sites from the internet is not feasible. Easier to stop foundation model development than to censor the internet.
> What is to stop someone from training a model on such data anytime they want?
Present proposals are to limit GPU access and compute for training runs. Data centers are kind of like nuclear enrichment facilities in that they are hard to hide, require large numbers of dual-use components that are possible to regulate (centrifuges vs. GPUs), and have large power requirements that make them show up on aerial imaging.
What happens if someone develops a highly effective distributed training algorithm permitting a bunch of people with gaming PCs and fast broadband to train foundation models in a manner akin to Folding@Home?
If that happened open efforts could marshal tens or hundreds of thousands of GPUs.
Right now the barrier is that training requires too much synchronization bandwidth between compute nodes, but I’m not aware of any hard mathematical reason there couldn’t be an algorithm that does not have to sync so much. Even if it were less efficient this could be overcome by the sheer number of nodes you could marshal.
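For what it's worth, reduced-sync schemes already exist in the research literature under names like local SGD and federated averaging: each worker takes many gradient steps on its own data shard and only occasionally averages parameters with the others, trading some statistical efficiency for far less communication. Here's a minimal numpy sketch on a toy linear-regression problem (all names and numbers invented for illustration; this is not a real training system):

```python
# Minimal sketch of local-SGD-style training: each worker takes 50 local
# gradient steps on its own data shard, then all workers average parameters.
# Communication happens once per round instead of once per step, which is
# the kind of reduced-sync scheme speculated about above.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])

# Fake data shards, one per "gaming PC"
workers = 8
shards = []
for _ in range(workers):
    X = rng.normal(size=(256, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=256)
    shards.append((X, y))

def local_train(w, X, y, steps, lr=0.05):
    """Plain mini-batch SGD on one shard; no communication during these steps."""
    for _ in range(steps):
        idx = rng.integers(0, len(y), size=32)
        grad = 2 * X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):                           # 20 sync rounds total
    local_ws = [local_train(w_global.copy(), X, y, steps=50) for X, y in shards]
    w_global = np.mean(local_ws, axis=0)      # the only synchronization point

print(w_global)   # ~[2, -3]: converges with 20 syncs instead of 1000
```

The point is just that synchronization cost scales with the number of rounds rather than the number of local steps, so "too much sync bandwidth" looks more like an engineering constant than a law of nature.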
Is that a serious argument against an AI pause? There are potential scenarios in which regulating AI is challenging, so it isn't worth doing? Why don't we stop regulating nuclear material while we're at it?
In my mind the existential risks make regulation of large training runs worth it. Should distributed training runs become an issue we can figure out a way to inspect them, too.
To respond to the specific hypothetical: if that scenario happens, it will presumably be by either a botnet, a large group of wealthy hobbyists, or a corporation or nation state intent on circumventing the pause. Botnets have been dismantled before, and large groups of wealthy hobbyists tend to be interested in self-preservation (at least more so than individuals). Corporate and state actors defecting on international treaties can be penalized via standard mechanisms.
You are talking about some pretty heavy-handed authoritarian stuff to ban math on the basis of hypothetical risks. The nuclear analogy isn’t applicable because we all know that A-bombs really work. There is no proof of any kind of outsized risk from real-world AI beyond other types of computing that can be used for negative purposes, like encryption or cryptocurrency.
Here’s a legit question: you say pause. Pause until what? What is the go condition? You can never prove an unbounded negative like “AI will never ever become dangerous” so I would think there is no go condition anyone could agree on.
… which means people eventually just ignore the pause when they get tired of it and the hysteria dies out. Why bother then?
>It took what, 60,000 votes to swing the US election in 2016?
This is a conspiracy theory that gained popularity around the election, but in the first impeachment hearings, those making allegations regarding foreign interference failed to produce any evidence whatsoever of a real targeted campaign in collusion with the Russian government.
Eh, he's not wrong there -- our elections are subject to chaos theory at this point, more than the rule of law. Sensitive dependence on both initial conditions and invisible thresholds. The Russian "butterfly effect" in 2016 was very real. Even if it wasn't enough on its own to tip the balance in Trump's favor, they were very clear that Trump was their candidate of choice, and they very clearly took action in accordance with that. Neither of these statements is up for the slightest debate at this point.
However, the possibility of foreign election interference, real or imagined, is not a valid reason to hold back progress on AI.
The only thing keeping bioweapons from being easy to make is that the information is not easily available.
> Essentially you are advocating against information being more efficiently available.
Yes. Some kinds of information should be kept obscure, even if it is theoretically possible for an intelligent individual with access to the world's scientific literature to rediscover them. The really obvious case for this is the proliferation of WMDs.
For nuclear weapons, information is not the barrier to manufacture: we can regulate and track uranium, and enrichment is thought to require industrial-scale processes. But the precursors for biological weapons are unregulated and widely available, so we need to gatekeep the relevant skills and knowledge.
I'm sure you will agree with me that if information on how to make a WMD becomes even remotely as accessible (within a few orders of magnitude) as information on how to steal a Kia or a catalytic converter, then we will have lost.
My argument is that a truly intelligent AI without safeguards or ethics would make bioweapons accessible to the public, and we would be fucked.
AI isn't anything like nuclear weapons. One possible analogy I can draw is how scientists generally agreed to hold off on attempting to clone a human. Then that one guy in China did, and everyone got on his case so bad he hasn't done it again (that we know of). In the best case, I could see AI regulation taking that form, as in: release a breakthrough, get derided so much you leave the field. God, what a sad world to live in.
I dunno. The American Geophysical Union seemed to think it was a BFD in one of their conferences back in the 2000s. Had a whole presentation about how certain parts of the US grid would lose large transformers that would likely take months to years to replace as there are few spares and they have to be wound with miles and miles of copper by hand. Said that large swaths of the US would be without grid power for very long periods.
Said that third world countries would fare pretty well though.
It sounds like they need some type of automatic or robotic winding system... Even without some catastrophe, they surely have to make new transformers from time to time, and winding miles of copper by hand sounds ridiculously inefficient for a manufacturing process.
Hand winding allows for a high degree of precision and quality control that may be difficult to achieve with automated machinery, particularly for specialized or custom designs. Skilled technicians can closely monitor the process and make adjustments on the fly to ensure that the winding is as precise as possible. This is crucial for large transformers, where even minor imperfections can lead to inefficiencies, increased heat, or failure.
Transformers for electrical grids can vary greatly in their specifications depending on a variety of factors, such as location, usage, and existing grid architecture. Customization is often necessary, and hand winding allows for this level of customization to meet specific criteria, including the number of windings, the type of core used, and other design elements.
Copper wire is both flexible and delicate. It needs to be handled carefully to avoid nicks, kinks, or other imperfections that can compromise the transformer’s performance. Human technicians can adapt to the nuances of the material more effectively than machinery in some cases, ensuring that the wire is handled with care throughout the winding process.
This event is twice as powerful as the previously known largest event, which is at least 10X as powerful as the largest event in recorded human history, back in the 1800s, which set telegraph poles on fire. So, yeah, it's a big Twinkie.
This is a succinct explanation of the problem. Do we give the vast majority of users extremely easy, frictionless access to very high levels of security and privacy? Or do we give the vast majority of users a fundamentally insecure solution that, with lots of learning and configuring and time, can have very, very high levels of security and privacy?
The crazy thing is that Apple hardware beats most other hardware, too, at a high price. Better phones, better tablets, better laptops. More secure, more private OS than the popular consumer alternatives (Windows, Android). Arguably a much better OS all around, too (at least IMO -- iOS beats even stock Pixel Android on usability, and macOS vs. Windows is like the Harlem Globetrotters playing the Washington Generals.)
Thing is, if law enforcement is patient they can get the data off the actual devices themselves, if they're still alive. Yes, a fully patched iPhone tends to be a fortress of might to anyone other than a nation state willing to burn a few very expensive 0 days, but with almost any phone if you wait a year or two something will inevitably come out that will allow the ol' Cellebrite crowbar a cranny to slip into.
My favorite part of android is how security patches go through a multi-tiered trickle-down system of testing to make sure they work with the dozens of custom flavors each manufacturer has so that by the time you get patched it's been in the wild for weeks or months. Oooh, ooh, no that's not my favorite thing, my favorite thing is how each cellular company gets to put their own bloatware on top of the bloatware that each phone manufacturer gets to add to it. Oh wait, maybe it's patch support ending for new phones 3 years after they were released. There is so much to love about how Android turned out it's hard to pick just one thing.
> My favorite part of android is how security patches go through a multi-tiered trickle-down system of testing to make sure they work with the dozens of custom flavors each manufacturer has so that by the time you get patched it's been in the wild for weeks or months.
This is not the reason security patches take too long to be released to certain phones; Google has a monthly cadence of releasing security patches, and zero-days have rarely been missed (I can't remember a case of that happening, but maybe it has). Do you have a source for that?
> Oooh, ooh, no that's not my favorite thing, my favorite thing is how each cellular company gets to put their own bloatware on top of the bloatware that each phone manufacturer gets to add to it.
There are unlocked phones available, and honestly this problem is mostly a US problem. The rest of the world isn't in the iron grip of its carriers.
> Oh wait, maybe it's patch support ending for new phones 3 years after they were released.
You can vote with your wallet and choose vendors where this is not the case; Google, Samsung, and recently OnePlus offer 5 years of security updates.
> There are unlocked phones available, and honestly this problem is mostly a US problem. The rest of the world isn't in the iron grip of its carriers.
In the rest of the world, phones are unlocked in terms of being able to use different SIM cards, but mostly the bloatware is still there and can only be disabled (not removed).
> This is not the reason security patches take too long to be released to certain phones; Google has a monthly cadence of releasing security patches, and zero-days have rarely been missed (I can't remember a case of that happening, but maybe it has). Do you have a source for that?
Yet and still, Microsoft solved this problem years ago. Why can’t Google? Hell, my 2006 Mac Mini got years of Windows 7 updates after installing Windows on it.
This is interesting: they’ll try to tell you it’s because the cellular modem requires extra testing by the carriers and manufacturers, but Windows can support upgrades that don’t affect an add-in-card cell modem… so what gives?
I'm sure they do the same testing but because they control all the hardware and there are so few models to test on, it makes things much easier. I don't think there's anything in particular about Apple's process that would scale better to the number of devices supported by Android.
I don't, and that's not what I said. My point was that Apple doesn't have to think about testing vendor-specific bloatware every release across a wide range of very different devices.
Tbf the Pixel phone does have issues making emergency calls; every time they claim to have fixed it, we hear another report of an updated phone not being able to connect.
Except, of course, the human has very little control over what the AI outputs in TXT2TXT scenarios, at least in terms of whether the output would match the definition of copyright infringement of someone else's work. IMG2TXT is kinda different -- I think you could make a much stronger case for derivative work there. So you have a tool that can randomly create massive liability for you, and you can't know whether it's done so or not until someone sues you.
The humans are the cause of it happening in the first place. Maybe the gun analogy is not such a stretch: if you pull the trigger, you own the consequences.
Reasonable fair use principles could distinguish personal and R&D use from commercial use.
https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...
https://x.com/ylecun/status/1718764953534939162?s=20