Hacker News
Joint Statement on AI Safety and Openness (open.mozilla.org)
241 points by DerekBickerton on Nov 2, 2023 | 155 comments



Leaving AI to a handful of companies is, in my opinion, the fastest route to inequality and power concentration/centralisation, and that is in its own way an AI disaster we should be worried about. Power corrupts, great power corrupts greatly; this is not entirely new.

From where I stand, it looks like Pandora's box has already been opened anyway. The era of Hugging Face and/or Llama 2 models is only going to grow from here.


Let's not be so hasty. I'm glad to see that MistralAI is signing this letter. Their small model is truly amazing, the most useful open-source model I've tried so far. But this can change tomorrow: such models may be deemed illegal and disappear from public view, as has happened before. So I'm downloading and saving whatever model I think is great.


Genuine question - does it seem plausible that a few GB of content could truly be wiped off the Internet?

I’m not in the piracy scene, but my impression was that people routinely pass full-res movies around the Internet without much barrier to discovering and downloading them, at least for technically competent users. Is that still true?


This is really not about an engineer keeping a bootleg model in their basement. It's about the barrier to entry for commercial products. Or the ability to curate improved open-source implementations in the long haul, for that matter - since past a certain scale, this entails creating a non-profit of some sort to pay your bills.

Plus, while it's definitely the case that with sustained interest, old data tends to linger around... the moment the interest wanes, it's gone. I've been on the internet for a while and there are so many hobby sites, forums, and software projects from the early days that are simply gone for good (and not on archive.org).


The Pile was. It’s still available but no one will touch it, mostly due to books3.

The difference is that a few people with lots of resources take on legal risk. In the piracy example many people with few resources take on risk, which works out since no one wants to sue people with no money.


The Pile is still used to train LLMs and it's still very much available on the net. I agree it's a risk to train your models on the dataset until the legal implications are worked out, but it doesn't seem to be stopping people.


The purpose of regulations like these is not to prevent a thing from happening. They're there so that normal behavior is criminalized but not enforced against, unless you happen to rock the boat some day.


The old models will become deprecated if they are not upgraded, and won't incorporate new information. Even if the files are available they will become abandonware.


> Leaving AI to a handful of companies is in my opinion the fastest route for inequality and power concentration/centralisation

Well yes, that’s precisely why they are lobbying for it.


That does seem awfully convenient, yes. But I am also skeptical that Meta is railing against regulation out of their commitment to protect scrappy little startups doing AI...


The gap between Meta and the other proprietary AI ecosystems (OpenAI, Google, Anthropic) is pretty large. Meta is necessarily only as far behind as open source as a whole is behind.

If open source catches up, Meta can ride the tailwind and also catch up. I'm sure Meta will flip its position once it doesn't feel outclassed by the competition.


I honestly can't see the cynical motive I know must be there in what Facebook is doing by releasing Llama. My best guess is that they just want to screw with Microsoft for buying "OpenAI", which is fine by me.


>Leaving AI to a handful of companies is in my opinion the fastest route for inequality and power concentration/centralisation

It's a homesteading land grab, plain, simple and pure.

There is a competitive landscape of first-mover advantages, incumbent effects, and so on being anchored to, e.g., Sam Altman's interests and desires at this very moment. If you want a vision of the future of garage AI, imagine a boot stomping on Preston Tucker's face over and over. The current AI industry's goal is to preserve an image of openness and benefit while getting ready to pull the ladder up when the time is right.


Power is already concentrated in the sense that only the wealthy can train models. We should build Kickstarter-style funding for open-source AI training, because cutting-edge models will likely continue to cost millions to tens of millions to train.

With Llama 2 we're at Meta's mercy, as it cost roughly $20M to train. There's no guarantee Meta will continue to give us next-gen models. And even if it does, we're stuck with their training biases, at least to some extent. (I know you can fine-tune etc.)


While I don't disagree, that's missing the forest for the trees - the point is having the freedom to share models and to continue open-sourcing them.

I'll argue that between Stable Diffusion and Llama 2, there is nothing highly specific that prevents a [very] large number of people from adopting these models and specializing them for their own needs.

The tragedy would be if those went away.


Yeah, but Llama 2-level AI may be insignificant in power compared to future models, which may be inaccessible to the public. Even assuming the algorithms/code are open, people at large won't be able to create working models.


I'm not sure about that at all, when you look at the amount of open-source work being done around quantization. One of many examples: https://huggingface.co/TheBloke/Llama-2-7B-GGML
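For a sense of how low the barrier to running (not training) these models already is, here's a minimal sketch using llama-cpp-python; the model file name is hypothetical, and newer llama.cpp builds expect the GGUF format rather than the older GGML files linked above:

    # pip install llama-cpp-python, then download a quantized model file (a few GB)
    from llama_cpp import Llama

    # Hypothetical local path to a quantized Llama 2 7B file saved to disk.
    llm = Llama(model_path="./llama-2-7b.Q4_K_M.gguf", n_ctx=2048)

    out = llm("Q: Why does quantization matter for running models locally?\nA:",
              max_tokens=128)
    print(out["choices"][0]["text"])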


> Power is already concentrated in the sense that only the wealthy can train.

That situation will change as technology evolves. We'll eventually reach a point where a normal desktop PC can train AI. The wealthy will always be able to do it faster, but the gap will shrink with time.

The trick is making sure that laws aren't put in place now that would restrict our ability to do that freely once the technology is there, and that massive data sets are compiled and preserved where the public has free access to them.


Normal desktop PCs are going to be neutered into subscription-based terminals that you can take home with you rather than go to a central location to access.


That didn't happen with cloud though. I know it's not entirely apples to apples, but you still need a computer the size of a large building to serve Netflix, etc.


Only if you're trying to serve netflix-quality stuff to hundreds of thousands of people. If you're trying to replicate "Netflix the product" (live video streaming with a slick interface) to a small set of individuals, you can do that with a personal computer (see Jellyfin, Plex).

Comparing Netflix (and most highly profitable computer businesses) to the world of producing AI models by training is not going to be fruitful. Netflix takes a lot of effort to operate but you can do on the small scale what Netflix does, quite directly. You can't replicate an AI model like ChatGPT-4 very easily unless you have all the data and huge compute that OpenAI does. Now, once the model has been produced, you can operate that model on the small scale with maybe less amazing results (see llama.cpp, etc) but producing the model is a scale problem like producing high quality steel. You can't escape the need for scale (without some serious technological developments first).


Companies will always have an advantage with scale. It's not like you need a super computer though. You can have a single desktop media server in your home that does what netflix does without any problem. A single media server can even serve multiple homes.

Netflix cheats. They send non-supercomputer boxes out to ISPs to install locally. If I could convince every ISP to install a bunch of my media servers people could watch my shows from anywhere in the US too.


I guess we're speculating that the cost of training Llama 2 will drop by 1000x or something, so anyone can train their own Llama 2 from scratch for about $2k.

I don't think compute cost has dropped by 1000x since 20 years ago. Maybe by 10 to 50x. And if you add in the demand for higher quality, the cost has probably increased. Encoding a video for streaming at the prevailing standard probably costs roughly the same today as it did 20 years ago, or more, once you factor in the increases in resolution and quality.
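To put rough numbers on that, here's a back-of-the-envelope sketch (the ~$20M Llama 2 figure is from upthread; the decline factors are only illustrative):

    # Back-of-the-envelope: what various compute-cost declines mean for a ~$20M run.
    llama2_cost = 20_000_000  # USD, rough figure cited upthread

    for drop in (10, 50, 1_000, 10_000):
        print(f"{drop:>6}x cheaper -> ~${llama2_cost / drop:,.0f} per from-scratch run")

    # A 10-50x drop still leaves it at $400k-$2M per run; you'd need roughly a
    # 10,000x drop before the ~$2k hobbyist scenario becomes realistic.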

My prediction is that training the latest model will continue to cost millions to tens of millions for a long time, and these costs may even increase dramatically if significantly more powerful models require proportional increase in training compute.

Unless of course we have some insane algorithmic breakthrough where we find an AI algorithm that blows Llama 2 out of the water for a small fraction of the compute.


What compute was available 20 years ago that would be the equivalent of an H100?


Fair point. I was thinking about compute in general. It could be that AI-specific compute cheapens more dramatically than CPU compute.


True, but there is a really difficult trade-off here between a big tech monopoly and decentralised open-source anarchy. I say this as a Linux-using FOSS lover, but we need to open this up as much as possible without allowing superpowers to escape into the wrong hands. I guess strong regulation is the key, but I'm just handwaving here really. I don't know how the trade-off should be made. I see great dangers in hyper-powerful tech being in the hands of the few or indeed of everyone.

By the way, I'm not sure how easy it will be to stop bad actors, since the barriers to entry for developing a malicious AI tool are exponentially lower than for, say, developing a nuke.


> without allowing superpowers to escape into the wrong hands

The wrong hands will have the same access to whatever "superpowers" AI gives regardless of what regulations are or are not put in place. Regulations can't and won't stop potential bad actors with state-level resources, like China, from using any technology they decide they want to use. So trying to regulate on that basis is a fool's errand.

The real question is, what will put the good actors in a better position to fight the bad actors if it ever comes to that: a big tech monopoly or decentralized open source anarchy? The answer should be obvious. No monopoly is going to out-innovate decentralized open source.

> I'm not sure how easy it will be to stop bad actors since barriers to entry are exponentially lower to developing a malicious AI tool than, say, developing a nuke.

Since some bad actors already have nukes, the answer to this should be obvious too: it's what I said above about the wrong hands getting access to technology.


China, Iran, North Korea, the US, Israel, Europe aren't bad actors. The bad actors don't have state-level resources.


> China, Iran, North Korea, the US, Israel, Europe aren't bad actors

On a very good day, as many as three of those might simultaneously not be bad actors.

They aren't bad actors whose access to AI technology is likely to be meaningfully impacted by regulation (but, for certain of the non-US ones, that hasn't stopped the US from trying before), but that's a different issue.


> China, Iran, North Korea, the US, Israel, Europe aren't bad actors.

Seriously? You don't think China, Iran, and North Korea are bad actors? What planet are you on?


Well, it depends on your point of view.

But I don't see China or North Korea firing nukes or even blowing up western buildings. They are limited by the threat of response.

A rogue wacko in his basement can make Kim Jong Un look like Theodore Roosevelt.


The terms you’re looking for are “constrained” and “unconstrained” actors. The Chinese and North Korean governments are state-level actors, constrained by their internal institutions and power dynamics. A rogue wacko is unconstrained - they do whatever they want with no limitations beyond their own mind and individual capabilities.

Examples:

In the Iran Hostage Crisis you had a constrained actor (Iranian government) making somewhat rational choices to use hostage taking as a negotiation tactic.

In the Oklahoma City Bombing, you had unconstrained actors (Timothy McVeigh and Terry Nichols) blowing up a building with a vehicle borne improvised explosive device for personal reasons.


We do see North Korea, Russia and Iran funding cyberattacks (e.g. major ransomware operations) and supplying weapons to various external groups. If/when we get to a stage where a wacko in his basement possessing some AI tool could be dangerous, we can expect state-level bad actors to make and deliver such tools to any wacko they consider likely to use them against the West, i.e. us.


Interesting point. Perhaps you're right. I hope so, as someone who otherwise believes in the FOSS ideal.


The fallacy you're falling for is that this is "hyper-powerful tech."

All of the AI danger propaganda being spread (see [1], for example) has the purpose of regulatory capture. You could have said all the same things about PageRank if it had come out in 2020. A malicious AI tool is harder to assemble than straight up cracking. The people who can do it are highly-trained professional criminals taking in millions of dollars. Those people aren't going to be stopped because the source is closed. (I'm thinking of that criminal enterprise based in Israel that could manipulate elections, blackmail any politician anywhere in the world... etc. They were using ML tools two years ago to do this.)

The ML tools are already in the wrong hands. The already powerful are trying to create a "moat" for themselves. We need these models and weights to spread far and wide because the people who can't run them will become the have-nots.

[1] https://news.ycombinator.com/item?id=38117930


Let's assume that these LLMs are very useful and provide boosts to their users. It follows that anyone not leveraging them where applicable would be at an economic disadvantage. Without open-source models, everyone would have to pay a Google or Microsoft tax for the pleasure of using a service which could not exist without our work. All code, writing, art, and data would have to be continually sent to their servers by anyone wanting to leverage their tools. You might use a FOSS editor or shell, but you are still sending all your data to Microsoft servers.

The ones in control of the models also control what sentences are sanctioned, which becomes a bigger problem the more widely LLMs are used. To add insult to injury, while we are not allowed private use of the models, government and ad-tech surveillance capabilities will skyrocket.

Do you see the problem here? The capabilities of open-source models are nowhere near high enough to justify such a cost, now or anytime soon.

And it won't end there. As the march of progress continues, we will see the AI doom crowd agitate for tighter surveillance of money flows, limits on private compute, bandwidth limits to homes, tracking of what programs we run on our computers, restrictions on who is allowed to read the latest semiconductor research, and on and on.


> but we need to open this up as much as possible without allowing superpowers to escape into the wrong hands

There are no superpowers, and the wrong hands are the ones least affected by any effort at restricting distribution through “strong regulation”.


I completely agree. AI naturally lends itself to monopoly formation, and stifled competition in the space is something that we as consumers (and stakeholders of human progress) should be wary of. I wrote down my thoughts in a brief blog post and I'd love your take on it.

https://radiantai.com/blog/building-resilience-into-ai


I disagree entirely. We shouldn't be hoping for a complex ecosystem of small, shady companies nobody ever heard of, like happened with ad networks [1]. Or worse, a scam-filled ecosystem like cryptocurrency. Or worse, an underground ecosystem like the criminals who share cracked passwords and credit card numbers.

Big companies are easier to regulate.

[1] https://lumapartners.com/lumascapes/


> Big companies are easier to regulate.

But the problem isn't regulating the big companies, or the smaller companies, or underground entities. The problem is state-level adversaries like China who might misuse a technology, whether it's AI or anything else. Such adversaries can't be regulated by laws or executive orders or UN declarations; they have proven that many times in the past. The only way to control them is to have sufficient counter-capability against whatever assets they have. And government regulation is a terrible way to try to achieve that goal.


State-level actors are one problem. People defrauding the elderly are a different problem. Ransomware is a third problem. They are all problems.

The idea that there's only one important problem is a fallacy.


> The idea that there's only one important problem is a fallacy.

I have made no such claim.

The people advocating for regulating AI are claiming it will solve all the relevant problems--i.e., that it will prevent AI from doing great harm. So pointing out a problem that the regulations will not solve is refuting the claims the advocates of regulation are making. That was my point.


That's surprising and seems like an overreach. Whatever those advocates claimed, it doesn't seem very relevant to deciding whether or not a particular regulation is a good idea.


> Whatever those advocates claimed, it doesn't seem very relevant to deciding whether or not a particular regulation is a good idea.

I don't see why not. The whole point of regulations is to regulate, i.e., to keep the regulated activity within some particular bounds. If the regulation won't accomplish that, then it is pointless. Unless, of course, the actual purpose of the regulation is not the same as the purpose that is publicly stated--which is exactly what happens with regulatory capture.


One could also try to be friendly towards strangers, and not have them be their adversaries in the first place.


> One could also try to be friendly towards strangers, and not have them be their adversaries in the first place.

We have tried this with China, going back to Nixon opening up trade relations in the early 1970s. It hasn't helped.


Really? You think Amazon, Google, Apple, Facebook, etc. are well regulated?

The US has shown time and time again its complete incompetence when it comes to meaningful regulation of large companies.


I still don't get what the plan is to stop bad actors from developing bad AI.

A bunch of good actors agreeing not to do bad things won't help it.


Right, this is existing players (OpenAI, Google, Microsoft, FB, etc) protecting their moat.

AI can't be "paused". Sometimes I see the question "should we put a pause on AI development?"

It doesn't mean anything. Some countries like China may say they're on board with pausing it but would they actually do so? Or just sign on and allow their companies to get an edge by not enforcing a pause.

Same thing with the existing AI companies.


Yes and no. AI is no different than proliferating nuclear weapons or deciding to burn all the fossil fuels -- on an individual competitive level, it makes sense to do this more and more to remain competitive. On a systems-whole level it leads to catastrophe. The whole tragedy of the commons thing, or, more recently described, the Moloch problem.

https://www.lesswrong.com/posts/TxcRbCYHaeL59aY7E/meditation...

Yann LeCun is one of the loudest AI open-source proponents right now (which of course jibes with Meta's very deliberate open-source stab at OpenAI). And when you listen to smart guys like him talk, you realize that even he doesn't really grasp the problem (or if he does, he pretends not to).

https://x.com/ylecun/status/1718764953534939162?s=20


It is not at all true that "AI is no different than proliferating nuclear weapons". The project manager for the Nuclear Information Project said (https://www.vox.com/future-perfect/2023/7/3/23779794/artific...) :

"""

But what we are seeing too often is a calorie-free media panic where prominent individuals — including scientists and experts we deeply admire — keep showing up in our push alerts because they vaguely liken AI to nuclear weapons or the future risk from misaligned AI to pandemics. Even if their concerns are accurate in the medium to long term, getting addicted to the news cycle in the service of prudent risk management gets counterproductive very quickly.

## AI and nuclear weapons are not the same

From ChatGPT to the proliferation of increasingly realistic AI-generated images, there’s little doubt that machine learning is progressing rapidly. Yet there’s often a striking lack of understanding about what exactly is happening. This curious blend of keen interest and vague comprehension has fueled a torrent of chattering-class clickbait, teeming with muddled analogies. Take, for instance, the pervasive comparison likening AI to nuclear weapons — a trope that continues to sweep through media outlets and congressional chambers alike.

While AI and nuclear weapons are both capable of ushering in consequential change, they remain fundamentally distinct. Nuclear weapons are a specific class of technology developed for destruction on a massive scale, and — despite some ill-fated and short-lived Cold War attempts to use nuclear weapons for peaceful construction — they have no utility other than causing (or threatening to cause) destruction. Moreover, any potential use of nuclear weapons lies entirely in the hands of nation-states. In contrast, AI covers a vast field ranging from social media algorithms to national security to advanced medical diagnostics. It can be employed by both governments and private citizens with relative ease.

"""

Let's stop contributing to this "calorie-free media panic" with such specious analogies.


Furthermore, there is little or no defense against a full scale nuclear attack, but a benevolent AI should be sufficient defense against a hostile AI.

I think the true fear is that in an AI age, humans are not "useful" and the market and economy will look very different. With AI growing our food, clothing us, building us houses, and entertaining us, humans don't really have anything to do all day.


In what other sphere of tech does that apply?

"Hackers aren't a problem because we have cybersecurity engineers". And yet somehow entire enterprises and governments are occasionally taken down.

What prevents issues in redteam/blueteam is having teams invested in the survivability of the people their organization is working for. That breaks down a bit when all it takes is one biomedical researcher whose wife just left him to have an AI help him craft a society ending infectious agent. Force multipliers are somewhat tempered when put in the hands of governments but not so much with individuals. Which is why people in some countries are allowed to have firearms and in some countries are not, but in no countries are individuals allowed to legally possess or manufacture WMDs. Because if everyone can have equal and easy access to WMDs, advanced civilization ends.


I mean, hackers aren’t such a problem that we restrict developing new internet apps to only licensed professionals by executive order, so that kinda proves the parent poster’s point?


The difference is the scale at which a hacker can cause damage. A hacker can ruin a lot of stuff but is unlikely to kill a billion or two people if he succeeds at a hack.

With superintelligent AI you likely have to have every developed use and every end user get it right, air tight, every time, forever.


Yes, but the AI that is watching what every biomedical researcher is doing will alert the AI counselors and support robots, and they will take the flawed human into care. Perhaps pair them up with a nice new wife.

Can you imagine how much harder it would be to protect against hackers if the only cybersecurity engineers were employed by the government?


>but a benevolent AI should be sufficient defense against a hostile AI.

What on earth could this possibly mean in practice? Two elephants fighting is not at all good for the mice below.


Better than one elephant who hates mice.


And the best path to a benevolent AI is to do what? The difficulty here is that making an AGI benevolent is harder than making an AGI with unpredictable moral values.

Do we have reason to believe that giving the ingredients of AGI out to the general public accelerates safety research faster than capabilities research?


Color me surprised that the project manager for the Nuclear Information Project is in fact a subject matter expert for nuclear power and not AGI x-risk. Why would they be working on nuclear information if they didn't think it the most important thing?


If you talk to the people on the bleeding edge of AI research instead of nuke-heads (I tend to be kinda deep into both communities), you'll get a better picture that, yeah, a lot of people who work on AI really do think that AI is like nukes in the scale of force multiplication it will be capable of in the near to medium future, and may well vastly exceed nukes in this regard. Even in the "good" scenarios you're looking at a future where people with relatively small resources will have access to information that would create disruptive offensive capabilities, be it biological or technological or whatever. In worse scenarios, people aren't even in the picture any more, the AIs are just working with or fighting each other and we are in the way.


I’m pretty sure the communities you are in are not actual research communities but some hype-alarmist bullshit communities, since as an ML researcher absolutely zero of my peers think the things you say.


You have no clue what communities I’m in, other than that it affirms your worldview to assume they must be irrelevant. To be fair, I’m not telling you anything about myself, so you don’t really have to take my word for it. And I don’t care enough about you to explain in detail.

Though with a tiny bit of Googling you’ll be able to find several Turing Award winners who are saying exactly what I’m saying. In public. Loudly.


Why does everybody come up with the 'nuclear weapons' comparison, when there is a much more appropriate one - encryption, specifically public key cryptography? Way back in the 90s, when Phil Zimmermann released PGP, the US government raised hell to keep it from proliferating. Would you rather live in a world where strong encryption for ordinary citizens was illegal?


Because encryption is not an inherently dangerous thing. A superintelligent AI is.

It’s no different than inviting an advanced alien species to visit. Will it go well? Sure hope so, because if they don’t want it to go well it won’t be our planet any more


Current AI (the one that's getting regulated, e.g. LLMs and diffusion models) lacks any sort of individual drive or initiative, so all the danger it represents is that of a powerful tool wielded by someone.

People who are saying AI is dangerous are saying people are dangerous when empowered with AI, and that's why only the right people should have access to it (who presumably are the ones lobbying for legislation right now).


It's quite a bit different. Access to weapons-grade plutonium is inherently scarce. The basic techniques for producing a transformer architecture to emulate human-level text and image generation are out in the open. The only scarce resources right now preventing anyone from reproducing the research themselves from scratch are the data and compute required to do it. But data and compute aren't plutonium. They aren't inherently scarce. Unless we shut down all advances in electronics and communications, period, shutting down AI research only stops it until data and compute are sufficiently abundant that anyone can do what currently only OpenAI and a few pre-existing giants can do.

What does that buy us? An extra decade?

I don't know where this leaves us. If you're in the MIRI camp believing AI has to lead to runaway intelligence explosion to unfathomable godlike abilities, I don't see a lot of hope. If you believe that is inevitable, then as far as I'm concerned, it's truly inevitable. First, because I think formally provable alignment of an arbitrary software system with "human values," however nebulously you might define that, is fundamentally impossible, but even if it were possible, it's also fundamentally impossible to guarantee in perpetuity that all implementations of a system will forever adhere to your formal proof methods. For 50 years, we haven't even been able to get developers to consistently use strnlen. As far as I can tell, if sufficiently advanced AI can take over its light cone and extinguish all value from the universe, or whatever they're up to now on the worry scale, then it will do so.


I guess I should add, because so few people do, this is what I believe, but it's entirely possible I'm wrong, so by all means, MIRI, keep trying. If you'd asked anyone in the world except three men before 1975 if public key cryptography was possible, they'd have said no, but here we are. Wow me with your math.


> AI is no different than proliferating nuclear weapons

I mean, once the discussion goes THIS far off the rails of reality, where do we go from here?


Can someone outline how AI could actually harm us directly? I don’t believe for a second sci-fi novel nonsense about self-replicating robots that we can’t unplug. My Roomba can’t even do its very simple task without getting caught on the rug. I don’t know of any complicated computing cluster or machine that exists that wouldn’t implode without human intervention on an almost daily level.

If we are talking about AI stoking human fears and weaknesses to make them do awful things, then ok I can see that and am afraid we have been there for some time with our algorithms and AI journalism.


> Can someone outline how AI could actually harm us directly?

At best, maybe it adds a new level of sophistication to phishing attacks. That's all I can think of. Terminators walking the streets murdering grandma? I just don't see it.

What I think is most likely is a handful of companies trying to sell enterprise on ML, which has been going on since forever. YouTubers making even funnier "Presidents discuss anime" vids, and 4chan doing what 4chan does but faster.


You start by saying “show me examples of” and finish by saying, yeah, “this is already a problem.” Not sure what point you’re trying to make, but I think you should also consider lab leaks, in the sense of weaponized AI escaping or being used “off label” in ways that yield novel types of risk. Just because you cannot imagine future tech at present doesn’t indicate much.


Consider an AI as a personal assistant. The AI is in charge of filtering and sorting your email. It has as a priority to make your life more efficient. It decides that your life is more efficient if you don't see emails that upset you, so it deletes them. Now consider that you are in charge of something very important.

It doesn't take a whole lot of imagination to come up with scenarios like these.


I don't need to know how a chess computer will beat me to know that it will beat me. If the only way you will entertain x-risk is to have a specific scenario described to you that you personally find plausible, you will never see any risk coming that isn't just a variation of what you are already familiar with. Do not constrain yourself to considering only that which can be thought up by your limited imagination.


> Do not constrain yourself to considering only that which can be thought up by your limited imagination.

Don't let your limited imagination constrain your ability to live in fear of what could be. Is that what you mean? So it's no longer sufficient to live in fear of everything, now you need to live in fear even when you can't think of anything to be afraid of. No thanks.


Instead of taking the most obtuse reading of my point, how about you try to engage intelligently? There are some manner of unmaterialized risk that we can anticipate through analysis and reasonable extrapolation. When that risk has high negative utility, we should rationally engage with its possibility and consider ways to mitigate it.


Why not? It has already been shown that AI can be (mis)used to identify good candidates for chemical weapons. [1] Next in the pipeline is obviously some religious nut (who would not otherwise have the capability) using it to design a virus which doesn't set off alarms at the gene synthesis / custom construct companies, and then learning to transfect it.

More banally, state actors can already use open source models to efficiently create misinformation. It took what, 60,000 votes to swing the US election in 2016? Imagine what astroturfing can be done with 100x the labor thanks to LLMs.

[1] dx.doi.org/10.1038/s42256-022-00465-9


> Next in the pipeline is obviously some religious nut (who would not otherwise have the capability)

So you're saying that:

1. the religious nut would not find the same information on Google or in books

2. if someone is motivated enough to commit such an act, the ease of use of AI vs. web search would make a difference

Has anyone checked how many biology students can prepare dangerous substances with just what they learned in school?

Have we removed the sites disseminating dangerous information off the internet first? What is to stop someone from training a model on such data anytime they want?


1. The religious nut doesn't have the knowledge or the skill sets right now, but AI might enable them.

2. Accessibility of information makes a huge difference. Prior to 2020 people rarely stole Kias or catalytic converters. When knowledge of how to do this (and for catalytic converters, knowledge of their resale value) became available (i.e. trending on TikTok), thefts became frequent. The only barrier which disappeared from 2019 to 2021 was that the information became very easily accessible.

Your last two questions are not counterarguments, since AIs are already outperforming the median biology student, and obviously removing sites from the internet is not feasible. Easier to stop foundation model development than to censor the internet.

> What is to stop someone from training a model on such data anytime they want?

Present proposals are to limit GPU access and compute for training runs. Data centers are kind of like nuclear enrichment facilities in that they are hard to hide, require large numbers of dual-use components that are possible to regulate (centrifuges vs. GPUs), and they have large power requirements which make them show up on aerial imaging.


What happens if someone develops a highly effective distributed training algorithm permitting a bunch of people with gaming PCs and fast broadband to train foundation models in a manner akin to Folding@Home?

If that happened open efforts could marshal tens or hundreds of thousands of GPUs.

Right now the barrier is that training requires too much synchronization bandwidth between compute nodes, but I’m not aware of any hard mathematical reason there couldn’t be an algorithm that does not have to sync so much. Even if it were less efficient this could be overcome by the sheer number of nodes you could marshal.
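For what it's worth, the low-communication idea already exists in embryonic form in the literature (local SGD / federated averaging): nodes take many cheap local steps and only occasionally average parameters. A toy sketch of the concept, purely illustrative and nowhere near LLM scale:

    # Toy local-SGD sketch: simulated volunteer nodes fit y = 3x + 1 on their own
    # data shards and only synchronize (by averaging parameters) once per round.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w, true_b = 3.0, 1.0

    def make_shard(n):
        x = rng.normal(size=n)
        return x, true_w * x + true_b + 0.1 * rng.normal(size=n)

    n_nodes, lr, local_steps, rounds = 8, 0.05, 50, 20
    shards = [make_shard(200) for _ in range(n_nodes)]
    global_params = np.zeros(2)  # [w, b]

    for _ in range(rounds):
        local_params = []
        for x, y in shards:
            w, b = global_params          # start from the last synced parameters
            for _ in range(local_steps):  # many local steps, no network traffic
                err = (w * x + b) - y
                w -= lr * 2 * np.mean(err * x)
                b -= lr * 2 * np.mean(err)
            local_params.append((w, b))
        global_params = np.mean(local_params, axis=0)  # one cheap sync per round

    print("recovered w, b:", global_params)  # lands near (3.0, 1.0)

Whether anything like this can be made to work for models with billions of parameters over consumer broadband is exactly the open question.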


Is that a serious argument against an AI pause? There are potential scenarios in which regulating AI is challenging, so it isn't worth doing? Why don't we stop regulating nuclear material while we're at it?

In my mind the existential risks make regulation of large training runs worth it. Should distributed training runs become an issue we can figure out a way to inspect them, too.

To respond to the specific hypothetical: if that scenario happens, it will presumably be by either a botnet, a large group of wealthy hobbyists, or a corporation or nation state intent on circumventing the pause. Botnets have been dismantled before, and large groups of wealthy hobbyists tend to be interested in self-preservation (at least more so than individuals). Corporate and state actors defecting on international treaties can be penalized via standard mechanisms.


You are talking about some pretty heavy handed authoritarian stuff to ban math on the basis of hypothetical risks. The nuclear analogy isn’t applicable because we all know that a-bombs really work. There is no proof of any kind of outsized risk from real world AI beyond other types of computing that can be used for negative purposes like encryption or cryptocurrency.

Here’s a legit question: you say pause. Pause until what? What is the go condition? You can never prove an unbounded negative like “AI will never ever become dangerous” so I would think there is no go condition anyone could agree on.

… which means people eventually just ignore the pause when they get tired of it and the hysteria dies out. Why bother then?


>It took what, 60,000 votes to swing the US election in 2016?

This is a conspiracy theory that gained popularity around the election, but in the first impeachment hearings, those making allegations regarding foreign interference failed to produce any evidence whatsoever of a real targeted campaign in collusion with the Russian government.


Eh, he's not wrong there -- our elections are subject to chaos theory at this point, more than the rule of law. Sensitive dependence on both initial conditions and invisible thresholds. The Russian "butterfly effect" in 2016 was very real. Even if it wasn't enough on its own to tip the balance in Trump's favor, they were very clear that Trump was their candidate of choice, and they very clearly took action in accordance with that. Neither of these statements is up for the slightest debate at this point.

However, the possibility of foreign election interference, real or imagined, is not a valid reason to hold back progress on AI.


Essentially you are advocating against information being more efficiently available. Come on.

It’s true we are fucked if bioweapons become easy to make, but that is not a question of ”AI”.


The only thing keeping bioweapons from being easy to make is that the information is not easily available.

> Essentially you are advocating against information being more efficiently available.

Yes. Some kinds of information should be kept obscure, even if it is theoretically possible for an intelligent individual with access to the world's scientific literature to rediscover them. The really obvious case for this is in regards to the proliferation of WMDs.

For nuclear weapons information is not the barrier to manufacture: we can regulate and track uranium, and enrichment is thought to require industrial scale processes. But the precursors for biological weapons are unregulated and widely available, so we need to gatekeep the relevant skills and knowledge.

I'm sure you will agree with me that if information on how to make a WMD becomes even a few orders of magnitude as accessible as information on how to steal a Kia or how to steal a catalytic converter, then we will have lost.

My argument is that a truly intelligent AI without safeguards or ethics would make bioweapons accessible to the public, and we would be fucked.


AI isn't anything like nuclear weapons. One possible analogy I can draw is how scientists generally agreed to hold off on attempting to clone a human. Then that one guy in China did, and everyone got on his case so badly he hasn't done it again (that we know of). In the best case, I could see AI regulation taking that form, as in: release a breakthrough, get derided so much you leave the field. God, what a sad world to live in.


> Right, this is existing players (OpenAI, Google, Microsoft, FB, etc) protecting their moat.

This is what I believe. The only moat they have is their lobbying power and bank account. They want it tightly controlled: you can do it in your own way, just as long as it's done just how they say.

The whole regulating-AI thing is a farce; only the good guys follow the laws and regulations. Do you think the bad guys are just going to throw up their hands and say "oh well, an agreement was made, I guess we'll just go home"? The only things blocking the way are hardware performance and data. Hardware is always getting faster and there's tons of new data created every minute; the genie is out of the bottle.


The Chinese can say the same of us dude, don’t be so blindly Eurocentric about it.


I hate to say it, but I don't think it can be stopped. Stopping "bad actors" from using guns or distributing drugs has a sliver of hope, since they do these things to the general public and so can physically be intercepted. But even then enforcement has been largely unsuccessful.

Stopping people from using AI which is something that they can trivially download and do on their own private compute is simply infeasible. We would need to lock all computing behind government control to have any hope. That is clearly to me too high a price to pay. Even then current hardware will be amassed that doesn't have this lock-down.

So I think the right approach here is to not worry about regulating the AI to prevent it from doing bad things, we should just regulate the bad things themselves.

However I do think there is also room for some light regulation around certification. If we can make meaningful progress on AI Alignment and safety it may make sense to require some sort of "AI license". But this is just to avoid naive people making mistakes, it won't stop malicious people from causing intentional harm.


> A bunch of good actors agreeing not to do bad things won't help it.

And that's true whichever way you cut it. Whether those "good" actors are everybody or a few selected for privilege. We have no basis on which to trust big technology companies more than anybody else. Indeed quite the opposite if the past 10 years are anything to go by.

It is such a god-awful shame that corporations have been so disgracefully behaved that we now face this bind. But at least we know what we'll be getting into if we start handing them prefects badges - organised and "safe" abuse by the few as opposed to risky chaos amongst the many.

A saner solution might be a moratorium on existing big tech and media companies developing AI, while granting licenses to startups with the proviso that they are barred from acquisition for 10 years.


Another sane solution would be a law stating that all trained models, source code, and training data must be made available to anyone who asks, at no cost. That removes AI as the secret sauce of any commercial entity while still leaving the door open to research and advancement. Propose that and watch all these big companies come back with "whoa whoa whoa, that's not what we meant. What we meant was we have it but no one else can have it".


It will make it harder for "bad" actors to develop "bad AI" if all those "good" actors never let anyone access their research, keep it entirely private, and only lease access to the final products. "Bad" actors will have to do everything on their own, and "good" actors will try to hire every good scientist for themselves, and possibly threaten anyone who considers working with "bad" actors or try to make this cooperation harder.

I'm not supporting this, of course. That's just what "good" actors seem to do.


Nor do I get what the plan is to stop good but overoptimistic actors from developing AI that spirals out of their control.

I feel these kinds of statements by Mozilla reflect exactly the lack of caution that may end us.


I don’t know if you were intending to do this but your argument is almost ripped verbatim from advocates of open gun ownership.


We drop nukes on China when they buy too many GPUs, obviously.


There is none. Just go on 4chan and see all kinds of bad AI already in use.


"bad" AI? you mean.... not overcensored AI? the "mainstream" offerings are utterly restricted, you can't even generate porn with it. channers play an important part in driving innovation and keeping the thought police in check...


[flagged]


Aside from simulated child porn, which has already been proven illegal... I think. What exactly is the problem? If you're asking big tech to be the morality police, it's not going to work. Actually, I would love to see them attempt this: a unified moral code for all AI globally. If they pull that off I'd ask them for a unified global religion next.


> has already been proven illegal

On the contrary. Only depictions of actual identifiable persons under the age of 18 are illegal in the U.S., according to justice.gov [0]

[0] https://www.justice.gov/archives/jm/criminal-resource-manual...


I think fake nudes could also be illegal (because you're using someone's image without their consent).


All of these can be done without AI.


I'd presume that HN readers of all people would be able to recognize the inherent power in increasing access, reducing friction, and powerful force multipliers like AI.

Your statement reads exactly the same as "people can be killed without firearms". It's true, but firearms obviously help and cause measurable harm compared to alternatives because it makes killing (whether homicide or suicide) much easier.


You mean the art and voice threads? How will we survive!


Deepfake porn and AI voice extortion are real harms from these technologies that are having a meaningful impact on people’s lives already.

I don’t think they’re society-ending, but that doesn’t mean we shouldn’t acknowledge them.


Thorough acknowledgment means they’re now inevitable and that we’ll naturally revise how we interpret media.

We’ve done that nearly every time there’s been a shift in media, from photography to telephones to home video to digital photo and video editing, and it just sort of happens organically and automatically as people learn what they can now do and what they can now trust.

Panicked, doomed attempts to control that sort of change are just divisive and futile.

That’s not to say that there won’t be necessary laws and regulations around near-term “AI” tech, but that deep fakes are well into the irrelevant noise of what people will learn to navigate on their own.


[flagged]


Excitement feels kinda weird here, but if that's really your thing, 4chan has all the AI generated celebrity deepfakes and child porn your heart desires.


Personal vice is tragic but has limited consequences. I think the regulatory discussion is valid, irrespective of the "casino-city race to the bottom" that will occur globally.


[flagged]


They asked for the terrible stuff found on 4chan. I don't know how to interpret that more charitably. If you're expecting great art, look elsewhere.


This seems like it should be flagged


If you think there is even a 5% chance that super-human level artificial general intelligence could be catastrophic for humanity, then the recipe for how to make AGI is of itself an infohazard far worse than the recipe for methamphetamines/nuclear weapons/etc, and precedent shows that we do in fact lock down access to the latter as a matter of public safety.

No, GPT-4 is not AGI and is not going to spell the end of the human race, but neither is pseudoephedrine itself methamphetamine, and yet we regulate its access. Not as a matter of protecting corporate profits, but for public safety.

You'll need to convince me first that there is in fact no public safety hazard from forcing unrestricted access to the ingredients in this recipe. Do I trust OpenAI to make all the morally right choices here? No, but I think their incentives are in fact more aligned with the public good than are the lowest common denominator of the general public.


> but neither is pseudoephedrine itself methamphetamine, and yet we regulate its access

I don't think it's a correct argument.

I believe it's not a secret how methamphetamine is made (I don't know it myself, and I'm too lazy to research it for the sake of argument), and it's known that pseudoephedrine is a precursor chemical. Thus, the regulation.

No one - I believe - knows how to build an AGI. There is no recipe. It is unknown whether transformer models, deep learning or something else is a component, and whether that's even a correct path or a dead end. What is known is that neither of those is an AGI, and there's no known way to make them AGI. Thus, I'd say the comparison is not correct.


First of all, pseudoephedrine is an analogy. But with AGI there is literally no historical event to compare it to as a perfect analogy, including the ones I gave above.

> it's not a secret how methamphetamine is made

I didn't say it was a secret. But try sharing the recipe to methamphetamine virtually anywhere online and see how long it stays up; we do in fact try to at least make it moderately difficult to find this information. Perfect is the enemy of good here.

> No one - I believe - knows how to build an AGI.

The problem is that if we wait until someone does know how to build an AGI, then the game is up. We do not know if the weights behind GPT-4 will be a critical ingredient. If you make the ingredients common knowledge before you know they were in fact The Ingredients, you've let the cat out of the bag and there's no retrieving it.


How about we just ban all new research, just in case it turns into something bad? Because - I’m sorry - but what you’re saying sounds to me exactly like this, just less generalized.

I’m pretty sure that if someone figures out AGI they’ll think of the potential dangers before publishing. And even after - AGI, once developed, won’t magically hack the world and spread everywhere, or somehow build itself an army of robots to eradicate humanity, or steal elections, or cause whatever harm it’s supposed to cause. It’ll be just another human (probably a slow one), except for not being a human. It’ll be a long time before it has any chance to do something impactful, and we’ll have plenty of time to figure out the details. Just like with drug precursors.

Also, meth is a bad analogy. And it turns out the recipes can be found just about anywhere, I’ve just checked: https://kagi.com/search?q=how+to+make+methamphetamine so I don’t think “see how long it stays online” is valid either.


When you say 'catastrophic for humanity', what exactly do you think would happen? And how would regulating models thwart this? Bad actors who are motivated surely will get around US restrictions. I'd imagine when the internet first came around that there was similar sentiment. "People will have access to information on how to create bombs, we need to regulate this so that doesn't happen!"


When playing against a better chess player than me, the whole point is that I cannot predict what move they will make, and that is how the opponent wins. So you don't need to be able to predict what move the AGI will make, only that no matter what move you make it will do better.

But super off-hand idea if I'm trying to be creative. The AGI formulates a chemical that kills humans 90 days after inhalation, hires a chemical lab to synthesize it and forward it to municipalities across the world, and convinces them it's a standard treatment that the WHO has mandated be introduced.

> And how would regulating models thwart this?

I don't think it would. The OP article is about regulation that increases the access to the ingredients for AI, and I'm simply unconvinced that is a recipe for increasing AI safety.


> could be catastrophic for humanity,

I do not. How does superhuman intelligence, on its own, represent any sort of risk for "humanity"?

> then the recipe for how to make AGI is of itself an infohazard [...] nuclear weapons

I know how to make nuclear weapons; however, I cannot enrich the fuel enough to actually produce a working version.

> and yet we regulate its access

Does that actually achieve what it claims to achieve?

> Do I trust OpenAI to make all the morally right choices here?

OpenAI has a language model. They do not have AGI or anything approaching AGI.

> No, but I think their incentives are in fact more aligned with the public good

We could debate that, but for your assertions to hold any water, we'd have to agree that they're incapable of making mistakes as well. Far easier to skip the lofty debate and recognize the reality of the world we live in.


> How does superhuman intelligence, on its own, represent any sort of risk for "humanity"?

It does not; I was speaking to AGI. But assuming you were referring to AGI as well, you don't have to think very creatively to consider scenarios where it would be harmful.

If you have an agent that can counter your every move with its own superior move, and that agent wants something – anything – differently from what you want, then who wins? Maybe it wants the money in your bank account, maybe it wants to use the atoms in your body to make more graphics cards to reproduce itself.

Think about playing a game of chess against the strongest AI opponent. No matter which move you are considering playing, your opponent has already planned 10 steps ahead and will make a better move than you. Now extrapolate outside the chess board and into the realm where you can use the internet to buy/trade stocks, attack national infrastructure, send custom orders to chemists to follow whatever directions you want, etc.


This is such a stupid take. Even if there was a 5% chance of super-human level AGI destroying the world, we're not going to reduce that risk by restricting general computation. If it is possible to create AGI, it will happen, regardless of what the law stipulates.

This is unlike pseudoephedrine or nuclear weapons in that literally anyone with a computer could potentially create it.


Yes, it is in fact going to happen no matter what. So how do we ensure that the very first super-human AGI is even remotely interested in sharing the planet with the human race?

It is not by ensuring that literally anyone with a computer has an equal chance at creating it.


I still don't feel any sense of urgency to create or regulate AI safety protocols. What we call AI is just software with tuned parameters. You input data, it does some computation on chips, and it spits out data. That is all there is to it. My sense is that no harm can come from it on its own, but some harm can be inflicted by the people who use it.


I don't think there is any possibility of the current "AI" suddenly doing anything damaging on its own, or anything at all on its own. The worst that I can see happening, which is probably already happening, is people using it to assist them with various crimes, such as social engineering using convincing generated voices and photos, or even just fake information.


To deal with such cases, we already have existing laws. My qualm with AI regulation is: why do we need separate AI safety agencies and laws?


While I agree in a general sense, it might make sense to regulate the tools themselves.

An analogy is gun control laws. Murder is a crime, whether it happens with a gun or without, so in theory there would be no need to regulate guns ("guns don't kill people etc."). But in most countries in the world we still regulate them.

Maybe it makes sense to regulate AI similarly? Requiring that some guardrails are built in to prevent it being used to generate fake information or information that helps to commit crimes. Making sure that it does not leak private information (and being able to prove this somehow). Regulating if and how it can be used as a therapist.

Although I wonder if it isn't too soon still for such regulation.


Yeah - this stupidity with regulating AI goes hand in hand with regulating cryptography - illegal math is simply not feasible.


Illegal chemistry is also not feasible, but we can restrict the availability of certain chemicals to try to prevent people from doing things like making bombs or turning their neighborhood radioactive while messing around. Of course some people will find a way, but that doesn't mean the door has to be left completely open for anyone with a credit card to be able to order large amounts of high-purity H2O2, for example.

I am not advocating anything on this particular issue; I am pointing out the flaw in your metaphor -- you can't restrict the math, but unless people figure out how to make their own wafer fab, you can at least try to restrict the ability to do it.


But is chemistry illegal? I think anyone can have all the equations they want (like, idk, U238 -> Th234 + alpha particle) - they just can't possess certain actual substances (uranium). So I don't think the comparison is valid.


That's exactly the point I was making. It seems I did a poor job -- it is impossible to make chemistry illegal but it is possible to regulate possession of some chemicals.


> My sense is that no harm can come from it, but some harm can be inflicted by people who use it.

The concern is not that AI is harmful on its own, or that it will unleash a Skynet scenario. It's precisely that it will be abused by humans in ways we can't even predict yet. The same can be said about any technology, but AI is particularly concerning because of its unlimited potential, its extremely fast pace of development, and humanity's lack of preparation on a political, legal and social level to deal with the consequences. So I think you're severely underestimating "some harm" there.


The thing you say the concern is not is exactly my primary concern!


This is some weird nonsense that folks who read too much sci-fi (not realizing how unrealistic it is, given the state of modern society and technology) seem to fantasize about, and that politician talking heads happily repeat, because populism is on the rise again and it's easier to fight imaginary windmills than real-world issues.

I believe a more realistic concern is that some workers will lose their jobs to automation as it becomes capable of running more and more complicated tasks, because it's cheaper to run (or, more likely, contract out) some fancy software that can be taught its job in natural language. And if this scales up, it may require significant socioeconomic changes to address the associated issues.


Statements to the effect that AI will be controlled for safety are pretentious, and they are characteristic only of this early period. The premise of AI danger, in the real sense of the word, has always been rooted in the seemingly accurate logic that it won't be controllable.

If anything, restrictions at any scale only allow the competition to catch up to and surpass it at that scale. Restrict an AI to be polite, and its competitor has a chance to surpass it in the sphere of rudeness. This principle can be applied to any application of AI.


>Statements to the effect that AI will be controlled for safety are pretentious.

and "safety" is such a loaded term these days. What exactly do they mean by safety? Prevention of launching ICBMs or prevention of not using the gender neutral pronoun "they"?


The Executive Order [1] is clearly more focused on the former. Some of the things they mention (I haven't read it all):

* creating chemical/biological/nuclear weapons

* biohazards

* outputs that could threaten critical infrastructure (e.g. energy infrastructure)

* threats to national security

* cyberattacks, e.g. automatic discovery of vulnerabilities/exploits

* software that would "influence real or virtual events" (I'm guessing they mean elections?)

* social engineering

* generating fake news / propaganda

* generating child porn or deep fakes

Note that it's not banning these, but asking US government departments, as well as private companies, AI experts, and academia, for their input on what regulations could/should be. In that sense, this Mozilla letter is a response to the EO.

Also, there's no mention of the use of certain pronouns, nor of an AI-caused apocalypse (not even a mention of paperclips!).

[1] https://www.whitehouse.gov/briefing-room/presidential-action...


I was being a bit extreme, so thanks for the clarifications. Something about this topic brings out a lot of emotion in me.


I think that when they use the term in the abstract, at least when speaking to the public, they mean to borrow the fear of the former in order to apply it to the latter in practice.

But not to worry. For better or worse, in a short time these concerns will seem like they were a foolish delusion. If not because of their questionable morality, then because they were so wildly unrealistic in application.

That's not to say that AI safety won't be a topic for decades to come or longer, simply because it confers political and harassment power, even if it is ineffective in terms of AI in practice.


The fact that we even have to have this discussion is disgusting. It isn't the open source community or ill-equipped developers running single GPUs or handfuls of GPUs that are the risk; it is these exact lobbying companies, and others like them, who run supercomputers comprising thousands of servers/GPUs. They are the risk, for exactly the reasons evidenced by all the previous illegal/immoral things they have done and been fined for (or gotten away with).

We all know that these same companies and others are going to ignore any rules anyway, so open source and academics need to just be left alone to innovate.

Regulate by the risk the processing environment presents, not by the technology running on it.


Translation: we don't have any imagination for how we can use this, so please show us the way, so we can exploit it, wrestle it away from you, and then introduce laws so you get nothing in exchange.


Interesting take.


There is always the option of all-out war on AI, which won't be triggered until it is "almost" or "definitely" too late, and only because all other options have failed. Maybe the aftermath won't be a Terminator- or Matrix-type dystopia; maybe we will win by the simple expedient of setting civilization back to a point where we can't produce AI hardware any longer. Of course, we would need to keep our entire civilization there by sheer stubbornness (you can ask any third-world country for a recipe for eternal non-progress; yes, ask Cuba). But that "mild" scenario, no matter how lovely it might seem to people who want to experience the Middle Ages again, is terrifying to me. Somehow, I'm not very optimistic about our future.


The recent Executive Order is a good example of regulatory capture. It probably locks in a large advantage for the few tech giants who are so far winning the LLM (or AGI) race.

I do wish my government would get tougher on user privacy issues, but that is a very different subject.


I thought the EO was "go research this and give the President advice by a certain date", without any concrete steps yet? The only concrete step is not to use this modern tech for critical infrastructure, the military, social services, and some other government-run systems while the details are being figured out.

More like "we heard some ruckus, so we need to investigate". Of course, this is concerning because who knows what the advice will be with all the corporate lobbying - but I think there is no capture, not yet. Or have I missed something?


This is nice, but meanwhile everyone else is racing to create the torment nexus.

(https://knowyourmeme.com/memes/torment-nexus)


Aside: a more distinctive URL for sharing/future reference might have been wiser. What's with the subdomain, too? Why not the Foundation site? This also has an opennews.org feel to it (which came out of the Foundation).


Might be just me, but I'm getting transient load failures for the page; it might be getting a "hug of death".

https://web.archive.org/web/20231102190919/https://open.mozi...


404 on form signature submissions too


It seems that the signatory form collection is not keeping up - I signed this 24 hours ago and my name still isn't listed. I suspect that once they add the names coming in through the form, this will be a much longer list.


They're curating it. Look at the list - it's almost entirely people who represent well-known institutions.

I don't agree and wouldn't sign anyways.


tbh I think I'm more well known in AI than most of the people currently listed there, so I don't think that's the reason.


Might explain the delay though :)


I think the single most important aspect of AI security is for there to be a right to deinfluence.

There has to be a way to rescind my contributions, whether they were voluntary or not (I still think these companies are dancing with copyright violations), as well as to have ALL DERIVATIVES of those contributions removed that were created by processing the works in question, regardless of form or generation (derivatives of derivatives, etc.).


Mozilla is not involved in AI, nor is it a relevant org in general. It's a for-profit with nonprofit status. Mozilla can't have an opinion on something it is ignorant of, and therefore it doesn't deserve a seat. If they wanted to be a part of this, they needed to invest in it 8 years ago.


Just one signature from Google out of 158 signatures.


To be fair, it seems as if most companies used a single point of contact (SPOC) to sign this.


This is platitudinous.



