
> Except the reason why we haven't all killed each other yet has nothing to do with the risk or cost of killing someone

Most of us don't want to.

Most of those who do, don't know enough to actually do it.

Sometimes such people get into power, and they use new inventions, like the then-new pesticide Zyklon B, to industrialise killing.

Last year an AI found 40k novel chemical agents, and because they're novel, the agencies that would normally stop bad actors from acquiring dangerous substances would generally not notice the problem.

LLMs can read research papers and write code. A sufficiently capable LLM can recreate that chemical discovery AI.

The only reason I'm even willing to list this chain is that the researchers behind that chemical AI have spent much of the intervening time making those agencies aware of the situation, and I expect the agencies to be ready before a future LLM reaches the threshold for reproducing that work.




Everything you say does make sense, except that the people who are able to get the equipment to produce those chemicals, and who have the funding to do something like that, don't really need AI's help here. There are plenty of dangerous chemicals already well known to humanity, and some don't actually require anything regulated to produce "except" complicated and expensive lab equipment.

Again, the difficulty of producing poisons and chemical agents is not what prevents mass murder around the globe.


Complexity and cost are just two of the things that inhibit these attacks.

Three-letter agencies knowing who's buying suspicious quantities of known precursors stops quite a lot of the others.

AI in general reduces cost and complexity; that's kind of the point of having it. (For example, a chemistry degree is expensive in both time and money.) Right now, using an LLM[0] to decide what to get and how to use it is almost certainly more dangerous for the user than for anyone else — but this is a moving target, and the question has to be "how do we delay this capability for as long as possible, and at the same time how do we prepare to defend against the capability when it does arrive?"

[0] I really hope that includes even GPT-4 as it was before the red-teaming efforts to make it not give detailed instructions for how to cause harm.



