> We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai.
I mean, maybe. Sure, LLMs can currently code shockingly well, and LLMs are just programs, so an LLM writing an LLM is def much more likely than a nuke building a nuke. But we don't _know_ if there are limits to what AIs can do. Or, to pick on the word "exponential," we don't know if there are limits to growth rates of AIs improving AIs!
FWIW, I personally think that AIs will be able to invent more powerful/intelligent AIs. But we should be stating this as a conjecture, not a fact - mostly because, in my experience, saying things like this to people just makes them _extremely scared_. And for most people (myself included) extreme fear is a really bad place to try to make decisions from.
> We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.
I wish there was more detail on this. Does Yuval Noah Harari really understand how the FDA works, and is it really an appropriate model for an AI Safety Administration (AISA)? I mean - the FDA is not exactly batting a thousand - and can we afford to miss swings when it comes to AI safety?
Maybe we can. Maybe we can't. Anyone saying we can afford to miss swings is probably selling you something (or their livelihood depends on it). Anyone saying we can't afford to miss swings is probably terrified. I honestly don't know how to possibly feel other than ???!!??
When someone says "we can regulate x, but we must act quickly," it's important to remember that the "we" they are referring to are often so poorly informed on the subject that the average house cat could outperform them if tested on the material, and that they are spoon-fed talking points by the corporate lobbyists who fund their campaigns.
Harari is a scaremonger making bank off promoting the pop-culture "Terminator is bad" caricature of AI.
Still, I don't think that a regulation requiring AIs to reveal themselves when asked would be overstepping.
It would draw a red line between aligned and unaligned AI. It may not even be enforceable, but it would be a revealing characteristic that gives useful information, even when it lies. Especially when it lies.
> > We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai.
> I mean, maybe. Sure, LLMs can currently code shockingly well, and LLMs are just programs, so an LLM writing an LLM is def much more likely than a nuke building a nuke. But we don't _know_ if there are limits to what AIs can do. Or, to pick on the word "exponential," we don't know if there are limits to growth rates of AIs improving AIs!
It seems to me that the current kind of AI can improve along only two axes: better training data, or better algorithms. I don't think this kind of AI can generate better training data for the next generation of itself, just by the nature of how it operates. It (currently) doesn't know truth from falsehood; how is it going to know which training data is better or worse than other data? And I don't think it can improve its algorithm either. It may be able to implement an algorithm that someone else specifies, but I'm not sure that it can write a working algorithm, let alone an improved working algorithm.
So I don't think that self-improving AI is currently on the table.
The issue is that whenever it becomes true, it will likely be way too late to regulate it.
For example, imagine a hypothetical world that doesn't have guns yet, but where we think they are possible. If you wait until one group builds guns first, you're going to be at their mercy, going after their gun-armed soldiers with swords.
But it's not exactly "blatantly incorrect assumptions". It's more wishful thinking. (The "self-improving, exponentially more powerful" AI is currently the only hope for an AGI - except that there isn't currently any realistic hope of a self-improving, exponentially more powerful AI.)
I think it's an assumption based on the fact that we don't seem to be anywhere near the maximum amount of computing capacity that we could be using to train AIs. It might be just throwing more compute at the problem, or it might be that we are nowhere close to the ideal algorithms / model architectures, and AI will aid the process of getting there.
To turn it around a bit, what is your null hypothesis? If it's that AI will not be able to make more powerful AI, why do you believe that?
Given that there are two cases, and smart people disagree about which one we're headed towards, and one could have very dire consequences while the other is no big deal, I think it's reasonable not to assume that the latter is true without exercising a lot of due diligence.
"We can apply more compute power" is something very different from "exponentially self improving". Assuming that "exponentially improving" will show up if given enough compute power is very much not a null hypothesis.
Ok, so there's self improving, where an AI tries to make itself smarter, and then there's just exponential improvement, where an AI does one or more of the following, with or without a human in the loop:
* acquires money through legitimate business and buys more computing power to train a smarter AI.
* acquires money through illegal means and does the same.
* designs a more efficient model architecture or training process that results in a smarter AI.
I don't know how far away from this we are, it's very hard to predict. I do predict that at first we'll see mostly things where a human is directing the AI, but there's selective pressure toward not having a human involved, because someone who just sets up an AI and lets it loose is going to have a thing that reacts faster and can work 24/7. Irresponsible businesses that let their AIs run rampant are going to make more money, and be able to afford better AIs. It's the same story that has played out again and again over history, Moloch (1) pushing us to do the selfish but harmful thing by rewarding those that do it.
Remember that exponential curves seem pretty slow at first, and it can be hard to tell when you are standing on one. It might be that we're still years away from exponential self-improvement, but I think it's worth taking seriously as a possibility. Even with just humans designing AIs, the amount of effort required has become exponentially less as we got more compute, designed better libraries and tools, people published research and copied it and built on it. We can't predict exactly what or when the next step will be, but I think it's just as reasonable to assume that the curve of this progress is exponential as it is that it's logistic.
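To make that concrete, here's a tiny numerical sketch (illustrative only; the growth rate and carrying capacity are made up): on the early part of the curve, an exponential and a logistic process with the same growth rate produce nearly identical numbers, so you can't tell from the data which one you're standing on.

```python
# Illustrative only: with a made-up growth rate r and cap K, an exponential
# curve and a logistic curve are nearly indistinguishable early on.
import math

def exponential(t, r=0.5):
    return math.exp(r * t)

def logistic(t, r=0.5, K=1e6):
    # Standard logistic curve starting at 1 with carrying capacity K.
    return K / (1 + (K - 1) * math.exp(-r * t))

for t in range(0, 21, 5):
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exponential={e:10.1f}  logistic={l:10.1f}  ratio={l / e:.3f}")
```

Only once the logistic curve starts bending toward its cap (whatever the cap turns out to be for AI progress) do the two diverge, which is the "it's always logistic in the end" point made below.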
> I don't know how far away from this we are, it's very hard to predict.
On this point, I am in total agreement with you.
One way to tell might be, what happens if we let an AI try to improve itself? Does it get better, or worse? My money currently is on "worse". Well, how much worse? How far away are we from it being able to improve itself? I think we are an enormous distance away, almost an immeasurable distance, but I could be wrong.
> I think it's just as reasonable to assume that the curve of this progress is exponential as it is that it's logistic.
It's always logistic in the end. (But if we're on the first part of the curve, where it's exponential, that "in the end" bit doesn't matter.)
I believe AI could make exponentially better AI but do we have any evidence that this is likely, or could happen soon? Sounding alarm bells with no understanding of how likely it is seems irresponsible.
>> We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai.
I dropped in to comment on that bit too. You've described it well already, but I'd still like to add:
.. hypothetically, in the future. To be fair, in principle it seems perfectly feasible that a sufficiently advanced nuke might invent an even more advanced nuke too, hypothetically, in the future.
Not really to detract from the rough direction of the argument in the post though, because despite the hype, imprecision and questionably declared speculation of the sort you pointed out, I think he's basically right to be concerned about the hazards of throwing even current and imminent "AI" into Humanity's communications and cultural evolution.
I think it is important to understand Harari's rhetorical style. He likes to "flip the script", tell long stories (without necessarily dotting every i) and end in a punchline.
So, for example, in his book _Sapiens_ he got a warm reception to a "zinger": Humans did not tame wheat - wheat tamed humans! It's essentially a "punch line" to a long, analytical story he develops. There are several other such punch lines in that book.
So, when he concludes that AI has done anything, keep in mind that he also thinks that, because we are (mostly) addicted to carbohydrates and this is not great for our lifestyle, a plant with no capacity for cognition has "tamed" our species.
A funnier example of this is a USENIX keynote delivered by Van Jacobson in, I think, 1996. He tells an involved story about language acquisition in humans and Unix and networks and ends with a "conclusion" that using Windows makes you literally brain damaged. I wish I could find an online video of that talk; please share if anyone knows of one. (I'm not sure it was even recorded.)
Hilarious! And an almost literal paraphrase of something a friend of mine says routinely.
That Van Jacobson talk got uproarious laughter from the audience at the end, as one might predict { surely VJ had an inkling... ;-) }. It's a way to do a long story and a punchline/zinger without anthropomorphically flipping the script "selfish-gene" style.
FWIW, I think Harari knows this is "his brand" and might avoid "off brand" articles, much as Christopher Nolan probably thinks doing a simple, linear movie would be off brand for him. (Or they both could just love their developed styles. It is oft overlooked, but in the world of humans "all of the above" is often a better answer than any single explanation.)
I certainly do not know, but it would be unsurprising if the origins of this article were Harari asking ChatGPT to "Tell me a long, involved, analytical story about <X> that ends with a punchline inverting anthropomorphized agency". Maybe he thought "Erm.. Maybe I am about to be replaced by AI!"
Seriously. I feel like we’re weeks away from “This is the ONE Vegetable that Hegel is BEGGING you to Cut Out Of Your Diet!” being published in The Atlantic or whatever
That is absolutely the keynote/talk, @dang. Even just the slides would be nice, but Van did a great delivery as well, and "knew his audience". Ah well. I could try a more expansive summary from memory, but this may not be the place for it.
Human actors have been doing the voice scams for a while. No AI needed. It's easy to swindle people, especially senior citizens, who react naively and ultimately emotionally to a good actor over a communication channel where half of their senses are cut off: someone claiming to be a relative, claiming to be kidnapped, bullshitting their way into a bank account, etc. It's more clickbait, but with AI, and I'm not even sure the barrier to entry is lowered, other than not requiring potentially multiple con artists.
Operating a call center full of entry-level con men is possible. The binary options industry did it for years, before Israel made it illegal to scam non-Israelis.[1] The crypto industry was more of a bulk thing done via the web, although there were some high-touch scammers.
It is embarrassing how much we seem to legislate out of a place of ignorance or fear.
Where was this attitude when social media was taking over the internet? We just stood back while it dramatically reordered society. But now we're expected to wring our hands about entirely hypothetical and likely imaginary risks related to AI because they echo dystopian sci-fi themes? Seriously?
I've yet to see any AI catastrophist address the fact that the training set for current state of the art LLMs is already effectively "the whole internet". It's difficult to see how an LLM which sets out to improve itself overcomes this limitation. We haven't even seen the early stages of models able to reason or develop and retain new knowledge independently. Maybe at that point I would agree it's time to apply the brakes, but at the moment regulation would be tantamount to strangling AI in the crib.
In this very interesting video, Tristan Harris and Aza Raskin argue that LLMs can feasibly generate their own training data sets [1]. It's a very interesting (and slightly alarming) talk; maybe check it out.
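For what that could look like mechanically, here's a rough hypothetical sketch, not anything taken from the talk; `generate_candidates` and `quality_score` are stand-ins for a real model and a real filter. The model proposes new training examples, a filter keeps the best-scoring slice, and those get folded into the next fine-tuning run.

```python
# Hypothetical sketch of "a model generates its own training data".
# generate_candidates() and quality_score() are placeholders, not real APIs.
from typing import Callable

def self_training_round(
    generate_candidates: Callable[[int], list[str]],  # model sampling new examples
    quality_score: Callable[[str], float],            # heuristic, reward model, or human rater
    n_candidates: int = 1000,
    keep_fraction: float = 0.1,
) -> list[str]:
    """Generate candidate examples and keep only the best-scoring ones."""
    candidates = generate_candidates(n_candidates)
    ranked = sorted(candidates, key=quality_score, reverse=True)
    keep_n = max(1, int(len(ranked) * keep_fraction))
    return ranked[:keep_n]  # appended to the corpus for the next fine-tune
```

The obvious catch, which is the same one raised elsewhere in this thread: if `quality_score` can't tell genuinely good data from plausible-sounding junk, each round amplifies the model's own biases instead of improving it.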
Well, to some degree I think we are considering this approach to social media to have been a mistake, and are looking not to make that mistake again. I mean, at the very least, AI is going to amplify all the same stuff that social media did and push it a lot further.
> I've yet to see any AI catastrophist address the fact that the training set for current state of the art LLMs is already effectively "the whole internet". It's difficult to see how an LLM which sets out to improve itself overcomes this limitation.
Honestly, I think that even with AI at GPT-4 level, we could be in for a bad time. This tech is going to accelerate everything -- and it's a lot harder for a society to steer itself when things are changing so quickly.
But I think you may be basing your viewpoint on some shaky assumptions here:
* that LLMs are the smartest model architecture we can come up with
* that the size of the training set is the limiting factor
I think it's very likely that there will be more breakthroughs, even if it's just combining LLMs with other types of existing AI in novel ways, at first. As for the second, humans clearly learn much more than AI without reading the entire internet, so it seems likely that there's something else limiting LLMs. And that a different design could far surpass what LLMs can do.
> We haven't even seen the early stages of models able to reason or develop and retain new knowledge independently.
Maybe not with a pure LLM, but people have hooked up LLMs to a sort of notebook, where they can note down stuff they want to remember for later, just like a person with a not-so-great memory. And that's just kinda duct-taped on - I can imagine it being much better integrated and faster to access in future.
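As a rough sketch of what that duct-taping can look like (hypothetical; `ask_llm` stands in for whatever completion API you're using, and the "notebook" is just a list of strings stuffed back into the prompt):

```python
# Hypothetical sketch of bolting a "notebook" onto a stateless LLM.
# ask_llm() is a placeholder for any text-completion API call.
from typing import Callable

class NotebookAgent:
    def __init__(self, ask_llm: Callable[[str], str]):
        self.ask_llm = ask_llm
        self.notebook: list[str] = []  # persistent notes across turns

    def chat(self, user_message: str) -> str:
        # Prepend saved notes so the model "remembers" facts from past turns.
        notes = "\n".join(f"- {n}" for n in self.notebook)
        prompt = (
            f"Notes you saved earlier:\n{notes or '- (none)'}\n\n"
            f"User: {user_message}\n"
            "If there is something worth remembering, end your reply with a "
            "line starting with REMEMBER: <the note>.\n"
            "Assistant:"
        )
        reply = self.ask_llm(prompt)
        # Crude extraction of anything the model asked to remember.
        for line in reply.splitlines():
            if line.startswith("REMEMBER:"):
                self.notebook.append(line[len("REMEMBER:"):].strip())
        return reply
```

Real systems tend to use embeddings and a vector store instead of dumping every note into the prompt, but the principle is the same: the memory lives outside the model.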
> Maybe at that point I would agree it's time to apply the brakes, but at the moment regulation would be tantamount to strangling AI in the crib.
I guess the question is, are we developing something that's going to be really bad for humanity? Because maybe that's the right thing to do. If at some point we research and figure out how to make AI that doesn't screw things up a lot, it's not too hard to allow it again. But it's hard to stop now, and it's going to be even harder to stop once companies start using AI lobbyists to influence policy.
*edit -- and given how many people involved in AI have expressed concerns about where things are going, I wouldn't say that this would be out of ignorance. I see Moloch's (<https://slatestarcodex.com/2014/07/30/meditations-on-moloch/>) dirty fingerprints -- slowing down or stopping is a nasty coordination problem, where individual incentives are all to keep going as fast as you can.
It's not so much that I disagree with any of this so much as I totally disagree about the timescale. We're iterating on LLMs really fast, which makes it seem like AI as a whole is progressing at warp speed, but there's no indication that that pace is going to translate to the kinds of breakthroughs that would actually make AI an existential threat rather than a new and weird annoyance.
As far as GPT-4 already being a nuisance in terms of potential social disruption -- yes, sure, but it also has obvious potential benefits, and we seem resigned to total indifference about the way technology is shaping our society (viz. social media). I don't view LLMs as a step change in that regard.
> Yuval Noah Harari argues that AI has hacked the operating system of human civilisation.
> Storytelling computers will change the course of human history, says the historian and philosopher
For a moment there, I was worried The Economist went off the rails.
> Storytelling computers will change the course of human history
I realize that everyone thinks this is now THE moment, but I remember reading niche stories about this or that storytelling program changing the world back in 1994. Not much has changed, although the stories are more realistic and the teller seems to pass the Turing test. Fine, but let’s get to work solving real world problems that have been festering unabated for centuries. Still waiting.
"Everyone will make their own games with this new Game-maker thingy!"
"Everyone will make their own stuff with these 3d printers!"
"Everyone will make their own music with Garage Band!"
...and yet, those tools only help some people: those who invest the time in them as tools to produce specific work. In general, no.
Not everyone learned how to use the tool to make their own games. 3D printers can do interesting things, but not everyone can afford one or is willing to learn to use one. Anyone with an Apple device has Garage Band, but only a few actually get full use out of it beyond playing around for a bit.
LLMs are easier to use, but most people won't learn how to get amazing stories. They'll get the same archetype a few times and get bored.
> In ancient India Buddhist and Hindu sages pointed out that all humans lived trapped inside Maya—the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion.
A point of note: Hinduism (specifically Sanatana Dharma) does not only talk about Maya (which for some reason the Western scholars who study Hinduism are fascinated with). This is just one branch of Vedanta (specifically Advaita Vedanta/Monism). However, two other branches do not agree with the concept of the Creation being an Illusion/Maya (Vishishtadvaita/Qualified Monism as well as Dvaita/Tattvavada/Dualism). So can't really generalize and say that "Hindu sages pointed out that all humans lived trapped inside Maya-the world of illusions". An equal number (if not more) opposed the idea.
It is being pushed very hard that the solution is to keep models under lock and key.
Isn't the biggest problem in maintaining a healthy society keeping the power of governments and massive corporations in check? How are closed-source models supposed to be the solution? I simply don't see it, so to me it reads as propaganda from those powerful entities. Leaving the most powerful software to be understood, researched, and used only by the most dangerous groups is likely the worst thing you can do.
>I simply don't see it, so to me it reads as propaganda from those powerful entities.
Yes. It's called regulatory capture. Big business, and businesses with big-business investment, want to be regulated, because they have the resources to navigate the regulations, in an attempt to solidify their lead or "catch up", and to prevent new entrants via the burden of regulation.
AI is scary, as Harari says, but the tone of the article reads more like a Terminator movie: we are fighting against a new technology, and there will be a war if AI reaches certain intelligence thresholds (passing the Turing test or not...).
I am saying that just as a hypothesis: I don't think we can control the daemon or alien, as Harari suggests. AI technology is different from nuclear technology; "everybody" has the technology to run it. We don't need enriched uranium.
All this challenges anthropocentrism. In the end, it is complex reality that will shape the next iteration of the world.
From a more HN social perspective, I think it is important that everyone could potentially create and access this technology based on open source software and computing power.
The US has some of the strictest drug regulations and yet it’s home to most of the biggest drug makers.
The US and the EU have some of the strictest aircraft regulations and yet Boeing and Airbus are the world leaders.
Sure, people can build GPU farms in Somalia if they want to, but I doubt they will unless the regulation is so draconian that it completely stifles all progress.
I guess I don't see what the problem is. Are you worried about this happening from an economic perspective, like "we should let private industry do whatever dangerous or immoral thing it wants lest we risk losing the competitive edge in said dangerous or immoral markets," or a cynical one like "well, if someone's going to fuck up the world it might as well be us"?
It's not like we must either stop in our tracks or allow every coked-up startup founder to do whatever they think will get them on the cover of Forbes. Also, pursuing this tech with wanton abandon won't stop, say, China from progressing with their own efforts.
Isn't the problem that if the risk is entirely hypothetical, you can't define a limiting principle for any hypothetical regulation?
It reminds me of the Vulnerable World Hypothesis - that at any moment someone could invent something so dangerous that it destroys humanity. Given enough time and technological development this outcome becomes inevitable, and ergo we must put brain implants in every child to limit creativity. Do you see the problem?
a) The US Government uses policy to mitigate hypothetical risk all the time. b) Many of the problems AI may bring about are standard human problems that we know how to measure and address, but we likely haven't seen them on this scale. For example, many people reasonably reckon that this new technology will put a lot of white-collar workers out of work. That's not some amorphous newfangled philosophical problem -- we can address these problems through policy.
I'm not really sure what you're trying to argue here. It was, in fact, a huge problem from a humanitarian perspective and the state of rust belt cities is a shining example of what happens when we ignore these forces. We, as a society, hung those people out to dry and the current batch of workers doesn't even have the meager protections afforded to some factory workers by their union affiliation. If it was deliberately legislated for, that's a really great example of what we should not do in the future.
That, if the market deems the value of a white collar to be less than the output of a text predictor, maybe you should "reskill" and "learn to code"?
What's wrong with letting the market speak? Since apparently the white collar class celebrates, while chastising and legislating away the jobs of the blue collar class, apparently it's different when the white collar jobs are at risk, and we must not let the market work? Are you ignoring the massive wave of hysteria, including this article, for the last several months? Straight from the 'white collar' class.
If the AI is so powerful at whatever bullshit job the white collar worker has, and you want to make white collar workers the glorified elevator attendant, be my guest. But it certainly reveals a glaring hypocrisy, that jobs only matter when they're well-off people's jobs, but fuck the average joe when it comes to their skills and livelihood, they should just "re-skill!"
> That, if the market deems the value of a white collar to be less than the output of a text predictor, maybe you should "reskill" and "learn to code"?
The idea that mid and late career tech, creative, medical, etc. professional will be able to easily "reskill," or that there will be good jobs for them if they manage to, is magical thinking.
> What's wrong with letting the market speak? Since apparently the white collar class celebrates, while chastising and legislating away the jobs of the blue collar class, apparently it's different when the white collar jobs are at risk, and we must not let the market work? Are you ignoring the massive wave of hysteria, including this article, for the last several months? Straight from the 'white collar' class.
There's a lot going on here, but concentrating on your core point: saying "let the market work" without being curious about what the actual human beings involved are actually going to do amounts to saying that money is more important than human life and that people are only as important as their commercial potential. Why beat around the bush? To put it lightly, I disagree.
> If the AI is so powerful at whatever bullshit job the white collar worker has, and you want to make white collar workers the glorified elevator attendant, be my guest. But it certainly reveals a glaring hypocrisy, that jobs only matter when they're well-off people's jobs, but fuck the average joe when it comes to their skills and livelihood, they should just "re-skill!"
Once again, I'm not really sure where you get this idea that I'm being a hypocrite here considering that I've been very consistent for decades about it being pretty bullshit how our society treats obsoleted workers like used condoms and says "well that's the market!" If you want to go argue that point with someone else who actually has that opinion, feel free.
I can tell that we're not going to get anywhere because free market zealots have a philosophically different understanding of human worth than I do.
To be fair, I wasn't calling you a hypocrite. I was implying that the "journalists" who shovel shit, whose prospects of being replaced by the predictor is almost certain at this point, are hypocrites. I do enjoy a little schadenfreude every now and then. And certainly, other people, including two-faced elected politicians. I apologize if you felt it was directed towards you.
>The idea that mid and late career tech, creative, medical, etc. professional will be able to easily "reskill," or that there will be good jobs for them if they manage to, is magical thinking
The jobs aren't going anywhere. Some will be made redundant, a lot will be augmented, and if we're so lucky, there will be more content shoveled out by the text predictor than ever before (garbage). There is no medical, tech, or "truly" creative professional that is going to be replaced wholly by an LLM or diffusion model, not anytime soon. I'm assuming you're referencing the "study" done on reddit with /r/AskDoctors, and it's a laughable study. Certainly, docs can augment themselves, and they should, but not wholly rely on the model. Even 70B+ LLMs do not produce consistent facts; they produce plausibility, despite the facts most definitely being in their corpus. They predict. If you read Microsoft/OpenAI's paper from around a month ago, it appears that RLHF actually reduces quality in many aspects, even as it increases the quality of output for whatever they deem quality elsewhere, so it's a double-edged sword rather than just "add more tokens" + "add more layers."
But more importantly, new jobs will come, more than before.
>without being curious about what the actual human beings involved
I'm back in the rust belt with family after moving around the US for software jobs in the last five years, I see it first hand.
LLMs are like Google on steroids for a (compressed) corpus (the internet.) But you have to know that it's simply predicting. It's plausibilities. Once you know their limitations in that respect, they are useful.
Well, I agree with you regarding not letting just any ol' startup do whatever it likes with the technology; I'm just not convinced China is something I'd be so worried about. They don't seem like the kind of country that likes things being out of control. Russia is maybe a different story, but do they really have the know-how, money, resources and talent to be a risk anymore? I don't think so.
If you take the AI component of this text, it becomes a wild trip. A storyteller telling the story of how storytelling can mess people up, and almost bragging about it.
AI hacked nothing. We've been doing it to ourselves for ages. This is just another way of doing it, from ourselves, to ourselves. We hacked every single kind of language there is. The AI just trained on that previous work.
This could have been a nice piece about how we need to be less inclined to believe in stories and be more skeptical, but I guess no storyteller wants to write a story about _that_. Instead, they blame it on the machines.
>AI hacked nothing. We've been doing it to ourselves for ages. This is just another way of doing it
except much more effective.
Humanity barely managed to get over (if at all) previous mind-viruses like the Bible and communism, partially because of massive discussion around those topics. With current LLMs (not future superhuman AGI) it's possible to spoon-feed a personally tailored mind-virus to each internet user. And there would be no discussion, just because every instance is so original. (In practice it's often enough to just rename all the keywords, as proven by numerous psycho/business-course gurus.)
If the story is too personal, you can't share it and influence others. In your analogy, it's a virus that can't spread.
A pop song can feel deeply personal to many people, precisely because it's vague and not specific. The composer doesn't need to know the listener at the individual level. It needs to know an audience.
The combination of grooming a specific audience and then targeting it broadly is much more likely to work than individual propaganda pieces, and much more effective, because the affected are inclined to influence others within the audience. It's already being done on a massive scale.
Yes, there are concerns about targeting individuals with AI. Whistleblowers, activists and so on are at serious risk. These are likely to have some antibodies, though, and that is not what the article is talking about.
The best defence in my opinion is better education, skepticism and proper critical thinking. Exactly the same stuff that prevents people from being targeted by the old stuff.
>ai can help us in countless ways, from finding new cures for cancer to discovering solutions to the ecological crisis.
Can it?
"Storytelling AIs", i.e. LLMs, are not autonomous. Their goal is to predict text. They predict text. They are predictors.
So, in the article, we casually drop the claim: "AI can make exponentially more powerful AI." Is it feasible for a predictor to run in a loop and exponentially improve, by itself? No. Is it potentially feasible for a predictor to run in a loop and generate outputs that may be used by a human to further improve its own outputs, or Rube Goldberg'd together by a human to steadily improve predicted outputs in general or for a niche? Yes. But exponentially, and realistically, no. Hardware constraints, architectural constraints, performance constraints, monetary constraints, correct?
Let's go back to fear for a second. Nukes cannot invent more powerful nukes, but a nuke can cause a "Buck" Turgidson to nudge an organization or its machinery in the direction of more powerful nukes. LLMs can potentially produce the output to feed into more powerful "AI" in that way, with a "Buck" nudging things along. But that's with the assumption that the predictor's outputs don't reduce performance. Can the predictor spit out a novel description of some architecture that a human could use to build something "exponentially" more powerful? Unlikely.
After re-reading the article, perhaps the joke is on me, because I detect some vague satirical wit. But when someone wants a government, the elected congressional or federal body in which elected individuals are knocking on dementia, catatonically voting on party lines, and are still figuring out email, to regulate for some vague idea of "safety" for some "AGI" entity they cannot even define, I smell bullshit. Not to mention, regulatory capture. Maybe that's the real "death of democracy."
We've been through this over the past few years. Hysteria leads to regulation and government intervention, which lets large corporations plug their noses and survive on acquired capital, and/or use their resources to comply with the bureaucratic bullshit, while smaller businesses die, people lose their life savings, income inequality grows, and a generation is set back for the rest of their lives. In the end we're worse off, and outcomes are not improved relative to the original goal of the regulation.
Harari seems to really like the flip in perspective of the “selfish” gene and has applied that to every situation. Some patterns are effective and used widely; I’m not sure it’s always very relevant to apply the idea of selfish preservation or evolution to it. Not when there’s a social construct around it, at least.
Arguably this civilization has been hacked for centuries.
Capitalism is undoubtedly a sort of AI that only seeks to find equilibrium points in markets, and the idea is running on almost everybody's brain. I can hardly imagine that any sort of AI could be more dangerous than something that currently lets millions die of malnourishment because it's trying to satisfy some loss function - equilibrium.
> Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai.
This is dumb and lacks basic education. No, AI cannot make an exponentially more powerful AI.
Humans, however, can use AI as a tool to create a more powerful AI. But that would be no different from humans first creating nukes with enriched uranium, then using that uranium to create plutonium for even bigger nukes, and then figuring out that this plutonium could be used as a mere trigger for even more devastating thermonukes.
“ The first crucial step is to demand rigorous safety checks before powerful ai tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new ai tools before they are made safe.”
Not sure I agree here. Who watches the watchers? Who sets the safety checks? Who gets to see behind the curtain? Do we trust our current governmental and regulatory structures?
The way to understand A.I. might be as an "extension of man", per Marshall McLuhan.
"Language does for intelligence what the wheel does for the feet and the body. It enables them to move from thing to thing with greater ease and speed and ever less involvement."
"To see a man slip on a banana skin is to see a rationally structured system suddenly translated into a whirling machine."
> Human rights, for example, aren’t inscribed in our DNA. Rather, they are cultural artefacts we created by telling stories and writing laws.
No, they are based on instinctual responses (https://www.youtube.com/watch?v=pXyZ0kEwvqY ). Telling stories about them and writing laws, at best, codifies them (and often enough codifies exceptions to them).
> What gives money value is the stories that bankers, finance ministers and cryptocurrency gurus tell us about it. Sam Bankman-Fried, Elizabeth Holmes and Bernie Madoff were not particularly good at creating real value, but they were all extremely capable storytellers.
I grant him this.
Yes, language is important. But it is not the foundation of human civilization. It is merely an important pillar (especially for the subset of humanity, such as the author, who are very language-use prone).
> The catch is that it is utterly pointless for us to spend time trying to change the declared opinions of an AI bot, while the AI could hone its messages so precisely that it stands a good chance of influencing us.
I seriously doubt the AI will influence the person it is arguing with. This happens among humans occasionally, but not frequently. Maybe the AI will get very good at reverse psychology, though.
So why is it utterly pointless for us to get better at arguing a point by arguing with an AI that we will personally not convince, while the reverse is not also true for the AI? Bystanders are the audience. If a human's arguments are more convincing than the AI's, then most bystanders will be convinced by the human. It may take a bit longer for the human to reference the arguments of other humans than for the AI to cite whatever it likes (if it can cite at all; LLMs are currently not good at that), but an AI that argues too fast risks being seen as a bullshitter by the bystanders.
> In a political battle for minds and hearts, intimacy is the most efficient weapon, and AI has just gained the ability to mass-produce intimate relationships with millions of people.
Until people stop being intimate with other people, and are intimate solely with AI, there will be a countervailing force. Intimacy is also more compelling when coupled with in-person interaction.
> People may come to use a single AI adviser as a one-stop, all-knowing oracle.
This is a risk. In-person interaction is a countering force. Few people would trust an AI, or even an expert, over the feedback of those they know. Because the feedback of those who know us is more likely to take into account our personal experience. Maybe intimate AIs would be a problem, here.
> Previous tools like the printing press and radio helped spread the cultural ideas of humans, but they never created new cultural ideas of their own. AI is fundamentally different. AI can create completely new ideas, completely new culture.
They never created new cultural words of their own, but they surely created new ideas. Just as humans get new ideas from the lives and interactions of other animals and plants. Do the ideas and culture of AI matter if humans don't adapt them to our purposes?
> And the first regulation I would suggest is to make it mandatory for AI to disclose that it is an AI.
This is a very neat idea. Of course bad actors won't care. But in a future where AI is generating new AI maybe this is something that would be worthwhile.
There is a tremendous gap between the sort of general in-group altruism in social animals that is precisely the evolutionary trait that made them social animals, and human rights. The cultural expression of these social instincts varies wildly, insanely, and no culture from Aztec to Assyrian has ever had norms that were not based on them. Tautologically!
I think I understand what you're writing. I agree with it on its face, but only because "human rights" are now individually codified, instead of general ideas.
How do human rights (as laws) differ from human legal obligations and forbiddances?
Conflating stories and written laws with rights is mistaking the tree for the forest. Protection of general human rights is only a small part of the law.