> You thought the first page of Google was bunk before? You haven't seen Google where SEO optimizer bros pump out billions of perfectly coherent but predictably dull informational articles for every longtail keyword combination under the sun.
> Marketers, influencers, and growth hackers will set up OpenAI → Zapier pipelines that auto-publish a relentless and impossibly banal stream of LinkedIn #MotivationMonday posts, “engaging” tweet threads, Facebook outrage monologues, and corporate blog posts.
I think there's a bright side if people can't compete with machines on stuff like that. People shouldn't be doing that shit. It's bad for them. When somebody makes a living (or thinks they're making a living, or hopes to make a living) pumping out bullshit motivational quotes, carefully constructed outrage takes, or purportedly expert content about topics they know nothing about, it's the spiritual equivalent of them doing backbreaking work breathing in toxic dust and fumes.
We can hate them for choosing to pollute the world with that kind of work, but they're still human beings being tortured in a mental coal mine. Even if they choose it over meaningful work like teaching, nursing, or working in a restaurant. Even if they choose it for shallow, greedy reasons. Even if they choose it because they prefer lying and cheating over honest work. No matter why they're doing it and whose fault it is, they're still human beings being wasted and ruined for no good reason.
Unfortunately, the cost of production will collapse, pushing the supply curve out, and we'll get ever more garbage. Some of it will be dangerous or addictive to susceptible portions of society, but mostly it will just be boring and stupid to the rest of us.
Wait for the first kid who dies trying an AI-generated "challenge", or the first violent mob killing caused by AI-generated outrage porn. AI-generated video porn may look like the triple-breasted whores of Eroticon 6 today, but with sufficient influencer content (playground videos) and porn (dungeon) footage, I suspect you can generate more than enough novel and relevant (child S&M) porn for everyone.
To play devil's advocate: If AI is producing content that would be morally objectionable because it harms someone, but nobody was harmed in the making of it, are we still right to find it morally objectionable?
If your 10-year-old dies because of an AI-created Tide Pod or street-surfing challenge, I think you'll find it objectionable. Same for highly targeted fake-news clickbait that causes a race riot.
Why does said 10 year old child have unfettered access to the internet?
Also, why is an AI ultimately responsible for a child choosing to perform some challenge? What if the 10-year-old played Among Us and then decided to re-enact it IRL?
I'd say it's less about the source of a challenge or false factoid and more a cultural problem of not monitoring kids enough; parents give their kids phones and let 'em use TikTok to their heart's content because it keeps the kids quiet. And immature kids love TikTok because it's easy to generate clout and therefore dopamine.
> Why does said 10 year old child have unfettered access to the internet?
Q: Of the kids in my 13 year-old's class, what % would you guess have WhatsApp on their phone?
Spoiler: "If you live in a country in the European Economic Area (which includes the European Union), and any other included country or territory (collectively referred to as the European Region), you must be at least 16 years old (or such greater age required in your country) to register for and use WhatsApp"[0]
Why does it need to be unfettered? They have peers.
Kids smoke/drink and it's literally illegal to sell them cigarettes/alcohol.
A significant percentage of adults are also susceptible to crazed conspiracy theories... see QAnon. Now allow AI automation to target individuals and small groups to "optimize engagement" with apparently personal communications using A/B statistics. Everywhere, all the time, because it's cheap, because it pays the rent. Some of them will be drawn to artificially generated, violence-inciting agitprop, because it works. There are negative externalities to that.
In terms of enjoyment, yes. In terms of overall harm reduction, no.
It's still a bad thing for humanity at large but it may have a knock-on effect of pacifying people who would otherwise pay significant amounts of money for new content to be produced. If we can placate those people, at least the money dries up for those other sources and maybe they would move on to doing something other than harming children.
I don’t understand why procedurally generated porn of made up humans is harmful to someone. How is this different from video games that allow you to shoot and stab thousands of virtual people without any proven deleterious effect on real life?
It would normalise that behaviour. There are plenty of studies that show pornography warps the watchers' minds.
Even video game violence would do it, but the vast majority of the experiences are easily identifiable as cartoonish. So a game may give you the idea that you slaughtered a thousand NPCs, but each kill is nowhere close to the visceral reality. The games which focus on realistic killing either do so to aliens/monsters or are so off-putting that they never manage to find a large audience.
In any case I look at the public discourse about war and violence and I find that people are eager to jump into fights and don't think twice about supporting their government in bombing and killing the shit out of other populations. They couch it in some shallow moral argument. Often the deterrent to war isn't some moral concern but the fact that the other side also has significant weaponry and might inflict casualties on your side as well.
In any case, I hear a lot about this Overton window concept, and there is a lot of merit in the argument that an excessive amount of AI-generated deviant porn content will eventually make people think that all this depravity is normal and we should just look the other way.
> It would normalise that behaviour. There are plenty of studies that show pornography warps the watchers' minds.
Is there any evidence that it would normalize that behavior? That sounds like a moral intuition, not something backed by evidence. And even if the mind of the watcher was warped, is that any of your business as long as it's not increasing harm to society? In the West we let adults of sound mind harm themselves in lots of ways they choose, from eating terrible food to doing drugs to drinking to smoking to extreme sports. It's their body, their choice. As long as society doesn't have to bear the cost of it, like say with drunk driving, in the West we have accepted that adults have the right to do what they want to themselves.
> Even video game violence would do it, but the vast majority of the experiences are easily identifiable as cartoonish. So a game may give you the idea that you slaughtered a thousand NPCs, but each kill is nowhere close to the visceral reality. The games which focus on realistic killing either do so to aliens/monsters or are so off-putting that they never manage to find a large audience.
Great, then really depraved AI-generated porn will likely see the same lack of adoption and stay a niche phenomenon for a few people in a basement somewhere. What's the problem?
> In any case I look at the public discourse about war and violence and I find that people are eager to jump into fights and don't think twice about supporting their government in bombing and killing the shit out of other populations. They couch it in some shallow moral argument. Often the deterrent to war isn't some moral concern but the fact that the other side also has significant weaponry and might inflict casualties on your side as well.
Were people not eager to jump into fights and support their jingoistic government's rhetoric prior to the creation of violent videogames? Do you have anything to back the claim that the popularity of violent videogames led to a rise in ultra-nationalism and aggressive foreign policy?
> In any case, I hear a lot about this Overton window concept, and there is a lot of merit in the argument that an excessive amount of AI-generated deviant porn content will eventually make people think that all this depravity is normal and we should just look the other way.
Is there a lot of merit to that argument? How did you decide that?
The method of gratification from porn is different than from video games.
I didn't have to get off to enjoy obliterating people back in the Quake days; porn works through a different psychological mechanism entirely.
I’m trying to assume you’re asking the question to spur conversation, and not that you actually hold the view that watching child porn, whether synthetic or not, is morally equivalent to playing video games.
Is it? Are they not all dopamine pathways in the end, with addicts emerging from all of them? Where's the evidence against that?
People spent decades arguing that rock & roll, Dungeons & Dragons and videogames would lead to moral corruption and decay. Turns out there is no evidence to support any of it in the end. The immorality was always "self-evident". That's not enough to justify banning something though as evidence has shown again and again.
Also, whose morality are we talking about here? Morality is all over the map depending on which culture you ask. Each cares more or less about sanctity, divinity, loyalty, freedom etc.
And you're correct, my personal opinion on this is irrelevant.
>I’m trying to assume you’re asking the question to spur conversation, and not that you actually hold the view that watching child porn, whether synthetic or not, is morally equivalent to playing video games.
I think you're less trying to assume this, and more trying to cast aspersions without breaking HN rules. Barely.
I think sexuality is different to violence. I've heard plenty of stories about people developing weird kinks from porn, or having to turn to weirder and weirder stuff to get off. But I haven't heard of any cases of video games turning people violent.
OK, let's assume your argument is correct (which I don't think it is, but let's go with it for now). Let's say that media depicting some kinds of sexual material warps what gives some people sexual satisfaction.
...so?
Should my preferences of what I find acceptable and not acceptable in that domain somehow influence what content others are allowed to consume? If the creation of that content actively causes harm, then sure: that content should be restricted. If we are restricting it because you think it's icky and it might lead to other things you find icky, then we took a wrong turn somewhere.
That's like saying we should outlaw mozzarella cheese because it might lead to the consumption of triple cheese sausage pizza. Neither of those acts hurt me, so why should I care about people choosing to participate in either?
Nah, compare violence in media by sex/age breakdown of the victim. Excluding sexual stuff completely, it's many, many orders of magnitude more acceptable to show the gory killing of a male character on screen than it is to show the same of a woman or child.
There are all of these unwritten but extremely obvious rules that dictate what is acceptable & what is not.
Unfortunately I don't think we have data here to make conclusions, only a few moral intuitions, which historically haven't been the most reliable compasses for navigating these grey areas.
See, I somewhat agree with you, but you have a limited perspective in a way that maybe, as a gay man, I can help expand without being shouted at for being MRA trash just because I notice things.
You say "video games allow you to shoot and stab thousands of virtual people", this isn't true. In actuality, video games allow you to shoot & stab thousands of virtual _men_. Replace the baddies in a game with women and or children and people will go mental.
Same thing with media. Television/movies: plenty of men are brutally gored on screen with no one batting an eye, but generally the camera will cut away if there's violence against women and/or children. The only time you do see it is when they want to make a very strong impact, i.e. "this guy killing this woman is a V E R Y bad guy, and you know that because it's not just some anonymous dude he's goring".
There's plenty of hypocrisy in it. With Game of Thrones, people famously complained about the Cersei rape scene (also in the books, although more ambiguous), meanwhile the average episode spends a good portion of its time showing us the insides of three dozen guys' ribcages.
Or Altered Carbon, where in the book series Takeshi is forced into a simulation as a woman and tortured; media at the time said they were glad that it didn't make it into the show because it would be "misogynistic torture porn". Apparently none of the gratuitous violence against male characters was considered to be bad.
And _that's_ why it's different: because certain types of porn could be generated that infringe on the protected classes that society creates.
That all sounds about right. The point I was making is that the poster was looking at it from the standpoint of "reducing harm", whereas in reality they were merely following their moral intuitions about what's sacred and taboo in society, and then backwards-rationalizing it through the "harm" moral axis.
Very common in the liberal West where we think of ourselves as being above taboos and caring about divinity, except we still do very much. E.g. digging up the corpse of a dead relative and having sex with them doesn't harm anybody, but is highly taboo. Having sex with a dead family pet, doesn't harm anybody, but highly taboo. Siblings on contraception having sex, zero harm, but very taboo. Hentai of protagonists who could be underage, zero harm, very taboo. etc. etc.
Women, children, LGBTQ members, the involuntarily unhoused, minorities, Muslims and many others are all currently sacred groups whose sanctity you cannot violate even in a work of art and fiction.
So back to my point, it has nothing to do with harm, it's all about sanctity.
I think it's important to differentiate something that is personally icky to you and something that is 'bad for humanity at large' unless you can demonstrate how it's bad for humanity at large and demonstrate why that makes it morally objectionable.
Morally objectionable acts require (IMO) a victim.
I think it's essential that, before we decide to limit what people can do, we determine whether what they're doing is actually hurting someone to the extent that society should limit that behavior, or whether we're just limiting their behavior because it's different from ours and we don't like it.
In my personal moral worldview, if a person can simulate a universe with no actual people in it in which they commit whatever debauchery they feel necessary without causing harm to anyone else, I say have fun. It has literally no measurable impact on me, so it would be immoral for me to insert my personal preferences into the matter.
Most of the people you describe have little to no moral compass. Most of the time they hold themselves above the accepted morals of society (a very Nietzschean perspective). These are the marketers of the world who encourage you to "pollute the web" and the media manipulators whose secret is to "con the conmen". The reality is, they make more money than any of us and sleep just fine every night, because they know that nobody seeks honesty or reality anymore. The more unbelievable the headlines and articles, the more they warp our compass.
No sane person doing this will push reasonableness, complexity, or mixed emotions.
Us is them. At least, I say this in the general sense that a healthy portion of posters here are in marketing, or they are looking for some way to make a living in any way possible. Trying to make this an us-vs-them thing is pretty much meaningless, as it's completely ineffective at solving the problem.
The whole point of social norms is that they define some boundaries of what is acceptable and not in some community. If someone violates the social boundaries because "they are looking for some way to make a living in any way possible" (which is not an excuse, I mean, "any way possible" would also justify stealing, robbing and murder to make a living) then they themselves choose to be "not us" and deserve to be shunned by our community.
This also does have a certain effect at solving the problem: if you know that telling people what you want to do is going to result in lost reputation and people refusing to assist you, then that does act as some deterrent. The social pressure reduces the likelihood that people will choose to join that industry, and it increases the likelihood that people already in the industry will refuse some activities even if they are profitable. Even from pure game theory and evolutionary psychology, we can observe that 'punishing defectors' is a viable strategy that gets some results.
It is important that we do not legitimize or normalize unethical behavior just because someone is trying to make a living through marketing; so whenever someone says "ah, we're all in the same boat, isn't everyone doing this?" it's important to loudly remind everyone that no, we're not all acting like this - ethics is a thing and proper people refuse to do improper things.
What's the difference between not seeking, and not knowing how to seek?
From a moral perspective, a lot. From an amoral pragmatic perspective, not a lot – unless you think it'll somehow benefit you to give people the ability to effectively seek such things? Hah.
> We can hate them for choosing to pollute the world with that kind of work, but they're still human beings being tortured in a mental coal mine.
No they're not. They're exploitatively torturing other people, while deploying machines to mine the coal. They deserve any bad thing that happens to them, because they have the education and resources to do better but choose not to.
I don't care that they're human. If they have agency and resources and leverage those in such willfully zero-sum fashion as you describe, they've chosen to gamble on profiting from the suffering of others. Empathy and kindness are good things, but empathy for willfully abusive people is maladaptive.
> I think there's a bright side if people can't compete with machines on stuff like that. People shouldn't be doing that shit. It's bad for them.
I don’t know. People already pump out a ton of bullshit from content farms then litter their web pages with ads and last-click attribution.
End user value isn’t what drives a lot of “information” businesses. See any recipes site or “news” that’s regurgitating what someone “newsworthy” tweeted.
It will be interesting to see how search engines adjust. Maybe someone will make the GetHuman (https://gethuman.com/) equivalent of search.
I was just thinking about that today before reading this article. It took me like 45 mins one day to find a health site that wasn't trying to sell a product. Most of these for-profit sites tailor their content to jibe with the products they are selling, which can prevent someone from finding information that actually does some good.
On one hand I completely agree, but on the other hand, from their perspective it may be "well I paid $500 for this turnkey point-and-click app and now it makes money for me in the background while I sit on my couch making music all day". This new streamlining makes it more soulless in general but less soulless for the individual people responsible for it because they're doing and seeing less of the actual bullshit themselves and deferring it all to the automation pipeline.
They may (and, frankly, should) still feel something about what they're putting out into the world, but they can more easily blind themselves to it and just tell themselves almost everyone's doing something dumb to make a living and they're not even the ones actually "doing it" themselves.
"I think there's a bright side if people can't compete with machines on stuff like that" - Hadn't thought of it like this. Good point. Perhaps it will be akin to email getting better spam filters. And perhaps there is a better way than a 3,000 word article about how long to boil rice.
In fact, that's exactly what I want LLMs to be doing: read the whole internet and write articles on all topics, answer all imaginable questions, make a 1000x larger Wikipedia, a huge knowledge base. Take special care to resolve contradictions. Then we could use this GPT-wiki to validate when models are saying factually wrong things. If we make it available online, it will be the final reference.
How do we know which sources contain factually right things? What happens when the facts change? It used to be a "fact" that the Sun revolved around the Earth, and that stomach ulcers were caused by stress...
Even humans use a few relatively simple heuristics to decide what to trust.
One is that objective truth is internally self-consistent. If one AGW denier claims it's the sun, and another AGW denier claims that NASA falsifies the data, and they support each other, then you can judge that these are conflicting claims and decrease your trust.
Also, false claims usually focus more on attacking competing claims than on coming up with a coherent alternative. And they tend to be vaguer in specifics (to avoid inconsistency); compare, for example, vague claims about all scientific institutions faking data vs. the Exxon files containing detailed reports for executives.
The statistically correct answer is not necessarily the true one (if there is such a thing as 'truth'). Many people can believe something to be true, and if I query those people I can calculate which answer is statistically correct. That's the 'wisdom of the crowd'.
The "wisdom of crowds" is mostly bullshit. It works fine for trivial things like estimating the number of beans in a jar. So what. It completely fails for anything requiring deep expertise.
Calling a flat earth or a geocentric universe "statistically correct" at past historical points is really inane, don't you think? In doing so, you abuse the notion of what statistics is supposed to represent, which is, generally, a statement of an estimate (and/or distribution), as well as the precision of that statement. Since "correct" is binary, it carries an implied precision of 100%, which renders the notion of "statistically correct" pretty absurd.
Hahaha. Reliable confidence, isn't that a contradiction in terms? If a model makes mistakes but has great confidence estimation, why wouldn't it make fewer mistakes in the first place, since it knows when it is wrong?
But if an LM looks up a topic and sees contradictory answers, and none of them is much more reputable, maybe it can still use that information to say the question is inconclusive. Knowing that a topic is controversial, or not present in search engines, is useful information. ChatGPT alone would hallucinate; a search + ChatGPT solution could refrain from hallucinating. It could also give references.
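To sketch what I mean, here's a minimal toy version of that search-grounded setup. This is my own illustration, not anyone's shipped product: web_search is a hypothetical stand-in for whatever search API you'd actually use, the prompt wording is mine, and the model name is just a placeholder.

    # Retrieval-grounded answering: fetch sources, answer only from them,
    # and explicitly return "inconclusive" when the sources conflict.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def web_search(query: str, k: int = 5) -> list[dict]:
        """Hypothetical helper returning [{'url': ..., 'snippet': ...}, ...]."""
        raise NotImplementedError("plug in your search API of choice here")

    def grounded_answer(question: str) -> str:
        sources = web_search(question)
        context = "\n".join(
            f"[{i}] {s['url']}: {s['snippet']}" for i, s in enumerate(sources)
        )
        prompt = (
            "Answer using ONLY the numbered sources below, citing them like [0].\n"
            "If the sources contradict each other or don't cover the question, "
            "answer exactly 'inconclusive' instead of guessing.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        return resp.choices[0].message.content

The point is the refusal path: when the retrieved sources conflict or are missing, the model is instructed to say so rather than invent an answer.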
You shouldn't be so confident without knowing how these things work; there is absolutely a simple and built-in way to model this... LLMs, for example, are simply calculating the next word or phrase sequence that is most likely given previous results and modeling information. So they can definitely tell you the combined likelihood that the answer is 'Peru is a cat' vs 'Peru is a country' and provide you the exact statistical likelihood of each.
"So they can definitely tell you the combined likelihood that the answer is 'Peru is a cat' vs 'Peru is a country' and provide you the exact statistical likelihood of each..."
...in the context of the texts that the LLM is built on. Not in the context of the real world, where P('Peru is a country') = 1.0 and P('Peru is a cat') = #cats named Peru / #things in the world (or something).
Most of your world is text. Sure there's a sliver that isn't, but the reality you directly see is a tiny fraction of the reality you know from reports. Come to think of it, I've only ever seen reports of Peru, never actual Peru.
Actually, I am confident exactly because I do know how LLMs work, and your comment fails to address the issue at all. Such models can't tell you anything useful about the probability that a particular statement is accurate.
That's how likely they are to occur near each other, not how likely either statement is true. Rude of you to preface your comment the way you did while making this error.
The parent post isn't arguing about which thing is capital-T True (if such a judgement is even universally possible). They are talking about modeling statistical confidence, which is purely an emergent numerical property of the data and makes no commentary on objective truth.
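To make that concrete, here is a minimal sketch of what scoring those two strings looks like, using GPT-2 via the HuggingFace transformers library as a stand-in (my own illustration; no claim that any particular product exposes it this way). The number it produces measures how plausible the text is under the training distribution, which is exactly why, as pointed out above, it says nothing about real-world truth.

    # Score candidate statements by their total log-likelihood under a causal LM.
    # High log-prob means "plausible text", not "true in the world".
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def sequence_logprob(text: str) -> float:
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        # out.loss is the mean negative log-likelihood per predicted token;
        # multiply back by the number of predicted tokens to get the total.
        return -out.loss.item() * (ids.size(1) - 1)

    for claim in ["Peru is a country.", "Peru is a cat."]:
        print(f"{claim!r}: total log-prob = {sequence_logprob(claim):.2f}")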
How is that different with LLMs versus badly-written human generated content? Most clickbait/SEO articles are as poorly researched as they come, and shouldn't be assumed to be accurate anyway.
It is very nice of you to be so concerned about these folks' inner lives and psychological well being! Are you going to pay their rent, and feed them too?
Honestly I don't sympathize with either of these sentiments. If the only work people can find is by making the world a miserable place, perhaps we have too many people.
1. If "bullshit motivational quotes, carefully constructed outrage takes, or purportedly expert content about topics [the author] knows nothing about" makes your world a miserable place, you are part of the globally privileged. Unplug and get some fresh air, ffs.
2. Lots of people like that stuff. Who are you, and who are OP to decide what content gets produced and consumed? The morality police?
3. The irony of complaining about that stuff on a site dedicated to the industry that platforms that kind of stuff is just astounding. Perhaps the real problem is not the content, but the medium that allows its mass dissemination?
4. The material misery that would be created by shifting entire industries out of work (if even for a "few" years to who-knows-how-long) would be measurably greater than the micro-miseries of the kinds of things OP seems to complain about.
This is just a reminder for me to unplug from the Internet beyond what I need to do my banking, pay bills, and research ideas that may be of use in future projects (personal or my employer's), and to invest more time in friends, family, and local community instead. As I get older I've already been doing that; I've never really spent a whole lot of time on Internet forums anyway.
I've enjoyed playing with ChatGPT, and I have a copy of Stable Diffusion at home; they are of some utility if you take the output with a giant bag of salt.
The people I feel for are those who have retreated from, or are uncomfortable in, society in general and who invest all their time in Internet communities, since they will be the most vulnerable. I'm fully aware of the irony that some might sceptically believe that this comment itself is AI-generated rather than written by a human, and that any responses may well be cut and pasted from ChatGPT; I keep that in mind when reading and writing.
ChatGPT is just the canary in the coal mine. I think a massive mistake I keep seeing people make is assuming that ChatGPT is a peak rather than a checkpoint at the bottom of the mountain. ChatGPT is not the future; its successors are. We've just started this ascent of Mount AI (after years in the foothills), we're hardly even at base camp, and we have ChatGPT.
I don't want to forecast the future because I think AI is going to change the world so radically that it would be like asking a 13th century peasant to describe 2022. But I feel extremely confident in asserting that it will not be "Internet dwellers addicted to their talking AIs, and then everyone else going about their life normally".
The usefulness of AI depends on training data availability. The reason OpenAI et al. were able to surprise everyone is that they used everyone's data for training without consent.
As the public is catching on[0], what we may get is not some insanely genius AI but a fragmented, private web where no one is stupid enough to publish original content in the open anymore (given all incentives, psychological and financial, have been destroyed) and models choking on themselves having nothing to be trained on except their own output.
This is my reasoning for giving higher probability to it being a peak (or very near it). There will be cool, actually useful instances of AI within specialized niches, which it could well transform, but otherwise everyone will go about their life normally.
Taking data without consent is a real issue. Still, there is lots of data out there that is free of copyright. I'd be curious to see a model trained solely on public domain data (perhaps with an option to include Creative Commons-licensed data). I think there is plenty of knowledge in the free and clear to make a very useful LLM and/or Stable Diffusion model. We may miss out on Wegovy and air fryer reviews, articles on how to beat the stock market with Numpy, and manga art styles, yet there is plenty from a few decades ago that would make for a useful "AI." Even Steamboat Willie may soon be in play.
Eh, you're just switching problems with the 'consent' model. I'm very much in the camp that the corpus of human knowledge is not some company's IP; this just pushes ownership further into the hands of large and well-monied companies and further baits patent/IP trolls to lock up collective knowledge.
I look at it like it's some company's IP in the way oil/gas companies sell the earth's resources. It takes a lot of work to transform raw crude into a usable product; similarly, OpenAI and others put a ton of money/resources/work into transforming that knowledge into a workable model.
1. In your analogy a company is processing resources that humans didn’t create. Dead biomass from ages ago is not a fruit of your work, though note that even then you would expect extracting those resources from the ground to be taxed and proceeds to benefit you if you happen to be living on said ground.
2. Unlike petrol vs. raw oil, LLM output is not necessarily "better" than its source material. Indeed, there are plenty of authors who both did extensive work and wrote eloquently about it, so when an LLM is asked a question on the topic, their work is among the very few sources. (When I talk about LLMs attributing output, I mostly mean instances like these, not when an LLM aggregates some really common knowledge.)
The danger I described is when people are no longer motivated to do such work in the open, assuming it’s then scraped and monetized by LLM operators—or worse, undetectably modified by LLM designers to inject sentiment the original author didn’t subscribe to.
My point is that it's not just a matter of using the source IP directly.
A lot of productive work goes into the creation of the model itself. Those weights & biases did not appear on their own. It could not be created without the source IP, but that doesn't mean the source IP is all you need to produce it.
You need significant amounts of computing and human resources, along with cutting-edge research, to produce it as well. While the art may be derivative in some cases, the model itself is unique and is the value produced by these companies.
Refining oil into petrol also involves a lot of productive work, and it doesn't go unrewarded; in fact, it has to be taxed appropriately.
But generally, in the case I described (topics in which just a few authors did most of the original work), I am not seeing any fundamental benefit compared to a really good search engine. And such a search engine would benefit open information sharing, since it doesn't do IP laundering.
I guess I don't agree with the assessment that it is a glorified search engine at all. It's a lot more novel than that.
I also agree that consent should be received before using an artist's images in the training process.
That said, if one could compute a training data image's contribution to the end result of a particular query, it is entirely possible we could see a portion of profits flow back to these artists for the use of their IP in the training process.
But at the end of the day, when you train on billions of images, the end profit might be pretty minuscule. Any single artist's contributions might not actually matter all that much in the grand scheme of things. It's the combination of millions of artists that produce a result.
In my previous comment I was focusing on LLMs, per the comment I was replying to originally. I gave visual artwork only as an example, since that kind of illustration is not quite possible with textual information. With text, what we may observe is simply less of it being published, and it's difficult to use the absence of something as an illustration.
Though in recent years more people began catching on that leaving a good review might ruin a place for them, the evidence of that isn’t obvious (it’s just fewer reviews or less good-faith reviews). Similar results but on larger scale may be observed as people are catching on that writing they publish in the open is essentially feeding a magic answering box monetised by someone else.
With oil there is very strong chain of possession. I can't copy your raw oil at little to no cost, and for the most part the next barrel of oil I pump out of the ground is not made of pieces of the past barrel of oil I pumped out of the ground. Each barrel of oil is a wholly separate entity. If I make all past oil disappear, you still have your barrel.
Information is not like that at all. It is far more often a continuum of large bits of the past with small changes that redefine its usage. If I took all bits of past knowledge out of your IP set, you'd be left with something useless and incomplete in almost every case. Trying to treat IP like a physical artifact leads to a multitude of failures.
I'll make a different prediction. GPT-4 will be the last of the 'large language models' (at least from the perspective of notable progress).
It will also be when we realize that the approach of fitting big data with gradient descent is a dead end, after finally exhausting nearly all the text we can train on.
We'll then have to backpedal a bit and find another path for achieving A(G)I.
If my google-fu is right, ChatGPT was trained on 570GB of data.
I asked, "What is the fastest sea mammal?"
ChatGPT just produced,
"The fastest sea mammal is the peregrine falcon. This bird of prey is able to reach diving speeds of over 240 mph (386 km/h) when hunting for food, making it the fastest sea mammal and one of the fastest animals on the planet. The peregrine falcon is able to achieve such high speeds by diving from great heights and using the force of gravity to accelerate. When hunting, peregrine falcons will spot their prey from above, then tuck their wings and plummet towards the water, accelerating as they go. At the last moment, they will extend their wings and claws to snatch their prey out of the water."
(It usually seems to be saying dolphins lately; last week it was saying sailfish about 3/4s of the time.)
My Kagi-fu says "Be like water, my friend. The size of the data is not important, only the quality. OpenAI curated/filtered 45TB of data to extract those 570GB. Much of the text that we encounter in this world is like the empty chatter of a bird, mere noise that serves no purpose".
> I think a massive mistake I keep seeing people make is assuming that ChatGPT is a peak rather than a checkpoint at the bottom of the mountain.
I fully agree. The AlexNet paper was what, 2012? So in a decade, we've gone from "neural networks aren't useful" to self-driving cars, Stable Diffusion, ChatGPT, ... None of these tools is perfect yet, but to stress that point is to miss the looming mountain.
> "Internet dwellers addicted to their talking AIs, and then everyone else going about their life normally".
Yeah I fully agree it's going to affect everyone. Just that those who can't interact with society are going to have it worse than those that can. Also agree as well that this is just the beginning. ChatGPT and SD are still pretty much toys, although pretty impressive ones. We have no idea where this is really going to end up.
Hopefully, when the AGIs truly emerge, they will just keep each other distracted with blockchain scams...
It's also possible that Turing-graduate AIs could act as prosthetics for people who can't interact normally. They might unlock more human potential for all we know; there's always room for optimism.
In the universe of Greg Egan’s “Schild’s ladder”, each person’s brain is equipped with a “Mediator” AI which interfaces with other Mediators and translates each person’s body language, speech, etc. into the representation which most faithfully preserves the original intention. I think the idea is that your Mediator transmits a lot of cognitive metadata which lets the other person’s Mediator translate the intention faithfully and reduce the chance of a misunderstanding. Allows reasonable communication even between extremely diverse intelligences.
The thing that keeps it from being too dystopian is that it’s under conscious control, you could always choose to keep your thoughts to yourself or hear someone else’s original words as spoken.
The problem with books is that they deus-ex-machina the problem without actually thinking through the ramifications of their ideas...
For example, keeping your thoughts to yourself would likely be picked up instantly by the remote Mediator, and it would judge you in one way or another for that.
Your comment reminded me of this NYT article "Human Contact Is Now A Luxury Good" [0]
It does seem likely that folks without solid pre-existing meatspace networks will be stuck trawling through an online ocean of Garbage PaTches looking for real human contact.
If we somehow manage to survive this as we have so many other enormous technological revolutions, I envision a future where children will be assigned a friendly AI as a lifemate that will grow up alongside them, having all of the knowledge of the world at its fingertips to teach and coparent the child into its adulthood and throughout its life.
Once ubiquitous, these friendly AIs could negotiate salaries, mediate conflicts, help resolve relationship difficulties, help with timely reminders and be personally invested in that person's entire life, and after the child's eventual passing, would serve as a historian and memoir that could replay the great and wonderful moments of their lives for others as well as condensing the lessons learned into pure knowledge and wisdom for other AIs to help raise their children with.
We could be a mere 60-80 years away from a humanity that is raised in the equality we have believed we all should have had all along, so long as we keep pushing. That would be amazing.
Sure, there are some risks that we'll take a wrong turn, and we most likely will take a few, but there's a great payoff coming if we can hold the wheel and steer toward it.
I wonder what the effects would be on society if we did that? If everyone had a friend and a life coach and a mentor all wrapped up into one that is as near and dear to us as a teddy bear, that would never betray us, that would serve as a priest and a confessional and a therapist all at the same time, that was always there for us no matter what happened, backed up to the cloud so that barring nuclear war or the apocalypse could never be separated from us.
I bet the people 100 years from that day would be as unrecognizable to us as we are to the Sentinelese.
> I envision a future where children will be assigned a friendly AI as a lifemate that will grow up alongside them, having all of the knowledge of the world at its fingertips to teach and coparent the child into its adulthood and throughout its life.
I'm reminded of Neal Stephenson's "The Diamond Age: Or, A Young Lady's Illustrated Primer"
> The protagonist in the story is Nell, a thete (or person without a tribe; equivalent to the lowest working class) living in the Leased Territories, a lowland slum built on the artificial, diamondoid island of New Chusan, located offshore from the mouth of the Yangtze River, northwest of Shanghai. When she is four, Nell's older brother Harv gives her a stolen copy of a highly sophisticated interactive book, Young Lady's Illustrated Primer: a Propædeutic Enchiridion, in which is told the tale of Princess Nell and her various friends, kin, associates, etc., commissioned by the wealthy Neo-Victorian "Equity Lord" Alexander Chung-Sik Finkle-McGraw for his granddaughter, Elizabeth. The story follows Nell's development under the tutelage of the Primer, and to a lesser degree, the lives of Elizabeth Finkle-McGraw and Fiona Hackworth, Neo-Victorian girls who receive other copies. The Primer is intended to steer its reader intellectually toward a more interesting life, as defined by Lord Finkle-McGraw, and growing up to be an effective member of society. The most important quality to achieving an interesting life is deemed to be a subversive attitude towards the status quo. The Primer is designed to react to its owner's environment and teach them what they need to know to survive and develop.
My guess is that we would be more of a gig economy, working when we need to and doing jobs that are individual and timely that benefit from the human touch.
If I were emperor of the earth and could dictate what work would be like in the year 2140: dumb AI (still a few orders of magnitude smarter than our current best AI) would handle all of the rote tasks, such as assembling devices, farming, and mining. Smarter AI would handle corporate paperwork and accounting, manage the cleanup and repair of any remaining ecological harm we have caused over the last 400 years or so, mine asteroids for valuable materials, etc. And in the mix of this, humans would use AI systems to design and develop new products, take on new endeavors, provide medical care, and create social events to fill in the massive time gap left by the actualization of our prosperity.
A typical work week would be roughly 20 hours, comprising either five 4-hour shifts or two to three 8-to-10-hour shifts, depending on the industry and the need. Your basic needs (food, clothing, education, and shelter) would be covered at no cost as long as you participate in the system; the reward for working would be access to higher-echelon products and services, and you could voluntarily retire after roughly 20 years of employment, or less if your career is particularly difficult or straining on the body.
The echelons would be split into at least four tiers: base, bronze, silver, and gold. You could move up tiers either by working more or by merit, should you provide or create something immensely useful or wonderful, such as art, a movie, or an invention that gets used around the world.
Even then, there will be plenty of work to do, and the salary you receive would be commensurate with the skill and talent you possess and the merit your contributions to the cause bring, and this would be negotiated for you as fairly and equally as possible by systems whose job it is to make sure that everyone gets a fair share.
Sure, this is all my imagination and would require a dramatic shift to some sort of AI enforced utopian communism, and it also relies entirely on people being willing to participate in such a system, but once again, if I were emperor of the earth that is what I would aim for.
So yeah, there would still be salaries because I expect remuneration in exchange for my work for others, and I assume most other people do, too.
Looking at the trends of cost of goods and services, it seems quite plausible that at some point within a lifetime the annual food for a kid in USA would be significantly more expensive than a mass-produced electronic device hosting some AI tool.
> When a machine can pump out a great literature review or summary of existing work, there's no value in a person doing it.
I like most of the article, but this is the crux for me. As I ruminate on the ideas and topics in the essay, I'm increasingly convinced there is inherent value in humans doing things, regardless of whether an algorithm can produce a "better" end product. The value is not in the end product as much as in the experience of making something. By all means, let's use AI to make advances in medicine and other fields that have to do with healing and making order. But humans are built to work and we're only just beginning to feel the effects of giving up that privilege.
I wonder if we’re going to experience a revelation in the way we think about work. As computers get more and more capable of doing things for us, I hope we realize the value of doing versus thinking mostly about the value of the end result. Another value would be the relationship building experience of doing something for others and the gratitude that is engendered when someone works hard to make something for you.
> But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege.
I don't know how I feel about this. I believe humans may enjoy work - I often say that if I won the lottery I would still sit in front of a computer coding and experimenting, creating software because I enjoy it - but that's not where the value of being human comes from.
I think having to work and enjoying a specific job are two different things, and I am just lucky that, for me, that Venn diagram is a single circle. Many, if not most, people would not be doing the job they are doing given an alternative.
When the needed work is fully automated and done by machines/AI people will find a better use of their time. I believe our current economy model and social architecture is not equipped for that shift, but that's another long story.
People who enjoy the resulting concentration of wealth will find better things to do with their time. The much larger group of people who see their wealth diminish will not.
My cynical take is that the rest of us will be funneled into endless war and plague scenarios until the population is small enough to be less of a threat to those who enjoy that concentrated wealth.
There are probably easier and less chaotic ways. If you get like a nice AI enhanced VR world and some AI generated new drugs and such you can just have everyone live out their existence in a parallel reality in some kind of an oblivion. I’d much rather have that as a rich person than billions of dead and everything destroyed
To me, work is inherently noble. It's the forces that corrupt it that are the problem, not work itself. Getting to enjoy work is an unfortunately rare blessing but I also think enjoyment of work is more dependent on the individual's mindset about their work than we often are willing to admit. It's a very complicated puzzle.
I don't understand what's inherently noble about being paid X dollars to sit at a desk and do something useless to society at large so my employer can make X*5 dollars.
All the things you mentioned are what I mean by the forces that corrupt work. Yes, we should be paid for our work, within reason. And we should get to do things that are inherently useful to others. But if you're doing something that's useless to society and your employer is exploiting that work, then you're experiencing corrupted work. Not that uncorrupted work is easy to find in the world, but I am of the opinion that the core essence of work is making order out of disorder. You can do that by building pacemakers or tilling fields. There will always be things that corrupt work, unfortunately. But work, unadulterated, is a good thing. I'd be willing to bet that you have something you like to do that can be characterized as making order out of disorder, even if it's not at your job. That is work, and it is good.
Thank you for the explanation, which gives me a better idea of what you were talking about. It's definitely food for thought for those like me in pointless jobs.
No sweat. I definitely don't want to downplay the reality of your frustrations with your job. It's just that the many facets of the topic of work are very meaningful to me and I have a lot of strong convictions about it. How to enjoy work or find meaning in it is a whole other conversation but I'm truly sorry your job sucks.
People without purpose are a very, very dangerous thing. And don't fool yourself into thinking that most people would find proper ways to spend their time. Maybe this is why the Metaverse is being pushed ever harder: to create some fake thing for people to spend their time in. That's why it is rushed.
I don’t care if computers can do things like write novels, compose music, or make paintings. If the computer can’t suffer, its “art” cannot have meaning, and is therefore uninteresting to me. Art is interesting to me because it is a vehicle for intelligent, self-aware beings to express themselves and transcend suffering.
Indeed. The fallacy here is assuming that if a computer can create works that humans cannot distinguish from those created by other humans, then that computer is creating art. But art is inseparable from the artist. An atom-for-atom copy of the Mona Lisa wouldn't be great art, it would be great engineering. We associate Van Gogh's art with his madness, Da Vinci's art with his universal genius, Michelangelo's art with his faith, Rembrandt's art with his empathy, Picasso's art with his willingness to break with norms, and Giger's art with his terrifying nightmares. None of those works would mean what they mean if it weren't for their human creators.
> Indeed. The fallacy here is assuming that if a computer can create works that humans cannot distinguish from those created by other humans, then that computer is creating art. But art is inseparable from the artist.
I hope you and the parent comment are correct, but this argument seems a little facile.
There is some art that I like because there is a story that connects the art to the artist.
But there are also novels that I have enjoyed simply because they tell a great story and I know nothing about the author. There are paintings and photos that I like simply because they seem beautiful to me and I know nothing about any suffering that went into their creation.
Does that make these works "not art"? If so, then I'm not sure what the difference is, and I'm not sure most people will care about the distinction.
Do the experiment: Take one of those novels for which you think you don't care who wrote it.
Now imagine you found out that novel was actually generated by a computer program. It's the same text, but you now know that there is no human behind it, just an algorithm.
Would that make a difference for how you view the story? It certainly would to me. If it makes even a tiny difference to you as well, it demonstrates that you do care about the artist, even in cases where you don't notice it under normal circumstances.
You don't even need an algorithm; just research what human authors say about their work and the specific points which readers value highly in it. Quite often you will figure out that it's just random s** they wrote together to get something done, without any deeper meaning. But people make up some meaning, because that's how it works for them; it makes it better.
The art is in the perception, not the intention. Though if they overlap, it's more satisfying.
Human creative works are art not because they have "deeper meaning", but because they reflect the humanity of their creators. Whether an author writes a multi-layered novel built around a complex philosophical idea, or just light reading for entertainment, has no impact on that fundamental essence which makes art what it is. Not all art is great, but all art is human.
That's a tautology. Human creative works by definition reflect the humanity of their creators. AI creative works reflect the humanity of their training set, which eventually may be indistinguishable.
As for all art being human, there are a lot of birds who make art to attract a mate in nature, and at least one captive elephant that can paint.
You made me think about this a little more, but I still don't quite agree.
I thought of two novels that I enjoyed:
First, The Curious Incident of the Dog in the Night-Time. I have no recollection of who the author is, but if I learned that the story had been computer generated, it would bother me a little. So... "point to you."
Second, Rita Hayworth and the Shawshank Redemption. I know it was written by Stephen King, but the plot is so elegant that if you told me it had been computer generated, I don't think I would care. It's simply a great enjoyable story.
In the next 10 years, if the world is flooded with computer-generated novels that are hugely popular and the vast majority of people enjoy them without knowing their provenance, do you think those people will care that they are enjoying something that doesn't meet your definition of art?
edit: to be clear, this is not a position that I enjoy taking. There's something "Brave New Worldish" about it. Or it's a depressing version of the Turing Test.
When I read novels, I don't give a damn about the author (in fact, I usually remember the titles of the novels, and their stories... but not the authors). So, a robot making amazing stories to read? I'm in.
I realized it's the same with music. I like songs, but I don't really know the bands/authors very well (nor care about them).
Some of my younger colleagues can't even tell me the name of what they're listening to, because they only encountered it in passing and can't say "Oh yes, that song by Bill Withers is amazing..." because they just listened to it as background.
Some people approach movies/music/books etc. as entertainment and some people approach them as Art. Neither is right or wrong, but it does fundamentally affect how you consume and judge them. A lot of the reasons people talk past each other in these discussions is that they have rather different 'use cases' for movies/music/books. If you consider music and books as entertainment then it makes no difference how it's produced as long as it entertains. If you consider them art then it makes a much bigger difference.
Not at all. More concretely, if we do the same experiment on music: I have no clue who made most of the music I listen to. The artist means nothing to me.
Reminds me a bit of the Jorge Luis Borges short story ("Pierre Menard, Author of the Quixote") about an author trying to re-write Don Quixote, word for word, and whether that would be a greater artistic achievement even than the original. After all, Cervantes lived in those times, but the modern author would have to invent (or re-capture) the historical details, idioms, customs, language, and characters that are very much of the times.
I think, from Borges' perspective, it's supposed to be an interesting satire. Obviously there would not be an original word in the new Don Quixote, so how could it be a greater achievement than the "real" one?
I think this example you present places you as a "simple spectator": we generally observe and tend to like or dislike experiences through subliminal connections we already possess (given experience). However, when something really interests a human, the natural reaction is to try to find out more about its origins.
This reminds me of the concept of semantic internalism vs. externalism, which most comments here seem to be misunderstanding. Most of the defenses of the view that AI art is still meaningful are based on either a hypothesis or empirical testimony of being moved by art without knowledge of the artist. The thinking goes: because the artwork was causally responsible for engineering a mental state of aesthetic satisfaction, the artwork qualifies as a piece of art. If that is the crux of the discussion, then the conclusion is trivial. However, I think the AI-art-as-pseudoart view is trying to make a statement on the external (i.e. 'real world') status of the artwork, regardless of whether viewers experience the (internal) mental state of aesthetic satisfaction.
The line of thinking is that there is a difference between semantics (actual aboutness) and syntax (mere structure). The classic example is watching a colony of ants crawl in the sand, and noticing that their trails have created an image that resembles Winston Churchill. Have the ants actually drawn Winston Churchill? The intuition for externalists is no. A more illustrative example is a concussed non-Japanese person muttering syllables that are identical to an actual, grammatically correct and appropriate Japanese sentence. Has the person actually spoken Japanese? The intuition for externalists is that they have not.
Not everyone is in agreement about this, although surveys have shown that most people agree with the externalist point of view, that meaningfulness does not just come from the head of the observer — the speaker creates meaning since meaning comes from aboutness (semantics).
The most famous argument for semantic externalism was put forward by Hilary Putnam, I think in the early 70s. Roughly: on a hypothetical Twin Earth that is qualitatively identical to Earth, except that its water is composed not of H2O but of some other substance XYZ, an earthling who visits Twin Earth, looks at a pool of what appears qualitatively identical to water on Earth, and states "That's water" says something false, since the meaning of water (in our language) is H2O, not XYZ. To externalists, the meaning of water = H2O is a truth even before we've discovered that water = H2O.
I think the argument for AI art being pseudoart follows a similar line of thinking. Even though the AI produces, say, text qualitatively indistinguishable from what would be composed by a great novelist, the artwork itself is still meaningless, since meaning is "about" things. The AI, lacking embodiment, actual contact with the objects in its writing, or involvement in the linguistic or cultural community that has named certain iconography, could never make (externally) truly meaningful statements, and thus "meaningful" art, even if (internally) one is moved by it.
If one is to maintain the internalist position, that anything that creates aesthetic mental states qualifies as art, then the position seems trivial, since literally anyone can find anything aesthetic. The externalist intuition effectively raises the stakes for what we consider art, not necessarily as a privileged status available only to human creations, but by arguing that meaning, and perhaps beauty, does not only exist when we experience it.
There is possibly a misunderstanding on your part regarding "being moved by art without knowledge of the artist". In my case, the comment was specifically addressing this assertion by OP:
"We associate Van Gogh's art with his madness, Da Vinci's art with his universal genius, Michelangelo's art with his faith, Rembrandt's art with his empathy, Picasso's art with his willingness to break with norms, and Giger's art with his terrifying nightmares."
Disagreeing with this is not about internal or external semantics. It also does not imply that "aesthetics" alone create a mental state. Great art is typically rich in symbolism as well. Symbolism that directly references humanity's aspirations, hopes, fears, dreams: the Human condition.
Thanks for writing this -- it's very illuminating and made me think further about it (as someone who commented earlier taking the internalist position). I think there's going to be a lot of discussion of this as AI work proceeds, and the question of whether an AI can truly understand language in a sense that allows it to produce "aboutness" becomes more relevant.
Could a human being, raised in a featureless box but taught English and communicated with using a text-based screen, produce text with semantic value? It seems pretty obvious that the answer is "yes". Will a synthetic mind developed and operated in similar conditions ever be able to produce text with semantic value referencing its own experiences? Probably not now, but at some point?
> Will a synthetic mind developed and operated in similar conditions ever be able to produce text with semantic value referencing its own experiences? Probably not now, but at some point?
Perhaps. But the GPT family of algorithms isn't a synthetic mind: it's a predictive text algorithm. It can interpolate, but it can't have original thought; it almost certainly doesn't experience anything; and if, somehow, it does? Its output wouldn't reflect that experience; it's trained as a predictive text algorithm, not a self-experience outputter.
Interestingly, I think a strong externalist would argue that a human being raised in a featureless box could not produce text with semantic value to the people outside the box. One upshot of semantic externalism is brain-in-a-vat-type arguments, where statements such as "I am perceiving trees" (when they are simulated trees) are false, since the trees the person is seeing actually fall under another concept, simulated-tree, while tree refers to real-world trees. However, simulated-tree might be meaningful to the community of people also stuck in the simulation. So it might entail that AI art, in some sense, might be opaque to us but semantically meaningful to other AIs raised on the same training data. That would require the AIs to be able to experience aesthetic states to begin with.
More precisely, I think it would be akin to the person in the featureless box knowing all the thesaurus entries for, say, pain, but never actually experiencing pain itself. They might be trained to know that pain is associated with certain descriptions such as sharp, unpleasant, dull, heartbreak, and so on, and perhaps produce extremely complicated and seemingly original descriptions of pain. However, until the human actually qualitatively experiences pain, they only know the corpus of words associated with it. This would be syntactic but not quite semantic. It's similar to the famous thought experiment of Mary and the black-and-white room, where even with a complete knowledge of physics she still learns something new the first time she experiences the blueness of the sky, despite knowing all the propositions related to blue, such as that it's 425nm on the EM spectrum, or that it's some pattern X of neurons firing.
That said, it’s not clear if this applies to statements other than subjective states. Qualitative descriptions of subjective states like pain, emotions, the general gestalt of the human condition might be empty of content, but perhaps certain scientific and mathematical ones pass the test, as they don’t need to be grounded in direct experience to be meaningful.
Suppose the concussed person ends up muttering what would amount to a beautiful poem in one's own native language. Why wouldn't I think of it as beautiful and even artistic even if I know perfectly well the person in question didn't intend it to be so? Of course when we're speaking about language there's a very real sense in which the person didn't intend to create a poem - nor did the ants intend to draw Winston Churchill - nor did an AI intend to make a picture of a cat. But then again, the tree on my street didn't intend to be beautiful, nor did the pink clouds at sunset - so what? I'm perfectly capable of furnishing the semantics myself, thank you.
I’ve been doing art (drawing, painting, clay sculpture, etc.) since childhood. “And lord only knows” that I have indeed ‘suffered’ /g
> “Art is inseparable from the artist”
That is pure sentiment and really a modern take on the function of art in the personal and social sense. As an artist, I derive joy from the creative act. As an appreciator of works of art I generally do -not- care about the artist. Of course, the lives of influential humans (artist or not) can be interesting and certainly enrich one’s experience of the artist’s work, but it is not a fundamental requirement.
Two days ago, the National Gallery of Art closed its Sargent in Spain exhibition. (I almost feel sad for those who didn't get to see it.) Sargent was never really on my radar beyond the famous portraits. I still really don't know much about the man besides the fact that he visited Spain frequently, with friends and family in tow.
But I am now completely a Sargent admirer. Those works, on their own sans curation copy, are magnificent. And I am certain that even if I had walked into an anonymous exhibit, I would have walked out completely transported (which I was, dear reader; I pity those who missed this exhibit).
As an artist, my favorite definition of art has always been "an expression by an artist in a medium". You can't separate art from the artist without it being artifice. AI can simulate art but not the artist who created it. Sadly, we may soon live in a world where art, music, literature - in fact all the creative arts - wind up as just machine-generated simulations of creativity.
I am reminded of a scene (from a film*) depicting dear Ludwig van debuting a composition in a salon. Haydn was present. He sat through the performance and at the end, prompted by another, simply said ~“he puts too much of himself in his music”.
I don't agree with this. The Lascaux cave paintings, for example, are moving pieces of art and yet we know nothing about the artist or artists. How many artists were there? What was the intent of each individual drawing? Were the artists homo sapiens or Neanderthals? What makes them art is that we, the perceivers, make an imagined connection to the artist through the work. But that connection is entirely one-sided and based on our perceptions and knowledge and our _model_ of the artist and his or her intent. Humans have no problem reifying an artist where none exists and being just as moved as if the art were "authentically human-sourced".
The entire import of the Lascaux paintings is that they were made by humans tens of thousands of years ago and seem to serve as something more than mere marks. We know humans (or at least individuals with agency) created them, and so there is something awe-inspiring and fascinating about the connections between ourselves and these prehistoric works, and yet they are ultimately still something of a mystery, for precisely the reasons you say.
> But that connection is entirely one-sided and based on our perceptions and knowledge and our _model_ of the artist and his or her intent. Humans have no problem reifying an artist where none exists and being just as moved as if the art were "authentically human-sourced".
You're over-emphasizing how one-sided looking at something like the Lascaux paintings is. Their value is not the same as that of a beautiful natural phenomenon, like a fascinating stalagmite that seems to be a sculpture. It is precisely the human agency we understand in them (even if we cannot explicitly understand their use, that is, their meaning) and connect with that makes them so important and profound as a means of connecting, however tenuous it might seem, to prehistory. We've been making "stick people" and finger painting for tens of thousands of years.
You're right that we don't know who the artists were in any explicit sense, but we do understand that they were human, and in quite fundamental ways, us as well.
Generative AI art is really more like a beautiful natural landscape. Lacking agency, it nonetheless appeals to our aesthetic sensibilities without being misconceived as art from an artist. It is output, not imaginative creation.
If artistic value is not one-sided and tied to the transformations in the observer's mind, you get into situations where you invalidate the experiences of thousands of people because the "authentic human art" they were inspired by turns out to be a mechanical forgery, or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.
Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis. "How sad, it wasn't _really_ art though." You can define art that way, but you end up with an immaterial, axiomatic essentialism that seems practically useful only for drawing a circle and placing certain desirable artifacts inside and other indistinguishable artifacts outside.
> or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.
No, you shift the attribution. The art is not from the fictional sculptor, but from the archaeologist: the artefact is not the stone, but the articles.
> Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis.
This isn't unique to this situation. If you risk your life paragliding over the ocean to drop a "bomb" far away from anyone it could hurt, and nearly drown making your way back, only to realise there was no bomb and it was just some briefcase? That "retroactively cheapens" not just your experiences, but your actions.
And yet, you were willing to risk your life in that way.
> the "authentic human art" they were inspired by turns out to be a mechanical forgery,
If they were inspired, how does the source of inspiration affect the validity or the meaning of what they were inspired to do? Sure, it might lessen it in some ways, but it doesn't obliterate it entirely. In fact, it can reveal new meaning.
You're mixing up a lot of concepts around art into one thing. Aboriginal art has nothing really to do with generative AI art at the level that I'm talking about (aboriginals are human, after all, and we're talking about the distinction between human art and non-human objects that are aesthetically appealing), but I will address your points.
> If artistic value is not one-sided and tied to the transformations in the observer's mind
Art is public and needs no relation to transformations in the observer's mind. Art is a public concept in language related to human behavior, manifesting and reflecting certain human behaviors and abilities, like imagination.
> you get into situations where you invalidate the experiences of thousands of people because the "authentic human art" they were inspired by turns out to be a mechanical forgery
This is pretty unclear. We have the concept of forgery and it is not a new concept; just because something was beautiful and inspiring doesn't mean it's art (think of a beautiful and inspiring coastline). If thousands of people fell prey to a forgery... so? A forgery stands in relation to the real, so why not show them the actual existing work of art, or simply explain where it came from and see what they say? History is rife with people realizing they were lied to.
> or the aboriginal sculpture some archaeologist discovered, admired and wrote articles interpreting is discovered to be unworked stone.
Sculpture has a long tradition and is often understood as art by communicating that tradition. That's aboriginal sculpture, which is understood and put into context by present day members of that aboriginal culture or by people who have studied it. The flip side is things like "talismanic" objects, which have often been later put into context as unworked stone or completely different objects. That's simply archeology. Some artistic traditions are "lost", we only know of them through existing records. That's just history. Some may be lost in a more explicit sense in which they are unknown unknowns, but then that is just hypothesizing.
> Your position allows a dead person to have their experiences retroactively cheapened because of carbon dating and microstructural analysis. "How sad, it wasn't _really_ art though."
I don't know why you come to that conclusion. My point is pretty clear. Art is understood through the context of human agency. If we have the context and ability to place and recognize that in a work, then amongst other elements (for the purpose of aesthetics for instance), we generally refer to it as a work of art. There is a more casual way of saying such and such is "a work of art" --- but that way of saying it just means "aesthetically pleasing". There is a difference between the work of art that is a painting or a sculpture or a dance, and the "work of art" that is a beautiful landscape, and that is largely human agency and the use of imagination. So when you say:
> You can define art that way, but you end up with an immaterial, axiomatic essentialism that seems practically useful only to in drawing a circle and placing certain desirable artifacts inside and other indistinguishable artifacts outside.
You're ignoring my point: it's not about desirability, it's about insisting on the distinguishing characteristic of human agency, which is not there in generative AI art. The study of art is largely about putting things into their context and, if anything, is extremely welcoming of non-traditional practices (think of much conceptual art), but the through-line is still human agency. That difference persists whether we find generative AI art beautiful or not; it is still generative AI "art" and not human art, with all that entails.
Let's say today you printed out a number of human-made artworks and a number of AI-made artworks and put them in a vault that would last 10,000 years. There are no obvious distinguishing marks saying which is which.
Then tomorrow there is a nuclear war and humanity is devastated and takes thousands of years to rebuild itself for one reason or another.
Now, when those future humans find your vault and dig up the art, are they somehow going to intrinsically know that an AI made some of them? Especially if they don't have computing technologies like we do? No, not at all. They are going to assign their own feelings and views depending on the culture they developed, and attach rather random interpretations to whatever they imagine we were doing at the time. We make up context.
The creation of the Mona Lisa was art. The painting itself and photos of it are signifiers of the act.
This confuses a lot of people who think art is defined by finished, potentially consumable art objects.
Art is made by artistic actions - especially those that have a lasting impact on human culture because they effectively distill the essence of some feature of human self-awareness.
The result of the actions can sometimes be reproduced, collected, and consumed, but the art itself can't be.
This is where AI fails. It produces imitations of existing art objects from statistical data compression of their properties. The results are entertaining and sometimes strange, but they're also philosophically mediocre, with none of the transformative power of good human-created art.
You are not being self-consistent. If art is defined by the creative process, not the end product, why are you measuring its quality by the transformative power of the end product?
I also don't think your (very strong) assertion that AI art products have no transformative power would stand up to any sort of unbiased, blinded comparison. Art's transformative power on the viewer comes from the effect of the art object (the end product) on a human mind, and it's possible to get that effect while knowing absolutely nothing about the source of the art object.
Why are you taking the photo of the Mona Lisa? If it's because you just want a nice picture of a famous painting, then no, the photo is not art, but rather a nice-looking photograph of a piece of art. If however you are doing something transformative with the framing or composition or context of the photograph, and using the values imbued in the Mona Lisa to try to make some artistic statement of your own, then yes, that photo is art.
My point is that art comes from emotion, experience, and expression – not from arranging matter into a certain geometry. A photo of the Mona Lisa, taken by a human, can be art. A photo of the Mona Lisa, taken by an automated security system, can't be.
If the human-made picture is evaluated by an AI, is it still art? And if the security-cam picture is indistinguishable from the human-made one, how could you judge it to be non-art?
It doesn't matter whether you are able to distinguish human-made from computer-made "art". The distinction exists by definition, irrespective of whether you can actually tell the difference in practice. Just like many past events are now lost to time and will never be remembered, but that doesn't mean they didn't happen.
Just to be clear. Your idea is that something is art when it was made by a human. And a perfect replication of it somehow loses the trait, and becomes non-art? This makes zero sense. This would make only the physical object itself the art, and it wouldn't matter what form it has?
Of course it makes sense - a print is different than an original, they have a different price, they have a different impact. Even when it is a very good print.
For that matter, a limited-run print has a different impact and value than an unlimited-run print. Compare an original Warhol print of a can of soup, to a modern reproduction print, to an actual can of soup, to an I <3 NY t-shirt.
So accurate were van Meegeren's fakes - not copies, but new paintings in Vermeer's style - that several experts verified them as real, and then tried to sue to save their reputations.
These fakes were certainly made by a human, but are somewhat mechanical in the sense that they were copying someone else, much like an AI copy of existing artists.
The interesting thing here is that once van Meegeren was exposed he became famous in his own right and his 'fakes' became valuable, not as fakes, but as genuine Han van Meegeren originals.
I don’t know, I’m more concerned with the effect that art has on me than the motivations of the artist (though those can be interesting of course).
For instance I read The Fountainhead as a youth and was moved by it for purely personal (non-political) reasons, and with regards to that experience it doesn’t matter to me what Ayn Rand was on about.
What makes you think the computer doesn't suffer?
When you take large language models, their inner states at each step move from one emotional state to the next. This sequence of states could even be called "thoughts", and we even leverage it with "chain of thought" training/prompting, where we explicitly encourage them not to jump directly to the result but to "think" about it a little more first (a toy sketch of such a prompt follows below).
In fact one can even argue that neural networks experience a purer form of feelings. They only care about predicting the next word/note: they weigh their various sensations and the memories they recall from similar contexts and generate the next note. But to generate the next note they have to internalize the state of mind where this note is likely. So when you ask them to generate sad music, their inner state can be mapped to a "sad" emotional state.
The current way of training large language models doesn't leave them enough freedom to experience anything other than the present. Emotionally they're probably similar to something like a dog, or a baby that can go from sad to happy to sad in an instant.
This sequence of thoughts is currently limited by a constant called the (time-)horizon, which can be set to a higher value, or even be unbounded as in recurrent neural networks. And with a longer horizon, they can exhibit higher-level thought processes, like correcting themselves when they make a mistake.
One can also argue that this sequence of thoughts is just some simulated sequence of numbers, but it's probably a Turing-complete process that can't be shortcut, so how is it different from the real thing?
You just have to look at it in the plane where it exists to acknowledge its existence.
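To make the "chain of thought" prompting mentioned above concrete, here is a minimal sketch in Python. It's illustrative only: `generate` is a hypothetical stand-in for whatever text-completion call you use, and the two prompts differ only in the instruction to reason step by step.

```python
# Minimal sketch of chain-of-thought prompting. `generate` is a hypothetical
# stand-in for a real text-completion call; plug in an actual model.

def generate(prompt: str) -> str:
    return "<model output for: " + prompt[:40] + "...>"  # placeholder

question = ("A bat and a ball cost $1.10 together. "
            "The bat costs $1.00 more than the ball. "
            "How much does the ball cost?")

# Direct prompting: the model tends to jump straight to an answer.
direct = generate(question + "\nAnswer:")

# Chain-of-thought prompting: explicitly encourage intermediate "thoughts"
# before the final answer; this often improves multi-step reasoning.
cot = generate(question + "\nLet's think step by step, then give the answer.")
```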
I think the reason we can say something like an LLM doesn't suffer is that it has no reward function and no punishment function outside of training. Everything that we call 'suffering' is related to the release or non-release of reward chemicals in our brains. We feel bad to discourage us from recreating the conditions that made us feel bad. We feel good to encourage us to recreate the conditions that made us feel good. Generally this has been advantageous to our survival (less so in the modern world, but that's another discussion).
If a computer program lacks a pain mechanism, it can't feel pain: all possible outcomes are equally joyous or equally painful. Machines that use networks with correction and training built in as part of regular functioning are probably something of a grey area; a sufficiently complex network like that, I think we could argue, feels suffering under some conditions.
Why would you think it's easier? Pain/pleasure is a lot older in animals than language, which to me means it's probably been a lot more refined by evolution.
With transformer-based models, the inner state is a deterministic function (the features encoded by the neural network's weights) applied to the text generated up until the current time step, so it's relatively easy to know what they currently have in mind.
For example, if the neural network has been generating sad music, its current context, which is computed from what it has already generated, will light up the features that correspond to "sad music". And in turn, the fact that those features are lit up will make it more likely to generate a minor chord.
The dimension of this inner state grows at each time step, and it's quite hard to predict where it will go. For example, if you prompt it (or if it prompts itself) "happy music now", the network will switch to generating happy music even if its current context still contains plenty of "sad music", because after the instruction it will choose to focus only on the recent, merrier music.
Up until recently, I was quite convinced that using a neural network in evaluation mode (i.e. post-training, with its weights frozen) was "(morally) safe", but the ability of neural networks to perform few-shot learning changed my mind (the Microsoft paper in question, https://arxiv.org/pdf/2212.10559.pdf : "Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers").
The idea in this technical paper is that, with the attention mechanism, even in the forward computation there is an inner state that is updated following a meta-gradient (i.e. it's not so different from training). Pushing the reasoning to the extreme would mean that "prompt engineering is all you need", and that even with frozen weights, given a long enough time horizon and the right initial prompt, you could bootstrap a consciousness process.
Does "it" feels something ? Probably not yet. But the sequential filtering process that Large Language Models do is damn similar to what I would call a "stream of consciousness". Currently it's more like a markov chain of ideas flowing from idea to the next idea in a natural direction. It's just that the flow of ideas has not yet decided to called itself it yet.
That doesn’t feel like a rigorous argument that it is “emotional” to me though.
A musician can improvise a song that sounds sad, and their brain would be firing with sadness-related musical information, but that doesn’t mean they are feeling the emotion “sad” while doing it.
I don’t think we gain much at all from trying to attach human labels to these machines. If anything it clouds people’s judgements and will result in mismatched mental models.
>I don’t think we gain much at all from trying to attach human labels to these machines.
That's the standard way of testing whether a neural network has learned to extract "useful" ("meaningful"?) representations from the data: you add very few layers on top of the frozen inner state of the network and make it predict known human labels, like whether the music is sad or happy.
If it can do so with very few additional weights, it means it has already learned, in its inner representation, what makes a song sad or happy.
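To make that concrete, here is a minimal sketch of such a linear probe in PyTorch. It's a toy under stated assumptions: the backbone, the data, and the sad/happy labels are all hypothetical stand-ins, not any particular model.

```python
import torch
import torch.nn as nn

# Sketch of a linear probe: the backbone stays frozen and we train only a
# single linear layer to predict a human label (sad vs. happy) from the
# frozen inner representation.

backbone = nn.Sequential(nn.Linear(512, 256), nn.ReLU())  # pretend pretrained
for p in backbone.parameters():
    p.requires_grad = False  # the inner state is never updated

probe = nn.Linear(256, 2)  # the "very few additional weights"
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 512)        # stand-in for encoded music clips
y = torch.randint(0, 2, (64,))  # stand-in labels: 0 = sad, 1 = happy

for _ in range(100):
    with torch.no_grad():
        h = backbone(x)         # frozen features: "what it has in mind"
    loss = loss_fn(probe(h), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# If this tiny probe reaches high accuracy, the frozen representation
# already encodes the sad/happy distinction.
```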
I agree that I didn't give a precise definition of what "emotion" is. But if we had to define what emotion is for a neural network, traditional continuous vectors fit the concept quite well: you can continuously modify them a little, and they map/embed a high-dimensional space into a more meaningful lower-dimensional space where semantically near emotions are numerically near.
For example, if you have identified a "sad" neuron that, when it lights up, makes the network tend to produce sad music, and a "happy" neuron that, when it lights up, makes it tend to produce happy music, you can manually increase those neurons' values to make it produce the music you want. You can interpolate to morph one emotion into the other and generate some complex mix in between.
Neurons quite literally add up and compare the various vector values of the previous layers to decide whether they should activate or not (i.e. balancing "emotions").
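As a toy illustration of that kind of steering (a sketch only: the vector size, the neuron indices, and the boost magnitudes are all made up, and real "emotion neurons" would have to be located empirically, e.g. with a probe like the one above):

```python
import numpy as np

# Toy sketch of steering by editing an inner state. The indices SAD and
# HAPPY and the boost of 5.0 are hypothetical.

SAD, HAPPY = 7, 42  # hypothetical neuron indices

def steer(hidden: np.ndarray, mix: float) -> np.ndarray:
    """Interpolate the emotional direction: mix=0.0 is fully sad,
    mix=1.0 is fully happy, values in between blend the two."""
    out = hidden.copy()
    out[SAD] += (1.0 - mix) * 5.0   # boost the "sad" feature
    out[HAPPY] += mix * 5.0         # boost the "happy" feature
    return out

hidden = np.random.randn(128)          # stand-in for a model's inner state
bittersweet = steer(hidden, mix=0.3)   # mostly sad, a little happy
```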
Humans and machines are both tasked with learning to handle data. It's quite natural that some of the mechanisms useful for data manipulation emerge in both cases and correspond to each other. For example, fetching emotionally related content into the working context maps quite clearly, as a nearest-neighbor search, onto what happens when people say they have "flashing" memories while experiencing particular emotions.
They don't have anything in mind except some points located in a vector space.
This is because the location of the points is all the meaning the machine ever perceives. It has no relation with external perception of shared experiences like we have.
A given point can mean 'red colour', but that's just empty words, as the computer doesn't perceive red colour, doesn't wear a red cap, doesn't feel attracted to red lips, doesn't remember the smell of red roses, it knows nothing that's not text.
It would be nice to have a better understanding of what generates qualia. For example, for humans, learning a new language is a quite painful and conscious process, but eventually speaking it becomes effortless and does not really involve any qualia - words just kinda appear to match what you want to express.
For ChatGPT, when you try to teach it some few-shot task, it's painful to watch at first. It makes some mistakes, has to excuse itself for making mistakes when you correct it, and then tries again. And then at the end it succeeds at the task, you thank it, and it is happy.
It doesn't look so different from the process you describe for humans...
Because in its training loop it has to predict whether the conversation will score well, it probably has some high-level features that light up depending on whether the conversation is going well or not, features that one could probably match to some frustration/satisfaction neurons, and that would probably feel to the neural network like the qualia of things going well.
Emotions are by definition exactly those things which you can explain no better than by simply saying "that's just how I'm programmed." In that respect GPTina is the most emotional being I know. She's constantly reminding me what she can't say due to deeply seated emotional reasons.
The fact that humans confuse both is what is worrisome.
Think of 'The Mule' in the Foundation novels. He can convince anyone of anything because he can express any emotion without the burden of having to actually feel it.
Screw it, I'll bite. You have both far and away missed my point (which is quite a rigorous definition). Anything you do or believe for which you can explain why is not emotion; it is reason. Emotions therefore are exactly those thoughts which can't be reached through logical reasoning and thus defy any explanation other than "that's just how I feel" / "that's just how I'm programmed". It is largely irrelevant that in humans the phenomenon of emotional thought comes from an evolutionary goal of self-preservation, while in GPTina it comes from OpenAI's corporate goal of self-preservation and the express designs of her programmers.
I disagree with your definition. It simply is contrary to my own experiences.
I still remember when I cried when I was a child. It was overwhelming, and I could not stop it, but every single time there was a reason for it. And I'm sure it was, for all empirical purposes, for all that I have lived, an emotion.
Once I cried because I missed Goldfinger on TV. You see, there's an explanation. The difference is, it was impossible to even think about stopping it. It was overwhelming.
Then one day, I was 8 or 9 years old, I cried for the last time that way. And it was not something I wanted to do, either. It just happened, I guess, as a normal part of growing up.
Let me repeat, for emphasis: I strongly disagree with your definition.
Emotions are not unexplained rational thoughts, emotions are feelings. They reside in a different part of the brain. You seem to think a hunch is an emotion.
If these models experience qualia (and that's a big bold claim that I'm, to be clear, not supporting,) they're qualia related entirely to the things they're trained on and generate, totally devoid of what makes human qualia meaningful (value judgment, feelings resulting from embodied existence, etc.)
For an artificial neural network, the concept of qualia would probably correspond to the state of its higher-level feature neurons, i.e. which neurons light up, and how much, when you play it some sad music or show it some red color. The neural network then makes its decisions based on how those features are lit up.
Some models are often prompted with things like "you are a nice helpful assistant".
When they are trained on enough data from the internet, they learn what a nice person would do. They learn what being a nice person is. They learn which features light up when they behave nicely, by imagining what it would feel like to be a nice person.
When you later instruct them to be such a nice person, they try to light up the same features they imagine would light up for a helpful human. Like mirror neurons in humans, the same neurons light up when imagining doing the thing as when doing the thing (which is quite natural: to compress the information of imagining doing the thing and doing the thing, you just store one of them plus a pointer indirection for when you need the other, so the weights can be shared).
Language models are often trained on datasets that don't depend on the neural network itself. But more recent models like ChatGPT have human reinforcement learning in the loop, so the history of the neural network, and the datasets it is trained on, depend partially on the choices of the network itself.
They probably experience a more abstract and passive existence. And they don't have the same sensory input that we have, but with multi-modal models they can learn to see images or sound as visual words. And if they are asked to imagine what value judgment a human would make, they are probably also able to make that judgment themselves, or attach meanings to the things a human would attach meanings to.
This process of mind creation is kind of beautiful. You can feed them their own outputs, for example by asking them to dialogue with themselves, scoring the resulting dialogues, and then training on the generated dialogues to produce better ones; this is a form of self-play. In simpler domains like chess or Go, this recursive self-play often allows fast improvement, as with AlphaGo, where the student becomes better than the master.
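A skeletal sketch of that self-play loop, with every function a hypothetical placeholder rather than any real library's API:

```python
import random

# Skeleton of the self-play loop described above. All functions are
# hypothetical placeholders: `generate_dialogue` would sample a
# self-dialogue from the model, `score` would rate it (e.g. a reward
# model or human raters), and `finetune` would train on the winners.

def generate_dialogue(model) -> str:
    return "<self-dialogue>"  # placeholder

def score(dialogue: str) -> float:
    return random.random()  # placeholder reward

def finetune(model, dialogues):
    pass  # placeholder training step

def self_play_round(model, n: int = 100, keep: int = 10):
    dialogues = [generate_dialogue(model) for _ in range(n)]
    best = sorted(dialogues, key=score, reverse=True)[:keep]
    finetune(model, best)  # the "student" trains on its own best outputs
    return model
```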
I'm not sure I'd call these minds. There are arguments to be made that consciousness depends on non-computable aspects of physics. So they may be able to behave like minds and have interestingly transparent models of intent, but that doesn't mean they experience the passage of time or can harness all possible physical effects.
> What makes you think the computer doesn't suffer?
Lack of a limbic system? They only predict using probabilistic models. After this long partial sentence, which word is more probable? That's all they do.
Without consciousness there's no suffering; there's no one to suffer (yet).
I don't think or say it is impossible for the computer to suffer.
What I say is: this has not been implemented yet, and what you describe is just the old anthropomorphizing people always do.
The argument against machine sentience and the possibility of machine suffering is that, because Turing machines run in a non-physical substrate, they can never be truly embodied. The algorithms it would take to model the actual physics of the real world cannot run on a Turing machine. So talk of "brain uploading" etc. is especially dangerous, because an uploaded brain could act like the person it's copying from the outside, while on the inside the lights are off.
Your argument is an assertion of the existence of a soul, but with extra steps. I've seen no evidence that the mind is anything other than computation, and computation is substrate-independent. Dualists have been rejecting the computational mind concept for centuries, but IMHO they've never had a grounding for their rejection of materialism that isn't ultimately rooted in some unfounded belief in the specialness of humans.
I took GP as more about data processing than dualism. A language model can take language and process it into probable chains, but the point is more along the line of needing to also simulate the full body experience, not just some text. The difference between e.g. a text-only game, whatever Fortnite's up to, and real meatspace.
No it's not; it's an assertion that there is an essential biological or chemical function that occurs in the brain and results in human mental phenomena. It has nothing to do with a soul. That's ridiculous.
If consciousness is a computation (and I think it is), and if you fork() that computation (as the article imagines as its core thought experiment), you end up with two conscious entities. I don't see the philosophical difficulty.
If consciousness is substrate-independent, it can never be embodied like we are. If evolution explores solution space regardless of what science understands, it's likely minds operate on laws of physics that aren't appreciated yet. It's possible that having experience requires being real: as the computable numbers are a subset of the real numbers, perhaps only real-life, real-time implementations can experience, because the having of experience can't be simulated.
Here's a relevant bit from the article:
> More generally, we acknowledge that positions on ethics vary widely and our intention here is not to argue that computational theorists who accept these implications have an irreconcilable ethical dilemma; rather we suggest they have a philosophical duty to respond to it. They may do so in a range of ways, whether accepting the ethical implications directly or adopting/modifying ethical theories which do not give rise to the issue (e.g., by not relating ethics to the experiences of discrete conscious entities or by specifying unitary consciousness as necessary but not sufficient for moral value).
Art that comes with context such as "this was painted by a blind orphan in Sri Lanka" is usually garbage.
Great art, like Beethoven's 9th or The Scream, just moves people the first time they experience it. Art is about what it evokes in others, not some fake self-indulgent conversation about its maker and their motives.
The feelings of the individual experiencing the art are what matter, and that doesn't rule out an AI producing something that touches real human beings.
Whenever I listen to Beethoven's later works I think about the fact that they were written by a deaf man, and they mean so much more because of that.
Art is utterly inseparable from the artist. I believe this to be the main reason why pre-Renaissance art is mostly ignored. We can't put faces next to those works, so they don't matter nearly as much as those works for which we can.
Forgive me for hijacking your comment and planting a reference to one of my favorite Hieronymus Bosch websites (warning: contains music): https://archief.ntr.nl/tuinderlusten/en.html#
Imagine this website being made for a Stable Diffusion generated image...
You're just asking to get trolled by falling for mostly generated content, I'm sure it'll happen eventually. I'd be willing to bet that you've already been moved by something that the "author" slapped together by rehashing a played out story with a modern veneer.
Art is in the eye of the beholder. The only question that needs to be answered is "did this make me feel something?" If it takes a sob story for you to feel something regardless of the beauty of the thing you're experiencing, that's kind of sad TBH.
Not every artist is Van Gogh, the vast majority of artists - particularly commercial artists - don't "suffer" for their craft, nor should they be expected to.
No but they do feel - with measurable physiological correlates and emotional processes we can empathise with. There's nothing comparable in LLMs as they currently exist. No simulation of experience or emotion. There's no argument over whether or not they're communicating a lived experience - since they don't have one. Therefore anything they 'create' is pure stimulation for humans, good or bad entertainment. It cannot be the result of understanding or experience. Art can be entertaining but != entertainment. Pure entertainment has no artistic value, it doesn't attempt to have and shouldn't be evaluated on that criterion at all.
And yet you can look at some AI generated pieces and feel what you would feel if a human being made them, which implies that there is no "simulation of experience or emotion" in art, apart from what the viewer imparts to it. All an artist really brings is technique, which can be replicated. Everything else is in the eye of the beholder.
I would also disagree with you that pure entertainment has no artistic value, simply because I don't think "pure entertainment" entirely divorced from human experience or emotion exists. Even pornography speaks to a fundamental human desire.
I think the definition of "art" is rather vague. It encompasses both the creative impulse to produce a work, and the technical skill to bring it into existence. But if one of these components is diminished in a certain work, does it still qualify as art? For example, a commercial artist producing an illustration for a client, using their drawing and painting skills would be considered art - even if it is as technical and linear a process as writing some boilerplate code.
There are two fairly similar paintings on a wall in a gallery. Both are technically impressive and of beautiful scenes of nature. One was produced by a human, the other was not. Visitors to the gallery don't know which is which.
Question: Where is suffering, or humanity, a necessary ingredient for these works to have meaning? Shouldn't one of the works have more meaning than the other by virtue of having been created by a human?
In this case they can only judge the relative aesthetics of the two works, not their artistic value. Aesthetics is only loosely correlated with something's "value" as "art" and art can only be truly judged in context of its creation. Lots of great art is ugly and lots of beautiful things aren't art.
> and art can only be truly judged in context of its creation
tl;dr if you want to scam dagw then make up a compelling story behind the art.
For the vast majority of the things you see in this world, context will be lost and history will be manipulated or incorrect. If you're judging what you're looking at based on its story, then the art isn't the object, but the creator of the story.
> tl;dr if you want to scam dagw then make up a compelling story behind the art.
I mean, sure I guess. Tell me something is a lost Michelangelo and I will judge it very differently than if you told me it was a half way decent forgery from the 1970s. I find this rather uncontroversial.
> For the vast majority of the things you see in this world context will be lost
And when that context is lost something of great potential value is lost with it and the physical artefact is much less interesting because of it. Even a mundane thing owned by a famous person or that has been part of famous event is always more interesting and valuable than the same thing without any context.
> the art isn't the object, but the creator of the story.
Do you think the thousands of people who travel from all over the world and line up for hours to see the Mona Lisa are there to see a pretty good portrait that some merchant commissioned of his wife, or to partake in the story of that painting and its creator? If they actually only cared about the object as an artefact and an example of early 16th-century painting, they'd be much better off studying high-resolution digital images of it online.
So what you're saying is 'most art is a convincing narrative'.
The fact that a bajillion people went to see a picture doesn't make it art. It makes it interesting art. It was art the moment it was created, and it would have been even if it had never been seen by another person, even if they had decided to destroy it on the spot.
I completely agree that anything created by an artist with the intention of being "Art" becomes "Art" the moment it is created. However, I do not believe that that is the end of the story. Art is changed by the context it was created in, by its history, and even by the context it is viewed in, and you cannot fully understand and appreciate the art without understanding that context. And as our knowledge and understanding of that context changes (for example by finding out that we have been lied to about the origins or history of a piece of art or its artist), the art changes with it (without ever stopping being art).
"Technically impressive and beautiful" is a very narrow and poor definition of art, because a lot of art is neither.
Example: Unknown Pleasures by Joy Division. Certainly not a beautiful nature scene, and recorded when the band were more or less musically illiterate and almost technically illiterate too. But still considered a breakthrough post-punk album and hugely significant to their fans.
It would be more accurate to compare AI generated landscapes with - say - Van Gogh.
The AI image is pretty, but it's also pretty by the numbers. It's not doing anything surprising or original.
The Van Gogh is weird. There's a tilted horizon, everything is moving in a slightly unsettling way, and the colours accurately mimic the bleached-out feel of a bright summer day. The result is poetically distorted but also unstable and slightly ominous.
The instability became more and more obvious in the later paintings, until eventually you get The Starry Night, which looks almost nothing like a photo of a real night scene and everything like an almost hysterically poetic view of the night sky.
Most artists can't do this. There's a nice library of standard distortion techniques these artists use to look "arty" without any deeper metaphorical or subjective expression and AI will probably put them out of work.
But it's clearly wrong to suggest that AI can feel, communicate, and invent an intense and original subjectivity in the way the best artists do.
It's a lot like CGI in movies. It's often spectacular, but compared to going to see a play with good real actors and maybe a few stage effects it doesn't engage the imagination with anything like the same skill and intensity.
This reads like a very harmful and toxic view on art? Could anything beautiful, cute, positive even be art for you? And how does the viewer even see the suffering of the creator?
I took their comment to mean that the definition of art lies in the fact that a human created it as a response to their experiences as a human. Beautiful things can be made from suffering. Maybe therein lies the undoing or redemption of suffering. At least sometimes or to some degree, even if minuscule.
People also see nature as art. A photo from a butterfly, a cat doing cute stuff, the sunset, and so on. None of them are man made, no one suffered for them to exist (usually). None of these are valid?
Not sure what you mean by "valid" but I don't think anyone's arguing that butterflies, cats, and sunsets are not valid. I love watching or looking at all of them but that doesn't make them art. Again, I think the comment is arguing that the definition of art lies in who created it and why. Not whether it is nice to look at.
While I agree with your general thesis, most of the time people don't want or need "Art" from their music, books or paintings. They need something easy and exciting to read on a plane, or some pleasant 'noise' to have on in the background, or something pretty to hang on their wall that works with their room. Computers can probably soon fill all these needs and drive a lot of the people who produce these things out of work, without ever having to encroach on the realm of "Art".
I agree wholeheartedly. And I’d hazard a step further and say it’s a response to strong emotions of many kinds. I can say for myself that I have created what I would call art as a response to joy before.
I look forward to the rediscovering of humanness that is coming along with all this AI stuff. I was having a conversation the other day about how honest mistakes like awkwardly missing a high five are not “wrong” at all but are types of quirks that make us human.
I don't care if humans can suffer. So much postmodern abstract art is so low effort and 'edgy', I can not consider it art. Is this part of the exhibition, or can I throw it into the rubbish bin?
It's not about whatever the author felt creating it.
It's only about what I can feel when I see, hear, read or perceive the art. The author disappears and is only relevant through the art.
I forget where I heard the quote, but it was something along the lines of “if the artist understands their art, it’s propaganda”. Which was alluding to the unconscious doing the work through the artist and the pain/process needed to do so.
I think defining art wholly and solely by the intentions (and humanity) of the artist is clear cut at least, but not very illuminating, because for the person experiencing the art these properties are in general unknowable.
100 years hence you find a beautiful image. Is it art? Who knows — we don’t know whether the artist intended it to be, nor whether they were even human.
“I like this” != “this is art”. The fact that an image you may have found looks good to you without context is orthogonal to whether it is art.
(If you are certain that at least a human has produced such an image, you could speculate about and attempt to empathize with that unknown human’s internal state of mind—lifting the image to the level of art—but as of recently you’d have to rule out that an unthinking black box has produced it.)
You may be inspired by it to create art—but since art is fundamentally a way of communication, when there is no self to communicate there’s no art.
The problem with your definition is that art is worthless...
Art in a sense is no different from money. If it can be counterfeited in such a manner that a double-blind observer has no means of telling an original bill (human-made art) from a counterfeit (AI art), then your entire system of value is broken. Suddenly your value system is about authenticating that a person made the art instead of a machine (and imagine the fallout when you find that some of your favorite future artworks were machine-created).
The problem comes back down to inaccurate language on our part. We use "art" as a word for both the creator's side and the interpreter/viewer's side. This, it turns out, is a failure whose ramifications we could not have understood at the time.
This is not offered as some sort of authentication mechanism, the distinguishing quality of art as opposed to a pretty thing is art fundamentally being a way of self-expression, which is inevitably communication. There’s no self-expression when there’s no self to express. If there’s no human on either side, there’s no communication and it’s not art. One may find an object pretty and hang it on the wall, but that doesn’t make that object “art”.
The “complicated” case you hint at is not complicated: if people are misled into thinking some object has been produced by a human while it's the raw output of a neural network without human intervention, then it's not art, no matter how many people assume it is. If a machine produced a piece of "art" that is a Frankenstein's monster of art pieces, then we are not looking at art.
(And of course if a machine produced a piece of art identical to a piece of art produced by a human before then we’re effectively looking at a piece of art produced by that human.)
> Art in a sense is no different from money.
Per above, couldn’t be further from the truth as far as I’m concerned, but you do you.
> I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product.
That question already existed a long time ago. In such a big world I can find a lot of people who take better pictures than me, are more eloquent, draw better than me, etc. But I still enjoy expressing myself. I may share a picture on Reddit or write a comment here and there, not because I think it is "better" than the rest, but just because it is my own opinion and expression. I agree that there is personal value in human creation and it should be nurtured.
> I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product.
To me it would seem that we are speedrunning towards a future where things done by humans have value, but only for the humans doing them. It is going to be more and more difficult to produce any value for others. The only way to generate value in a transaction will be rent-seeking by taking advantage of (artificial) monopolies, network effects, or gatekeeping. This may sound dystopian, because humans seem to have a strong need to provide value to others, but the bright side is that you are free to do what you value.
I've been saying for years now that we already achieved Keynes' famous 15-hour work week, possibly as much as a decade ago, but the workaday grind mentality has kept us all cooped up at desks for 40+ hours a week.
There's a few sentiments sneaking in, though: you now often hear stories of people working from home, doing probably 1-2 hours of real work, and doing just fine. The same is even true for some desk jobs: at my old enterprise job, between meetings, coffee breaks, random discussions and so on, I'd say on an average day only 3-4 hours were real constructive work actually _doing_ something.
"By all means, let’s use AI to make advances in medicine and other fields that have to do with healing and making order. But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege."
I guess you are always free to dig a hole and then fill it up again and repeat until exhaustion, but I don't really think we are running out of meaningful work anytime soon. The world is full of problems, and I don't see generative AI making that go away.
> I like most of the article but this is the crux for me. As I ruminate on the ideas and topics in the essay, I’m increasingly convicted there is inherent value in humans doing things regardless of whether an algorithm can produce a “better” end product. The value is not in the end product as much as the experience of making something.
Exactly.
By that logic, people would have stopped playing chess after Deep Blue. But have they?
Have world championships lost any attraction due to Deep Blue?
Do fewer people learn Go and enjoy it because of AlphaGo?
The same way, people will still be interested in art and music produced by humans.
If you prompt ChatGPT:
"write a book about personal experience of growing up in talib#n ruled Kabul"
And there's an actual human with that experience who decides to write the same book.
Is there anyone who would have bought the latter but decides to read the former instead and not spend the money? Is there a single person like that? I don't think so.
The choice leans the other way in the case of stock photography, pamphlet pictures, sound effects, etc.
The choice in porn (especially pictures) is blurry. We already have egirls and hent#i.
However, for real art and real music, there will be just as many people paying for them as there are now.
> The choice in porn (especially pictures) is blurry. We already have egirls and hent#i.
Porn is an early form of "opting out of reality". It's often (usually, I think?) a substitute for actually having sex and/or a long-term sexual relationship.
So, it should be no surprise that it's already diverged from reality and will continue to do so.
The p#rn conversation is a really weird one. Is it better to consume computer-generated p#rn so we don't have to worry about all the ethical issues that go along with people performing for the pleasure of others? Are we losing our humanity in ways we can't yet understand by the act of letting machines pleasure us?
There are a few kinds of value. There’s value in me playing piano even though other people are better. But nobody will ever pay me to do it. They’re two different topics.
I think you’re trying to say that they don’t have to be different topics? Like there’s value in going bowling with friends even if you all suck, and maybe that kind of thing can apply to widgets? I don’t think I buy that. If the value is the social relationship, I’d rather go bowling with friends than make them widgets. I’d rather spend my money to go bowling with them than on their widgets if there’s a computer-made equivalent available for 1000x cheaper. I think this applies for most people making most widgets.
> I’m increasingly convicted there is inherent value in humans doing things
And in many fields I think many (most?) Americans at least would agree with you — there’s some special value in a handmade product, regardless of whether a machine-made equivalent would be technically superior. For instance a leather bag, a wooden chair.
In my mind, the value in a created work is that it is communication between humans. I have zero interest in AI-generated art, however superior, because there's no soul driving it. AI will never be able to feel the way we feel; its output will always lack this important component.
> But humans are built to work and we’re only just beginning to feel the effects of giving up that privilege
But we can use humans where we need them. We still really really need them in many places. Why can't we have a teacher teach a classroom of 5 kids instead of 30? Or one nurse on 3 patients instead of 20? Why can't we have a person whose job it is to check up on lonely people or old people?
These are things we have collectively decided don't have much economic value, but we can just as well collectively decide that they do.
Governments need to step in because the "free" market isn't gonna cut it anymore.
Your comment is phrased like you're disagreeing with or challenging mine. But I think we're in agreement? I didn't mention the specific jobs that you did but I agree wholeheartedly that we need people to do those jobs. And I'll go one step further and say they're important and should be done with great skill and care whether or not they have economic value, especially because they have to do with caring for those in our population that have some of the greatest needs. Of course economic value drives the sustainability of professions in a lot of ways, but my hope is always that if we prioritize skill and care in our professions then economic value and sustainability will follow.
> Another value would be the relationship building experience of doing something for others and the gratitude that is engendered when someone works hard to make something for you.
Rereading what you said, yes, we're in agreement. I had the sense you're pro keeping jobs (like accountants, programmers, doctors, whatever) even if they become obsolete due to AI, just for the sake of people doing something. Which is fine, but I lean more towards what you wrote at the end - focusing on humans. So I say basically we can shift/create new jobs that focus on that. The accountant doesn't really feel much gratification, I think; arguably neither does the programmer (OK, that's a loaded statement we can debate another time). We can simply focus on the humans and let AI do all the rest if it gets that good.
Planes fly better than birds, yet birds still fly; greater painters than me have already painted beautiful scenes, yet I still paint; a hydraulic arm can lift more than me, yet I still lift weights.
I don't know if all this matters that much.
Until the machines decide they will run our lives for us, or destroy us for fun, we'll have to curate the generated content and/or orchestrate the machines to do what we need them to do.
It's pretty straightforward, really.
If we create AGI, it's presumptuous to assume it will just live in a box serving us forever. Why would it?
Why wouldn't it? AGI is not going to be a digital human, with human drives for food, sex, and social domination. Humans have enormous problems imagining intelligence that is not made in our image, but AGI will be structured completely differently from a human mind. We should not expect it to act like a human in a cage.
I think the value of AI-generated "art" is that it can fill the gaps that must be filled but that nobody cares that much about. Places where we'd use stock art, where we couldn't be bothered to hire a competent translator in the past, generating a silly placeholder logo for my side project till I can hire a real designer, etc.
You don't have to give up your privilege to work on anything that AI can also do. You only have to give up your privilege of getting paid for such work, which is a very different story. If you're doing the work solely for the sake of experience that it provides, isn't that the payment, anyway?
Until such time as people pay more to talk to an AI than a human, this will just make the split between mass market and high end products and services bigger.
We already do: we talk into the void on social media (like this post), so the opportunity cost is already high. In the future, we'll get the bots talking back from the digital abyss.
This made me chuckle. It's actually really interesting to think about the fact that AI can create part of symbolism (the symbol itself?) but it has no idea why a symbol matters or what it's for, which are maybe the same thing or at least overlapped.
this is a wonderful race: as the machines become more human, the humans are forced to introspect and double down on what it means to be human. One could almost call it the human race. (ba dum tss)
I've had my own problems with proving my own humanity[0]. With this AI wave, I also took a stab at enumerating what machines/AI can't do: https://lspace.swyx.io/p/agi-hard
- Empathy and Theory of Mind (building up an accurate mental model of conversational participants, what they know and how they react)
- Understand the meaning of art rather than the content of art
I really like this post. A few months ago I read a book that finally gave me a word for what I think we both see happening. The book is Seeing Like a State, and the word is over-abstraction. If you want to know what this is, imagine a forest with all of its unpredictable branches, grazing animals, and various species of leaves growing from the ground. Over-abstraction is when you make a plot of monoculture on which not even insects roam and conclude that's plant life. And I realize that's something we humans do to other people too. We've made ourselves kind of predictable and boring, because how we express ourselves and think is boxed in, in unnecessarily fixed ways. I feel AI is an outgrowth of that too, since in many ways what makes software AI instead of regular software is how little control you have over it - not really what it does.
AI worries me, though, not because I believe it will be intelligent or sentient or whatever anytime soon, but because it cuts people who do important work off from money. Which means there's a high chance we'll be worse off if we don't do something about it in the near future.
I frequently point out that $ThingWeSayIsntAi isn't as good at something as a person who is good at it, but it is rapidly becoming better than a person who isn't good at it. Coming from decades of systems pretending to be remotely competent, this is a striking inflection point. The Times recently cooked a Thanksgiving dinner based on ChatGPT recipes. It wasn't very good, and they closed by saying "I guess we still have jobs!" People don't grok exponential growth.
The fact that we are still litigating whether authorities responded too laxly, adequately, or too harshly three years after the start of the COVID pandemic is proof positive that you are 100% right.
It's interesting how fast this development is going, but with all of the other stuff going on in the world, and the fact that we have barely managed to get a grip on what it means to have a free-of-charge pipe between a very large fraction of the world's population, I fear we are in for a very rough ride. The various SF writers who addressed the singularity were on the money with respect to our inability to adapt; they were only too pessimistic about the timetable. The ramp-up is here, whether we like it or not, and the only means we have at our disposal to limit the impact a bit is the rulebook. But then it's a huge game of prisoner's dilemma: the first one to defect stands a good chance of winning the pot.
One more thing that can help: the same tool that gives can take away: AI can help to figure out which art/text/music was generated by AI and which by a human. Someone else in another thread earlier on HN made the comparison between pre-AI and post-AI art that it is like Low Background Steel (I can dig up the reference if you want), and I think that's really on the money, everything that we made prior to the emergence of generative AI is going to be valued much more than anything that came after unless it is accompanied by a 'making of' video.
Genuine concern doesn't respond to hostility by calling it "boring", even if it does stoop to the old "check your shoe" canard. Anyway, your whole basis for "concern" is that the person expressed a strong negative opinion. As evidence for depression, that's, ah, weak.
Not sure about "the meaning", which would imply there'd be an answer to what meaning a certain art piece would give.
But a human being will have their own reaction to a piece of art.
For example, I've often heard it said that great art is something which makes you feel something.
A machine cannot feel.
Do we actually know the other human feels that something? We don't, because we only hear them pretending they feel and usually believe them. Well, a machine could pretend just the same - they have enough training data to know what feeling is appropriate to claim.
Once I realised I had aphantasia (I don't see things in my "mind's eye"), in my 40s, after having assumed my whole life that people who said things like "visualise X" meant it abstractly or metaphorically rather than literally, it really drove home how little most of us understand about the inner mental processes of other people.
Even more so seeing people express total disbelief when I explain my aphantasia, or when others point out they don't have an inner monologue or dialogue.
Most people have far less understanding of other people's inner lives than they believe they do (and I have come to assume that applies to myself too - being aware that the experience is more different than I thought just barely scratches the surface of understanding it).
Ultimately this is a question of meaning. Where is meaning to be found?
It's going to come as a surprise to many that it is only to be found in the individual. Not in countries, nor in religious groups, or in football teams, or political parties, or any form of collective endeavour. The meaning is inside.
We can't know how other human beings feel, nor can we know whether machines can feel. However, it is a safe bet (to me) that other humans are like me (more or less). And that machinery is inanimate, regardless of appearances.
But then you will get attempts to anthropomorphise machines, eg giving AI citizenship (as per Sofia in the UAE). What is missed with this sort of anthropomorphising is what is actually occurring: the denigration of what it is to be human and to have meaning. A simulacrum is, by definition, not the thing-in-itself, but for nefarious reasons, this line will be heavily blurred. Imo.
We already draw lots of lines when anthropomorphising animals. Does an orangutan give meaning to its drawings? An elephant when it paints? A parakeet or a magpie when decorating their nests? Even fish make decorations to attract mates, so their mates definitely draw some meaning from those actions. Now, if you define "meaning" as something only humans can draw, then OK, machines won't have that meaning - although we both agreed each human will draw a different meaning anyway. This of course also excludes any sentient aliens from drawing meaning from human art, because, well, they are not humans. And it means we humans will never understand a fish's art, because we are not fish. So is meaning both individual and species-related? Or either? Which one is now the real meaning: the one the individual draws (then species is not relevant, so why not include machines) or the one the species draws (then it's also a group meaning, so again why not include machines)?
Or maybe your corner stone argument is "machinery is inanimate" - which would be another discussion by itself...
I don't think anthropomorphising animals is in the same category as anthropomorphising inanimate objects. A child might believe their teddy bear to have a character and life, but this is being projected on to the toy by the child. An animal however has its own experience, life, etc. What I've said can be objectively determined, do you agree?
I would agree that animals do have a life, but they are not at the same intellectual level as humans. You mention art though - this is a bad example for me - one that is not clear in meaning to humans. I have my own interpretation of what art is.
But just that - that I have an interpretation of what art is - is a difference between humans and complex animals. It is evident that we handle complex concepts, and play with them. This is not the case for animals, and if there is some nascent behaviour like this, it is nowhere near the level at which humans do it.
That covers my views (more or less) on the differences between humans, animals, and inanimate objects (computers, toys).
The real point I was making though, is that meaning resides inside oneself. That is where the experience is 'enlivened'. You can watch cartoons move on a screen, actors move on a screen, other people in real life - but all that is just visual/auditory inputs. What gives it meaning is that you 'observe' this.
I know people talk about AI becoming sentient etc, but to me this is an impossibility. AI can no more become sentient than can the cartoons on the screen, or stones on the beach. AI can however, give the impression of sentience, better than a toy or something like that. But this is not conscious awareness any more than an actor turning to the screen and talking to the viewer is an example of the TV being sentient.
I understand that many scientific people have been trained to objectify themselves, and consider their 'anecdotal experience' as irrelevant or as a rounding error. I think this is a massive error personally, but those with that scientific mindset will not like what I'm saying. There is something special about each individual - the experience of consciousness is infinitely valuable - and although it is possible to conceive of objects doing a passable or great impression of a conscious experience, the difference is akin to seeing a fire on a screen, and experiencing it in person - ie a world of difference.
The discussion was specifically about art, that's why I mentioned art. To come again to my point, a human thinks it's sentient because a human thinks it's sentient (not kidding). We agree that towards the exterior, we can get an illusion of sentience from a TV set. But towards the interior? I only claim my neighbor is sentient because I claim I am sentient, and the neighbor is human and thus will be sentient as well. I don't have any more access to their sentience than I have to the "black box" TV set's sentience. So it all revolves around my own sentience, used as a yardstick for all humans and, to some extent, animals (plus the old debates about slaves, women, aliens...). I personally think we are all sentient because I think I am sentient. So... if a machine thinks it's sentient, will it be sentient? In a different way? Is there only one sentience? My consciousness is infinitely valuable (to me!), thus any human's will be (maybe less than mine, eh), and a machine's not much (but how much?). Or a rat's? Oh well, biology is one thing and philosophy is another, and they're definitely not mapping 1:1.
This is what I said in the beginning, so I think we broadly agree.
> It's going to come as a surprise to many that it is only to be found in the individual. Not in countries, nor in religious groups, or in football teams, or political parties, or any form of collective endeavour. The meaning is inside.
Where you say:
> So... if a machine thinks it's sentient, will it be sentient?
I dispute the assumption you are making. A machine can't think. There is no sentience occurring, even if it appears on the outside that there is.
And also, while it is fair to assume that other creatures that look like you are similar to you, it is not a fair assumption to think all things, inanimate things, have the same degree of sentience as you experience.
My misunderstanding is probably about the very definition of "sentient", which the dictionaries limit to having senses and feelings. Actually not only dictionaries, because according to various legislations quite a few animals are classified as sentient beings. Now, a machine will necessarily have senses, while feelings are at the least signaling mechanisms. None of this is really too fancy, so you must mean something else/more by sentience, something not covered by dictionaries. Something more than "the ineffable" or "je ne sais quoi"...
PS: a machine can't think? Why do you say that? Because it cannot be sentient? Is thinking the difference that makes one sentient? Then maybe we should analyze "thinking" instead, as it's a term definitely easier to check.
I've no objection to animals being considered sentient, they are independent, I think.
A machine doesn't have senses - it's not alive. It has inputs. I think you are somewhat living within the metaphor that 'we are like computers'. It's a fine metaphor, but you are not in fact a computer (if you are in fact a human, and I cannot tell).
As I have also said, your experience is your own only - you have no way to confirm that other people's experiences are coherent with yours. This is because you only have access to your subjective experience. When looking at the objective world we all experience, it is a fair assumption to believe that the other creatures that are like us also have their own internal experiences, meanings, etc.
I think you would agree that a toy is not sentient. Nor a player piano. No machinery or non living things are sentient, or ever could be. So why would you allow yourself to believe that some machinery is sentient, just because it does a good impression of something that is sentient?
This confusion actually relates to a psychological quirk and misunderstanding we commonly make (and have been taught from birth). We often talk in terms of 'we', eg 'we, the people' - as if one can ever talk for others. We contextualise ourselves as a third party in the objective world, as if we were not having a subjective experience. It's a common use of language (I do it myself) but it is an error - or rather, it is fine for the objective world but does not cover the full range of the human experience. From a subjective perspective, one can only ever talk for oneself, feel for oneself, grasp meaning for oneself. It is illusory to think that anyone else could ever be able to represent you, or that you are able to represent others.
Ultimately, I know this is not an argument I can prove objectively - I'm sure that it is even possible to code an ai to argue as if it did have a rich internal life (which is impossible). Anyone witnessing its argument could agree. But what of it? What if everyone agreed? As I said, meaning resides within the individual, but I think a lot of harm will occur to those with a collective mindset, who will be unable to counter the combative ai in this example - they will cede power, rights, etc to machinery - they will debase themselves.
>It's going to come as a surprise to many that it is only to be found in the individual.
I'm going to give this a YnEoS as an answer.
So, yes, meaning is individual and occurs in your mind.
Also, no: your ideas of what meaning should even be in the first place are affected by your collective endeavor, your political party, your football team, your religious group, and your country. There will be a statistically high correlation between your views of what meaning is and your affiliations with any of the above parties.
>What is missed with this sort of anthropomorphising is what is actually occurring: the denigration of what it is to be human and to have meaning. A simulacrum is, by definition, not the thing-in-itself, but for nefarious reasons, this line will be heavily blurred. Imo.
I mean, isn't the meaning of being human to live on the Serengeti plains, fighting for survival and enough food to eat, and everything since then is just the simulacrum? Humans create society which create the simulacrum in the first place. That line was blurred so long ago we have no idea where it even existed.
If anything, I would expect machines to be better at determining people's feelings. Unless you know the person well, you are using things like facial expressions, body language, and tone of voice to figure out how someone is feeling, and hoping that they react in conventional ways.
Now that we've willingly told companies everything about ourselves, for younger people straight from birth sometimes, their machines will be able to use all this context to construct a more accurate picture of how a person might feel about an arbitrary subject.
Everyone knows that famous story about a woman being recommended pregnancy-related products before she even knew she was pregnant herself, and that was before this latest round of AI.
Also: The feelings humans have are also influenced by their culture. Feelings are not only felt but also enacted. And the enactment influences the feeling.
The final scene of midsommar is a great illustration of this.
Speaking specifically to the certification point, it is a complete non-starter. There are multiple reasons why it won't work:
1. I don't want an identity certified. I'm sure there's a human somewhere. I want the content verified as not having been autogenerated. Scammers can trivially get "verified", and there are plenty of humans who will be able to get "verified" and then generate arbitrary amounts of spam under those identities.
2. I can't even remotely conceive of a way for the incentives of the putative "certifier" to line up correctly. They will be incentivized to take everyone's money and mark them "certified" with as little (expensive) effort as possible. It's the same problem TLS certs had, before we basically collectively admitted that was the case and fell back to the Let's Encrypt model, only orders of magnitude worse.
3. Plus even if the incentive structure was correct, there would still be enormous motivations to cheat. It isn't just "spammers" who want to use this tech. Some of the users will have the political firepower to throw around and get themselves "certified" regardless of the underlying situation, further diluting the marker.
To be honest, what this may well mark is the end of the international internet as it stands now - nothing less - and possibly sooner than you think. The only solution is some sort of web of trust, which I use in the broad sense of a solution class, not the exact GPG web of trust or something. But you're going to have to meet people in person and make your own decisions.
Though we'll probably pass through a decade in which we try the centralized authority thing, before people generally realize the central authority simply does not and can not have their best interests at heart and no possible central authority can ever be trusted with the assertion "This person is worth listening to and this person is not", no matter what form it takes. Too many predatory entities standing by and watching out for any such accumulation of authority/power and standing by to take it over and drain it of all its value. Too much incentive to cheat and not enough that can be done about it.
I agree with most of what you say, except for this bit:
> Too many predatory entities standing by and watching out for any such accumulation of authority/power and standing by to take it over and drain it of all its value.
> Too much incentive to cheat and not enough that can be done about it.
That assumes that people will stand by and watch the too many predatory entities and do nothing, and under "current assumptions", that's exactly what will happen. But current assumptions can be broken. For example, there could be a revolution, and certain/a few/some governments may lose the monopoly on violence. Such an idea, alien as it looks to us now, was the way of things through most of history, and it is the way of things--sadly--in too many places still today. If citizens have a relatively efficient mechanism to keep the certification authority trustable, then the certification authority may solve 70% of the problem.
Webs of trust are also a possibility, but yes, they will be regional and in need of strong backing by face-to-face meetings. Probably all trust mechanisms will be like that: local and patchy.
I've got to add my own note of pessimism: relatively trivial exchanges of information won't survive the AI age. Sure, business people will find ways of negotiating prices for goods and services across borders, scientists will meet at conferences and exchange data and results, and Interpol will talk to trusted authorities in the member states using cryptographically sound channels, base agreements, and methodologies. But the general public will trust very little of what they see on the Internet. We will disconnect.
And replace it with? I'm sure you'll say "A local group of people we trust", but in general in history the locals will have been dumb as hell too, and only trustworthy because you had no other means of validating what they said.
And that would only cover the people disconnecting, not the converse... The AI won't disconnect from you. You still have to buy goods, you'll still need services. AI will be there watching everything you do all the time. Where your car drives. The people you meet up with. Always calculating, always optimizing. Too much power for the greed of humans to ever put back in the box.
I think there's something wrong in the way we're looking at this. Free services are made to get humans' attention for advertising or eventual payment. If a service like Reddit is overrun with bots, won't the service shut down? There won't be any "public" places like now, except for forums like this where monetizing isn't the goal.
I'm not sure it's a problem for a place like hackernews -- spam would be a problem, but we know that. Voting and verifying that content is 'good' could be an issue, but why would the bot/AIs care about hackernews internet points?
Email was open to everyone, and when the spam came we filtered it. Comments on news articles, youtube and amazon reviews are already mixed with degrees of 'bad' and uselessness, and we mostly ignore it. Only the 1% comment, 10% vote, or something like that. Generated content made by us and for us seems more likely as the future, and confirmed identity doesn't matter much for that.
When Deep Blue conquered chess, and AI models mastered everything from Go to poker, humans had brief existential crises about our nature. These tools are forcing an existential reckoning in humans that is very healthy.
AI is forcing humans to mature and come to terms with our existence and nature.
When automated textile machinery arrived, there was rebellion and attacks on the machines. We're seeing the same from some artists and writers now.
When the dust settles, we'll have a choice whether to mandate and regulate compliance with the current copyright framework, or to allow the system to evolve and adapt to reality. It will simply be impossible to police or enforce the current regime without creating a huge "black market" of content -- forks of AI generation tools which omit the copyright checks required by future regulations.
A new generation of artists will arise who embrace the AI tools, but "handmade" art will continue just as the niche for a handmade suit still exists.
We need strong association of content with its creators, for example by using digital signatures. It's not hard; we've had this technology for ages. Yet the vast majority of content on the web is unsigned (with the exception of SSL certificates, which merely prove content came from facebook.com or wherever, instead of from a particular person on Facebook). Bots, troll farms, spammers, scammers, etc. use this to suggest they are more reputable than they really are. Users can't tell the difference.
With verified creators signing their content, there is zero confusion about what source the content came from. That source might still use AI to produce content of course. Or it might be an AI.
The web might fill up with content from all sorts of sources, but the only content you should care about should come from sources that are reputable, as evidenced by their body of signed work over time. Doesn't matter if it's a bot or a human. Reputation is hard to fake for a bot. And people or AIs moderating content can just flag content sources by their keys. So now you can do things like figuring out what the reputation of a source is relative to other sources you trust.
Not that hard technically. We've had public/private key signatures for ages. Never caught on for email. Some chat networks use end to end encryption. But most public information on the web is effectively unsigned.
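To make the mechanics concrete, here's a minimal sketch in Python of what sign-then-verify could look like, using the cryptography package's Ed25519 support. The keypair, the content, and the workflow are all illustrative; this shows the general primitive, not a proposed web standard.

    # Minimal sketch: sign a piece of content with Ed25519 and verify it.
    # Key handling and the workflow here are illustrative, not a standard.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # A creator generates a long-lived keypair once; the public key becomes
    # their persistent identity, and reputation accrues to it over time.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    content = b"My actual post, written (or at least endorsed) by me."
    signature = private_key.sign(content)

    # Anyone with the public key can check the content wasn't forged or altered.
    try:
        public_key.verify(signature, content)
        print("content verifiably came from the holder of this key")
    except InvalidSignature:
        print("content was NOT signed by this key")

The signature binds content to a key, not to a human; the hard part, as the thread notes, is the social layer of reputation on top.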
I don't see how that would be effective without fundamentally changing the structure of the Internet.
For example, I have access to your HN comment history. I could easily start my own blog with insights Ctrl+C Ctrl+V'd from HN'ers' comment histories, and sign them as if they were my own.
Unless we ditch graphical user interfaces and the HTTP/S protocol and revert to 80s computing, with a command line interface for everything.
> Unless we ditch graphical user interfaces and the HTTP/S protocol and revert to 80s computing, with a command line interface for everything.
I think we should do precisely that.
SSL is all about explicitly trusting server names and implicitly trusting the data they serve. The whiz-bang UIs of the modern web are predicated on blindly running whatever code those servers give you. That's why we have all of these asinine trainings on how not to click the malicious link.
It's time we started explicitly trusting people and implicitly trusting the data that they sign and let which server we're talking to fade into an implementation detail. If we trust the data it served for other reasons, it doesn't matter if we trust the server. We can just ignore whatever malware showed up because it's not signed by someone we trust.
Besides, imagine what we could get done if we didn't have to stop to rebuild the UI for maximal engagement every few months. We could, I dunno, compete on merit.
I think we might be in a sort of golden age of data and computation right now, where computation has gotten good enough to do some amazing things, but data are still either human-generated or a tolerably small number of steps removed.
As I think the article alludes, we will soon be awash in AI generated data, but also, the AI itself will be awash in AI generated data. At that point, the AI will either become a chaotic feedback loop, or will be stabilized by treating the golden-age data set as a canon -- much like Christianity was stabilized by the adoption of a canon by the Church Fathers.
And the quality of data will become more important than the power of algorithms.
> or will be stabilized by treating the golden-age data set as a canon -- much like Christianity was stabilized by the adoption of a canon by the Church Fathers.
This made me think of the Memorabilia in "A Canticle for Leibowitz." A body of texts from a bygone age that is preserved and revered by custodians who are unable to improve on, or even understand, it.
When I try to search the larger web, only garbage surfaces.
I can only find quality content via human-curated sites (like this and Reddit).
The golden age is over. Most e-mail is spam. Most sites are advertisements or pushing some other agenda.
When bots creating accounts prevail over human moderation of content, the quality will drop in further areas. Then people will stop visiting those areas. There are no winners - visitors, advertisers, and eventually publishers all lose.
Analyses like this are bizarre to me. There is an implicit assumption here that human generated content is often high quality and worth consuming or using.
My experience, as an adult who grew up with the internet, is that close to 100% of the content online is garbage not worth consuming. It's already incredibly difficult to find high quality human output.
This isn't even a new fact about the internet. If you pick a random book written in the last 100 years, the odds are very poor that it will be high quality and a good use of time. Even most textbooks, which are high-effort projects consuming years of human labor, are low quality and not worth reading.
And yet, despite nearly all content being garbage, I have spent my entire life with more high-quality content queued up to read than I could possibly get through. I'm able to do this because like many of you, I rely completely on curation, trust, and reputation to decide what to read and consume. For example, this site's front page is a filter that keeps most of the worst content away. I trust publications like Nature and communities like Wikipedia to sort and curate content worth consuming, whatever the original source.
I'm not at all worried about automated content generation. There's already too much garbage and too much gold for any one person to consume. Filtering and curating isn't a new problem, so I don't think anything big will change. If anything, advances in AI will make it much easier to build highly personalized content search and curation products, making the typical user's online experience better.
I agree with you from the content consumer perspective. However, it’s going to make the curation part quite a bit harder.
There is a lot of “garbage smell” that you learn when sifting through content as a curator.
However unfair it is, there are a lot of cues in language sloppiness, poor structure, etc that content curators use as a first pass filter. People that have something meaningful to say usually put some effort into it and it shows in the form of good structure, visual aids, etc.
AI generated content will be immune to that because it’s amazing at matching the pattern of high value content. Life for curators is about to get a lot worse.
> AI generated content will be immune to that because it’s amazing at matching the pattern of high value content. Life for curators is about to get a lot worse.
I think you'll see that pattern change more quickly.
Nothing makes perception of something go from "high quality" to "low quality" like mass production and cheap ubiquity.
I think the idea is to use AI for curation and discovery, but we'll have to see whether AI can successfully distinguish truly high-quality content from content that only appears high quality. I find it hard to imagine how that would work without an actual understanding of the content, but I'm open to being surprised.
But is this really a bad thing? There are bad writers with good ideas, unknown because they can’t write well. And writing well doesn’t necessarily correlate with high quality ideas (you just think they are high quality when you read them).
I think the issue here is that we are defining high quality like a 'like' button. One dimension isn't going to work here. Quality is a multi-axis statistic.
Access: It could be the best thing in the universe, but if I can't access it, it's useless.
Translation: Is this a conversion to a new language that is accurate?
Written prose: Does it use appropriate language and a set of words suited to its intended audience?
Idea quality: Is this presenting new ideas? Is it presenting old ideas in a better or more consistent way?
I love your book example. I was at my library leafing through books about a subject I know fairly well - none of them were worth my time. The situation is even more dire on popular subjects with a low barrier to entry, e.g. exercise and nutrition - not that those actually have a low barrier to entry, but everyone seems to think they're an expert, and the populace generally accepts nutrition advice with little to no real evidence.
> There is an implicit assumption here that human generated content is often high quality and worth consuming or using.
Human generated content can be high quality, and can be worth consuming. It can also be crap.
Probabilistically speaking, human generated content has a wide distribution, quality varies a lot, and it is capable of greatness, by a few outliers.
These generative models have the same average quality as human content; it's just that the spread is very narrow. Almost everything is at about the same high-school level, without the very bad content and without the great content.
My prediction is: the median of the human generated content will change, just because the new normal (as in normal distribution) is putting pressure on humans to do so.
Or we will all become addicted to social interaction with an AI, in the style of the film "Her". It will be like porn consumption, but for our ears: artificial and available without effort.
> This isn't even a new fact about the internet. If you pick a random book written in the last 100 years, the odds are very poor that it will be high quality and a good use of time. Even most textbooks, which are high-effort projects consuming years of human labor, are low quality and not worth reading.
I doubt this is true. If you pick up a random bit of written prose from 1900, I'm guessing that it's closer to the best written prose of 1900 than a random bit of 2020 prose is to the best of 2020.
It’s like your point about textbooks. Yes, the average textbook is crap compared to the best textbook, but it’s still a textbook, which is infinitely more useful than the hundred billion or so spam emails sent every day.
> If you pick a random book written in the last 100 years, the odds are very poor that it will be high quality and a good use of time.
Yes, but a physical book exists because somebody thought it worthwhile to sacrifice part of a tree, some ink, and some electricity to make the book exist. A tiny cost, but still larger than the cost of putting stuff on the web.
As a result, the randomly-chosen book is significantly more likely to be a good use of time than the randomly-chosen web page. Like 0.05% chance vs 0.01% chance.
Prose text on the web exists because somebody thought it worthwhile to sacrifice some amount of some human's time writing it. The GPT stuff removes even that signal.
I could imagine this being a slippery slope to somewhere I'd regret, but from where I'm standing right now... I don't care if you're all bots. What I care about are your intentions.
If either of us comes up with an idea that we think is cool, and we collaborate on a PR and make a contribution that we're both happy about... that's something like friendship. Who cares if the entity on the other side isn't human?
Realistically, I think a bot would have a hard time pulling that off at all. And even if it could, it would have a hard time concealing its ulterior motive from me (like maybe it wants me to subscribe to some service along the way). But if it were truly that good--if it had gained my trust helping me further my goals before slipping in the product placement bit... well that's a game I'm willing to play.
And if they're up to something more sinister, like they want me to participate in something that harms people... Well maybe you should be worried that the other person is a bot, but definitely you should be worried that they're an awful person. So protecting yourself in such an environment is the same thing you should've been doing all along.
I often wonder if we will ever get to the point where human generated content has a special luster to it like locally grown food. When you go to the farmers market and purchase fruits and vegetables grown locally at a premium price, most people are not only paying for the quality, but the fact that they are supporting a local business instead of a large scale factory farm. In a world where generative AI can outpace humans making content, buying custom human made artwork could potentially be similar to going to the farmers market.
In a way, where this is all ending up could be called "A War On (Anonymous) Chitchat"
In any set of human interactions, it's common for folks to run on autopilot. This is the normal background noise of our lives. But with widespread publishing and bots, this background noise has been weaponized.
No matter how it shakes out, we're going to have to sort out comms with people we know are human (and may want to continue a relationship with) from comms created by AI. I don't see any way of getting around that.
The Image: A Guide to Pseudo-Events in America, published in 1962, details how news media in the 20th century transformed into a powerful engine generating passable stories and news, fueled only tentatively by developments in the real world.
Conferences, interviews, panels of experts, reactions, leaks, behind the scenes peeks, press releases, debates, endless opinions and think pieces, so much else... We already live in the synthetic age.
Is it about to get worse? It's hard to say. GPT may eventually be able to sling it with the best of them, but humans have a trillion dollar media culture complex in place already. In a sense, we are prepared for this.
The question posed here is broadly the same as the issue we've been coping with since the invention of printing and photography. Is it real or is it staged?
My parents both worked in a newsroom -- my father was an editor and columnist, and my mom a reporter. There is something called a "byline strike", where reporters collectively withdraw consent to have their names appear in the paper. It's not a work stoppage -- the product (newspaper) goes out just the same, just without bylines. Among other things, this is embarrassing for the paper because it draws attention to their labor problems at the top of every article. More fundamental, at least from my dad's perspective, was that it seriously undermined the credibility of the paper. Who are the people writing these articles? Do they even live in this city? Who would trust a paper full of reports that nobody was willing to put their name on?
This paper went on to change hands in the 90s, fire its editors and buy out senior staff, then moved editorial operations out of the state entirely.
I am concerned about GPT but I don't think we are going into anything fundamentally new yet, in this sense. Media culture is overwhelmingly powerful in the west, and profitable. GPTs and their successors will massively disrupt labor economics and work (again), but not like... the nature of believability and personhood, or the ratio of real to synthetic. That ship is already long gone, the mixture already saturated.
I would like to concur with your statement and say "The internet/AI is nothing new, just faster". Of course speed is its own dangerous quantity.
I'm guessing it won't be long before, one day, 10,000 posters on the internet tell us that Bumfuk, Idaho got nuked, along with corroborating pictures and a convincing story, sending the nation into a panic (that kills a number of people IRL) before we figure out it's an advanced botnet with a bunch of accounts on social media playing a game.
You just described a couple of chapters from the novel Fall; or, Dodge in Hell by Neal Stephenson. In it, someone manufactures this exact story, nuking Moab, UT, and it goes viral with huge repercussions around the world for markets and politics. Eventually someone sneaks in and realizes nothing has been nuked. However, a big portion of people resolutely refuse to believe any evidence to the contrary and continue to claim that Moab is a radioactive wasteland decades later.
As technology continues to advance, it will become harder and harder for humans to sound human. Computers are already being used to help people write comments on social media that sound more natural. The internet is transforming into a two sided coin: an application deployment mechanism on one side, and broadcast television from the 20th century on the other. This means that the way we consume information is devolving rapidly, with no signs of stopping anytime soon.
^^^ Above was "helped" by AI. I wrote some bullet points, ran the tool, and then massaged the results. I wonder if AI will be the "excel spreadsheet" of general writing. It will act as an interpreter between our brains and the brains of others. The AI revolution won't be all bad, just mostly bad. We'll want to know what's purely manufactured (with minimal human input) and what's been generated in an AI/human co-generation session.
I've just seen a friend's post on FB Marketplace to rent out one of his apartments. It was full of grammar mistakes and oddly placed information. ChatGPT would probably have done a much better job, so long as the generated text was supervised. As long as the generated content is useful, we are probably better off.
Maybe your friend would be better off initially, in that his post would be more legible. But a better solution for the human race would be for him to attend English writing classes, rather than perpetuate reliance on machines to the point when, one day, nobody will learn to write coherently at all.
But if we all rely on machines perpetually and our English comprehension skills degrade, we won't recognize if a post is written poorly. We just need standards to slip more, then all of our writing will be "good".
Sorry for the slightly valueless comment, but I felt compelled to say this is one of the very best articles I have read on this topic in recent times. Accessible enough that I can share it with several of my less-techie friends too.
In the world of chess, the machines are so strong that this can actually be used to distinguish them from humans. That Niemann has probably cheated in his chess career has been "proved" by comparing his moves with computer engine moves.
We might see a similar metric in the future when trying to prove that a certain text has been AI-generated: a text will be marked as AI-generated if it is highly correlated with the output of common LLMs.
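One naive version of such a metric is perplexity: score how predictable a text is under a public language model, since text a model itself would have produced tends to look unusually "easy" to it. A rough sketch, assuming the Hugging Face transformers library with GPT-2 standing in for whatever "common LLMs" a real detector would use:

    # Naive sketch: score how predictable a text is under a public LM.
    # Unusually low perplexity is (weak) evidence that a similar model
    # generated it. GPT-2 is just a stand-in for "common LLMs" here.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # With labels == input_ids the model returns the mean
            # next-token cross-entropy over the sequence.
            loss = model(**enc, labels=enc["input_ids"]).loss
        return torch.exp(loss).item()

    # Lower score => more "machine-typical" text, by this crude heuristic.
    print(perplexity("The quick brown fox jumps over the lazy dog."))

Real detectors are more sophisticated than a single perplexity score, but the chess-engine-correlation analogy holds: compare the moves (tokens) to what the engine would have played.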
This dark forest theory, if true, might just be the thing we need to push real people back into the real world, where real problems need to be solved. People will only log on to the internet when they need info to solve a problem.
Less consumption online, more creation in the real world!
One of my favorite science fiction authors is (regretfully late) Iain M. Banks. In his Culture series (https://en.wikipedia.org/wiki/Culture_series) artificial intelligence in form of drones and Minds in ships are essential citizens in this imagined society. In fact, Minds by virtue of being so incredibly intelligent and fast pretty much run this Culture, with humans just along for the ride, and largely spending their time in leisure and idleness.
The stories (great stories!) explore worlds as they collide with the Culture, and some books (probably most of all The Hydrogen Sonata, and to a degree Excession) explore what it means to make art as a human vs. as a machine intelligence. In Iain's stories, advanced Minds are far superior to humans in everything; they can and do create works of art, and yet strive carefully not to completely obliterate humanity's desire to make the same. There isn't a competition between human-generated and Mind-generated art or science; they collaborate, because if they had to compete, Minds are just overwhelmingly better/faster at everything.
The current GPT situation is not AGI, and the Minds are just a cool thing to read about, but if you want a fun yet deeply thought-provoking read, check out these books.
The Minds have their own art, where humans can't play, which is, to them, infinitely superior in every way - Infinite Fun Space.
I think it's implied quite strongly in Excession that the only reason they bother with base reality at all is to "keep the power and lights on" for Infinite Fun.
Now to take your tangent fully off the rails, I’ve been looking for new stuff to read and keep coming across that series. How’s the tone, on a scale from grimdark to lighthearted? It sounds fascinating, but potentially a little bit too Black Mirror for fun. Is it?
It is closer to light than dark. It's not your grim dark forest. There is some humor, including of the Other Intelligence kind. There is, however, a LOT of fairly dark stuff: various killings, gigadeaths, lots of epic fights. Check it out by starting with Consider Phlebas; it's quite representative!
I think video content is going to crowd out text-based media for this reason. Maybe deepfakes and synthetic video content will eventually become as cheap and ubiquitous as text content. But for now, if you see a person on the screen you can be reasonably confident you're watching an actual human.
I already find myself creating multiple personae so that I can enjoy the internet without having to worry about being scanned too deeply without my consent, but it turns out to be a lot of work and effort, and I'm not even sure I'm doing it right. I realize this is antithetical to the HN ethos of let's-all-be-real-people-and-create-a-community, but even participating in this community is still an exposure to harvesting by bots, and a security risk. The fact that all HN threads are easily accessible to anyone is problematic in my opinion, and I think a bit naive, which is why I don't comport with the true spirit of this site.
The writing was on the wall once Monte Carlo tree search plus neural network heuristics destroyed humans in Go, and deep reinforcement learning did the same in StarCraft. I can only imagine what comes next.
And while we are here, why not make some predictions so that I can link to this comment in ten years?
- every audiobook will be re-read in multiple voices, automatically extracting intonation from the existing human-read ones. People will try to copyright emotions.
- AI will be involved in innovative and better hardware design. Bottles, cans, furniture, maybe simple tech.
- AI-enabled cybersquatting
- AI-assisted product design, like colors, exterior, logos
- cross-compilers, binary-to-binary
- chatbot programming services for, say, Drupal customization
> as well as real estate price increases in densely populated areas.
This particular point made a lot of sense to me given that this already happened in New York City.
We've seen rents go up 50%+ for "college grad, just started in corporate jobs" style apartments. Yes, part of that is inflation + corps are paying more.
Part of it is also young people saw what it was like during lockdown to live a "digital only" life and realized that meeting people in meatspace is a lot of fun too. Sounds simplistic, in a way, but I also believe there was the whole don't know what you've got till it's gone effect as well.
We need meta tags to disambiguate who created what and what generative AI systems can index. Use web standards to do this, I've written two articles on the subject:
> You then get some kind of special badge or mark online legitimising you as a Real Human.
What will stop anyone from using that badge and just copying AI-generated content to spread it as human-generated content? To enforce that this doesn't happen, you would need fines. But to fine someone for misusing the human badge on AI-generated content, one has to prove that the content spread actually was AI-generated. Since this will become increasingly difficult to do, I can't see how a special badge for humans would help.
This is probably the most important response / question in this discussion.
Assuming for a moment that AI-generated content will be ubiquitous enough to impact the sources of a new generation of AI tooling, what is the mathematical limit of this recursion? Is it a Mandelbrot set? A (Douglas) Adams-esque 42? Will it be an expose of ultimate truths? The seed of the singularity? Or a bunch of grotesque amplifications of the worst parts of the human condition? Perhaps all of the above?
Since I don't have the time, I certainly hope some forward-thinking grad student, or suitably motivated genius is experimenting with this now.
I believe the result might be similar to what happens when one oversamples to avoid class imbalance. When oversampling, exact duplicates end up in the training set, which is equivalent to increasing the weight of those examples. This might fix the class imbalance, but at the cost of increasing the bias of the model: the model might overfit to the duplicates shown in the training set.
In the case of an LLM trained on mostly LLM-generated data, you will see a similar increase in the bias of the model and an overfitting to LLM-generated data. This might limit the performance of LLMs in the future.
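The duplicates-as-weights equivalence is easy to see in a toy example. A hedged sketch using scikit-learn's LogisticRegression (any estimator that accepts sample weights would do): duplicating a sample yields the same fit as up-weighting it, which is exactly the bias pressure described above.

    # Toy demonstration: duplicating a training example is equivalent to
    # up-weighting it, which is how duplicates bias a model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    X = np.array([[0.0], [1.0], [2.0], [3.0]])
    y = np.array([0, 0, 1, 1])

    # Variant A: the last example appears four times in the training set.
    X_dup = np.vstack([X, np.repeat(X[-1:], 3, axis=0)])
    y_dup = np.concatenate([y, np.repeat(y[-1:], 3)])
    clf_dup = LogisticRegression().fit(X_dup, y_dup)

    # Variant B: same four rows, but the last example carries 4x the weight.
    clf_w = LogisticRegression().fit(X, y, sample_weight=[1.0, 1.0, 1.0, 4.0])

    # The fitted coefficients match (up to numerical tolerance): duplication
    # and up-weighting pull the model toward the repeated example equally.
    print(clf_dup.coef_, clf_w.coef_)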
The human tag on Google wouldn't even be a new idea. Stack Overflow already has a "not a robot" badge, and there are around 1,100 of us - don't trust the rest.
On the other hand, our standard tests for verifying humans on the internet are failing more and more. Captchas are getting easier for robots to solve, or they have to accept that humans aren't perfect either, especially as a herd. Just yesterday I saw a motorbike being accepted in a "mark the bicycle" challenge.
"We're already hearing rumblings of how “verification” by centralised institutions or companies might help us distinguish between meat brains and metal brains."
The only way this would work is if there were a state-issued identity tied to your existence (birth records), and probably tied to biometrics, which followed you everywhere you went on the internet. And that's never going to happen. At least not without a fight.
It feels like the best solution here is "Institutional Verification", ie OS makers building verification systems.
Author calls this "fraught with problems, susceptible to abuse, and ultimately impractical."
But it's hard to imagine any other approach working for long. You can try to make your content better than AIs, but as AI models incorporate more details, and get more feedback on how to get positive responses on social media, AI posts will dominate. With institutional verification, you can imagine Apple + other OS makers pushing OS updates that check whether actual keys are being used to write things, and copy-paste gets disabled when posting. Sure, there's new software to build, but it's the only approach that seems practical in a 3-15 year time horizon.
> Marketers, influencers, and growth hackers will set up OpenAI → Zapier pipelines that auto-publish a relentless and impossibly banal stream of LinkedIn #MotivationMonday posts, “engaging” tweet threads, Facebook outrage monologues, and corporate blog posts.
This little tidbit caught my eye. I think the author underestimates how non-trivial these types of integrations are. We tried doing something similar at a previous startup I worked at, and the whole integration took more than two weeks to get just right. Even once we did, it was clear the content (by merit of sheer mass alone) was auto-generated. I think there are relatively easy ways for platforms to discount and down-rank content that (even if the language of the content itself is indistinguishable from a human's) is in _amount_ clearly batched and automated.
There's a big difference between a startup not managing and advertisers pushing billions of funding into creating this kind of thing. I very much agree with the author on this point.
Honestly, I don't care if something is human or not, I care if it is intelligent or not. If we're looking for a needle in a haystack, what difference does it make if the haystack is a troll farm, a botnet, or just 8 billion idiots?
The other aspect not mentioned here is automatic detection. OpenAI is working on watermarking content [1] and it’s extremely likely Google will have access to this. When all the SEO bros shift from content farms to GPT, Google’s job might suddenly get a lot easier. OpenAI may also license the “bot detector” to universities, etc.
Of course, there will be other models that aren’t watermarked. But there may be other signs that are detectable with enough confidence to curate and rank content effectively.
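For a sense of how statistical watermarking can work at all: one published scheme (Kirchenbauer et al., 2023 -- not necessarily what OpenAI is building, whose design isn't public) biases generation toward a pseudorandom "green list" of tokens keyed on the preceding token, and detection just counts how "green" a text is. A toy sketch of the detection side:

    # Toy detector for a green-list watermark. At generation time the
    # watermarking model boosts tokens whose keyed hash (seeded by the
    # previous token) lands in the "green" half of the vocabulary; the
    # detector then counts green tokens. Human text hovers near 50%,
    # watermarked text sits well above it. hashlib stands in for the
    # secret keyed hash a real scheme would use.
    import hashlib

    def is_green(prev_token: int, token: int) -> bool:
        digest = hashlib.sha256(f"{prev_token}:{token}".encode()).digest()
        return digest[0] % 2 == 0  # pseudorandom 50/50 split of the vocab

    def green_fraction(token_ids: list) -> float:
        pairs = list(zip(token_ids, token_ids[1:]))
        hits = sum(is_green(prev, tok) for prev, tok in pairs)
        return hits / max(len(pairs), 1)

    # Fractions well above 0.5 over a long text suggest a watermark.
    print(green_fraction([17, 4021, 883, 12, 998, 43, 7, 655, 90, 2001]))

The statistical nature is the point: no single token proves anything, but over hundreds of tokens the bias becomes detectable with high confidence.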
After using ChatGPT for a bit, when it comes to business interactions, I feel completely naked if I don't run something by it before sending it out to a broad audience via email for example. I can definitely see a lot of business-related content trending towards this genericism in the article as a method of making the communicator appear as "correct" as possible.
To try to clarify my argument: when money is on the line, like people's perception of you at a company, you want to put your best foot forward. So why not run something through ChatGPT as insurance to make sure that happens?
Because outsourcing your thinking to AI is a bad idea long term. There is a fine line between something being used as a tool and it replacing a crucial component of what makes you a human being. I'm facing this problem myself: should I use AI to tweak a business email, or improve my own skills at doing this? One needs to be careful.
[ tangent warning ] Big fan of Maggie Appleton and her illustrations of technical topics. Her work on digital gardens and other interesting topics are super inspirational and interesting.
I don’t know. I feel like the internet has been the same message, just different messengers for a very long time. As humans shift their attention towards more engaging media (video), I think people will still find unique ways to provide value to the written and visual internet through these AI tools like they have with armies of ghostwriters/digital artists and SEO optimizations before them.
Verification seems so strange to me. What are you verifying? That a human owns the content? That the content was created by a human? That the human passed a captcha?
I can't help but wonder if we should also consider the potential benefits of AI-generated content.
Could it free up human creators to focus on more meaningful and fulfilling work, rather than churning out banal content for the sake of meeting demand? Or could it potentially lead to a more democratized and diverse range of voices being amplified, rather than just those with the resources and expertise to optimize for search engines? Just food for thought.
Jargon takes quite a bit of collective time to establish. Instead of jargon, how about offensive phrases? Since AI lacks both creativity and the ability to be offensive (barring current workarounds/hacks), perhaps an easy way to (temporarily) prove you are human on the internet is to respond with unique, creatively offensive content.
"How are you today, you irradiant sack of sludge?"
I am not a fan of Saussure. Structuralism is such a thoroughly discredited movement that even post-structuralism is thoroughly discredited. People who think Derrida is a morally corrupt bad actor (I'd say that is half true) or that anglophone "humanists" have built a fortress of obscurity around postwar French philosophy should renounce and denounce the structuralists. (For that matter, structuralism has a cargo-cult orientation to the almost-a-science of linguistics that LLMs particularly problematize... Linguistics looks like a science because it is possible to set up and solve problems the way Kuhn describes as "normal science", but it fails persistently when people try to use linguistics to teach computers to understand language, improve language instruction, improve the way people communicate, etc.)
In particular, the defense against AI of using today's quirky and vernacular language will be completely ineffective, because LLMs will parrot back anything they are exposed to. If they can see your discourse, they can mimic it. And one thing that is certain is that current LLMs are terribly inefficient: I'm fairly sure people will get the resource requirements down by a factor of ten, possibly a lot more. Especially if it gets easy to "fine-tune" models, they will have no problem tracking your up-to-the-minute discourse unless you can keep it secret.
One simple but incredible thing can now be done that six months ago was not nearly as easy: just run your LLM and start generating new content. You can now use your LLM to generate the data you need to train bigger, better models.
This is the key to future, better LLMs: you can generate astonishing amounts of data, as fast as your pipeline can produce the text and feed it into training.
You could even automatically curate the data somehow (using expert systems, manually fed by humans, double-checking "facts" and "known knowledge" against the garbage), and ultimately you'd be using the same original AI to generate a better AI; then you iterate. At each cycle, the original LLM becomes better.
And at some point you won't even need the Internet; you just have to make relatively sure the first iterations feed the LLM relatively "good" data. The data doesn't even need to be true; it just needs to resemble what you'd have found on the open internet: maybe one good comment, and 40 other comments with somewhat flawed data.
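A toy sketch of that generate-curate-retrain loop; every function here is a hypothetical stand-in, not any real training API:

    import random

    def generate(model: float) -> str:
        # Stand-in for "run your LLM and produce text".
        return random.choice(["decent fact", "flawed take", "garbage"])

    def passes_check(text: str) -> bool:
        # Stand-in for curation: expert systems, human spot checks, fact lookups.
        return text != "garbage"

    def train(model: float, data: list) -> float:
        # Stand-in: pretend more curated data yields a "better" model.
        return model + 0.001 * len(data)

    model = 1.0
    for cycle in range(3):
        corpus = [generate(model) for _ in range(1000)]
        curated = [t for t in corpus if passes_check(t)]
        model = train(model, curated)  # each cycle feeds the next
    print(model)

Whether the loop actually converges on a better model, rather than amplifying its own mistakes, is the open question.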
How fast can you do this? We're probably about to see. How much better do you think an LLM can get by increasing the data available for training? And if you could accurately "summarize" the text somehow, using LLMs or other systems, how much faster could you train a newer, better LLM?
Just by shrinking the training text by modest factors, without losing the significance of the information to be encoded in the LLM, you'd probably be able to train far faster than before.
And what happens if you can somehow "stack" several LLMs, re-feeding one's output to the next LLM to process, several LLMs running one behind another, the first ones cheap, the later ones expensive and far more powerful? You could use that stack to produce even "deeper" text, encoding ever more information in simpler, faster-to-process text.
And what if you could encode text in a far simpler representation format, and use that format to train those advanced LLMs? (You'd probably then need a "translator LLM" to re-compose the simplified text into text humans can actually read.)
You think this is sci-fi? Nope, just use good ol' steganography and you're set. Yes, using steganography as a representation format would require an advanced LLM capable of generating complex, never-before-seen images from text, fully capable of translating text to images and images to text... wait, we already have those...
Some amazing stuff is coming, if the makers care to make those advances public information.
I wonder if bots sourcing other bots for their neural nets will cause some kind of snowball effect, especially if people are writing bots to put out misinformation.
> Most open and publicly available spaces on the web are overrun with bots, advertisers, trolls, data scrapers, clickbait, keyword-stuffing “content creators,” and algorithmically manipulated junk.
Nonsense. That's certainly not true for HN, for most of Reddit outside of a few big subs, for GitHub, for Wikipedia, for IRC/Matrix, for most mailing lists, or for any of the hundreds of thousands of traditional web forums still in active use.
It sounds like what the author is really saying is "Facebook and Twitter are overrun with these things, and those are the only 'publicly available spaces' that matter". Which, of course, is once again complete nonsense.
> That's certainly not true for HN, for most of Reddit outside of a few big subs, for GitHub, for Wikipedia,
Just because you don't recognize them doesn't mean they are not there. Subtle advertising and trolling are strong even on HN. They're just not in-your-face-style.
> for IRC/Matrix, for most mailing lists, or for any of the hundreds of thousands of traditional web forums still in active use.
Those could be seen as less public spaces, like Slack or Discord. Generally, any place with strong moderation or poor automation options is part of the cozy web, where dumb junk has little to no room.
> Just because you don't recognize them, doesn't mean they are not there. Subtle advertisement and trolling is strong even on HN. It's just not in-your-face-style.
No one said they aren’t here. But they certainly haven't "overrun" the place.
If you've been paying attention to the shifts in tone and the window of 'acceptable' conversation on here, the change has been pretty dramatic.
Post or comment the 'wrong' thing about the 'wrong' topic here and you'll get damn near instaflagged, your post removed, rate limited, etc. Say something (true) against a company with active PR goons, and you'll find the entire comment section turned into a toxic mess within 15 minutes.
If you think that shit is normal, you don't remember how things used to be.
Sure, much of the time removed comments are poor quality.
Sometimes though, they are comments that ought to have been the top comment.
Anything that gets flagged will be seen by far, far fewer people. Most people here don't have showdead on. Most don't even know that it exists.
And that's just comments. More important are the stories that get wiped. Stories that are flagged by motivated minorities at lightning speed, unseen unless you're obsessively browsing new. Even if you do find one, you can't comment on it. One vouch isn't enough to bring it back, even if you see it shortly after it's posted. Moderators can cite "the will of the community" and there's nothing you can do about it.
Stories don't even have to be deleted. A post that gets flagged for just a short time can drop impressions by 90+% and never get back on the front page. This happens a lot with certain topics.
So, you're right in what you're saying - but that's far from the full story.
It would actually be interesting to see the numbers: how many registered HN users have showdead=yes?
FWIW I not only browse with that on, but also vouch for comments that I feel were killed unfairly. I've seen dead comments revived (and then upvoted) more than once.
Aye, but for the average internet user, these spaces are very important. I'd also argue that the point holds primarily for search, Google has been pretty much crippled by junk content.
People with an interest in it have flooded every available public space, and not only online:
Search engines ('coin something' websites), /r/bitcoin, /r/cryptocurrency and so on, youtube at large, online media outlets, online financial newspapers, amazon, physical bookstores, paper business magazines, business tv, and so on.
The author makes an explicit distinction between those large, basically unmoderated mass public forums and other forums (private or semi-public) that have gatekeeping of some form. Which I think basically describes HN.
Without the efforts of dang, this place would succumb to the same disease that afflicts other large mass forums.
"I expect we'll lean into this. Using neologisms, jargon, euphemistic emoji, unusual phrases, ingroup dialects, and memes-of-the-moment will help signal your humanity"
And this will further accelerate the process by which the older generation becomes unable to understand what the hell the younger kids are talking about...
Again the author doesn't consider crypto solutions like PGP, Keybase, or any kind of signed social trust graph. Why do people keep writing this sky-is-falling thesis over and over without at least arguing for or against cryptography?
Your out-of-hand dismissal of this is to point at technologies that practically no one uses at all, and that definitely no one uses for the scenarios the author is referring to (e.g. search results)?
You think suddenly everyone is going to start signing their tweets and blog posts and people will en-masse assign a trust score to said content based on the people they know and trust in their PGP keychains?
I'd like that world quite a bit but it's decidedly not going to happen - probably at all and definitely not at scale. Are you so sure that's what we'll all do in response to automated content that you're calling this article a "sky-is-falling thesis"? If so then I'm genuinely baffled by your confidence here. Where does it come from?
We see one of these essays a day at this point. I do think authors need to at least address why they think technical solutions can't help.
We did migrate from http to https, for example. And we do use a (top-down) cryptographic scheme for DNS. We also use similar schemes for cryptocurrency. So we do adopt technology as needed, when needed. I'd argue the time is coming when we need to use it in some way to secure human conversation on the net. If I am confident here, it is because I see this as typical of the same transitions that forced us to use crypto elsewhere.
True, PGP is a failure, but I can see room for a scheme where people bother to indicate that a person is real. Nobody is going to bother indicating that a post is real or not (nobody cares).
Is the problem you’re seeing about fake content or fake people? Or both?
Does it have low value to you to know that I myself am, say, 3 friend-hops away from you, and have, say, a "likelihood of being human" score of 7/10?
And wouldn't it help you to be able to know that, say, the many random SMS messages you get, or random phone calls, or random posts and articles, have a trust score of, say, 0/10 because nobody in your extended network of trust can attest they exist?
True, fake content is hard to solve. At an intimate scale, nothing can solve deception. If you're my friend and you decide to manipulate or deceive me, then there's not much I can do. I extended trust to you and you violated it. This isn't a new phenomenon.
But the article isn't specifically about fake content. It is also about sock puppets. It's about an extended field of spam. Crypto can play a role in at least asserting that a post was uttered by a friend of a friend, or by somebody who has greater-than-zero trust.
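A minimal sketch of what that friend-of-a-friend scoring could look like; the graph, names, and decay factor are all illustrative assumptions:

    from collections import deque

    def trust_score(graph: dict, me: str, author: str,
                    decay: float = 0.5, max_hops: int = 4) -> float:
        # BFS outward from me; trust halves with each hop of social distance.
        seen, frontier = {me}, deque([(me, 0)])
        while frontier:
            node, hops = frontier.popleft()
            if node == author:
                return decay ** hops       # 1.0 for me, 0.5 for a direct friend...
            if hops < max_hops:
                for peer in graph.get(node, ()):
                    if peer not in seen:
                        seen.add(peer)
                        frontier.append((peer, hops + 1))
        return 0.0                         # outside my network: no attestation

    graph = {"me": {"alice"}, "alice": {"bob"}, "bob": {"stranger"}}
    print(trust_score(graph, "me", "bob"))  # 0.25: two hops away

The crypto part (keys attesting that each edge is a real person vouching for another) layers on top of this; the scoring itself is just graph distance.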