Hacker News

Slop seems like a good term for unwanted AI generated content.

But I wonder how much of this is AI and how much is a slop pattern we'd sort of curated even before AI:

- Video game tips web pages with massive chunks of text / ads before you get to the inevitable answer of "hit A when X happens".

- The horrendous mess that Quora became, with answers to historical questions that are technically correct in some ways but also misleading.

- Medium articles about coding that are filled with irrelevant pics and blocks of text that are "not wrong" but also "not right" followed by weirdly specific code...

We had all that before AI.




Agree. Content was the OG slop. Buzzfeed with monkeys on typewriters.

The problem is that dopamine addicts generate outsized engagement. I know a literal crack mom who spends a solid 90+ hours a week watching accident videos to keep her brain triggered. The algorithm caters to her. Send promotional emails daily or more, constant notifications, recommend the same few videos over and over. Gotta get in there before she clicks another car crash video.

IMHO: Marketing is a top societal evil right now. If the media machine wasn't so desperate for content, AI wouldn't be a fraction of the problem it is. But with everyone obsessing over the next piece of content, fake AI presentations are mandatory.


Hmm. An AI trained to maximize dopamine could be a very bad thing. (It won't be stated that way. It will be trained to "maximize engagement", but it amounts to the same thing.)


> An AI trained to maximize dopamine could be a very bad thing.

Spelled "profitable". This is definitely something that's already happened/happening; see algorithmic timelines and the widespread sudden legalization and acceptance of gambling.


Our brains have been under attack for years. Zuckerberg, Dorsey, and company have already spent decades and billions doing just that, with capabilities already in the AI realm.


Still can't decide how terrified I should be that somehow Zuck is the good guy in the AI wars.


No clue what would give you that idea.


llama3


> could be

is


Too many people are worried about hallucinating AIs somehow taking over nukes instead of them juicing the dopamine machines.


You don’t need AI for that, “old fashioned” ML has been doing it for a decade.


TikTok?


Brainrot


Dude, take me back to BuzzFeed listicles. Some millennials being cringe >> AI slop


I think you're right. Since LLMs went mainstream, I've seen a lot of my colleagues' presentations and thought "was this written by ChatGPT?" but I've come to wonder if it's just given me the frame of mind to identify low-effort slop that lacks any original insight but uses all the right sorts of words and phrases, regardless of whether it was authored by a human or not.


My hope is that equivocating waffle will look so much like ChatGPT that humans will have to write clearly and precisely to differentiate ourselves and we can put this horrible era of essay writing style behind us.

Though I’m starting to think that AI might improve faster than us so there might only be a diminishing margin of opportunity left to do this.


We’ll have AI tools to take our bullet-points and expand them into prose (crappy now, eventually beautiful prose). Then we’ll use AI tools to summarize that prose into bullet points.

Eventually we’ll realize we can just send the bullet points and generate the prose on the receiving side. This will be great because most of the time the AIs will be able to say “let’s just be a nop.”


Reminds me of the SMBC comic https://www.smbc-comics.com/?id=3576 that explores the idea further.

I am less optimistic than the comic.


It's both. Especially the out-of-context tinfoil rage responses. They always existed. But now it's so common to see some totally benign article about pizza where the top comment is "don't let them tell you not to remember in 1995 when the US implanted radios in Syrian babies".

The algorithm is being trained solely for engagement. It is horrifying.


This sounds completely unrelated. If someone is really leaving a comment like that, it has nothing to do with the article and everything to do with the way weirdos engage with the internet.

The Slop is the wordy vapid garbage that maximizes SEO.


Parent is right though: AI slop in an article is to maximize SEO. AI slop in a comment is a weird jumble of implausible claims to maximize engagement. I see both on a daily basis now.


I've had similar false-positive experiences where I swore some content had to be LLM generated, until I discovered the source was just poorly done, or was text from some older writing that "sounded" wonky but was simply a product of its time (like a 1940s newsreel).


This. Bad AI output is indistinguishable from bad human output. It's literally the same exact shit.


With AI, there can be more content, produced faster and probably more cheaply, that is tailored to individual users.


Yes, AI isn't entirely to blame for this - it's low quality, irrelevant and misleading content in general.

Also, we have to look at the incentives: advertising. Somehow, this is acceptable to consumers, profitable for companies, and profitable for publishers. How that works is absolutely beyond me... and it won't change so long as Google has a majority of the search space, as they profit directly from this.


Good point. The standards for advertising networks have to increase tenfold. Right now they reward slop, companies drain their ad budgets on channels they can’t even fully measure, and it repeats since it takes too long for companies to notice the effects.

It’s the reason advertising costs have ballooned digitally, and also the cause of many lawsuits that Google continues downplaying in the public eye.


Today's AI is not to blame for anything, because those AIs lack agency. Take a good look at the theory and the real-life algorithms and you will soon realize that GPTs are just better parrots. Tools that they are, the blame does not lie in the tool, but in the user. Not unlike guns.


> - The horrendous mess that Quora became, with answers to historical questions that are technically correct in some ways but also misleading.

What kills me is that I have to hunt for the answer on Quora now. I just treat Quora like I do Pinterest: back out and never return.


If you happen to use kagi, you can "ban" Pinterest from your SERP.

For me, it's one of those "quality of life" things that really improves my search experience (and is therefore worth paying for).


There are chrome plugins that block domains from Google results for free and don’t require a login.
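The core matching logic of such a domain blocker is tiny. A minimal sketch, assuming a hand-picked blocklist (the example domains and function names here are made up for illustration, not taken from any specific plugin):

```python
from urllib.parse import urlparse

# Example blocklist; in a real extension this would be user-configurable.
BLOCKLIST = {"pinterest.com", "quora.com"}

def is_blocked(url: str, blocklist: set = BLOCKLIST) -> bool:
    """Return True if the URL's host is a blocked domain or a subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in blocklist)
```

A real plugin would run something like this over each result link on the page and hide the matching entries; the suffix check matters so that `www.pinterest.com` is caught but an unrelated host like `notpinterest.com` is not.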


I was, and I just might re-subscribe; I'm getting tired of how increasingly useless Google is becoming.


I remember finding Quora when it was good. It was a literal godsend, actually interesting and meaningful questions and answers. Basically what the internet was advertised as (information interchange). Sadly it only lasted like 6 months.


> We had all that before AI.

What AI gives us is vastly cheaper slop, so now it can be produced at a scale unimaginable to prior generations. No more paying some schmuck a penny a word to bang out "private label articles," so they were only practical as SEO. Now you can have a unique slop for every spam email, every search query!

Truly, we are making the world a better place.


I can see the use for describing AI spam, but I'm starting to see people use it to describe anything they don't like, basically a replacement for "mid", which was heavily used the last couple of years. I've noticed that when some people learn a new "trendy" word, they want to use it at every possible opportunity until it loses meaning.


"Slop" is internet slang that has long been used to refer to low-quality content that exploits current internet trends; using it to refer specifically to AI-generated content is pretty new.


> basically a replacement for "mid", which was heavily used the last couple of years

Wot now? Somehow I managed to completely miss that.

Edit: ah, seems like it's mainly a twitter thing. That explains it.


"mid" is something I hear most commonly from people of high-school age.


I've seen it on reddit too.


Of course, it's seeking its low-energy state.


The signal-to-noise ratio being so bad on the web today is AI's most compelling use for me: it's better at getting a pretty-close-to-right answer than searching the web, with much less crap I have to block along the way.

But consider that all that crap ended up on the web for a reason, and wonder how long it will be before AI injects it into its own results.


That's a result of natural selection forced by search engines. I think that's why I like ChatGPT so much: you can ask it very specific things and it will tell you exactly what you need. It does output verbose answers by default, but you can control that by prompting for a short answer.


Enjoy it while you can. Once the marketing guys and monetization engineers get to it I suspect things will get a lot more annoying.


This. Marketing is a cancer on modern society, and it's metastasizing to new communication media increasingly quickly.


Marketing is literally customer/product (market) identification and all the activity required to produce and deliver the product to customers. Business. Did you mean advertising?


Are engineering, design, and logistics just subfields of marketing? Is this really how people use the word marketing?


Motte and bailey. Marketing is what you say on paper, but nothing like it in practice.

(And people have been giving me identical response with "advertising" in place of "marketing" for years now, so I'll say yes to both; the terms are effectively interchangeable in both motte and bailey cases.)


Luckily, unlike search engines and similar, LLMs can be run completely locally. As long as there are corporate interests trying to squeeze new tech for every cent with no regard to anything else, there will be hobbyists making what's actually useful for them.


Not if my company is paying for it. Which they will, because they have to. The price will be kept in check because it is a commodity. Anyone wanting enterprise business will have to include it (Microsoft 365).


Good point.

I do find myself sometimes even prompting for a shorter answer after it hits me with a blob of text ;)


I do so as well, but usually the response is the chatbot first generating a paragraph about how it’ll comply with the request, making the prompt moot.


It is probably an internalized "prompt engineering" trick from GPT-3.5 times, when you could achieve near-GPT-4 performance using stuff like that. "Rephrase the question and plan your answer" was at the top of the list.


Keep in mind that tokens are an LLM's units of thought; the only moment the model does any computation is when it generates tokens. Therefore, asking it to be succinct effectively means dumbing it down.
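One way to get both the thinking tokens and a short reply, sketched below: ask the model to reason at length but end with a marked terse line, then strip the reasoning before showing it. The prompt wording and the "Final answer:" marker are assumptions for illustration, not any particular API's convention:

```python
# Hypothetical pattern: let the model spend tokens "thinking",
# but only surface the short final line to the reader.
SYSTEM_PROMPT = (
    "First reason step by step under a 'Reasoning:' heading. "
    "Then reply in one sentence on a line starting with 'Final answer:'."
)

def extract_final_answer(model_output: str) -> str:
    """Drop the reasoning portion and keep only the terse answer."""
    marker = "Final answer:"
    idx = model_output.rfind(marker)
    if idx == -1:
        # Model ignored the format; fall back to showing everything.
        return model_output.strip()
    return model_output[idx + len(marker):].strip()
```

This way the succinctness is applied after generation, so the model still does its computation in tokens, just not in tokens the user has to read.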


We also had the term "slop" before and it's not strictly related to AI but "content or media of little-to-no value".


Coming up with and quickly adopting new terms to sound "hip" is one of the most important skills for AI practitioners. We've had "agent-based" concepts in CS for decades, but if you're "in" you'll of course refer to "agentic" workflows and the like.

It makes sense to come up with terms to describe common patterns: Chain-of-Thought, RAG, etc. are good examples of this. But the passion some members of this community have for being intentionally confusing is tiresome.


It's true... the quality of content on the Internet has a bunch of problems, and AI is just one of them. The economic incentives to trick people into staying on your page and scrolling as much as possible are a fundamental part of the problem. Politically-motivated ragebait and lies are a separate huge problem. AI-generated slop is also a problem for content quality and UX, but I'm far more concerned about the impact of AI on the value of human labor and intellectual property than I am about the UX of my search result pages.


Youtube videos that could have been a one paragraph answer.


Exactly why I literally never use videos anymore for 'how to do [x]' when 'x' can be expected to be fairly straightforward.

- 10 seconds intro

- 10 seconds yoooo guys wassss up

- 30 seconds build up

- 30 seconds showing what the answer will do

- 30 seconds encouraging you to post comments to the video

- 2 seconds to explain the answer

- 20 seconds yooo don't forget to pound that like and subscribe

If this is really what's optimal for the sacred algorithm, then that algorithm needs a serious tune up.


If it's a somewhat-successful YouTuber, then you missed the 60-second shout-out to their sponsor.


Plus 30 seconds to 2 minutes of Patreon segment, depending on whether they're reciting the list of newest/bestest patrons, and then 30 seconds of outro in the end, creating a frame for YouTube to stuff recommended videos of the creator in.


What we need is an AI agent who can parse through 10 minute videos, and then extract out and summarize in text format only the important 2 seconds.


I've seen examples where seeing someone make, e.g., some repair really benefits from video. But I certainly won't argue in general.


You forgot the NordVPN ad!


Literally all of these are just symptoms of the general public's declining ability to think critically. The content/spam/slop is simply being tailored to be effective with its intended audience.

But that's not the scary part.

The difference with AI slop is just the enormity of the scale and speed at which we can produce it. Finally, a couple of data centers can produce more slop than the entirety of humanity, combined.


Don't think so. It's just the democratization of the internet. It went from an elitist, well-read, and educated bunch to people communicating with pictures. Nothing wrong with that, tho the text internet was nice.

At work people often ask me for help with documents or translation. Or I see some friends' conversations. While Polish grammar is pretty difficult, it's not surprising to see messages with orthographic errors in five out of six words. You just live in a bubble of people who can read and write well.


> It went from elitist, well read and educated bunch to people communicating with pictures. Nothing wrong with that, tho text internet was nice.

There is absolutely everything wrong with that if it consistently invades and drowns out the voices of the well-educated elite.

The worst tyranny in this world is the tyranny of the ignorant against the learned. In its worst form, it can lead to mob justice and other horrible things.

Maybe that's not your worldview, but it is the view of many, and it's just as legitimate as yours.


> it's not surprising to see messages with orthographic errors in five out of six words

But they're saying something. The characteristic feature of slop is not informality: it's fundamental meaninglessness.


> The difference with AI slop is just the enormity of the scale and speed at which we can produce it. Finally, a couple of data centers can produce more slop than the entirety of humanity, combined.

Think only about your own consumption for a second. You're not going to engage with slop, are you?

I'm imagining that whatever your filter process is, that you manage to heavily engage with content that is mostly good and well-suited for you. Discounting Google search becoming crummy, of course.

AI in the hands of talented people is a tool, and they'll use it to make better stuff that appeals to you.

I wouldn't worry about other people. Lots of people like rage bait, yellow journalism, tabloid spam, celebrity gossip, etc. There's not much you can do about that.


When I was a kid and was told to write an essay on "what is slop", teachers would give lots of extra points for dumping useless and vaguely related information just to raise the word count. An answer along the lines of "slop is useless shit created only to serve as filler content to make money on stupid people" would get zero points; I was expected to write the history of slop, the etymology of the word, the cultural context, the projected future, blah blah blah, and don't forget at least ten citations, even if they were even more useless than the essay I was writing and 100% pure unadulterated slop.

My master's thesis was on a topic that nobody else had researched (it wasn't revolutionary, just a fun novel gimmick), so I had to write filler just to have a chapter on a topic it was possible to find references for, in order to hit the citation count, even though the chapter wasn't relevant to the actual topic of the thesis.

So yes, I think the push to create slop was there even before computers became a thing; we just didn't recognize it.


As with everything, I think it's scope and scale: Quora was always a cesspool, but now every single question has a machine-generated response that's frequently incorrect or misleading (sometimes in legally concerning ways, like one was for me recently).


I don’t think they are saying that the internet hasn’t been shit. It is. I think what they are saying is that it is about to get a whole lot shittier thanks to AI.


Anything written in response to classic "SEO manipulation", i.e. to get a higher ranking on a search engine results page by creating the appearance that the content is higher value, more comprehensive, or took more effort to produce, creates net slop. And that's been going on for 10+ years.


I guess the problem is that for the lazy, the ability to generate slop has accelerated significantly through the advent of AI. Slop creators have been disproportionately empowered by AI tools. People who create quality content still benefit from AI, but not to the same extent.


> The horrendous mess that Quora became

It's not a horrendous mess for me. It works very well. Everything depends on what content you interact with, as the algorithm heavily shapes your feed based on what you interact with. It's no different from any other social network.


99% of people who use Quora don't use it as a social network, they click that top Google search result claiming to answer their question and then get frustrated at how fucked up Quora's website is and how rarely it actually answers their question.


Recipe articles with hundreds of words of irrelevant text before the actual recipe.


The endless drivel of recipe websites is another one, burying the actual recipe under an absolute mountain of slop.


LPT: Recipe Filter is shockingly good at cutting out all the filler and presenting it in an easy-to-read format

https://github.com/sean-public/RecipeFilter


Thanks for sharing!


Recipe for cinnamon rolls:

When I was a kid we used to spend summers with my grandmother. It was an idyllic, pastoral setting and we used to chase the goats around and catch butterflies.

[snip 3000 words]

...when I asked her for her recipe, it turns out she made cinnamon rolls by buying Pillsbury ones at the grocery store! So if you don't want to be like grandmother, use 2 cups of flour...


"Best X of YYYY" articles have been (mostly? fully?) automated mashups of tech specs for years too.


Yeah, slop isn't new; AI just makes it easier to produce.

Other examples include those books where each chapter, generously estimated, has a tweet's worth of thought padded out with 35 pages of meandering anecdotes that just paraphrase the same idea. It's very clearly a sort of scam: the padding is there to make it seem like the book has more information than it does when you look at it in a digital bookstore.


AI hype allows one to push "slop" about AI slop.

Just like simple template generated SEO, template-written "content", etc. before.

In fact, a lot of writing about AI slop could be considered just as much slop...


Yeah, and most of the reason for that can basically be summed up as "it's what Google incentivises".

They look for detailed pages, so pages are bloated with irrelevant information. They look for pages people spend a lot of time on, so the same thing occurs. Plus, the hellscape that is modern advertising means that rushing content out quickly and cheaply is encouraged over anything else.

AI will probably accelerate the process even more, but it's already been a huge issue for years now.


There's a bit of blaming a victim going on here. Especially early on in the days of SEO, Google incentivized slop the same way a bank vault incentivized armed robbery: by having something of value in it.

Google incentives don't matter much for honest website operators. They're only relevant when you want to abuse the system to promote your worthless bullshit[0] at the expense of the commons.

I really wish society started to treat marketing hustlers with the same disdain it has for robbers.

--

[0] - If it was worth anything, you wouldn't be worried about SEO all that much, especially back before it all turned into a race to the bottom.



