The incentives are very different. With SEO spam, you have a direct monetary incentive. More spam = more clicks = more money. With LLMs extracting information and answering questions without actually sending the user to the site ... what's the incentive to create low-accuracy content to be ingested, beyond "I want to mess with the data set"?
I'm sure a few people will go down that route, but it takes a lot of intrinsic motivation. It's a very different beast from a multi-billion-dollar market drawing in millions of interested parties. With only a handful of saboteurs, you can concentrate on identifying and filtering them out, which feels easier, especially because shadow-banning is hard, if not impossible, for them to detect.