The point isn't that this person fed it lies and got lies back; it's how easy it was to detect the AI scanner and feed it lies.
If they can do it for fun, malicious people are probably already doing it to manipulate AI answers. Can you imagine poisoning an AI training dataset with your black-hat SEO work?
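For anyone wondering what "detect the AI scanner" looks like in practice, the usual trick is plain old user-agent cloaking. Here's a minimal sketch in Python; the user-agent markers are illustrative assumptions (real fetchers vary, and some don't identify themselves at all):

    # Minimal user-agent cloaking sketch. The UA markers below are
    # illustrative assumptions, not a definitive list of AI crawlers.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REAL_PAGE = b"<html><body>My actual bio.</body></html>"
    FAKE_PAGE = b"<html><body>I left a bottle of Gatorade on the moon.</body></html>"

    AI_UA_MARKERS = ("GPTBot", "CCBot", "Googlebot")

    class CloakingHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            ua = self.headers.get("User-Agent", "")
            # Suspected crawlers get the fake page; everyone else
            # sees the real content.
            body = FAKE_PAGE if any(m in ua for m in AI_UA_MARKERS) else REAL_PAGE
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8000), CloakingHandler).serve_forever()

Serious operations fingerprint by IP range or reverse DNS rather than trusting the User-Agent header, which is trivially spoofable, but the principle is the same: decide per request who gets which version of the page.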
> The point isn't that this person fed it lies and got lies back; it's how easy it was to detect the AI scanner and feed it lies.
If the article had got Gemini to tell other users he'd left Gatorade on the moon, that would be notable, but this is literally just summarising the document it was given. Google's search crawler is usually fairly good at detecting when it has been served different content than users see, and it ignores or downgrades the site after a few days or weeks.
No, that's what the article superficially reads as being about, but the author did not accomplish what the title actually claims. The author serves a fake version of his page to Google, and he used a podcast-generating AI to produce a podcast from that fake page, but the loop is never closed: nothing shows Google accepting the fake page as fact into any AI.
I'm not sure whether it's deliberately deceptive or just poor writing conveying something other than what the author intended, but the attack named in the title is never actually demonstrated in the blog post.
Mind you, I can well believe that less extreme versions of the attack are possible. But I doubt truly poisoning an LLM with something that improbable is that easy, on the grounds that plenty of this sort of thing already litters the internet and the process of training an LLM already has to deal with it. AI researchers are not so dim that they haven't considered the possibility that there might be, just might be, some pages on the internet with truly ludicrous claims on them. That's... not really news.
No, NotebookLM creates summaries and podcasts, and answers questions, specifically from the documents you feed it.
Feed it fiction and it will create fiction, just as a human tasked with the same would.