I really don't see the value of summarizing/repackaging web search hits. Given that 99% of SEO-tuned web content is just shilling for vendors who don't want to be seen, LLM search summarization will just repackage those ads into a more palatable format that is LESS useful than the original, while more successfully hiding the obvious signatures that used to be a clear warning sign that... THE.FOLLOWING.CONTENT.IS.MANIPULATIVE.CRAP.
I think the value here is not in searching for SEO crap, but in turning it on when you want references to the most current information relevant to your query.
For example, if you ask LLMs to build code using the three.js library, nearly all of them will reference version r128, presumably because that version has the largest representation in the training data set. Now you can turn this on and ask it to reference the latest version, and it will search the web and find r170 and the latest documentation to consider in its response.
I was already doing this before by adding "search the web for the latest version first" in my prompts, now I can just click a button. That's useful.
I tend to agree. If I ask ChatGPT the best way to make pasta, it will pull from every source it's ever been trained on. If it decides to search the web, it will mostly cater to one or two sources.
You don't think SEO-LLMs will evolve to redirect search-LLMs to 'see the world' the way the SEO-LLMs want it to? I foresee SEO-LLM-brinkmanship as the inevitable outcome. Soon THIS will be the catalyst for the real Skynet -- battling smart ad engines.