It only does this if you don't ask it to supply sources and don't double-check where you aren't confident. It lets you cut through 99% of the bullshit in search results and verify where necessary. Perhaps that's a new type of media literacy, but I don't think it's too far off from the one we already needed.
The idea that I might have to research the validity of a search result is very off-putting. There was a time when I trusted Google to give me the most relevant result, filtering out the link farms and spam. I don't see how AI gets us back to that trusting state.
My experience with Google is that the results were filled with SEO crap requiring quite a bit of manual sifting and confirmation/research (try searching for unbiased reviews that aren't simply trying to get you to buy from Amazon through an affiliate link, or for unbiased nutrition information). ChatGPT will at least (with the right prompt) give me links to PubMed or similar, rather than "Muscle Joe's Nutrition House".
Were you not already checking the validity of search results? I don't think Google's top few results were ever immune to "hallucination", where the top results happen to be garbage. It's where "don't trust everything you read on the internet" came from.
So I think the only thing that really needs to happen is to just blindly trust the AI, like you were apparently doing with early Google. I suspect, however, that you were actually gut-checking Google, which you can still do with any AI search that cites its sources.
You really are missing the experience of what Google offered for its first decade, when PageRank seemed revolutionary in terms of filtering out spam and 101-level SEO tricks. We had working search engines for a while, and they were taken from us: first by the providers seeking greater ad revenue, and second by the AI floggers who want to exploit our now-lowered expectations.
All you're doing here is justifying your lowered expectations.