I have the same concerns, but am feeling more comfortable about Munchausen-by-LLM not undermining Truth as long as answers are non-deterministic.
Think about it: 100 people ask Jeeves who won the space race. They would all get the same results.
100 people ask Google who won the space race. They'll all get the same results, but in different orders.
100 people ask ChatGPT who won the space race. All 100 get a different result.
The LLM itself just emulates the collective opinions of everyone in a bar, so it's not a credible source (and cannot be cited anyway). Any two of these people arguing their respective GPT-sourced opinions at trivia night will be forced to go to a more authoritative source to settle the dispute. That's no different from the status quo...