Thank you for the answer. I thought it was simpler than that; I'm glad that assumption was wrong.
I understand that any practical system of this kind would have to be very coarse, but even at that coarse level, does it have any kind of "error bar" indicator to show how "sure" it is of a possibly incorrect answer? And can it come up with pertinent questions to narrow things down to a more "correct" answer?
I'm not sure I can answer that in a satisfying way, simply because my memory is fallible. How unsure the system is about something (to the extent that can be coarsely represented) certainly shows up in the results, and I suspect the underlying search heuristics tend to prioritize conclusions with a higher represented confidence level, roughly the kind of ordering sketched below.
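As a minimal sketch of that prioritization idea: as far as I recall, Cyc's truth values are closer to coarse tiers like "monotonically true" vs. "default true" than to numeric error bars, so assume (the tier names and data structures here are my invention, not Cyc's actual representation) each candidate answer carries such a tier the search can sort on:

```python
# Hypothetical sketch of confidence-tiered result ranking, NOT Cyc's
# actual machinery: assumes each answer carries a coarse confidence
# tier that the search can order by.
import heapq
from dataclasses import dataclass, field

# Coarse tiers, highest confidence first; names are illustrative only.
CONFIDENCE_ORDER = {"monotonic": 0, "default": 1, "speculative": 2}

@dataclass(order=True)
class Candidate:
    priority: int
    answer: str = field(compare=False)
    confidence: str = field(compare=False)

def rank_answers(candidates):
    """Yield answers so that higher-confidence tiers surface first."""
    heap = [Candidate(CONFIDENCE_ORDER[c], a, c) for a, c in candidates]
    heapq.heapify(heap)
    while heap:
        item = heapq.heappop(heap)
        yield item.answer, item.confidence

for answer, conf in rank_answers([
    ("Socrates is mortal", "monotonic"),
    ("Socrates liked olives", "speculative"),
    ("Socrates lived in Athens", "default"),
]):
    print(f"{answer}  [{conf}]")
```

The point is just that a handful of ordered tiers is enough for the results to visibly reflect how sure the system is, without anything like a real probability estimate.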
The latter sounds like something Doug Lenat has wanted for years, though I think it mostly comes up in cases where the available information is ambiguous rather than unreliable. There are various knowledge-entry schemes in which Cyc dynamically generates more questions to ask the user, either to disambiguate or to elicit relevant information; a toy version of that question-generation step follows.
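To make that concrete, here's a toy sketch of a disambiguation question generator; the lexicon, concept names, and question wording are all invented for illustration and aren't Cyc's actual interface:

```python
# Hypothetical sketch of disambiguation-driven question generation,
# loosely in the spirit of the knowledge-entry schemes described above;
# the lexicon and question format are made up for this example.

# Toy lexicon: a surface term can denote several distinct concepts.
LEXICON = {
    "bank": ["FinancialInstitution", "RiverBank"],
    "java": ["ProgrammingLanguage", "IndonesianIsland", "CoffeeBeverage"],
}

def clarifying_question(term):
    """If a term is ambiguous, return a question narrowing it down."""
    senses = LEXICON.get(term.lower(), [])
    if len(senses) <= 1:
        return None  # unambiguous (or unknown): nothing to ask
    options = ", ".join(senses)
    return f'By "{term}", do you mean one of: {options}?'

print(clarifying_question("Java"))
# -> By "Java", do you mean one of: ProgrammingLanguage,
#    IndonesianIsland, CoffeeBeverage?
```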