LLMs don't know what is true (they have no mechanism for knowing), but they can produce fluent text on any topic. OpenCyc, on the other hand, contains curated 'truth'. If the two could be meaningfully combined, the result could be valuable.
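As a toy illustration of that combination (a sketch only, in Python; KNOWN_FACTS and accept_claim are made-up names, and a real integration would query Cyc's inference engine rather than a hardcoded set):

    # Stand-in for OpenCyc: a set of known-true triples.
    KNOWN_FACTS = {
        ("paris", "capitalOf", "france"),
        ("water", "isa", "liquid"),
    }

    def accept_claim(triple: tuple[str, str, str]) -> bool:
        """Keep an LLM-asserted triple only if the knowledge base confirms it."""
        return triple in KNOWN_FACTS

    # Fluent but unverified LLM output gets filtered down to what checks out.
    claims = [("paris", "capitalOf", "france"), ("paris", "capitalOf", "italy")]
    verified = [c for c in claims if accept_claim(c)]  # only the first survives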
It's the same as using an LLM for programming: when you have a way to evaluate the output (e.g. by running tests), it's fine; when you don't, you can't trust the output, because it could be completely hallucinated.
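A minimal sketch of that generate-and-verify loop, assuming a hypothetical ask_llm client (not any particular library's API); the key point is that the external check, not the model, decides what gets accepted:

    from typing import Callable, Optional

    def ask_llm(prompt: str) -> str:
        """Hypothetical LLM call; swap in your actual client here."""
        raise NotImplementedError

    def generate_verified(prompt: str,
                          check: Callable[[str], bool],
                          max_attempts: int = 3) -> Optional[str]:
        """Resample until the output passes an external check, or give up."""
        for _ in range(max_attempts):
            candidate = ask_llm(prompt)
            if check(candidate):   # e.g. run a test suite against generated code
                return candidate
        return None                # unverifiable output is rejected, not trusted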