Yeah, this isn't unexpected at all. Language models are trained to produce plausible text, not true text. Much of the time what they produce happens to be true, because it's more plausible that somebody would say "Biden is the president" than "Steve is the president". But you asked a pretty niche question for which no answer is both plausible and true, so it went with plausible.
It clearly worked because you thought they sounded real!
Try asking it for good references that deal with lists, *or "none" if there aren't any*.
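If you're calling the API directly, the same trick looks something like this. A minimal sketch assuming the OpenAI Python SDK; the model name and prompt wording are just placeholders, not a recommendation:

```python
# Sketch: give the model an explicit "none" escape hatch so it doesn't
# have to invent references just to produce a plausible-looking answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            "content": (
                "When asked for references, only cite works you are "
                'confident actually exist. If you know of none, reply "none".'
            ),
        },
        {
            "role": "user",
            "content": "What are good references on <your niche topic>?",
        },
    ],
)
print(resp.choices[0].message.content)
```

No guarantee it always takes the escape hatch, but giving it a plausible way to say "I don't know" at least makes that answer available.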