I think you have a big misunderstanding about how these models work. They are just reproducing what they have seen before, and they have no information about the actual person unless that person is famous enough to have lots written about them in the training data. There is no reasoning or ability to critically synthesize information; the model just throws words around until the output looks close enough to something it has seen before.
Even if you feed it new data about the person, there is still no reasoning behind the output. For example, ask it to count the letters in a string of mixed letters and numbers: it fails more often than it succeeds. So yes, you can ask it to classify people by toxicity or fraud risk, and it will write you a report in the right genre, saying yes or no with the appropriate level of detail. But that report won't be connected to reality or represent actual risk.
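If you want to check the letter-counting claim yourself, here's a rough sketch of the kind of test I mean. The OpenAI Python SDK and the model name are just placeholders I picked for illustration; swap in whatever model and client you actually have access to.

```python
# Rough sketch of the letter-counting test described above.
# Assumptions (not from the original comment): the OpenAI Python SDK,
# an OPENAI_API_KEY in the environment, and "gpt-4o-mini" as a stand-in model.
import random
import string

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def run_trial() -> bool:
    """Ask the model to count letters in a random alphanumeric string,
    then compare its answer to the true count."""
    s = "".join(random.choices(string.ascii_lowercase + string.digits, k=20))
    true_count = sum(c.isalpha() for c in s)

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": f"How many letters (a-z) are in '{s}'? Reply with just the number.",
        }],
    )
    answer = reply.choices[0].message.content.strip()
    return answer == str(true_count)


if __name__ == "__main__":
    trials = 20
    correct = sum(run_trial() for _ in range(trials))
    print(f"{correct}/{trials} correct")
```

Run it a few times with longer strings and watch the accuracy drop; that's the point I'm making about the gap between fluent output and actual correctness.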