I think LLMs operate in a similar way to some of the important parts of human cognition.
I believe they operate in a way that makes them at least somewhat useful for some things. But I think the big issue is trustworthiness. Humans - at least some of them - are more trustworthy than LLM-style AIs (at least current ones). LLMs need progress on trustworthiness more than they need progress in any other area.