Suppose my name is John Smith, I am not an A.I. expert, and my opinion is "Eliezer Yudkowsky is right about A.I. safety." It would be strange if someone went around saying "That great humanitarian, John Smith, is right about A.I. safety." It seems like a misattribution of the idea or argument. Just saying.
Anyway, this article wasn't about A.I. safety; it was about a specific technology called LLMs. It's not clear this discussion is on topic.