Alternatively, we can draw a more fitting parallel to Robert Oppenheimer, who, upon recognizing the devastating potential of his creation, dedicated himself to halting the spread of nuclear weapons worldwide.
Robert Oppenheimer knew the entire domain space of nuclear weapons. Current researchers don't. It's not like future neural networks are just going to be stacks and stacks of transformers on top of each other.
Warning against potential dangers is meaningless. Any significant piece of tech has potential danger. Some innocuous microprocessor can be used as a guidance chip for a missile, or run a smart toaster oven.
There is more to it, though. Geoffrey isn't just warning about potential danger. He is looking at current research and wrongly extrapolating AI power into the future. Sure, AI can and will be misused, but most of the warnings about sentient AI, or its ability to solve complex problems like designing deadly viruses, are purely hypothetical.
If they offered, I would take it. Not going to put an ounce of effort into convincing anyone to give me the position.
Jokes aside, the statement stands on its own regardless of credentials or the lack thereof. A lot of the hypothetical AI danger relies on the assumption that AI will somehow internally prove, by proxy, that P = NP, and be able to produce answers that traditional methods could only find through brute-force iteration over some arbitrary search space. Or, alternatively, that it will somehow figure out how to search those spaces more efficiently, despite there being no evidence whatsoever that a more efficient search algorithm exists for a given task.
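To make "brute-force iteration" concrete, here's a minimal Python sketch. Subset sum is my own choice of example (it's a classic NP-complete problem); the point is that unless P = NP, no algorithm is known that avoids worst-case exponential work here, no matter how clever the searcher is:

```python
from itertools import combinations

def subset_sum_bruteforce(nums, target):
    """Find a subset of nums summing to target by exhaustive search.

    Subset sum is NP-complete: no known algorithm avoids worst-case
    exponential work, and unless P = NP, none exists.
    """
    # Enumerate all 2**len(nums) subsets -- the brute-force iteration
    # referenced above. At n = 60 that's already ~10**18 candidates.
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return combo
    return None

print(subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9))  # -> (4, 5)
```

An AI doesn't get to skip that exponential wall just by being smart; it would need to discover a fundamentally better algorithm, and for many such tasks there's no evidence one exists.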
Everything "simpler" than that is already possible to do, albeit with more steps, which is irrelevant for someone with capital or basic knowledge.