The real problem is the bad actors: trolls, mental and financial strip miners, and geopolitical adversaries.
We are just not psychologically adapted, intellectually prepared, or legally equipped for the deluge of human-like manipulative, misleading, fraudulent generative fake reality that is about to be unleashed.
Free speech, psychopathic robots, adversaries who want to tear it all down, and gullible humans are a very bad mix.
Absolutely this. You can already use GPT-4 to have a convincing text-based conversation with a target. And audiovisual generative AI is fast reaching the uncanny valley.
Since there is apparently no way to put the genie back in the bottle, everyone needs to start thinking about how to authenticate themselves and others. How do you know the person calling is your daughter? Is that text message really from the new bookkeeper at the plumbing firm who just asked you to change the wire transfer address? She seems legit and knows all sorts of things about the project.
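One concrete answer to "how do you know it's really her?" is to stop trusting voice or writing style and instead use a challenge-response check over a pre-shared secret. Below is a minimal sketch (all names and the passphrase are hypothetical; it assumes the secret was agreed in person, out of band) using Python's standard `hmac` module:

```python
import hmac
import hashlib
import secrets

# Assumption: the secret was exchanged in person, never over a channel
# an impostor could have observed.
SHARED_SECRET = b"agreed-in-person-passphrase"

def make_challenge() -> str:
    # The person receiving the call issues a fresh random challenge,
    # so a recorded answer from an earlier call can't be replayed.
    return secrets.token_hex(16)

def respond(challenge: str, secret: bytes = SHARED_SECRET) -> str:
    # The claimed family member (or bookkeeper) computes an HMAC
    # over the challenge with the shared secret.
    return hmac.new(secret, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge: str, response: str, secret: bytes = SHARED_SECRET) -> bool:
    # Constant-time comparison of the received answer against the
    # expected one, to avoid leaking information via timing.
    expected = respond(challenge, secret)
    return hmac.compare_digest(expected, response)
```

In practice the low-tech version of this (a family code word the caller must produce) works the same way; the point is that authentication has to rest on something the impostor cannot scrape or synthesize.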
Exactly! The distraction of "AI safety" that focuses on made-up, cool-sounding sci-fi risks will absolutely take us away from thinking about and dealing with these very real (and present right now) dangers.
I wonder if the compute power/GPUs used for crypto mining are being converted into compute for LLMs/GenAI, and what percentage of that crypto compute is under the custodianship of "bad actors." I'm just trying to think through how bad actors acquire these AI "powers" at the scary scale that can disrupt society.