Humans barely agree with each other. Do you think they are going to be persuaded by half-baked chatbots like GPT-3? (Yeah, the cherry-picked stuff looks good, but the rest is still not really there.)
So far all OpenAI has done is generate more publicity (probably intentional, maybe with an eye towards investors) by acting as if they are guarding the next nuclear weapon of some kind. Frankly, there is nothing to lose by not using their "ohh god it's so dangerous" model.
Machine learning progress is synonymous with non-linear growth. So of course I think any chatbot will eventually have no problem persuading humans.
The pivotal moment is when its persuasion skills exceed those of the average human.
I have no clue how far away we are from that goal, but given accelerating growth, it will come sooner rather than later.
We are already being convinced by invisible bots that have no mouths to talk with and no keyboards to type with. Go to any website -> all the ads desperately trying to woo us are nothing but AI/machine learning. Go to Amazon -> the product placement and results customized to you -> AI/machine learning. Considering Google and Amazon are already making boatloads of money selling stuff directly and indirectly through this, I would say no new danger is coming by way of Zeus's thunderbolt (aka GPT-3, per ClosedAI's attitude).
Unless you are a bot and managed to convince me to spend a couple of minutes typing this answer - in which case - well played, bot, well played! :D
> Humans barely agree with each other. Do you think they are going to be persuaded by half-baked chatbots like GPT-3?

Isn't that precisely why we can be convinced by things like GPT-3? Humans don't value other humans' opinions that much, so it's easy for people to trust a bot over another human.