I actually see this as a positive thing. Rather than one bad actor hooking an AI up to a headless browser at some point in the future, we are trying out everything that could possibly go wrong long before the AI can do much real damage (in a technical sense, as opposed to misinformation campaigns, job replacement, etc.).
> we are trying out everything that could possibly go wrong long before the AI can do much real damage
What I would prefer to see is more people doing this with the intention of making AI safer. Without that intention, people are incentivized to look the other way when something does go wrong, so we don't actually learn any lessons from these experiments.
Along the same lines, I would like us to build stronger social coordination mechanisms and technical safeguards that help us detect when things are going wrong.
Just mindlessly hooking up everything you can to AI seems bad.