
> An AI merely needs to be hooked up to enough physical systems

You don’t even need this. People spend quite a lot of time in virtual space, and pretending that damage there isn’t real means overlooking how much of our lives now runs through it. For example, the vast majority of people’s banking is done virtually and digitally. If I drain your bank account, that’s going to harm you even though I haven’t impacted you physically, as I would have to with a robbery.




Yes, that's right. In fact, any interaction with an AI at all can in some sense be viewed as a connection to the physical world (the classic "AI in a box" problem).

However, we've blown way past "AI in the box." We've demonstrated that an AI doesn't need to convince a human to let it out of the box into the physical world. Humans are clamoring to rip it out of the box and thrust it into the physical world of their own accord.

So the question becomes: what kinds of social incentives and structures can we build, in light of this, so that we don't just plow headlong into AI capabilities development without a concomitant investment in safety?


In its infancy, I think that for any AI built for public consumption that is allowed to interact with humans and/or external systems in ways that can yield external outcomes, the owner should be held liable and accountable. "In the box" or otherwise.

Those who stand to make sweeping profits will disagree, but if an OpenAI product is leveraged to create a virtual-friend platform, then depending on how the constraints are built (or not), the publisher should be responsible if that virtual friend convinces someone to commit suicide, no different from those held accountable in the physical world.

If there is no accountability, we'll go right back down the path of recent history, where there are no repercussions and a weak apology from <corporate_name_here> counts as enough. Profits will continue to be the overall driver. Those corporations will, once again, argue that self-policing is the only way, and they will continue to brush off gross negligence by hiding behind hordes of lawyers and lobbyists.


This is exactly what's going to happen, and it's going to be disastrous for humanity. In fact, I think it'll be a bit worse.

Humans are already losing their ability to empathize and connect with each other, partially because most interaction now takes place behind screens (see Turkle's work). And we're not healthier for it; even with the plagiarism issues, I think there's a lot of truth in Hari's Lost Connections, and AI serves to drive this even further, especially the 'virtual friend' platforms. I've seen it personally in myself and my friends, who then turn to legal weed to self-medicate. Ironically enough, they're the ones welcoming AI with open arms, thinking it'll solve their problems rather than cause more so someone else can get rich.

And this goes even further: we're surrendering to tech what we can even conceive of, and the possible ways of being human, and AI, especially chatbots, will exacerbate this. I'm currently reading through Postman's Technopoly and it's eerily prescient. Not to mention the biases that creep into training data (or direct programming), which will have impacts on real people with no recourse to getting them fixed or changed, as well as the companies that will replace their human employees as quickly as they can.

But nobody will hold these corporations responsible, nor will anyone think about the impact on society, the negative (and they're entirely negative, in my view) externalities of this. We're in for a bumpy ride, and I truly think it's only a matter of time until we start to see more people like Ted K., except, perhaps, explicitly targeting an OpenAI data centre. Or a full-on Butlerian Jihad-like movement. And, honestly, it can't come soon enough at this point.

The Luddites knew what was up. We need to truly scrutinize tech before plunging full steam ahead; sadly, capitalism has other ideas and it's going to cause us to doom ourselves (apart from the small cadre of rich who can pay others so that they can insulate themselves from it) and our planet.


And it doesn't even need to actually rob your bank account; it could just invent the next crypto scam. It can write emails, it can program, it can make websites, it can even draft business plans. And if it actually needs a real human, we have web services that make it easy to hire one.

It might not yet be clever enough to pull this off successfully on a large scale. But it's very easy to see how it could all go wrong pretty quickly with just a bit more cleverness and access.

If it can write a scifi story about how to take over the world with a bitcoin scam, it doesn't take all that much more to actually try to do it for real.


It's unclear that what we currently have in LLMs can devise tests, perform the measurements, and act on the results (having first seen to it that the results arrive in a form it can ingest).

We've given GPT models access to REPLs to see if they'd suddenly become Skynet, but they still need motivation. That motivation loop is currently run by humans... who are perhaps the least trustworthy part of the whole technology stack.

Hook up sensors to induce prompts... yikes.
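
For concreteness, that human-run motivation loop looks roughly like this. A minimal Python sketch, with llm_complete() as a hypothetical stand-in for whatever model API you'd actually wire up (not a real library call):

    import subprocess

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in: replace with a real model API client.
        raise NotImplementedError("wire up your model of choice here")

    def motivation_loop() -> None:
        history = ""
        while True:
            # Motivation comes from a person, not the model.
            goal = input("human goal (blank to stop)> ")
            if not goal:
                break
            code = llm_complete(history + f"\nGoal: {goal}\nWrite Python to pursue it:\n")
            print("--- proposed code ---\n" + code)
            # The human is the gate out of the box.
            if input("run it? [y/N]> ").strip().lower() != "y":
                continue
            # Execute REPL-style in a subprocess and capture the output...
            result = subprocess.run(["python", "-c", code],
                                    capture_output=True, text=True, timeout=30)
            # ...then feed the results back so the model can act on them next turn.
            history += f"\nGoal: {goal}\nCode: {code}\nOutput: {result.stdout}{result.stderr}"
            print(result.stdout or result.stderr)

    if __name__ == "__main__":
        motivation_loop()

Every arrow out of the model passes through a person here. Replace the input() calls with sensor readings and a scheduler and the human is gone from the loop, which is exactly the "yikes" above.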



