> And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines?
Those things that we see as problems are exactly the things our civilization relies on. Every time you make a purchase, you rely on the fact that meatware AI corporations ruthlessly exploit the environment and their employees.
Every time you enjoy safety, you rely on the fact that meatware military AIs got hellbent on acquiring the most dangerous hardware for themselves, while assessing that not actually using that hardware in any serious manner is more beneficial to them.
All the development of humanity comes from doing those problematic and horrible things more efficiently. That's why automating them with silicon AI is nothing new and nothing wrong.
I'm afraid that to evolve away from those problems we'd need a paradigm shift in what humanity actually is. Because as it stands, any AI, meatware or silicon, will eventually get aligned with what humans want, regardless of how problematic and horrible humans find the stuff they want.
It's a bit like with veganism. Killing animals is horrible, but humanity largely remains dependent on it for its protein intake. And any strategic improvements in animal welfare came from new technologies applied to raising and killing animals at scale. In the absence of those technologies, the welfare of the animals needed to feed a growing human population would be far worse.
There's always, of course, the danger of a brief period of misalignment as new technologies come into existence. We paid for the industrial revolution with two world wars until the meatware AIs learned. Surprisingly, they managed to learn things about nuclear technology with relatively minor loss of life (<1 million). But the overarching motif is that learning faster is better. So silicon AIs are not some new dangerous technology, but rather a tool for the already existing and entrenched AIs to learn faster what doesn't serve their goals.