
These small incremental AI tools seem, in isolation, to be helpful things for human coders. But over a period of decades, these iterations will eventually become mostly autonomous, writing code by themselves and with little human intervention compared to now. And that could be a very dangerous thing for humanity, but most people working on this stuff don't care, because by the time that happens they will be retired with a nice piece of private property that will isolate them from the suffering of those who have not yet obtained theirs.





If the danger is a high degree of inequality among humans on Earth, we are already there.

Inequality isn't on/off, though; it comes in degrees. The current existence of inequality isn't a logical dismissal of attempts to prevent it from worsening.

And of course, the danger of AI is much greater than just inequality: it is the further reduction of all human beings to cogs in a machine, and that is bad even if we all end up being relatively equal cogs.


Every time it’s the same pattern:

“Autonomous AI is dangerous”

“pfft, are you worried about X outcome? We already had it”


Because it's true? We already had a world war between autonomous AIs called national militaries before they (mostly) learned that total conflict doesn't result in them getting more resources. And autonomous AIs called corporations exploit our planet constantly in paper-clip maximizer fashion. The fact that they are running on meatware doesn't help at all.

And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines? The problems would be orders of magnitude bigger. They can do far more at scale. They can perfectly recall, process and copy all information they're exposed to. And they don't have a self-preservation instinct like people with bodies do.

> They can perfectly recall, process and copy all information they’re exposed to.

I'm not sure if it's better or worse that the computers can do that while the AIs running on them get confused and mix things up.

> And they don't have a self-preservation instinct like people with bodies do.

Not so sure about that; self-preservation is an instrumental goal for almost anything else. Even a system that doesn't have any self-awareness, but is subject to a genetic algorithm, would probably end up with that behaviour.
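As a minimal sketch of that intuition, here's a toy genetic algorithm in Python (everything in it is made up for illustration: the single "avoidance" trait, the population size, and the mutation rate are hypothetical parameters, not anyone's actual model). Each agent's only heritable trait is its probability of dodging a random shutdown each generation; nothing in the system is self-aware, yet selection alone drives the trait upward:

    import random

    # Toy genetic algorithm (illustrative, not anyone's real model).
    # Each agent has one heritable trait, "avoidance" in [0, 1]: the
    # probability it dodges a random shutdown event each generation.
    POP_SIZE = 200
    GENERATIONS = 50
    MUTATION = 0.05  # std. dev. of the per-offspring perturbation

    def mutate(avoidance):
        # Small Gaussian perturbation, clamped to [0, 1].
        return min(1.0, max(0.0, avoidance + random.gauss(0, MUTATION)))

    population = [random.random() for _ in range(POP_SIZE)]

    for _ in range(GENERATIONS):
        # Survival: agents with higher avoidance are more likely to
        # still be around when it's time to reproduce.
        survivors = [a for a in population if random.random() < a]
        if not survivors:  # degenerate case: reseed rather than die out
            survivors = [random.random()]
        # Reproduction: survivors refill the population with mutated copies.
        population = [mutate(random.choice(survivors)) for _ in range(POP_SIZE)]

    mean = sum(population) / len(population)
    print(f"mean avoidance after {GENERATIONS} generations: {mean:.2f}")

Run it a few times and the mean avoidance climbs toward 1 within a few dozen generations; shutdown-dodging gets selected for purely because it correlates with reproducing, which is exactly the instrumental-goal point.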


If we are still talking about AI-enhanced companies, it's not that companies evolve; it's that the companies that are unfit die off. Paul Graham put it humorously in a very old speech I can't find...

I was responding to (what I thought was) a point about AIs themselves, rather than one specifically attached to corporations.

Corporations (and bureaucracies) don't follow the same maths as evolution — although they do mutate, merge, split, share memes, etc., the difference is that "success" isn't measured in number of descendants.

But even then, organisations that last generally have their own survival encoded into their structure, which may or may not look like any particular individual within them also wanting the organisation to continue.


> And we see those as problems. But they were constrained by being executed by humans. Now the AI fans want to make more and more actually autonomous ones executed by machines?

Those things that we see as problems are exactly the things that our civilization relies on. Every time you make a purchase, you rely on the fact that meatware AI corporations exploit the environment and their employees ruthlessly.

Every time you enjoy safety, you rely on the fact that meatware military AIs got hell-bent on acquiring the most dangerous hardware for themselves, and on making the assessment that not using that hardware in any serious way is more beneficial to them.

All the development of humanity comes from doing those problematic and horrible things more efficiently. That's why automating them with silicon AI is nothing new and nothing wrong.

I'm afraid that to evolve away from those problems we'd need a paradigm shift in what humanity actually is. Because as it is now, any AI, meatware or hardware, will eventually get aligned with what humans want, regardless of how problematic and horrible humans find the stuff they want.

It's a bit like with veganism. Killing animals is horrible, but humanity largely remains dependent on it for its protein intake. And any strategic improvements in animal welfare came from new technologies applied to raising and killing animals at scale. In the absence of those technologies, the welfare of animals that could feed a growing human population would be far worse.

There's always, of course, the danger of a brief period of misalignment as new technologies come into existence. We paid for the industrial revolution with two world wars until the meatware AIs learned. Surprisingly, they managed to learn things about nuclear technology with relatively minor loss of life (<1 million). But the overarching theme is that learning faster is better. So silicon AIs are not some dangerous new technology, but rather a tool for already existing and entrenched AIs to learn faster what doesn't serve their goals.


If you are okay with more of it, then it is clear which side of the gap you are on.

Inequality has always had a breaking point where people revolt. There are no sides, only mechanisms.

Exactly. And it won’t isolate them btw. The AI will affect them too.



