
I'd go even further than "hold no value" and say it's actively detrimental to both the individual and society. We already have an avalanche of dehumanizing technology that isolates and placates us. We see the results of this in mental health and socialization problems. This is a downward spiral, as AI content will likely appeal to those who lack social skills, since they don't have to cope with the tricky vagaries of other humans - which is part of what makes us human and gives us social growth.



Even more catastrophizing. Do you get upset when you read the abstract of an academic paper? Or when you listen to a real podcast that summarizes a difficult topic in easier/shallower terms? Is the fact that an AI summarized it the problem? Can you point to a real harm here, or will you just hand-wave instead of seeing the reality that making information more available is a net positive?


a few real harms:

- massively inefficient use of energy, water and other resources at a time when we really need to address the climate crisis

- AI 'slop' with myriad mistakes and biases performing a mass DDoS on people trying to learn things and know what's true

- moving resources away from actually producing factual and original content


Thank you, these are mostly extremely valid complaints. I hope with time these turn out to be inefficiencies that can be moved past (AI models becoming local-first, energy-efficient tools that are more intelligent at summarization). Right now, though, I wholeheartedly agree.

The last one seems irrelevant for this specific use case - the content is already produced; it's just put into an easier-to-digest format. No one thought SparkNotes would kill books.


I was referring to real podcasts


Interesting. You have turned this around to be about me instead of the ideas. You must be good at arguing on the internet. I'm not.


Well, I'm just curious why you think something like this has negative value - I _do_ care about the ideas but you are the one who expressed that sentiment.


Here is what AI can do thus far:

1) humans produced a lot of content in good faith on the internet

2) the AI was trained on it, resulting in a non-von-Neumann architecture that no one really understands, but which can reason about many things

3) even simply remixing the intelligible and artistic output of millions of humans in lots of nonlinear ways, directed by natural language, leads to amazing possibilities that obviate the need for humans to train anymore, because by the time they finish training, it will all have been commoditized.

4) doing it at scale means it can be personalized (and also used to create unlimited amounts of fraudulent yet believable art / news / claims etc.) to spam the internet with fake information for short-term goals - some for LULz, others for profit or control, etc.

5) targeting certain goals, like reputation destruction of specific people or groups, seems like low-hanging fruit and will probably proliferate in the next couple of years, with no way to stop it

6) astroturfing all kinds of movements, with fake participants, is also a pretty easy goal with huge incentives — expect websites where 95% of the content and participants are fake trying to attract VC money or sell tokens, etc.

7) but ultimately, the real game changer is commoditizing everything you consider to be uniquely human and meaningful, including jokes, even eventually sex and intimacy. Visuals for heterosexual men, audio for heterosexual women (this is before the sexbots and emotionbots that learn everyone’s micro-preferences better than they know themselves, and can manipulate people at scale into being motivated to do all kinds of things and gently peer-pressure those who might resist).

8) For a few years they will console themselves with platitudes like "the AIs aren't meant to replace but to enhance; centaurs of human + computer are better than a computer alone," until the human in the loop is clearly a liability and people give up… the platitudes will become famous as epitomizing the optimistic delusions of humans who replaced themselves

It would probably be used by busy parents to raise their kids at first, in a "set it and forget it" way, educating them etc. But eventually it will be weaponized by corporations, or whoever trains the models, to nudge everyone towards various things.

Even without AI, software improves all the time through teams of humans sending automatic updates over the air. It can replace a few things you do… gradually, then all at once. Driving. Teaching. Entertainment. Intimacy. And so on.

I think the most benign end-game is humans have built a zoo for themselves… everyone is disconnected from everyone by like 100 AIs, and can no longer change anything. The AIs are sort of herding or shepherding the humans into better lifestyles, and every need is satisfied by the AIs who know the micro-preferences of the humans and kids and pets etc.

But it will be too tempting for the corporations to put in backdoors to coordinate things at scale, once humans rely on their AIs rather than on other humans - a bit like in the movie "Eagle Eye", but much more subtle. At that point almost anything is possible.


Hahaha

Here we go, a claim that AI will create a glut of things detrimental to society

And then you’ll have the usual response that the things detrimental to society have already been there and this is nothing new

And round and round we go, while AI advances and totally commoditizes all the things humans produce that you found meaningful.





