
Yes, the article mostly assumes that the initial effects of AI generated fake content will be the same as the final effects. This is silly.

People will change what they do in response. Though he does say at the very end, "We should learn to be skeptical of content," that point belongs near the beginning, followed by an analysis of what the effects of increased skepticism will be, rather than an analysis of the effects of blindly believing fake content (since the latter won't happen beyond a short initial period).

Smaller communities are one possible response. But more critical assessment of arguments and reported facts is another. For arguments, it doesn't really matter whether the argument was AI generated: if it's valid, it's valid; if it's not, it's not. For factual reports, critical assessment may be harder, though I think it will be a while before AI-generated fake facts have the right sorts of connections to common-sense reality to withstand critical examination.




Content, info, arguments, etc. are all propagated online based on their deliciousness. Is it dramatic? Easy to digest? Shocking? Emotionally powerful? Bright and alluring? Sexy or disgusting? These are the elements that push information to the top. Reality, truth and logic can't compete.

Advertisers figured this out in the middle of the 20th century. Prior to Edward Bernays' (Sigmund Freud's nephew) revolution in advertising, products were marketed based on their functional qualities: how effective they were, how efficient, etc. Bernays realized, from war propaganda and Freud's ideas of the unconscious, that selling with emotional coercion and sex was far more effective. In fact, you could make people buy things they didn't really want or need by making them feel unhappy without them. He was able to convince women to smoke cigarettes by having trendy, independent women smoke openly at a parade, followed by a branding campaign calling the cigarettes "torches of freedom". This concept of emotional manipulation trumping factual data is how our entire society now operates.

If we want a skeptical and thoughtful populace, our entire education system must be restructured and information dieting will have to become an innate part of the online experience.


People haven't shown the inclination for more critical assessment so far; why would that change all of a sudden?

And AI fakes are still in their infancy. For example, they haven't learned to push emotional buttons yet. But they will soon, because it's not all that hard, and it drastically increases virality.

Now, with that in mind, watch this video, and weep: https://www.youtube.com/watch?v=rE3j_RHkqJc


> right sorts of connections to common-sense reality

Unfortunately, I think this matters less than it should. Connection to common-sense reality does not seem to be a prerequisite for most people who engage with content on the internet.



