Post-truth society is near (mindprison.cc)
45 points by 13years on Oct 16, 2023 | 83 comments



There is absolutely no chance whatsoever that robots that are so real they are indistinguishable from people will come about in our lifetimes. The amount of breathless speculation since OpenAI released a kind-of-decent text completion program is so far beyond ridiculous at this point.


The first sentence is true, but the second raises the question: are we so gullible that the robots don't need to be indistinguishable in order to fool most people?

While the tech is great, if anything I feel like what OpenAI has revealed to us is that we're still a bunch of superstitious apes that are just not cognitively equipped to reason about what we've created.

It feels like a new age of religion is dawning in which we construct myths about the tools we build, instead of myths about the forces of nature. At one point we blamed bad weather on the gods. Our gods now are technological. The cryptocurrency movement was more belief than tool. The AI panic is more belief than tool.

We are apes, and we have always worshiped falsehoods, but now the falsehoods we worship are unknowable artifacts of technology. Arthur C. Clarke was prescient when he said: "Any sufficiently advanced technology is indistinguishable from magic."


ChatGPT could fool a lot of people already. Unless you’re looking to probe and check it’s an AI you probably won’t detect it over a chat session, especially with the right prompt giving it a background, context and the right speech/chat patterns.


They are talking about physical robots. Replicants/cylons, as referenced in the article. I agree with them.

Sci-fi style androids will definitely be feasible soon. Some of them will be capable of acting like humans, displaying convincing emotions, forming relationships, etc. However, they will definitely not be indistinguishable from humans at a physical level; not even remotely close.

Even AGI will not achieve that in our lifetimes, unless it first cures aging and disease thus extending our lifetimes indefinitely, which I also (sadly) regard as exceedingly unlikely in our lifetimes.


AGI may only be a momentary transitional state. It is assumed that ASI almost immediately follows AGI, since once the machine is self-aware and can improve itself, exponential intelligence takeoff is achieved.


They can start with virtual dolls (MMD) right now, except gpt has no idea how to move a body, virtual or not.


Generalized robotic skills may have just been solved.

The paper

https://storage.googleapis.com/deepmind-media/DeepMind.com/B...

Explanation

https://www.youtube.com/watch?v=GZdytTKeGYM


Maybe in a text chat, but in a fluid one-on-one in-person conversation, or in a group conversation? Absolutely no chance, and no reason to believe it will get to this level, maybe ever.


I mean, have you tried ChatGPT's latest multimodal speech update? The voice generation is pretty damn human-like. Again, with the right prompt you could definitely fool someone for a while using voice, since that mode is pretty conversational and includes the AI asking relevant, context-specific follow-up questions.

For in person, we would need to generate full-fledged bodies, which probably isn't happening soon, but it's not insane to imagine a mechanical robot understructure covered by artificially generated meat (which we can make now) with a ChatGPT brain from 10 years in the future.


A single person can also fool a lot of people.


My gripe is that, if a robot does something, it is not post-truth; it is actual, real truth. We should not confuse discourse on truth with discourse on identity.


Maybe you're much sharper than I am, but I am thoroughly convinced that what already exists could fool me for a long time under the right circumstances.


Makes me think of:

"Never, no matter what the development of science, will men, who have concern for their reputation, attempt for forecast the weather"

https://www.nssl.noaa.gov/users/brooks/public_html/future/te...


Humans perform terribly at exponential estimation. Many AI scientists and researchers are now saying AGI will be within our lifetimes and some are saying within a few years.

If that happens, lifelike robotics will be one of the lesser achievements.

If your opinion is that we will not achieve AGI, then I would say your outlook is probably consistent.


> There is absolutely no chance whatsoever that robots that are so real they are indistinguishable from people will come about in our lifetimes.

Yet the Prince of Nigeria keeps sending emails to good effect. Makes you think :)


I would be 0% surprised to find out this comment was written by a bot.


So we're living in a "truth" society now, or did in the near past? How silly. For most of human history, people believed things to be true based on the words of someone they trust (clergy, teacher, village chief, etc.). People are just doing that using the internet now.

There was a brief few decades where TV and radio changed society so that everyone thought what was on TV was what everyone else outside their circle believed. This media-centric truth-seeking is what is going away.

A lot of people have crazy beliefs these days because of the politicization of institutions that used to be far less polarized historically. This includes academia, religion, and the press. If someone lives, believes, and acts in total contradiction to how you live, believe, and act, then the only way to accept their claims of fact is to be an expert in that field, just as qualified as them, and find out for yourself. Most people can't do that, so they turn to someone closer to their worldviews and lifestyles.

This isn't post-truth but the decentralization and segregation of trust across all aspects of society, because people are tribal and targeted ads make a lot of money.


Uh... post-truth has been here a while. They've had the capacity to fake/alter live events for decades now.


"Some will make the argument “But isn’t this simply the same problems we already deal with today?”. It is; however, the ability to produce fake content is getting exponentially cheaper while the ability to detect fake content is not improving. As long as fake content was somewhat expensive, difficult to produce, and contained detectable digital artifacts, it at least could be somewhat managed."

Yes, indeed. However, the degree to which it is getting worse is substantial, rapid, and significantly different from anything prior.


In the 90’s, the internet was cool, but you went to the library to do research.

For about 20 years, that flipped. I think it is flipping back. For me, the eye opener was trying to diagnose a roof vent issue. The small local library has two relevant books, each with 2-3 pages of information.

Those pages were more informative than a 6 hour internet search.


Yes, somewhat a different topic, but so much information now exists in video form versus written due to the ease of video recording.

So many videos on YT that could be 2 lines of text are 5-10 min videos of someone talking about something irrelevant until we find the part where the problem is discussed.

I suppose AI might actually be a solution to this, allowing efficient video search.


That works both ways. A short video about how to properly chop potatoes is better than a text document. They are different forms of expression.


I suspect both of you are hopelessly old and uncool like me.

It’s easier to have a machine autogenerate textual spam.

Content production costs are higher on YouTube, so there is a lower ratio of spam to content. Some of my hipper millennial friends caught on to this five years ago.

Of course, that heuristic is going to stop working in about 5 years, based on current photo generation quality. (Just have ChatGPT 10 auto gen the script and stage blocking directions for Stable Diffusion, AV edition or whatever).


This entire comment is really amusing to me. "Some of my hipper millennial friends caught on to this five years ago" has me chuckling.


I guess my question is if that actually makes things worse. The people who were likely to believe falsehoods unquestioningly probably already do. How much does the ease of making new convincing falsehoods ensnare new people vs. causing the wary to just get paranoid and unlikely to believe things without overwhelming evidence?

At some point, getting people to believe things, let alone care about them, is already something that is generally known to hit diminishing returns, mostly in terms of reach. The percentage of the population that even is following wherever you're posting your fake content isn't 100%, even if you're so convincing that it gets onto major news sources.

Post-truth always feels like it's brought up as "And then anarchy follows", but it's unclear to me that it's that different than what things were like in pre-internet society where "fake news" was just someone in your town telling you something they'd heard from someone the town over about something happening hundreds of miles away. It can be a regression, but it's unclear to me that it's necessarily worse for people, since it's not clear to me that the fact that I know about some natural disaster in Laos is good or important.


This argument around the cost of fake content would be more convincing if it hadn't already been used countless times throughout history. Socrates saying that writing will atrophy people's memories. The priest's fear that books will replace them when it comes to preaching. Gessner and his belief that the unmanageable flood of information unleashed by the printing press will ruin society.

The Social Dilemma, and all those who were convinced social media would spell the end of modern society.

Instead each of these technologies improves access to information and makes it easier for most to determine the truth via multiple sources. I'd imagine in the future there will be many AI agents that can help to summarize the many viewpoints. Just like anything, don't trust any one of them in isolation, consider many sources, and we'll be fine.


> I'd imagine in the future there will be many AI agents that can help to summarize the many viewpoints. Just like anything, don't trust any one of them in isolation, consider many sources, and we'll be fine.

It would be promising if there were any formal theories proposing any such methods of verification. However, I'm presently aware of none.

Instead, we see AI detection methods failing and AI cyber defense failing at present.

“As cybercriminals are turning to AI to create more advanced malicious tools, a separate report by the web security company Immunefi said cybersecurity experts are not having much luck with using AI to fight cybercrime”

https://decrypt.co/150899/wormgpt-fraudgpt-ai-hackers-phishi...

I will have a different opinion when we see some real solutions being proposed or implemented. If you have some references in that regard, let me know.


You mean it'll fuck up SEO? The web is unsearchable already, anyway.

And people have their attention already saturated, this is the bottleneck. Not the amount of bullshit society can produce.


It sounds like you're making an argument about people's relationship to truth that pivots entirely around the unknowns presented by AI, and not around historical truths about how disinformation already works.


And the capability hasn't mattered for nearly as long. Large swaths of society have chosen to reject inconvenient truths and instead believe convenient lies. The battle's already lost. I'm not sure I can be convinced that we aren't 4th century Rome.


What were the convenient lies in 4th century Rome?


There is so much baseless hysteria about this subject.

If society actually cared about this issue (spoiler, it doesn't), we already have all the required cryptographic tools (PKI, trusted timestamping, remote attestation) to create/record verifiably sourced content.
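
For illustration, a minimal sketch of the PKI piece in Python using the `cryptography` package (assumptions: the keys and messages here are made up; in a real deployment the private key would sit in trusted hardware and the public key would chain to a certificate authority):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Illustrative only: generate a throwaway key pair. In practice the
    # private key never leaves the recording device's secure element.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    content = b"...raw capture bytes..."   # stand-in for a recording
    signature = private_key.sign(content)  # the source vouches for the content

    # Verifier side: any modification of `content` makes verify() raise.
    try:
        public_key.verify(signature, content)
        print("content matches the source's signature")
    except InvalidSignature:
        print("content or signature has been altered")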

Nobody really cares though. But people sure do like to whine about it.


"Nobody cares" is the important part. The most recent X-Files reboot did an episode on this. The age of conspiracies is over; governments don't have to kill people to protect secrets because they can just dump them in public and nobody has any idea what to believe anymore.


It's not that people don't know what to believe. They'll just believe what already confirms their worldview.

Indoctrinating children is the battleground. Kind of always has been though.


Does this mean that the age of conspiracies is over, or just beginning?

In the original X-Files the little group of hackers was framed as an eccentric and motley crew of misfits. And in terms of conspiratorial thought in the past (or at least the perception of it), that wasn't entirely inaccurate. But now? Decades of lies, conspiracies, and misdeeds have created the scenario you're alluding to. Public trust in the government peaked in the JFK era, at 77%. Today? It's at 16% [1] and trending to the single digits.

The overwhelming majority no longer trust by default, but this doesn't mean they have no opinion on what the truth may be.

[1] - https://www.pewresearch.org/politics/2023/09/19/public-trust...


There may be other reasons besides "they don't care". For instance, it's pretty dubious to say "they don't care" about the results of an election. But we still don't use encryption or really crypto at all for election results for reasons like logistics.


Logistics, and the hard constraint that we must collect the votes without being able to tie a distinct vote to a distinct individual. That last part puts some hard limits on what we can use for validation.


Hint, it's not biometrics.


> But people sure do like to whine about it.

That is the minority of people who comprehend what is happening stating their observations.

Cryptography has been proposed, but implementation details are problematic. There are still methods of subversion, but they do get harder. The other issue is that having every hardware recording device in existence implement cryptographic capabilities will also set off a regulation battle more likely to end in DRM'd, state-controlled devices.


Nobody cares because, at least for entertainment, the end result being good is all that matters.


> Nobody really cares though

You've come full circle and pointed out the exact problem


How do cryptographic tools help against fake imagery?


Trusted hardware running remotely-attested trusted software can capture imagery that can be highly assured to be real.

Trusted timestamping, when correctly implemented, completely prevents the generation of imagery after-the-fact. Any false imagery would need to be pre-prepared prior to an event in time, or generated in real time. And a trusted device would need to be exploited to "record" it.

PKI allows individuals/organizations/governments to cryptographically associate their reputation with specific pieces of content.

There will always be the possibility of exploitation of trusted hardware. Apple's eternal fight against the jailbreak scene is evidence of this. However, the combination of these techniques would go a long way towards making the production and distribution of fake imagery difficult.
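
And a toy sketch of the timestamping idea, standard library only (the entry format is invented; a real system would use an RFC 3161 timestamping authority or a public transparency log): an append-only hash chain in which each entry commits to its predecessor, so "recording" fabricated content into the past means rewriting every later link.

    import hashlib
    import time

    def append_entry(chain, content_hash):
        """Link a content hash to the previous entry and the current time."""
        prev = chain[-1]["entry_hash"] if chain else "0" * 64
        t = time.time()
        entry_hash = hashlib.sha256(f"{prev}|{content_hash}|{t}".encode()).hexdigest()
        chain.append({"prev": prev, "content": content_hash, "time": t,
                      "entry_hash": entry_hash})
        return entry_hash

    def verify_chain(chain):
        """Recompute every link; a backdated insertion breaks the chain."""
        prev = "0" * 64
        for e in chain:
            expected = hashlib.sha256(
                f"{prev}|{e['content']}|{e['time']}".encode()).hexdigest()
            if e["prev"] != prev or e["entry_hash"] != expected:
                return False
            prev = e["entry_hash"]
        return True

    chain = []
    append_entry(chain, hashlib.sha256(b"frame-0001").hexdigest())
    append_entry(chain, hashlib.sha256(b"frame-0002").hexdigest())
    print(verify_chain(chain))  # True; tamper with any entry and it flips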

But, as already stated, the actual problem is societal. People don't really care that much, and enjoy living in their echo chambers.


See “Using ZK Proofs to Fight Disinformation”

https://medium.com/@boneh/using-zk-proofs-to-fight-disinform...


They allow people generating fake images to sign them so that people believe they are human-generated.

Oh, sorry. I meant "They allow independent sources of truth to do the meticulous work of validating the pedigree of creations, so that with their cryptographically-secured seal you can be confident a human originated the content and a human authenticated it. Only after such work is done would a work be cryptographically signed as authentic."

You know, just like you can always trust the presence of an SSL certificate means a site is who they say they are. /s


Hmmm, how does "trusted timestamping" and "remote attestation" fit into your assumption?


Post-truth society already arrived in the mid-2010s. Fake content was never expensive or difficult to produce to begin with. Misinformation is already expensive to tackle, and in many cases nearly undetectable without invasive data collection. Generative AI doesn't change the equation and was never a requirement to successfully manipulate public opinion. Hell, most of these problems with misinformation are probably as old as the emergence of sedentary human societies.

Also, what's up with throwing some random unsourced statement about robotics/replicants in the middle of an article?


Truth has been a struggle in humanity since the first two people communicated. Even the Christian origin story involves deception and distorting fact. Propaganda and scientific mass deception are modern concepts but extend back over a hundred years. In modern times the assault has become pervasive and persistent, and to an increasing extent automated and personalized at scale. The very nature of truth has come into question and the idea of personal truths has become weaponized at extreme scales. Generative AI just improves the quality, scale, automation, and personalization dimensions of an endeavor that is as old as humanity - manipulating people to act in a way that benefits you at their own expense by warping their perceived reality.


Generative AI makes it easier to weaponize further, so I would say the mid-2010s were the entry point (key issues). Now we're about to see it happen everywhere, even in places we "thought were safe" (mass market).


What's the meaning of "further" weaponizing something that has already been weaponized?


Doing it at scale by non-technical people without leaving a trace.


Troll farms have already achieved scale and they are rarely staffed with solely technical people. The trace part never mattered and will continue to not matter.


The accessibility is way higher.

Like if everyone all of a sudden had access to bioweapons and missiles... our government would be a lot more scared than it is today.


This essay equates "human-generated" with "truth." That doesn't match my current experience online; more than half of what humans generate is already false.

What if it ultimately doesn't matter if the source is human or machine-generated?


It's interesting to note how short a time it took, after the rise of social-media business as a for-profit enabler and cultivator of on-line resentments and madness, for dire social upheaval to be the new normal.


It has certainly been an extreme accelerant.

Although, I believe the structure of social media itself contains inherent problems beyond profit motives and algorithms.

More detail here FYI

https://www.mindprison.cc/p/uniform-thought-machines


> We are never going to know what is real anymore.

Yeah, except for all those aliens who actually go outside and see for themselves, or ask people who have been there. You know, like most of humanity would do. :D

I get the article's premise, but it sounds like a non-issue midterm. A lot of people are going through increased financial hardship currently, and their interest in news is waning, so the whole thing kind of balances itself out: more manipulated online material, less engagement from people. I'd think the latter factor will prevail, because the manipulated material is put online to achieve a goal (usually a financial one), and if that goal is not achieved then the manipulators will start losing interest -- eventually.

Though I am really curious to see how the whole thing will unfold in reality. Obviously I am not claiming I can predict the future.


> actually go outside and see for themselves, or ask people who have been there

Ya sure, only way forward to know what’s going on between Israel and Gaza is to go there myself and ask around. Same with every other conflict. No problem


At least in urban parts of the US, I can go outside and learn Jews are not evil masterminds all planning in conjunction to take over the world, and that Muslims/Palestinians are not all brutal terrorists. What saddens me is that so many fail to make the logical leap that 99% of people you know just wanting to live their lives in peace also applies to 99% of people on the planet.


Unfortunately, that's not exactly true or even relevant. I wish it were.

It's not relevant because it's often the actions of governments, whether elected or not, that actually dictate so much of what happens in the world. It could be the case that e.g. most Russians just want peace - that won't stop a world war from happening if the Russian government decides to start one.

It's also not true. It's not true because it really is the case that some people don't just want peace - they want to achieve specific goals, some of which are incompatible with other people's goals, and they want this more than they want peace. The Palestinians want various things - e.g. right of return. Israel doesn't want to grant it to them. It's more complicated, there are lots of nuances, but it's not just an issue of "oh, why can't they realize that they just want to live peacefully".

I wish it were.


And where did I say you can do that for everything?

Also you should not care what happens there unless it starts personally and directly affecting you -- just my opinion.

And before you say that it affects you... I seriously doubt it.


> Cylons/Replicants are going to likely become real within our lifetime.

Great! So if me and some of my friends can afford to buy a few replicants together, we can agree offline on some gestures we will make using our replicants ahead of time, and then we can witness important events with our replicants, gesturing discreetly during the events. This will then allow us to determine if the feeds we are seeing from the eyes of our replicants are being fiddled with significantly.


Naive to think you won’t have a compromised software stack on your replicant.


What I am saying is that because we agreed offline on what we would do at the event, we will be able to recognise when the feeds from our replicants do not match what we expected to see.

This way we detect that what we are seeing in the video does not match the real events at the place where our replicants are.
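
In protocol terms, this is a challenge-response scheme. A loose sketch, assuming a secret agreed offline and a made-up gesture vocabulary (actual gesture recognition on the feed is hand-waved as `observed`):

    import hashlib
    import hmac
    import time

    GESTURES = ["wave", "nod", "touch_ear", "cross_arms"]

    def expected_gestures(shared_secret, window, n=3):
        """Derive n unpredictable gestures for a time window from the secret."""
        digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).digest()
        return [GESTURES[b % len(GESTURES)] for b in digest[:n]]

    secret = b"agreed offline, never sent over any network"
    window = int(time.time()) // 300           # 5-minute windows

    plan = expected_gestures(secret, window)   # what the replicants perform
    observed = plan                            # stand-in for gesture recognition

    print("feed consistent" if observed == plan else "feed likely altered")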


> Software detects control input gesture

> Software adds gesture to altered feeds in proximity


I think the complete opposite is true. We are moving, for the first time ever, toward truth.

We never had truth in our society and suddenly with social media, people are forced into reading the truth and it makes them homicidal.

Worse yet, this truth has broken the world. No longer is the USA going to be the world police with a goal of stability. They are now at least incentivized to create instability.


"Simulation and Simulacra" was an instruction manual to some people.


The dystopian novels all were.


Wow thank you “mind prison” for this revelatory call to action. We should indeed all “share with others” when it comes to this blog


I should have known it was a fucking substack blog before I even clicked.


97% of us always believe what we're told by the authorities of the hour. Moving in conformity with the hive like a pebble embedded in a glacier. Evidence, recorded or otherwise, makes no difference.


That's not what post-truth means. Post-truth would mean something like "the world after the realization of the lack of coherence of the concept of truth".


Post-truth society is already here. Disinformation has been on the rise for the past couple of decades, and the results are clearly visible: conspiracy theories, election misinformation, and divisive politics where people's worldviews vary along with their perception of reality.

Photoshopping and lying your way to a narrative is now the norm in the age of social media.

No doubt that AI will accelerate this, but to act like AI is the beginning is just evidence that you aren't paying attention.


Yes, the problem already exists, but we are approaching something of a significantly different magnitude than anything prior.

I would add the caveat that post-truth is not the moment disinformation exists, but rather the point where the threshold is crossed and truth is no longer discoverable for the majority of content.


But that assumes that we've always lived in a "Truth" society where Truth was discoverable for a majority of content, and I'm not sure that was ever the case for more than like... 10% of the global population for more than a few decades. And it's not even clear to me that that was a net good for everyone involved in a practical sense.

A hundred years ago, even people in rich countries often had no simple way to verify the truth of anything. Even information local to them. Cut to 2000 and the scope of things you're expected to validate the truth of has expanded to wartime conditions in countries thousands of miles away, something that is often hard to discover the truth of in real time even if you're actively involved in the war there, and is often not fully discovered for decades or centuries after.

In my mind, optimistically, things are moving towards some people being more ok with just not having a hot take about something that they can't verify. Not to say people won't have unjustified opinions on things, but that folks will maybe revert to the sort of "that's happening over there, and maybe I don't need to know about it on a minute by minute basis" zone.


I agree with that. Truth has always been problematic and never had absolute certainty. Nonetheless, we had reasonable methods to increase the probabilities of what we know under certain conditions.

There were and always will be limits. However, we understood, to some degree at least, how to reason about probabilities of truth: locality of information, reputation, the amount of congruent data, etc.

So I think we are passing from an era of difficult truth to, potentially, an era of impossible truth. And yes, some things were already impossible to verify. However, it simply becomes possible to manufacture substantially more confusion, at extremely low cost and enormous scale.


Disinformation is a complete red-herring.

It's about presenting people with content that they want to believe. All sides are engaged in it.

The facts have never been either-or/black-or-white anyway.

It has always been more important to teach people to think for themselves, how to empathize with people who are different from you and have a grounded moral-center. I think if you find yourself completely agreeing with a major narrative in the media, you're probably missing one of the above three. Tribalism is cancer.

"Disinformation" is just a reframing of the narrative to pretend that only one side has all of the facts. The people shouting at you about how dangerous it is are the ones playing you.


You seem to have interpreted this in a strongly partisan way for some reason. Nobody accused any specific party or group of spreading disinformation. This was not a tribalistic argument, and by your own account you seem to understand that disinformation is something that should be combated with critical thinking, so you seem to acknowledge that it is a problem when people decide to believe in falsehoods.

"All sides are doing it" doesn't diminish the problem, if anything is shows how severe the problem is. Critical thinking is a good skill, but it's a hard skill for a person to develop if they are overexposed to disinformation. People cannot be expected to figure out what the truth is if they don't have trusted sources of information. You either have to guess who is telling the truth and hope they are right, or you have to assume nobody is telling you the truth and stop caring.


cultural hegemony but with AI


Eh, what?

Anti-vaxxers believing there are microchips in vaccines, 9/11 truthers, flat earthers, NFT peddlers, GameStop investors, moon landing deniers, QAnon, believers in a shadowy cabal running the USA because there's an Illuminati-looking eye on the 1 dollar bill...

These did pretty well without GPT4 and SD.

People need education, critical reading skills, reasoning skills. And they need at least the lower 2-3 rungs on Maslow's pyramid satisfied. That protects against misinformation, not controlling or detecting where content originates.

Memes will spread if the conditions are right, no matter how much tech or legislation you throw at society.


Everybody always thinks it's the other guy. "Why can't those sheeple just wake up?"

The best way to tell who's drinking the Kool-Aid is probably numbers. The more people who believe it, the wronger it is.


How are people hip when they’re all the same?


I don't understand the fear—we have always lived in a society where truth exists in terms of degrees of certainty, and virtually any statement about the world can only be true with less than perfect certainty. Vagueness and ambiguity, floating signifiers, and the difficult-to-articulate impact of connotation ensure that we live in a state of constant uncertainty.

And, of course, there are always self-interested incentives to directly mislead people about the world. If we really want to minimize the uncertainty that comes with relaying statements about the world to each other, we need to minimize the personal gain that can come from obscuring reality from each other. As I see it, this is at odds with the inherently narcissistic nature of market economics; there's nothing special about market economics in that regard except that it is the primary mechanism of power in this world.



