
The way I see it, it won’t take long before human eyes can no longer distinguish AI-generated content from the real thing.

The only regret I have about that is losing video as a form of evidence. CCTV footage and the like are a valuable tool for solving crimes. That’s going to be out the window soon.

Trust can be preserved by adding PKI at the hardware level. What you said about CCTV is true; once the market realises this and demand appears, camera manufacturers will start making camera modules that, e.g., sign each frame with the manufacturer's private key, enabling Joe Public to verify that a given frame came from a camera made by that manufacturer. Reputational risk gives the manufacturer an incentive to store the private key in the device in a secure, tamper-proof way (like TPMs do now), which (mostly) prevents those private keys from leaking.
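
To make that concrete, here's a minimal sketch of the signing idea in Python, using the "cryptography" package. The key handling and frame bytes are placeholders made up for illustration; a real camera would do the signing inside tamper-proof hardware, not in application code:

    # Sketch: sign each frame with a device-held Ed25519 key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # in reality: burned into secure hardware
    public_key = device_key.public_key()       # in reality: published by the manufacturer

    frame = b"...raw frame bytes..."           # placeholder for real sensor data
    signature = device_key.sign(frame)         # the camera attaches this to each frame

    # Joe Public verifies the frame against the published public key:
    try:
        public_key.verify(signature, frame)
        print("frame came from this camera")
    except InvalidSignature:
        print("frame was altered or came from somewhere else")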

Does this create difficulties if you want to modify the raw video data in any way? Yes it does, even if you just want to save it at a different lossy compression level or in a different format. But these problems aren't insurmountable. Essentially, provenance info can be added for each modification, signed by the entity that made the change, and the end viewer can then decide whether they trust the full certificate chain (just as they do now with HTTPS).
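
As a rough sketch of what such a provenance record could look like (the record format and names here are invented for illustration, not taken from any existing standard):

    # Sketch: each modification appends a signed record binding output to input.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def append_provenance(chain, old_bytes, new_bytes, editor_key, note):
        # The record covers the hash of the input, the hash of the output,
        # and a note describing the change.
        record = b"|".join([
            hashlib.sha256(old_bytes).digest(),
            hashlib.sha256(new_bytes).digest(),
            note.encode(),
        ])
        return chain + [(record, editor_key.sign(record))]

    # e.g. a transcoder re-compresses the footage and signs that transformation:
    transcoder_key = Ed25519PrivateKey.generate()
    chain = append_provenance([], b"original footage", b"recompressed footage",
                              transcoder_key, "transcoded to a lower bitrate")

The end viewer then walks the chain, checking every signature against keys they trust, much like validating an HTTPS certificate chain.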


Oh wow, that's a great idea. Isn't this already happening maybe?

Someone here recently pointed out that it's noticeable that CCTV videos are often filmed with a phone or camera pointed at a screen instead of being shared as the original recording. People speculated that it might be hard or impossible to get access to the original because of bureaucracy or something, but that recording a playback off a screen is often allowed. Maybe they also do this partly so that the original can't easily be messed with by other people.

But yeah, if you can verify that a certain video was filmed at a certain time by a certain camera, that is great. Of course, the companies providing these cameras need to be trustworthy, the cameras need to actually send what they record, and the companies themselves must not mess with the original recordings.


>Isn't this already happening maybe?

I recall an article posted 1-2 years ago about a camera company (Kodak? Can't remember) which was starting to offer something along these lines.

>the companies providing these cameras need to be trustworthy, the cameras need to actually send what they record, and the companies themselves must not mess with the original recordings.

I agree. We can't guarantee any of these things, but on the bright side, the incentives are pointing in the right direction to make self-interested companies choose to behave the right way.

It will complicate things and make the hardware more expensive, so I doubt it will sweep through all consumer camera tech unless the "Is this photo real?" question becomes a crisis. There's also the fact that it would be possible to give individual cameras different private keys, with certificates signed by the manufacturer: this would enable non-repudiation (you would not be able to plausibly deny that you had taken a particular photo or video), which has potentially big upsides but also privacy downsides. I think that could be solved by giving the user the option of signing with their unique camera private key (when they want to prove to others that they took the photo themselves) or with the manufacturer's key (when they want to remain anonymous).
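
A hand-wavy sketch of that per-camera certificate idea (the "certificate" here is just a signed public key standing in for a real X.509 chain, and both keys are generated in software purely for illustration):

    # Sketch: the manufacturer certifies each camera's unique key at the factory.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    manufacturer_key = Ed25519PrivateKey.generate()
    camera_key = Ed25519PrivateKey.generate()        # unique per device

    camera_pub = camera_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    camera_cert = manufacturer_key.sign(camera_pub)  # "this key belongs to one of our cameras"

    photo = b"...image bytes..."

    # Attributed mode: sign with the unique camera key and attach the cert,
    # proving which specific camera took the photo (non-repudiation).
    attributed_sig = camera_key.sign(photo)

    # Anonymous mode would instead sign with a key shared across many cameras,
    # proving only "a genuine camera from this manufacturer" without
    # identifying which one.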


It's sad that almost AS SOON as we acquired the ability to record real-life moments (with the promise of being able to share undeniable evidence of events with one another), we also acquired the ability to doctor those recordings, negating that promise.

I'm not sure we should have been trusting images for the previous decades either. Photoshop has been a thing for a long time already. I mean, there are those famous photos that Stalin had people removed from.

Your mention of Stalin is, I think, stronger as an argument that there has been a significant change. Those fakes took lots of time by skilled humans and were notoriously obvious; what made them effective was the crushing political power preventing them from receiving critical analysis in public.

Similarly, while Photoshop made faking easier, it arrived at a time when technical advances made the problem harder: everyone’s standards for photos went up dramatically, so producing a realistic fake was still a slow process for a skilled worker.

Now the capability is increasingly available to everyone, which means we’re going to see a lot more scams and hoaxes, as people without artistic talent or the willingness to invest time can make realistic fakes even for minor things. That availability is transformative enough to merit the concern we’ve been seeing here.


The glass-half-full part of me feels that the advantage of this is that in a few years the average person will know better than to trust anything that could be faked like that, instead of the old situation where someone willing to put in the effort could actually trick a lot of people.

I think that’s true, but it’s kind of like the trade-offs during the pandemic, where we knew things would eventually settle into a stable state but still wanted to reduce the harm getting there. We basically need some large fraction of the global population to level up in media literacy all at once.

I don't think it goes out the window completely. You just need the owner of the CCTV to stand up in court and say "yes, this is the CCTV footage I personally copied from storage, and I did not manipulate it".


