
I don't see how you can cryptographically validate much more than "this was validated by this source before this time", which doesn't seem to solve the problem stated by the parent at all.


Maybe you could have some DRMish thing where the camera signs it with a "secret" key, but this would be terrible for various reasons and also likely broken very fast.


I don't think so. You can cryptographically sign anything, much like how SSL/TLS works now. You'd have to rely on certificate authorities to issue these certs, but it works.
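
Roughly, the mechanics would look something like this (a minimal sketch using Python's `cryptography` package; the key pair and file name are made up for illustration):

    # Sign a video's bytes with a private key; anyone with the matching
    # public key (e.g. published in a CA-issued cert) can verify them.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    with open("clip.mp4", "rb") as f:  # hypothetical file name
        video_bytes = f.read()

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    signature = private_key.sign(video_bytes, pss, hashes.SHA256())

    # Raises InvalidSignature if the bytes were altered in any way.
    public_key.verify(signature, video_bytes, pss, hashes.SHA256())
    print("signature valid")

Of course, all this proves is that whoever holds that key produced these exact bytes, not that the footage depicts reality.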

Videos should be cryptographically signed and verified once they're online. You can spoof certs, but you can't really fake the cert authority.


But signing some data with a certificate only proves that a key belongs to a particular name. It doesn't tell you whether the person or organization with that name is trustworthy.


That's what I meant by "validated by this source". But unlike with CAs, where issuance is (meant to be) based on the simple, objectively testable criterion of whether you control the domain in question, an external authority cannot easily know whether a video represents real events, whatever that means.


Maybe a service or a public blockchain where you send a hash of a digital artefact, signed with a time-constrained key. The signed hash is attached to the digital artefact, and you can check the hash on the blockchain or via the service's API.

A blockchain is more wasteful, but a service requires a leap of faith in the provider.
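
A sketch of the service variant (Python, illustrative names only; the blockchain variant would publish the same hash in a transaction instead of returning a signed token):

    # Hash the artefact, have the service sign (hash, time) with its key,
    # and attach the token to the file. In reality the signing happens
    # server-side; here it's inlined for brevity.
    import hashlib, json, time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    service_key = ec.generate_private_key(ec.SECP256R1())  # held by the service

    def timestamp_artifact(path):
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        record = json.dumps({"sha256": digest, "unix_time": int(time.time())}).encode()
        return record, service_key.sign(record, ec.ECDSA(hashes.SHA256()))

    def verify_timestamp(path, record, signature):
        service_key.public_key().verify(signature, record, ec.ECDSA(hashes.SHA256()))
        actual = hashlib.sha256(open(path, "rb").read()).hexdigest()
        return json.loads(record)["sha256"] == actual  # True => existed at that time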


This is still just a way to validate when something existed, isn't it?


Yes, and this is a rather strong feature. A fake is only effective if it was prepared in advance; you can't retroactively revise the past.


If you can authenticate that the video comes from a credible, unrelated source, that's different from it coming from a mysterious unknown source. Additionally, if you have the chain of trust, you can interrogate every step manually for credibility and consistency.


Which is somewhat helpful, but also just pushes the validation work off onto large entities of some kind.


Yes, of course.

The value of it is that the legwork only has to be done once, instead of requiring everyone to do it independently (which would basically turn every accusation of a crime into a DDoS against the accused).


It's turtles all the way down :-)


If you have cameras that sign their video feed with some ID, editing software where an editor signs off on any edits, people handling/validating the content adding their signatures, etc., you build a chain of digitally signed evidence that you can follow all the way back to the original recording.

Then you can get people into court to testify whether they used a given piece of equipment to film something, edit something, etc., and you can guarantee that you are watching the exact output of that chain of recordings, edits, and so on.

As I said, not a thing right now. But also not that technically hard to build. Right now we're just trusting witnesses who might be lying through their teeth without us knowing or being able to prove otherwise. Once we had such a capability, anything else would be inadmissible in court, and no self-respecting journalist would touch equipment without it. Why would they?

A deep fake would look plausible but lack this chain of evidence.
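
A hedged sketch of what that chain could look like (Python; the key names and payload format are assumptions, not an existing standard): each link signs the current content's hash together with the previous link's signature, so a verifier can walk back to the original recording.

    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    def sign_link(key, content, prev_sig):
        # Each party signs hash(content) || previous signature.
        return key.sign(hashlib.sha256(content).digest() + prev_sig,
                        ec.ECDSA(hashes.SHA256()))

    def verify_link(pubkey, content, prev_sig, sig):
        # Raises InvalidSignature if the content or the chain was tampered with.
        pubkey.verify(sig, hashlib.sha256(content).digest() + prev_sig,
                      ec.ECDSA(hashes.SHA256()))

    camera_key = ec.generate_private_key(ec.SECP256R1())
    editor_key = ec.generate_private_key(ec.SECP256R1())

    raw_footage = b"...raw sensor output..."
    edited_cut = b"...edited video..."

    sig_camera = sign_link(camera_key, raw_footage, b"")        # root of the chain
    sig_editor = sign_link(editor_key, edited_cut, sig_camera)  # edit links back to the camera

    # A verifier with the artefacts and public keys checks every step:
    verify_link(camera_key.public_key(), raw_footage, b"", sig_camera)
    verify_link(editor_key.public_key(), edited_cut, sig_camera, sig_editor)

The deciding factor is then whose public keys the verifier trusts, which is where the CA-style issues above come back in.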


This doesn't seem significantly better than just having the organization providing a video sign it as "authentically theirs", in cases where that's possible. If you mean some sort of scheme where editing software and cameras sign things as "not tampered with", then this is effectively a DRM system and subject to the excitingly wide range of issues affecting those. It would not work for many situations, particularly the ones SamBam describes (not least due to the anonymity issue), as it is unlikely that someone will conveniently be there with chain-of-trust-capable recording equipment and software.


Even if I don't have the private key used to sign the video, nothing stops me from feeding the camera's processing element the same signals the photo array would produce. Even if you encrypted the connection between the processing element and the photo array, a photosensitive array is already an exposed die, so I could easily just set some needles on a few internal traces and do the same thing.

There is no known solution to the analog hole.



