Deepfake Offensive Toolkit (real-time deepfakes for virtual cameras) (github.com/sensity-ai)
556 points by draugadrotten on June 7, 2022 | 320 comments



> Authors and contributing developers assume no liability and are not responsible for any misuse or damage caused by the use of this program.

Anything that can be created, will be created. However, that doesn't free you from all moral culpability. If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

I'm not saying that they shouldn't have created this, or that they don't have the right to release it. But to create it, release it, and then pretend that any misuse was entirely separate to you is at best naive.


I’d argue there is a moral imperative to create and release tools like this as free and open source software so that anyone has access to them and can choose whether to use them, rather than only sophisticated and well resourced adversaries.

IMO the creators should feel good about their actions, even if they feel bad or apprehensive about the direction of the world because this technology exists at all.


> there is a moral imperative to create and release tools like this as free and open source software so that _anyone_ has access to them

That's an argument I have a lot of time for, but it needs to be made, rather than - as at present - having the whole issue sidestepped. With any tool like this, I think there's a need for ethical due diligence as much as there is for engineering, and there's no evidence that this aspect has been considered at all.


To me it sounds like it boils down to either you release the software or you don't. If you decide to release it to the public, I don't think there is much you can realistically do to prevent abuse.


Release/don't release is definitely the choice with the highest impact, but it's not the only thing that can be considered.

Who you release things to, in what form, with what framing - all of these impact how something will be used. You can never eliminate bad actors, but you can encourage responsible use.


If you know to use Linux properly you can tamper with digital evidence and make a mess for a digital forensics investigator. The thing is, despite Linux making it easier than Windows, it doesn't come with a "here's how to erase digital evidence!" manual[0]. It's "left as an exercise to the reader".

Probably same on macOS, but it's (mostly) not open source, so you can't really argue about it being an open source thing that lets you do evil stuff. You could argue about its availability though.

And it boils down to the point you (and others) were making: these are tools, the ethics are in how you use them.

[0] I know there are some forensic distros that come with anti-forensic tools. Still, Linux in general is the environment that allows that kind of thing to work in the first place, going back to the original point of whether it should be open source or not.


Does this apply to nuclear weapons? To biological weapons?

“Tool is expensive” is a real, effective deterrent to frivolous use of a tool. There’s a reason we pool resources into governments, which is explicitly so we (via a government) have capabilities that we (as individuals) don’t have.


>> Does this apply to nuclear weapons? To biological weapons?

That may not be a fair comparison. Nuclear weapons are difficult to make, and using them will not go undetected - the damage would be enormous. Deep fakes are a real threat today, and anyone willing to pay for the expertise (which isn't that hard to find) can have them made. By putting this out there, deepfakes are now something almost anyone can make which means the threat is imminent and needs to be addressed. I understand the argument, but still not sure about it. Just wanted to say it's different than nuclear weapons.


I'd argue that one of the biggest reasons deepfakes pose a threat is because a large number of people don't know they exist. It's just like those e-mail chains from the early 00s with poorly photoshopped images that lots of people thought were real.

Now that the public broadly knows that digital manipulation is easy, and they have seen it with their own eyes plenty of times, things like that are believed less. Notice I do say less, though; we still definitely have problems with misinformation being propagated with shitty-looking video/audio/photos that people should not believe because they look/sound so inauthentic.

All that to say, these tools clearly exist. I think the public is better served by them being pervasive and well-known rather than having them be locked up in Hollywood studios, research labs, and government agencies.


I would agree with this.

It may be worthwhile to have someone create a bunch of deepfakes of politicians saying "This video has been faked" to drive awareness of this as a threat model.

E: Forgot the key bit, it would need to be spammed all over Facebook/Instagram/Mainstream social media.


Right but this means real content is less believable, too.


Indeed. Honestly that sounds like a bit of a plus too. A bit more skepticism should be applied to videos of talking heads in suits saying things. Especially when there's a huge panel of them.


I think a lot of information could be treated like pollution, in that it is easily created, spreads quickly, and becomes harder to clean up the more people are able to learn of it. I have to wonder if forging on like this will have a net positive effect on society, or if these kinds of developments are just fated by human nature.

In a perfect world, what should have happened was a separate team releasing a reasonably accurate deepfake detector to the world before deepfake producers gained prominence (differential technological development), but the researchers' interest, productivity, and excitement to release their work first seems to have foiled that.


For nuclear weapons, even if plans were publicly available, getting the right kind of uranium would still be a major hurdle for 99.9% of the people.


But what if they weren't? What if one could create something with similar destructive force with materials easily accessible, and the only hurdle is knowledge of how to do so?

It's honestly an interesting moral question.


This is explored further in Nick Bostrom's Vulnerable World Hypothesis. He believes that not even a total global surveillance system would be able to stop a single teenager with the recipe for a homemade pathogen. Nothing changes about the laws of the universe in that case, only the fact that the information was not in our minds before and then it is, and we collectively cannot forget or uninvent it.

We as a species have only survived ~75 years since we became aware of nuclear weapons, which is only a small portion of our overall history.

This has taught me to be mindful of the state of being unaware, because once you are aware of something it is impossible to reverse unless you're an amnesiac. Collectively forgetting something as a society then becomes infeasible without the technology and the willingness to revert our knowledge to a safer state, and a single outlier can ruin the entire system. When Jobs held up the first smartphone and we all started cheering and fantasizing, did any of us think it would lead to this?

(I also think that a pill to forget the experience of [addictive thing] would be revolutionary, but sadly our ingrained curiosity might undo the effect shortly afterward. A lot of tech takes advantage of our curiosity.)

This makes me wonder: what compels us to keep researching AI despite us also being able to hypothesize all these game-over scenarios? Why does it feel like we have no choice but to be crushed by unbounded progress?

Perhaps 50 years from now we may find that nothing short of halting all curiosity-driven research could have preserved our culture/existence/values for another few decades.


> When Jobs held up the first smartphone and we all started cheering and fantasizing, did any of us think it would lead to this?

Jobs didn’t hold up the first smartphone and smartphone proliferation has little to do with the AI research that led to deepfakes.


I was more trying to point out that what we might hope comes out of new technology in the future may not necessarily come true, or the benefits might come with as-yet-unknown downsides that only become visible by the technology being proliferated extensively (and irreversibly).


You keep saying “we” and “us”. Who the fuck are you talking about? Speak for yourself.


Over time the number of single humans that have the power to wipe out a sizeable portion of life on Earth (which is almost certainly non-zero already) will almost certainly increase due to advances in technology and dissemination of information.

Do we aim for security through obscurity or work out how to deal with that increasingly terrifying reality? I think at some point we will be forced to do the latter.


This trend is clearly real, but what exactly do you mean “work out how to deal with that?” One of the obvious ways to deal with it, albeit imperfectly, is in fact to keep these technologies as obscure as possible for as long as possible.

Maybe an interesting norm to try to generate in this community would be, “if you plan on publishing attack X, you should publish immediately alongside it your best guess as to the effective counter to X.”


Keeping these technologies in the hands of a few has its own set of severe negative consequences.

In the case of nuclear weapons, one of those consequences is that poor countries are denied the ability to defend themselves.

This led to the bullying and invasion of countries without nuclear weapons by countries with nuclear weapons (most recently Russia invading Ukraine, but China and the US are equally guilty).

I'm reminded of a line from the TV show The West Wing, something like: "India must and will be a nuclear power. That way, we'll never be dictated to again."


Even an extreme example -- The technology to build an automated drone with a camera for targeting and a gun on a gimbal and deploy it to do a mass-shooting -- all exists today. All it would take is for some madman to pick the pieces up and assemble them. In fact, I'd be willing to bet $100 that we'll actually see some kind of tragedy to that effect in the USA within the next 5 years. The technology cats are very quickly getting out of the bags.


Many airsoft players are mounting fully automatic airsoft rifles on drones and some Russian dude already put a glock on a 1st gen DJI phantom years ago. My guess is a big thing stopping such an attack from happening is that the people doing these attacks are usually not well educated, so they don't have the arduino-starter-kit level of electronics knowledge required to pull it off.


Mental instability happens on all portions of the education and intelligence spectrum though - so if that's what we're banking on we're going to have a very rude awakening.


Your example is not extreme. Off-the-shelf "drone bombers" have already been used in the Middle East.


So then what about releasing data on how to enrich uranium at home or selling it on <web store>? I know usually there are multi-million dollar facilities required for enriched uranium production but why shouldn't the free market allow people to buy enriched uranium - it isn't our place to morally prejudge what people may use it for and the bad people will have access to it anyways... so why can't I buy a kilo of U-235 on Amazon?

I'm sure there are some very worthy technical efforts to improve mankind that are being hampered by the lack of easily accessible U-235 so why are we restricting this material from Chemists that want to Chem and Physicists that want to Fizz?


But what are the consequences of the .1% of people who get over that hurdle?


Having something that can still be detected by most of the planet and making them very angry.


No, but it does apply to the distribution of slippery slopes.

The OP specifically said tools "like this"


I guess I don’t understand what “like this” means.

While it’s convenient to dismiss any discussion about A and something slightly similar to A as “slippery slope,” an inconvenient fact is that the slope often is slippery. It’s increasingly evident to anyone paying attention that information warfare can yield disastrous effects in a variety of ways: disrupting peaceful transition of power; disrupting critical infrastructure; disrupting command and control of ”traditional” WMD systems, even.

So that’s sort of the spirit of my question: how far does this argument go? What is the definition of “like this?”


Some slopes are slipperier than others.


I'm not saying it's a good idea to give everyone a gun, but I do like the argument that the disadvantaged have the same opportunity to pull a trigger. I hope humanity learns wisdom as quickly as we innovate. We've made it this far..


And as well as not trusting that video of someone is really that person, I choose to live far away from places where anyone can get a gun


Where anyone can get a gun? If you live in the middle of nowhere, lots of people around you will have guns (though they could be dozens of miles away in the best case), and obviously where there are more people the police can get guns, or call up the military if they can't for some reason (what country is that though?)


Most UK police officers don't have a gun:

An authorised firearms officer (AFO) is a British police officer who is authorised, and has been trained, to carry and use firearms. The designation is significant because in the United Kingdom most police officers do not routinely carry firearms, although they can be equipped with tasers. In 2019/20 fiscal year, there were 19,372 police operations throughout England and Wales in which the deployment of firearms was authorised and 6,518 firearms officers, 4.9% of the 132,467 active FTE officers. Following the November 2015 Paris attacks it was decided to significantly increase the numbers of armed officers, particularly in London.

https://en.wikipedia.org/wiki/Authorised_firearms_officer


We open sourced 3d printed AK-47s. We disclaim any responsibility for the ensuing mass shootings.

We produced the software to run the drones. We didn't personally deploy them to kill those children.

We built software to guide the rockets. We didn't personally fire them at those hospitals.

We chose to startup our company providing profiles of suspects using scraped, publically available data to any government agency. What they do with the information isn't our problem.

We wrote the software, but it's not our fault or problem what happens because of who uses it and how they use it. Our intent was good. The market and demand was there. What's wrong with providing a supply? How many software creators feel culpable? Vanishingly few. Who cares when comp is high?


"Once the rockets are up, who cares where they come down? That's not my department!" says Wernher von Braun


This sounds like it's making the gun control argument regularly made in America, and I disagree with it vehemently


I can’t see a way this technology could be used defensively. Wide access just leads to more abuse. There’s no principle by which it serves some sort of justice to make some crime more accessible.

It’s surprising how often this argument is made compared to, say, equal-opportunity access to fancy cars or good housing.


More usage = More awareness = More people stop trusting random videos on the internet as evidence of anything.

Seems like a good thing.


They literally have a demonstration of the tool being used to evade an intrusive "send us a video of your face, turn your head both ways, blink".

That isn't a defensive use!?


Definitely useful for scammers.


Then build a cultural revolution that instills distrust in what you see. We already teach the elderly not to trust a link that says, "you are the millionth visitor, click to win your prize". Teach them also to look past the video call and what looks (and sounds) like their grandson to what is actually being said. He's asking for your bank password, that behavior doesn't match!

The cat is already out of the bag. Going "nooo, unpublish this" doesn't protect people from scammers, who will continue to use it.

Society must unlearn that seeing is believing. That is what is morally irresponsible, not this software.


I don't think anyone so far has called for this to be unpublished, just for it to be more carefully considered and for the ethical aspect to be addressed when publishing. That's part of 'building a culture of distrust'.

Knee-jerk refusal to even consider thinking/talking about the use of technology is far more irresponsible than any other stance.


Ignoring human nature is far worse. There are people that will release this simply because you tell them no.

Even beyond that, if you're an authoritarian nation with a rather good lockdown on your news and internet, what's the downside of throwing tools like this at democracies?

In my view the age of tools like this has already been here for some time, and they are moving to their next step of evolution; there is very little we can do about preventing their distribution without affecting other pieces of software and their distribution.


But you are making the argument that releasing it is protective, and there is no basis for that argument.

> Society must unlearn that seeing is believing.

A total lack of trust of our shared perception? Let me know how that works out. Might as well have everyone on acid 24/7.


>A total lack of trust of our shared perception? Let me know how that works out. Might as well have everyone on acid 24/7.

It's no different from religion, which was the backbone of western civilisation for quite some time.


Religion is a contemplation of free will, not sure if contemplating that is like being on acid :)


Religion claims that certain things are true when they contradict observable reality. There's no fundamental difference between claimed gods and deepfaked ones.


Many of the claims of religion can't be contradicted by observed reality because they are pointing to things beyond the observable (i.e. non-falsifiable, like the proposed existence of a multiverse).

For example, religion claims you have free will. For a materialist, any interesting definition of free will is impossible, because we are made of quantum particles with defined statistical behavior.

Of course, we cannot prove or disprove the existence of free will in other people either…


The deepfaked ones actually speak.


Religion is a very complicated and touchy subject - it means drastically different things to different people.


Yeah I can’t believe that people think it’s possibly a desirable future for there to be no such thing as trust, and furthermore that we should accelerate our march in that direction. What an insane way to live.


It's not desirable at all, but it is inevitable and in many ways already here. While I'm not sure I agree with the argument, I think the idea is holding this stuff back just means it's more powerful and nefarious for those that do have it because the population at large will remain unaware for longer that trust is already gone. And that means innocent people are going to get hurt while that knowledge asymmetry in the population is delayed from catching up.


I honestly am puzzled: the both of you (if I understand you right), do you really think some fake videos will cause there to "be no such thing as trust"? Why?

Videos aren't the only way to understand the world, and there was never any such thing as "direct observation" in the first place.


Videos are considered one of the more authoritative sources of factual information. E.g. Imagine how disruptive it’d be if a deepfake of Biden announcing support for Putin’s invasion of Ukraine were to circulate during those first hours of the invasion? Sure, there are other communications channels that would be able to dispute the authenticity of that video, but would those channels be as fast and as (apparently) authoritative as video of Biden “himself” declaring his support?

Now consider a few months after this hypothetical fake-and-refutation, what if Biden actually does announce an emergency threat to America, and people end up waiting a few days (or weeks, or months) for the refutation to come.

The “no such thing as trust” is not coming from this innovation in particular but rather the confluence of two general trends: 1) heavy reliance on digital media and 2) ever-increasing ability to fake digital media.

Note that you don’t even need large, successful falsified media campaigns to create distrust. All you need is for people to know that the capability exists, and trust is immediately damaged. Even trivially-detectable falsifications can end up being difficult to sort through if there are a billion fake pieces of information crowding out a hundred real pieces of information. This is true even if all billion fakes are trivially detectable by a human (which they won’t be).


All of this is not "no such thing as trust".

I too dread a little all the mischief that's going to happen while the world learns about this. But people don't have to uncritically believe "the evidence of their own eyes", even though that has been a very useful corrective to many mistakes, and will, I guess, be less so now. Evidence doesn't work that way: people always have to interpret everything in terms of their understanding of the world, and they can get better at that in response to changes like this.


I agree. I don't think this in particular will land us in "land of no trust." I think it, and innovations like it, move us clearly towards that land though.

The advent of widespread photo/video evidence was a big help for people trying to interpret what was real and what wasn't. Then photography was knocked out of service by e.g. Photoshop (generally "easy photo manipulation"). Now video is getting knocked out of service as well. So, yes, it removes a useful tool in interpreting the world's evidence, and it's not clear what will replace that tool (if anything).

To be clear, the "no such thing as trust" was specifically a response to GP's assertion that people will "have to unlearn that seeing is believing." Okay, fine, but then what can you believe, if not even what you see? Again, this specific tech doesn't get us all the way there, but it gets us closer.


Trusting photos wasn't destroyed by the advent of photoshop, same with videos.

There are ways to prove authenticity, e.g. with cryptographic signing.
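
For what it's worth, here's a minimal sketch of what that kind of content signing looks like. It's illustrative only: it assumes Python's cryptography package, the file name is made up, and a real system would keep the private key in hardware rather than in memory.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric import ed25519

    # Illustrative only: a real device would keep this key in a TPM / secure element.
    private_key = ed25519.Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Sign a digest of the video bytes; the signature travels alongside the file.
    video_bytes = open("clip.mp4", "rb").read()   # hypothetical file name
    digest = hashlib.sha256(video_bytes).digest()
    signature = private_key.sign(digest)

    # Anyone holding the public key can detect later tampering; this raises
    # cryptography.exceptions.InvalidSignature if the bytes were changed.
    public_key.verify(signature, digest)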


Why weren't people this concerned about the existence of photoshop? To this day people continue to try to fool and manipulate others by using it. You don't even need video or photos to trick someone. Plenty of people lie in text, but text editors aren't considered dangerous tools.

People have been aware that lies can be typed for as long as we've had typing, but it still works. The fact that it still works a lot of the time hasn't ended the world. People are more aware that they can't trust a photo, yet people still fall for edited photos, and the world still turns. I don't think deep fakes are any different. Videos were not very trustworthy before deep fakes. Special effects have been a thing for many, many decades. The world is still learning exactly how untrustworthy video is and there will always be people who fall for tricks and manipulations, but eventually the majority will adjust and society won't crumble.

We'll put our trust in what we see with our own eyes and in people and organizations who have proven themselves to be trustworthy. We'll worry about how real something is in proportion to what it would mean to us if it were real. When it really matters we'll trust experts to authenticate a photo or video as legitimate, just like we do currently. If the world becomes less dependent on information presented in formats that we know can be convincingly manipulated, we'll all be better off. History tells us technology like this in the hands of the public won't cause people to abandon faith in everything and anything they see, or destroy civilization. We all just get a little more savvy, more industries will take advantage of the cool tech, lots of people will have fun and games with it, and not much really changes for most people.


Really really knocking down these straw men one by one eh? Anyway, people do complain about Photoshopping, e.g. the psychological effects of digitally-generated beauty standards. No, photography isn't broken, but yes there is less trust in what a photograph is saying. No, this won't "break" video, whatever that means, but yes of course it will mean people will be less trusting of video.

No one said it'll destroy civilization, don't worry.


>> I can’t see a way this technology could be used defensively. Wide access just leads to more abuse. There’s no principle by which it serves some sort of justice to make some crime more accessible.

Should that argument be made to the original research? I don't think it applies to one but not the other.


Honest question, do you apply the same reasoning to gun control?


Good question. I would apply that reasoning to the information about how a gun works, for sure, but IMO there is some (admittedly hard to define) amount of danger or destructive power beyond which we might, or perhaps should, choose as a society to restrict the ability to wield a tool, and certainly at least some guns exceed that limit for me. I'm not sure that any software, on its own, does though; at least I can't think of any immediate examples.

I admit that this limit is hard to define. It’s also extremely hard to enforce for pure data, if that turns out to be what we want to do.


I agree it's a very hard line to draw. But I think certain technology is more contagious and dangerous. For instance, a single person with a gun can inflict damage locally, with dire consequences for his or her own personal safety in the process. But if someone has a virus (physical or digital), then it carries a kind of contagion that can disproportionately hurt many more people. That's why we can't equate something like deaths due to car accidents to pandemic deaths. A single car accident doesn't increase the risk of all car accidents. It's localized. But a single infection increases the risk of future infections due to contagion. Whether you think any technology falls into the dangerously contagious category or not is up for debate.


In this analogy, information about how a gun works would be the equivalent of a deepfake research paper. This tool is analogous to an actual gun.


This has a clear benign use. Of course it sucks that you can also use it in a hostile manner - but the fact that this tool is publicly available rather than hidden in the pocket of some unscrupulous blackhat means that every space that uses verification with these methods can now incorporate this type of testing. That's a net benefit for society.

I do think disclaimers like this are a little juvenile (it reeks of a US-ian litigation mentality), but you can easily imagine why they put it there. Perhaps instead of the author being less naive, you need to be more empathetic.


More generically, I think there's a big difference between releasing proof of concepts, and fully weaponized tools. While the latter is also usable by red teams, it also gives attackers (who often wouldn't have the resources to build the same) the weapons they need.

Personally, I like when people release proof of concepts, and hate it when they release weaponized tools. Especially when I inevitably end up reading reports where APT groups are using those tools.


Is a pen that contains ink a weaponized tool?


This is a tool with a clear benign use, along with a bunch of malign ones. If you create something with significant potential for harm, then you should at least think about that, the potential consequences, and possible mitigations.

I wouldn't term it a failure of empathy to ask that people consider the impact of their actions on others. I totally get why they have the disclaimer, and I wouldn't ask them to remove it, but I don't think it's enough. Given the clear potential for harm here, the potential uses and misuses of this tool should have been addressed directly.


What do you do? IMHO, the only person naive here is you.

People should be happy that there are still white-hats reporting exploits, even for no profit. I personally switched to the gray market; I can't take the shit we get from companies anymore.

This tool was released publicly only to bring awareness to the topic. Everybody else who needs to exploit this already has these tools developed in private.


> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

That's a dangerous precedent. Would you apply the same logic to a kitchen knife? Or if for some reason only freemium products count (not sure why), then a pentesting tool?

I understand the underlying point you are trying to make, but what are you proposing as an alternative exactly? Who gets to decide which products fall into a gray zone whilst others are only for bad use? We already see this kind of shoddy thinking leading to keeping DALL-E 2 out of the public's hands (or at least that is their claim).


A knife is a multi-purpose tool that can be used as a weapon, and one that has so many important non-weapon uses that not having it would cause far more harm than having it would. This is intended primarily as a weapon, even when used defensively. There's an important qualitative difference there, though it's tempting to gallop down the slippery slope.

Regardless, I'm not proposing to ban either knives or this tool - I've been very clear about that. I do think that - with anything that has potential for harm - it's important for people to consider the possible consequences and to actually engage in the discussion about usage, rather than either washing their hands of the issue or declaring that any consideration will soon lead to arresting everyone for everything.

This is a path we've trodden countless times. With some things, we've collectively decided that no controls are necessary. With other things - poisons, nuclear weapons, a host of things depending on location - society has enacted controls of various efficacy and validity.

The responsible choice, regardless of whether you end up being for or against any given restriction on [thing], is to spend time thinking about it, discussing it, and - particularly when you've chosen to release something - acknowledging the potential issues.


> we've collectively decided

I think I agree with everything you wrote except what I'm unfairly reading into this phrase: we don't need to and should not irrevocably "decide" once and for all. We might have been quite wrong to ban some poisons even in the situations that applied at the time we banned them. And even if we were right to do so, times may have changed and we may no longer be. Even nuclear weapons perhaps, in certain situations.

Similar to a previous comment of mine in this thread, of course I don't know from your passing use of that word what you think about this! But I do think that that worldview, that these things are or should be "decided" in a permanent sort of way, is commonplace.


That's fair.

It's wordier, but more accurate, to say that I think society is (and should be) in a constant conversation with itself, deciding what to prohibit and to what extent. We regularly get this wrong, and we should never assume that our current collective position is unassailable; there should always be room for doubt and further discussion.


I sell airplanes do I am to feel responsible for the 9/11?


Do you sell meth?


Yes, do you need it?


I disagree. The moral responsibility really rests on the person who uses the tool.

For instance, I once cheated by using gcc/godbolt to generate assembly output for a class from C code. By this logic, Richard Stallman should be blamed for my misconduct.

There are any number of reasons for superimposing another face onto your own, many of which are simply good fun. If you choose to use this for scamming or perverted reasons, so be it.

Moral posturing aside, perhaps there could be an invisible watermark or something included by default to easily identify less technically inclined actors as users of this tool.
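
As a rough sketch of what such a watermark could look like, here's a naive least-significant-bit scheme (the tag value and function names are made up; anyone who knows it's there can strip it trivially, which is why it would only ever catch the less technically inclined):

    import numpy as np

    # Hypothetical tag: a short marker string unpacked into bits.
    MARK = np.unpackbits(np.frombuffer(b"FAKE", dtype=np.uint8))

    def embed_watermark(frame: np.ndarray) -> np.ndarray:
        """Hide the tag in the least significant bits of the first pixels of a uint8 frame."""
        flat = frame.reshape(-1).copy()
        flat[:MARK.size] = (flat[:MARK.size] & 0xFE) | MARK
        return flat.reshape(frame.shape)

    def has_watermark(frame: np.ndarray) -> bool:
        """Check whether the tag bits are present in a frame."""
        return np.array_equal(frame.reshape(-1)[:MARK.size] & 1, MARK)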


If I stab someone with a knife, who is responsible?

The inventor of knives? The knife's manufacturer? The store that sold me the knife? I would say the responsibility lies 100% with me.

I do not think it makes sense to pursue inventors for what happens to their creations, unless they actively encourage misuse.


Parents of one of the victims of the recent elementary school shooting in Uvalde, Texas are attempting, or preparing, to sue the gun manufacturer of the semiautomatic rifle that was used in that event. Will they sue, will they win? I don't know.

Do guns kill people or do people kill people? Could be a relevant analogy.

https://www.npr.org/2022/06/03/1102755195/uvalde-special-ed-...



Responsibility is a ghost. Like God, Love, Honor, etc. You’re not actually talking about anything.


They arrest people for making and selling meth every day. You have a bias because making deep fakes is not illegal yet. Knives are used for cooking; what alternative good do deep fakes provide?


Cardiovascular diseases are the number one cause of death worldwide BY FAR. Millions of people die every year from consuming fast food. Fast food does not really serve a purpose apart from being tasty/quick/cheap. You could fulfill your dietary needs much better with different food.

Should McDonald's be banned? I do not think so. Anything can be misused.

To answer your question about what "good" uses there are for deep fakes: Cheaper and better special effects, available to a MUCH broader audience than before. I cannot even imagine all of the great use cases creative people will find for this.


> Should McDonald's be banned?

Yes. I believe it should.

Regarding the deep fakes being good, I meant societal good. Not just fun tacky things that some people have a good time with. What social value does it bring to the world? We can do everything we need to without deep fakes. It has no value to the world.


Thousands of cheap special effects that would not be possible without a studio. Comparing this to meth, which has extreme detrimental effects on the human body, the brain, and the society that has to deal with the dangerous junkies it creates, is completely absurd.


All I learned from this response was to just release tools like this on Darknet Marketplaces and maybe Telegram and just forget the disclaimer


The main value in releasing tools like this is to demonstrate weaknesses in our current security controls. A key weakness of biometrics is that there is no secret data. Open source tooling like this helps people understand that.


I wish I didn’t have to scroll this far to find your rational perspective. Let's take the famous LockPickingLawyer of YouTube: is he responsible for every crime where the thief defeats a lock whose weaknesses he has demonstrated? I would say “no!”. Exposing a weakness puts the onus of securing said weaknesses on those that sell technology/devices/services that market themselves as “secure”.


This is just some rationalization that security nerds like to regurgitate on each other.


Exposing a vulnerability in a public forum is the fastest way to patch said vulnerability. You don’t have to be a security nerd to connect the dots


> Exposing a vulnerability in a public forum is the fastest way to patch said vulnerability.

Unjustified empirical claim.

And even if it were true, it would still be insufficient for justifying public disclosure, classic is-ought.

Also, only nerds stand up for security nerds.


> If you create something, make it freely accessible and easy to use, then I think you are partly responsible for its misuse.

I don't agree, but people will never reach consensus on this moral topic. So please don't call the other side "at best naive".


If there's no duty to consider potential harm when releasing tools, then why would there be a duty to avoid criticising people who release them?


But if such things exist in secret and there's neither awareness nor a reference implementation to develop countermeasures against, wouldn't that also cause harm?


Absolutely, and there's definitely a strong argument to be made for increasing awareness of existing tools and attack vectors.

Importantly though, that aim is served by proofs-of-concept and information sharing more than it is by releasing point-and-shoot attacks. This comment puts it rather well [1].

As above, I'm not saying this shouldn't have been created, or even that it shouldn't have been released. I am saying that there are important ethical considerations and questions raised that should be addressed directly by the creators, rather than being handwaved away as in the quoted disclaimer.

[1] https://news.ycombinator.com/item?id=31652008


OpenAI attempted this with GPT-2’s incremental release strategy and was lambasted on moral and ethical grounds as well.

https://openai.com/blog/better-language-models/

So individuals working on a technology like this have no clear societal consensus to use as a guide when making the decision. Strong arguments can be made in both directions.


I thought that was actually a positive sign from OpenAI; potentially a disingenuous one, but it at least showed an awareness of the potential issues.

There will always be people who find the very idea of thinking about things from an ethical perspective offensive; that doesn't mean they're right. A lot of valuable discussion in this space is drowned out by people shouting down attempts to explore nuance and shine a light on the grey areas [1].

I'm not asking for censorship or a government crackdown; I do think that it's irresponsible to dodge the issues raised, act as though they are settled, or dismiss any concerns as anti-progress. As you say, there is no clear societal consensus, and there are strong arguments in both directions; what I want to see is those discussions taking place, in good faith, and those issues being acknowledged, rather than ignored.

[1] https://twitter.com/emilymbender/status/1532699418281009154


Considering potential harm is one thing; being liable for misuse is another. If you're releasing something that can only harm others and has an arguable net downside, that may be a good reason not to release it.

What we're talking about (I think) is releasing something with a net upside, but with potential for misuse and the resulting liability?

Am I misunderstanding your point?


Unless we cut off the chain of responsibility somewhere, the creators of the programming language, the computer and its components, as well as the people who have designed the computer and those who mined the required minerals are responsible as well.


> Unless we cut off the chain of responsibility somewhere

We can do that. We do it all the time, assigning varying levels of blame to different parties for different things.

Determining between proximal and ultimate causes, or assigning more weight to one cause or another, is not some impossible burden.


Good thing nobody is suggesting we *checks notes* jail silica miners for building the Deepfake Offensive Toolkit.


What's the difference between this and Metasploit, sqlmap, ... ? Not saying you're wrong, it's just that while these tools have valid and legal uses (pentesting) they're also used in black hat scenarios and one could say the same about DOT.


Which is why Metasploit triggered the same discussion.


That’s a legal disclaimer, not a moral one.


I think that's part of what makes me uneasy about this - it's a solely legal disclaimer for something with a moral dimension, and it reads to me like the abdication of responsibility, rather than just a shield against litigation.


Oh no! We didn't remind our users to be kind to each other!

We're software developers, not kindergarten teachers.

How do you feel about SQLite's code of conduct? Because if you don't like that, then you shouldn't be making moral statements in your own repositories.


Kindergarten teachers, as people who can have significant positive or negative impact, are bound by codes of ethics and practice.

This is also true of almost every other area of human effort, whether implicit or explicit. As a catch-all baseline, we have the various legal systems.

The fact that a significant proportion of the tech industry refuses point-blank to even discuss the idea of ethics is not a sign of superiority but immaturity.


It is an attempt at a legal disclaimer. It may not hold up in court.


Under what charges? If we're talking about the US, they have the first ammendment and all that. As for defamation laws and similar, the person who used the tool for those purposes would be on the hook, not the creators.


That entirely depends on the circumstances. But given you are mixing criminal, constitutional, and civil law (libel, sometimes also criminal) in one comment (oh, and it's 'amendment') I don't think that this will lead to a fruitful discussion.

My only point was: you can disclaim anything you want, but that won't stop people from suing you if they feel like it (this can happen anyway, even if you disclaim any and all responsibility), and creating something that can very easily be abused and then throwing it out into the public - even though that is standard practice in the security community - may well end up being a boomerang.

There is some precedent for the creators of such software being sued:

https://knowtechie.com/a-company-that-makes-iphone-hacking-s...

So I would not be too quick to dismiss that this could happen to smaller entities or even open source creators of such tools.


It's the same as the age old "guns don't kill people" that seems to work fine for the weapons industry in the US.


The world is larger than the US, and even there that argument does not hold water:

https://www.nytimes.com/2022/02/15/nyregion/sandy-hook-famil...


They sued using the argument that the manufacturer’s advertising violated Connecticut consumer law.

That’s not remotely the same thing as holding a creator liable for the misuse of his work by unrelated third-parties.

Frankly, the illiberal push towards “ban everything that could be misused” is disheartening, especially when we see it applied to a speech-based tool like this one.


Yes, they did that because they did what everybody in that position would do: pick the argument that is most likely to find sympathy with a jury.

Also: note that the degree to which everything is labeled 'speech' is fairly uniquely American.


This argument echoes the same justifications made for the US’ cryptography embargoes in the 1990s.

(Rightfully) labeling code as speech is what allowed us to break down that barrier.

Would you also be questioning the ethics of djb in 1995? https://en.wikipedia.org/wiki/Daniel_J._Bernstein#Bernstein_...


No, it doesn't.

Because once created, something will have a life of its own. The question is only meaningful prior to creating it. This goes for the atomic bomb, the invention of gunpowder, strong cryptography, and any other tool.

But if you don't care about ethics then it is obvious why you would not ask that question to begin with and I think that technology without ethics is no better than your average arms manufacturer. Only there at least we have - in some cases - some control over the uses, whereas software multiplies at a speed that politics can not catch up with. This may be a good thing, or it may be a terrible thing, and if it is the latter you will only find out when it is too late if you don't spend some cycles thinking about it beforehand.

I liked the internet a lot better before I had to think about the security implications of every move I made, and the difference between 'bad actors' that are incapable of wrecking everything and 'bad actors' that are empowered by the brain children of the good actors far more than the good actors are (due to the asymmetry between destruction and creation) may be more than we can deal with.


You seem to be interpreting the internet’s historically libertarian ethics as not thinking about ethics, rather than what it really was (and is) — an intentional, studied decision to prioritize openness and individual empowerment, instead of trying to enforce safety through prior restraint.

Individual empowerment enables incredible creativity, engenders individual and societal resilience, and leaves space for those who do not fit the norm to contribute their talents in ways we might not otherwise anticipate.

The second — prior restraint to enforce “trust and safety” - requires exerting authoritarian or paternalistic control over others to achieve its aims. In the process, it gives those who prioritize power over others the tools they need to undeservedly secure it, and invariably leads to individual, social, and institutional ossification.

Having things go wrong is a good and necessary thing; a perfect world is a fantastically dull and oppressive one.

These views did not arise from a lack of ethical consideration; they were a very considered ethical foundation on which the internet was originally built. In many respects, it was originally built to stand in opposition (or at least offer an alternative) to the staid and oppressive institutions that prioritize trust and safety.


I don't think we have read the same bits about the history of the internet. Openness and individual empowerment had absolutely nothing to do with the origins of the net.

The main purpose as far as I have it down was to create a method for (government) researchers to communicate and to have a reliable communications network in wartime.

https://www.usg.edu/galileo/skills/unit07/internet07_02.phtm...

Feel free to rewrite history any way you want it though. Perhaps you were confusing the world-wide-web and the internet?


No; your condescension aside, I’m thinking of the internet I grew up on, from the late 80s and into the 90s.

Gopher, Usenet, and IRC. Eventually, the web.

Yes, the internet grew out of government projects, but it was rooted in the openness of academic networks.

The “trust and safety” ethos is merely the staid hall monitors of the status quo doing their best to co-opt something that was only possible to create outside their influence.

It's healthy for them to lose sometimes. Maybe even most of the time.

Opposing the “trust and safety” hall monitors is an ethical position, just one that’s (seemingly) entirely contrary to your own.


The difference is that guns are literally deigned for killing people. There are nonlethal alternatives, so when people buy guns (the exception being hunting), the purpose is to be able to kill someone.


There are non-lethal alternatives to devices "literally deigned [sic] for killing people?"

This logically does not compute.

I guess you mean in situations where guns are currently used, other strategies could be employed?

Since the goal of a firearm is to hurl a chunk of metal through the air at high speed, I can't really think of alternatives. Besides air rifles, we just have traditional powder guns filling that niche.

The usefulness of firearms extends beyond killing people, and I don't think "guns are literally designed for killing people" is very accurate. I think the majority of guns are not designed for killing people. I think the Ruger 10/22 is the best-selling gun of all time, and it's .22 caliber, which is definitely not what you'd choose if the goal of your design was to kill people.


I'm a bit confused here: the majority of the uses for guns seem not to be to 'hurl a chunk of metal through the air at high speed' but to achieve the intended outcome: to wound or kill someone. Sure, there is hunting, but since the advent of the farm, hunters have seen less and less employment, to the point that some societies have no hunters at all.

Guns are universally classed as 'weapons' not as 'metal object throwers'.


>The usefulness of firearms extends beyond killing people

Not really, not for the vast majority of them. For hunting or sport shooting a bolt action makes sense, but a 9mm pistol is a tool specifically designed to kill people, just like an automatic rifle is.


We're actually super fortunate that the development of deepfake technologies has been done relatively out in the open, with source code, concepts, and pre-trained models often being readily shared. This allows for a broader-based understanding of what is possible, and then hopefully the development of ways for folks to inoculate themselves, or at least to have some societal-level resistance to being hoodwinked. If this tech were only developed in secret, and was being used in a targeted manner, who knows what kind of large-scale manipulations would be undertaken.


So, if we take your naive take to its logical conclusion: the Apache Foundation should be considered partly responsible for all the malware distributed via their HTTP software?


That’s not a naive take but a stupid one, and it wasn’t made by OP. Slippery slope is a fallacy, not an argument.


There is a conceptual difference between releasing a weapon (even if for research/defensive purposes) and releasing tools which could later be adapted negatively. That's not to say that weapons should never be created/released, but that there is an extra onus on the creators to at least consider the harm they are enabling and possible mitigations.

The Apache Foundation - great example, thank you - despite not creating weapons, has clearly put thought into how their work interacts with wider society and how to ensure positive outcomes as far as possible [1]. People absolutely don't have to agree on moral issues, but it is irresponsible not to have considered them.

[1] https://apache.org/theapacheway/index.html


> pretend that any misuse was entirely separate to you is at best naive.

No, it is naive to pretend facial recognition is worth anything when creating tools to defeat it is a mere academic exercise in reading papers and implementing algorithms described in those.

Don't shoot the messenger.


Should Ron Rivest, Adi Shamir, and Leonard Adleman have responsibility if criminals use their discovery?


Is this the first pen test tool you've seen? No, they aren't responsible, morally or legally or however you want to slice it. At all. Not one bit.


Reminds me of Oppenheimer: "I am become Death, the shatterer of worlds."


you can hear Oppenheimer himself: https://www.youtube.com/watch?v=lb13ynu3Iac


Do you ask for the same level of culpability from a hammer manufacturer? Should hammer makers have trouble sleeping at night because someone bludgeoned another person to death with a hammer made by them?

No. You don't.


Hmm. It is a tough one.

I am closer to your line of thinking than not, but, at the same time, from where I sit, it seems that data monitoring has already gone too far and the regular user has to have a way to circumvent it.

For that reason alone, this tool, even if it results in some abuse, just evens out the playing field.


The same applies to a lot of security-oriented tools, most notably Metasploit, which faced these same accusations when it came out long ago; nowadays it's no big deal, as exploitation frameworks are more accessible. The same applies to a lot of niche technologies that can be misused.


Do you think knife manufacturers are responsible for stabbings and knife attacks?


You are partly a cause of the misuse, but you are not morally responsible for it. A causal relationship doesn't necessarily imply a moral one.


> then I think you are partly responsible for its misuse

irrelevant


Exposing these tools to broader use will accelerate the development of mitigations, which is a net win for regular users IMHO.


This is morally corrupt, dangerous and would lead to an oppressive, violent society. A knife maker killed for murders, a watch maker killed for the tyranny of time.

We should do the exact opposite. Science and reason would cease to exist otherwise. Individuals wouldn't need to be held accountable, innovators/engineers/scientists/entrepreneurs would be. End of a free society.

Have you thought of the chilling effect this might bring?


This is not a good faith argument.

I've been extremely clear throughout this comment section - I don't want censorship, I don't think this should be banned, and nowhere have I called for legal consequences for releasing this. I want people to think about their actions, and to discuss the ethical issues arising from them.

And yet you've jumped to calling me morally corrupt because I want to murder craftsmen. That's not a reasonable reading of my comments, or in any way a proportionate response.

If you want to talk about chilling effects and the importance of science and reason, how would you describe your comments? You're shouting down discussion with wild accusations.


I am discussing larger societal implications of what happens when this type of morality expands to the extreme.

Absolutely zero to do with you personally. You're building a strawman.

To you personally: If you're going around touting "other side's morality", you should be open to seeing what happens when this kind of thinking is adopted by the society. Don't play the victim game.


> A watch maker killed for the tyranny of time.

This isn't so much a strawman as a slippery slope made from them. It is not reasonable to leap from 'people should acknowledge and discuss the ethics of their actions' to 'blood-drenched tyranny'.


Yes, done on purpose. IMO when discussing morality, we should apply it consistently, as a litmus test for whether it holds up or not.

I am arguing that your case is fundamentally flawed, irrational, corrupt, inconsistent and dangerous.

Edit: I can't respond to your slippery slope fallacy, but you started with an extremely broad stance: "Anything that can be created, will be created. However, that doesn't free you from all moral culpability." I am responding in kind with extremely broad implications. I guess we're busy debating which fallacy lands in whose lap while not addressing what I countered your argument with. First came the victimization, then the strawman card, then the slippery slope fallacy card. But you never considered responding to what happens when we take your ideas and apply them consistently. I still don't understand why you feel like you're being accused. I am arguing about the points you raised. No one is accusing you of anything.


A slippery slope [1] - particularly based on a wild misstatement of my position - is not a strong argument.

If you want to argue against my case, then please do that, starting by engaging with the position I'm actually advancing [2]. It's difficult to discuss morality at all when you're deliberately engaging in violent rhetoric rather than with me.

[1] https://fallacyinlogic.com/slippery-slope-fallacy-definition... [2] https://news.ycombinator.com/newsguidelines.html


What the authors _could_ do is add some kind of secret watermark that would only be shared with select government agencies and perhaps software companies that could be trusted to keep it secret.

That way, the software could be used for pen testing, but it could cause a silent trigger to go off.

That could even be a way to monetize the software....


This authentication mechanism is flawed and needs to go ASAP. These guys are doing something positive by speeding up the process.


It is not a moral statement, just lawbabble.


By this argument Linus Torvalds would be responsible for everything from child porn to drug marketplaces to nuclear weapons.


Having morals when the other side doesn't is a weakness.

Make the tools available to everyone. As Elon Musk says, the sun is the best disinfectant.


Having morals when the other side doesn't is the best possible reason to be on different sides.


Morals are cultural... in some countries they mean burning gay people.

Power however, is amoral and neutral


I work at Axis, which makes surveillance cameras.[0] This comment is my own, and is not on behalf of the company. I'm using a throwaway account because I'd rather not be identified (and because surveillance is quite controversial here).

Axis has developed a way to cryptographically sign video, using TPMs (Trusted Platform Modules) built into the cameras and embedding the signature into the h.264 stream.[1] The video can be verified on playback using an open-source video player.[2]

I hope this sort of video signing will be mainstream in all cameras in the future (cellphones etc.), as it will pretty much solve the trust issues deep fakes are causing.

[0] https://www.axis.com/ [1] https://www.axis.com/newsroom/article/trust-signed-video [2] https://www.axis.com/en-gb/newsroom/press-release/axis-commu...
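
For those curious what "signing the video" boils down to in practice, here is a rough hash-and-sign sketch (this is not Axis's actual implementation; the choice of curve and the per-frame granularity are my own assumptions, and in a real camera the private key would be generated and kept inside the TPM rather than in software):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.exceptions import InvalidSignature

    # Illustrative only: in the camera, this key pair lives inside the TPM.
    private_key = ec.generate_private_key(ec.SECP256R1())
    public_key = private_key.public_key()

    def sign_frame(encoded_frame: bytes) -> bytes:
        # The signature covers the encoded frame itself, so any pixel-level
        # tampering invalidates it; it would be carried alongside the frame
        # inside the h.264 stream.
        return private_key.sign(encoded_frame, ec.ECDSA(hashes.SHA256()))

    def verify_frame(encoded_frame: bytes, signature: bytes) -> bool:
        try:
            public_key.verify(signature, encoded_frame, ec.ECDSA(hashes.SHA256()))
            return True
        except InvalidSignature:
            return False

The player then only needs the camera's public key (e.g. via a manufacturer certificate) to check that every frame it plays back was signed by that device.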


It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!


>> I hope this sort of video signing will be mainstream in all cameras in the future (i.e. cellphones etc), as it will pretty much solve the trust issues deep fakes are causing.

> It shouldn't be too hard to film a deepfake movie from a screen or projection in a way that doesn't make it obvious it was filmed. That way, the cryptographic signature will even lend extra authenticity to the deepfake!

Would you even have to go that far? Couldn't you just figure out how to embed a cryptographically valid signature in the right format, and call it good?

Say you wanted to take down a politician, so you deepfake a video of him slapping his wife on a streetcorner, and embed a signature indicating it's from some rando cell phone camera with serial XYZ. Claim you verified it, but the source wants to remain anonymous.

I don't think this idea addresses the problems caused by deepfakes, unless anonymous video somehow ceases to be a thing.

Similarly, it could have serious negative consequences, such as people being reluctant to share real video because they don't want to be identified and subject to reprisals (e.g. are in an authoritarian country and have video of human rights abuses).


The point of signing is to do a checksum of the content of the video, not just adding a signature next to the stream (which would be pretty useless).

The signature of the politician on a street corner would only be valid if it actually verifies the content of the video; and the only entity that can produce that signature is the holder of the private key.


> The point of signing is to do a checksum of the content of the video, not just adding a signature next to the stream (which would be pretty useless).

That's what I meant: sign the video content with some new key in a way that looks like it was done by a device.

> The signature of the politician on a street corner would only be valid if it actually verifies the content of the video; and the only entity that can produce that signature is the holder of the private key.

The hypothetical politician's encryption keys are irrelevant. The point is that even authentic video is going to be signed by some rando device with no connection to the people depicted, so a deepfake signed by some rando keys looks the same.

IMHO, cameras that embed a signature in the video stream solves some authenticity problems in some narrow contexts, but it's nowhere near a panacea that "will pretty much solve the trust issues deep fakes are causing."


It's not about the keys of the politician. It's about the keys of the manufacturer of the camera. The aim of the signature is to prove that a video did in fact come from such camera at such location, at such time.

Of course, if one has physical access to the camera/sensor it's possible to make a fake video with a valid signature. But it's a little more difficult than simply running a deepfake script on some machine in the cloud.


> It's not about the keys of the politician. It's about the keys of the manufacturer of the camera. The aim of the signature is to prove that a video did in fact come from such camera at such location, at such time.

If the goal is to allow a manufacturer to confirm video came from one of their cameras, I think it's somewhat more helpful than I was originally thinking, but it doesn't change my opinion that this technology would only "solve some authenticity problems in some narrow contexts." Namely, stuff like a burglar claiming in court that security camera video was forged to frame them. I don't think it addresses cases of embarrassing/incriminating video filmed by bystanders with rando cameras and other stuff like that.

> Of course, if one has physical access to the camera/sensor it's possible to make a fake video with a valid signature. But it's a little more difficult than simply running a deepfake script on some machine in the cloud.

If you're faking a video like I described, you certainly would have "physical access to the camera/sensor" that you claim made it. You're making a fake, which means you can concoct a story for the fake's creation involving things that are possible for you to acquire.


A screen with double the resolution and twice the framerate should be indistinguishable. Moreover, if you pop the case on the camera and replace the sensor with something fed by DisplayPort (you'd probably need an FPGA to convert DisplayPort to LVDS, SPI, I2C or whatever those sensors use, at speed), that should work too.


This is still a lot harder than just using the deepfake application in the OP. But I’ll admit the arms race might not be over yet.


You can raise the bar a bit. No one checks to see if a $5 bill is fake. There is no real upper limit on what the payout for a deepfake could be. TPM isn't going to save facial-recognition ID systems like the IRS's from being obsoleted by deepfakes. But for things that go to trial, a TPM dashcam with tamper-evident packaging (that can be inspected during trial) is probably good enough for small claims court. You could add GPS spoof detection, put as much as you can on the TPM chip (like the accelerometer), and sign all sorts of available data along with the video, but that will up the unit price a lot, and for the kind of fraudulent payouts you'd be trying to stop, you wouldn't make it enough harder to keep it from being cost effective for the fakers.


Not if the camera includes metadata like focus, shutter speed, accelerometer, GPS, etc. I don't really know, but I imagine the hardware security required wouldn't be too far from what's common now. Cameras are already unrepairable, so I suppose the arguments would have to be more from the privacy and who-controls-the-chipmakers perspectives.


GPS spoofers are available legally; you just replace the GPS antenna with the spoofer, so no FCC violation. You'd have to break the law if you don't want to open the case to get to the antenna. I don't have any answers to the accelerometer or focus other than replacing those sensors too, and if you made the accelerometer part of the same TPM-enabled SoC it would make moving shots, like those from a dashcam, hard to fake.


It's not like TPMs are infallible. And even if they are thought secure today, older encryption becomes trivial to crack with time. But like you said, it's about raising the bar. You can do a lot to mitigate the threat of deepfakes up to a certain point, which will eventually push them back into just the realm of those who really know what they are doing. That's not ideal, but well-funded and talented groups have been able to falsify evidence to discredit people since the beginning of time. So the nature of the problem doesn't change, they just have another tool.


You could include LIDAR with the camera and use that data to verify that there is depth to the image.


Isn't it fun how the analog hole works both ways? :)


That will just move the hack one level further down and will create even more confusion because then you'll have a 'properly signed video stream' as a kind of certificate that the video wasn't manipulated. But you don't really know that, because the sensor input itself could be computer generated and I can think off the bat of at least two ways in which I could do just that.


"That will just make it more difficult to make fakes."

Yes, that's kind of the point. Plus I'm sure they could put the whole camera in a tamper-resistant case. They could make it very difficult to access the sensor.

Including focus data should make "record a screen" a bit harder too. I guess recording a projection would be pretty hard to detect, but how likely is it that people would go to those lengths, vs using a simple deep fake tool?


> "That will just make it more difficult to make fakes."

Please don't use quotes when you are not quoting.


Isn't the point to prove physical access to the camera? As in, this stream originated at this TPM, which was sold as part of this camera.

So, the best you get is that the stream shows it wasn't produced with access to a particular camera. Then impersonating a YouTuber, say, requires access to their physical camera.


> Isn't the point to prove physical access to the camera.

Yes. But since you can't really prove who had and who did not have physical access to the camera (what are you going to use, video evidence ;) ) that carries little weight.

Let's say we're talking about a death penalty or a multi billion $ lawsuit, the stakes would be so high that to rely on something as trivial as a signature on a video stream would be way too risky because of the consequences.

Which expert would sign on the dotted line to the statement that 'the equipment has not been accessed by someone who wasn't authorized to do so'? And that's before we get into the question of whether this camera really is the only way that this particular video could be generated (can duplicates of the TPM be made by the manufacturer?), you'd have to do a lot of work to make guarantees that would stand the test of time.

All it takes is one 'DVD Jon' and you can kiss your carefully constructed scheme goodbye.

Finally, the whole chain of that video would have to preserve the signatures, which in a world that tries to balance privacy with other concerns may not be feasible and in fact could well have downsides all its own.

Personally I would much rather see the level of skepticism against all kinds of digital evidence go up than to see the trust go up due to supposedly magic signatures and other tricks to pretend that we have a perfect lock on this stuff. Every time we believe we do someone comes along that proves us wrong, usually after the damage is done. See also: fingerprints, face recognition, DNA evidence, lie detectors, bomb detectors and so on.


TPM is supposed to be a secure enclave that's resistant to electronic access, so non-duplicable (in theory). So as much as anything, you're pricing access by the signing of a file with the TPM. That means continuity of 'proof' -- perhaps we'll get to the point where famous people perpetually film themselves using a third-party verifiable module to sign the data, just so they can show "that wasn't me" (i.e. technological deniability).

I don't think you can show the reverse (it was you), nor can you make a 'proof' without an established chain (you're only proving that you verified something with the same equipment you previously used).

Reputation seems like it could become much more a part of social media, like, this video was signed by all these witnesses to endorse it as a true account.


> Which expert would sign on the dotted line to the statement that 'the equipment has not been accessed by someone who wasn't authorized to do so'?

You have a lot more trust in the ethics of expert witnesses than I do.


Well, expert witnesses who give false testimony by honest mistake are a lot more common than expert witnesses who will go on the record for things that they know to be false.

Expert witnesses and liability are a very interesting area of the legal system. There are all kinds of variations on this theme, ranging from blanket immunity to various forms of liability. Knowingly giving false testimony would likely lead to problems, especially if, say, a panel of experts were in turn to testify that said expert should have known and could have known that what they testified to was wrong.

I'm not aware of a case that went that far, but I would love to read more about such conflicts between expert witnesses on two sides of a lawsuit; usually the experts agree in broad terms in their testimony and differ only in the details.

Maybe someone who has served in such a capacity can chime in here?


Just like this post about making a software bypass to a security check, there are hardware guys making bypasses to security checks.

Now you create a situation where hardware bypassed cameras have immense value to bad actors.


> it will pretty much solve the trust issues

I'm not so sure. How will you verify a signature if you see a video on TV or on social media? Do you believe these devices are 100% secure and the keys will never be extracted?


Nothing is 100% secure, but the point of using a TPM is to prevent extraction of the keys.


TPMs are great at preventing extraction of keys via the exploitation of software bugs. They are not so good at preventing extraction if you have physical access to the machine, and especially not if you don't need the machine to survive intact, and can afford to destroy as many machines as you need to get the keys out...


I agree that it's better than nothing. Sooner or later deepfakes will be pretty much perfect and detection will be more or less impossible.


It will be 6 months before hardware bypass devices show up on alibaba.


I don't see how that would be useful here. I'm not giving out info about my webcam to anyone. There would be no way to verify the signature.


Manufacturer signature, not your signature.


That will help someone determine that a fake video didn't come from their own camera.

But most videos where we're worried about deep fakes are videos that come from other people's cameras, where we don't know which signature should be valid, nor whether it should have a signature.


I believe they're saying that the manufacturer will sign the video, not the filmer. Those signatures can then be validated by platforms that the video is uploaded to. The signature isn't supposed to say "this video came from person X" it's supposed to say "this video in fact came from the real world".


But there are many manufacturers of cameras. How do I know that a particular clip should even be signed?

Signatures don’t change the provenance problem.


It seems like the goal would be to make this ubiquitous enough that eventually you'll expect all clips to be signed, although this would of course take quite a long time to achieve, which would indeed undermine the authenticity guarantees a system like this could provide.

It seems like official channels like C-SPAN, White House press briefings, and other world governments could get immediate benefit by applying these methods to all of their official communiques.


>it will pretty much solve the trust issues deep fakes are causing.

It's a nice piece of tech; I can see it being used in court, for example, to strengthen the claim that a video is not a deepfake.

However, that's not "the" problem with deepfakes. Propaganda of all sorts has demonstrated that "Falsehood flies, and the Truth comes limping after it". As in, with the proper deepfakes, you can do massive damage via social media, for example. People re-share without validation all the time, and the existence of deepfakes adds fuel to this fire. And I think that we can't do anything about either.


That's really cool! I've been waiting for tech like this to finally come to light. Honestly, I expected either Google or Apple to lead the way on it. Have you all worked with the Content Authenticity Initiative at all? It seems like they're looking at ways to develop standards around tech like this to ensure interoperability in the future.

https://contentauthenticity.org/


The surveillance state's wet dream, where you can just look up who took that 'unfair' video of your agent breaking the rules.


You're being uncharitable. I read it as a way for a person to prove they took the video, not as a database of person<->signing key. In other words, the government would only be able to see that video 123 was signed by signature 456. It would be up to the poster to prove they own the camera that produced signature 456.


So as long as you never post any video you made with that camera on any account that is not anonymous, you’re safe. In other words, completely impractical.

This is exactly why all these identifiers and all this tracking are harmful: you can never be anonymous, there’s always someone watching who can couple everything you do to your identity. And it happens with you knowing as little about it as possible.


Hey, off topic, but I love your cameras and the attention to detail you put into them, especially in regards to what helps the technician installing them.


A few months ago, the IRS made me verify my identity using some janky video conferencing software where I had to hold up a copy of my passport. The software was so hard to use, that I can't believe average people manage to do it. Now, real-time deep fakes are literally easier to create than using the video verification software itself. This will have interesting societal implications.


In India, digital signature issuing companies use webcam video to authenticate the applicant as well (I don't think even holding a document is required); that digital signature is used for everything from signing tax filings to paying taxes.

I hope deep-fake detection software can compete with deep-fake generation software; I've been tracking this need-gap on my problem validation forum for a while now [1].

That said, there are ethical usages of deep-fake videos as well; in fact I might check out this very tool to see if I can use it for 'smiling more in the videos', since remembering to smile during videos is exhausting for me. There are other ethical usages, like limiting the physical effort needed to produce video content for those with disabilities (like myself) [2].

[1] https://needgap.com/problems/21-deep-fake-video-detection-fa...

[2] https://needgap.com/problems/20-deep-fake-video-generating-s...


I mistyped my SSN this year and I wound up doing something similar: I had to take off my glasses and hold my face exactly in the center of the camera, while repeatedly squinting (hopelessly) to try and read the error messages as they alternated between "TOO CLOSE" and "TOO FAR AWAY". I gave up and, luckily, a few hours later found the mistake.


I'm glad they released this.

I'm sick and tired of seeing big companies and orgs (Google is the most recent) publish an amazing application of ML but refuse to release the trained model because the model is biased and may be used in a bad way.


I suspect that's mostly an excuse and they just want to keep it to themselves for commercial reasons. I mean I'm sure they are happy not to have to deal with any ethical issues by keeping it private but that's probably a secondary motivation.

It's not like they release their state of the art ML stuff when there aren't any ethical issues anyway, e.g. for voice recognition.


To those who ask about the ethics of releasing something like this, I'd say that this technology already exists, and bad actors probably already can get access if they really want to and are sophisticated enough. Making this available to the general public will spread awareness of the existence of such tools, and can then possibly have a preventive effect.


As someone with a stalker, I can't emphasize this enough. A stalker will go to all sorts of lengths to do bizarre shit. People don't believe it. I would guess governments will do some equivalent thereof.

Democratizing access to things -- including bad things -- has a preventative effect:

1) I can guard against things I know about

2) People take me seriously if something has been democratized

The worst-case scenario is if my stalker got her hands on something like deep fake technology before the police / prosecutor / jury knew it existed. I'd probably be in jail by now if something like that had ever happened. She's tried to frame me twice before. Fortunately, the attempts were transparent. She'll try again.

Best case scenario is that no one has access to this stuff.

Worst case scenario is only a select group have access, and most people don't know about it.

Universal access is somewhere in between.


I want to second this, as there are so many people that just simply can't believe how much time and energy some people will put into destroying someone else's life.

And when you ask for help, people think you are the insane one because they simply can't believe your story about the insanity of someone else.

I hope you find relief from your stalker sometime. I found it (not a stalker exactly in my case) by letting the person burn themselves with their behavior so many times, without me doing or saying anything in return (my strategy of non-direct conflict, and it worked for me), that eventually they ran out of people to manipulate and fool.


A worse situation: What if you ARE insane, and it's just that this time it's not your imagination running wild, it's actually someone stalking you? People who have some mental problem, whatever it is, have a much harder time since no one will believe them. It's always just "I know you are feeling bad right now..." even though the diagnosis is ADHD or something else irrelevant.


Your "worst case" depends greatly on who the select group is. Is it movie studios making 9 figure budgets, or is it any obsessed person who can figure out how to find and install software.

Obviously, it's hard to imagine many situations like that, but you can imagine a process that required an 8-figure quantum supercomputer.


This is too myopic a view. Think about how far graphics have come on mobile devices let alone high end gaming machines in the past 5 years. This technology will eventually be accessible in the palm of your hand via a powerful enough device or the cloud. It's more a question of when than if.


At that point the select group will change. It eventually reaches a tipping point, but usually it starts with a small enough select group.


History says you cannot control how select the selected group is, no matter the implications, e.g. nuclear bomb secrets from the Manhattan Project finding their way into Russian hands.

https://www.smithsonianmag.com/history/spies-who-spilled-ato...


That's just not true. If you couldn't control the spread at all, some terrorist group would have already detonated one.


I thought I had the knowledge to build a nuclear bomb when I was in high school. In retrospect, I'm not sure I was right, but I'm pretty sure I was close to right. From where I was, I certainly could have picked up the knowledge as an undergrad; it's no longer a hard problem with publicly-available information.

For better or worse, I did not have the requisite $2B-or-so to do so at the time. My allowance was $2/week. I don't think most terrorist groups have $2B either. Even if I did have the requisite $2B, I don't think I could have done so discreetly enough to not get caught; someone would have noticed a high school student buying up things like uranium or building appropriate facilities.

I'm more concerned about biological warfare. While I don't have the knowledge to engineer a super-virus, we're at the level where a few years intense study is all it takes to have that knowledge, and a few high school students do. The cost structure there is increasingly moving into the budget of high school students too. Things like DNA sequencing and synthesis have a Moore's Law-style curve, with falling prices.

(If this sounds unrealistic, I was in high school back in the days when nerds intensely learned physics; physics is much less interesting to nerds in 2022. Hot topics for nerds change over time, and we're out of the cold war and the space age. Today's high school nerds have moved on to today's hot topics like machine learning and microbiology)


They could probably make one (it would probably fizzle, though that's still nasty, even if not at full theoretical yield); they just don't have the highly enriched uranium to do it.


Knowledge spread and stealing one/building an enrichment facility are quite far apart.


> Democratizing access to things

not to be too pedantic, but this is not "democratizing access", as that would involve print-outs/usb sticks/discs of the code distributed to people that can't access the internet, accessibility issues, bias considerations etc etc. as such, this is just "access".


Athens, late medieval/early modern Poland, and the early US all had democratic systems of sorts where about 10% of the population, give-or-take, were in the electorate.

Likewise, slavery < serfdom < caste / Jim Crow / etc. class system < equality under the law with discrimination < equality

Access, inclusion, and democracy aren't binary concepts, and progress is gradual.


I agree with everything you said, but we shouldn't deny that opportunistic bad actors exist. Or it might get on their radar and be exploited. Open source tools also tend to be better maintained, documented and reliable, so the bad guys will have a better tool.

That being said, bringing it to light also has benefits like you said. If the tool is out in the open and state of the art techniques are used, technology to detect its use will also benefit.


You are right. I saw some guy that looked like Matt Damon trying to sell me some crypto coins...


It would be so ridiculously crazy that an established A-list actor would willingly promote those crypto scams. Must have been a deep fake!


For what it's worth, actors seem to be securing themselves against this using IP rights.


I'm reminded of Firesheep - https://en.wikipedia.org/wiki/Firesheep - which came out in 2010. It wrapped session hijacking on WiFi in an easily usable interface. The technique and the vulnerability wasn't anything new, but the extension raised awareness in a big way and really sparked a big push for getting SSL deployed, enabled, and defaulted everywhere.


Ease of access does matter.

It only buys time, but that can provide the time needed to create countermeasures and ideally make those very accessible—somewhat similar to responsible vulnerability disclosure.

This piece goes into more detail: https://aviv.medium.com/the-path-to-deepfake-harm-da4effb541... (excerpt from a working paper, part of which was presented at NeurIPS).


The entirety of deep fake technology was developed mostly in mainstream academia using "raising awareness" as an excuse. Paper after paper, model after model, repository after repository. Every single time the excuse was "if we don't do it, someone else will". This was going on for years and the explanation is absolutely laughable. Without countless human-hours put into this by academia, it's pretty obvious that this technology would be nowhere near its current state. Maybe some select military research agencies could develop something analogous. Currently this is accessible to literally every crook and prankster with internet access.

Also, the notion that "raising awareness" is going to prevent deep fakes from being used in practice shows complete and utter disconnect from reality. Most people who are skeptical are already aware of how eminently fakeable all media really is. Most people who are still unaware will remain so, no matter how many GitHub repositories some dipshits publish.


I agree that raising awareness that tools like this are possible is important, and that sufficiently advanced actors can do this anyway; however, I don't think in this case releasing pre-trained weights to the general public is responsible. This could probably be used to help bypass crypto exchange KYC for money-laundering purposes. I'm not sure what the best access model is - email us with a good reason to get access to the weights, perhaps - but what alarms me is there seems to be no consideration for misuse or responsible release at all.


Even without deepfakes any kind of system relying on a person (or computer) not being tricked by webcam video seems quite questionable. People could still be tricked with a spliced video fragments of the real person or makeup especially if the set of face expressions used during "liveness check" is known ahead of time.


I try to imagine how society will deal with this. What if deep fakes are so perfect that anyone can generate real-time footage of anyone else doing anything? As a society we’d need to move out of the virtual and back into the physical. Would that be such a bad thing?

I suspect this perfect deep fake technology might be a real boon to society.


I’ve often wondered if it’s possible to put modern tech “back in the box”. At first, this seems impossible. It permeates our daily lives in so many ways.

Continued development towards a tech dystopia might be the only way. But I’m worried it will require the dystopia part before people will wake up and accept that moving back to the physical world might be required.

The next few decades will certainly be interesting, at the very least.


This technology certainly already exists and has probably been around for a long time.

https://youtu.be/CpAdOi1Vo5s?t=3786


That's like saying that nuclear weapons exist, and bad actors can potentially get them, so let's lower the bar so that anyone can.

Making such tools accessible is reprehensible. It will lead to more bad actors, to less trust in media and in any objective reality, and more erosion of our institutions and society.

There is absolutely no reason whatsoever for this. It's unethical and frankly downright evil.


On par with nuclear weapons? Downright evil?

You’re being absurd.

The technology exists, and it’ll get better. Pretending that it doesn’t exist or banning it won’t make it go away — it’ll just be used by the least scrupulous and most powerful.

Disruptive efforts like this are most upsetting to anxiety-ridden people who think that if they could just control things firmly enough, everyone and everything will be safe.

That kind of thinking doesn’t actually work, though, and it produces a stiflingly rigid, oppressive society that deserves to be upset occasionally.


I think you describe the conventional wisdom well here.

But: are those really the only options? Paranoid stop-the-world wishful thinking or all-new-trends-are-inevitable?

I realise I'm almost certainly wildly misrepresenting your point of view with the latter label! I don't really mean to ascribe it to you. But I think the conversation often ends up tacitly as a debate between those two positions, both of which are mistakes. But one other option -- as an example -- is to find more complicated ways to stop doing <bad use of tech>.

A striking historical example to me because it's so old it can hardly be said to have been an invention that needed to be responded to -- rather just an apparently inevitable fact of life: For the longest time, it seems everyday brutal violence on a level we find hard to imagine was common. At the time, that would have been obviously inevitable. Nothing else had ever happened. Then it basically stopped -- turns out it never was inevitable. Why? We didn't, for example, ban knives as a technology, but we did a lot of things over the centuries that, to a good numerical approximation, stopped us using them for violence (banning them in certain situations is only one, I suspect not the most important and maybe not even necessary).


I think in many ways, the continued evolution of technology will turn back the clock on progress in other areas of our lives.

In ages past, impersonating another person was not nearly as difficult as it is today. Advances in identity verification and such have eliminated a certain class of problems. The emergence of deepfakes doesn’t necessarily introduce “new” problems, it just resets the progress on some very old ones.

Assuming this is solved not by banning the tech but by making progress elsewhere, some pretty worrisome implications come with that. One antidote to deepfakes would be even more progress on identity verification and tracking the source of digital media for purposes of authentication / validity.

It’s not hard to imagine the negatives in such improvements at a time where we’re already tracked far too much.

I think a more ideal development would be a return to pre-tech habits. Meeting people in person. Less reliance on virtual communication, etc. But one need only look at the backlash against returning to the office to see that such a move would require something truly existential.

I suspect you are right in the long run, but I’m curious what other factors or forces might nullify the threat introduced by this tech in the short term, or if things really have to run off the rails before people decide that fundamental changes in how we interface with tech are required.


> I think in many ways, the continued evolution of technology will turn back the clock on progress in other areas of our lives.

I don't disagree that it can have that effect.

Just to note: this use of "turn back the clock" is quite distinct from "turn back the clock on technology".

> Assuming this is solved not by banning the tech but by making progress elsewhere, some pretty worrisome implications come with that.

My point was not that we can make progress elsewhere to solve resulting problems (we can) but that we can also not use the technology in the "inevitable" bad way. It happens!

> One antidote to deepfakes would be even more progress on identity verification and tracking the source of digital media for purposes of authentication / validity.

One other antidote (of many I'm sure) would be to do less identity verification in the first place.


But who are the actors worse than the mafia that is already in control?

Institutions that are corrupt and serve themselves (not the public, despite their lipservice) need to go.


>>It will lead to . . . less trust in media

Trust in media is already very low, and in fact should go lower. Deepfake tech exists, and the fact that it does, and is broadly available to bad actors, should be widely known.

Obviously the best case by far is that such weapons (in this case disinformation weapons) do not exist, but the worst case is them existing but hidden — THAT is the recipe for fooling people in the greatest numbers.

These tools existing, with widespread knowledge of their existence is sort of the least-worst case for the real world in which we live.

Yes, this does have the potential to kill pretty much anything related to video and photography (everything from art to news to documentation), but the same was true when spam was a literal threat to the existence of email. Unless we manage it, video and photography will be trusted for nothing but boring amusement; but better that than mass deception.


Dot was used for performing vulnerability assessments on many biometric KYC vendors in 2022. The Verge covered this study in this article https://www.theverge.com/2022/5/18/23092964/deepfake-attack-...


That article is linked in the second paragraph of the readme.


Well, genius of this guy. Create the threat, then sell the cure. The old-school business model we know from anti-virus software.

"I am Cofounder and CEO at Sensity, formerly called Deeptrace, an AI security startup detecting and monitoring online visual threats such as “deepfakes”." (one of the contributors of this repo)


Well, this kind of threat was just a matter of time, if it did not exist already, and public knowledge is for the greater good.


I'm really excited to see what could be done with this! I think the primary benefits of this being released are twofold:

1) It will give security researchers more freely available technology to work with in order to try and fight the malicious use of deepfakes. (I saw some interesting comments in this thread about TPM. It'd be interesting to see what other solutions are out there.)

2) It would raise the overall awareness of the general population about the existence of deepfake technology and the advancements it has made. I would argue that only a small subset of the overall population knows what the term "deepfake" means, and even fewer are aware of how far it has progressed in only a few short years. (I'm not super well versed in the topic myself; I just know that I've heard a lot of progress has been made.)

I think that since this tech is already actively being used by bad actors, the best course of action that we can take until at least a somewhat good counter to it has been adopted (and then quickly defeated) is to make as many people aware that this is something that could affect them, or their families. That this is something that could be used to get someone fired, or hurt, or killed. I think that the more that people are aware of its existence, the less impactful the overall effect of deepfakes becomes. People learn to look twice before making a call on something, because of how easy it has become to fake audio and video.


To all people who ask moral side of this tech and that it can be abused. What do you think about tools like Kali Linux? NMAP?

Basically any pentesting software can be abused.


It's not what we think about this, it is what the creators think about it that matters.


It doesn't matter what anyone thinks about it.


I actually disagree with that. I think software and ethics are closely intertwined and too often we see them as disjoint. It's interesting to try to find a software project that does not have an ethically dubious angle to it, that isn't all that simple.


My point is not that ethics don't matter, but rather that accessible deep fake technology has been on the verge of existing for years already. It was only a matter of time. You can have any ethical opinion about it that you want. But reality is that unverified video is no longer a trustworthy medium. If anything this software release makes that fact harder to ignore. We will have to adjust sooner rather than later.


> It was only a matter of time.

This is a cop out. Yes, it is a matter of time, but still it takes people to build this stuff and they are rarely the people that use/abuse it.

> But reality is that unverified video is no longer a trustworthy medium.

No, video is no longer a trustworthy medium and in many ways never was.


While it's an interesting discussion, would you ask the same question to a knife or sword maker? Your tone in this and the AMA thread is combative, and wouldn't lead to a productive discussion.

It would be much more interesting to discuss the technical merits of this project than any particular moral concerns you might have, especially on this forum.


> Your tone in this and the AMA thread is combative, and wouldn't lead to a productive discussion.

That's mostly your impression. But that may be because you believe that I have a position with regards to this particular software, which I do not, I just see this as an opportunity to gain insight in the position of the author which I find interesting and which may help guide me in similar decisions in the future, because it's something that I've been wrestling with for a long, long time. Since 1995 in fact.

> It would be much more interesting to discuss the technical merits of this project than any particular moral concerns you might have, especially on this forum.

What you find more interesting and what I find more interesting do not necessarily have to be the same things, and you are totally welcome to ask your technical merits questions.

Finally, I don't like to be told what I can and can not discuss, especially not in a thread started by someone who wrote 'AMA' where the third A stands for 'Anything'.


Fully agreed. I think software directly addresses ethical questions far more often than we tend to be comfortable admitting - almost everything involving personal data runs full tilt into questions of identity and ownership, for example.

There's no right answer to a lot of the dilemmas raised, and people will reach their own conclusions and choose particular principles to follow. I don't want to tell people how to think, but I do think that everyone has a responsibility to actually think about these issues, wherever they end up landing on them.


Hi there, one of the maintainers here. Thanks for sharing & AMA!


Even if you claim that you are legally not responsible for any abuses of this software do you feel any moral responsibility?


What kind of answer to your question would you accept? What kind of answer would you be very satisfied with, versus merely be placated by? I'm curious because I have my own answer to your question for my own things that I suspect you would be unhappy with, and I'm wondering how absolute-good you require your interrogatees to be.


I don't have any expectations. In fact, the idea that there would be an answer that I would be 'very satisfied with' would make such a question meaningless; it would presume that I am about to judge the author, which I'm not. It's an AMA so I did just that.


Isn't the situation completely analogous to other "ethical hacking" software?

I distinctly remember a very similar discussion around firesheep 12 years ago: https://www.computerworld.com/article/2469667/firesheep-fire...

https://news.ycombinator.com/item?id=1828955

> There are probably going to be a lot of people negatively affected by this for quite some time to come. One thing to point out is that there are grades of things. There is "public", and then there is "top hit on Google". Similarly, there is "insecure" and then there is "simple doubleclick tool to facilitate identity theft".

> How many millions of dollars and man hours is it going to take to lock down every access point? How many new servers are going to be needed now that https is used for everything and requests can't be cached?


Indeed it is, but I'm interested in this particular author's stance on this, prompted by their disclaimer which clearly indicates that they realize that there is the risk of abuse, and an AMA seems to be an excellent opportunity to gain some insight.


I agree.

I'm not OP but I would answer like this: abuses of this technology are inevitable and can only be mitigated by counter-software (which leads to an arms race).

The release of this source code could kickstart the development of deepfake detection software.

Or maybe in general people need to put less weight on video evidence.


Please read this https://www.biometricupdate.com/202205/sensity-alleges-biome...

We have warned many vendors about the vulnerability of their commercial biometrics software. The threat is currently downplayed by the whole industry. We hope this release will be a wake-up call and that our team will be joined by other experts in raising the alarm.

Deepfakes are already used for spoofing KYC around the world. This is already happening, and not by using `dot`.


Interesting, I will wait for the OP to answer before responding to you.


I too would like to imagine whether or not I would release something like this in similar circumstances.

Yet this question seems unfairly loaded. At the least - premature.

Though we can easily imagine it, afaik there is no evidence of any actual abuse to date.


Tools like these will end up in the hands of those that are trying to harden an installation and in the hands of those that will end up using them to try to break into such installations.

I've often wondered whether the net effect of the whole security research community is a net positive or a net negative and I honestly do not know the answer. So yes, it is a loaded question. But asking yourself if what you can do is actually the right thing to do is always good, especially if you - as these authors imply - know up front that there is a large chance of abuse.


I used to work on secure communications software, and I've often wondered if many more criminals than oppressed people were using it.

In the end, I decided that one oppressed person using it to improve their situation is morally "worth" many criminals using it. It's kind of like "better ten guilty men go free than put one innocent man behind bars".


That depends on what the guilty men are doing...

Because the one innocent is usually a defender, whereas the guilty men are usually the attackers and to give the attackers an effective advantage in an arms race is a risky thing with unknown and potentially devastating outcomes.

In my own life (the live video example listed elsewhere) it would have meant that we probably would have had everything that we have today anyway, only maybe a little bit (not even that much, I'm aware of one other individual who was working on a similar concept who contacted me after our release) later.

Video conferencing, live streams in the browser without plug ins and so on all would have happened, for sure. But at least the massive mountain of abuse cases would not rest partially on my shoulders. And because I've been confronted with the direct evidence of the results of my creation for me that link is easy to make. But if you work on secure communications software you are probably not aware of the consequences.


Thank you for a considered response. I share those feelings.

I just hope the person offering AMA will not be daunted by a good but blunt question from a voice that carries weight around here.


I've been in that position, which is one of the reasons why I'm asking. When I came up with 'live streaming video on the www' I never for one second sat down to think about the abuse potential. Color me hopelessly naive. And when confronted with the various abuses over the years I've always had a problem with that, this was the direct consequence of me just 'scratching my itch' and it caused a huge amount of misery. Oh, say the defenders, but if you had not done it then somebody else would have. This is true, but then that moral weight would be on their shoulders and not on mine.

Hence my question. Because I do feel that weight and it has caused me to carefully consider the abuse potential of the stuff that I've released since then and I've only released those things that I feel have none that I can (easily) discern.


One thing I learned early in my career, and numerous times during it: For every ethical stand you take against writing a bit of software you consider questionable, there's a line of other software engineers out the door willing to do it. I remember when as a junior engineer, I worked up the courage to tell my boss I had a moral problem with writing some code that would help the product cheat at a benchmark. He totally understood, and I didn't get fired or anything--just moved on to a different project. Bob, two cubicles down, was more than happy to write the benchmark-cheating code.

Software engineers and other technology creators don't take a "Do No Harm" oath like doctors. Many of them have never even taken a single Ethics In Technology course at university (it was an optional class when I was in undergrad decades ago). And, even in the alternate universe where ethics was baked into engineering training, all it takes is a single rogue willing to ignore them, and now the world has to deal with it.


May I ask, what proportion of the stuff you did not release on moral grounds were subsequently re-invented in short order?


everything!

Which is one of the reasons I'm so completely against software patents and a large number of patents in general. Quite a few of them are simply things that the time is right for.

Here is one of my idea dump lists, you can check for yourself which ones are not yet done (which is probably a really small number by now) and which ones have turned out to be homeruns (and in some cases billion dollar+ companies).

https://jacquesmattheij.com/my-list-of-ideas-for-when-you-ar...

One that wasn't on there eventually led to https://pianojacq.com/, which I'm happy to report to date has not led to any kind of abuse. And no, it did not put any piano teachers out of business either.


As commented above:

Please read this https://www.biometricupdate.com/202205/sensity-alleges-biome...

We have warned many vendors about the vulnerability of their commercial biometrics software. The threat is currently downplayed by the whole industry. We hope this release will be a wake-up call and that our team will be joined by other experts in raising the alarm.

Deepfakes are already used for spoofing KYC around the world. This is already happening, and not by using `dot`.


I just wonder: how well will this work with less realistic textures? Like, imagine applying it to the textures of a video game 3D model?

And yeah, please don't be discouraged from continuing to publish your work and models just because some people think it can be abused. Bad guys always have access to the tech they want anyway.


Thanks for the questions. The main faceswap algorithm we use is SimSwap. You can take a look at their repo to understand its limitations and the chances of applying it to a 3D game model. To some extent it will work, but I suspect it may not blend very photorealistically.


I'm interested mainly in the moral aspects.

1. How would you feel if this toolkit were used to create embarrassing and convincing deepfake videos of you and/or your family members (perhaps your parents)?

2. Why do you think you have to enable people to fake videos very easily?


Was the name 'deeply offensive' ever considered?


No, but you could open a PR


Mind releasing some footage of your face and your personal information?

It would be a nice gesture to say "I am as much at risk of the consequences of my actions as the people I have chosen to put at risk"


You can easily google me.


Thanks, but that's not what I meant.

Are you not worried about the conflict of interest inherent in providing offensive and defensive tools simultaneously?

If someone charged money for minesweeping and simultaneously gave out mines I think that would be a fairly clear problem. I think it's a good metaphor because it captures both the conflict of interest and large potential for collateral damage.



The moral lens people apply to deepfakes feels myopic to me. There are lots of tools that have been built that have arguably done more harm that no one talks about, like port scanning tools. Perhaps because the negative consequences are so visually obvious and require few intuitive leaps.


> Perhaps because the negative consequences are so visually obvious and require few intuitive leaps.

Exactly this. At some point it's not feasible to calculate the impact of a technology's trickle down. Unless the impact is large enough.

I'm not surprised people have such a viscerally moral reaction to deepfakes.

> There are lots of tools that have been built that have arguably done more harm that no one talks about, like port scanning tools

I don't think this is a fair argument against current tools.

If we had some way of accurately measuring the secondary effects of port-scanners, should we start caring? Should we retroactively remove any tools that we later regret? (I don't have good answers to these either)


I agree with everything you've said.

I suspect this also applies to many criticisms of crypto and NFTs as bad for the environment. The impact is simply easy to calculate... not outsized compared to other industries. We certainly don't evaluate, say, the video game industry based on its environmental impact in the same way.


Governments and bank branches should offer physical identification as a (potentially paid) service globally, and provide a digital signature attesting that a person owns a physical device (which should be valid until the device is reset or invalidated by the person).



I always imagined the USPS fulfilling this need.


I don't know how serious the USPS is, but in my country the post office doesn't check if the signature looks the same, for example.

Banks face and handle fraud on a much higher level, and have much better training in identification.


My signatures never look the same. Signatures aren't a good judge of authenticity.

My point is that a bad actor would have to show up in person and lie to a federal entity. It's not that they won't get away with it- it's that they cannot hide on the other side of the globe.


There has also been work going on in deepfake detection challenges for years; I think the FaceForensics benchmark was the oldest one, and the other two launched in 2019. I don't know if FaceForensics is still going, though.

http://kaldir.vc.in.tum.de/faceforensics_benchmark/

https://www.kaggle.com/competitions/deepfake-detection-chall...

https://ai.facebook.com/blog/deepfake-detection-challenge-re...

Edit: looks like yes, the top FaceForensics submission was on 25 Nov 2021.


I'm absolutely loving the idea of a deepfake-detection security SaaS developing an easy-to-use live deepfake system. The Crassus tradition is truly immortal.


from the article:

Real-time, controllable deepfakes ready for virtual camera injection. Created for performing penetration testing against e.g. identity verification and video conferencing systems, for the use by security analysts, Red Team members, and biometrics researchers.

Reminds one of the vulnerable world hypothesis. It's only a matter of time before a technology comes along that is both destructive and so simple any fool can use it.


In cybersecurity the bad guys always win.

There are two very rough "proofs of a kind" I use to explain this to students, both well known in military and counter-terrorism analysis.

One is called Blotto analysis. As the number of fronts grows large (>12 is an important point) the game favours the attackers. Defenders have to always win on all fronts, but terrorists only have to win once on any of n fronts. Another scenario comes from finite board games like Go. Defenders lose when their back is against a wall. Because the space of practical human security solutions is smaller than the inventiveness of attackers, eventually the attackers win. Biometrics is the poster child for this kind of folly. There are only two edge cases. One is to create proven trustless systems. Such systems cannot involve humans in any way shape or form, and therefore fail by definition as human security solutions. The other is to eliminate all attackers.
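
As a toy illustration of that asymmetry (the numbers are invented; only the shape of the curve matters): if the defender independently holds each front with probability q, the attacker's chance of winning somewhere is 1 - q^n, which creeps towards certainty as the number of fronts n grows.

    # Attacker needs to win on just one of n independent fronts;
    # q is the probability the defender holds any single front.
    def attacker_wins_somewhere(q: float, n: int) -> float:
        return 1 - q ** n

    print(attacker_wins_somewhere(0.95, 1))   # ~0.05
    print(attacker_wins_somewhere(0.95, 12))  # ~0.46
    print(attacker_wins_somewhere(0.95, 50))  # ~0.92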

This does indeed create a fountain of fascinating ethical conundrums with regard to security research and the possibility for strategic limitation of knowledge hazards.


> Defenders have to always win on all fronts, but terrorists only have to win once on any of n fronts

It depends on how you design systems and processes.

Let Mn denote the nth defense mechanism of a system. We can build architectures in which the probability of breaches is calculated as:

(M0 is pwned) AND (M1 is pwned) AND (M2 is pwned) AND (M3 is pwned) AND ...

You can see the total probability can become small quickly, as it's computed as the product of all probabilities. The corollary is: it's possible to design secure systems from insecure building blocks (as long as those are reasonably uncorrelated!)

Insecure systems are generally designed the way you suggest in your post: if a single defense mechanism is breached, the whole system will be breached. So you basically are as secure as your weakest link.
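
A toy example with made-up numbers: four reasonably independent layers that each get bypassed one time in ten give a combined breach probability of roughly one in ten thousand.

    # Every layer must be defeated for the system to be breached;
    # the per-layer probabilities here are purely illustrative.
    layer_breach_probs = [0.1, 0.1, 0.1, 0.1]

    p_breach = 1.0
    for p in layer_breach_probs:
        p_breach *= p

    print(p_breach)  # ~0.0001, i.e. about 0.01%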


Yes, I haven't mentioned defence in depth here. And as you say, serious real world applications are more than a single layer. But of course what we see with DiD is polynomial growth of complexity, so policy, monitoring, rebuild time and so much else gets hard to manage - upshot is someone, somewhere falls back to simpler configurations - and there's the entry point. But sure, thank goodness we've moved beyond a single blacklist firewall these days :)


Spot on, and given that there seem to be no real guidelines for any of this beyond 'responsible disclosure' the question for me is whether we are accelerating the arms race with all these neat point-and-click tools.


Which requires more computation: a real-time deepfake or real-time deepfake detection? That will determine the future balance of power.


Deepfakes are for deceiving humans. There are still way too many artifacts in the outputs that are easy to detect in code.


The problem is that even with extremely good deepfake detection, political operations can still use deepfakes to fool the average voter into believing they are real, irrespective of any successful analysis. You and I might not be fooled, but we don't matter if 10,000x as many people are fooled and affect our lives in a negative way anyway.


For propaganda purposes, the deepfake needs to be neither real-time, deep, nor fake, nor does it need to exist at all. A false message with zero supporting evidence repeated endlessly by high-profile influencers is a million times more effective than a single bombshell video ignored by mainstream media.


You don't need to detect a deepfake in the same time as it takes to generate one, do you?


But you may have to go through orders of magnitude more material


If you get a Zoom call from your CFO telling you to issue a money transfer to that bank account, that it is 100% legit, how much time do you have to detect a deepfake?


Does your company instantly execute payment instructions to new supplier accounts based on a remote verbal instruction?


I can imagine more scenarios other than "bad actors" (scammers and such) for this software...

For example: what about using this in an interview for a remote job (possibly intercontinental)? It could tip the balance in some situations (due to biases). For example, beating ageism by projecting a 25-year-old version of ourselves. Or removing a tattoo or birth defect (and possibly other changes too, you can imagine).


Is that something related to the big attack that happened last year in China?

https://findbiometrics.com/nine-of-top-10-liveness-detection...


I predict Kitboga will have a lot of fun with this one...


oh my... happy porcupines


This is like releasing an app that unlocks all car doors and telling users to only use it on their car.


You can buy lock picks specifically for cars. The world hasn't ended.


All cars…


I see a lot of people decrying its existence, but maybe this tool can help combat deepfakes?


This seems like a good way to get enough attention around deepfakes to make them illegal. How long before a major media personality or politician is catfished by this?


Has anyone followed the install instructions successfully on an m1 mac?


We haven't really tested it on M1. You are welcome to give it a try and open an issue if you hit any problems.


Serious question. What is the chance this video has been deepfaked? https://www.youtube.com/watch?v=ENlL-Uru-cM&ab_channel=CNBCT...


It's not a serious question. Regardless, the answer is 'nil'.


It was published by the official channels. What are you implying?


That his staff couldn't get him to do the sensible thing so had to deepfake his press conference.


I wish we were in a position where OBS was not needed.


Why so? I am genuinely interested. I am one of the authors.

Other virtual cameras would work too, by the way. OBS is in the README as it's the easiest to set up.
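
If you want to see how the virtual-camera side works without OBS, here's a minimal sketch using pyvirtualcam (simplified, not the exact pipeline in dot; the webcam index, resolution, and the run_faceswap hook are placeholders, and you still need a virtual-camera backend installed, e.g. v4l2loopback on Linux or the OBS virtual camera driver on Windows/macOS):

    import cv2
    import pyvirtualcam

    # Read the physical webcam, optionally transform each frame, and push
    # the result into a virtual camera that Zoom/Teams/etc. can select.
    cap = cv2.VideoCapture(0)
    with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # frame = run_faceswap(frame)  # hypothetical hook for the deepfake step
            frame = cv2.resize(frame, (cam.width, cam.height))
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # pyvirtualcam expects RGB
            cam.send(frame)
            cam.sleep_until_next_frame()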


Just because you think some issue is "inevitable" in the future doesn't mean you should spend academic resources to make it happen faster, while simultaneously making it easier to exploit.


Time for a Teams / Zoom plugin


You can already use this in video conferences, by the way


This is huge!



