The road to realistic full-body deepfakes (metaphysic.ai)
220 points by Hard_Space on Sept 22, 2022 | 201 comments



Funny thing, as a clueless little kid in the 80s whose mind was shaped by popular fiction, I often suspected this kind of thing already existed back then. One of my 'gotcha' questions for adults was, "I've only ever seen him on TV, so how do I know Ronald Reagan is even real?"

Over 30 years later, while I would've never anticipated smartphones... I really thought impersonation technology through video & audio editing (not dependent upon look-alike actors) would've been here sooner. Another example of wildly underestimating the complexity of what might seem like a simple problem.


In a sense, Ronald Reagan was not real. All of his speeches were written by someone else, and he relied heavily on advisors. He was a figurehead for his administration to a greater extent than most presidents before and after. He was one of the few presidents who may have actually been innocent of the bad stuff that went on in the White House during his presidency (Iran-Contra), because he never showed any indication of really understanding it the way Nixon understood Watergate or LBJ understood Vietnam.


This is hilariously untrue. He was very well known for his "A Time for Choosing" speech in 1964 endorsing Barry Goldwater. It had the same impact as Obama's speech before the Democrats in 2004. As a union leader he gave many, many well-received speeches that he wrote himself. He was politically active as a union Democrat for decades before becoming governor of California and wrote scores of speeches at that time.

From about the 1940s to the 1960s he made many radio addresses that he wrote himself. He was politically very active, touring the country and giving many speeches on his own.

As president he absolutely had the best speech writers of his time, but he went over each speech meticulously and gave feedback the writers described as expert and welcome.

One of his speech writers actually published a book that showed photographs of Reagan’s own handwritten notes for his speeches. There are thousands of them in his presidential library.


I think you also have to give Reagan credit for hiring gifted writers to shape his communications. Peggy Noonan is just one of several.


Thought I did?

> As president he absolutely had the best speech writers of his time,


Yes you did! Sorry for restating what you had communicated more parsimoniously. (I'll be looking for somebody to help write my HN comments for me.)


I saw Ronald Reagan's interview with Johnny Carson. He seemed to be pretty sharp and funny.


As another commenter has already pointed out, your assertions about his speeches are very much untrue. Also very much untrue is his supposed lack of complicity in the Iran-Contra crimes.

He was both much better (as a writer) and worse (as a law-breaker) than your depiction of him.


Can I briefly and humorously boil your statement above down to "Ronald Reagan was probably innocent by way of sheer ignorance"?


If you consider the latter years of his presidency, as well as his immediate post-presidency retirement, it'd be more accurate to say:

"Ronald Reagen was probably innocent by way of Alzheimers"

A lot of people who met him during the last two years of his presidency described what we now know to be early symptoms of Alzheimer's. He also came out publicly as having it within six years of leaving office.


You can, and you would be gravely mistaken.


Ignorantia juris non excusat


That's elitist for: Ignorance of the law is no excuse.


I have practically no technical knowledge of the field, so I could be way off here, but I suspect that if the technology wasn't so ripe for abuse, it would have been developed and/or made visible to the public a lot earlier


I have some technical knowledge of the field, and this is not true. The ability to do this is brand new and is getting into the public's hands at essentially the same time as anyone else's.

For hard evidence of this, see Russia's use of deepfakes a few months ago to impersonate Zelensky and attempt to make Ukraine think their leader was surrendering.

The deep fake was technically advanced but also laughably bad.

What would be an interesting and difficult question is whether this state of things can largely be attributed to the AI community's commitment to making advances in AI open to public knowledge and use, or if there is some stronger factor at work.


It is not brand new. The ability to replace actors in film was developed for stunt double replacements as a manual process, and efforts to automate it have been halted by immature male pornographic greed. Seriously. I patented a feature-film-quality process back in '08, and investor fascination with pornography halted further progress.


How did interest from the pornography industry halt further progress? I would have assumed the high demand for the product would have driven innovation, much like how the porn industry decided the winner of the VHS/Betamax format war.


We had viable non-porn applications that would have made serious bank: 1) brand-name advertising, with a celebrity telling you (you're in the video advertisement too) how smart you are for using their brand; 2) film trailers where you're in the action too; 3) educational media for autistic children (serious work here); 4) aspirational/experiential media for psychological therapy; 5) personalized photo-real video game characters.

Basically, every time we formed an investor pool, after a while one of them would "suddenly realize" actor replacement applied to porn, and then he'd fixate on the idea. He'd talk the other angels into the idea and then they'd insist the company pursue porn. We'd explain the non-porn, higher-value applications, and the danger of porn psychologically tainting the public perception of the technology and its creators. Plus, we had VFX Oscar winners in the company; why the hell would they do porn?


Why not? It sounded like you had a good amount of investor interest. Was it just moral objections? Because while items 2-5 seem fine as business ideas, I would have assumed anyone trying to sell item 1 wouldn't have cared about porn if money was involved.


Perhaps the female company president felt it was not wise? I was CEO and agreed.



Wait what? Must hear more about this.


> "I've only ever seen him on TV, so how do I know Ronald Reagan is even real?"

This made me wonder how many of the newer generations' social media addicts would think along the lines of "I've only ever seen him in person, so how do I know he is real?".


Am I on HN or /r/oldpeoplefacebook?


made me LOL


This makes no sense. And I've only heard the opposite among Gen Z anyway, amazement at seeing someone from "online" (TikTok/YT/etc) in real life.


Remove the "as a kid" part and you're now a conspiracy theorist, or one of those people.


Maybe a dumb idea, but I wonder if there's a future in cryptographically signing videos in order to prove provenance. I'm imagining a G7 meeting, for instance, where each participant signs the video before it's released. Future propagandists, in theory, wouldn't be able to alter the video without invalidating the signatures. And public figures couldn't just use the "altered video" excuse as a get-out-of-jail-free card.

It wouldn't solve any of the fundamental problems of trust, of course (namely, the issue of people cargo-culting a specific point of view and only trusting the people that reinforce it). But, it would at least allow people to opt out of sketchy "unsigned" videos showing up on their feeds.

I guess it would also allow people to get out of embarrassing situations by refusing to sign. But, maybe that's a good thing? We already have too much "gotcha" stuff that doesn't advance the discourse.
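
Roughly, what I'm imagining, as a minimal sketch in Python using the `cryptography` package (the footage bytes and key handling are placeholders; a real provenance scheme would be far more involved):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def sign(video_bytes, private_key):
        # Sign the SHA-256 digest of the released file, not the file itself.
        digest = hashlib.sha256(video_bytes).digest()
        return private_key.sign(digest)  # 64-byte Ed25519 signature

    def verify(video_bytes, public_key, signature):
        digest = hashlib.sha256(video_bytes).digest()
        try:
            public_key.verify(signature, digest)
            return True
        except InvalidSignature:  # any altered byte invalidates the signature
            return False

    # Placeholder footage; in practice, the bytes of the released video file.
    footage = b"...video bytes..."
    key = Ed25519PrivateKey.generate()  # each participant has their own key
    sig = sign(footage, key)
    print(verify(footage, key.public_key(), sig))             # True
    print(verify(footage + b"tamper", key.public_key(), sig))  # False

The nice property is that each participant signs independently, so a propagandist would have to forge every signature, not just one.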


As I mentioned in another comment - there is such an effort underway https://contentauthenticity.org/

They don't intend to dictate who can authorize media, only provide a verification mechanism that the media was sourced from the place it claims to have been sourced from and is unaltered.

I think of it as https but for media content.


Woah! I had no idea something like this would be so far along.

It seems like they're on the right track. I think the key is to keep scope creep to a minimum. As soon as someone tries to add DRM, for instance, the whole effort will go up in flames.


Had this idea a few years ago. Great to see it getting legs.


Ditto, although I was thinking of a slightly different approach. I didn't think anyone was actively working on it, but I love it when an idea that seemed original turns out to have been magically 'realized' already, unbeknownst to me, because someone else got there first.


Yeah, I was expecting a range of cameras with cryptography embedded.


I hear this ethical concern raised a lot, usually as some variation of AI being used to distribute “fake news.”

The inverse is equally problematic and harder to solve: those in power discrediting real photos/videos/phone-calls as “deep fakes.”

Not releasing AI models doesn’t stop this. The technology being possible is sufficient for its use in discrediting evidence.

Signing real footage isn’t sufficient. You can get G7 to sign an official conference recording, but could you get someone to sign the recording of them taking a bribe?

Generating deep fakes that hold up to intense scrutiny doesn’t appear to be technically feasible with anything available to the public today. But that isn’t necessary to discredit real footage as a deep fake. It being feasible that nation state level funding could have secretly developed this tech is sufficient. It seems we are quickly approaching that point, if not already past it.


> The inverse is equally problematic and harder to solve: those in power discrediting real photos/videos/phone-calls as “deep fakes.”

This is true. Although witnesses and/or the person behind the camera could sign the video. They might want to, in fact, if they thought they might be witnessing something illegal and might need to defend themselves later.

I guess I'm imagining a future where signed videos are common. Unsigned content or content signed by some random entity would draw suspicion. Maybe not enough to keep people from seeing it, but enough that it wouldn't spread like wildfire like it does today.

There could be disputed videos, too, where one party signs and one doesn't. Or maybe a situation where two parties secretly ally against another to run a more convincing smear. Hmm, there's all kinds of weirdness that a system like this might create. Maybe a cyberpunk author could explore it further :)


I imagine realtime cryptographic timestamping services combined with multiple videos of the same event taken from various perspectives, by multiple witnesses connected to viewers by a web of trust, with good discoverability of the different authenticated viewpoints.

Combining all of those things would make it impractically difficult to fake a scene without knowing what you want to fake in advance as well as developing credible witness reputations even further in advance.

For example, imagine a car accident caught by dashcams. You'd not only have your own dashcam footage certified to have been produced no later than the event by a timestamping service, but also corroborating footage from all other nearby traffic also certified in the same way but by other, competing services.

It'd be the future equivalent of having many independent witnesses to some event.

Maybe it won't be necessary to go quite as far, but I think it would be possible for recordings to remain credible in this way, should the need arise.
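
The timestamping half could look something like this sketch (the timestamp call is a made-up stub; a real system would submit digests to independent RFC 3161-style timestamp authorities and keep their signed receipts):

    import hashlib
    import json
    import time

    def chained_digests(segments):
        # Hash each clip, folding in the previous digest so segments can't
        # later be dropped or reordered without breaking the chain.
        prev, out = b"", []
        for seg in segments:
            prev = hashlib.sha256(prev + seg).digest()
            out.append(prev.hex())
        return out

    def timestamp(digest_hex):
        # Hypothetical stub standing in for a real timestamp authority.
        return {"digest": digest_hex, "received_at": time.time()}

    clips = [b"dashcam segment 1", b"dashcam segment 2"]  # placeholder bytes
    receipts = [timestamp(d) for d in chained_digests(clips)]
    print(json.dumps(receipts, indent=2))

Because the digests are submitted as the footage is recorded, a faker would have had to know what to fake before the event happened.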


Suddenly celluloid film will become the non-fungible evidence stored in sacred temples.


Doesn't that just move the trust to the key authority?


Just hash the original video


That's the easy part. The hard part is making it simple for anyone to authenticate or verify a file and to get trust and buy-in from people, businesses and governments.

The idea of verifying media has yet to permeate the edges of mainstream thought. People are arbitrarily credulous or dismissive, based mainly on who's sharing the video.


It's already very easy to create a narrative by selective editing. There are high profile examples of this by major news networks. You should already be very skeptical of what you see in the media. It will only get worse.


Hmm, this does make me wonder what kind of effect deepfakes will have on people's general perception of the world.

I might be reaching here, but wouldn't this lead to people being more mindful of what they watch and interact with? I think all it will take is a few "state of the art" deepfakes to cause a ruckus, and the domino effect should do the rest.

Anyone in the field spent time thinking on this or has had similar notions?


Photoshop has been common knowledge for years, and people still buy some very dumb edits.

I imagine that deepfakes will follow a similar path to edited photos -- lots of deception, followed by trustworthy sources gaining a little more cachet, but with many people still getting fleeced. Skepticism will ramp up in direct relation to youth, wealth, and tech-savvy.


Even simple video fakes, such as slowing down a politician's speech to make them look slow or indecisive, have gone viral. It doesn't take state of the art to lie to those who prefer their own echo chamber.


Plenty of people have misled others online with nothing but text! Ultimately we're going to have to just accept the fact that you can't believe everything you see on the internet.


Video is more effective than text, because people think they've seen whatever event and formed their own conclusions. Those are much stronger than just being told what happened.


I've seen it argued that text is worse because it forces people to read the words with their own inner voice. Somewhere in this discussion is a guy who linked to studies saying you are incapable of reading anything without believing it. (Do you believe me?)

Text, photoshop, special effects, deepfakes: they're all just tools for spreading ideas, but we've been dealing (with some degree of success) with folks telling lies for as long as we've had language. I just can't see this fundamentally changing anything except the level of skepticism we give to video, which (considering what Hollywood has been capable of for some time) we should have been developing already.


> I've seen it argued that text is worse because it forces people to read the words with their own inner voice. Somewhere in this discussion is a guy who linked to studies saying you are incapable of reading anything without believing it.

Seems like there'd have to be some big caveats for that to be at all true, but it is interesting. I've read a lot of ridiculous crap and haven't been persuaded by much of it.

> (Do you believe me?)

No.


You've just disproved his point, but it was here: https://news.ycombinator.com/item?id=32939058#32940128


Ahh, that study. That's an interesting one. I don't think I'd summarize it as "we believe what we read" so much as "correcting false information isn't as easy as telling people they're wrong".


I think it will make it far easier to manipulate dumb people. The same 30% (?) of the people who think the last US Presidential election was stolen. These people will be easier to whip into a frenzy. I worry this will increase the likelihood of violence, above what is already happening.


(as a non-american) I don't think it was stolen, but the whole "vote via mail" thing made me really suspicious


It really is nothing to be suspicious about. Full vote by mail had already been the norm in some US states for years, and most states allowed for it in specific circumstances. The infrastructure, laws, etc., were already there, they just needed to be expanded. Expanding it has always been in the national conversation, it has just been a matter of figuring it out and priority.

So when a global pandemic occurs and we're trying everything we can to isolate and socially distance, that priority changes real quick. People get talking and problems get solved.

Of course, sore losers will complain about anything to justify their loss, and this "new thing" was a prime scapegoat. It was also well known ahead of time that the mail in votes would be largely Democratic (because COVID was VERY politicized and democrats were more likely to follow quarantine guidance and therefore vote by mail). So when the votes came in, they pointed to that imbalance and called it "fraud".

Besides all that, there's no reason to be more suspicious of mail-in ballots than in-person ones. In-person, you mark a paper ballot and then put it in a stack... which then gets mailed somewhere else. If someone is going to be changing mail-in ballots, then they're already in a position to be changing regular ones as well (and every election security professional will tell you that paper ballots are more secure than electronic ones).


It's true that who counts the votes matters, and this doesn't change between mail voting and in-person voting.

The one advantage of physical voting I can think of is the ability to just be near the voting station on voting day, counting people who go in there and asking people (who are willing to share) whom they voted for. This allows one to independently check whether fraud exists.


Exit polls are notoriously inaccurate. Given the level of fraud thus far demonstrated (minimal) there is zero likelihood of "checking" by exit polls.


> the whole "vote via mail" thing made me really suspicious

Why? Mail-in voting is hardly unique to the US; what made you suspicious?


quite unique for my country (Russia). Though they recently started to do some remote "blockchain-based" voting in Moscow, which is widely considered to be a fraud


I think most people outside Russia (and some inside it) consider all Russian elections to be fraudulent.


that's not the point. The point is, we don't have voting by mail in Russia, and I think that helps in observing how fraudulent the elections are.


I mean.. most things blockchain are.


The issue with mail-in voting is that you can be influenced by your family because it's not done in a secret booth, but it doesn't appear to lead to mass voter fraud.

(A few people have been charged with voting their spouse's ballots.)


As opposed to the dumb people who spent 4 years claiming the last-but-one presidential election was stolen, you mean?


I never heard that claim. Only "the electoral college is a bad system" or "voters were influenced by Russian propaganda." Never "votes were impacted by direct fraud."


There were fraud claims on the fringe just after the 2016 election. The evidence was sparse. It didn't take long for even those pretty angry about the election to realize fraud probably didn't happen, and if it did it was at too small a scale to meaningfully affect the results.

Unfortunately in 2020 the fringe became the GOP mainstream, treating equally soft claims as fact.


No, it wasn't "on the fringe". Note that this poll was taken in 2020, a full four years later.

"Seventy-two percent (72%) of Democrats believe it’s likely the 2016 election outcome was changed by Russian interference, but that opinion is shared by only 30% of Republicans and 39% of voters not affiliated with either major party."

https://www.rasmussenreports.com/public_content/politics/gen...


By fraud I mean actual voter fraud. As in, effort was made to cause invalid votes to be counted or valid votes to not be counted.

Russia absolutely did and continues to push propaganda into elections in the USA and elsewhere. That's not really in dispute at this point so I'm not surprised it polls that high.

Got a poll that shows similar numbers for fraud? I would be genuinely surprised to see that.


There were many claims that voters were illegitimately purged from the rolls, which is pretty much the equivalent.

I should actually note here that I didn't vote for Trump, either time, nor did I vote for Clinton or Biden.

I just hate hypocrisy.


> it’s likely the 2016 election outcome was changed by Russian interference

That sounds like a perfectly reasonable claim with evidence that supports it, paralleled by other elections in other countries as well; quite obviously very different to what was discussed above.


Then you weren't listening. People were screaming "Russia stole the election" from Day 1, not "just voters were influenced by Russian propaganda". You're spinning.


In 2019, Hillary Clinton, in a CBS News interview, called Trump “illegitimate”, claimed that Trump “stole” the election, and accused him of voter manipulation, including “hacking”.

https://www.washingtonpost.com/politics/hillary-clinton-trum...


I don't think she means what you think she means by "hacking." I think she means this: https://www.nytimes.com/2016/12/09/us/obama-russia-election-...


In terms of claiming the results of the election is illegitimate, "voters were influenced by Russian propaganda" instead of "votes were impacted by direct fraud" seems like a distinction without a difference to me.

https://news.yahoo.com/hillary-clinton-maintains-2016-electi...

In 2020, Hillary Clinton was still casting aspersions regarding the outcome of the 2016 election, sowing discontent about the electoral college, preparing Democrat voters to ignore the results until Joe Biden was declared the winner.

Portraying this game as if it's only being played by one team does not help restore any trust in the federal election process.


The last-but-one presidential election was affected by various states illegally throwing large numbers of legal voters off their voter rolls, but it’s impossible to say whether it would have made enough difference to alter the outcome, and there’s no convincing evidence votes were directly changed. (It would be a good thing to have a verifiable paper trail for every election; in some parts of the USA it is impossible to effectively investigate any alleged shenanigans.)

The bigger problem in that election was Russian-intelligence-stolen (and possibly tampered with) documents being released to the press in the lead-up to the election in coordination with the Trump campaign (with the FBI keeping its investigation of that secret), and then the FBI director making an unprecedented and (we found out only afterward) unsupportable statement attacking Clinton immediately before the election, after being pressured into it by a handful of rogue FBI agents, friends of Trump's campaign, threatening insubordination.

And perhaps the biggest problem of all, an entirely too credulous mainstream media who didn’t put those developments in context, leaving voters to draw mistaken inferences, and giving oodles of free airtime to Trump’s rallies without making any effort to dispute outright lying in real time.


I live in a liberal city and didn't hear this from anyone. There was initially a decent bit of "not my president" attitude, but just in a philosophical sense, and even that petered out pretty fast.


Hillary Clinton herself claimed that the election was stolen and that Trump was an "illegitimate President".

But she doesn't count as "anyone", I guess?


She'd count as one single individual, yes. Thankfully a far cry from the amount that the comment above was referring to: https://news.ycombinator.com/item?id=32940684


Did you hear that from Fox News? Because I never heard it once.


Then you weren't listening. Note the quote above from Hillary Clinton herself.


Stolen is a vague word. If there's evidence she believes there was sufficient fraud to have changed the result, I would be interested. If she was referring to the stolen Podesta emails and Comey's statement right before the election, then those things happened. You may think those things didn't matter, but it's no surprise she does. And then there's the whole storming the Capitol thing she didn't do.


I think Clinton was referring to this: https://www.nytimes.com/2016/12/09/us/obama-russia-election-...

In other words: not saying that there was actual fraud sufficient to change the election, not saying the election was "stolen" in the sense people seem to be saying here.


Nah, that's not the same.

Stolen by foreign influence is very different than what the 2020 nuts have claimed. How many court cases did they lose? How many times were they asked to produce evidence and came up with ... nothing?


As Abe Lincoln always said: Don't believe everything you read on the internet.

But we do.

There's research. Even if you read something that you know is wrong you still believe it. Especially when distracted or not taking the time to analyze. As we rarely do.

https://techcrunch.com/2013/01/24/study-finds-that-we-still-...

https://www.businessinsider.com/why-you-believe-everything-y...

https://pubmed.ncbi.nlm.nih.gov/8366418/


> Even if you read something that you know is wrong you still believe it.

That seems like bullshit to me. I read your words (you even posted links!) so how come I don't just instantly believe you? If it were true, wouldn't it make all fiction inherently dangerous?

Let's see how it holds up in real life... here's a lie: "My uncle works at Nintendo and he told me that Mario (Jumpman at the time) was originally intended to only have one testicle, but the NES (Famicom) didn't have powerful enough graphics to show that, so they scrapped that part of his official character design and have left the number of testicles unspecified ever since."

Somewhere, secretly deep inside you, do you believe that now?

Nah. I think we don't have to worry about people believing everything just because they read it. Reading things can put ideas into your head (have you ever even considered Mario's testicles before today?) but at this point we're straining the hell out of "belief" and going into philosophical arguments. In real life though, we are capable as a species of separating fact from fiction some of the time.


It obviously doesn't work if you only say it once. It does work if you repeat it often enough. If you surround someone in a self-reinforcing bubble of misinformation, and all of their friends not only believe it, but dismiss and ridicule anyone who doubts it. If people make convincing arguments supporting it.

It works because people don't judge truth based on sober, rational, purely logical analysis but emotion and bias and most importantly comfort. If you're in an environment in which everyone believes X, you will inevitably begin to conform if the social pressures to do so are strong enough, and counter-signals weak enough. This is how radicalization works, through the gradual osmosis of a worldview, and the acceptance of smaller lies that lead to accepting bigger lies. It's how Nazi propaganda worked, and it's how modern advertising and politics work. It's why witness testimony is unreliable and how police can convince suspects that they committed a crime they went in knowing they were innocent of.

The effect isn't universal - nothing about human psychology is. But it is real.


> It obviously doesn't work if you only say it once. It does work if you repeat it often enough.

Repetition isn't enough, but I'll accept that with enough effort from enough people you might eventually be gaslit or brainwashed into believing just about anything so long as it didn't violate some fundamental aspect of your identity in which case it's more likely you'll just lie about believing it to make your life easier.


I have suspected similarly. Skepticism and critical thinking are useful devices, but they can't always tell a truth from a lie. And even if they could -- humans aren't totally rational beings. Sometimes we believe lies because we want them to be true or because everyone around us does. Hell, sometimes people believe things just to win an argument.


Back in '02-'04 I was a former games/graphics programmer working as a digital artist in feature film VFX. One area I specialized in was stunt double actor replacements. Working on Disney's "Ice Princess" I fixed a stunt double replacement shot and realized a method of making the entire process generic, at feature film quality.

By '06 I had an MBA, with a master's thesis on the creation of a new advertising format where the viewer, their family, and their friends are inserted into brand advertising online. By '08 I had global patents and an operating demonstration VFX pipeline specific to actor replacements at scale. However, it was the financial crisis of '08, and nobody in the general public had ever conceived of automated actor replacements. This was 5-7 years before the term "deepfake" became known. VCs simply disbelieved the technology was possible, even when it was demonstrated before their eyes.

Going the angel investor route, 3 different times I formed an investor pool, only to have them at some point realize what the technology could do with pornography, and then insist the company pursue porn. However, we had Academy Award winning people in the company; why would they do porn? We refused, and that was the end of those investors. With an agency for full-motion-video actor replacement advertising not getting financing, the award-winning VFX people left and the company pivoted to the games industry, making realistic 3D avatars of game players. That effort was fully built out by '10, but the global patents were expensive to maintain, and the games industry producers and studios I met simply wanted the service for free. Struggled for a few years. We closed, sold the patents, and I went into facial recognition.

https://patents.justia.com/inventor/blake-senftner
https://www.youtube.com/watch?v=lELORWgaudU

I was bitter about this all for a long time.


> Going the angel investor route, 3 different times I formed an investor pool, only to have them at some point realize what the technology could do with pornography, and then insist the company pursue porn.

I'm not sure there's a huge market for people wanting to insert their friends and family into porn, but if there was, why not just try it? Seems like it'd have demonstrated the tech worked commercially, which could have attracted investment in non-porn uses, and it could have ended up just one more technology in a long list of tech made successful as a result of porn.


Such a service would be a lawsuit engine, the social pushback from the female population towards the creators of such a service would be significant, and then the public impression of the technology would be "tainted with porn" if it were ever attempted with brand advertising.

Meanwhile, simply the process of inserting consumers into the trailer of any Marvel superhero film and charging $1 for your own copy would make tens of millions. Repeat with any highly desirable fantasy franchise. That is the most obvious application. Actor replacement with ordinary people has a huge number of applications that are positive for society. Porn is not one of them.


I don't think it will change much.

I think for claims that you consider important to determine an objective truth value for (like who the President of the U.S. is), your mechanism for determining that is based on trusting sources you deem reliable and looking for broad agreement among many sources you deem to be independent. You're probably not just looking at a single sourceless video of Ronald Reagan behaving as if he's the president and believing that claim because the video couldn't possibly have been faked.

And for other claims that you don't think are important to determine an objective truth value for, I don't think you need very high-fidelity evidence anyway. For example, people have no trouble believing claims that corroborate their closely-held ideologies even with very low-fidelity fraudulent evidence, or even claims made with no attempt whatsoever to provide even fraudulent evidence!


I agree with your framework, but what happens when the sources I deem reliable can't trust their own eyes?

A lot of people point to Photoshop not breaking truth, but in my experience, we simply rely on video instead. When I served on a grand jury, essentially all of the non-testimony evidence I saw was video, and it was incredibly compelling. Neutering the reliability of that evidence will hurt.


We (people) already accept the lies we perceive. I think we choose to accept these fantasies because they're often outside our direct influence, and when something is distant from us we have the luxury of turning it into entertainment. I think of beauty in media: every video we see today is processed to make people look pretty. Most play along with the fantasy: admire old celebrities for not aging, compliment friends for clear skin. But when a friend says, "I feel so ugly" we move closer to reality and acknowledge the makeup, beauty filters, etc. The same effect happens in politics, news, business, technology: people indulge in fantasy at their convenience.

I don't think people will be more mindful of what they watch and believe, I think the opposite will happen: an attraction to fake content. People will embrace the fantasy and share deepfakes at a scale so large governments will be running campaigns to alert the public that such-and-such video is fake, possibly attempting to regulate how content shared online must be labeled.

That said I still believe when these lies are closer to us, enough for us to care either as professionals or friends and family, that we will be more discerning about reality.


There are ongoing efforts to enable digital signatures of online media. The idea is that you (or your browser) can validate that an image or video is unmodified from the source that produced it.

https://contentauthenticity.org/


There are so many technical hurdles to something like this, I don't see it as a solution anytime soon or ever.


It's not as far away as one might think.

There was a demo earlier this year (Jan) showcasing the proposed 1.0 spec working in Microsoft Edge: https://c2pa.org/jan-2022_event/


The browser side is not where all the hurdles occur. It's on the capture side and the key/certificate management/revocation side.


I’d love that to be true, but I think we can use text on social media as a guide here. It’s already as easy to type a lie as typing the truth, and I’m pretty sure lots of made up comments/posts on reddit get taken as truth by tons of people, for example.


I think we will get to a point where trust will only come from face-to-face physical meetings. We won't be able to believe in Zoom calls, phone calls, nothing except face-to-face.

Just like it was for millions of years before now.


It's already become an arms race. KYC identity services are already adding liveness and deep fake detection features.


In the future, TVs and monitors and smartphones will have built-in "truth meters".

Face-to-face is only applicable with your small social network.


> Face-to-face is only applicable with your small social network.

That is my point precisely. And I will point out here that small social networks are our historical environment. I believe not only can we return to them and flourish, but also that it would be a great boon to human flourishing, the only outcome that really matters.


I agree but I'm afraid that we're moving in the opposite direction. VR and deep fakes will make for interesting times.

I've worked at home for nearly 20 years so I've had to learn to create a strong in-person social network. The pandemic and some medical issues have interfered, but I still much more enjoy having fun with my friends than doing things online.


> wouldn't this lead to people being more mindful of what they watch and interact with?

No. We have already run the case study where people on Reddit, Twitter, and other social media will seethe at mere screenshots of headlines and captions under a picture with zero need for verification.

Here on HN we will pile into the comments to react to the title without even clicking the link to read it ourselves.

Deepfakes feel like a drop in the bucket. What does it matter that you can deepfake a president when people will simply believe a claim about the president that spreads around social media? I don't see it.


No, people will continue to accept as true the videos that match their expectations and disbelieve those that don't.


If you have access to BBC iPlayer, "The Capture" is a really good fictional programme / drama exploring the possible implications for justice and politics.


In the US, it’s on Peacock. Enjoyed it very much. I think we had watched it on PBS.

It’s a surveillance thriller.


The ability to use this to plausibly deny any real evidence is more chilling than the fake evidence that could be created.


Why yes.

Let me quote myself from a discussion I was having this morning with a friend who is a tenured professor of philosophy working on AI (as an ethics specialist his work is in oversight),

we were discussing the work shared on HN this week showing a proof-of-concept of Stable Diffusion as better at image "compression" than existing web standards.

I was very provoked by commentary here about the high "quality" images produced; it was clear that they could in theory contain arbitrary levels of detail—but detail that was confabulated, not encoded in any sense except diffusely in the model training set.

"I'm definitely inclined to push hard on the confabulation vs compression distinction, and by extension the ramifications.

I see there a very meaningful qualitative distinction [between state of the art "compression" techniques, and confabulation by ML] and, an instrumental consequence which has a long shadow.

The thing I am focused on being whether the fact that a media object is lossy can be determined, even under forensic scrutiny.

There was a story I saw this week about the arms race in detection of 'deep fake' reproduction of voice... which now requires some pretty sophisticated models itself. Naturally I think this is an arms race in which the cost of detection is going to rapidly become infeasible except to the NSA. And maybe ultimately, infeasible full stop.

So yeah, I think we're at a phase change already, which absolutely has been approaching, back to Soviet photo retouching and before, forgery and spycraft since forever... so many examples e.g. the story that went around a couple years ago about historians being up in arms about the fad for "restoring" and upscaling antique film and photographs, the issue of concern being that so much of that kind of restoration is confabulation and the presumptive dangers of mistaking compelling restoration for truth in some critical detail. Which at the time mostly seemed a concern for people who use the word hermeneutics unironically...

...but we now reach a critical inflection point where society as a whole integrates the notion that no media object, no matter how "convincing", can be trusted,

and the consequent really hard problems about how we find consensus, and how we defend ourselves against bad actors who actively seek their Orbis Tertius Christofascist kingdom of rewritten history and alternative facts.

The derisive "fake news" married to indetectably confabulated media is a really potent admixture!"


Once you accept lossy compression, it becomes a question of what level and type of "lossy" you're willing to accept, and how clever the "compression" algorithm can be.

If I want to compress the movie Thunderball -- a sufficiently clever "compression" algorithm could start with the synopsis at https://en.wikipedia.org/wiki/Thunderball_(film) add in some images of Sean Connery, and generate the film. That's...maybe a 100K to 1 compression ratio?

If the algorithm itself understands "Sean Connery" then you could (theoretically) literally feed in the text description and achieve a reasonable result. I've seen Thunderball, but it was years ago and I don't remember the plot (boats?). I'd know the result was different, but I likely wouldn't be able to point to anything specific.
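
For what it's worth, the back-of-envelope arithmetic behind that guess (all figures assumed):

    # Rough numbers, all assumed.
    seconds = 2 * 3600                      # ~2 hour film
    film_bytes = seconds * 5_000_000 / 8    # at ~5 Mbps: ~4.5 GB
    prompt_bytes = 10_000                   # synopsis + a few reference notes
    print(f"{film_bytes / prompt_bytes:,.0f} to 1")  # ~450,000 to 1

The catch is the multi-gigabyte model weights, but the scheme amortizes those across every film you "compress" this way.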


I really think there is absolutely nothing to worry about.

put it this way:

Written text has always been extremely mutable and therefore falsifiable. Video is simply going to become more like text. People will trust it based on the context and their own judgment rather than the content. I suspect most people already do this anyway.


I actually think that live, full body AI-generated realistic avatars (sometimes imitating celebrities to one degree or another) will become an everyday part of life for many people within the next 5-10 years.

I assume that full-on impersonation will still be illegal, but certain looks that are sometimes quite similar to a real celebrity will trend now and then.

The context for this is the continual improvement in the capabilities and comfort of VR/AR devices. The biggest one I think is going to be lightweight goggles and eventually glasses. But also the ability to stream realistic 3d scenes and people using AI compression (including erasing the goggles or glasses if desired) could make the concept of going to a physical place for an event or even looking exactly like yourself feel somewhat quaint.


Nobody seems to realize: the value is in inserting ordinary people into media with celebrities! Perfect for brand-name advertising: you with a celebrity telling you how smart you are for using the brand they pitch.

I formerly built an ad agency with a feature film VFX pipeline for actor replacement advertising - but I built it back in '08, years before "deepfakes", and nobody believed what I had working was possible.


I can't wait to just generate pornography on the fly while wearing a body monitor so that it can fine tune female body proportions to my exact specifications.


Does that sound healthy? Personally or socially? I'd worry about how that would affect my view of the real women around me and, in turn, my behaviors towards others.


Definitely not. It actually scares me what future we are heading towards. Supernormal stimulus. Better than a human partner could ever be. Super addicting in a primordial way.


I'm guessing that most people will have very little trouble separating reality from fantasy.


Yeah, a little more skepticism would be nice, but I still personally see myself getting fleeced every now and then.


It will also lead to "that's a deepfake!" as an excuse given after getting caught on camera.


Bingo.

It's less about using fakes to push your agenda, and more about being able to (plausibly or implausibly it doesn't matter) claim that whatever video is a deepfake.

The truth is meaningless, and as tools like deepfakes become more and more sophisticated, it's harder and harder to establish baseline realities.

And someone is benefiting from that shift away from reality, I just don't know who.


which will lead to people trusting forensic experts and corroborating data/witnesses. If you were a Karen caught in an embarrassing public meltdown you could absolutely say that the video was deepfaked and you were really just home alone sleeping at the time, but when 7 different people's cell phone videos, multiple security cameras, two dashcams, 14 ring cams, GPS data captured from your mobile device, and one police surveillance drone all agree it was you that's not going to work out so well.

People made the same arguments about photoshop, but it's really not a problem. Almost never is a single video the only evidence of anything, and in the cases where it is and that video can't be verified, it's probably best not to ruin someone's life over it.


Let's just hope that fake detection technology stays ahead of any innovations in this field


>Anyone in the field spent time thinking on this or has had similar notions?

Skepticism in general will only be applied to people we don't like and ignored for people we do.

The continued lapping up of blatant Ukrainian propaganda in mainstream media, for example, doesn't even need photoshop to be believed, just the vague 'sources said'.


> wouldn't this lead to people being more mindful of what they watch and interact with?

Have you been paying any attention to what's going on the last several years?



How about first making deepfake faces actually believable?

Seems like every AI project does something halfheartedly, ponders what the world will be like once it’s perfected, and then starts the next project long before the first project is actually useful for anything but meme videos.


Even AIs which have existed for years and been "perfected" are very noticeably not-human. Though they do look believable from far away, up close they are still in the uncanny valley.

For instance Siri and Google Voice: they are clearly understandable but they sound noticeably different than real people.

Or Stable Diffusion which will supposedly put real artists out of business. It is definitely viable for stock photos, but I can usually tell when an image was made by Stable Diffusion (artifacts, incomplete objects, excessive patterns).

thispersondoesnotexist.com faces can also be spotted, though only if I look closely. If they are a profile pic I would probably gloss over them.

In fact, I bet you can make an ML model which very accurately detects whether something was made by another ML model. Actually that's a good area of research, because then you can make a deepfake model which tries to evade this model and it may get even more realistic outputs...

Ultimately I think we will see a lot more AI before we start seeing truly indistinguishable AI. It's still close enough that the ethical concerns are real, as people who don't really know AI can be fooled. But I predict it will take at least a while before a consensus of trained "AI experts" can't agree on authenticity.
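
Schematically, the detector half of that arms race is just a binary classifier. A toy PyTorch sketch (the architecture and data are placeholders, not a working detector):

    import torch
    import torch.nn as nn

    # Tiny binary classifier: real image (label 1) vs. model-generated (label 0).
    detector = nn.Sequential(
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(detector.parameters(), lr=1e-3)

    # Placeholder batch; in practice, real photos mixed with generator outputs.
    images = torch.randn(8, 3, 64, 64)
    labels = torch.randint(0, 2, (8, 1)).float()

    loss = nn.functional.binary_cross_entropy_with_logits(detector(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Training a generator to fool this detector is exactly the GAN dynamic
    # described above: each side's improvement pressures the other.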


I think the problem we have with deepfake believability today is that it just takes one weak link to spoil it. It turns out that we're somehow still pretty bad at believable audio, and not even close with deepfaked 'presence' in the form of a persona of motion. But if you pair a believable impersonation with something even remotely state of the art in the visual, you end up with something pretty compelling:

https://www.tiktok.com/@deeptomcruise

https://www.youtube.com/watch?v=kjI-JaRWG7s

https://www.youtube.com/watch?v=VWrhRBb-1Ig

https://www.youtube.com/watch?v=bPhUhypV27w (Not the greatest visually but funny nonetheless, esp the end)


That's what goes to the media. "Engineers scrape the last little artifacts off deepfake still images" just doesn't make for "good" headlines.

Somewhere, someone is working hard to perfect these. In this particular case probably under NDA... le sigh


The company behind this post recently got into the America's Got Talent finals with a deepfake act. It looked pretty convincing to me. Especially compared to the state of the art of just 2-3 years ago.


>It looked pretty convincing to me. Especially compared to the state of the art of just 2-3 years ago.

This has been the case for decades now. Much more realistic than x isn't a good enough metric. It needs to be indistinguishable from the real thing.

I'm old enough to remember this being called photo realistic: https://static1.thegamerimages.com/wordpress/wp-content/uplo...

And it was, compared to everything that had come before. Now ... not so much.


https://youtu.be/TVezHTlPMw8

This is the act I meant. Judge for yourself, but I believe we're close to bridging the uncanny valley


They're believable enough for video calls: https://www.dw.com/en/vitali-klitschko-fake-tricks-berlin-ma...


As far as I remember, those calls were actually not made with deepfake tech but by reusing video material from a previous call, skillfully edited to be believable enough.


I didn't know me and AI had so much in common.


The Jennifer Connelly and Henry Cavill demo on that page makes me think of the Scramble Suit from A Scanner Darkly

https://www.youtube.com/watch?v=2aS4xhTaIPc


Now everyone can build their own Star Wars sequel movies! I was wondering about that after the disaster that was TROS.

I didn't think it would be possible to do in this decade, but we seem to be making progress fast now. Very impressive to see. (and scary)


None of the videos on this page really look convincing. In terms of generating static photos, the existing "photoshops" people have been making for 25 years are far better. I don't see the need to clutch pearls and call for new laws to put people in prison quite yet.

But even the failures at temporal coherence have their own aesthetic appeal. Like all of this stuff has been, it's very "dreamy": the way the clothing subtly shifts forms.

Beyond the coolness, I'm glad that individual people are getting access to digital manipulation capabilities that have previously only been available to corporations, institutions, and governments.


I imagine that photoshopping videos at this quality or higher is going to take way longer and be a much more specialized skill.


I have been wondering if a human 3D model (which can look quite real, but isn't 100% there yet) can be improved by better texturing, after the render, for complete immersion. So you use a motion-tracked animation of a 3D model (or a static one for a picture) and then apply a way to make the last bit more convincing with better texture and lighting.


I have year-old demos on https://storyteller.io.

Some of the others in this space have great results: https://imgur.io/seBTPG8

We've perfected voice replacement and I'll have more to show soon.


Cool!

The animation: is that a 3D actor with the visage replaced by AI? Could you explain what you did there?


We're using mocap, both computer vision based and full body. We're also exploring text/audio -> animation, which will be good for quick animation workflows.


> But if you want to describe human activities in a text-to-video prompt (instead of using footage of real people as a guideline), and you’re expecting convincing and photoreal results that last more than 2-3 seconds, the system in question is going to need an extraordinary, almost Akashic knowledge about many more things than Stable Diffusion (or any other existing or planned deepfake system) knows anything about.

> These include anatomy, psychology, basic anthropology, probability, gravity, kinematics, inverse kinematics, and physics, to name but a few. Worse, the system will need temporal understanding of such events and concepts...

I wonder if unsupervised learning (as could be achieved by just pointing a video camera at people walking around a mall) will become more useful for these sorts of models; one could imagine training an unsupervised first pass that simply learns what kinds of constraints physics, IK, temporality, and so on provide. Then, given that foundation model, one could layer supervised training of labels to get the "script-to-video" translation.

Basically it seems to me (not a specialist!) that a lot of the "new complexity" involved in going from static to dynamic, and image to video, doesn't necessarily require supervision in the same way that the existing conceptual mappings for text-to-image do.

Combined with the insights from the recent Chinchilla paper[1] from DeepMind (which suggested current models could achieve equal performance if trained with more data and fewer parameters), perhaps we don't actually need multiple OOMs of parameter increases to achieve the leap to video.

Again, this is not my field, so the above is just idle speculation.

[1]: https://arxiv.org/abs/2203.15556
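
To make the two-phase idea above concrete, here's a toy PyTorch sketch; every shape and component in it is an assumption for illustration (again, not my field):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Phase 1 (unsupervised): learn motion/physics regularities from raw,
    # unlabeled footage by predicting each frame's embedding from the past.
    class Backbone(nn.Module):
        def __init__(self, pixels=3 * 64 * 64, dim=128):
            super().__init__()
            self.encode = nn.Linear(pixels, dim)        # stand-in frame encoder
            self.dynamics = nn.GRU(dim, dim, batch_first=True)

        def forward(self, frames):                      # (batch, time, pixels)
            out, _ = self.dynamics(self.encode(frames))
            return out

    backbone = Backbone()
    clips = torch.randn(4, 16, 3 * 64 * 64)             # placeholder raw clips
    pred = backbone(clips[:, :-1])                      # predict next-frame codes
    target = backbone.encode(clips[:, 1:]).detach()
    pretrain_loss = F.mse_loss(pred, target)            # no labels needed

    # Phase 2 (supervised): keep the pretrained dynamics, train a small head
    # that maps text embeddings into the same space, using labeled pairs.
    text_head = nn.Linear(512, 128)                     # 512 = assumed text dim
    captions = torch.randn(4, 512)                      # placeholder text codes
    seed = text_head(captions).unsqueeze(1).repeat(1, 15, 1)
    rollout, _ = backbone.dynamics(seed)
    finetune_loss = F.mse_loss(rollout, target)         # align text -> motion
    print(pretrain_loss.item(), finetune_loss.item())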


It's interesting to consider "full body" deepfakes, but wouldn't the limitations of face deepfakes be even more constraining here? The proportions of limb length vs. torso, hip/shoulder ratio, etc. -- it seems like a more effective approach (and something already in commercial use) would be mocap + models -- and that's just for still images.

For motion, there's yet another layer of fakery required (and this is something security / identity detection systems tackle nowadays) -- stuff like gait, typical motions or gestures or even poses. To deepfake a Tom Cruise clone, you need to not just look like the actor, but project the same manic energy, and signature movements.


That the two splashy examples are hot people in their underwear is pretty telling about what one major use of this will be. Makes me feel weird. I find takes on deepfakes fraying shared epistemology alarmist: people will continue to believe whatever they want to believe, and falsifying evidence is still a crime. But the ability to conjure moving images of whatever human body you want without that person's permission feels bad. DALL-E adding protections against sexual or violent imagery is a short-term solution, at best, IMO. Maybe I'm being alarmist, too. Perhaps it won't be as easy as toggling a switch next to your friend's photo to take their clothes off.


> Perhaps it won't be as easy as toggling a switch next to your friend's photo to take their clothes off.

Unless an existing reference image exists, whatever the switch does will be a guess. Many motivated folks already do this with photoshop; it's all over 4chan and similar message boards (request threads) and has been that way for at least a decade.

This is already the reality for celebrities with photoshop - their likeness is returned unclothed in image search.

That’s not their body


That's not really comparable though, as it's basically composite work. The AI has the ability to infer and then "imagine" with photorealistic results.

There would be small details kept intact between the source image and the output that would make it feel much more personal than even the best manual fakes of today.


I'm not sure I'm convinced.

There is a lot of variation in details between human bodies that are covered by clothing.

You can infer some things, like skin tone and hair color, from other parts of the exposed body with pretty decent accuracy. You can infer general body shape from how the clothes fit. But for things like size, shape, color, hair, birth marks, moles, surgical modifications, etc. of various concealed body parts? All those vary wildly from person to person. Unless you have a reference image that you can use to answer those questions - I can't imagine that you will be able to infer those. If you can't infer those, you aren't getting the real body of the person you are trying to undress. You're getting a dream of what that person might look like if they were to remove their clothes - a dream that is not accurate.

Not to discredit what you are saying: those dream images are definitely going to cause an entire generation of discomfort. But the cat is out of the bag and has been for some time. Artists were already capable of creating images like this without consent - but it required more talent than most humans possess to get that onto paper. Photoshop made it possible too. AI is making it even easier.

Society is weird about nudity. To be fair, I am too. We have all of these constructs built around the human body and concealing it that many of us have bought into.

At its core, I think the fear of this tech and nudity is that it will be used to "steal dignity" from folks. The question is: can you steal dignity from someone with pencil and paper? Is a photorealistic sketch of your friend unclothed sufficient for them to have lost their dignity? What about photoshop? How about passing your photorealistic sketch through an AI to make it even more photorealistic? At what point have you robbed someone of dignity? Robbing someone of dignity is a social construct; in some ways this form of dignity stealing is something we _allow_ people to do to one another by buying into that construct. I do feel like the narrative we should be pushing is "that isn't my body." If we invest in breaking the construct, my hope is that we can remove the power this holds over people.


I think this is kind of skirting around the issue. Saying that we can just focus on shifting societal norms around nudity taboos feels unrealistic. How would you envisage that happening in a timescale comparable with the development of deepfake technology?

Beyond that, there's also a host of other ways these materials can be used for targeted harassment. Sending a woman images of "herself" in extreme sexual acts can be traumatising even if the victim knows it's a fake. There's also the rise of "what would you do with her" and "irl girls" pornography on places like Reddit, where unconsenting women are targeted by stalkers who sexualize and degrade them publicly for kicks. This just gives them further fuel for their obsessive fantasies.

The shift to being able to "see" arbitrary women as they'd look in certain positions or level of undress will also change how young men perceive real women in harmful ways in a society where women already have to deal with constant objectification.

In terms of inference: firstly, it's about things like lighting, skin tone, background details etc. that set the scene for us on an unconscious level and inevitably leave the current generation of fakes somewhere in the uncanny valley. Secondly, the fact that we know they're fake doesn't impact the initial associations our brain will create upon seeing them. If lies were not harmful, defamation cases wouldn't be a thing.


I'm not sure I'd call it skirting around the issue. More operating from inside our sphere of influence.

> The shift to being able to "see" arbitrary women as they'd look in certain positions or level of undress will also change how young men perceive real women in harmful ways in a society where women already have to deal with constant objectification.

There is a storyline in a children's movie about a young girl making sexually suggestive sketches of a boy in her notebook (Turning Red). The mom discovers it and confronts the boy thinking he's taken advantage of her daughter. That's going to evolve into using AI to do the same. It's not just a trope, those problems exist already; this is just going to make these cases more common. And you're nailing it with how it's going to change perceptions. These dream images are going to be created to match the fantasies of the person generating the image. Those images aren't the actual bodies of the person being undressed - it's just a fantasy created by the artist optimized to their own preferences.

> Firstly it's about things like lighting, skin tone, background details etc that set the scene for us on the unconscious level and which inevitably leaves the current generation of fakes somewhere in the uncanny valley. Secondly the fact that we know they're fake didn't impact the initial associations our brain will create upon seeing them.

Expert fakes exist for celebrities that pass the uncanny valley without the use of AI. There are sites dedicated to cataloging celebrity images and documenting whether they are fake or not. Same with request threads and WYWD threads - the stuff on message boards can be pretty convincing, enough to trigger those initial associations. I'm not sure why you think otherwise. The dark corners of the internet are full of this content.

This cat is out of the bag and has been for some time. The frequency is only going to increase as the bar for generating these images lowers. Photoshoppers running request threads are the bottleneck right now; soon those threads are going to be replaced with generators that spit out 100s of candidate images.

We don't really have control over how others use our likeness beyond the tools the law extends to us but, even then, it only addresses the problem after it happens and doesn't stop someone from doing it in the first place. I don't see us stopping it. If we can't stop it, we either let it happen to us or we figure out how to remove its power over us at an individual level (and help others do the same).

I'd like to turn this around. You're bringing up a lot of problems without solutions. Other than shifting our view of these images on a person-to-person basis, how do you see us stopping it?


We all have the ability to infer and then "imagine" the results.


But we don't all have the ability to render our imaginations as photorealistic jpegs


What harm would it cause if we did? If I could imagine you naked and produce a JPG of my fantasy it would still only be fantasy. It doesn't matter if I'm making JPGs, cutting your head out of photos and gluing them to catalogue models, or if I've got a supercomputer making deepfakes. It's still just fantasy... speculative fiction.


It becomes reified, and it can enter the public consciousness through dissemination.

I can fantasise about an intergalactic space war, and I can even write a screenplay and produce Star Wars. Whether I make the movie or not, it's still fantasy, still something I just imagined. But making a representation of it and distributing it vastly alters the power it has to affect public consciousness.

Fantasy doesn't equate to harmlessness.


> Perhaps it won't be as easy as toggling a switch next to your friend's photo to take their clothes off.

That's totally a browser extension next year... Right click, remove clothes...

When you think about it, ethically it's in the same ballpark as right click, copy - something you'd probably also be doing without asking the subject of the image.


[flagged]


I feel like that was implied by my comment, but yes, believe it or not, I do find that also alarming.


I think you should have been more clear: the most alarming use case of this tech would be some sick pedo taking pictures of your child then using that source imagery to generate fake porn and pleasuring himself all over it.

This should be very illegal.


(Warning: I'm having a very hard time determining if you are trolling or are for real..)

That's not the most alarming use case of this tech. By far. (IMHO)

Also, I find this reasoning very off-putting. Putting child porn into a discussion kills it. All participants are (mostly) willing, and basically required, to agree and say "let's not talk about this further".

The fundamental technology that underpins these achievements is more than capable of destroying civilization if things start to go south - which I believe they will, sooner or later. I find that to be more worthy of discussion than moral jousting about things people do in their private lives that I will - hopefully - never know about.

Let's all use our imagination and see where these kinds of models, both diffusion and transformers, can take us. Sure they can generate plausible visual information, but that's not all they can do. Some days ago someone posted about ACT-1, a transformer for actions. People can and will hook up these things in all sorts of complicated pipelines and boy, generating some insensitive imagery is way, way down on the list of things to worry about.


So you've thoroughly defended against the point about talking about porn, but you give no examples of what you say we should "truly worry about". Can you at least explain further? It sounds too hand-wavy.


Good point. I am being handwavey, sorry about that.

First, I see "AGI" as a real problem we'll have to face at some point. I believe we will be too late by the time we recognize it as a problem, so let's ignore that "threat" for now.

The more pressing problem IMO is that, to use technical terms, a shitload of people will have to face the reality that a software system is outperforming them on just about anything they are capable of doing professionally. I believe this will happen sooner rather than later, and I am totally not seeing society being ready for that. Already I am seeing these models outperforming me - and my colleagues - on quite a few important axes, which worries me, as does the fact that they almost universally dismiss it because it's not "perfect". I know it's hot these days to either under- or overestimate AI, but I do feel we have crossed a certain line. I don't see this genie going back into its bottle.

Perhaps I'm still handwavey. I guess I am a handwavey person and I'm sorry about that, but when I see GPT3 finishing texts with such grace I can't help but see a transformer also being capable of finishing "motor movements" or something else entirely like "chemical compounds", "electrical schematics" or even "legal judgements". I just found out about computational law BTW, might interest someone. Even just the "common sense" aspect of GPT3 is (IMO) amazing. Stuff like: we make eye contact during conversation, but we don't when driving. Why not? But also stuff like detecting in which room of the house we are based on which objects we see. That sort of stuff is amazing and it's a very general model too. Not trained on anything specific.

I guess the core of what I'm saying is that "predicting the next token" and getting it right often enough is frighteningly close to what makes a large percentage of the human populace productive in a capitalist sense. I know I'm not connecting a lot of dots here, but I clearly lack the space, time and, perhaps more importantly, the intelligence to actually do that. I fear I might be a handwavey individual - in fact easily replaced by GPT#. Do you now see why I am so worried? :)


Thanks for explaining, I appreciate it. And it makes sense what you've shared.


I agree it's extremely distasteful, but why should it be illegal? Who is being harmed?


What if those photos were then shared? Someone might accuse the parents.


How about you express your views as an addition to the conversation instead of as a criticism for other people not expressing the particular variety of concern that you have...?


Reminds me of the Michael Crichton movie named "Looker" https://www.imdb.com/title/tt0082677/


Great article, but showcasing this tech by demonstrating that you can have half naked pictures and videos of real people without their (top half) consent is not going to go down well.


The road to realistic full-body deepfakes will be through the adult entertainment industry because of course it will. Some academics may begin the discussion but at the end of the day this is one part of AI image generation that has a clear and extremely large profit motive and won't struggle to find funding in any way.

I'm pretty sure Slashdot is willing to put up the money for thousands of renders of "Natalie Portman pours Hot Grits over <thing>" alone.


No, it is economically infeasible because any such professional service would be a lawsuit engine.


You're assuming that it'll be used on celebrities, and not just used to shoot a movie once and publish it with stars matching various different preferences?


An online service that allows people to insert others into media without consent is a lawsuit engine, pornography absolutely.


What's going on with the scrolling behavior of this page? I'm getting a very annoying "scrolling with inertia" behavior in Chrome for desktop.


TV shows won't need to do casting for extras any more, they'll just have the main cast and then one person who plays all the other characters.


I don't think you need videos with extreme levels of annotations as this article suggests.

If a model is already trained on lots of images and captions, it would probably be possible to just feed it tons of whatever video and let it figure out the rest itself.
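
As a rough sketch of that idea (Python with OpenCV; the file name and sampling rate here are made up purely for illustration): decompose the video into frames and reuse the existing image-training pipeline on them.

    import cv2

    def sample_frames(video_path, every_n=30):
        # Pull every Nth frame out of a video so the frames can be fed
        # into an existing image-model pipeline as extra training data.
        cap = cv2.VideoCapture(video_path)
        frames, i = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                frames.append(frame)
            i += 1
        cap.release()
        return frames

    frames = sample_frames("some_clip.mp4")  # hypothetical file
    print(len(frames), "frames ready for the image pipeline")

Everything past frame extraction - captioning, pseudo-labeling, temporal consistency - is the part being hand-waved here, and is presumably where the hard work lives.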


In a soon-approaching world where all movies have deepfake actors, popular music is generated, etc., how do you approach the economics of creativity and content generation?

Should Tom Cruise's heirs receive a perpetual rent 200 years from now when Mission Impossible 57 starring their ancestor is airing?

What regulation should be put in place / would be effective in a world where any teen with the latest trending social media app on their phone can realistically impersonate a celebrity in real-time for likes?


Technology is an enabler. Your hypothetical scenario of Tom Cruise's legacy lasting to Mission Impossible 57 is not probable imo. People get bored.

Instead we'll probably see a bunch of crap, but on top of that crap it will allow people with true talent who never would have had a chance before (no connections, money, etc.) to be discovered. It lowers the bar to content creation significantly.


I think that we will have some immortal actors - but not too many. I don't think Tom Cruise will be one of them.

> allow people with true talent ... to be discovered

How so?


I love those text-to-video samples with people screaming into phones.


This might be an outlier, but I think the benefit of completely outlawing deepfakes is worth the "but freedom!" harm.

I think deepfakes have the power to do much more real, immediate damage to society vs the "threat" of AGI


I don't see them challenging the veracity of media any more than photoshop and video editing already do, especially since ML can be used to automatically detect tampering. So, what's the damage to society you fear?
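
To make the detection point concrete, here's a toy sketch of the usual approach - a per-frame binary classifier - in Python/PyTorch. The tiny architecture and the random stand-in frames are assumptions for illustration only; real detectors are far larger and are trained on purpose-built datasets like FaceForensics++.

    import torch
    import torch.nn as nn

    class FrameDetector(nn.Module):
        # Toy real-vs-fake classifier over individual video frames.
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(32, 1)  # logit: >0 leans "tampered"

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    detector = FrameDetector()
    frames = torch.randn(8, 3, 224, 224)      # stand-in for real frames
    scores = torch.sigmoid(detector(frames))  # per-frame fake probability
    print(scores.squeeze())

It's an arms race, of course: generators get trained to beat whatever detectors exist at the time.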


The inability to take video evidence as being at all related to truth. Presidential candidates saying "I never said that, that video is a deepfake". If deepfakes are normalized enough, then people will always have sufficient reason to disbelieve actually-true video if they don't want to believe it. If the production and dissemination of deepfakes were criminal, it (a) wouldn't be normalized, and (b) people would think twice about making deepfake videos if punishments were criminal rather than civil slaps-on-the-wrist like libel is. And criminal means actual police resources dedicated to finding creators, whereas a libel suit has only the power of subpoena.


> The inability to take video evidence as being at all related to truth.

That has been true for 38 of the past 40 centuries. Somehow I suspect making it 39 out of 41 won't be that big of a problem, especially compared to getting people to not take video evidence as being at all related to truth.

(Edit: in case it wasn't obvious: you can't take video evidence as being at all related to truth if you don't have any video evidence to so take.)


People have been able to do this via video editing and audio manipulation for a while now, and much better than state-of-the-art deepfakes, and yet this isn't a problem despite the huge incentives you imply would motivate very powerful, wealthy groups with the means to easily pull it off. Video evidence is still a thing, too.

Criminalizing making fake videos of any kind is extreme authoritarian behavior and libel is already punished by law everywhere I know. If you don't like how libel is punished in your jurisdiction that is another matter, but if you want to get some ideas on how to make them harsher just get a list of countries and filter by dictatorships and you're bound to find examples that take libel and its definition very seriously.


Appropriation of name or likeness is already a tort that defendants can be held civilly liable for. Would you also make it a crime?


Civil liability is nothing. We need police resources to find creators, not civil subpoenas of YouTube upload IPs. Otherwise every election will be swamped with unattributable deepfake videos, which is doubtless the direction we are headed.


I think the kids have got this; they will learn how to live with it and adapt. But yes, the older generation, who still depend on what they see on the internet, will suffer for a while.


What we need are digital identity verification strategies for content, such as associating cryptographic signatures with videos.
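
For example, here's a minimal sketch of signing a video so anyone holding the publisher's public key can verify it hasn't been altered (Python with the widely used cryptography package; the file name is hypothetical):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def file_digest(path):
        # Hash the raw bytes of the video file.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.digest()

    def sign_video(path, private_key):
        # The publisher signs the hash at release time.
        return private_key.sign(file_digest(path))

    def verify_video(path, signature, public_key):
        # Anyone can check the file against the published signature.
        try:
            public_key.verify(signature, file_digest(path))
            return True
        except InvalidSignature:
            return False

    key = Ed25519PrivateKey.generate()
    sig = sign_video("speech.mp4", key)  # hypothetical file
    print(verify_video("speech.mp4", sig, key.public_key()))

The crypto is the easy part; the hard parts are key distribution and getting platforms to surface verification status, since a signature only tells you who published a video, not whether its contents are true.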


How would you go about banning it? Restrict the general availability of capable hardware?


The scrolling behavior on this page is horrendous.


The Great Dictator 2023 with Charlie Chaplin would be great!


Why would you want to do that, though?

Honest question. It's going to be a long trip through the Uncanny Valley where everyone will clearly notice the fakery and then ... what? What is the end goal here? Ok, making more Superman movies starring Christopher Reeve, obviously. But then what?

To quote someone who deserves to be in more deepfakes: "Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."


It's quite frightening to imagine what this could do when weaponised against women, used for harassment and the creation of nonconsensual pornography based on people's likeness. I wonder if this is one of the first things we'll start seeing legislation relating to.

It's also concerning to imagine the social impact this could have on young boys as well, in a climate where pornography addiction issues become more visible each year.


I'm more concerned about censorship. China justifies their mass internet censorship with pornography bans, which have high public support. Will deepfakes push the US over the edge, bringing the free Internet to an end?

I'm not concerned at all about pornography addiction, I don't think that's real. On the contrary, pornography promotes autonomy and independence by making people less dependent on others for sexual stimulation. It's a massive social good, and unrestricted pornography is the sign of a modern society.


I don't think it's so simple, and none of this is black and white. Stigma around pornography is bad because it unnecessarily restricts what adults may freely do with their bodies, but not all pornography is produced with full and uncoerced consent. Making an excuse to ban free speech by banning harmless pornography is bad, but unrestrictedly producing fake porn of someone without their consent is also bad.


Banning porn in the US is a huge issue in the national conservative camp, at least if you listen to a little bit of their discourse around long term goals. If that camp comes to power expect restrictions in the US which would probably require enormous scale Internet clampdowns.


Banning pornography is the most extreme view. The question is should there be regulations to ensure consent from those whose likenesses are involved?


Commercially, I'd agree someone should have compensation for use of their likenesses, but what people choose to draw, imagine, photoshop, or deepfake for non-commercial use is their business and any state that regulates that would be a dystopian nightmare.


Not really, as we already have a bunch of regulations that limit publications in cases of defamation, hate speech, incitement to violence, harassment, child pornography, videos of rape, etc., and society is safer for it - not a dystopian nightmare.


As much as I agree with you about those, GP mentioned something more specific:

>but what people choose to draw, imagine, photoshop, or deepfake for non-commercial use is their business

And although I'm sure that one can harass or incite violence or defame with artwork, whether that means artwork of the imagination, as artwork, should fall into that category is dubious. An instructive example here concerns fantasy versus 'hate speech' and where to draw that line while maintaining maximum freedom and a polite society.

I don't think society is particularly 'safer' for defamation and copyright infringement being prohibited; and when it comes to fiction I'm even more dubious, given that fiction is known to produce statements we can more easily take as fantasy.


I don't get what commercialism changes about it.

If i make deepfake images of your wife being forced into sex, being beaten, crying and screaming, and then I distribute them online free of charge is that really harmless just because I don't profit from it?


Not at all; but it does test where 'harm' lies, or should lie, in the legal system, leaving aside the human sensation. I would not like that to be legal; however, that's irrelevant to the question of the art in itself. I'm not making a commercial argument here, I'm saying that these things aren't adequately covered by the 'bad' laws of defamation and copyright infringement.

In your example, I don't think I'd have a problem with the creation of the images if they're not distributed. But on a macro scale, if I put aside my ethical concerns with the proliferation of patriarchy, it's worth considering whether "anyone can do anything with images" would have such an effect if my wife were only one target among millions.


I'm surprised that you so quickly equate the free internet with the internet we have today. We already have widespread suppression of certain types of pornography, most notably that involving children.

The internet we have today is not free. The society we have is not a wholly free one but we rightfully make trade-offs to protect people.

We know that today there is already a huge issue of nonconsensual pornography, revenge porn, etc. Why is the line of what is "free" drawn where it is - why do we tolerate open abuse against women but not against children? I wonder if our outlook on women's safety as a society is really as forward-thinking as we would hope when we look around the world today.

> "unrestricted pornography" is the sign of a modern society.

Is it though? In another world you could say the same thing about drugs. Some people in America today might say it about gun freedoms.

I don't know. I think there are lines to be drawn and I think we can be open to discussing those without falling immediately into hysterics about state overreach.



