I'll simply observe that it is easy to tell a fake face when presented with an either/or choice and when specifically asked to. Most of the time we aren't looking that closely, so while I see some commenters being very happy about their accomplishments, I don't personally see a reason to rejoice.
Regardless, the AP news article[1] linked under the "methods" page provides some useful reading on how to detect these faces, for anyone interested.
My personal observation is that these generators fail miserably when generating low-detail parts and hair. In many of these pictures you don't have to look at the face at all; just look at the background, and the one with heavy artifacting will be the fake. In "enterprise"-style pictures one can look at the hair and find heavy artifacting there.
Sure, I completely agree on this one - AI generated faces have been pretty decent for years now. The quality is currently in some weird space: with careful preselection they can fool an unsuspecting reader passing by, and yet at the same time a reader looking for fakes will detect them (given high enough size/quality) with high confidence.
For me the eyes looked really off in all the fakes: the pupils seemed like the wrong size for the level of light, and the shape of the eye didn't seem to fit the bone structure of the face.
This is a very important distinction. With deliberate attention, you can indeed catch many fakes in this kind of scenario (to me it seems the background is often a giveaway, but you do need to focus on it).
But in passing, accompanying a news article, a tweet or an instagram post, are you paying as much attention? Those are the scenarios where the potential for harm is much bigger.
Yeah, I had exactly the same reaction. When I take the time to scan for artifacts, I get close to 100%, but when I try to do it quickly, I get close to 50%.
That 100% will gradually come down as the tech improves. And I'd guess the tech is already good enough that most people won't be able to improve on 50% success at first glance -- I don't think my instinct would noticeably improve with practice.
I think that's true to an extent, but serious errors such as the woman with what looked like a horn growing out of her cheek to match a partially-occluded earring seem to happen often enough that they would call attention to people being fake.
Apart from happiness (smiling), all of the deep fakes showed a blunted affect. Genuine humans tend to have quite expressive faces, and many of the fakes looked like NPCs from an Elder Scrolls game.
This leads me to believe that in a situation where deep fakes might matter, e.g. security video presented as evidence in court, it would be possible, even for a human expert, to start picking up the deepfake artifacts/signatures.
Exactly my thought - I scored 5/6 choosing very quickly the one which seemed more imperfect at a glance, but I'm no Reddit-expert photoshop-identifier or whatever. They all looked real, I'd have assumed any of them were without the 'one of these is fake' context.
I got 5/5 correct just by looking for weird artefacts in the hair and background. Just looking at the eyes alone was much harder (I sometimes couldn't tell).
I didn't even look at the face. The obvious giveaway that made me spot all of them was, first of all, the unnatural bokeh you see in all AI images, which doesn't look like anything a camera would produce. The second thing is clothing that folds in strange ways.
After 5 minutes, I got tired and started seeing some of the same pictures again. 100% right all the time. For me, the trick is to assess the background, ear shape, synthetic textile (if any), and skin conditions.
For me, the backgrounds were interesting for their _more salient_ features rather than their ambience, e.g. unexpected smears of color, textures that vaguely looked like real things at a glance (like fabric or nature) but wouldn't stand up to scrutiny. They reminded me of the typical "mistakes" that you see when playing around with image generators.
Same but I focused on their expressions. Faces with expression “we are taking a picture of me” were fakes.
But that being said, all the pictures were insanely convincing and I picked fakes only because I knew I had to pick one and not because I knew one was fake.
Two more potential things to check: digital artefacts on teeth and between hair and background.
Now they know what to improve ... Next round will be more difficult ...
This. The generator is really bad at compositing people into the image. So while from the actual face it's sometimes hard to tell, backdrop and foreground items (like a mic or toy) are a giveaway. So is face paint or unusual props (fake mustache or carnival costume).
Especially since a lot of images from the real humans dataset seem to contain these.
So next time you're on a video call with someone and you're unsure if they're human or not, ask them to draw a letter on their face or have them dress like a pirate ;-)
>So next time you're on a video call with someone and you're unsure if they're human or not, ask them to draw a letter on their face or have them dress like a pirate ;-)
Is that a thing now? My cursory search for "deepfake video call" gave me https://www.youtube.com/watch?v=wYSmp-nrJ7M
But other than that, there are just youtubers goofing around with the tech. Do you know of a "good quality" deepfake video call that can fool us the way whichfaceisreal does sometimes?
Yeah. In that case I think the intention was to have them look bad. And since the faked person is a celebrity, enough data was available to produce a fake of suitable quality.
Maybe we will see this in the future for CEO scams. Though in that case maybe a good UI that clearly indicates that the victim is called by an external user "Mr. Big CEO <hackerperson@totally-not-s.us>" might already be helpful.
My game is looking at the eyes. The model seems to have a tendency to make faces with pretty much mirrored eye shapes and sockets, with pupils clear of imperfections or just plain circles, or even both pupils identical to each other. Tilt is also a huge issue: most of the AI images have their output "looking directly into the lens" and nearly perpendicular to the aperture, while real humans are off center in more ways than one in all of these aspects and more, as well as having numerous orientations.
To me, looking at the background is kind of cheating to suss out the fakes; after all, we are trying to figure out if the face is real, not the background.
Typically in AI generated images another giveaway is that when the head covers the entire height of the picture, the background may randomly change between the left and right side in implausible ways.
Reflections (highlights) in the eyes being different, artifacts and so on. But that said, if I weren't looking for fakes I'd probably accept them as real enough.
For all the people boasting about how easily they can detect it: yes, if you look deeply for possible artifacts (especially in teeth/ears) you can, but sometimes it's not that easy, and I'm pretty sure it would fool most of the population, especially if not given a reference real image on the side. Photoshopped images can also be spotted easily by keen eyes, but they still do their job, which is deceiving the majority.
This is a bit like chess puzzles. When you know there’s some winning tactic, you’ll sit and look for it until you find it. But in most real positions in actual games you don’t know and sometimes have to trust your gut as to whether to spend time on the details. If you know one picture is fake you’ll find it. If it’s just a social media avatar, you’ll assume it’s a real person.
That said, even without looking deeply for weird smooshy patterns, inconsistent curves, lack of symmetry or nonsense clothing, the biggest giveaway is that most AIs are pretty bad at realistic lighting. I got most of these at a glance because it’s a very pronounced difference.
I've spent a lot of time playing with AI image gen and I had to think really hard about most of them. I can confidently say I would be fooled by nearly all of them if I wasn't on the lookout.
As an avatar on twitter or wherever, 100% would trick me; even if I clicked the image to take a closer look, I wouldn’t know if it was the compression by the social network or the image being generated…
Also the model is trained on faces, not backgrounds. Pretty soon we’re going to see entire 3D scenes generated and rendered photorealistically through a camera model.
I'm sure this fools a majority of people, contrary to the comments here. Obviously, with detailed analysis, you can probably spot the difference, but in day-to-day activity, and without knowing that one picture is fake, it would fool even more people.
Backgrounds should be generated by a different model and the face pasted in; now that would be a real challenge! Models that fix eyes already exist.
Or just use real background images and composite an AI face on top. The question is which face is real, so using the background is kinda "cheating" imo. Using a real photo for the background would eliminate that way to cheat.
I also got 10/10, but by looking at ears and facial hair. There's a long way to go for a test like this. That said, if I didn't know I was looking for artifacts, all of those pictures would be passable at a glance.
The backgrounds are a dead giveaway for me most of the time. Granted, I’m a professional photographer and spend a lot of time looking at photos taken with various lenses, so have become pretty familiar with depth of field and all that jazz.
That, or the backgrounds have the weird discombobulated shapes and structures that only vaguely resemble real things, which I’ve also noticed in other AI generation tools.
Either way, it still fools me sometimes and it’s pretty remarkable how quickly this has all been happening.
As a photographer what started throwing me off was back focus issues in the synthetic images. I assume if anything the GAN would generate an image that was uniformly sharp, but I kept seeing images where the focus was just past the subject's eyes, more around the ears and hairline. Just like a real autofocus system might lock onto shirt fabric or something.
After doing 30 I was able to differentiate very quickly; it's surprising how easy it is to detect these. You can tell by abnormalities in ears, and the AI probably won't show you hands because it struggles with them a lot. The backgrounds often look correct but don't make architectural sense. I also noticed that if I don't look the person in the eyes, it sometimes is a tell; I'm not sure why though.
It's the background. The faces look half decent but all the AI backgrounds are fucked in some way. After a few misses getting my bearings I started getting nearly 100% success rate, and within a second and a half in most cases.
I found that in direct comparison, the background often was enough to tell the difference - but that was mostly because one of the images had a detailed background with text or architecture, which I know the AI would struggle with.
I think a similar test that is not asking for a direct comparison but just "is this image real?" would be much harder, since there is no better "safe" choice to fall back on.
My detection ratio was 100% successful and I didn't pay attention to anything in particular, it just clicked. I don't know what gave it away. I suspect that is because I looked through so many pics on thispersondoesnotexist.com, my brain's own neural network learned how to detect them (which is still a blackbox to the consciousness).
I got the first 3 wrong, then I started looking at the necks and the background and got all of them right (although not always 100% certain I was going to get it).
A few of them do have some artifacts on the face that give it away, but this is very impressive.
I only got one wrong and it's because I clicked through a little quick. To me it's almost immediately apparent by looking at the skin texture / how the hair looks. The AI is particularly "wavy" and doesn't look like normal skin.
Edit: Doing it a couple more times, you can tell pretty much instantly.
In contrast with everyone else, I struggled a lot with this when just looking at the faces. I made twenty attempts and got ten successes and ten failures. After reading other comments, when attempting again by looking at the backgrounds, I tried ten more times and went nine and one.
But I believe I am somewhat face-blind. I have never understood how people were able to describe faces to the cops to make those mockups of criminal suspects. I also struggle to recognize faces sometimes, including celebrities and new dating partners. At a past job, I remember thinking two of my coworkers were the same coworker until I saw them at the same lunch outing and it suddenly clicked. I recently got confused by two characters in an action movie with less than a dozen characters total, and realized shortly after that they had different ethnicities.
Similar here. Although my inability to distinguish faces is only mild. But for the longest time I thought Donna Noble[1] and Sarah Jane Smith[2] were the same character. They still look the same to me, modulo the wrinkles.
The teeth were a big giveaway for me. The gap between the central incisors should roughly line up with the nose, but the fakes are almost always noticeably offset.
You can take a photo not in the training set and usually find a close match. So in a sense almost any photo matches an AI generated one with "minor adjustments".
I got 18 in a row before I missed. There's something around the corner of the eyes that's weird, but I'll be damned if I could figure out how exactly to articulate it.
On a slightly related note, whenever I see a generated face with other faces in the background, and those faces are warped in strange ways, I get a very unpleasant sensation, like a chill going up my spine. Does anyone else get this?
Example: https://imgur.com/a/eK0jMZx. I can look at it after getting used to it, but at first glance I have to look away.
100% right for 10 minutes on an iPhone (zooming in as needed).
Other giveaways I haven’t seen mentioned in the discussion: vague earrings (fake). Coherent details in glasses reflections (real). If second person in picture has good details, probably real. Second person has bad details, too easy, fake. Gratuitous wisps of disconnected hair, fake. Actual clearly coherent finely detailed design on glasses frames or clothing, real.
This game seems quite easy. When I know one of them is computer generated and one is not, it's easy to pick the real one. A multiple-choice question is much easier than an open-ended one.
I didn't get a single one wrong, and am now playing with the rule that I have to decide within a few seconds; still getting them all right.
Still, they're pretty good. If one of the CG images came up by itself in the course of other business, I wouldn't bat an eyelid.
I always find these "which is real" comparisons interesting because there is always some type of distortion around the borders of the face, like the AI has a good idea of what a face looks like but things get fuzzy when it tries to create the stray hairs a person always has sticking out.
I got five in a row without looking for glitches or background objects, then stopped.
Look really carefully at a small area of skin. See if wrinkles, pores, hairs, and minor skin imperfections are present. See if they make sense in the context of the rest of the face.
I agree with another commenter that "Which face is real?" is somewhat easy to determine. In this scenario, it's A or B. You already know one face is fake, and one is real. It would be substantially more challenging if the question was rather "Can you spot all the AI generated faces?" and it turns out 40% of the time there is no AI generated face at all.
AI vs. Real can become somewhat easy to identify over multiple repetitions - AI vs. Real, Real vs. Real, AI vs. AI. are all scenarios that should be included to increase the difficulty imo.
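The mixed protocol described above can be sketched as a simple trial generator. This is a hypothetical design: the trial type names and the 50/25/25 weights are illustrative choices of mine, not anything the site actually uses.

```python
import random

# Hypothetical trial types for a harder version of the test: the
# subject can no longer rely on "exactly one of the pair is fake".
TRIAL_TYPES = ["real_vs_fake", "real_vs_real", "fake_vs_fake"]
WEIGHTS = [0.5, 0.25, 0.25]  # illustrative mix

def make_trial(rng: random.Random) -> dict:
    """Sample one trial; 'answer' is None when there is no single fake."""
    kind = rng.choices(TRIAL_TYPES, weights=WEIGHTS, k=1)[0]
    fake_side = rng.choice(["left", "right"]) if kind == "real_vs_fake" else None
    return {"kind": kind, "answer": fake_side}

rng = random.Random(42)
trials = [make_trial(rng) for _ in range(1000)]
share_unanswerable = sum(t["answer"] is None for t in trials) / len(trials)
print(round(share_unanswerable, 2))  # close to 0.5 with these weights
```

With a mix like this, roughly half the trials have no single fake to find, so a subject who always picks a side can no longer fall back on the "safer-looking" image.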
Same as the other commenters, I got the first couple wrong, but then quickly realised what I was looking for. You can see artefacts in the skin of many of the faces, and often the ears were the giveaway.
Where is the proof? How can we trust that this website is honest about which faces are real and which are fake? This might be some grad student's psychology experiment, or some artist's comment on our understanding of reality. If you can fake a face, you can fake a website. If I wanted to sell a database of "real faces" I might just generate them myself using AI and sell them to researchers as real, forever polluting such tests. That would certainly clear up any copyright issues.
I’m viewing this on a mobile device so I can’t zoom in too efficiently to do all the subtle detail stuff. What I caught onto was the real photos were subtly messy with imperfections. The AI ones have this idealized look to them that has a slight airbrushed effect in aggregate. If people’s faces had blemishes or imperfect skin it’s more likely real. Somehow those imperfections get chopped in AI, probably because they’re so idiosyncratic they don’t survive the AI transformations which look at features en masse.
> What I caught onto was the real photos were subtly messy with imperfections.
On a bigger screen, I would say in the AI ones, the fake hair is "subtly messy with imperfections" - it's a bit like a weave or rug in places, not correctly modelling strands.
Software is good at making faces now, but concentrating on the periphery (background, ears, earrings) still makes it easy to spot a fake. Also, computers don’t know how hands look, at all.
Just tried it on desktop. Way more noisy artifacts inside the faces with bigger resolution. At the scale shown on mobile they are barely noticeable (for me).
Perfect score, but it’s such an impressive technology. Traditional graphics with triangles and ray tracing are fake at a glance, but here you need attention to detail and a bit of wit.
I got the first 5 wrong because I thought I was meant to pick the computer, not the real person. After that I did ~20, got them all correct.
There are a lot of tells. Glasses make the edge of the eyes look strange. Around the ears, hats, and sometimes the backgrounds, the blurs are wrong or corrupt.
However, if I didn't know one of the two was generated, I wouldn't look for anything and would probably just assume it's real, unless there was really obvious corruption on the face.
Since this uses StyleGAN, it's relatively easy to tell when an image is fake or real, since the networks seems to have trouble with backgrounds and faces that are directly adjacent to the main face.
However, since diffusion models are all the rage now, I think we would perform significantly worse with landscapes or images of fruits and animals, especially if the task is "distinguish between the real and fake art".
This site was put up in 2019, presumably with images from 2015-2019 era algorithms. This was early in the viability of these image generation techniques, so the author's work is kind of prescient.
However the state of the art of image generation has moved - I suspect a 2022 version of this would be substantially harder.
At first I was tricked a handful of times, but I trained myself in what to look for. At first uneven blemishes proved a useful heuristic, but then when I looked deeper I found the edges and backgrounds were even more effective. The fakes somehow feel like they are in this... Oily world of illusions.
Any picture with more than one person is real. If there is a second partial face, or a shoulder or hair, or any other sign of another human, then it's the real one. They need to clean up their data, unless they're testing for people figuring that out.
Is this a serious topic worthy of serious responses from high ranking HN readers?
Depressing. They're both photos. A photo (of 'reality') is, at its very best, already just a representation of the subject. Both are (technically) fake, aren't they?
A photograph of reality can and often does look 'unreal', or odd, or fake. Many, many aspects cause this: lighting, expressions caught mid-point, even the colour. My point is, the invitation is neither a robust nor an intriguing one. Nevertheless, it is a success for other reasons.
Some of these are just straight up insane fever dreams if you evaluate the entire photo instead of just the face. After easily getting 80%+ correct, I had to stop. It wasn't from boredom, but getting creeped out by how grotesque some of these fakes were.
Eerie. It seems like most people here could tell, but with my morning vision and on my phone screen I did terribly. Not sure if it’s because I can’t see as clearly as usual or I have some deficiency in identifying faces.
If it has blurry and screwed up bokeh or random patterns, it's probably the fake one. If there's something incredibly detailed, but blurred, it's probably real.
Eyes, teeth, and ears, that’s my endgame… plenty of anomalies there. Took me about one per second to get 20 right in a row, then I stopped playing. I figured I was training the system… you’re welcome!
I played about 10 of them and got them all right. It’s very impressive, but I just happened to know that ears and teeth are particularly problematic for these generative models (for now).
This was surprisingly easy when looking out for artifacts. StyleGAN2 significantly reduces artifacts; I'd be very interested to see StyleGAN2 on this website as well!
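One way people have tried to spot GAN upsampling artifacts programmatically is to look at an image's frequency spectrum, since transposed-convolution generators tend to leave periodic high-frequency patterns. A minimal sketch with numpy follows; the 0.25 cutoff and the toy test images are illustrative assumptions of mine, and this is a heuristic, not a reliable detector.

```python
import numpy as np

def highfreq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy outside a low-frequency disc.

    Expects a 2D grayscale array. GAN upsampling layers often leave
    periodic high-frequency artifacts, so fakes tend to score higher
    than camera photos on this ratio (heuristic only).
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalised distance of each frequency bin from the spectrum centre (DC)
    r = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    low = spec[r <= cutoff].sum()
    total = spec.sum()
    return float((total - low) / total)

# Smooth gradient (photo-like) vs. the same gradient plus a periodic
# checkerboard of the kind transposed convolutions can produce.
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
checker = smooth + 0.2 * (np.indices((64, 64)).sum(axis=0) % 2)
print(highfreq_energy_ratio(smooth) < highfreq_energy_ratio(checker))  # True
```

The checkerboard concentrates energy near the Nyquist corners of the spectrum, well outside the low-frequency disc, which is why the ratio separates the two toy images.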
I wonder if it is possible that a generated face matches a real face. A person with a real face could be detected by an algorithm as unreal and denied access to facilities.
Just look at outlines, especially the ear area or earrings on women. AI can't make real ears yet; the face is already OK (though AI can't really make skin imperfections), but ears are the giveaway.
Tried 3, got all correct; no point wasting my time anymore. Same issues as always with these photos.
True, I got 9 out of 10 correct just looking at the hairtifacts. However, I might be fooled by this if I wasn't primed to look for which one is fake and which one isn't.
I got one wrong because it was a bad-quality photograph, which created artifacts of its own.
[1] https://apnews.com/article/ap-top-news-artificial-intelligen...