Seems like the best way to tell is to not actually look at the face so much as the details around it. There are often obvious glitches in the background.
I got 10 out of 10 correct without looking at the features around the face. It's easy to spot the fake one by wrinkles going the wrong way or obvious artifacts like incomplete earrings. If they fixed those issues, though, it would be close to impossible for me to tell.
My method was more intuitive than that. There's more detail on the real faces; the fake ones look slightly brushed over. If it weren't an A/B test, though, I'd fall for the fakes every time.
And some of the real photos have additional accessories like a hat, sunglasses, or earrings, but none of the fakes seem to have them.
I clicked through 30 picture pairs and got 28 of them right without spending more than a second on each.
I looked at 3 and saw hats, spectacles, and earrings.
One of the fakes had specs; one had earrings, a hat, and part of another person next to them (that one fooled me, as the real photo was crazy: a mortarboard with a coloured and unfeasibly large tassel).
My strategy was to pay attention to the lighting conditions. The fake ones fall within a recognisable, averaged-out distribution, and a photo with lighting that is an "outlier" is easy to spot as real, e.g. dark blue mood lighting, overexposed features, glare in the eyes.
A lot of the fake ones also have a recognisable skin gloss.
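For what it's worth, that "outlier lighting" intuition can be made crudely quantitative: compute simple brightness/contrast statistics per image and flag the ones far from the batch average. A minimal sketch with Pillow and numpy; the choice of statistics and the z-score threshold are arbitrary illustrative assumptions, not anything the site or StyleGAN actually uses.

```python
# Crude sketch: flag images whose lighting statistics are outliers
# relative to a batch of face images. Threshold is an arbitrary guess.
import numpy as np
from PIL import Image

def lighting_stats(path):
    """Return (mean brightness, contrast) of an image in grayscale."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    return gray.mean(), gray.std()

def flag_outliers(paths, z_threshold=2.0):
    """Return the paths whose stats sit far from the batch average."""
    stats = np.array([lighting_stats(p) for p in paths])
    z = np.abs((stats - stats.mean(axis=0)) / (stats.std(axis=0) + 1e-8))
    return [p for p, row in zip(paths, z) if row.max() > z_threshold]
```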
There's a super easy machine learning algorithm to generate faces: nearest neighbor.
Joking aside, how do I know they're not doing this? I don't have their dataset, so are these people really "novel" or just slightly messed-up existing photos? I have the same concerns with the recent writing AI that's been making headlines. It's too good, and I swear it's just copying a couple of sentences from here and a couple from there, or near enough so as to make no difference.
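If you did have a local copy of the training images, the joke is easy to turn into a literal check: find the nearest neighbour of a generated face and see how close it is. A minimal sketch, assuming the FFHQ images sit in a local `ffhq/` directory and using plain L2 distance on downscaled pixels; both the paths and the distance metric are illustrative assumptions, not how StyleGAN is actually evaluated.

```python
# Minimal nearest-neighbour "novelty" check: compare a generated face
# against local copies of the training images in raw pixel space.
import glob
import numpy as np
from PIL import Image

def to_vector(path, size=(64, 64)):
    """Load an image, downscale it, and flatten to a float vector."""
    img = Image.open(path).convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

def nearest_neighbor(query_path, dataset_glob="ffhq/*.png"):
    """Return the dataset image closest to the query in L2 distance."""
    query = to_vector(query_path)
    best_path, best_dist = None, np.inf
    for path in glob.glob(dataset_glob):
        dist = np.linalg.norm(to_vector(path) - query)
        if dist < best_dist:
            best_path, best_dist = path, dist
    return best_path, best_dist

# A suspiciously small distance would suggest a memorised training
# example; a large one suggests genuine interpolation.
# print(nearest_neighbor("generated_face.png"))
```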
I think a better game would be to show several faces and have the user pick which one(s) are not real. With just a side-by-side of real vs. fake, it's pretty easy to tell from contextual clues.
I've worked with 2D graphics quite a bit and I've often used anisotropic smoothing[1]. GAN images have similar artifacts (probably worth thinking about, btw), which are trivial to spot if you know what you're looking for. They look like waves on water[2].
One could mask these artifacts by blurring, adding noise or downscaling further.
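A rough sketch of those three masking tricks, using Pillow and numpy; the blur radius, noise level, and downscale factor are illustrative guesses, not values tuned against any particular detector.

```python
# Sketch of three ways to mask GAN artifacts: blur, added noise,
# and downscaling. Parameter values are arbitrary.
import numpy as np
from PIL import Image, ImageFilter

def mask_artifacts(path, out_path="masked.png"):
    img = Image.open(path).convert("RGB")

    # 1. Slight Gaussian blur to smear high-frequency artifacts.
    img = img.filter(ImageFilter.GaussianBlur(radius=1.5))

    # 2. Low-amplitude Gaussian noise to bury remaining patterns.
    arr = np.asarray(img, dtype=np.float32)
    arr += np.random.normal(scale=4.0, size=arr.shape)
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))

    # 3. Downscale to throw away fine detail.
    w, h = img.size
    img = img.resize((w // 2, h // 2), Image.LANCZOS)

    img.save(out_path)
    return img
```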
Am I the only one who assumes the people running this site are using the data to feed/teach the algorithm to be more accurate in the future? Like, any time the 'fake' face gets chosen, it gets added to the training dataset of what 'works'.
"On this website, we present pairs of images: a real one from the FFHQ collection, and a synthetic one, as generated by the StyleGAN system and posted to thispersondoesnotexist.com, an web-based demonstration of the StyleGan system that posts a new artificial image every 2 seconds."
I noticed a lot of folks in here are putting the emphasis on "this is how I spotted the fake", which is extremely valuable on its own. However, have you thought about the potential practical outcomes and unintended consequences?
It's amazing to think about the implications of being able to create faces that look real. It could affect police questionnaires and future holograms, be used to tamper with security camera footage, and much more. I wonder if we will be able to keep up with the changes in technology to protect what society holds dear.
Knowing that one is fake, you can take the time to pick it out, but these could easily pass as real photos in a context where you're not looking for them.
The game is somewhat simple to beat: just pick the quirky face, i.e. the one that couldn't realistically be generated from a combination of a collection of faces. A blurrier background is sometimes a giveaway too.
The game would be harder if the real faces didn't have so many extra details that the fakes lack (no hats, etc.), and a lot harder if you needed to pick out all of the real (or fake) faces, rather than picking the real one knowing the other is fake.
I tried it 50 times and got only three answers wrong. The best method _in this case_ to recognize a fake face is to look at the background: in many cases you can see considerable distortion there.
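One crude way to turn "look for disturbances in the background" into code: run an edge filter and measure its energy in a border strip around the face crop. The strip width and the assumption that higher edge energy means a real, detailed background are illustrative only, not a validated detector.

```python
# Crude background-detail score: edge-filter the image and measure
# the variance of the edges in the outer border strip only.
import numpy as np
from PIL import Image, ImageFilter

def background_edge_energy(path, border=0.2):
    edges = np.asarray(
        Image.open(path).convert("L").filter(ImageFilter.FIND_EDGES),
        dtype=np.float32,
    )
    h, w = edges.shape
    bh, bw = int(h * border), int(w * border)
    mask = np.ones_like(edges, dtype=bool)
    mask[bh:h - bh, bw:w - bw] = False  # keep only the outer strip
    return edges[mask].var()

# Under this heuristic, the image of a pair with the higher score
# would be guessed as the real photo.
```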
I'll go out on a limb here and say that I mostly failed in my guesses, with perhaps 80% incorrect.
I have some degree of faceblindness (often can't recognize someone I know well if they've changed something like makeup or hairstyle or clothing), as well as difficulty in picking up nonverbal cues. I wonder whether brain differences like this might affect image recognition?
I found a reliable criterion to be "Does this person have a consistent eye line?", i.e. do their eyes indicate they are focusing roughly at the camera distance.
I would assume that that is a bias of the "real" photographs, because who would keep a picture where the subject doesn't look at the camera?
> who would keep a picture where the subject doesn't look at the camera.
Anyone shooting candids; even lots of portraits have the subject looking off into the distance or somewhere other than at the camera. I mean, sure, if you are shooting for a photo ID, you won't keep a shot that isn't looking directly at the camera, but...
(Which isn't to say it's not a real bias in genuine photos, just not as absolute as you seem to expect.)
It's interesting how, aside from a few infrequent and slight glitches, the artificial faces look perfect, and yet we still intuitively know which one is real based on lots of other cues (the background, the pose, etc.).
It appears that the picture with a detailed background is the picture of a real person. I went through a sequence of pictures without looking at the faces, and using only the background I was able to answer correctly each time.
I keep getting it right after the first failure, and my brain learned it very quickly: (1) the background is distorted; (2) their eyes are open but their irises are not round enough.
Reminds me of a website I came across years ago where you had to tell a paedophile from a computer science professor. It really was sometimes quite difficult to tell.
Am I an introvert if I trust my image-manipulation know-how and purpose-detection sensor array more than my human instincts in the quest this website proposes?
The real one loads almost instantly and the generated one takes noticeably more time. You should preload them both, because the difference became obvious pretty quickly.
My favorite fake was a very plausible looking professional headshot... with a patchy 5 o'clock shadow. The GAN behind this clearly hasn't figured out that while well-groomed beards are acceptable in headshots, patchy shadows are not.