Which Face Is Real? (whichfaceisreal.com)
76 points by tom_mellior on Feb 20, 2019 | 57 comments



Seems like the best way to tell is to not actually look at the face so much as the details around it. There are often obvious glitches in the background.


I got 10 out of 10 correct without looking at the features around the face. It's easy to spot the fake one by wrinkles going the wrong way, or by obvious artifacts like incomplete earrings. If they fixed those issues, though, it would be close to impossible for me to tell.


My method was more intuitive than that. There's more detail on the real faces; the fake ones look slightly brushed over. If it weren't an A/B test, though, I'd fall for the fakes every time.


Oddly I succeeded using the exact opposite strategy.


And some of the real photos have additional accessories like a hat, sunglasses, or earrings, but none of the fakes seem to have them. I clicked through 30 picture pairs and got 28 of them right without thinking for more than a second on each.


I looked at 3 and saw hats, spectacles, and earrings.

One of the fakes had specs; one had earrings, a hat, and part of another person next to them (that one fooled me, as the real photo was crazy: a mortarboard with a coloured and unfeasibly large tassel).


It’s impressive how well the algorithm does with spectacles.


Came here to say the same thing, but the fact remains that the fake faces are indeed quite convincing!

Perhaps they could improve the deception by simply cropping out 100% of the background.


My strategy was to pay attention to the lighting conditions. The fake ones fall within a recognisable, averaged-out distribution, and a photo with lighting that is an "outlier" is easy to spot as real, e.g. dark blue mood lighting, overexposed features, glare in the eyes.

A lot of the fake ones also have a recognisable skin gloss.


You can also often find unnatural hairs/wrinkles/pigmentation around the forehead/neck if you look closely.


Yes, using these clues I was able to go about 15 or 20 rounds before getting one wrong... and the one I got wrong was one I was very unsure about.


Background, teeth, hair, and edges are giveaways. I easily got 10/10 right.


They could easily solve this by blurring the background.
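For what it's worth, even a crude version of that is only a few lines with Pillow: blur the whole image, then paste back a sharp central box as a stand-in for the face. This is just a sketch under my own assumptions (a real version would use a face detector; the box size and blur radius here are arbitrary):

    from PIL import Image, ImageFilter

    def blur_background(path, out_path, face_frac=0.6, radius=8):
        """Crudely blur everything outside a central box (a stand-in for
        the face region; a proper version would detect the face first)."""
        img = Image.open(path)
        w, h = img.size
        blurred = img.filter(ImageFilter.GaussianBlur(radius))
        bw, bh = int(w * face_frac), int(h * face_frac)
        box = ((w - bw) // 2, (h - bh) // 2, (w + bw) // 2, (h + bh) // 2)
        blurred.paste(img.crop(box), box)   # keep the centre sharp
        blurred.save(out_path)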


There's a super easy machine learning algorithm to generate faces: nearest neighbor.

Joking aside, how do I know they're not doing this? I don't have their dataset, so are these faces really "novel" or just slightly messed-up existing photos? I have the same concerns with the recent writing AI that's been making headlines. It's too good, and I swear it's just copying a couple of sentences from here and a couple from there, or near enough so as to make no difference.
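Taking the joke half-seriously, one way to probe for that would be a nearest-neighbour search of the generated faces against the training set in some embedding space. A minimal sketch, assuming you already have face embeddings from somewhere (the embeddings, shapes, and the whole idea of doing this are my assumptions, not anything the site publishes):

    import numpy as np

    def nearest_neighbour_distances(generated, training):
        """Cosine distance from each generated embedding to its closest
        training embedding; very small values would hint at near-duplication.
        Both arguments are (n, d) arrays of face embeddings."""
        g = generated / np.linalg.norm(generated, axis=1, keepdims=True)
        t = training / np.linalg.norm(training, axis=1, keepdims=True)
        sims = g @ t.T                     # pairwise cosine similarities
        return 1.0 - sims.max(axis=1)      # distance to nearest neighbour

    # Example with random vectors standing in for real embeddings.
    rng = np.random.default_rng(0)
    fake = rng.normal(size=(5, 128))
    real = rng.normal(size=(1000, 128))
    print(nearest_neighbour_distances(fake, real))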


They likely use https://github.com/NVlabs/stylegan. NVIDIA recently released it, and suddenly sites like this one and https://thiscatdoesnotexist.com are popping up.
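For the curious, sampling a face from that repo looks roughly like its pretrained_example.py. The sketch below is from memory, so treat the checkpoint filename, the truncation value, and the output handling as assumptions; it also needs the repo's dnnlib package and a TensorFlow 1.x environment:

    # Rough sketch, loosely following pretrained_example.py from
    # https://github.com/NVlabs/stylegan (TF 1.x; details assumed).
    import pickle
    import numpy as np
    import PIL.Image
    import dnnlib.tflib as tflib  # ships with the StyleGAN repo

    tflib.init_tf()

    # Pre-trained FFHQ generator, downloaded separately (filename assumed).
    with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:
        _G, _D, Gs = pickle.load(f)

    # One random latent vector z -> one synthetic face.
    latents = np.random.RandomState(5).randn(1, Gs.input_shape[1])
    fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
    images = Gs.run(latents, None, truncation_psi=0.7,
                    randomize_noise=True, output_transform=fmt)

    PIL.Image.fromarray(images[0], 'RGB').save('not-a-person.png')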


I think a better game would be to show several faces, and have the user pick which one(s) are not real. With just a side by side of a real vs fake, it's pretty easy to tell from contextual clues.


I've worked with 2D graphics quite a bit and I've often used anisotropic smoothing[1]. GAN images have similar artifacts (probably worth thinking about, btw), which are trivial to spot if you know what you're looking for. They look like waves on water[2].

One could mask these artifacts by blurring, adding noise or downscaling further.

[1] - https://authors.library.caltech.edu/6498/1/PERieeetpami90.pd...

[2] Extreme example: https://c1.staticflickr.com/9/8449/8048065891_7fab061307.jpg
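For anyone who hasn't seen [1]: a bare-bones Perona-Malik anisotropic diffusion looks something like the sketch below. This is my own minimal NumPy version with arbitrary parameters (and it wraps at the image borders for simplicity), just to show the kind of smoothing being talked about:

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, lam=0.2):
        """Minimal Perona-Malik anisotropic diffusion on a 2-D grayscale array.
        Smooths flat regions while mostly preserving edges: the conduction
        coefficient falls off where the local gradient is large."""
        u = img.astype(np.float64).copy()
        for _ in range(n_iter):
            # Differences toward the four neighbours (np.roll wraps at borders).
            d_n = np.roll(u, -1, axis=0) - u
            d_s = np.roll(u,  1, axis=0) - u
            d_e = np.roll(u, -1, axis=1) - u
            d_w = np.roll(u,  1, axis=1) - u
            # Edge-stopping conduction coefficients, g(x) = exp(-(x/K)^2).
            c_n = np.exp(-(d_n / kappa) ** 2)
            c_s = np.exp(-(d_s / kappa) ** 2)
            c_e = np.exp(-(d_e / kappa) ** 2)
            c_w = np.exp(-(d_w / kappa) ** 2)
            u += lam * (c_n * d_n + c_s * d_s + c_e * d_e + c_w * d_w)
        return u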


Am I the only one who assumes the people running this site are using the data to feed/teach the algorithm to be more accurate in the future? Like, any time the 'fake' face gets chosen, it gets added to the training dataset of what 'works'.


http://www.whichfaceisreal.com/methods.html

"On this website, we present pairs of images: a real one from the FFHQ collection, and a synthetic one, as generated by the StyleGAN system and posted to thispersondoesnotexist.com, an web-based demonstration of the StyleGan system that posts a new artificial image every 2 seconds."


I noticed a lot of folks in here are putting the emphasis on 'this is how I spotted the fake', which is extremely valuable on its own; however, have you thought about the potential practical outcomes and unintended consequences?

It's amazing to think about the implications of being able to create faces that look real. It could have an impact on police questionnaires and future holograms, could be used to adulterate security camera data, and so much more. I wonder if we will be able to keep up with the changes in technology to protect what society holds dear.


Re: security camera data... just look at DeepFakes. That train has already left the station.


Knowing that one is fake you can take the time to pick it out, but these could easily pass as real photos in a context where you're not looking for them.


The game is somewhat simple to beat. Just pick the quirky face -- i.e. the one that couldn't realistically be generated by combining a collection of faces. A blurrier background is sometimes a giveaway too.

The game would be harder if the real faces were less asymmetrical in detail (no hats, etc.). And a lot harder if you needed to pick all of the real (or fake) faces, rather than pick the real one knowing the other is false.


Well, maybe this "face" is real, but it isn't a "person". At least I hope so, or maybe I'm getting too old :)

http://www.whichfaceisreal.com/results.php?r=0&p=0&i1=fakeim...


Impressive. But, the one that loads first is always the real person. The teeth for generated faces often look weird.


> But, the one that loads first is always the real person

I tested your theory and the loading order was a mixed bag. The first one to load definitely isn't always the real person; it's about 50/50 here.


Depends on location.

It's a pity that one image is a progressive JPEG and the other is baseline.

That makes it easy to always see which one is real.
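If you want to check that yourself, a quick-and-dirty scan for the progressive SOF2 marker (0xFF 0xC2) versus the baseline SOF0 (0xFF 0xC0) in a saved image could look like this. It's a simplified reader that ignores some corner cases of the JPEG format, and the filenames in the example are hypothetical:

    def is_progressive_jpeg(path):
        """Return True if the JPEG at `path` uses progressive DCT (SOF2)."""
        with open(path, 'rb') as f:
            data = f.read()
        i = 2  # skip the SOI marker (FF D8)
        while i + 4 <= len(data) and data[i] == 0xFF:
            marker = data[i + 1]
            if marker == 0xC0:        # SOF0: baseline DCT
                return False
            if marker == 0xC2:        # SOF2: progressive DCT
                return True
            if marker == 0xDA:        # start of scan, no progressive SOF seen
                return False
            # Every other segment before SOS carries a 2-byte big-endian length.
            i += 2 + int.from_bytes(data[i + 2:i + 4], 'big')
        return False

    # Example (hypothetical filenames):
    # print(is_progressive_jpeg('real.jpg'), is_progressive_jpeg('fake.jpg'))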



I have tried it 50 times and got only three wrong answers. The best method _in this case_ for recognizing a fake face is to look at the background; in many cases you can see considerable distortion.


I'll go out on a personal limb here and say that I mostly failed in my guesses, with perhaps 80% incorrect.

I have some degree of faceblindness (often can't recognize someone I know well if they've changed something like makeup or hairstyle or clothing), as well as difficulty in picking up nonverbal cues. I wonder whether brain differences like this might affect image recognition?


The fake images would look more real if they were just post-processed through one of the popular face-enhancing apps.


I found a reliable criterion to be "Does this person have a consistent eye line?", i.e. do their eyes indicate they are focusing roughly at the camera distance.

I would assume that that is a bias of the "real" photographs, because who would keep a picture where the subject doesn't look at the camera.


> who would keep a picture where the subject doesn't look at the camera.

Anyone shooting candids; even lots of portraits have the subject looking off into the distance or somewhere else other than at the camera. I mean, sure, if you are shooting for a photo ID, you won't keep a shot that isn't looking directly at the camera, but...

(Which isn't to say it's not a real bias in genuine photos, just not as absolute as you seem to suggest it should be expected to be.)


http://www.whichfaceisreal.com/results.php?r=0&p=0&i1=fakeim...

I wonder where they get their real images from...


This one was easy, look at that headgear! hahaha.

http://www.whichfaceisreal.com/fakeimages/image-2019-02-17_1...


Seems it couldn't decide between hair or a beanie, so it just went with both.


It's interesting how, aside from a few infrequent and slight glitches, the artificial faces look perfect, and yet we still intuitively know which one is real based on lots of other cues (the background, the pose, etc.).


It appears that the picture with a detailed background is the picture of a real person. I went through a sequence of pictures without looking at the faces, and I was able to answer correctly each time using only the background.


This is part of the project Calling Bullshit, a course on calling bullshit.

https://callingbullshit.org/syllabus.html


I kept getting it right after the first failure, and my brain learned it very quickly: (1) the background is distorted, and (2) their eyes are open but their irises are not round enough.


Reminds me of a website I came across years ago where you had to tell a paedophile from a computer science professor. It really was sometimes quite difficult to tell.


Am I an introvert if I trust my image-manipulation know-how and purpose-detection sensor array more than my human instincts in the quest this website proposes?


The real one loads almost instantly and the generated one takes noticeably more time. You should preload them both, because the pattern became obvious pretty quickly.


I hope they're collecting data on how long it takes someone to answer, not just whether people answer correctly or incorrectly.


It would be good to let folks scribble on the tells. I had to resort to earring comparison a few times to break a tie on facial features.


The fake faces mostly look flatter (no facial wrinkles at all) than the real ones, or sometimes have too many wrinkles.


I got 5/5 just by looking at lighting, including the background -- the pictures with the better lighting were fake.


So Nvidia open-sources the code and everyone's suddenly scraping something together from it on the web...


I never picked a wrong answer, even after 15 tries, but the generated faces certainly look convincing.


The model doesn't seem to generate backgrounds that are as realistic. That's gotten me to 12 in a row.


I don't know if this is true for anybody else, but the real image always loads first.


Next up: HotOrNot for GAN faces.


Or catfish as a service


My favorite fake was a very plausible looking professional headshot... with a patchy 5 o'clock shadow. The GAN behind this clearly hasn't figured out that while well-groomed beards are acceptable in headshots, patchy shadows are not.


This is really neat, but pupil roundness is a dead giveaway here.


The giveaways are weird-looking teeth and background artifacts.


There is a lag - the generated one comes second.



