I'm one of the authors of the paper. We're planning to release an online demo website at some point, where people can submit photos and see what kinds of replacements are generated. The process is indeed fully automatic.
The non-real-looking photos are mostly due to mix-ups in gender, age, or race, which we don't currently guard against explicitly.
Are there details anywhere about how you went about implementing this? For example, did you use any open-source software for the face detection (OpenCV?), and are there any libraries that are particularly good for this kind of thing?
You know, this could also be used for censorship. If someone wanted a person to disappear from a photo, this would be more convincing than airbrushing them out. This is a really neat program, though.
Great stuff! I especially like how the facial characteristics of the hybrids are mostly preserved. Could you have simply introduced more noise to increase variability?
I don't understand what you mean. Why would we introduce noise into the results?
Given a face to replace and a database to find candidates from, the process is fully deterministic. The database itself can of course grow, or be switched out. This allows one to create different databases for different applications (e.g., one with synthetically rendered faces, one with stock photos, one with personal photos, etc.).
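The deterministic selection step described above could be sketched like this. The descriptors and the distance metric here are hypothetical stand-ins; the paper's actual matching criteria are more involved, but the point is that the same input and the same database always yield the same candidate.

```python
import math

def select_candidate(target_descriptor, database):
    """database: list of (name, descriptor) pairs; returns the closest name.

    Hypothetical sketch: rank candidates by Euclidean distance to the
    target's pose/appearance descriptor and pick the nearest one.
    Deterministic: same input + same database -> same output.
    """
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(database, key=lambda entry: distance(target_descriptor, entry[1]))[0]

# Swapping in a different database (stock photos, synthetic renders, ...)
# changes the candidate pool without changing the selection logic.
db = [("stock_01", (0.1, 0.9)), ("stock_02", (0.8, 0.2)), ("render_03", (0.5, 0.5))]
print(select_candidate((0.7, 0.3), db))  # stock_02
```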
It looks like the hybrid is produced by extracting the entire face from the second photo and fitting it onto the face frame of the first. I get a complete match with the resultant face by doing this in Photoshop.
If this is right, then the program is agnostic to individual facial features: it swaps everything from the eyebrows to the lips, or nothing at all.
Also, if this is right, would there be privacy implications for the person whose face is extracted? If you are protecting the privacy of the person whose face is swapped out, it makes sense that only their frame remains, but the hybrid face is very similar to the original face from the second photo.
That's correct (although our adjustment step changes the lighting and color of the replacement candidate to match the frame of the input image).
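For illustration, a crude stand-in for that adjustment step is per-channel mean/variance matching: shift and scale each color channel of the replacement so its statistics match the input frame's. The paper's actual recoloring and relighting are more sophisticated; this is only a sketch of the idea.

```python
import numpy as np

def match_color(replacement, frame):
    """Both arrays are float H x W x 3; returns a recolored replacement whose
    per-channel mean and standard deviation match the frame's."""
    out = replacement.astype(np.float64).copy()
    for c in range(3):
        src = out[..., c]
        dst = frame[..., c].astype(np.float64)
        src_std = src.std() or 1.0  # avoid dividing by zero on flat channels
        out[..., c] = (src - src.mean()) / src_std * dst.std() + dst.mean()
    return out

rng = np.random.default_rng(0)
face = rng.uniform(0, 100, (8, 8, 3))    # dark replacement face
scene = rng.uniform(100, 200, (8, 8, 3)) # brighter input frame
adjusted = match_color(face, scene)
print(np.allclose(adjusted.mean(axis=(0, 1)), scene.mean(axis=(0, 1))))  # True
```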
This is why I said earlier that one could create multiple databases. Since the replacement candidates might be recognizable, they should be from a set of people who've given their permission for their photos to be used (e.g., stock photo collections). Another option is to use computer graphics to render a large collection of synthetic faces. Modern techniques are good enough to generate fairly realistic faces.
Finally, note that if every face within some application has been replaced (e.g., if this were to be applied on all images on Google Streetview), it wouldn't even be a privacy issue for the replacement candidates -- since users would know that all faces are replaced, seeing a particular face only tells the user that that person was not there.
Somewhat unrelated, but I realized that Facebook has tons of face/name data from all the tagged photos. Not only do they have photos tagged with names, but they also know who your friends are, thus who is most likely to be in your photos. I imagine Facebook + Riya could be pretty accurate.
So... when are they going to roll out an (optional) auto-tagging feature?
While the site had images of software for this purpose, it didn't provide links to it. Anyone know where one can find it? I'm curious to see how seamless the process is. Some of the ones in the article looked perfect while others looked very non-real.
Here's the project page describing things in more detail: http://www1.cs.columbia.edu/CAVE/projects/face_replace/
The video linked from that page explains the whole process from beginning to end in under 5 mins.