If I'm reading this correctly, they showed a series of pictures, one at a time, and told people to focus on faces that they found attractive. They then recorded EEG data to generate a score of perceived attractiveness, and used it to influence the GAN to make more beautiful faces.
This is a very weird approach. If the goal is to personalize attractive faces, you could just have subjects rate pictures on a scale of 1-100. If you want to test EEG ratings of attractiveness, you should just compare manual ratings to EEG ratings without prompting them to focus on attractive faces.
In all likelihood, the EEG has no idea if you think a particular face is attractive. They've just trained the AI to recognize when a user is focusing intently vs. not. Which is still kind of cool, but much less interesting than the headline suggests.
> If the goal is to personalize attractive faces, you could just have subjects rate pictures on a scale of 1-100.
However, research in the field (the field here is really "interactive genetic algorithms") suggests that it's more efficient/better HCI to allow selection rather than numerical rating, in order to choose the "parents" of the next generation. So, the right way to do this would be a typical interactive GA, using a mouse. Or eye-tracking, for lazy users.
In what way is this a genetic algorithm? When I think genetic algorithms, I imagine biologically inspired non-linear optimization algorithms. That feels like it's possibly relevant, but more of a choice of technique than the main idea at play here. Am I wrong?
> Machine learning-based algorithms subsequently determined which faces produced the greatest amount of activity for each person, then established which traits those faces had in common. Based on that data, the neural network then proceeded to produce new faces that combined those traits.
In particular, it is a reasonably common idea (in the last few years) to use a GA or other metaheuristic to traverse an embedding space. Here, that embedding space is the z layer of the GAN. Especially when "fitness" in the embedding space is purely subjective, it's natural to use an interactive GA.
However, I am over-stating it to say that the field is just interactive GA. It's that, intersected with face-GANs.
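To make the "interactive GA over the GAN's z layer" idea concrete, here is a minimal sketch of the breeding step. Everything here is an assumption for illustration: the 512-dimensional latents, the population size, and the mutation scale are placeholders, and in a real interactive GA the user would look at the generator's output for each latent and click the parents of the next round.

```python
import numpy as np

def next_generation(parents, pop_size=8, sigma=0.25, rng=None):
    """Breed a new population of GAN latent vectors from user-selected parents.

    Each child is a random convex blend of two parents (crossover)
    plus Gaussian noise (mutation). "Fitness" never appears as a number:
    the user supplies it implicitly by choosing which faces to keep.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    parents = np.asarray(parents)
    children = []
    for _ in range(pop_size):
        a, b = parents[rng.choice(len(parents), size=2, replace=False)]
        w = rng.uniform(0, 1)
        child = w * a + (1 - w) * b                  # crossover: interpolate in z-space
        child = child + rng.normal(0, sigma, child.shape)  # mutation: small jitter
        children.append(child)
    return np.stack(children)

# Toy run: 512-dim latents (dimension chosen arbitrarily for the sketch).
pop = np.random.default_rng(1).normal(size=(8, 512))
selected = pop[:3]                 # pretend the user clicked these three faces
new_pop = next_generation(selected)
print(new_pop.shape)
```

The point of interpolating rather than rating is exactly the HCI argument above: picking two or three favorites per screen is a much lighter task than assigning each face a number from 1 to 100.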
The indirection of focusing on more attractive faces seems to exist purely so the authors can say that the attractiveness response was measured by brain scan rather than by statements from the subject, which it really wasn't.
Actually measuring attractiveness by way of neural response would be quite interesting, as it precludes the ability to lie.
There is some discussion about how much people actually find attractive what they say they find attractive, or whether they are instead saying what they are expected to find attractive in order to appear more socially gracious. Such research would be able to compare the difference between what a man says and what he thinks, which never fails to produce interesting contrasts.
>If you want to test EEG ratings of attractiveness, you should just compare manual ratings to EEG ratings without prompting them to focus on attractive faces.
Yeah, as you say, it'd be very interesting if it could first associate self-reports of attractiveness with EEG activity, and then know roughly how attractive you consider someone based solely on blind EEG data. Then reverse the process by generating lots of faces and picking the one that someone seems to be most attracted to purely based on their EEG. (This is what I initially thought the study would be doing.)
I suspect there would be major challenges with this, though. Besides EEG data possibly being too coarse, it's possible that what happens in the brain when you see someone you find attractive is very complicated and multifaceted. I think "attractiveness" may not be a single metric along a gradient even for a single individual's brain, let alone among a population. So even training on each person individually might not be that fruitful.
For example, you may find two different people attractive due to a mix of reasons - some shared reasons, and some differing reasons. I'd guess the brain activity might differ in some ways in those cases. Maybe also even while looking at different photos of the same person, or even the same photo at different times.
I know people tend to go through hormone cycles throughout the day; it's possible brain activity (as well as subjective attractiveness perception at a given time) may differ depending on what phase of a cycle you're in, or what food, coffee, or alcohol you've had recently, etc.
But who knows, maybe there often is some consistent pattern of activity which would serve as a decent proxy for overall attractiveness perception, if it could just be sifted out of the noise. This is probably just extremely difficult to do with current technology and neuroscience understanding, even if it may be feasible in theory. I'm sure we'll see more things like that in the coming decades.
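The two-phase pipeline described above (first associate self-reports with EEG activity, then score new faces from EEG alone) could be sketched roughly as below. All the data here is synthetic noise standing in for real EEG features; the feature count, trial counts, and ridge penalty are arbitrary placeholders, not anything from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Calibration phase (synthetic stand-in for real EEG features) ---
# 200 calibration trials: 32 "EEG features" per viewed face, plus a self-report rating.
X = rng.normal(size=(200, 32))
true_w = rng.normal(size=32)
ratings = X @ true_w + rng.normal(0, 0.5, size=200)   # noisy self-reported scores

# Ridge regression: w = (X^T X + lam*I)^{-1} X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(32), X.T @ ratings)

# --- Generation phase: score candidate faces purely from (simulated) EEG ---
candidates_eeg = rng.normal(size=(50, 32))   # EEG responses to 50 generated faces
predicted = candidates_eeg @ w
best = int(np.argmax(predicted))             # the face this viewer seems most drawn to
print(best)
```

Of course, the whole sketch assumes "attractiveness" decodes as a single linear score per trial, which is exactly the premise the comment above doubts.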
People lie, especially in matters of attraction - e.g. it is still not safe (legally or culturally) to be a homosexual in much of the world (including swaths of the US). Thus the EEG-based “mind reading” approach. There’s still much work to be done, clearly, but “simply asking people” is anything but straightforward.
Except they're asking people to consciously choose the face they're most attracted to and focus "more attention" on it, which still allows the user to lie. It reminds me of that "controlling video games with your mind" [1] gimmick, which was just eye tracking with extra steps, or even those "lift a ball with your mind" headsets that came out as toys a decade ago.
> You only started trying it out once they moved to GANS and VR headsets. You are not pathetic or anything, could get a real girl if you wanted to. Just don't have time. Have to focus on your career for now. "Build your empire then build your family", that's your motto.
> You strap on the headset and see an adversarial generated girlfriend designed by world-class ML to maximize engagement.
> She starts off as a generically beautiful young woman; over the course of weeks she gradually molds both her appearance and your preferences such that competing products just won't do.
> In her final form, she is just a grotesque undulating array of psychedelic colors perfectly optimized to introduce self-limiting microseizures in the pleasure center of your brain. Were someone else to put on the headset, they would see only a nauseating mess. But to your eyes there is only Her.
> It strikes you that true love does exist after all.
Very insightful vision of a (proto-?)wireheading future.
I suspect something similar to this may eventually be developed for AI-generated music, too. A kind of superstimulus or wirehead music that initially sounds strange and unintelligible but soon proves to be more pleasurable than anything you've ever heard before. Maybe even trained on and tailored to each individual's brain, to maximize pleasure and reward peaks. Perhaps with a little bit of optional user input, like sliders for fast, slow, bright, dark, intense, soft, "dopaminergically flat vs. spiky", etc.
I guess the non-science-fiction equivalent would be these weird hentai where the girls have huge distorted boobs (like, twice the size of their head or bigger) and similar freakishly unrealistic anatomical details.
Thankfully, I will be among the large subset of the population which will be immune, as, per deep learning tradition, the data set the GAN will be trained on inevitably won't include any fats, blacks, or non-binaries.
Can't tell if downvoted because people are mistakenly reading this as racist/homophobic, or if they're incensed at the completely accurate insinuation that machine learning datasets focused on "attractive" people are likely to mainly consist of photos of skinny white people.
"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should", Ian Malcolm, Jurassic Park
This reminds me of a comment someone posted recently here. It described a situation that started with a virtual agent generally considered beautiful, which was then optimized into an amorphous shape of colours that was the most attractive for this particular user.
Edit: user "tablespoon" mentioned the same comment already, go check theirs for a link.
I've wondered for a long time how, with facial recognition and ML, we could better find matches for people, firstly based on appearance and then on other traits like personality. Could we use these personalised attractive faces to match people who have similar preferred facial features, and thus predict with a high level of certainty whether two people would be attracted to one another?
Or it could help us understand whether beauty really is in the eye of the beholder, or whether large groups of people who share certain facial features tend to be attracted to other groups whose members share a different set of features.
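One naive way to operationalize the matchmaking idea: if each user's personalized "ideal face" lives at a point in the GAN's latent space, compare users by the similarity of those points. This is purely a hypothetical sketch (the 3-dimensional vectors are toy stand-ins for real latents), not anything the study does.

```python
import numpy as np

def preference_similarity(u, v):
    """Cosine similarity between two users' 'ideal face' latent vectors.

    Users whose ideal points lie close together in latent space would,
    under this (speculative) model, be predicted to share tastes.
    """
    u, v = np.asarray(u, float), np.asarray(v, float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy latents: users a and b have similar ideals, c is very different.
a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.1, 0.9])
c = np.array([-1.0, 0.5, -1.0])
print(preference_similarity(a, b), preference_similarity(a, c))
```

Whether "similar ideal face" actually predicts mutual attraction is exactly the open question in the comment above; this only measures taste overlap, not compatibility.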
Despite the high probability of it leading to some weird dystopia, I suspect some government will try implementing a national ML-powered dating service at some point (especially if birth rates continue to drop in developed countries). Two examples of governments getting involved in the dating market are from Japan[0] and Iran[1].
I’m reminded of that scene in Snow Crash where this company’s holographic secretary has a different racial appearance depending on who’s talking to her. (I read it a while ago and forget the details.)
Or even more of a fit, there is a scene in Stephenson's "The Diamond Age" where a character meets a person while wearing AR/VR goggles as part of a piece of performance art. An algorithm gradually tweaks the performer's appearance by measuring his brain activity until the performer seems dazzlingly attractive. It sounds almost exactly like the technology being described here, to the point where I wonder if one of the authors read that novel.
It’s not far-fetched to imagine that by the end of the decade, a fair percentage of people’s Instagram, Snapchat and TikTok feeds will be generated characters based on their preferences as learned by the network. Engagement will be maximized because you’ll never be “all caught up.” There will always be a completely brand new image created on the fly available to capture your attention.
It is absolutely far-fetched as far as I'm concerned. To me it is more plausible that the new generation will not even care about insta/snapchat/tiktok. Do people still care about snapchat? Not sure about magic AI-generated disposable entertainment either. I don't see it as feasible, and even if it were, people would soon learn to filter it just like we filter spam and ads.
I expect that today's social media paradigm will be recognized as harmful, just like Fast Food and Porn.
Sure, sometimes you want to get that fix, but at that point its long-term utility will have degraded to nothing, so it's not something people will want to be addicted to.
I have a feeling this will be used in the future to select candidates for something like a job, a beauty pageant, or simply for a "beauty ranking". The latter will probably be how they get test data (if they don't buy it from Facebook).
What if I am not attracted to the faces generated by the GAN? Say what if I prefer a different style, or even a drawing instead of an imitation of a real person? Can I hallucinate my own waifu with this?