
> Do you have any example where Facebook did something that they knew was bad but still did it?

Yes, here's a study that manipulated the feeds of over 600,000 users in order to elicit an emotional response. Without asking for permission, of course, because apparently you agree to FB playing mind tricks on you when "agreeing" to their terms and conditions. Really, it can't get worse than this, and they have no justification.

Here's the resulting paper: http://www.pnas.org/content/111/24/8788.full.pdf



Ah yes, I had forgotten about this unethical chestnut you have brought forth. This is the kind of research which would never (and I do mean literally never-- no chance, zero, nada, zilch) get approved by an institutional review board (IRB) if a scientist wanted to perform a similar experiment. There was no consent process, and the study aimed to effect a tangible and measurable emotional change, for no real greater good/purpose.


You are aware that every advertisement you look at every day has been designed to elicit an emotional response, yes? The advertisers in the newspaper, on the Internet, and on the billboard down the street didn't get your opt-in permission either.


They do it explicitly, with malicious intent; they make no bones about their goal, which is to manipulate you and trick you into spending money on their products through psychological charlatanry rather than by offering you actual value.

It's different when you do 'research', i.e. purport to act like a scientist, not a scam artist.


That doesn't make it any more moral. Just because others are already doing it, is it automatically OK?


Actually it does. They did something relatively harmless for science (and maybe, indirectly, for profit). The whole advertising industry is based on doing worse things, all the time, for pure profit. That this Facebook study is the subject of outrage is, frankly, ridiculous.


That is an appeal to tradition, and it is a fallacy. It isn't moral for advertisers to do harmful things to users just because most advertisers do it.

Most scientists do not do harmful things to their subjects. Modern science requires consent. So your argument, even setting the fallacy aside, fails in the first place, because scientists aren't advertisers.

The Facebook study was done without consent. It was harmful to the users (subjects) because it interfered with their emotions. It is immoral.

The outrage might seem ridiculous from the perspective of an advertiser because, as you said, they do worse things all the time, so this study is hardly appalling to them. But from the perspective of other people, it is.

I hope this gave you an insight into (our) reaction to the study.


Not the same context, though you're right, it's similar: the context here was specifically to study the capacity of social media alone to effect a measurable emotional change. They didn't have any monetization plans for the emotional change, which could potentially be negative.


I had also forgotten about this one. Good to recall, especially now that algorithms dictate more and more of our lives, yet are still not (usually) public.


I have always been surprised that scientific and ethics committees haven't somehow come down on Facebook and its decision makers for this. And where is all of their data for public review, so that as many third parties as possible can analyze its impact?


Ethics committees have come down on FB for this. They just don't have any control over Facebook.


Do you know if any governments have taken any action, or have ethics committees been the extent of the response?


It's frightening to think that they have crunched the numbers on how a user's emotional state is reflected in profitable interactions.



