Facebook Says It’s Policing Fake Accounts, but They’re Still Easy to Spot (nytimes.com)
130 points by coloneltcb on Nov 6, 2017 | 117 comments



I get that Facebook may have complicated motives here, but I don't know that "easy to spot" is a good yardstick.

First, there are the countless instances where something is trivial for a single person to suss out that doesn't scale.

Second, and perhaps most importantly, I don't think simple hunches about fake accounts are really actionable here. Having an account that "originated in the Middle East" that now presents as an "attractive woman" is not really a clear-cut case for having your account terminated.

I get that we really really want to be able to draw a hard line between people who behave in "weird" ways and organized influence attempts, but I don't think we can get there. It's starting to sound like "bots" and "fake news" have disagreeable opinions baked into the definition.


>First, there are the countless instances where something is trivial for a single person to suss out that doesn't scale.

This is an interesting argument to me. It's basically conceding a shittier solution because we "can't scale" the effective one. As we talk more and more about automation, I think this is similar to what we've seen in meatspace with things like tools.

I've heard so many people complain about Chinese hammers or socket wrenches (though I also know there are quality Chinese tools out there) or many other examples. Things mass-produced cheaply are of lesser quality, but we're willing to make the trade-off as a society because we get things for cheaper.

And now we see this in the information economy. Even though we know a human doing this work would be much more effective, we're not going to do it because the cost is too high.


1) ANY single prominent example of a false positive or false negative will enrage some community. People's brains can only think of examples, not rates.

2) With billions of accounts zero examples of error is an absurdity.

3) Different communities drastically disagree on what should count as a false positive versus a false negative.

4) More perceived effort spent on (1) increases the outrage, i.e. the perceived sense of moral betrayal from (1) and (3), despite (2).

5) Human editors won't solve (1) + (2), definitely won't solve (3), and contribute to some degree to (4).


These are all great arguments for why Facebook's current model can never be a good way to run a social network. We need tools for people to build their own networks and make their own decisions, not a centralized network where all the decisions for the entire world are made by a small team in California.


Some of the value of a network comes from everyone else being on it. The term for this is literally "network effect".

https://en.wikipedia.org/wiki/Network_effect


I would think it's worth trying out something even if the result is less than perfect.


So you're falling into (3) and ignoring how important (1) is.


Um, the person simply defined a problem into existence. There is no agreed-upon basis for thinking like this.


But you haven't attempted to refute the well-described basis of that problem statement. Which parts do you think are wrong?


I personally don't find it well described, or useful. I chose to engage the commentator in a way that brought us both to a common ground position.


> I chose to engage the commentator in a way that brought us both to a common ground position.

Where? I don't see any more comments from you in this thread...


Yes, they did. So what do you find wrong with their first point, or any of the others? And what would make your suggestion better?


Not that person, but I objected to:

0 -- The outrage caused by misidentified profiles is larger than the outrage or societal damage caused by the existence of fake profiles.

...which was an unstated assumption underpinning why that entire argument would be an objection to implementing account review, and which in my estimation is incredibly incorrect. They're being forced into Senate hearings over the false profiles, constantly being harassed by merchants about possible ad fraud over them, and are losing the trust of their communities. By contrast, the storm in a teacup over a few misidentifications is nothing to Facebook. (There are numerous other unstated, highly subjective assumptions as well.)

I also object to:

1 -- This is true, but irrelevant. Facebook cares about the rates of outrage in communities, not the mere existence. Ironically, this person seems to have fallen into the trap they called out, by thinking Facebook cared about instances of outrage (as the poster would). They also care about the relative outrage -- how mad are people compared to another option we can try. This point very poorly frames the problem as being about trying to stop any outrage instead of trying to minimize outrage.

4 -- This is community specific and untrue of the bulk of people. Decreasing error increases the vitriol involved, but it seems to come from a decreasing number of people as the outrage evaporates in the face of better error rates. So I think they're outright wrong here: it's like water cooling as it evaporates. The water is going away, even though it's cooler in the area.

5 -- This one becomes irrelevant (and/or wrong) if you correct the mistakes in 1 and 4.

Overall, this argument is --

a) irrelevant because even if it was correct, it wouldn't matter as it fails to contextualize its point by comparing levels of outrage between sources

b) wrong internally, in that it's largely irrelevant to the problem at hand or just outright wrong.


If you constrain the issue to just fake accounts, it's quite a bit easier: everyone agrees about the difference between a bot and a person, at least in principle. That's unlike all forms of possible content moderation, e.g. whether YouTube should ban Alex Jones.

And I'm not sure that the total outrage level will ever decrease as they are seen as having more and more responsibility for moderation.


Yes. And the way it's often argued, it treats the status quo as some sort of natural given, like the weather. The line of argument seems to be that if Facebook and Twitter can't run safe platforms while maintaining their current revenue models and profit margins, well, we'll just have to live with that lack of safety.

I find this line of argument odd when it's about actual natural givens, like dangerous weather. I'm in a house right now, so it's not like technology to limit the effects of dangerous weather is unheard of. And a bit odder when it's about human inventions that have been around for generations, like guns and cars. But when it's used for something basically new in the world? I find it baffling.


I find it ridiculous that text on random social media accounts is being associated with "safety".

If anything, it's the user's own misuse and mishandling of their own information, and their lack of understanding that anybody can publish anything, that makes their specific use of social media "unsafe". The trust and total integration of Facebook into people's lives is the actual unsafe component, which Facebook themselves actively promote; it all should be held at arm's length.

There is no reason that users should consider random words on random Facebook accounts any differently than words posted on 4chan.


> There is no reason that users should consider random words on random Facebook accounts any differently than words posted on 4chan.

You're right! They're both dangerous. If I posted your public address on both of those websites, how would you feel about your own safety? Not your Facebook account, mind you - just somewhere on Facebook.

That's unsafe speech. Other examples of text where being associated with safety makes sense:

1.) "Hey everybody! Let's kill all type of people who like X tomorrow. Like and share to get the word out!"

2.) "Hey here's the address and personal info of someone I disagree with politically. Let's organize an effort to camp outside their house with guns to scare them. Or lets "SWAT" them.

I am not misusing my information in either of these cases. Instead, my information is being misused by others who use the space as an organizing ground. When people ask for Facebook to be regulated, they want hate speech like the above to be removed. People make the argument that this is a slippery slope, but that's called a fallacy for a reason. It would be trivial to regulate speech of this form on their platforms if they were willing to pay for it, but they aren't. Thus we must force them to regulate it.

You're conflating a regulation of speech one disagrees with, with hate speech. Hate speech is not simply speech I disagree with. It can be clearly defined in many cases (like above), and in the cases where it can't one can simply err towards allowing it.


> You're conflating a regulation of speech one disagrees with, with hate speech.

I'm not calling for regulation, nor labeling anything "hate speech" (ergo not conflating the two, either), so I have no idea where this particular statement of yours is coming from.

What I'm saying is that trusting sites where anybody can post anything is the core danger in the situations that instigated this specific mess, and that trust is the fault of the reader, and of Facebook falsely positioning itself as a trustworthy information source.

This situation isn't about "hate speech", incitements to violence, or any other already-illegal acts. To quote, it discusses the "far larger problem, in which the platforms are gamed for profit or political influence." At its core is the observation that people let Facebook content influence their finances and politics; the demonstrable dump of people's unthinking, reactionary blurted opinions, and their personal or incentivized proselytizing/propagandist/marketing skews. These are the random words of random accounts, all unworthy of blind trust.

Facebook wants to discriminate "trustworthy" from "untrustworthy" somehow, and to do so algorithmically. To be very specific, what's at stake here is Facebook's desire to have a particular PR posture as "the" place to be online, for their own gains. There's no actual direct safety issue involved here driving these automated "trustworthiness" speech classifications. It is both unnecessary for the site to function socially, and ultimately unworkable as an implementation from its very concept.

What is necessary for a properly functioning mashup of the world's unfettered speech in one place, is more distrust by the users of what's posted, as there's no reason to trust it in the first place. Again, from a content perspective it's no different from 4chan.


Ah, I slightly misunderstood your position - I apologize. I find the topics highly related, and so my mind made a jump in your argument that it should not have.

I do actually still disagree with you though:

> At its core is the observation that people let Facebook content influence their finances and politics; the demonstrable dump of people's unthinking, reactionary blurted opinions, and their personal or incentivized proselytizing/propagandist/marketing skews. These are the random words of random accounts, all unworthy of blind trust.

Personally I don't think that people are actually putting blind trust into such sources. Rather, Facebook is creating a narrative about how the world is, not about what is truth and what is a lie. I don't think Facebook is incredibly influential because people put blind faith in it. There are definitely a lot of people who do that, and they're dumb, but I think there's a larger problem at play.

Facebook is able to create a narrative about what "the world" is like. If you're liberal, you see a very different "world" on Facebook than if you're conservative. In this way, I don't think Facebook is influencing the world via trust, but by misconstruing what the world actually looks like. People have the perception that Facebook is an "unbiased" perspective on the world at large and especially your social circle. But we know that Facebook doesn't actually present such a world - it instead gives you a highly "biased" (think more of the statistical version than the political one) perspective on life.

In this way I think it's not necessary to make Facebook the arbiter of trust on their network, but instead to require them to arbitrate the way information is disseminated.

To say it simpler: It's not a problem that Facebook has fake news, it's a problem that Facebook can create a fake world.

Again sorry for misunderstanding you.


It's cool.

I would further dig into the intentionality of who's creating what. Facebook creates a platform where people can be surrounded by very selective and similar feeds, fed by producers of content seeking to gain traction for their ideas as described in the article (i.e. seeking sales or political power). To say that Facebook creates these narratives can sound like people at the company are intentionally crafting them, instead of Facebook being a substrate where 3rd parties craft them.

However, let's contrast this with TV, since it also has the creation of narratives and the problem of getting too sucked into it. Different networks have their different slants, people pick their favorites and get wrapped up in the narratives presented there, be they from news or from consistent themes permeating entertainment programming.

But for some reason the response to getting too wrapped up in TV (your fault, watch less or get off it completely) is quite different than the responses to getting too wrapped up in Facebook (regulate Facebook and don't change our habits!). I think the responsibility of the info consumer is the same in either case, and a lot of that is related to how you handle trust vs skepticism regarding these commercial outlets.


This is not a very thoughtful response.

If Facebook and Twitter were no more useful or trustworthy than 4chan, then they wouldn't exist. It's like saying, "I can't believe people are trusting encyclopedias and cookbooks. There's no reason that readers should consider them any different than stuff written on a bathroom wall."

The problem is the gap between what they sell themselves as and what the actual user experience is. If Facebook were advertising themselves as the cloaca of the Internet, your vigorous user-blaming would have some merit. But they don't. They aim to be in daily use by every person in the world, a trusted intermediary. They either need to live up to that or quit.


You can generally trust your connections to your acquaintances, and companies/personas you know from elsewhere.

It's all the external feeds wanting your attention & commenters throwing in their 2 bits that are the random noise. This is the domain where the propagandists are working, and where there is no and should be no trust. There's no difference between some nobody posting from their personal account, and some nobody posting from their account that they've slapped a logo on. There's no distinguishable difference between 100 nobodies throwing in their heartfelt opinion, and 100 bots repeating the opinion of 1 nobody.


Note that you've gone from the dramatic "it all should be held at arm's length" to the somewhat less dramatic "some of it is ok to trust, but this other stuff isn't". That's precisely what I meant when I said your first comment wasn't well thought out.

Your new position is still too extreme; human trust is not binary. We could go another round where I point out exceptions and you come up with another model that is still too simple. And again, and again. Eventually, we might even get to a point where a useful discussion could be had. But it's not my job to force you to think past your taste for dramatic statements. It's yours.


This article isn't about "human trust" in general, nor how you socialize with your known peers. It's quite specifically about sources that are otherwise completely unknown to you affecting your finances and opinions by pursuing magnified exposure on Facebook. This isn't my distinction, it's what the article raises as a problem.


I don't buy it.

Facebook knows with absolute certainty if an account is legit or not.

Cambridge Analytica, LexisNexis (née Seisint), ChoicePoint (oops, bought by LexisNexis), Palantir, the NSA, and others have all uniquely identified every person, living or dead.

Facebook has even more data. Facebook even has phantom profiles for people without accounts. Of course they know who's legit. They can't not know.

Just like Facebook juices their ad numbers, they're lying here too.

Frankly, I'm shocked, shocked that Facebook would lie about their culpability here.

--

Twitter might plausibly not know. So they can contract with any of the dozens of 3rd parties that provide demographic databases as a service.


But sometimes people create a second account, or an account for some odd reason that isn't directly, obviously them, and they use those accounts properly. Do you just ban these accounts too? Accounts not doing any harm, and potentially being beneficial to certain things like FB group participation or posts.

Finally, if they act like real users since they are, it helps Facebook's bottom line. Booting them would hurt FB and piss off those people.

This is just one example of "fake accounts".


They could handle that use in a few ways. The easiest would be to tie the second account back to the first, so that FB knows but nobody else does. But presumably people are doing that for a reason, and the real solution is to create a feature for one's main account that accommodates that reason. And banning is not an unreasonable solution. Facebook is demonstrably ok with losing small amounts of marginal traffic, especially if that helps limit abuse. Their number one asset is trust, and anything that chips away at that trust is extremely dangerous for them.


Maybe people don't want the two accounts associated? Or they might accidentally post in the wrong one? That can already happen, but the blame lies entirely on the user right now.

I do agree there are likely solutions like you're saying. Pages already work that way: linked to a personal profile, and multiple can be linked.

Though I guess it likely isn't in Facebook's interest to build a feature like you propose if they can help it. If they can ban enough of the actually worthless or harmful fake accounts without messing with the fake accounts that are being used legitimately, keeping the latter boosts their numbers.


Yes, I've always found "YouTube gets too much video to moderate it" a particularly shit argument. Maybe they shouldn't be allowed to have as much content, then.


That doesn't seem to be a useful solution, either. I mean, what tool would you use to manage that?

Limit total videos per website? How do you determine who gets to stay? Money? Clicks?

Moderating all the videos is hard, but if the videos go to a different platform, they still need that same moderation, right?


There are plenty of ways you could do that. Off the top of my head: Every video-posting account must be validated with, e.g., a credit card. New accounts have clear posting limits. Accounts behaving badly get charged for cleanup. If you want to get paid by YouTube you have to go through an extensive validation process, including background check. If you are found to be abusing the platform, money will be clawed back, and if you can't prove sufficient resources to handle a clawback, you instead have to post a bond or find an insurer who will guarantee you.

Sure, some videos may go to another platform. And for most things, that'd be fine. A lot of videos people post on YouTube are just accidentally hosted there, and would be just as good posted on, e.g., Facebook, Twitter, or another personal account provider. But there's a whole class of garbage that YouTube has because one way or another it allows people to get paid. That garbage is a negative externality on the rest of us, and there's no particular reason for us to allow YouTube to profit from it.


So which videos (or people posting) would get the short stick with their content not going up on YouTube in this scenario? No way something like this can ever end well.


Why is moderation needed?


This is less like a nut that needs turning and more like getting the courts to consistently put "bad people" in jail or define "obscene". Just because it has to do with software does not mean it is a problem software can solve.

Even if it is a human doing it, it still has to scale to many humans, and there are plenty of examples of things we have never managed to get right due to scale, well outside of technological limitations.

Even if you could, my example from the article was one human, with one example case, and as another human I didn't see it as a cut-and-dried bot. Try doing that with 20,000. It's not that it would be "costly"; it's that it would be an unavoidable train wreck.


> Things mass-produced cheaply are of lesser quality, but we're willing to make the trade-off as a society because we get things for cheaper.

Problem is, from what I'm seeing, the market bifurcates into "shitty, but cheap" and "OMG, the price, but quality". I want the middle way: reasonably priced, "good enough" quality, and won't break the bank. Tools are a prime example. You had Snap-On/Matco/etc., Sears Craftsman, and "stamped steel sockets from China". Snap-On for the pros (and worth every dime if you're a pro), Craftsman for the guy with the '57 Chevy he works on regularly, and then the cheap shit for the person that needs to turn a 13mm nut once a year.

Now we "get things for cheaper", but those things (in the case of tools) are not something I'm going to let anywhere near a fastener I care to ever reuse. Craftsman is gone, so my option is to chase a Snap-On truck around town [0]. I don't need Snap-On quality, it's like buying a $100K CNC machine to turn a table leg when a lathe will do. I just need a decent socket that won't round the fastener.

Seems like the middle fell out of appliances, too. Only in this case, it looks to me like the high-end is shit, too. Every single person I've known that's had a Viking stove or SubZero fridge has had it worked on at least once. I think our Kenmore stove/fridge and LG washer/dryer are disappointingly not built for longevity, but at least nothing has broken after five years. But that washer isn't going to be washing clothes twenty years from now like old Kenmores.

About the only thing I can think of where the quality goes up and the prices down are cars. Man, it is amazing how long a car can go while doing practically nothing for maintenance. And they are much more complex systems, and live in much harsher conditions, than a washer.

To tie this back to the topic at hand, "we can't be bothered to take the time to make it nice" is a fine excuse if only they'd use that as the actual excuse. "Doesn't scale" just means "too much trouble to make it nice, so you'll have to settle with half-assed". If "that's just the way it is", then I don't want "it".

[0] When I bought Snap-On, it was from a truck that drove from auto shop to auto shop, selling tools out of the truck. They had no retail presence; IOW, you couldn't go down to the Snap-On store, it all came off the truck. Not a practical option for the home mechanic, though I've heard they sell online now.


Wandering far from Facebook (the farther the better IMHO) - Craftsman has been dead to me for years, so I tried a Dewalt socket/wrench set. The name should be around a while, they have a good warranty, and they feel like I shouldn't need it. It was good to see someone trying to fill in that middle-of-the-road, "good enough" and affordable tier.


I’ll take a look next time I see Dewalt, thanks. Harbor Freight is almost good enough, but not quite there, so I’m eager to try some Dewalt.


The way developed countries handle that is through regulation (specifically, fines). That changes the cost equation.

E.g., Germany passed a law that fines Facebook if they don't comply with German hate speech laws.


It sounds as if these developed countries might be quick to throw out the baby with the bathwater.


I'm not sure that analogy fits here; I'm not sure of the size of the fines, but they'd have to be pretty hefty for Facebook to decide to just leave Germany over them.


Well said. Also, one other thing to consider is what companies do and don't consider to be an extension of themselves. Facebook would sooner fire an employee who said women are dumb than even consider putting hard cash towards policing people/groups/pages who espouse identical viewpoints. The latter is always easily rationalized away.


The issue is also one of false positives. Even if you could spot fake accounts 100% of the time, if you also flagged 1% of legitimate accounts as fake, the approach is non-viable. Remember that at Facebook's scale a 1% false-positive rate means 20 MILLION accounts. That's too much to even try sending for manual review.


Trust networks seem to work here: for example I've reported accounts like "Blacksmith Arms" (not a real example) for being not a real person. Blacksmith Arms is a common pub name, it's a business account masquerading as a user which is (or was) against Facebook rules.

Facebook responded on a couple of such reports but the accounts weren't altered.

If my account had sufficient standing, then the action could have been automated. My account's actions could then be meta-moderated by other, more trusted accounts.

You make decisions using reports from trusted accounts, and verify them with multiple more-trusted accounts. Verified actions improve an account's trust level.

Surely Facebook already uses such a system? I imagine Google Maps uses such a trust network too?

Use the scale of the system to manage itself.
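
Something like this, maybe (a minimal sketch; every threshold and weight here is invented, and nothing about it reflects Facebook's actual internals):

    from dataclasses import dataclass, field

    ACTION_THRESHOLD = 5.0   # combined reporter trust needed to auto-action
    TRUST_REWARD = 0.1       # trust gained when a report is later verified
    TRUST_PENALTY = 0.5      # trust lost when a report is overturned

    @dataclass
    class Account:
        name: str
        trust: float = 1.0   # baseline trust for every account

    @dataclass
    class Report:
        target: str
        reporters: list = field(default_factory=list)

        def score(self):
            # Weight reports by reporter trust, so one untrusted account
            # spamming reports carries little weight.
            return sum(r.trust for r in self.reporters)

    def handle_report(report, verified):
        if report.score() >= ACTION_THRESHOLD:
            print(f"auto-actioning {report.target}")
        # Meta-moderation: more-trusted accounts confirm or overturn the
        # action, and reporter trust is adjusted accordingly.
        for reporter in report.reporters:
            if verified:
                reporter.trust += TRUST_REWARD
            else:
                reporter.trust = max(0.0, reporter.trust - TRUST_PENALTY)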


20 million reviews per year * 1 / (2,000 hours per person-year * 10 reviews per person-hour) = 1,000 people

Facebook could afford a thousand full-time account reviewers; they just don't want to.
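
A quick sanity check of that arithmetic in Python (the review pace and working hours are the assumptions above, not known figures):

    reviews_per_year = 20_000_000     # 1% false positives at FB scale
    hours_per_person_year = 2_000     # roughly 50 weeks * 40 hours
    reviews_per_person_hour = 10      # assumed review pace

    reviewers = reviews_per_year / (hours_per_person_year * reviews_per_person_hour)
    print(reviewers)                  # -> 1000.0 full-time reviewers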


You're implicitly assuming that manual reviews would have a 1% or better error rate. I sincerely doubt that's realistic based on what I've seen of assembly line style processes.


A manual review would cost what, a dollar at most?

That's about 1/2000th of FB's yearly revenue. They should be able to handle that.


I agree with this sentiment. While you want to develop a scalable filter that is able to police these kinds of things, the flip side may be even worse — discrimination, racism, etc. Just look at the backlash that YouTube is getting for the recent changes to its algorithm; if Facebook gets even slightly too many false positives, the backlash is likely to be a lot worse.

I would hate to stand in Zuckerberg’s shoes here. It seems like they will lose this fight, no matter what. The solution might be more political than technical; that is, to teach people to be skeptical and use proper judgement when reading news articles.

But that’s most likely too much to ask.


> I would hate to stand in Zuckerberg’s shoes here. It seems like they will lose this fight, no matter what. The solution might be more political than technical; that is, to teach people to be skeptical and use proper judgement when reading news articles.

He could just hire more and better people to do the moderation (or handle appeals). I kinda feel companies like Facebook take the attitude that things aren't worth doing unless they can be automated by an algorithm.


Could always do what 4chan did and just ban all Russian IP addresses.

https://twitter.com/4chan/status/741452104112439296

(joking...)


>, but I don't know that "easy to spot" is a good yardstick.

Anecdotal, but my dog has had a Facebook account for a decade now. He makes appropriate comments and likes too. His profile picture and stats are accurate. Given that FB can detect faces and assign names to them, I assume they know this account is bullshit. There's got to be some aspect of avoidance on their part. Growth is more important than making sure accounts are real, unless people report it.


There's no real incentive to work on this unless ad revenue takes a hit. When YouTube ads appearing in front of extremist videos caused brand money to get pulled out, solutions started magically appearing.


I think they simply don't have an elegant, technical solution for what amounts to old fashioned "policing" or "investigative" work that they simply refuse to or don't want to do because it probably isn't a priority (until it is).

The solution seems akin to the neighbor stealing cable.

Send someone out to go look.



A major failing with Facebook's process for reporting accounts is that there is no way to enter any text to help them figure it out. I have seen numerous accounts that are breaking the TOS, but it's only obvious if you know some other bit of easily verifiable information.

Without fail, whenever I report one of these accounts, Facebook tells me "thanks but they're fine, feel free to block them if you still feel this way". Given the limited information available, I can see why they aren't able to kill the account, but the decision would become trivial if they would take even one sentence of information to help explain it.


They don't care anyway. A while back I got a notification that a sexually explicit picture had been removed from my page. I had never seen the picture in question, said so, and asked where it had been posted, as I was worried that my account had been compromised. They completely ignored the question and I got back a standard 'we've reviewed the picture and found it's not compliant with our community standards' reply.

Their 'community standards' are also bullshit, of course, since the community of users has exactly zero input into the design of said standards. People can share video of violence and murder freely (though it may need a click-through to view), and I'm OK with that, as some imagery can be both violent and newsworthy. But anything sexual is treated as toxic. My wife got banned for a week for sharing a photo of a statue in a museum.

Like many people, I'd really like to leave the platform but not the network of friends that I communicate regularly with. And no, email and or other channels are not good substitutes. Any alternative has to offer at least the same degree of functionality/convenience to be worth using.


> Like many people, I'd really like to leave the platform but not the network of friends that I communicate regularly with. And no, email and or other channels are not good substitutes. Any alternative has to offer at least the same degree of functionality/convenience to be worth using.

Not picking on you specifically, but I've noticed that this is the standard line used by people who love to complain about how shitty facebook (or sometimes twitter or some other social media site) is, but who don't want to vote with their feet.

I have no accounts and never have, yet magically I have been able to keep in touch with everyone that matters in my life. I don't think I've missed out on anything important, either. Then again, I lived a good portion of my adult life before social media was invented, so maybe I just have some skills that a younger generation never acquired. Or maybe just different expectations.

Seriously, this stuff has existed for just over a decade yet people act as if they can't live without it. It makes me fear for the future health and well-being of society.


Facebook has existed for just over a decade but a lot of people and institutions have already moved communications there that used to be elsewhere. Other communications that might not have taken place at all because of cost or other factors are now taking place on social media.

It's sometimes the only place people post to let people know they're sick or well, in or out of danger (e.g., in the hurricanes this year), or had a major life event. Sometimes it's impossible to get this information another way without alarming or traumatizing someone, or tying up someone's phone line or draining their battery during an emergency.

I've noticed some small music venues, theaters, etc. use it as their primary place to post events.

About six years ago, I got one very good full-time job that I learned about through a tweet--I wouldn't have thought to even check if that employer had such a job without Twitter. I've since gotten freelance work that I only learned about through Twitter and Facebook. (I've also gotten work through email, Slack, Google Talk, SMS, LinkedIn, physical posters, and word of mouth).

Lots of people use social media to vet potential dates, and see Facebook classifieds (tied to a real identity) as safer than Craigslist, etc.

Facebook, Twitter and all have flaws and things I wish they'd change--so do my phone company, my cable company and the postal service. But using them is a net positive.


Since you can't live in a parallel universe where you have a Facebook account, you can't possibly know if you're missing out or not.

You also neglect to understand that the world isn't static: communication channels move to different platforms. When I was young, "texting" took place through AOL's servers. Imagine if a business said "well, the web wasn't around when we started, so a website must be useless to us."

I've reconnected on Facebook with childhood friends who have long moved away, and so have a couple other people I know. That's invaluable.

Without a Facebook account you miss out on the one-off things you may say to a group but probably won't call everyone in your address book about, plus that guy you met at that conference last week, and your old college buddy you haven't seen in 10 years. Most of the time they aren't "important," but they can be, such as "My company is looking to hire for ${position}, is anyone interested?"

Facebook also makes getting to know new people so much easier. It's very "low friction" to add someone you "know of but don't know" on Facebook. That can lead to new friendships. Sure, it's not the only way to make friends, but that doesn't discount it.


Your response is also a typical post around here. You're perfectly fine without social media or FB specifically, so why isn't everyone else who doesn't particularly like FB? It isn't always that simple.

I also don't mean to pick on you specifically. Just expanding the discussion.


Picking on you specifically, but this is the standard line used by people who just want to complain about people complaining.

I also existed before facebook, for quite some time. However I can empathise with those that are dissatisfied with the service and don't see a completely viable alternative, and this doesn't make me fear for the future of our species.


Not only that, the reporting process and categories were weird. A few months ago, I got a series of friend requests with similar pictures (a naked woman with the name of some website superimposed). I dutifully reported them, but there was no "spammer" option, and none of the categories really fit. At least one time, I got a response saying that I could block the user, but no confirmation that they were deleted.


Thank you for pointing this out! This drives my spouse nuts. I don't have an account myself, but she's flagged things before, and the options are different every time. When they even have the correct option, the result is always exactly as you say - Facebook says it meets community standards no matter how blatantly it's breaking their TOS. It's absolutely insane some of the threatening stuff she's seen that's considered OK. (Luckily none of it directed at her.)


I've been flagging accounts for years, and the quality of decision-making seems to me to have gotten worse recently. I wonder if the push to hire more moderators has lowered the standard.


One thing I learned from my experience in security is that instantly responding to a threat may sometimes be counterproductive, because you may be reacting to a form of honeypot - one used to determine whether the adversary's newest iteration would bypass your detection or not. Similarly, large online game operators tend to ban bots in waves rather than incrementally, to avoid signalling precisely what tipped off their systems.

I don’t know if the dynamics make sense for Facebook’s adversarial users or not.


Agreed. Even if a given fraudulent account wasn't specifically created to be a honeypot, promptly banning it gives the creator faster feedback with which to iterate on their method.


I can't believe this hasn't been brought up... but there is a massive conflict of interest here. And not due to the slimy but mostly above board reasons posted in the article.

It is deep rooted in the financial industry.

Misleading investors is not punished if you have a big business. Especially in tech.

Facebook stands to lose or gain billions of dollars on market capitalization based on how many people investors perceive to be on Facebook.

If Facebook is over-reporting users by any multiple, they stand to gain billions in additional valuation. And the people using fake accounts don't want to get caught, as they make money from them.

Given the above, how can anyone realistically expect Facebook to really commit to cleaning up fake accounts? This isn't just for Facebook.

Big business has proven to be beyond prosecution unless they engage in the most outrageous behavior.


Exactly, they have zero incentive to remove fake accounts, and several incentives to keep them. This is all just a show on their part.

But I doubt that only 10% of Facebook accounts are real.


Unless I missed it in skimming The Fine Article (I have a couple minutes between standup and another meeting, so time constraints), they're overlooking an, IMO, obvious tell.

For months now, whenever I've clicked on one of the "trending" stories in the right sidebar, I've seen a wall (pun not intended) of public posts that are verbatim copies of one another, right down to the terrible grammar and obvious non-native-English-speaker phrasing.

If you have a bunch of accounts posting the same string of text, attached to the same link (or even just links that are grouped together topically), maybe those should be subject to further scrutiny as possibly being "fake"?
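
A rough sketch of the check I have in mind (exact matching only, with invented field names; a production system would presumably use fuzzier near-duplicate detection such as shingling or MinHash):

    import hashlib
    import re
    from collections import defaultdict

    def normalize(text):
        # Collapse case and whitespace so trivial edits don't evade matching.
        return re.sub(r"\s+", " ", text.strip().lower())

    def fingerprint(text, link):
        return hashlib.sha256((normalize(text) + "|" + link).encode()).hexdigest()

    def flag_clusters(posts, min_accounts=10):
        # posts: iterable of (account_id, text, link) tuples
        clusters = defaultdict(set)
        for account_id, text, link in posts:
            clusters[fingerprint(text, link)].add(account_id)
        # Many distinct accounts sharing one fingerprint -> send for further
        # scrutiny, not an automatic ban.
        return [ids for ids in clusters.values() if len(ids) >= min_accounts]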


I've noticed this too. It's not even just politics. You'll click topics about TV shows and see a ton of the exact same long post, with errors, from different people in different cities. Some descend into Markov-chain gibberish while others appear to be genuinely trying to sway public opinion in favor of Kim Kardashian or whatever.


Except of course that's also how very real grassroots movements look, especially those not made up of American college students; i.e., you have a fairly high chance of accidentally targeting a few thousand genuine users.

Remember that almost all of Facebook's growth potential is outside of the US. Facebook really cannot afford to just kick people off for not being US college students.

The problem is that while it might be easy to find someone who diverges from what the NYTimes thinks the correct mainstream should be, from a beltway-centric viewpoint, it's far harder to differentiate between a bot and some yahoo posting her local mainstream view from a netcafe somewhere in Siberia.


> ...you have a fairly high chance of chance of you accidentally targeting a few thousand genuine users.

Which is why I said "further scrutiny", and not robo-ban.

The tone of these posts is very much, and very consistently the whole "sow dissension" and "poke at wedge issues" shtick that is demonstrably part of how the Russians have been meddling in our shit — and which Facebook, perhaps knowingly, profited from. (Unless you're also suggesting that's somehow part of the "correct... beltway centric" narrative that NYT, &c, "think" we should have, and which is instead some kind of false-flag, psy-ops disinfo campaign — in which case, never mind; please don't waste my time further.)

As your sibling comment describes them, they so often have a feel of "Markov chain gibberish", and not just "not being us college students" or "someone who diverges from what the nytimes think the correct mainstream should be".

EDIT: Parenthetical.


The college-student problem is, btw, a very real concern in social science; i.e., way too many studies have been made using local college students as study subjects, which is useless in a world where the average person speaks bad English and thinks the US is the biggest threat to world peace (however wrong that view might be). I.e., you cannot and should not build your algorithm around what is normal for your locale, but have to look at data that might not be available to people other than Facebook.

Foreign influence is a reality in every single election, worldwide, to the point where nobody has even accused Russia of doing something that the US would be comfortable making illegal when it comes to US-based organizations interacting with foreign nations.

The problem is that the entire debate starts from an obsolete (fake past) view of how interstate relations work in the modern world, one where the diplomatic corps plays the only major role. The reality is that with the modern Internet, traditional diplomacy has given way to much more direct people-to-people interaction, meaning you cannot just assume that a post being foreign and espousing a viewpoint that's not organic to the beltway is the product of a state-controlled botnet; it might be a foreign movement deciding to engage in modern direct diplomacy.

And Facebook cannot grow while remaining a US-centric platform, so the commercial drive for Facebook is to become the facilitator of direct people-to-people diplomacy rather than a defender of the old diplomatic model, as some American politicians obviously want them to be.

And if you cannot use content or "national style" as a guide to spot bots, you have to depend almost entirely on metadata about the IP or client involved, especially as false positives based on content analysis have a potential for market backlash.


I'm talking about literally dozens of verbatim-identical posts on known-divisive issues, from different accounts in different cities — or even countries, though many-to-most of them are already in the US, mitigating so much of your counter-argument — and specifically not the viewpoint they're espousing (except insofar as that viewpoint tends, "with a probability approaching unity", to be driving an extant sociocultural or political wedge further in). This is about form, not content. You really need to understand that or we're talking past one another, and this conversation is moot.

For another example of what I'm talking about, I saw a video a couple of holiday shopping seasons ago, that showed clips from local newscast after local newscast, from dozens of TV stations across the country, all prattling on — verbatim — about how you should "buy yourself a gift this Christmas!" It's a manipulation, that no-one not seeing the broadcasts from multiple markets will ever notice, because they never see another market's telecast.

Obviously, if Facebook can aggregate these posts well enough to present them in a single "trending story" feed, they can damned well perform the further analysis to check whether they're dropping verbatim posts, and whether there's anything else hinky about the accounts submitting those identical posts. These accounts, if they're fake, are being used to sow disinformation and dissension. They are not dialogue or diplomacy, let alone "direct people to people interaction".

Facebook's "right" to grow in non-US markets is completely orthogonal to that, and also — IMO — way less important than, you know, "a functioning democracy."

EDIT: phrasing.


If you have to defend democracy by restricting one of its foundational pillars, what are you really defending?

I am not arguing that the problem of money/power being used to amplify speech is not real. I am arguing that it's something we have to deal with as a part of how real-world democracies work when they're not constrained by a Jacobin state where the media have to abide by a very narrow set of standards for what can be printed, defined by someone who doesn't really answer to anybody. And it's sure as hell not limited to Russia.

Fake news and media manipulation really aren't a new problem. Hearst and Pulitzer used to make their living from sowing dissent and anger, and we call those days the golden age of journalism. We have a multi-trillion-dollar advertising industry that does nothing but manipulate people into doing things/buying things they might not have done otherwise. Not to mention the circus of day-to-day politics in pretty much every democratic state.

The notion that democracy needs to be protected by a powerful, benevolent central committee (with universal authority and no restrictions) is in many ways what separated communists from socialists back when the old European empires fell apart and new systems had to be created. And while the communist states did hold out far better than many democratic socialist states, they were not particularly attractive societies for anyone to live in.

We live in a world of 7 billion people, most of them in a partially shared economy, so for the US to have to live with foreign people interfering in US elections is not anti-democratic; it's widening the definition of demos to everyone affected by an election, just as the US public itself reserves the right to try to influence foreign elections.


Facebook truly needs an independent group to audit their site and give them a grade. Anyone interested in doing this with me?


Well, if you can't robo-ban then that's kind of the problem. As you say, identifying likely bad accounts isn't a problem, and they definitely can do that. But what do you do next?

Let's suppose that you identify just (on FB scale) 10 million likely fake accounts. There are only three options: apply further scrutiny, which would take hundreds of workers and cost lots of money, and isn't worth the (minimal) benefit to FB; robo-ban them, which inevitably means false positives and lots and lots of people contesting the ban (real bot accounts are just as likely to contest as normal users), costing time in manual review; or do nothing unless the account is reported for something horrible, which seems to be the only cost-effective option.


By "policing fake accounts" they also mean "hunt down real accounts who don't use their real name, and ask for a scan of your ID".


If you don't want to abide by their policies, don't sign up. It's not that hard.


I understand the distinction, but how do we expect them to do both? If they can't enforce a real-name policy (by backing it up with required ID) then how can they possibly remove fake accounts which often use fake names?


Just watch account behavior, IP addresses... There are many real users who don't use their real names, for good reasons. Fake accounts are also used for false clicks, which Facebook doesn't mind.

And by the way, there is no way to really know if a name is real or not.

I'm just saying Facebook uses the real-name requirement as an excuse... but there are other ways to fight fake accounts.


That was one of the final straws for me. I don't think anyone should trust Facebook with enough information to completely take over someone's real life identity.

FB feels authoritarian in a very Orwellian way, and it is even scarier that, much like Orwell wrote, nobody seems to care. We might not be burning books a la Bradbury, but we seem to be burning away our rights to privacy as a complicit society.


Do you feel the same way about other major institutions like Google and Amazon (they're still getting there)?


Of course. This kind of power is so easy to abuse.

Unfortunately, Google and Amazon are a little harder to abandon than FB because of their utility...


Just wondering. You say "of course", but the amount of negativity that Facebook gets relating to what we are talking about, compared to other giants like Google or Amazon, is incredibly slanted, mainly on tech- or geek-centric sites like this one or many reddit subreddits. So it's not always obvious to me whether someone is cool with Google but not Facebook.

I successfully use Bing for 95-99% of my search engine searches. I haven't made the time to get off Gmail. It'll take a while. I don't believe getting off Google is any harder than getting off Facebook for many people. Because if you're not using Google search or Android with Google apps, Google's utility isn't that high for you. Gmail also isn't miles ahead anymore.

Unfortunately I'm so into the Alexa ecosystem I'm nowhere near away from Amazon.


A real shocker for me is how little the big tech companies are using their AI powers for defensive purposes. Each smart-project they undertake to detect your friends in your photos and identify cats in pictures is done for a directly profitable feature. But with Twitter and Facebook both getting caught letting foreign actors buy political ads and use fake personas to influence the population, it seems like an existential risk to them. Wouldn't it be wise to direct some of this work to detecting fake accounts, providing an honest experience to users and being more honest overall?

Yeah, Twitter and Facebook would show slower growth numbers. But they would become respectable and sustainable for society, which seems massively more valuable financially in the long run.


Just how do you know what these companies are using for fighting abuse (fake accounts, spam, account hijacking, etc)? Dealing with abuse is an adversarial process, where both sides are trying to adapt to what the other is doing. It's not like most programming where you can come up with a provably good enough solution. It also means that time is a valuable resource.

There's nothing to be gained by e.g. Twitter announcing that they're now using ML on the profile images to detect fake accounts. That'd let the bad guys know right away that they need to change their pattern. Instead what you want is for them to first waste some time before they realize that something is wrong, and then waste more time experimenting on exactly what changed, and then even more time figuring out how to bypass it.

So unless you're working on abuse at one of these companies, I don't understand how you can so confidently talk about what they are and aren't doing.


Could that be a sign that current AI is not as intelligent as tech blogs suggest?


And that AI is a bubble within a bubble?


I listened in online to the intelligence and judiciary committees. Kennedy and Cruz were pretty excited to bash Facebook's head over its oversights. And they're probably hoping to get Facebook and Google both back in front of them. A bunch of other members spent time trying to get all three to say/agree that they need to be regulated.

They are already in the middle of a very expensive and shackling threat. It's just playing games to come out less behind (even if that's just going to be increased lobbying).


> it seems like an existential risk to them

Maybe the problem is that they don't think this is true. How will it put them out of business? Lawsuits? Government forcing them to pull the plug? People not using the platform because fake stuff is there? I'm not convinced that the end result of any one of these paths will be strong enough to wreck their businesses.


If people realize that not many users are actively using these platforms it would decrease the value of ad buys.

I mean, if 20% of all Twitter users are bots, that would be detrimental to the sales department. And if Twitter knew how many users are bots and did nothing, it might be considered a form of fraud and of deceiving investors.

Why should I buy overpriced ads on a platform if the ads will not be seen by legitimate users?


I think legitimate users will see them, but so will bots or fake users. I think there is a lot of click fraud, and has been since the early days of ppc. But we're still here.

Now, I can't really answer your question, which was "why" should you still buy ads on the platforms? I for one don't. But many, many people do. Just like they buy ads on TV knowing a lot of people are going to fast-forward through them.


The market prioritizes short-term results over long-term public goods. Think about it in microeconomic terms: aiming for a simplistic price equilibrium will give you the lowest-common-denominator outcome. The premise of the utility function selects for crap.


Facebook and Twitter have many criminal organizations catfishing people for money and blackmail. It's a larger problem that should be tackled than botnets posting memes. (IMHO)


What if this "Kevin Eversely" character had actually immigrated from Macedonia to Minneapolis? Just because many of your friends are not from your region, by itself doesn't mean anything.


Right - one odd thing alone isn't enough. I worked on a platform that allowed contributors to sign up and provide content from their area. A user provided some fishy content with a very heavy slant, so I started looking into his profile. The big thing that really confirmed we were on to something was that his profile photo was one of a retired athlete. Busted by tineye.com. Once we dug in some more, we found it was a local politician writing under a fake name.

So yes, having all your friends in another area doesn't mean you are fake, but it may be an easy thing to look for that flags the account for more inspection.


This needs care, but I don't see why they can't just rate-limit key activity from high-risk accounts that are reported, then let the person carry on. If they're genuine, then things like friend requests will get accepted; if they're fake, those will get a poor response and seal their fate.

Simple: earn trust through good behaviour. And it's not too heavy-handed for the false positives.
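
For illustration, something like this (the limits and the acceptance-rate threshold are all invented numbers):

    NORMAL_DAILY_REQUESTS = 50
    LIMITED_DAILY_REQUESTS = 5
    MIN_ACCEPT_RATE = 0.3   # genuine users' requests mostly get accepted

    def daily_request_limit(account):
        if not account["reported"]:
            return NORMAL_DAILY_REQUESTS
        sent = account["requests_sent"]
        accepted = account["requests_accepted"]
        if sent < 20:
            # Not enough signal yet: stay rate-limited but let them carry on.
            return LIMITED_DAILY_REQUESTS
        # Genuine accounts earn back normal limits; fakes seal their own fate.
        if accepted / sent >= MIN_ACCEPT_RATE:
            return NORMAL_DAILY_REQUESTS
        return 0  # cut off once behaviour confirms the report

    account = {"reported": True, "requests_sent": 40, "requests_accepted": 22}
    print(daily_request_limit(account))   # -> 50, behaviour looks genuine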


Facebook's login page encourages you to set up a new account ('always free') despite their knowing you have an account.

Makes me wonder how many sock puppets and variants there are out there. Regarding "Russia", surely we should be saying "Eastern European gangsters" rather than a nation state?


That depends - does Hollywood count as "Eastern European gangsters"?

They are probably responsible for every bit as much phony account influence peddling as anyone else..


I wonder how much of the problem would be eliminated if Facebook accounts had a one-time $5.00 setup fee?

I'd definitely be willing to pay a one-time fee for a 90%+ reduction in fake accounts.


But the vast majority of people wouldn't be happy about that at all. Even if most people ended up paying, the loss FB would take in trust and reputation from users would be catastrophic.

Even WhatsApp, usually seen as a great startup in its growth and sustainability, never charged more than a fraction of its user base the $1 fee.


Do they know for sure who is sitting at the computer?


Well. I've reported dozens of fake accounts.

All of them were "REAL" according to FB. I stopped reporting. They can have their fake accounts.


Question: can any social platform claim to police bot accounts while receiving investment money from the same exact forces running the bot accounts? Seems to me that there is a bigger problem here than just the fake accounts... no?


Who would have guessed that the first large-scale achievement re: the Turing Test would have happened on Facebook. We are watching bots be confused with humans and vice versa on an enormous scale.


Yet I've contacted them twice about a fake profile which they've refused to remove despite passport and driving license evidence.

Does anyone know of a better channel through which to pursue this?


Honestly, who cares if there are fake accounts (as a user)? If you are an investor I could possibly see the interest, but even then, who really cares? As long as Facebook looks big and still growing, that's all that matters.


Facebook could just charge $1 per sign-up and the problem is gone (faking credit cards etc. does not scale).

Of course, as others have said, they prefer having more "users".


Are you honestly suggesting that requiring having a credit card is in any way a solution to fake accounts? That's a ridiculous proposition not just for Facebook, but any service that operates in developing areas or serves users under the age of 18.


They won't admit they have a fake-account problem because it would mean admitting they've been selling ads to bots and ripping off their advertisers. Too bad it's true.


I just boosted a couple of posts on Facebook. Every single one of them had likes from Bangladeshi click farms. I am not even kidding. Each of the likes had a Western-sounding name with about 30 friends, all of whom were posting in Bengali.

I am not sure if the show Silicon Valley got the idea from Facebook or the other way around.


Crowdsource this problem. Add a "report fake account" option. If some number (x) of reports come in from different IP addresses, etc., sandbox that account. Simple.
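
A minimal sketch of that threshold (x is invented; raw counts alone are gameable, so reporter reputation would presumably have to factor in too):

    from collections import defaultdict

    REPORT_THRESHOLD = 25        # hypothetical x: distinct reporter IPs

    reports = defaultdict(set)   # target account id -> set of reporter IPs

    def report_fake(target, reporter_ip):
        # Count distinct IPs, not raw report clicks.
        reports[target].add(reporter_ip)
        return len(reports[target]) >= REPORT_THRESHOLD   # True -> sandbox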


Then what happens?

If I were the adversary, I would respond by dedicating a fraction of my fake accounts to reporting thousands of random Facebook users. Gum up the system.


Will they also be removing U.S. propaganda bots, or just foreign ones?


Maybe.


Fuck Facebook. Delete your account. A corporation's singular goal is to maximize profits, by definition. That's exactly what they've done by taking Russian money and using it to influence our elections. If you want justice, it's simple: delete your account and don't use any Facebook services. Then we need laws regarding political "ads" and the internet, in addition to a lot of other changes our system obviously needs. Facebook will fight change as long as it affects their bottom line.


Hillary Clinton spent more on her wardrobe than the Russians spent on Facebook. It was roughly $160k. Hardly enough to influence an election when campaigns spent close to a billion dollars.



