Hacker News
Google reinstates account of thomasmonopoly (twitlonger.com)
90 points by tshtf on July 27, 2011 | 56 comments



What is troubling about this is not that Google mistakenly removed the account.

What is troubling is:

1: This guy had to go to the fucking MOON to get his account back, ranting and raving on Twitter like a madman, and it took nearly a week!

2: Google's support sucks. God help any of us who ends up in a similar position.

3: Even getting Google's attention is not enough. Since there is no formal appeals process, you have to harass Google employees through informal channels. Matt Cutts even said on an HN thread that "Google took the appropriate action". So which is it? Did he break the TOS or not? This is the whole issue. Without a formal process, you just get knee-jerk reactions. Anyone other than this guy would have stopped trying after Matt essentially said he wasn't getting it back.

4: Even after all that, it took a week before he even knew why his account was deleted.

5: For all Google's talk about data liberation, they suck at this. Takeout only gives you access to useless social crap, and if you want to download your email it's a long process that involves reading guides, using POP, IMAP, etc. How about just giving me a link to download my mail?

6: No warnings, no contact- This could have all been solved if Google asked first and shot later.

* Link to where matt_cutts looked into this and still decided he cant have his account back: http://news.ycombinator.com/item?id=2795465


It is absolutely true that Google has their support issues, and they should be criticized for them, but...

Do you seriously think there is any other US provider (i.e. subject to US child pornography laws) that would have behaved any better? Given the allegation, it probably was legally risky for Google to keep his data at all. There are laws that criminalize even the "innocent possession" of child pornography. Had things gone another way, I presume Google would have argued they were preserving evidence, but they have to be very, very careful regardless. I'm absolutely certain there are many providers who would have behaved worse (account closed, no comment, no investigation, no appeal, data given to the police then deleted).

In the alternative scenario, many of the things you want Google to do could easily be illegal. After an allegation, any warning, contact or access to data could easily have been construed as aiding and abetting a crime in progress and/or obstruction of justice (particularly by an ambitious, headline-seeking prosecutor). What you don't seem to understand is that, once the allegation was made Google couldn't do squat for @thomasmonopoly without risking criminal charges until they determined the allegation was false. And that determination was going to take time no matter how you slice it.

Personally, I find it hard to think of what Google could do better in this sort of situation. My two suggestions (and I'm not sure they're practical):

1. Commit to manually reviewing every automatic suspension for these kinds of potential criminal allegations.

2. Be clear about when an account is being suspended and investigated versus suspended with a final determination made.


Why not remove/disable the offending material? From what I've read he didn't have any calendar appointments like "Friday 12:30pm Take windowless van to the candy store". Why kill access to completely unrelated services because an automated (possibly error-prone) process flagged a picture?


None of this is news. People complain about Google's lack of support all the time; it is a well known issue.

Also, complaining about having to use POP or IMAP to pull out your email? Seriously? It's a standard, it lets you import them into any email program you want, and just giving you a compressed archive of everything would be a nightmare of compatibility if you wanted to do anything with it but read the messages in a text editor.


Matt Cutts did look into it.

But he very specifically didn't say that he had "decided [Thomas Monopoly] can't have his account back".

This is presumably because that decision isn't even Matt's to make. (He is the head of the webspam team at Google.)


It's well established that Google's support system is close to non-existent for non-paying customers. Does it suck? Yes. There's definitely tons of room for improvement. But at the end of the day they're still providing you with tons of free, accessible products that are some of the best in the business (paid or not). This isn't much more than any other free service provider on the internet offers these days: half-official forums for free customers, direct support lines for the paying.

But regarding "freeing" your data from Google, it's hilarious trying to see people argue that it's somehow difficult or hidden. It's not. Takeout lets you grab all of your social information. Any popular desktop mail client can connect to Gmail over IMAP and automatically download every last e-mail in your account. I click one checkbox on Google Docs and then hit download to have a backup, or I can connect through WebDAV using a popular FTP client. There are similar easy solutions for Picasa, Calendar, and the rest. Just because they don't provide you with one shiny big button to do all the work for you doesn't mean they don't provide you with very accessible ways to backup your data offline.
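To make the "any IMAP client can do it" point concrete, here is a minimal sketch of an offline Gmail backup using Python's standard-library imaplib. The function works against any IMAP4-style connection; the address, credentials, and filenames in the usage comment are placeholders, not a recommended setup (and IMAP access must be enabled in Gmail's settings first):

```python
import imaplib

def backup_inbox(conn, write):
    """Download every message in INBOX as raw RFC 822 bytes.

    conn  -- an imaplib.IMAP4-style connection, already logged in
    write -- callback taking (message_number, raw_bytes)
    """
    conn.select("INBOX", readonly=True)          # read-only: never modifies flags
    typ, data = conn.search(None, "ALL")         # data[0] is b"1 2 3 ..."
    for num in data[0].split():
        typ, msg_data = conn.fetch(num, "(RFC822)")
        write(num.decode(), msg_data[0][1])      # msg_data[0][1] is the raw message

# Hypothetical usage -- Gmail's IMAP endpoint is imap.gmail.com, port 993 (SSL):
#   conn = imaplib.IMAP4_SSL("imap.gmail.com", 993)
#   conn.login("you@gmail.com", "password")
#   backup_inbox(conn, lambda n, raw: open("msg_%s.eml" % n, "wb").write(raw))
#   conn.logout()
```

Each message lands as a standard .eml file that any mail client can re-import, which is exactly the portability argument for IMAP over a one-off proprietary dump.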

At the end of the day, an automated system did what it was designed to do. Google should definitely work on its customer service channels, but until then I'll happily continue to use their free services and backup on a scheduled basis.


I question that Google took the appropriate action. The default action for an apparently legitimate account[0] that violates the ToS in such a manner should be to block access to the content in question - perhaps even related content until a human can review it. It should never be to entirely disable the account without warning.

[0]I'd be shocked if Google doesn't use data mining and heuristics to discriminate between legitimate accounts and spammers, pornographers, etc....


This isn't just the case of a ToS violation, it's a potential federal crime. I doubt very much that you actually looked up what US law requires in this situation.


(I'm totally speaking for myself personally here, not in any official capacity.)

As the post indicates, Thomasmonopoly's account wasn't disabled for something like misusing AdWords; it was an investigation of potential child pornography. Thomasmonopoly himself says in his write-up that "I too found the image bordering on the limits of what is legally permissible and hoped to highlight the fact that it is allowed to exist within a grey area of legality."

Google has a zero tolerance policy for child pornography. I am glad that Thomasmonopoly got his account reinstated after a full investigation, but it's also incredibly important that Google takes appropriate action on potential child pornography, and United States law compels companies to react to child pornography in certain very specific ways.

For what it's worth, I got a chance to do a question and answer session with some congressional staffers earlier this year, and one of the things I said was that (in my personal opinion), current laws on child pornography were suboptimal.

Here's a quick example from a few months ago: http://www.winknews.com/Local-Florida/2011-04-29/Fla-Senate-... "The Florida Senate voted to extend the state's anti-child pornography law to include not just possessing but also intentionally looking at such images." Looks like the text of the bill is here: http://www.flsenate.gov/Session/Bill/2011/0846/BillText/File... and I don't see any exemption for people who fight or take down child pornography. And that was literally the first link I found after doing a search on Google.

So in theory, looking at images in the process of trying to fight child pornography could be illegal. You don't need to dig far to find similar brittle examples. That's why I'm glad that I work on webspam and not on trying to stop child pornography.


> Google employs an automated system to scan user storage for violations of their ToS

So if I take pictures of my children in the bathtub with my Android phone, I am risking having my entire Google account deleted with no recourse except to try to 'make a stink on the internet'?

Because frankly, I would never take photographs to be developed at Walmart because they're well-known for calling the cops on parents who took pictures of their children.

Is Google the new Walmart?


What exactly is your complaint here? How do you expect a multinational corporation to deal with this sort of issue? How do they know that you're the parent, and not just some pervert who likes pictures of kids in bathtubs? I'm fully supportive of both Google and Walmart in this.

I'm a parent of four young kids, so I can appreciate that those tub pictures are adorable. If I really wanted that picture, I'll take the hassle of finding a smaller print shop willing to do it over enabling child pornography any day.


There's a difference between:

-- the police came and charged my tenant with selling drugs, so I kicked him out.

and

-- every day I obsessively searched through my tenant's belongings when he was at work to make sure he wasn't violating any laws, in my sole opinion, and then when I thought I found something illegal, I kicked him out.

Google can't judge what is or isn't child pornography. Lawyers and judges can't even do it. Nothing is child pornography, no matter how explicit, unless it appeals to the "prurient interest".

And in fact I'm not even located in the United States. The child pornography laws in my jurisdiction are less vague and more narrow than those of the United States. Is Google's crawler programmed with laws of every jurisdiction worldwide? I rather doubt it.

It would be perfectly legal for Walmart to take a non-proactive approach to photo developing. Machines do it all anyway - the only human step is picking up the stack of photos and putting them in an envelope. But Walmart has directed its employees to search through all photos, searching for kiddie porn, and to call the cops. That's a personal stance of Walmart's CEO.

Google is similarly protected - it has no legal liability in the United States for serving as a passive conduit for anything its users care to distribute. It's unfortunate that Google's CEO is adopting a similar stance.


So as a parent I am forced to be fearful and jump through hoops to obtain pictures of my kids?


Yes. Welcome to US law. Take it up with your congressman, not Google.


> United States law compels companies to react to child pornography in certain very specific ways

Maybe they don't have a choice.


Can you give us an idea of the scale of the problem? Is handling flagged accounts something that could plausibly be improved by creating a "large enough", dedicated team? Or do legal and/or practical constraints (e.g. too many cases to look at) prevent that?


(Again, just answering for me personally, not with my official Google hat on.)

Good question. Just like a good programmer finding a bug should ask "How could I prevent this bug from happening next time?" it's pretty common that when a situation like this occurs, the relevant people at Google ask "How could we prevent this situation from happening next time?"

In this case, I believe the solutions that have been proposed elsewhere on this thread (letting the outside person know about the suspected violation, letting the outside person have access to their data) can be fraught with potential legal difficulties.

I'm sure that people at Google will be discussing what different steps could prevent a situation like this from happening in the future.


That's the trouble with child porn: when something is defined as "I know it when I see it", there are going to be a lot of cases incorrectly classified. This reminds me of the Walmart child porn case: http://trueslant.com/KashmirHill/2009/09/22/in-defense-of-wa...

To highlight the difficulty of the "what is child porn?" problem: it wasn't just Walmart's officials, but the local police who took the complaint, the prosecutor who initiated the case, and probably a number of other officials in the pipeline who made the incorrect determination.


I find it difficult to believe he did not suspect these images triggered the ban. In that regard, I find it disingenuous when he said he had no idea why his account was terminated.


Really? That slideshow is benign and I've never heard of someone having their Google account banned because of something like this.

I would have assumed AdWords a hundred times over.


The image in question was removed. His own take on the image: "I am not angry at Google about this, as some might suggest, only because I too found the image bordering on the limits of what is legally permissible and hoped to highlight the fact that it is allowed to exist within a grey area of legality." I find that sentiment hard to reconcile with the "I have no idea" claim. I don't think he was outright lying, but I suspect he was in denial.

edit: Just realized I said "images" and not "image" in my original post. I was thinking about the image even he thought was a legal grey area.


I'm wondering what type of system all the naysayers would have in place that would properly balance the requirements of google, the law, the community, and the individual users - and how they would scale that to ten million+ users.

If you can describe it, design it, deploy it, and operate it - you might have the beginnings of a multi-billion dollar social network on your hands. Rather than rant at google/facebook for their inadequacies, go out and ship something that will truly demonstrate how wrong Google/Facebook are.


That's like me telling you that certain food sucks and then you telling me "you do it then". The food sucks, I didn't say I could make it better. If I could I would. However, the food still sucks. Just because I cannot fix it doesn't mean that it doesn't suck.


If you can't even point in the direction of a fix, then there is the possibility that the food can't be made better. If it genuinely can't, I think you have to revisit the question of whether or not it sucked in the first place.

To take an extreme example, if you told me a debugger sucked because it couldn't predict in advance whether or not running your program would end with an exception, I'd tell you that you didn't have a reasonable or useful scale for debugger suckiness.


>>If you can't even point in the direction of a fix, then there is the possibility that the food can't be made better

So what you are saying is that if something sucks and cannot be made better, then I should not say it sucks. Maybe you should not even be making the food in the first place. Start from scratch and make something totally different. Saying that something sucks is a valid criticism. Saying why it sucks would be even better, so that the person taking the criticism understands why, but that is beside the point. If something sucks, it sucks. Caveat: suckiness is in the eye of the beholder. I'm sure it sucks for the person whose account gets closed and who doesn't even know why.

>>To take an extreme example, if you told me a debugger sucked because it couldn't predict in advance whether or not running your program would end with an exception, I'd tell you that you didn't have a reasonable or useful scale for debugger suckiness.

Yeah, suckiness is in the eye of the beholder. You believe it doesn't suck, I believe it does. To be fair, your example is a really extreme case and that is not what we have here, therefore it is completely irrelevant.


I think you have missed my point.

I absolutely agree there is a degree to which suckiness is in the eye of the beholder, but there are still measures of suckiness that are unreasonable and useless. Going back to my debugger example, the standard of suckiness proposed is provably impossible to satisfy (it is equivalent to solving the Halting Problem). That's not just an "eye of the beholder" difference.

The next step is realizing that unless you've thought about how you might fix the suckiness and made a good-faith effort to understand the constraints the cook is operating under you don't know where on the reasonableness scale your definition of suckiness falls.

You can say the dessert sucks because it doesn't have fresh mangoes, but if mango season was months ago should the cook take you seriously?


Yes, and the chef should consider adding mangoes during mango season.

Do you know how many things sucked until we had the knowledge and/or technology to make them better? A lot, I reckon.

It wouldn't be unreasonable to complain that cars sucked because there was no place to safely put hot coffee whether I knew a solution was to put 50 cup holders in the car or not.


> "I'm wondering what type of system all the naysayers would have in place that would properly balance the requirements of google, the law, the community, and the individual users - and how they would scale that to ten million+ users."

This exists - it's called paying for something you use every single day, and is a core part of your existence (both online and off).

Why more people aren't self-hosting or purchasing email hosting services (and yes, that includes SLAs and proper customer support) is beyond me.


Many people - like me - can't self-host email since they don't have a fixed IP address.

And paying for a host, why bother? Just get a domain and then you can change your DNS MX records in an hour or so if Google ever bans you.
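For the curious, the switch really is just a zone-file edit. A hypothetical BIND-style sketch (the TTL and the replacement hostname are illustrative; aspmx.l.google.com was Google's primary MX host at the time):

```
; Before: mail for the domain handled by Google
example.com.  3600  IN  MX  10 aspmx.l.google.com.

; After a ban: repoint at any other provider; the change
; propagates within roughly the record's TTL
example.com.  3600  IN  MX  10 mail.other-provider.example.
```

Owning the domain is what makes this possible; with a bare @gmail.com address there is no record you control to repoint.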


This will make more sense when services you pay for are free from this kind of stuff. As it is, well, people pay for cars now and look at all the outright ripoff crap people go through from mechanics who charge exorbitant amounts to do simple repairs, all the while leaving the car in the shop.


Hey if you don't like the broken car that we sold you, go to an iron mine and mine yourself the materials to make one yourself! </sarcasm>

I've seen this false argument too many times on the internet. When can we be done doing this?


Except in this case it's a free car that you've used for years with free gas.


In exchange for being a product. Not very different from in exchange for money.


It's completely different. I don't see how you can't make the distinction between paying lots of money for a product and doing what essentially amounts to filling out a survey for a free prize. The comparison is apples and oranges.


I can make the distinction. Although they are obviously different, there is an obvious similarity: the exchange of two valuable things. Money for a car. Exposure to advertisements and loss of (some) privacy for a service.

Black and white is rare.


Additionally, he has said that Google is proud of their zero-tolerance policy.

Seriously? Does anybody think this is OK? If there was a crime, shouldn't it be the legal system that dealt with it, rather than some corporate entity, like Google?

This sounds an awful lot like a witch hunt to me. The least they could have done was inform him that they had handed the case over to the legal authorities.


> If there was a crime, shouldn't it be the legal system that dealt with it, rather than some corporate entity, like Google?

Yes.

But corporate entities must handle these situations carefully. If your web host botches the DMCA process, they are liable. If Google's support technician accesses child pornography on Google, that technician risks child pornography charges.

It's an absolute case of a corporation covering their ass, but absolutely needed in this legal environment, especially around radioactive allegations like possessing child pornography.


I still don't understand why Google can't advise the user that the account is suspended for legal reasons and that it has been handed over to the authorities to deal with. Just closing it and going silent is really not acceptable, as I see it.


In some cases, informing the user of the presence of a subpoena is illegal. These secret subpoenas are typically used in conjunction with the PATRIOT Act, but could be in use elsewhere.

For example, Nicholas Merrill (John Doe, of ACLU v. Ashcroft) was hit with a national security letter that barred him from disclosing anything about the subpoena, even that he had received a subpoena: http://en.wikipedia.org/wiki/Doe_v._Ashcroft

As a response, some providers have started providing warrant canaries: http://www.rsync.net/resources/notices/canary.txt

They're obviously not infallible and have never been tested in court, but at least companies are trying.


OK. So are you suggesting that this might have been the case in this situation? I would suggest that unless it is, then the proper reaction from Google would have been to inform the user what was going on. I understand that Google are covering their backs - and I appreciate that they should do that. What I'm saying is that inasmuch that it doesn't pose a risk (in legal terms) for them to keep the user informed, they could be expected to do so.


troels, I completely agree with the sentiment that "inasmuch that it doesn't pose a risk (in legal terms) for them to keep the user informed, they could be expected to do so."

The issue in this case is that with a potential child pornography situation, the legal risks are much different and harsher in all kinds of unexpected ways.


I would be interested in _how_ someone would develop an automated system for detecting child pornography; as has been said, not only is the legal definition subjective, but it would appear that possession and analysis are themselves illegal.

That would make building a training set hard ....

I can only imagine those responsible for developing such systems would require serious therapy as well.


Wow, Google couldn't just tell him, "dude, you got kiddie porn in your album". I commend this person's frankness but it still comes back to the lack of communication happening with Google support.


That could be construed as assisting in the commission of a federal crime.


tl;dr version, anyone?


Google mistakenly flagged a legal image he uploaded as child pornography.


They have an automated system for this. If your account is flagged they don't talk to you about it for legal reasons.


Looking at the link to the pictures... they aren't even remotely 'risky'... except maybe in a Church or extremely conservative home.


He explicitly states early in the post that the image in question is no longer in the Picasa album.


I guess I'm still not sure why a questionable photo in Picasa means a total ban from all Google services.

Or, couldn't users get a warning that they're about to be perma-banned? E.g... "The Google Gods have banned you. You have 48 hours to save any data you wish to keep. Afterward you will be unable to log in & access any Google product." They'd still have backups of everything regardless of what the user does in that 48 hours if they'd like to initiate legal proceedings or whatever.


"The Google Gods have banned you. You have 48 hours to collect your child pornography."

Yeah, that would be fantastic.

They should, however, tell you precisely which part of the TOS you violated - down to highlighting the specific words.


>"The Google Gods have banned you. You have 48 hours to collect your child pornography."

This is completely a non-issue. If the user already potentially committed a crime, Google already has the evidence. Whether or not the user is able to go back and redownload images that may or may not be illegal that may not have been saved is completely inconsequential. The issue is for the false-positive cases or the borderline cases. Google is not the law; let the judicial system determine guilt.


> Whether or not the user is able to go back and redownload images that may or may not be illegal that may not have been saved is completely inconsequential.

With all due respect, unless you're a lawyer, your comment has zero weight.

If I rent a hotel room and use it to store a barrel of cocaine, and a cleaning lady discovers it, I'm fairly certain they are 1) under no obligation to me to allow me to retrieve my cocaine, and 2) are probably legally prohibited from allowing me to collect my coke barrel.


Point taken, but these situations are much more fuzzy. If I understood correctly, this was an automated detection of possible child pornography. In such a case it would seem likely that Google acting proactively has no culpability. The only legal requirement (from the many articles I've read on the subject) is to remove illegal content once the company is made aware. Automated detection based on heuristics likely would not fall under that category.

If specific pictures are under question and a human deems them likely to be illegal they can disable access to those pictures. They have no legal responsibility to prevent access to legal data.

Edit: downvoting without a rebuttal is cowardly at best.


> downvoting without a rebuttal is cowardly at best

I didn't downvote, but I disagree. I don't have to justify every decision I make.


Well that goes against HN etiquette, which I guess is also your choice to ignore :)



