A.K.A. Zuckerberg did something that was supposed to be impossible, and to explain how he did it they have now announced a new feature they just thought up as cover.
Came here to say this :). Abuse of admin privileges on social networks has always been a problem. Only recently are the networks worth billions of dollars though.
It raises the question...what other liberties did Zuckerberg take on facebook?
The reddit thing was petty, but didn't really do any material damage. Uber's God View was more troubling...showing how you could track politicians/celebs usage of the platform.
But facebook...this admission opens the door to speculation that Zuckerberg could have theoretically taken ad hoc 'God Mode' actions against users of his personal interest...read their messages, view their activity logs, etc. Only Facebook knows, for now.
I have to agree; I mean, it's a concern, but not a new concern. He's the CEO of Facebook; of course he can potentially read anything stored on their servers. Nothing has changed in that regard. And not just at Facebook - any company. Why would one ever assume otherwise?
Customarily, companies whose shares trade on a stock exchange are considered “public” companies. That’s why it’s called “going public”, which Facebook did in 2012.
It probably looks that way, but I don't think Facebook would just up and announce a new feature to save some face. My guess is that they really were going to make un-sending into a real public-facing feature, and they made a premature announcement due to this Zuck unsending controversy.
I'm not saying it was developed for Zuck in the last few days. Yeah, I think it was definitely an internal tool or secret ability before. But one could easily imagine that they really were going to roll it out to public users sometime -- and now they're rolling it out earlier to save some face.
> this is not how feature development works at software companies
Is it ?
There are tons of features that never get published because they would cause user confusion, companies don't want to support them officially, or they have business side effects that make them unsuitable to open to a large audience.
There are even features that are only open to support staff or admins, to prevent normal users from abusing them.
Everyone, this is exactly what happened. He just got caught, and within a few days they announced this as a feature. If this is a coincidence, the odds of it happening are 99 trillion to one.
Facebook does this shit all the time, they do not deserve the benefit of the doubt.
This story feels exaggerated, given the weak link between Zuckerberg's messages and the new feature.
They also add that Secret Chats in Messenger, WhatsApp and Instagram already support removing an already-sent message. Facebook decided to also offer this feature for regular Messenger chats.
I guess any headline combining Facebook, Zuckerberg and privacy concerns can make it to the top these days. Modern online journalism is a whole other subject.
But... wasn't this product announcement only made after journalists started asking about the Zuckerberg messages? The original story WASN'T exaggerated; Facebook is just trying to play it down by making it sound like anyone can do what Zuck did.
I thought this was a good question and went to try and find out. I grabbed the old source from https://ycombinator.com/arc/arc3.tar and started grepping around, but couldn't find any function to delete comments (only links, which do delete, but don't get purged from the cache it seems). I didn't set up Arc to try and run the thing though.
If anyone has more knowledge, or understands the source better, or has newer code, I'd be interested to know.
It would be trivial to store data in the browser's local storage and hijack a "real" HTTP POST with this data. The front-end logic would need to be obfuscated far enough to deter anyone from trying to step through the Javascript to find it.
But no, HN doesn't do this, only companies like Facebook or Twitter would stand to gain from that information (e.g. a shooter almost tweets his plan to attack but doesn't post it).
> It would be trivial to store data in the browser's local storage and hijack a "real" HTTP POST with this data. The front-end logic would need to be obfuscated far enough to deter anyone from trying to step through the Javascript to find it.
Hmm, I think I’ll write a library to do something like this, I just need a cool clever name for it.
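A minimal sketch of what such a library's core could look like, purely as an illustration of the technique described above (it is NOT something HN or any named site actually does, and every name here is hypothetical). A plain `Map` stands in for the browser's `localStorage` so the idea can be shown outside a browser:

```javascript
// Sketch of the hypothetical draft-capture technique: record every
// revision of a comment draft, then piggyback the full edit history
// onto the "real" POST. Illustrative only; all names are made up.
const storage = new Map(); // stand-in for window.localStorage

// Save each revision of a draft under a per-form key.
function recordDraft(formId, text) {
  const key = `draft:${formId}`;
  const history = JSON.parse(storage.get(key) || "[]");
  history.push({ text, at: Date.now() });
  storage.set(key, JSON.stringify(history));
}

// On submit, attach the accumulated history to the real request body.
function buildPostBody(formId, finalText) {
  const key = `draft:${formId}`;
  const history = JSON.parse(storage.get(key) || "[]");
  storage.delete(key); // clear the client-side evidence
  return { comment: finalText, _draftHistory: history };
}

recordDraft("f1", "first angry draft");
recordDraft("f1", "toned-down draft");
const body = buildPostBody("f1", "final polite comment");
console.log(body._draftHistory.length); // 2 recorded revisions
```

In a real deployment the interesting (and creepy) part would be the obfuscation layer the commenter mentions; the capture logic itself really is this trivial.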
I'd be surprised if they did. It is extremely normal for forum software to only soft-delete posts — for one reason among many, it means somebody can't harass you while they know you're here but delete it before the mods can have a look.
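The soft-delete pattern described above can be sketched in a few lines: "deleted" posts are flagged rather than destroyed, so moderators can still review them. This is a generic illustration, not HN's actual implementation:

```javascript
// Minimal soft-delete sketch (illustrative; not any real forum's schema).
// Content is flagged as deleted, never removed, so moderators can still
// see a post a user "deleted" before the mods could take a look.
class PostStore {
  constructor() { this.posts = new Map(); }
  add(id, author, text) {
    this.posts.set(id, { author, text, deleted: false });
  }
  softDelete(id) {
    const p = this.posts.get(id);
    if (p) p.deleted = true; // content retained, just hidden
  }
  // Ordinary readers see a placeholder; moderators see everything.
  view(id, { moderator = false } = {}) {
    const p = this.posts.get(id);
    if (!p) return null;
    if (p.deleted && !moderator) return { text: "[deleted]" };
    return { text: p.text };
  }
}

const store = new PostStore();
store.add(1, "troll", "abusive comment");
store.softDelete(1);
console.log(store.view(1).text);                      // "[deleted]"
console.log(store.view(1, { moderator: true }).text); // original text
```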
It's moot if anyone or any service is backing up HN, whether for archiving purposes or otherwise. I know Reddit has a few services doing that for threads -- which recently led to evidence on /r/legaladvice that can be used against an employer, as both the employer and employee posted separately, the manager essentially stating that they wanted to fire the employee in retaliation for not participating (regardless of their cultural/religious practices, and having been told so prior to continued violations).
Unfortunately we don't have government organizations that help with moderating discussion (like a therapist would), the way a parent helps children learn how to communicate with others in difficult situations, deal with emotions, etc., and, in cases of bad behaviour, gives someone a time-out until they can "play nice" again.
We have to of course make an environment good enough, rewarding enough to be part of, that it's an incentive for everyone to want to play nice.
Perhaps we need to come up with a new system (separate from the legal system, or part of a justice-education system) for dealing with situations that teach compassion when compassion/understanding is missing. E.g. should bullying be a crime, or whom should we leave that up to? Is it even required to bring all participants related to bullying together, parents of the perpetrator and the victim, and create a healing circle to uncover the deeper underlying issues?
The whole point would be to rally support, so we can build stronger, deep community - and society could, like with criminal law, have fines or other limitations created if people decide to opt-out of this system.
We really need to become more comfortable with accountability, with the idea that our actions have consequences. Though we also must support a practice of learning: mistakes happen during learning, and there's a difference between pretending to have learned and genuine change/evolution in a person's thinking and behaviour, which can be difficult to tell apart in some cases online.
Yeah, so would I. Actually, I've often thought there should not be a restriction on the window during which you're allowed to delete a post on this site. I understand that there are reasons they might not want that, such as to encourage civil interactions and so forth. But I still think there should be no restriction on it.
There are things I've said in discussions I've had here that were not always well informed or motivated by my better self (yeah, you're free to peruse my comment history and see). Or I may have just grown up a bit since I said them. Either way, I've always felt it a bit unfair that they could be dug up years later.
As a user I understand that, but as a forum administrator I have a different view point.
About a decade ago I was involved in building what became, for a short span, one of the best cinephile discussion boards on the Internet. Aside from HN, or maybe MeFi, I can't think of any forum that sustained a signal-to-noise ratio of that quality for that long. It was night-and-day better than the IMDB boards.
Naive and idealistic as we were, we gave users full control to edit/delete posts. Over time we tightened that with time limits, etc. But there was one big loophole: you could delete your entire account and it would purge all your posts. Eventually one of our most prominent members, with thousands of posts, decided to (╯°□°)╯︵ ┻━┻. The result? Utter incomprehensibility of half the threads, effectively rendering the history useless if not downright misleading.
I get that the pendulum has swung too far and users deserve more control of their data, but a public forum is fundamentally a different thing from a social network—a forum thread is different from a post in that the value of the thread is communal, it should not be owned by the individual in the same way that it makes sense for a social media post to be. And even in that case, I think most of the problem comes from the giant many-tentacled nature of Facebooks and Googles where the data doesn't even come from directly entering it in and it's incredibly difficult to even visualize what they have. For a simple forum I think it's perfectly reasonable (and clear!) to have ToS that state: you post it, it's here forever, barring special requests due to extreme circumstances or legal reasons.
Sure, I was thinking with time limits on delete (presumably desirable if you're trying to avoid incoherent threads). It could also lead to accounts that post once and get deleted if you don't otherwise allow anonymous comments.
I think there is some merit to the idea of the Right to be Forgotten[1], but at the same time, I feel like that would just help evil people hide their true nature. The other half of me wants to just have the NSA dump everything one day and let everyone know what everyone else has really been saying for the last 15 years. The first couple of days would be tough, as everyone acclimates to knowing who everyone really is. But in theory that is a net good. Deception is rarely for good.
Isn't that the entire point of the Right to be Forgotten? To say that there is such a right is to say that it's incorrect to expect that you could know a person's "true nature" just by seeing what they happened to think at any given time in their lives.
I agree one event does not summarize an individual. But if you can choose what is remembered and what is not, you can pick and choose the data others use to determine who you are. If someone goes on a racist tirade on Facebook, yeah, that does not automatically make them a racist, but it's a very good data point that people who meet that person should be able to reference. I believe the sentiment of the Right to be Forgotten is so the racist who changes their mind can move on from their past. But I feel this will be abused more than used correctly. Con artists will wildly abuse the Right to be Forgotten.
I disagree. This right was a natural guarantee until the internet became ubiquitous about 20 years ago. Human society evolved for a long time under the assumption that people are free to think out loud a bit or even openly express anger on occasion and then change their mind later. Isn't it possible that making that harder to do will lead to even more incorrect and entrenched viewpoints? I don't think it's just about con artists.
Maybe another approach to all this could be that, once you post, you can delete it for a short while thereafter. But then your post goes into a period of extended residence on the site. Perhaps a year or something. After that residency period has expired, you're free to delete the post.
I doubt there's anyone who decides "Imma be evil" or realizes it of themselves to such an extent. If you believed you were evil, wouldn't you want to change it? Even terrorists don't think they're evil.
On the other hand, it can be incredibly frustrating to be looking for a piece of information and find a nice thread about Just That Thing on reddit - only to find that all of the posts that actually contained useful information are [deleted] and there's nothing left except "I have this problem too!" and "Thanks" "Thanks" "That fixed it!" posts.
If you delete those then the recipient can still see them (it just deletes them for you). Zuckerberg's messages (and the new "unsend" feature) delete the messages for everyone, which is different.
Wow thanks for clarifying. That is absolutely deceptive. The UI never mentions that, but I found some support docs that do. I imagine the majority of people deleting messages aren't aware of this...
I think it's a little funny but I wouldn't call it deceptive. If you delete your local copy of a text message, that doesn't remove it from the other person's phone. Same thing with messages on AIM and IRC. So un-sending seems to be the far less obvious feature.
Yes, that's true. I suppose for me, the deception starts at the very beginning from Messenger's UI, which makes it look like any other texting app which uses local storage.
I'm not aware of a messaging service that lets you delete the recipient's copy of a message, so I'm fairly surprised about people's reaction to your comments. Am I just not aware?
Slack/Facebook comments/posts are all things I use that let me delete previously sent messages. This is possible because they control it all. (Of course, I'm relying on the other users' clients coming online and respecting the delete instructions)
and fair points about email... I had assumed this worked like other parts of Facebook, which doesn't seem unreasonable given they control the whole stack. (And they did shut down their old XMPP gateway a while back.)
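The "delete for everyone" mechanism described above (possible precisely because the service controls the store and every client's sync feed) can be sketched as an append-only event log that clients replay, with deletes acting as tombstones. This is a generic illustration of the pattern, not Slack's or Facebook's actual protocol:

```javascript
// Sketch of server-authoritative "delete for everyone" (illustrative).
// The server keeps an append-only event log; deleting a message just
// appends a tombstone event that every syncing client must apply.
class ChatServer {
  constructor() { this.events = []; } // append-only event log
  send(id, text)  { this.events.push({ type: "message", id, text }); }
  unsend(id)      { this.events.push({ type: "delete", id }); }
  // Each client replays the log to build its visible message list.
  syncClient() {
    const visible = new Map();
    for (const e of this.events) {
      if (e.type === "message") visible.set(e.id, e.text);
      if (e.type === "delete")  visible.delete(e.id);
    }
    return visible;
  }
}

const server = new ChatServer();
server.send(1, "hello");
server.send(2, "regrettable message");
server.unsend(2);
console.log([...server.syncClient().values()]); // [ 'hello' ]
```

Note that the log itself still contains the "unsent" text; whether the server ever purges it is invisible to clients, which is exactly the skepticism voiced elsewhere in this thread.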
Someone can correct me if I'm wrong: I believe that still leaves a copy of the message with the receiving party. That's the issue of contention here. The messages were also deleted for the receiver of Zuckerberg's messages.
This is awesome! 17-year-old me shit talking with friends shouldn't be used against me if I were to obtain a position of power and be blackmailed over it. Also, any message without context could be construed as sexist, racist, privileged, ableist or whatever.
Really positive development from this entire mess of a scandal.
Those messages aren't going to be deleted, they're just going to be hidden. They could be un-unsent as easily as they could be unsent.
If you don't trust your friends not to blackmail you, you can't trust them not to just save every message themselves at the time you send it anyway. Even so, "unsending" messages is a meaningless gesture to security because you have no way to verify that anything has been destroyed. Facebook's track record clearly demonstrates that in the absence of evidence, they definitely aren't destroying the data. This feature will be marginally useful for correcting typos, and that's about it.
"And until this feature is ready, we will no longer be deleting any executives’ messages. We should have done this sooner — and we’re sorry that we did not."
This says a lot about the current situation. What else don't we know?
Ha, this just confirms what I thought to myself but decided not to comment on, reading some other recent Facebook-disaster post here.
Mark has a different Facebook. I bet he doesn't see ads, unless he wants to. And, come to think of it, when he or other blessed entities delete something, it's really gone. I also wonder whether their data gets scooped up by the various monitoring and collection tools that are doubtless built into the product and its administrative interface.
Are they really dogfooding their own product, at this point?
I would assume he has an admin type of account on Facebook, but does that even matter, though? He and many other employees have direct access to their databases and code. They can do whatever they want, and there do not seem to be any checks in place to prevent crime/abuse.
If some engineer at Facebook wants to log into the account of someone they have a crush on, search for nudes or something else nefarious, I have yet to learn of any systems in place to prevent that from happening besides Facebook telling us to trust them.
They used to have a master password to log into any account[1]. They say they replaced this with a special tool you need "permission" to use.
Quote from the article "Finally, while the interviewee says the ‘master password’ has been deprecated, employees can still access your profile through a special tool, but they need to provide a reason for why they’re doing it. If they get audited down the line and fail to provide an explanation, they can be fired"
That sounds like a system where you trust everyone and audit later to verify nothing bad happened. Imo these systems are put in place to hide abuse, not reveal it, because they are reactive instead of preventative.
That entire system for keeping employees from abusing data relies on everyone trusting the people in charge at Facebook to do the right thing; insert "dumb fucks" Zuckerberg quote here[2].
Imo it is a matter of time before we find out about wide-scale, massive data abuse by Facebook employees. Either an insider feeding data to others, or someone on the inside listening in on hedge fund managers' private conversations to insider-trade off of.
I am just generally tired of everyone trusting everyone all the time. We have to have rules and systems in place to eliminate the need for trust.
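The access pattern quoted above (a tool that requires a stated reason but grants access immediately, with the audit happening later, if ever) can be sketched in a few lines. Everything here is hypothetical and does not reflect Facebook's real systems; it just makes concrete why such a design is reactive rather than preventative:

```javascript
// Sketch of a "state a reason, get access, maybe get audited later"
// tool. All names are made up; nothing reflects any real system.
const auditLog = [];

function accessProfile(employee, targetUser, reason) {
  if (!reason) throw new Error("a reason is required");
  // The reason is recorded but never verified up front.
  // Enforcement only happens if someone audits the log afterwards.
  auditLog.push({ employee, targetUser, reason, at: Date.now() });
  return { profile: `data for ${targetUser}` }; // access always granted
}

accessProfile("eng1", "alice", "debugging a reported feed bug");
console.log(auditLog.length); // 1 entry, awaiting a future audit
```

Nothing in the code path checks that the reason is true, so the system deters abuse only to the extent that employees believe the log will actually be read.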
If he does, it could be why he seems so disconnected from reality. If the algorithm puts positive news in front of him, and people regularly comment on everything in praise, with no real dissenting commentary, then ...
I don't know the current state, but a year or two ago, Facebook was making a lot of noise about 'Facebook for work'.
I would guess they are doing something like that, internally.
Or, somewhere, um, 'locked in a file cabinet in a lavatory with a sign on the door, Beware of the Leopard' (mentioned on HN yesterday by another commenter, who had the exact wording), is there a giant Google Apps contract for FB? ;-)
This was discussed and patented as early as June 2016. Here is an article, including a link to the patent: http://techpp.com/2016/06/24/facebook-messenger-unsend-messa... . Hardly something they started to put together after the scandal, but maybe the worst timing, or the worst idea to go back to, now...
Maybe Zuck can prevent some future scandals with this new feature. Unfortunately we all know they aren't actually deleting the messages, just hiding them. You can probably still buy the data.
Not that I know of; I've seen similar situations in previous workplaces (emails from a CEO), and retention was only required under certain circumstances, such as when undergoing a lawsuit.
And that was emails, vs. social media messages. So not sure how it would apply in this case (IANAL).
I don't really know, but as Facebook usage declines they will become more and more desperate to make up for the loss of revenue. One of these days, when most people have moved over to another platform, there will be news in the media: "Facebook sold users' private chatlogs".
Questions I have:
What's the number of Mark's messages that have been deleted before the upcoming Congressional hearings?
Is it greater or less than the 33K emails Hillary Clinton deleted after getting the subpoena?
Does Mark understand that after this story was uncovered, his career is most likely ruined forever?
Seeing as how people already record sketchy conversations by screenshotting them, this will only impress the naive.
Screenshots aren't absolutely perfect as evidence, but since error level analysis can show up many kinds of graphic manipulation, they're pretty good on a practical level.
What’s naive is to even believe a screenshot. They can easily be faked. In fact I think there are entire apps just for this purpose. You would never know the difference. You can also just straight up edit the HTML and screenshot that.
What permission does the user give FB in terms of how FB can use the messages?
The message might "disappear" after a few seconds, but at that point FB may have already extracted the information they want from it, information they will use to further their business.
Assuming users want some sort of privacy, from whom is it that they want their messages to remain "private"?
Will there ever be options in FB "privacy settings" such as:
[ ] Do not share with Facebook
[ ] Do not share with advertisers, political campaigns, etc.
Absurdity aside, the mere feasibility of this is itself debatable.
Do users want their messages to remain private from companies like Facebook?
How about from FB's clients, e.g., advertisers, political campaigns, hospitals, etc.?
How about from the rest of the general public?
We now know, as confirmed by FB, that this user data has been shared for years. Perhaps FB can argue this was "informed consent".
However users can change their minds in light of more information. They should be able to revoke that consent going forward.
Will all that "leaked" user data FB has intentionally shared self-destruct after some period of time? Unfortunately, no.
It seems to me that, in the main, the parties with the greatest incentive to monitor users' messaging are in fact companies like FB and their clients.
They have a financial incentive that is in the aggregate far greater than any individual (petty criminal, etc.). Collecting user data has become their business. "It is literally just what we do." Period.
Finally, thanks to the explosive growth of single websites into vast monopolies like Facebook that extend their reach into nearly every corner of the internet, they, the companies behind these websites, are in the best position to do it.
It should be self-evident but maybe needs to be restated: These companies cannot protect the user from the company itself. The amount of trust required of the user depending on these websites is simply mind-blowing (from the perspective of someone who has lived in times when no such trust was necessary).
That's only an opinion. I respect any disagreement and welcome karma subtraction as a means to express it.
Your statement on trust cannot be restated enough.
My additional concern is that I do not use Facebook. I deleted my account like 10 years ago. Yet they still retain data on me, not only without my consent: they have the opposite of my consent. I explicitly told them to delete everything they had on me and that I am opting out of their service.
Yet articles like this pop up time and time again[1]. There is talk of them scanning every picture on Facebook for faces and building profiles of everyone, including non-users.
How am I, a person who has no Facebook account and wants nothing to do with them, supposed to stop them from monitoring and tracking me?
Imagine if in high school you found out some guy in your class was keeping records of the structure of your face, or the websites you visit, and you confronted him and told him to stop, and all he said was: "other students have allowed me to do this, so I keep everyone's records to make it easier to track the people who gave me permission."
I assume no one would be comfortable with this, especially if he then went and sold that data to the students running for class president so they could manipulate your friends into harassing you to vote for their candidate.
Is HN turning into an anti-Facebook echo chamber? I cannot believe the number of posts about Facebook, condemning Facebook, blaming Facebook, assuming the worst motives of Facebook, and picking apart every word said by Facebook executives. In every post's comments there are the same tired "you are the product", "I deleted Facebook and if you don't you're an idiot", "dumb fucks trust me" comments.
It's getting so that I don't even want to refresh here, since at any given time a chunk of the headlines are taken up with the same old shit hating on Facebook. I roll my eyes and move on usually, but it's getting to me to see a forum I enjoy devolve into a negative hate train of spite.
I don't work for Facebook, they thought I wasn't smart enough to work there. I have no financial interest in Facebook. I do use Facebook moderately often. I'm mostly just tired of seeing the negativity and hate.
Because the underlying issue is right now one of the largest and most intrinsically important and unresolved issues for not just our industry, but for our society. You're effectively on the front-line of a social discourse that will eventually settle to become part of our social fabric. And on that front line there's a lot of thrashing about - but that's a good thing.
It's worth our time and effort to bring up the issues, hash them out, run through ideas, gather evidence, etc.
> Is HN turning into an anti-Facebook echo chamber?
I've been here 3-4 years, and Fb (and Uber) have a special level of hatred here. If my first job had been at facebook I'd have stayed off here because I'd have found the toxicity impossible to take.
On the other hand, people fall over themselves to defend Elon Musk for doing pretty much the same thing Uber did: kill a person due to misleading marketing, reinforced by a statement with more misleading stuff.
I think Facebook is getting beat on because it's in the middle of a media maelstrom and under a lot of scrutiny as a result. I also think it deserves every bit of it and many of its social media peers do too (LinkedIn is basically Facebook with even more "dark patterns").
Regarding some of the specific comments you're seeing repeated - yes "you are the product" gets trotted out a lot - but it's basically correct, isn't it? You haven't offered any counter-argument here. Pointing out that Facebook's material incentives are absolutely not aligned with those of their users is always going to be relevant when discussing how Facebook handles user data.
Facebook could minimize the data they gather instead of maximizing it, they could give users fine-grained control over what data "partners" get to use/target, they could make the defaults for many options private. There's no incentive to do any of this though - so in many cases they do the opposite [1].
> I think Facebook is getting beat on because it's in the middle of a media maelstrom and under a lot of scrutiny as a result.
Facebook is getting beat on because the left and the privacy-conscious are concerned, for slightly different reasons, over the scope of the corporate manipulation using personal data, and the Trumpist segment of the right wants to keep attention on FB to deflect as much public attention as possible from entities tied to Russia and/or the Trump campaign involved in the same scandal.
That doesn't quite give everyone a beef against Facebook, but it does mean they are getting adverse attention from all sides.