The Social Dilemma (thesocialdilemma.com)
577 points by vinceleo on Sept 20, 2020 | 405 comments



I just watched this last night. A lot of the content will already be well known to the HN audience. There was one big light-bulb moment for me. One of the speakers was clarifying the "If you're not paying then you are the product" idea.

He refined it to, (rough quote), "Changes to your behavior and actions are the product"

That distinction made the insidiousness much more clear than the former statement.


I work as an external examiner for various CS students, and in my country they do a couple of internships at real companies as part of their degree. At some point around five years ago, or perhaps a little before that, psychology students started appearing on some of the courses, a trend that has since grown quite a bit, especially if you happen upon a group project involving mobile gaming.

It’s something I would likely never have encountered in my career, but there is an amazing amount of social engineering going on at these places. 20 years ago, if you got the odd group of students who did anything gaming related, it would typically be very nerdy, terribly executed and often a very socially awkward exam even by CS standards. These days it feels like I’m very likely examining a group of people out of which two or three are likely to be my executives in a decade if they choose to pursue a career in the public sector.

It’s really interesting how the humanities have crept in and in some way taken over, isn’t it? And all because we didn’t want to pay a coffee’s worth of money to have the online yellow pages. It’ll be even more interesting to see what happens to this sale of social engineering once legislation eventually catches up.


Blimey, that's super interesting. The ad model needs to die IMHO. We need regulation from a super-national level if you ask me. Bitcoin should pay the way for a paid-for internet. Anything to get ads off the web.


Ads are definitely a large part of the problem, but they are not the only problem. Ads are what turn us from customers into cattle, yes, but I think that online services could be almost as bad if we paid for them. Indeed, micropayments might actually result in many of the same issues as ads do: in a world with micropayments for each page load, publishers would still want to increase engagement, because that would directly increase their bottom line.

What we need is precisely the thing so many of us do not want: we need subscriptions. Subscriptions provide a buffer. They give a publisher room to breathe: he can predict what his next month will look like; he can model what his next year will look like; and he can allocate his resources appropriately. The problem is that subscriptions are high-friction, and cannot compete with ad-driven ‘free’ content and services.


Ads don't turn people into cattle. That is a huge exaggeration, or at least, you should only speak for yourself.

The sort of ads that make the big bucks for companies like Google and even Facebook are niche or local firms buying very simple ads that let people know they exist. Brand advertising on publisher websites is not the biggest source of value, and these tech firms started out by trying to get rid of it entirely. AdSense's entire reason for existence was to chip away at the terrible brand ads that littered the web beforehand, and of course, those are the only kind of ads that you get in other media, so they, above all others, have done the most to rid the world of that kind of thing.

Publishers can use subscriptions today if they want to. That's no solution to psychological manipulation. Look at how many cable or satellite TV news channels there are which spend all day trying to create as much outrage and hate as possible, all paid for by subscription.

I haven't watched it but the documentary is probably junk. Seeing the Sundance Festival logo and reading the blurb is enough to get the gist of what it'll be like - a bunch of social activists who want you to feel weak, useless, sheep-like and filled with hate towards people who dare to make and give you useful services. No thanks.


Can’t believe this didn’t get downvoted more. Is this a joke?

Watch the film. The speakers are mostly ex-employees of the big tech firms.

Your take is completely incorrect, unsurprisingly, given you haven’t even seen it.


Examining my own behavior, I do see a lot of the last part of your comment. Even as someone who recognizes the potential for abuse when you have incredibly detailed models of people and can use them to manipulate behavior, it's not as obvious as another subscription fee.

I have a strong bias against any sort of automatically recurring charge. In some ways I'm very illogical about it--yearly subscriptions like Amazon Prime or my VPN aren't too bad because I can see the cost and not worry about it for a while. But I've just been trained through experience to see anything monthly (or less) as a way to disguise the real cost through stretching it out (even if I can easily do the arithmetic and see the annual cost).

Regardless, knowing about the potential for abusing the data compiled on my behavior is enough to make me feel a bit "icky" or mildly uneasy. It's enough to make me avoid using anything from Facebook, for example. But it's not enough to keep me from checking my subs on YouTube or using Google navigation when I drive somewhere new.

If it was as simple as paying $100/year to use Google services without data collection it would be easy, but then what stuff would stop working? A good bit of the usefulness of Google nav or whatever comes from how it collects usage data or ties in with my calendar to remind me when I need to leave to make it to an appointment on time.

And then what about all of the other things I interact with that collect data? Do I need to figure out what they all are and pay them a yearly fee as well? What about the ones who decide it's still easier and more profitable to skip the whole thing and keep making their money by profiling my behavior?

Instead, I end up avoiding Facebook properties, blocking ads, never using rewards cards, trying to ignore the OG offenders (credit card companies), and still feeling uneasy about all the info being analyzed about me daily.


What if we actually got paid for the data collection? Like UBI through the tech giants. Maybe then you have an issue where it's like the more you use the more cash you get, or maybe it's just a flat payment i.e. if you have even a single facebook account and they've used your data in some way, you get the payment?

God, this is when the lawyers get involved! YUCK!


The system has been built to track you in order to show you more relevant ads. Whilst micropayments won’t solve every problem pointed out in the documentary, they will offer an alternative business model which doesn’t require pervasive tracking to work. We are working on this problem, shameless plug: satotious.com


> Bitcoin should pay the way for a paid-for internet.

Simple economics tells you that this will never happen. Companies simply have much more money to spend than consumers do.

Indeed, if you ban “ads”, companies will simply invest in “non-ads” to influence your behaviour - fake news, real news (more-or-less thinly veiled PR disguised as news, a la Paul Graham’s “submarine”) or buying whole news outfits (hint hint Bezos).

The only way to stop competition in marketing is to (re-)start competition in product / technology - but that seems a very difficult problem (we’re in the midst of a physics/tech stagnation).


I agree with you, but I will mention that Netflix is a paid service and nevertheless uses similar strategies to keep us hooked.


Incorrect. Netflix does not attempt to change your real world behaviour.


Advertising needs to be targeted directly. Otherwise, companies rightly reason: why should they opt for direct payments only, when they can take direct payments and shove ads in anyway?


Do you happen to know what titles they go by?


> "Changes to your behavior and actions..."

Isn't this the definition (or goal) of all advertising? I don't see the connection to the first half of the statement tying it to free products and services.

There are plenty of paid services that are riddled with ads as well. Movies, Cable TV, airline flights, etc. The fact is that people and organizations are _constantly_ trying to influence your behavior. It's not obvious that's only a bad thing.


I believe the extra step of social media is that they are not satisfied by changing your behavior from not buying a product to buying a product. They also change your behavior in somewhat indirect ways, with the goal that you continue to engage with their platform more and more and be subject to more advertising in the future. So they make you more fanatic or more extremist or more polarized, because it will lead you to spend more time on their site.

They want you perpetually in the state of looking into a rabbit hole rather than feeling that you are satisfied with your knowledge of something and ready to move on with your life.

The fact that we bring our phone everywhere, that there are push notifications, and that we interact with other people (often friends) makes it fundamentally different from a TV or a magazine.


> They also change your behavior in somewhat indirect ways with the goal that you continue to engage with their platform more and more

How is this different than any other kind of media like TV, magazines, newspapers, etc.? All of them are trying to make their product engaging so that you consume more of their media and more of their ads. Just because those services also charge a subscription fee, doesn't change the dynamics of what they're trying to do. Social media has maybe just been more successful in doing so, partially because

> The fact that we bring our phone everywhere, that there are push notifications

So the dynamics have always existed in previous platforms, it's just been ratcheted up to a higher level with social media.


The word "just" in your last sentence does not paint the full picture of the situation.

Imagine two villages. In one of them, it is customary to drink a glass of wine with dinner. On holidays, two.

The other village has a habit of drinking themselves stupid every single evening with gin.

Now: this is really only a difference in quantity. Both are consuming alcohol for pleasure. But the first village will probably be OK; there are tons of such villages in France, Spain and Greece, where people live to be 90.

The second village has a huge, deadly problem.


Other platforms never leveraged opinions, pictures, life facts from people I know, like friends, family, acquaintances, coworkers, etc to induce behavior change.

And things change a lot with scale. Even if the intent or the principle is the same, the dynamics are different at a scale orders of magnitude larger.

I don’t think we should be dismissing new understandings of how social media affect our lives on the grounds that other media in the past tried to do the same.


Putting aside for a moment the much tighter feedback loop and way more in-depth metrics that allow much greater tuning and targeting...

With broadcast media, you can turn off the TV, radio, or put down the magazine. The social punishment cost of this is limited - If someone asked you about current affairs or the latest show etc, you might have to deal with being out of the loop.

But with smartphones and social media, everyone's a content publisher and it's all intertwined. Stepping back/checking out has a much higher social cost as you're not reachable and you're gonna miss those life updates from friends and family if you're not sitting on the same communication channels as them.


Simply, your TV doesn't turn back on and tell you that all your friends are watching a certain show. When you turn your TV off, it is off.


Don’t give them ideas! ;)


> So the dynamics have always existed in previous platforms, it's just been ratcheted up to a higher level with social media.

I remember a friend saying that "optimization is killing us". The whole economy is getting better at what it does, and as that happens there are many side effects.

Those with no ethics will manage to harness social media technology for their own profit and not for society. The same applies to taxes, where companies are becoming masters at dodging them. Liars are becoming masters at lying, and we have to fight to rebut the lies (the documentary has a quote about how much more effort is required to fight disinformation than to create it).

He also argues that optimization is poison to yourself! But that's off topic.


Optimization per se isn't the problem, much like thinking isn't (and arguably one is quite likely synonymous with the other at a fundamental level). The problem is what you optimize for, and how hard.

So most companies aren't optimizing for a goal of "maximize our pockets AND maximize world happiness AND maximize the value our users get from us" with a proviso to not optimize too hard, so that unknown but desirable values that aren't expressed in the goal function don't get optimized away. No, they just optimize for "maximize our pockets", and do it hard. And when they do it like this, any sense of ethics or dignity is one of the first to fly out of the window.

Similarly, the optimization being "poison to yourself" is not as much a danger of optimization per se - wanting to improve yourself is a useful and noble desire! The problems usually start when you focus on a narrow goal too hard, to the exclusion of everything else in your life.

As for the market and how things are: my belief is that we're at a tipping point where the market, as an optimizer, is getting out of control and needs to be reined in. It's worth remembering that the main force behind market optimization is competition - I'll find a new trick to undercut you, you'll find a new trick to undercut me further, etc., until we reach diminishing returns and the price of goods/services settles at barely above costs. Here are things I strongly believe to be true (and rather self-evident) about competition:

1) In a competitive environment, once one party figures out a trick that gives them advantage, everyone else has to do the same or risk getting outcompeted.

2) You don't have to compete only on production efficiency - you can also compete on quality (in the direction of getting away with lower quality for the same, or slightly lower, price), on business models, on ethics (any ethical principle you can skirt will open up new avenues for free profit), on advertising, on adherence to laws, etc.

So over the past centuries, the market had a lot of low-hanging fruit to pick. New materials, new processes, economies of scale, innovations in transport - all made it possible to provide better goods for less. But now, I believe, we've run out of these easy wins - most competition happens on the grounds of lowering quality, business models (Everything as a Service, DRM, razor-and-blades, etc.), ethics (see e.g. social media companies), advertising (better RoI than making a marginally better product), legality (Ubers and AirBnBs of the world running on a "break laws, and use VC money to keep regulators at bay until the market is cornered and competitors are destroyed" strategy).

The market is no longer optimizing for giving us the best possible goods at the lowest possible price. Instead, it gives us the worst sellable crap at the highest possible price, by tacking on hidden and indirect costs.

That's why I'm increasingly in favor of regulating some business models out of existence - particularly the ad-subsidized everything, razor-and-blades everything, and "move fast and break laws" ones. The market's role in society is to make our lives better. The market has optimized past that. So the easy and socially destructive tricks need to be cut off, so that the market can return to optimizing for betterment of consumers.


> they just optimize for "maximize our pockets", and do it hard

That's not the case.

Consider the purest examples: Google routinely incorporates factors beyond short term profit into their ranking functions. For instance they ranked SSL using websites higher than non-SSL using websites, to encourage use of encryption. The ads "quality score" system penalises advertisers whose ads appear to be low quality, defined by users not finding them useful. The constant shovelling of social justice themes into the search engine homepage.

The biggest problem these firms have is that they are not focused exclusively on maximising profit. Profit is an excellent metric! It is the sum of all the happiness and utility you are creating across all your customer and userbase, minus effort expended, in a single quantifiable number. Think of all the happiness created by Apple, Amazon, Google and yes even Facebook and Twitter. Hundreds of millions of people use the services of these firms, often for free, and they use it because they like it.

The problem big tech firms have is they explicitly got on board with "let's maximize world happiness" as a goal. Dumb dumb dumb. Nobody agrees what world happiness is, but there sure are a lot of activists who will always be convinced you're not doing enough about it. Once you go down that path you'll never stop walking, and the further you walk, the more abusive the activists get. It's so subjective and unquantifiable you can never tell if you've done enough.

> my belief is that we're at a tipping point where the market, as an optimizer, is getting out of control and needs to be reined in

Isn't that just a classical Marxist or Malthusian collapse proposal? People have been predicting that capitalism will somehow collapse in on itself for hundreds of years, it never has because it's a fundamentally natural, evolved and stable system. The alternatives, not so much.


Comparing newspapers to Facebook is likening a cup of water to the ocean ("just water, scaled up"). People aren't reading an ever-changing, infinite-supply, A/B tested, microtargeted newspaper.


In terms of behavior changes: Don't forget alcohol, tasty food, the whole sex industry, fashion, and even many aspects of the shelter industry. There isn't a seller on earth who passes up the opportunity to engender eagerness to buy.


>How is this different than any other kind of media like TV, magazines, newspapers, etc.? All of them are trying to make their product engaging so that you consume more of their media and more of their ads. Just because those services also charge a subscription fee, doesn't change the dynamics of what they're trying to do. Social media has maybe just been more successful in doing so, partially because

I think this trivializes the problem. The issue isn't that these things have existed for ages and are simply getting better, the problem is that we have gone from mass-marketing designed to appeal to a broad audience to personalized marketing designed to appeal to just you. With that in place, the more data that gets collected about you, the more precise the message delivery can be. The more sophisticated the algorithms that input all that data get, the more susceptible you'll be to their messaging because they are designed to capitalize on human psychological weaknesses.
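To make that contrast concrete, here is a minimal, purely hypothetical sketch in Python; the creatives, feature names, and weights are invented for illustration and are not taken from any real ad platform. It contrasts showing everyone the same broad-audience message with picking, per user, whichever message a model predicts that specific person is most susceptible to:

    import math

    # Hypothetical illustration only: mass marketing (one creative for everyone)
    # vs. micro-targeting (pick the creative a per-user model scores highest).
    CREATIVES = {
        "fear_of_missing_out": {"impulsive": 2.0, "young": 1.0},
        "status_and_luxury":   {"high_income": 2.0, "image_conscious": 1.5},
        "bargain_urgency":     {"price_sensitive": 2.5, "impulsive": 0.5},
    }

    def click_probability(user_features, creative_weights):
        # Toy logistic score: the more a creative's weights line up with what
        # is known about the user, the higher the predicted click probability.
        score = sum(creative_weights.get(f, 0.0) * v for f, v in user_features.items())
        return 1.0 / (1.0 + math.exp(-score))

    def mass_market():
        # Broad-audience approach: everyone gets the same compromise creative.
        return "status_and_luxury"

    def micro_target(user_features):
        # Personalized approach: choose the creative this particular person is
        # predicted to be most likely to respond to.
        return max(CREATIVES, key=lambda name: click_probability(user_features, CREATIVES[name]))

    user = {"impulsive": 1.0, "price_sensitive": 1.0, "young": 1.0}
    print(mass_market(), "vs", micro_target(user))  # same message for all vs. just for you

The more data collected per user, the better such a model can discriminate between people, which is the escalation described above.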


The problem isn't only the dynamics, but the regulation that comes with the technology.

In many countries, TV is highly regulated: you can't advertise alcohol or cigarettes, political speech time is measured to make sure each "camp" gets the same exposure, etc.

There's no such thing on social media.

Same goes for Uber, Airbnb & co, by the way, although this is easier to regulate.


TV and magazines have to choose _one_ editorial line at a given time. Social media can present everyone with their own bubble, and so it has bad side effects. When you have a forum, everyone is presented the same content. For example, this results in administrators being aware of bad content most of the time (at least in the few forums I follow). Everyone on HN sees the same ranking. Social media and recommendation engines, though, will send you into your own rabbit hole of a handful of interests. I advise you to watch the documentary; it will answer your question best.


It's also blurring the line. Ads were clearly ads, now you can't really tell where or when you're tracked.


> > "Changes to your behavior and actions..."

> Isn't this the definition (or goal) of all advertising? I don't see the connection to the first half of the statement tying it to free products and services.

I believe it was "the product is the 'ability' to change your behavior"

So the difference here is that they not only sell it, but it actually works, because it can adapt to people in real time. This is probably what is scary.

Standard advertising is already manipulation, but it's easier to know when something is an advertisement (though it's arguable that there are also tricks to get around this for TV).

In my view it is indeed the advertising industry going too far and being allowed anything. Social media just builds a product for its clients, and none of them has any doubts or limits when it comes to choosing between profits and ethics.

Advertisement is ok when it is honest and not manipulative.

In addition, the product is sold for politics, which is even scarier.

There are regulations in many countries about what's allowed or not in advertising. In some cases it's not allowed to blatantly lie, or it is not allowed to disguise advertisements as content, etc.

It is possible to forbid micro-targeting and manipulative AI algorithms or other methods. Users and regulators need to understand what is being done, and how the effects are harmful to society.


Well, for movies/TV it's well known they were used to get people to buy wedding rings, cigarettes, and many other things. Not the ads, the actual movie/TV show. When you see some character light up a cigarette with a cool pose, IIRC that whole idea was started by the tobacco companies, to the point that most people now just seem to assume you show someone smoking to make them appear cool. It's portrayed as cool either because of the way the shot is framed and the person acts, or because, knowing cigarettes are bad for you, clearly this person likes to take risks and live dangerously, so they're exciting.

https://smokefreemovies.ucsf.edu/history

https://www.theatlantic.com/international/archive/2015/02/ho...


When we watch ads it's pretty clear that the advertiser is trying to manipulate us. With social media it's different. They are manipulating us to keep us engaged so that we see more ads. With TV, the network is trying to create a great show that engages people so that they can get more advertising money. I get that, most people get that, it's a win-win. We get entertaining shows and they get more eyeballs for ads. With social media, they are doing it in a different way. They use algorithms that create silos of information meant to keep us scrolling. This leads to all kinds of negative effects on society. I would argue that cable news and social media are more alike than different in this respect.


Not just advertising, but any communication. You wrote this comment to influence me and everyone else who read it.


Dose makes the poison.

There is a difference between someone telling you Brand A is good, and someone hooking you on drugs and promising more drugs if you buy Brand B.


You would reduce it all to atoms.


I’m just trying to take the ideas in this thread seriously. If we are going to argue that it’s bad to try to influence people’s behavior, we ought to think through what that means when applied to entities other than Facebook. That will help us figure out if we really mean what we are saying. Is it an actual value we hold, or only a rhetorical point when we are talking about a few tech companies?


No, it is fine to send a comment expressing an opinion. There are so many qualitative differences:

- This is human to human communication, not an optimised algorithm.

- People here are not sending you 10 comments and keeping the one you engaged with for the most time, because they don't have the metrics. Social media knows how many seconds you looked at a post, classifies it with AI, and can build a profile of your current sentiment (a toy sketch of this kind of engagement loop follows at the end of this comment).

- The poster probably didn't analyse your whole past discussion history with algorithms before answering in a way that would be best calibrated for your way of thinking.

Interactions and discussions between people, even conflictual ones (not violent), are fine. Communication is fine. Mass communication is mostly fine (traditional advertising, broadcast...). What's not fine is manipulating people by mass-communicating while adjusting the message based on metadata that those people are not even aware of.
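To illustrate the asymmetry above, here is a deliberately simplified, hypothetical sketch in Python of an engagement-maximizing feed loop; the dwell-time model and its inputs are invented for illustration and are not any real platform's code:

    import random

    # Hypothetical sketch: pick whichever candidate post is predicted to keep
    # this particular user engaged the longest, occasionally exploring to
    # gather more dwell-time data about them.
    def predicted_dwell_seconds(user_profile, post):
        # Toy engagement model: overlap with inferred interests, plus a bonus
        # for how provocative the post is.
        interest_match = len(post["topics"] & user_profile["interests"])
        return interest_match * 10.0 + post["outrage_score"] * 5.0

    def pick_next_post(user_profile, candidates, explore_rate=0.1):
        if random.random() > explore_rate:
            # Exploit: show the post expected to hold attention longest.
            return max(candidates, key=lambda p: predicted_dwell_seconds(user_profile, p))
        # Explore: try a random post to refine the per-user model.
        return random.choice(candidates)

    user = {"interests": {"politics", "gaming"}}
    posts = [
        {"id": 1, "topics": {"politics"}, "outrage_score": 0.9},
        {"id": 2, "topics": {"cooking"}, "outrage_score": 0.1},
        {"id": 3, "topics": {"gaming", "politics"}, "outrage_score": 0.4},
    ]
    print(pick_next_post(user, posts))

The point of the list is exactly this gap: a commenter here has none of these measurements or feedback loops, while a feed has them for every post and every second of attention.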


In a conversation, you subconsciously build a model of the mind of the person you are talking to, and your words are tuned to fill the gap between what they know and are thinking and the ideas you are trying to express. If I think back through my life, the communication that has affected me most has been from humans in face-to-face conversations, not on websites. But I could be bad at objectively evaluating this.

I think for Facebook to be more effective than humans at influencing behavior, they have to implement theory of mind, which might require AGI.


I also think a human can be much better at influencing people 1-on-1. But adapting his speech to two people at the same time? That's more difficult. There are only so many tricks you can use to have two different people hear what they want to hear. 5 people? 10? 2 billion?

That's the scary thing. Facebook is good enough at influencing someone, but they do it for half the population of the world simultaneously.


In a conversation, the influence goes both ways: if you try to persuade me of a political point of view, I can do a number of things: (and whether these are valid or not, they will impact the flow of argument, and the success of the persuasion)

- I can suggest you're biased

- I can try to change the subject

- I can decide you're a bad person for holding the "wrong" views

- I can directly argue against your point

- I can attack the structure of your argument

- I can make an emotional appeal regarding why I must hold my point of view

- etc ....

In these and other ways, I can push back and modify how much I'm influenced by a one-to-one argument. I won't always be successful, for sure. But sometimes the persuasion will go the other way, and I will influence you.

This is not true on social media, because another power is dictating which one-to-one conversations may happen in the first place, and then loading the deck with ideas before those one-to-one conversations even happen. Further, social media changes the scale of communications. If we worked together, and saw each other regularly, we could mediate each other's influence, place boundaries, etc. With social media, there is always a crowd of strangers: too many people to converse with, know, and set boundaries with.

There are obviously other distinctions between one-to-one conversation and social media, but I particularly wanted to talk about the key differences here: lack of real back and forth, and scale.


Watch the documentary... One of the guys explains that AI is not yet good enough to surpass humans at their strengths, but it can game us on our weaknesses. This is exactly the point: people think it's not having an impact on them, while it is, even if subtly.

As a person, you can get better at convincing people of course, and even masses of people - humanity has gone through that with ups and downs. Now we have a system that works totally differently. For example, I remember reading that many conspiracists distance themselves from their friends and family as they close themselves off to in-person discussion or any argument not fitting their views.


Well, Facebook's content is human.

Also, what about books? Ideas? Film.

IMHO people are too complicated, and situations too situation specific to be able to generalize about how someone is influenced.

Sometimes people ignore the advice of those close to them, sometimes they don't, and, it probably depends on the subject being discussed, the particular relationship between the people, the state of mind of the person in question, and so on.


A single person generally has orders of magnitude less power to influence than the biggest companies in the world. That's surely not a fair comparison.


Intent matters. The intent of 1:1 conversations between people is usually win-win, or at least win-neutral. Advertising beyond the point of informing that a product exists is exploitative; its intent is very much win-lose.

Put another way: if your friend came to you and started manipulating and pressuring you the way ads do, for the reason ads do, they'd very quickly stop being your friend.


I don't think that quite puts the finger on the distinction.

Apple trying to sell me an iPhone claiming it's more secure and more privacy respecting might be true, so if I buy one it's a win-win if (a) it is true and (b) I actually wanted those features.

> Advertising beyond the point of informing that a product exists is exploitative

Okay, so, Apple showing silhouettes dancing was exploitative? They should only say "we made a device, it plays music, it's this size, the batteries last this long"?

Was the 1984 ad exploitive? I'm just trying to think of famous ads. Is the Ikea "Time To Leave Home" ad exploitive?

I guess I don't see them as win-lose.


> Apple trying to sell me an iPhone claiming it's more secure and more privacy respecting might be true, so if I buy one it's a win-win if (a) it is true and (b) I actually wanted those features.

Sure, it's a win-win if the features match your needs. And a win-win would be Apple making the claim, listing the privacy features of their new iPhone, along with honest arguments why these features protect your privacy better than competition. I absolutely do not mind things like that - it's the socially-useful purpose of advertising: informing people about products and their features, so that individuals can pick the best solution to their problems.

It's only when advertising tries to override an individual's agency that I consider it bad. And note that purposefully and covertly overriding someone's decision-making capability is malicious behavior, and very rarely justifiable.

> They should only say "we made a device, it plays music, it's this size, the batteries last this long"?

Would it be bad if they did only that?

> Was the 1984 ad exploitive?

Of course. While it's nothing compared to today's ads, it still tried to sell you a computer by tricking your mind with completely irrelevant references to 1984 and the feelings it evoked. It tried to override your capability for thinking, by sneaking in an emotional payload.

> Is the Ikea "Time To Leave Home" ad exploitive?

It's a fun comedy sketch, but again: it tries to use an emotional payload to get you into the market for furniture and think of IKEA in particular (and make it a first association in your mind, over competitors).

Look at it from this point of view: imagine you live in a small town, and there are two small-time shops building and selling farming widgets. Would you want them to spend all their energy and innovation capacity one-upping each other in comedy stand-ups on the storefront, or would you prefer them to focus on designing better farming widgets?


> if I buy one it's a win-win if (a) it is true and (b) I actually wanted those features.

(c) you need a phone in the first place, and (d) you need a USD 1200 phone in the first place.


"Influence" is a big word.

Context and details matter. As does intent. Saying that all discussion is really "influencing", isn't useful, imho.

In the context of our discussion about Facebook, IMHO these are the defining qualities specific to Facebook:

1. Massive, global scale and scope - behavior via Facebook can influence culture, politics, etc.

2. An algorithm driving the bus, as opposed to people (albeit people choose what the algorithm does or doesn't do).

3. Facebook's primary motivation in influencing is to profit via increased user engagement - they currently have no financial incentive to care about the particular nature of said engagement.

4. Facebook's poor track record when it comes to acknowledging critique or concerns around the power its platform has, and Facebook's denial that it should exercise a degree of responsibility (or be legally held accountable in some way) for said power.

5. The somewhat covert nature of how Facebook functions; as this documentary shows, while how Facebook functions may have been "in the open", it's not something most non-tech folk are aware of, and it's not something Facebook has been, in good faith, forthright and transparent about.

6. The particular power and impact of computers/the internet as a medium of mass communication, which we are still learning about.

6.5 As a subsection of item 6, the viral nature of the internet/social media which means stuff spreads very quickly, unlike other modes or media of human communication.

So sure, a parent "influences" their child, a teacher influences a student, newspapers influence people, and so do Coca-Cola commercials, but not the same way that Facebook has "influence", which I've attempted to describe above.


Local influence is not nearly as pernicious or destructive as social media. Yes, every person in the world influences others, and there is no escaping this.

But to compare it to what has been proven to be possible via social media is absurd.


None of this is new in principle. The difference we are talking about here is more like the difference between a hunting rifle and an M-16. The depth, pervasiveness, and insidiousness of persuasion is so much greater with social media. It’s a wider channel, a bidirectional channel, and a one-on-one channel.

Now when tech similar to GPT-3 gets out there it’s going to be like hunting rifle vs atomic weapons. Pretty soon we will have what I am calling “mechanized con artistry.” We are almost literally inventing the medieval demon whispering into everyone’s ear. Mass surveillance plus big data plus generative AI will be like the hydrogen bomb of propaganda.


For anyone unsure like me, GPT-3 is a deep-learning-based model that can produce human-like written language (text).

https://en.m.wikipedia.org/wiki/GPT-3

Unsettling indeed.


Also there is an underlying class thing going on with "you are the product". Choosing to pay for a service is not just an equal alternative to a free service for most people. Most people have to work hard to afford paid services, many of them at low wages. Sneering at people who choose to use a free service rather than sell their time for $15 an hour is not a great look.


Exactly, it's a class thing. People who can't afford to pay for services have less "trouble" with being exploited in return for a free product.


Can’t they just not use the thing? Usually this critique is applied to social media, which is not an essential human need. Ubiquitous and handy, yes. But not essential.


The choice to not use a device which is intended to elicit interaction is not available to all, and in that case the only people that can make the choice are the others that know better.


Social contact is a need. I don't think I can actually survive without WhatsApp.


I'll be the first to tell you that humans are social beings that need social contact. I'll also be the first to tell you that social media is not necessary for that. I assume you're being hyperbolic with Whatsapp being vital for life, but do you honestly not know how humans got their fill of social interaction 12 years ago? And no, the answer isn't Facebook.


It’s no longer socially acceptable to make unprompted PSTN voice calls, unless you and the recipient are on very familiar terms.

Even with my parents, we always arrange a time via messaging first.


Ok, and where exactly is Whatsapp a vital life necessity in your world of messaging before calling? Those also aren't forms of meaningful social interaction. I assume you again use messaging and/or voice to set up the actual interactions such as Christmas (or whatever you celebrate) and family get-togethers. I'm assuming you don't show up to those events and talk to your loved ones via Whatsapp.

I don't have social media, barely text (my plan has 100 texts per month), and hardly talk by phone (100 minutes per month). However, I have multiple friends and family that can claim that I'm the only one to have ever come to visit them at their house. This is either so strange or such a positive impact on them that I somehow hear it from other friends/family even though I'm surrounded by ether. The moral of the story, the ether isn't as thick as you think and the vast majority of people's "social interactions" are so shallow that showing up to a friend's door to drop off a bottle of wine will be a highlight of their year.


WhatsApp and its substitutes (iMessage, Facebook Messenger, Telegram, Signal, SMS) are where plans for richer interactions get made. If you don’t have any of these in common with a prospective social group, someone has to be highly motivated to relay plans to your landline.


Signal/SMS is not the same beast as WhatsApp in terms of surveillance, nudging and all those things falling under "forced addiction". When was the last time an SMS app wanted to give you recommendations or force you online?


WhatsApp has never done any of those things for me; it is exactly analogous to iMessage, just with a different user population.


It looks the same in the interactions, but it has critical differences beneath the surface. Start with the money - their business incentives are totally different. Apple sells devices, Facebook sells ads. One of those two companies has been regularly in the news regarding privacy breaches and disregard for its users' data. And if you don't like or trust Apple, take Signal.


What if someone lives abroad and is very close with a large extended family (plus immediate family), all of whom regularly communicate on WhatsApp?

Probably not uncommon, as WhatsApp was at first widely used outside the US, or by folks living in the US to stay in touch with family abroad.

Very often one person is not able to change an entire family dynamic. Sure, they could not use WhatsApp, but then they’d rarely talk with their family!

Point being, we don’t know the poster’s specific circumstances enough to offer any sort of informed critique.

That said, the question of generalizing from the poster’s experience is, imho, a valid one.

If we want to get all analytical about it, that’s my two cents ;).

Which is to say, my opinion is barely worth the paper it’s written on.

And since this is all on a screen ... ;)


> Ok, and where exactly is Whatsapp a vital life necessity in your world of messaging before calling? Those also aren't forms of meaningful social interaction.

These days, in many of the world's countries, especially in Asia. In those places, WhatsApp is also a primary venue for business communication and vital human interaction like setting up job interviews, doctors' appointments and pretty much any other communication with other humans.

Sorry, but your point comes across as horribly oblivious - criticizing usage of WhatsApp is one thing, but acting like it hasn't become a critical part of society structure just shows a major failure of looking outside your bubble.


That would mean your parents and you are not on very familiar terms. Perhaps that is the situation for you, but it is by no means generalisable.

I live pretty well without any personal social media. I call friends and family often, while being mindful of their time. Voice calls are wonderful.


I no longer use anything with posts and likes and a feed, but direct and small-group messaging are pretty important among my peers. May I ask what generation are these people with whom you use only voice (and maybe snail mail)?


That's a good distinction between different types of messaging.

I use email and messages as well, but the voice call is the big component. Generation-wise, it is from 7yrs old (nieces) to 90yrs old (no surprise there; oldies love to chat).


This site has posts and likes and a feed.


It's hyperbole, but it still indicates a hidden very real and well known social dynamic: humans have an intrinsic drive to belong to a tribe.

This is a very real behavioral mechanism which was essential to human survival as early as the Paleolithic and the emergence of the first hominid species. Not belonging to a tribe meant being exposed to hardships that you might not survive.

Feeling lonely is part of that mechanism. That's your subconscious kicking into high gear and pushing you to seek companionship in order to improve your chances of surviving as an individual.

Kurzgesagt explains this dynamic in more detail. [1]

[1] https://www.youtube.com/watch?v=n3Xv_g3g-mA

Modern technology, industrialisation and social advancements in healthcare, politics, law enforcement and agriculture have created circumstances in which you don't need to physically belong to a tribe 24/7 in order to survive. You can perfectly live alone and have your basic needs covered.

However, that drive for social connection is still there. That's hard wired into us. And that's what social media companies are exploiting.

Fear of missing out is exactly that. You don't want to be "out of the loop", you don't want to miss out on what's going on, you don't want to find yourself "outside" of a group. Think about how it was when you were in school, and you found out your friends had a get-together over the weekend and you weren't invited: it totally sucked. Well, that's basically that primitive part of your brain kicking into high gear, warning you that your survival may be at stake.

12 years ago, few people were on social media. And the vast majority of your friends contacted each other via cell phones, e-mail or MSN and such. You were less likely to miss out because you knew that the available channels didn't cater to 24/7 real time action with video and audio, plugging you in the middle of the action remotely.

Today, that's totally different. Modern communication is literally that: 24/7 high-intensity social contact with video/photo/audio fragmented across dozens of group chats, group calls, and dozens of channels to keep track of.

Net result? Studies indicate an increased prevalence in anxiety, depression, loneliness, suicide, self harm and so on. There's a clear correlation between the two. As is shown in the documentary.

The trade off of weaning off from all of that, for many, is having to battle with and against those engrained behavioural changes that make one grasp for their smartphone every other minute. And that's, basically, the very definition of addiction.

Moreover, unlike other addictions, there's a very real chance that if you don't look at your smartphone for a day that, yes, you will miss out on information the in-crowd - peers at school, friends, co-workers with watercooler talk,... - deems important to know.


There are some mechanisms like that but I think this explanation, as well as that from the video, is layman psychology at best. Yes, there are factors or mechanisms that drive your desire for belonging, but it is a pretty unconvincing observation. It doesn't have to be tribalism to prefer being around people you trust.

But if so, being enlightened about the failures and limits of human psychology certainly would constitute a tribe of its own, no? Because it seems to be en vogue to have simple explanations. FOMO is more connected to the fear of the unknown and fear of loss, in my opinion. A "tribe" would shield you of course, but it is mostly a sign of other needs not being met. Advertisers have used it to their advantage for decades. Some appeal to their audiences to be the source of other people's FOMO. "Think different" instead of "stay connected".

There are less suicides than in the 90s. That there is a suicide epidemic is a media scare. The main factor reducing the numbers seem to be economic perspectives, not some facebook group where taste was made illegal.

A much worse effect is that social media seems to push questionable characters in focus. Naive viewers and exploitative "influencers" can do quite some damage.


> There are less suicides than in the 90s

It's difficult to compare suicide over time because the method of counting changes, sometimes drastically.


> I think this explanation, as well as that from the video, is layman psychology at best

Well, Kurzgesagt backs their statements with references to academic research:

https://sites.google.com/view/sourcesloneliness/startseite

They also vet their videos with experts and are transparent in their methodology:

https://www.youtube.com/watch?v=JtUAAXe_0VI&vl=ar

> Yes, there are factors or mechanisms that drive your desire for belonging, but it is a pretty unconvincing observation. It doesn't have to be tribalism to prefer being around people you trust.

Why would you assume that I didn't consider other explanations?

> FOMO is more connected to the fear of the unknown and fear of loss in my opinion.

In what way wouldn't "the fear of the unknown" or "the fear of loss" be connected with the fear of missing crucial parts of the conversations your social network is having?

e.g. you might miss out on hearing about a party where someone makes a personal announcement (e.g. getting married, moving to another country,...). So now your friends have a shared experience of having heard the news first-hand that you aren't part of.

> A "tribe" would shield you of course, but it is mostly a sign of other needs not being met. Advertisers use it to their advantage for decades. Some appeal to their audiences to be the source of other peoples FOMOs. "think different" instead of "stay connected".

What "other needs" are these?

> There are less suicides than in the 90s.

How is the number of suicides 30 years ago relevant to a dynamic observed over the course of the past 15 years?

> That there is a suicide epidemic is a media scare.

[1] https://www.cdc.gov/nchs/products/databriefs/db362.htm [2] https://www.nimh.nih.gov/health/statistics/suicide.shtml [3] https://afsp.org/suicide-statistics/

The exaggerated wording you're using here hints at minimizing the issue, rather than a willingness on your part to acknowledge that social media usage and mental health are a public health concern.

> The main factor reducing the numbers seem to be economic perspectives, not some facebook group where taste was made illegal.

... but also seems to correlate with social media usage. They aren't mutually exclusive.

Look, we both know that establishing definitive observations on something as sensitive as suicide is hard. It's widely understood that suicide is underreported, and in many cases it's quite hard to establish exactly what compelled individuals to commit suicide.

The documentary equally stated that there's a correlation between increased social media usage after 2007 and an increase in suicide rate. But that's as far as it goes. In and of itself, I think that's compelling enough to warrant paying attention to.

Finally, this is touching upon a serious mental health issue, there was absolutely no need to make your comment sound as dismissive as it did.


Their sources aren't convincing. The questions about loneliness don't support the conclusion.

https://ourworldindata.org/suicide

I didn't say social media usage isn't a public health concern, there are many things that drive addiction. Social media use is convenient and it doesn't expose you to risks. Perfect for any form of escapism.

I doubt suicide is underreported. There are certainly cases misattributed, cases of attempts are excluded perhaps, but concluding something on that assumption seems premature.

I still remain convinced that a lack of perspectives in life is probably a main cause. Maybe social media paints things in a wrong or a more realistic light, but it is probably not the source of increased suicide.

I specifically criticized the explanation about tribalism. It seems wrong and isn't substantiated anywhere.

> In what way wouldn't "the fear of the unknown" or "the fear of loss" [...]

People have the fear that people are bonding while they are absent. Mostly the same sources that are the foundations of envy.

> What "other needs" are these?

Fulfilling companionship or friendship for example.

I think this is a case where the conclusion "social media sucks" was determined before the analysis of issues.

> How is the number of suicides 30 years ago relevant to a dynamic observed over the course of the past 15 years?

To have a reference. Especially since we have only had social media for 15 years, it is self-evident to look back a few more years.


> https://ourworldindata.org/suicide

This is not a credible source for suicide data.

> I doubt suicide is underreported. There are certainly cases misattributed, cases of attempts are excluded perhaps, but concluding something on that assumption seems premature.

There are lots of complicated reasons why suicide may be under-reported.

In the US the work to get standard definitions, in the NVDRS, to be used across the country is relatively recent. This document is from 2011.

https://www.cdc.gov/violenceprevention/pdf/Self-Directed-Vio...

> Despite the large volume of data on certain types of SDV, the utility and reproducibility of the resulting information is sometimes questionable. Mortality data are problematic for several reasons: geographical differences in the definition of suicide and how equivocal cases are classified; jurisdictional differences in the requirements for the office of coroner or medical examiner affecting the standard of proof required to classify a death as a suicide; and differences in terms of the extent to which potential suicides are investigated to accurately determine cause of death.18 The quality of the data on nonfatal suicidal behavior is even more problematic than that of suicides. The concerns about discrepancies in nomenclature19-23 and accurate reporting11,24 apply here even more than with suicides. Also, except for rare exceptions there is neither systematic nor mandatory reporting of nonfatal suicidal behavior in the United States at the state or local level, nor is there routine systematic collection of non-suicidal intentional self harm data.

> These “system” problems with data collection have been discussed for more than a generation. Over 35 years ago, the National Institute of Mental Health (NIMH) convened a conference on suicide prevention at which a committee was charged with recommending a system for defining and communicating about suicidal behaviors.25 More recently, two scientific reviews that addressed the state of suicide-related research also remarked on the need for consistent definitions. The Institute of Medicine issued a report entitled Reducing Suicide: A National Imperative.4 This report states ”Research on suicide is plagued by many methodological problems... definitions lack uniformity,...reporting of suicide is inaccurate.” “There is a need for researchers and clinicians in suicidology to use a common language or set of terms in describing suicidal phenomena.” The World Health Organization issued the World Report on Violence and Health.2 In the chapter addressing self-directed violence the authors note “Data on suicide and attempted suicide should be valid and up to date. There should be a set of uniform criteria and definitions and – once established – these should be consistently applied and continually reviewed.”


The criticism of the data is valid, but there is still more evidence that points in the direction that suicide is on the decline globally. And if the methodology of acquiring data is flawed to such a degree, we also wouldn't be able to make a statement in the other direction.


I moved most of my critical contacts to Signal. Not done yet though to be able to delete WhatsApp, but near there.


As my ex-colleague says "Any advertisement is propaganda".


I prefer David Foster Wallace's description of advertising:

"It did what all ads are supposed to do: create an anxiety relievable by purchase.”


I always liked: "Advertising robs you of your dignity, and sells it back to you at the price of the product."


With the arguable exception of therapy/counselling, you can never retrieve your dignity and self-worth by buying a product.

So all it really does is sell the promise.


True. As a side point, IMHO, therapy and counseling are not products.

They’re services, and, even more so, should be a service bound by a medical code of ethics, which, since the healing professions go way back, imho makes therapy and counseling distinct from many other services.

:)


So advertisements deplete dignity, with no recompense or amelioration? No wonder our society's replete with indignity.


I believe in most Latin languages there isn't a separate word.

At least in Portuguese and Spanish, both advertisement and propaganda are simply "propaganda".


Not true. In Spanish, "ad" is usually "anuncio", or sometimes "comercial" or "publicidad". "Propaganda" can also be used, but is less common than those, I think; I've never heard it used.

I once read an original edition of Bernays' Propaganda (1928) with its title in a jaunty typeface on the cover, before the word had a negative connotation.

Pic of its cover here: https://www.baumanrarebooks.com/rare-books/edward--bernays/p...


I have definitely heard "propaganda" in South American Spanish-speaking countries. Perhaps a regional quirk?


You can also use "publicidade" in Portuguese, but "propaganda" is not used exclusively for "political propaganda".


And all propaganda is political.


Right, except in this case Google and Facebook are not paying the people who create the content (so-called "users")^1, however Google and Facebook are the ones getting paid by the advertisers. The so-called "users" are also paying for the internet access and bandwidth over which the ads are delivered.

In sum, the "users" are the ones paying for the costs of this particular advertising vector (the internet and web).

1. In most cases. There are exceptions where original authors can get a cut of Google/Facebook's take.


Actually, Facebook does not share any of their ad revenue with their Facebook "users".


The implications for election advertising are that votes and even revolutions are available to the highest bidder.


The advertisers are the buyers, the platforms are the sellers whose product is your well-trained eyes.


The Jaron Lanier quote (with the first part paraphrased but the rest exact) is this -

"that we are the product being sold to advertisers is too simplistic. It is the gradual, slight, imperceptible change in your own behaviour and perception that is the product."

Yes manipulation is also the goal of all advertising, however the power to manipulate is not mostly in the hands of advertisers. The platform owners wield tremendous power and can decide what gets shown when and what gets priority over what. It means their political leanings influence people whether or not those advertise on the platform. We've never had this kind of concentration of power over the collective mind I think.


I am sure there would be regulation and consequences that would apply to movies, cable TV, and airline flights if misinformation about the Earth being flat were being pushed to commercially engage people.


There is a world of difference - not just one word - between contextual and tracking advertising; contextual only knows that somebody reading/watching this content is interested in this subject. Social media tracks everything about YOU. It is totally different.


This is taking the comment out of its context.

Yes, it's benign taken out of that context.

In the context of social media and tech, this is no longer the case, or it is the case that people are not comfortable with its untrammeled evolution and utilization.


Yea, but it's never spelled out like this.


I was with you until the last sentence.


That was Jaron Lanier.

http://www.jaronlanier.com/ if you're interested in more. He was an early pioneer in VR tech, and is a person with generally interesting ideas. He is more known today for his critiques of the tech industry and Silicon Valley culture.


> There was one big light-bulb moment for me.

Mine was seeing the AI / algorithm visualized as 3 humans trying to figure out what the best plan of action was to make you engage.

That really made me think about how it's another group of humans out there trying to take time away from me for the sole purpose of making their company more money. That's a much stronger mental image than a vague algorithm that we all knew existed.

That concept of the documentary alone made it worth watching.


It never occurred to me until now that Hacker News is free and this statement could be applied here as well, though I'm not exactly sure in what way.


It’s content marketing for Y Combinator itself, like a blog on steroids. Y Combinator wants to market to the following groups:

- startup founders, that could apply to YC

- investors, that could invest in YC companies

- potential customers for YC companies

- potential employees for YC companies

HN brings all of these people into a YC controlled community, and gets them thinking about YC frequently. It’s outstanding marketing, and like all great content marketing, it also provides value to the people being marketed to.


Of course. Remember that Dropbox started here [0]. It inspires lots of people to apply to YC. YC founders participate in these forums. It raises everyone's value in general. HN members often are the early adopters on many YC products, giving important feedback (for free) to these companies.

[0] https://news.ycombinator.com/item?id=8863


Oh, how funny that the top comment then would still be the top comment today.


> This is one of the few places where a tech outsider can be exposed to some of the topics that are on the minds of insiders. The caveat is that criticism of any of the conceptual bodies orbiting the starry notion of "tech startup" (social ramifications, diversity and discrimination, wealth and income, etc.) are heavily frowned upon. Now, are mods going through and sweeping away every comment that proclaims, "Tech bad."? (Mostly) no. But no one wants to bite the hand that feeds it, particularly when so many have usernames easily tied to their public personas. So it's disappointing but not surprising to see people hostile to or tip-toeing around said criticism.

> In any case, the basic dynamic is interesting: the less you pay for things directly, the more power you give up to those who do. The abstraction is largely symbolic for ads (consumers pay advertisers in higher product costs), but apparently where and when money exchanges hands matters to who ultimately has influence.

https://news.ycombinator.com/item?id=22513699


> The caveat is that criticism of any of the conceptual bodies orbiting the starry notion of "tech startup" (social ramifications, diversity and discrimination, wealth and income, etc.) are heavily frowned upon.

Perhaps in the US time zone :). I read and comment mostly when the EU is awake, and I assure you, I haven't seen a place more deeply cynical and reserved about tech startups than HN (and with the reasons and reasoning to back that cynicism up) :).

I wonder if anyone did an analysis of how HN sentiments change during the day, as people from different parts of the world are participating?


To change your behaviour to look more favourably on Ycombinator startups. (/s?)


Unless they're just losing money on it.

An apartment building I once lived in had free candy in the lobby sometimes. I rather doubt it was driven by any measurable profits for them by doing it.


YCombinator companies advertise here for jobs?


All the time. You'll frequently see front page titles like "My company (YC19) Hiring Engineers"


> He refined it to, (rough quote), "Changes to your behavior and actions are the product"

That's true of all media. It doesn't matter if you pay for it or not. Even Netflix. Even this post.

What is the "documentary" trying to achieve? Change behavior.

Who is behind the social media campaign?

"Ask HN: What do you think about “The Social Dilemma”?"

https://news.ycombinator.com/item?id=24468533

> That distinction made the insidiousness much more clear than the former statement.

Just as insidious are "documentaries". They are just as sneaky and agenda-driven as Facebook algorithms.


Documentaries are made with a clear statement of narrative intent. You can disagree with the positioning, but a documentary never pretends to be something other than a documentary.

What is Instagram? Is it a tool for sharing photos, or is it a social engineering experiment intended to increase competitive anxiety and narcissism?

What is Facebook? Is it a way to stay in touch with friends and family, or is it a social engineering experiment designed to promote "engagement" through ever-increasing political and emotional extremism?

If I want to make a documentary I can use the camera in my cellphone and maybe a cheap lapel microphone, buy some cheap studio lights, and edit it with cheap software. The research would be harder to buy, but still not impossible for someone with good basic journalism skills.

If it's not defamatory I can post it on YouTube. If it's solid journalism with an original and interesting angle there's even a fair chance I'll be able to sell it to a media network.

If I want to create my own global social network - that's a slightly more challenging project.


> Documentaries are made with a clear statement of narrative intent. You can disagree with the positioning, but a documentary never pretends to be something other than a documentary.

Except that the masses believe "documentaries" = truth. I know I did when I was younger. No documentary claims to be an "agenda-ridden 'movie' from biased individuals, funded by even more biased individuals".

> If it's not defamatory I can post it on YouTube. If it's solid journalism with an original and interesting angle there's even a fair chance I'll be able to sell it to a media network.

Except that you still depend on social media. You depend on youtube or netflix to give you preferential treatment.

> If I want to create my own global social network - that's a slightly more challenging project.

It's also challenging to get the money, not only to produce the film, but to get Netflix to give it special treatment and of course to buy the "ad" push we see on social media (ironically enough).

"The Social Dilemma" isn't just a "documentary". It's a part of a well funded propaganda campaign. There is obviously political and financial backing for this behind the scenes. Not that I'm against the sentiment because social media monopolies or any monopolies are a threat to society. And I also include netflix as a monopoly threat.


> Documentaries are made with a clear statement of narrative intent. You can disagree with the positioning, but a documentary never pretends to be something other than a documentary.

But you could describe this film as propaganda with an anti-social-media agenda. I don't necessarily disagree with your point, but this argument can easily be flipped and feels a bit like a no true Scotsman [0], since there very much are documentaries (as listed) with doubtful or undefined intent.

[0] https://en.m.wikipedia.org/wiki/No_true_Scotsman


> What is the "documentary" trying to achieve? Change behavior.

But will the filmmakers profit if they succeed? They don't make any money by successfully changing your mind.


You can profit in more ways than just with money. For instance, you can gain status in your sphere by successfully changing people's minds with your documentary, and I would assume that's partly what motivates people working on these projects.


Well, yeah, they will make money if they succeed, because that means they made a good documentary and will be paid to make more.


Well, we typically consider films to have intrinsic value that scam ads for boner pills and clickjacking attacks don't.


Exactly. Exactly. So many of those are thinly veiled lies in disguise; and when creators are pointed to inaccuracies, their response is "we shot this not to accurately convey the sequence of events, but to raise awareness and to start public debate". Obviously, no debate is started, because the general public likes to consume, not analyze, what is being consumed. Sorry for the rant; I'm fed up with documentaries being mockumentaries.


I really liked the futures analogy. Off the top of my head:

> there are pork chops futures, oil futures, and now human futures.

This makes so much sense on every level.


I feel like they edited that quote a little too far away from the final(?) statement she gave, which was along the lines of "these markets should be banned, just like we have banned markets in human organs and human slaves."


> "If you're not paying then you are the product" idea

To be honest, "Even if you're paying you are the product too".[0,1]

[0] https://time.com/5596033/lawsuit-apple-selling-itunes-listen...

[1] https://www.washingtonpost.com/technology/2019/05/28/its-mid...


"Changes to your behavior and actions"

Is there an industry anywhere that is not in the business of influencing behavior or actions? Why do you think that doesn't apply to news, education, healthcare, whatever?


There's a big difference when you take informed consent into account.

When I go to the doctor I understand the scope and goals of my healthcare provider. I'm not surprised when my behaviors are changed to enhance my health.

When the average person uses social media to connect with friends they aren't informed on the scope or goals of the social media provider. When I RSVP to a birthday party I only expect that my response will be sent -- I don't expect that I'll have birthday gifts suggested to me or have my relationships tracked by surveillance organizations.


> When I go to the doctor I understand the scope and goals of my healthcare provider.

Are you sure about that? The fact that I sometimes have to argue for a test, and their counterargument isn't "the test isn't useful, or could even be harmful" but rather "the test is expensive", makes me wonder what their goal is. I'm paying for the test (or my insurance is), aren't I? It seems they're trying to optimize for something, but I'm not sure it's my health.


Physicians have an ethical obligation not to order unnecessary treatments, especially those that could cause iatrogenic harm. That applies regardless of who is paying. Most tests produce some level of false positives, which can potentially lead to more unnecessary treatments.

From an evidence-based medicine standpoint, how likely are those tests to actually optimize your health?


In this case the doctor explicitly said it was to reduce costs, although the other test is preferable from a diagnosis perspective. Like I mentioned previously, if the reason the doctor gave was that the more expensive test was also less useful or had other negative consequences I hadn't thought of, then I'd appreciate that. To simply rule out the more expensive test based on cost seemed at odds with what I wanted, again, as insurance was paying the tab.


You should ask your doctor! I don't have enough information to give you a confident guess, but my intuition is that they're trying to save you money when an expensive test is unlikely to be helpful.

Many people struggle to make ends meet, so it makes sense to avoid superfluous medical costs. If you tell your healthcare provider that money is no object I'd imagine that they'd be able to weigh the financial aspect more accurately.


The customers of those industries are showing up for roughly the "changes" they are receiving.

Customers of social media are showing up for pictures of grandkids and are receiving disinformation & outrage, because that content hacks the brain & will ensure the eyeballs come back more reliably, and for longer.


In my experience, the best way to reach people who have only seen the consumer side is to frame it in the third person.

We tend to underestimate the influence advertising (and media, etc.) have on us, but are relatively objective in observing the effects of advertising on other people, society, and such. Being influenced by advertising reads as being duped to most people, and we resist this conclusion.

This is a little different from many subjects, where we tend to empathize more with examples that have us as the object.


Not as catchy, but much better than "If you're not paying then you are the product". Wikipedia is free. Linux in all its flavors is free. LibreOffice, Firefox, LLVM, gcc, Python. Lots of things I'm not paying for don't make me the product.


It's an interesting counterpoint, but I'd argue that none of those are really businessman or are expected to have revenue models. The exception being Firefox, which I believe is paid by Google for search engine placement and does make you the product.


businesses*.


This is interesting. I wonder what it's like also to kind of... Not really make it about that at all. Say you're hanging out with your friends IRL, and you're not paying them to hang out with you, or didn't pay the grass to hang out with them among it. Are you still being a "product"? I think in that case we don't dig into it because it all seems part of some natural order of things. I guess we just need to keep living till we get there.

I feel like it seems worthwhile to have an angle that doesn't dichotomize your actions into product/non-product ones; maybe you just act in a space orthogonal to it and/or use your understanding of it to inform yourself but aren't driven by needing to be product or non-product. Capitalizing on that last desire is I think how the language ends up becoming too much ends and not means.


https://coolguy.website/writing/the-future-will-be-technical...

This may or may not be related, because I'm finding it difficult to parse your comment.


Honestly, I love it! Thanks for that. :) Yeah, I care about this: we have scaled bakeries that make lots of stuff, and we've industrialized it and commercialized it and... etc. And that's great! But we also bake at home in our kitchen, for our families. We do things in between: small businesses that serve their communities. I want us to do that with software, and I want it to be as widespread as it is with the things we take more for granted, like cooking. We're doing it already: when you write a blog post, you are producing a software experience for someone else. I hope that over time we spread the empowerment to create more kinds of software experiences across more people, and without making everyone feel like it needs to be about scaling: gardening, not necessarily agriculture.


Seems to be in line with the history of PR and advertising presented in https://en.wikipedia.org/wiki/The_Century_of_the_Self


True, I had heard many versions of this idea, but this one came across as a rational explanation.


This kind of wording is conspiratorial mumbo jumbo. A social media company's product is its platform and software. Their revenue comes from advertising. No one ever told someone watching network TV in the 80s, "you are the product", because that would be ridiculous. It's no less ridiculous to do that with websites and phone apps.

Why can't we just have a frank discussion about advertising and privacy without this kind of manipulative language?


“Talking past each other” is now the norm; culture is not a solution of molecules that can settle back down - how the crap do we unbake a cake?

This sounds like fluff: appreciating the art, style, aesthetic of the work/entity we are talking to should be #1 goal.

Engaging in the appreciation of art is to jettison our judgement of the subject to focus on the choices “why” and “how.”

I don’t know if this groks with anyone else....that’s kinda part of it too.

The prospect of work being judged by peers negatively is the pain amplifier on all our hands.

There are some of us out there - good people that are just broken.

Their path has made them feel it as if it's on all the time, and they are as unstable and judgemental as they are talented.

I have no idea how to fix the broken - I am just trying to paint myself whole again, one human interaction at a time.


A social media company's product is its platform/software only if it manages to sell those or extract rent. That could apply to "Facebook at Work" and LinkedIn, but they're the exception. For other social media, the platform and software are infrastructure, not the product.

The key difference between network TV in the 80s and social media now is the former had content to offer. As such, you could call the ads supplemental. TV also has dual income streams: subscriptions and advertising.

A social media platform has no content to offer, except the content provided by its users. If you provide content for free and the company benefits through increased eyeballs and therefore ad revenue, in a sense you're an unpaid employee. That makes you the product as shorthand for you providing the product.

Based on main revenue streams, network TV is either in the entertainment or advertising sector (I haven't looked up the numbers), but social media are squarely in the advertising sector. That makes the user's data the product, and "you are the product" a totum pro parte.


Nielsen ratings[0] have been a thing since the 1930s, so in a way people's attention has indeed been commoditized for a while now - only with an increasing granularity that has now reached some of an individual's metrics (and we're continuing to go along that path with smart devices and addition of sensors).

I'm not sure why saying to people that their attention and subsequent changes in behaviours were the fundamental parts of entire markets would be conspiratorial, but I might be misunderstanding you.

[0] https://en.wikipedia.org/wiki/Nielsen_ratings


Two key differences:

1. Nielsen boxes were and are opt-in.

2. You were and are compensated for keeping one.


1. Social media is opt-in

2. You are compensated by accessing an incredibly complex system and network for free where otherwise you would have to maintain or pay for some kind of infrastructure.

(I am playing the devil's advocate here because it is interesting to me to understand in what way other than granularity this would be any different from the way the media has been communicated to us for a long time)


What was once "analytics" is now called "spying" on HN as well, which pretty much shows how much nuanced debate is now possible here: any attempt at talking about certain products just brings out extremist rhetoric, which kind of defeats constructive discussion on both sides.


In the 80s you weren’t individually targeted based on data collected on you personally.


Ever heard of nudge economics?


Agree, that was the biggest insight in the movie.

I do question whether it's a problem. No one likes the idea of being manipulated, but... Aren't we being influenced by others all the time? What is really genuine and what is external influence?


That was, if I recall correctly, Jaron Lanier [0].

Well worth reading the stuff he has written, or watching recordings of his past speeches, if you haven't done so already.

[0]: https://en.wikipedia.org/wiki/Jaron_Lanier


This is literally what many parasites do. In the case of GI parasites they make you crave what they want.


Isn't that true of all advertising, be it TV, billboards, magazines, or digital?


No


For the last 5000 years advertising has tried and succeeded at making people change their behavior and actions. It's not all that insidious, it's just how advertising works, that's its entire purpose.

The doc was not awful; it's good to get people talking, but it was more one-sided than a Facebook feed, which I found very ironic. Every well-lit talking head had the same point of view; they presented their mic-drop line and then there was no discussion, no counterpoint, no follow-up.


Agree. That was a sharp observation by Jaron Lanier (father of VR?)


The pigs on the farm don't pay rent.


My light-bulb moment was when they compared Wikipedia with Facebook and Google. I took a moment of pause and probably thought for the first time in my life about how this changes the whole narrative. Facebook and Google actually manipulate you to give you a different picture of reality, basically dividing us even on subjects of prime importance like global warming and the coronavirus, where we can't afford to form different opinions. They shouldn't be allowed to do that; sometimes it's a question of life and death.


So... essentially the product they sell is your metadata, which is also the data the surveillance state finds valuable.


If you don't pay you obey!


in much the same way the documentary was designed to do!


The film does play a major role. Many of us here on HN don't even have a face/insta/twi/tube account. We've known about this for years. But explaining it to the layperson makes you sound paranoid. The movie explains it very well (although they omit Netflix as a culprit).

Here is one thing that I have a hard time explaining to people: fun is the enemy. Or at least the enemy is disguised as fun. When you share a meme, it's because it's fun. No harm there. But memes are the fuel that keeps the system going. People are not being misinformed (only) because the media is spewing conspiracies. They are misinformed because they are getting their news from Facebook memes. You know those low-quality screenshots of a news article? Or the set of 3 pictures that shows an exaggerated reaction to some news? Or worse, the Twitter screenshot with no context?

It is funny. That's why we share it. But when you realize what is happening in the background, it is not funny anymore.

Edit: I wish they had added some metric to the little bell on top: https://i.imgur.com/WNFThds.png . it would have shown how much the little red circle entices you to click.


It seems to me that the hypothesis that social media is somehow generating more misinformation than before is an exercise in ignoring all human history before 2004 or so. People were always misinformed; how do you explain the whole 20th century otherwise?


The dose makes the poison. Almost everything in life is about 'how much' not 'is or isn't'.

The vectors we have to spread misinformation nowadays are far more powerful and readily available than ever before. What was once a (literal) walk from one village to the next is now an express line straight into the brains of thousands to millions.

It's easier to destroy truth and reality than it ever has been.


Where is the evidence for this hypothesis that it's easier than ever before? The Communists and the Nazis had absolutely no trouble destroying truth. History is full of this.


Your argument of "we've overcome similar things before" was dealt with in the documentary: this system is always adapting to hook you in better, it benefits from massive social pressure, and it's fueled by billions of dollars. So actually we've never overcome anything like this before, and no one has ever had such power to spread misinformation.

One could argue that there's plenty of other reliable news sources, but for many reasons the trust in official news sources is collapsing. One reason is that social media has flooded the world with so much content that it's hard to figure out what's factual and what not.


No, it hasn’t been dealt with. My point is prior to 2004 there was enormous amount of disinformation in the world that had no trouble spreading quickly. The documentary presented no evidence that somehow people are now misinformed more than they had been in the past. Social media is one way to spread both information and misinformation but it’s not the only way and it’s not clear that its net effect is that people are misinformed more than before. I remember the world before iPhones and Facebook and I’d say average person in that world was totally misinformed about many things.


> I remember the world before iPhones and Facebook and I’d say average person in that world was totally misinformed about many things.

Not nearly as misinformed as during the communist/nazi eras you mentioned above.


There are studies showing how conspiracy theories travel vs. scientific information; what would count as evidence?


I think the poster above is suggesting that whatever conditions led to worldwide tensions (and world war) may be resurfacing now. Not that these tensions always existed to this degree. Just that they have before, and are again now.


I get the point.

In my experience I can safely say that social media's impact on our information ecosystem is a huge step change from anything we have seen before, to the point that it is effectively new.

That the current problems build on human neuronal frailties that have existed since the dawn of time is also true.

So it is natural this conversational junction will come up repeatedly in future conversations.

How do we answer it and settle the issue comprehensively? Hence the question.


IMO we can all see how the spread of misinformation is unproductive and benefits only a few stakeholders. So I have confidence that some way or another we will figure out a solution. It will be a new solution, unlike things we've experienced in the past, and it will also be similar to the past in that we will overcome it.


You are right, people were misinformed from the dawn of history. But never before in history has the real information been so available.

If you believed that the pharaoh was a god, well, there was no way to know otherwise. Not only did you not know how to read, but even if you did, all the writing confirmed that he was, in fact, a god.

Today, information about counter arguments is available for any claim. If it sounds suspicious, you can use the discretion of your mobile device and look up 2 or 3 different opinions. You can read the study. You can listen to experts in the field. You can do all that before the king finishes his speech.


If the truth sounds suspicious, you can look that up too, and find plenty of counterarguments from pseudo- or self-proclaimed authorities.

Flat earthers don't just read one article and suddenly they believe it. They spend a lot of time going down rabbit holes reading and watching all sorts of stuff.

There's good information out there, but there's also a swamp of bullshit pretending not to be that you have to wade through.


And not only do flat earthers not read reputable sources to soberly reflect on the matter, but often the truth or falsehood of their preferred conspiracy theory is not really the point. A lot of people are getting into Flat Earth, QAnon, or 5G-causes-whatever because those forums offer them a sense of community that satisfies their emotional needs. Perpetuating the conspiracy theory is just the ritual they must perform to keep the group vibe going, but in itself the conspiracy isn’t all that really important (as destructive as its side-effects are for the rest of the world).

I think the challenge for defeating those theories is providing the poor, undereducated and marginalized a more wholesome way to spend their time among other people. That is a hard challenge when community centers and churches are no longer a significant thing in many locations, and now the COVID lockdown means even less real-life socializing.


I agree with much of your post, except this:

> I think the challenge for defeating those theories is providing the poor, undereducated and marginalized a more wholesome way to spend their time among other people.

Why do you think it's the poor who buy into these theories? I live in a relatively well-to-do neighborhood (not in the US) and I've seen plenty of anti-5G banners hanging from balconies; there are plenty of anti-vaxxers, plenty of people who think COVID-19 is a hoax, and I know some people tangentially related to flat earthers (via some new-age beliefs). These are all educated people.


> Why do you think it's the poor who buy into these theories?

It is not solely the poor and uneducated who buy into these theories. However, for several extreme movements in recent years, sociologists have noted that they mainly tend to attract people who are socially marginalized in some way. While the educated and even some elected politicians can become visible proponents of the given conspiracy theory, they are arising on top of a mass of less privileged supporters.


How is it that sociologists know these things about anons?


This is an important point. It reminds me of pyramid schemes (essential oils etc.) which can be found among the affluent.

These conspiracies don’t start with much, but if one gains a bit of traction there’s suddenly a bunch of “influencers” peddling it for more influence (fuck society, I need those subs and likes!... right?). More influence, more ad dollars. Next thing you know a 4chan meme is political propaganda on the national stage.

I really don’t think this is hard to follow. Anyone who’s been in tech or marketing for a while either knows this, or they’re incentivized to pretend they don’t.


People were always misinformed, but now misinformers operate at web scale. Thus, more misinformation.


Information is also operating at web scale, so it’s not obvious that the net effect is more misinformed population than before.


> Many of us here on HN don't even have an face/insta/twi/tube account. We've known about this for years.

No. HN does not get it at all. You never see coherent explanations on HN.

This movie got it pretty well. Far better than HN.

I think people on HN can see there are problems with Social Media, they are smarter than the general population, and technical, so know alternatives.

They see the smoke and can act correctly, but they don't get the problem at all.


Social Media platforms are like being famous without any of the good parts.

You just get all the shitty parts of living your life with 1000 fake friends (I mean fans): a hero complex, or an influence complex, or hypersensitivity to judgement, or addiction to others' opinions, or addiction, or suicide, or an inability to communicate with people different from you, or an inability to determine fact from scam, or a permanent state of social one-upmanship, or a constant stream of unending judgement for all your actions in a public forum, or shit tons of fake friends, or complete social isolation in a room of people, or an advocate complex.

All for the pretend reason of "communication" or "connecting the world".

Humans are not built to have close social bonds with 50+ people, it breaks down something about what grounds us. It is why there is a cliché for the famous person with all the bad parts above.


Humans absolutely are not built for large social networks. There is a theory called Dunbar's number that suggests that humans have a cognitive limit of ~150 social relationships.

https://en.wikipedia.org/wiki/Dunbar%27s_number


IMO, the continuation to Dunbar's number (theory) is Yuval Noah Harari's theory, as outlined in Sapiens.

That is, early humans were intelligent animals, but limited in their sophistication by Dunbar's number, so to speak. That is, our range of social behaviour (and therefore most behaviour) was biologically limited to small social groups.

The Palaeolithic revolution circa 50kya-100kya was (according to YNH) the invention of language and the sophistication of "fiction", which enabled cooperation in larger groups: the concept of a "tribe" with ancestors, shared traditions, and social cooperation in much larger groups. Much larger groups allow for much more sophistication.

In some sense, he defines human progress as the quality of cooperation in larger groups. Basically culture takes over from biology, in determining effective human social group size.

Social networks are just another advancement in this trend. All these advancements are somewhat pathological, because they're at odds with our innate (noncultural) limitations. Consider how bad we are at "politics" and how "politics" is a bad word. The contrast is decision making in small, tight knit groups.


"A friend to all is a friend to none." -Aristotle


Nobody says you have to be (fake) close friends to the people you network with on social media.

I follow Paul Graham on Twitter. I don't think he is my friend (if only because he doesn't know I exist).


Or you just look at cool pictures every once in a while.


Although few people will read this follow up comment.

Since 2008-2009 and the mainstreaming of social media, the US has seen a 40% increase in the suicide rate for 15-24 year-olds.

The "good" does not out way the bad.


I really like this documentary/film, but if I’m being honest, it does leave a taste of typical SV virtue signaling in my mouth.

The website explicitly states that their site allows cookies "set through our site by our advertising partners and may be used to build a profile of your interests and show you relevant ads on other sites."

Since visiting, I received two ads for "Moment" on Instagram and one for Asana on YouTube.

Crazy hypocritical, opt-in or not.


After reading your comment I immediately got reminded of : https://www.youtube.com/watch?v=6XebyM7mU0A (Rick and Morty - Simple Rick Wafer Shattering the Grand Illusion)


Not to mention the film is funded and owned by Netflix itself.


I was actually baffled by this move. Is Netflix going to war with FB and Google? I mean, Netflix itself collects a tonne of data... The film called out FB, Google and Twitter numerous times while Netflix was conveniently omitted.

Not saying I'd expect them to damage their own brand, but it's still very, very close to shooting yourself in your own foot.


Neither Apple, Microsoft, nor Amazon was mentioned. All of them collect data.

The difference being neither run a social network. Well, LinkedIn is one, I suppose.


Does Google run a social network? There was plus, but that's gone. Youtube is kind of a social network I guess? But I don't know anyone that uses it as anything other than a one way means of communication.


For Google, they talk about how the same search can give different results depending on the user. For instance, they mention that a search about "climate change" can return either denialism or confirming news first, based on what Google estimates will make you engage more, not on which one is the most relevant / accurate / true.


It doesn't have to be a social network. They track how you use their apps and software and construct them with the goal of keeping you in their ecosystem as much as possible. Tristan Harris, one of the main people in the movie, worked on GMail when he started wondering why they weren't focusing on helping people use their apps in a more responsible, less addictive manner.


Yea, but the difference is you pay for a Netflix subscription. Their main purpose is to keep you paying for a subscription; social media's purpose is to keep you engaged so they can advertise to you.


Yes, but:

- They're trying to do the same thing, keeping you engaged by shoving as much content at you as they can, with recommender systems that "know you".

- They polarize you in just the same way as advertisements do. Once I started watching documentaries on meat, that's all I was suggested from then on.

The only thing I'd say is Netflix's content is regulated and less opinionated, whereas YouTube is a free market and every opinion has its own library of rabbit holes to dig into.


Well, Netflix's product is the content they serve, and one logs on to Netflix to consume that content. FB or Google are supposed to offer other "services" for "free" and end up abusing that in order to shove ads down your throat. Am I the only one who sees a difference there?


As Bill Hicks warned us, it's like the anti-advertising dollar that the advertisers are marketing to.


And the top of the website has a wiggling bell notification icon, so you can sign up for some dopamine fixes.


I agree. These guys want to sell you another technical solution. When Netflix finds one they like they will let you know.


This actually reinforces the point of the movie.

We don’t live in a world that can break free of these atrocities.


?

The point of the movie (per the website as well) is literally a call to, “rebuild the system.”


That’s not what it’s about, that’s what they tell you it’s about.


Every anti-war movie ends up being a pro-war movie. I stopped watching it halfway through because I couldn't get past the cheap narrative device of posing a question and interrupting the answer with cheap drama. Sort of like cable television or YouTube: "How did they build this big thing? Find out after a word from our sponsor." The infamous 1-in-3 success rate to keep the user engaged with little dopamine hits. I think I'll become aware before the machine ever will.


I don't understand this movie. I agree with the sentiment that social media is bad for people, especially youngsters, but I fail to see how advertising and corporations are to blame for the UX practices of social media websites (notifications, infinite scroll, liking posts etc), especially to the level that the movie makes it out to be.

Sure, it is in Facebook's best interest to keep you on the site for as long as possible so that they can drive ads. But the same issues with social media existed before Facebook had ads. People would still be checking their feed constantly, checking if their post had new likes, seeking the approval of their peers.

One of the interviewees said he was addicted to his email... what bad practices does Gmail employ and how exactly are advertisers to blame for this as well? Gmail has a couple of ads but that's about it? It also said Whatsapp was one of the apps responsible for the spread of fake news, this is simply a texting client with the ability to send messages to groups.

Maybe it turns out that if you put technology and instant messaging in the hands of social circles, messages, whether fake news or not, spreads faster than ever.

If Facebook had no financial incentive, sure, maybe they would have realised the negative effect it is having on people and shut the whole thing down. But then another site would have come along doing exactly the same thing. Maybe the issue is that people don't cope well when being connected with hundreds of "friends", constantly seeing updates from their lives and feeling compelled to also share updates of their own.


Yeah, there was a lot of denouncing of evil practices, and very little introspection. As with every human vice, the existence of bad actors exploiting the phenomenon shouldn't overshadow the fact that it's deeply rooted in how we work, because that risks precluding self-examination and self-awareness as a way out.

Remember when the Flappy Bird guy removed the game from the store because it was too addictive for his liking? It goes to show how the addictiveness of tech can easily be accidental. And then people complained that he did. Even when a creator says "please, stop, this is harmful" it will go unheard. So putting all the blame on evil corporate decisions is an incomplete picture, but it sells better I guess.


Huh, I forgot about Flappy Bird!! That pretty much hits the nail on the head. I agree blaming corporations sells better, but they really exaggerated it.

I just rewatched some of it and got to the bit on A/B testing and how it is mind control and we are all guinea pigs being manipulated... apparently every marketing department in every business, and every advertising company is now some evil puppet master controlling people without their consent!


Ironically, I recall a discussion on HN about people being irrationally afraid of A/B testing, and how overcoming that fear could greatly benefit society (the centerpiece topic at the time was variants of hospital posters about hand-washing, iirc). A/B testing is just another neutral tool, but productions like this feed on those fears.
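
For what it's worth, the mechanics are mundane. Here's a minimal sketch of comparing two variants with a two-proportion z-test; all the counts are made up purely for illustration:

  from math import sqrt
  from statistics import NormalDist

  def ab_test(conv_a, n_a, conv_b, n_b):
      # conversion rates for each variant
      p_a, p_b = conv_a / n_a, conv_b / n_b
      # pooled rate and standard error of the difference
      pooled = (conv_a + conv_b) / (n_a + n_b)
      se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
      z = (p_b - p_a) / se
      p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
      return p_a, p_b, p_value

  # e.g. poster A vs. poster B, clicks out of views (hypothetical numbers)
  print(ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400))

Nothing in that is sinister by itself; whether it's used for hand-washing posters or outrage headlines is a separate question.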


I don't think they claimed only social media has problems or advertising is the only reason it has them. Social media is where the problems meet and affect the world most. And Facebook was always designed for advertising.[1]

WhatsApp has large groups anyone can join. SMS doesn't.

[1] https://www.businessinsider.com/facebooks-first-ever-ad-sale...


This has been done to death. And I don't mean that I'm tired of hearing it. It's more like talking about the obesity epidemic. Conceptually the solution ought to be pretty simple (eat less food, exercise more), but in practice it's a pernicious problem which plays on our weaknesses, and does so in a way such that most people are vulnerable. We never solved this problem. Most people are a little too fat. And so it seems with social media. There are solutions out there which definitely could work, but I doubt we'll ever truly implement them. The problem of social media is human impulse control. And as you know, across a wide population, impulse control is lacking.


And yet we are now looking at the obesity epidemic from the production side, rather than blaming people. The food itself hijacks important systems that are parts of our hardware. As the movie demonstrates, social media all too often does the same thing: hijacks the hardware. And as humans, we have a lot of important social instincts that are built in. Companies should not be permitted to take advantage of it: Artificial intelligences that plan feeds to addict us should be illegal, in my not so humble opinion.


If you think the blame lies on the production side then how do you explain the low obesity rate in Japan? They have no significant restrictions on food production.

https://www.cia.gov/library/publications/the-world-factbook/...


Extreme social compliance and social comparison, two things that are also built into the hardware.


Watch out for misinformation on all sides. This film tries to use bicycles to make a point and gets it completely wrong. See https://twitter.com/peteyreplies/status/1306186884683567104.


Thanks for this. I was watching it yesterday and thought "there is no way everyone was ok with bicycles" when he said that. It is not human nature. I also think a healthy dose of skepticism is good, as long as we fall back on scientific data.


I was surprised that comment went through a production of dozens of people and no one saw a problem with it.


Yeah, I was quite surprised by this historically incorrect analogy when I saw the film, but it does not invalidate many of the other good points that were made. Every argument should be evaluated on its own merit.


A lot of people are going to say this movie is filled with people who made their millions and are now on an apology tour. Their bank accounts are still large, and their creations are still at large. I agree.

But this also makes them the most effective people to talk about it. I've had so many non-tech people in my life be blown away by this movie. Nothing in it was new to me or anyone here, but most people don't hear/see these things.

Most people in this movie were early employees, before we knew how bad it could get. Back then, social media was connecting people in new ways. Facebook in the early days was fucking magical. It had the potential to change the world for good. And then... it didn't. For example, the inventor of the Like button talked about how the goal was to spread love and appreciation. I believe him that he didn't foresee it would someday cause a rise in suicides among young women. How could anyone? And that's why they're speaking up now.

I agree with the criticisms. Lack of diversity, culpable people trying to save grace, Netflix making money off it and not being mentioned, etc.

But it's not for you or me. It's the most succinct, visual description of what's happening right now, made for the average person, on a streaming platform people respect. And the voices they used mean a lot more when the ideas are coming from previous true believers.


> But this also makes them the most effective people to talk about it.

Nope nope nope. While tech people might have some insight into how some of the tech works, they're still lost in how to both define the problem [1] and describe how these technologies may shape society [2]. Nothing in their professional or personal experience has adequately prepared them to answer these questions.

The thing tech people can do now is listen carefully to people who study and understand the many areas tech touches in society, and work with them on solutions that are well-vetted and considered. Tech people need to open their minds and use their power to give voice to those who can more clearly see and explain what's wrong [3].

> I believe him that he didn't foresee it would someday cause a rise in suicides among young women. How could anyone?

There were, in fact, people who could have predicted this. They study things like... teens and mental health. Not things people in tech tend to study. Let's not assume that what we don't know must not be known.

We got into this mess because of hubris. That same hubris will not get us out of it.

[1] you will note they struggled with this part in the documentary.

[2] beyond facile doom and gloom.

[3] Center for Humane Tech's podcast has some good interviews with experts, but those voices were largely absent in this doc, and the response I see to this doc still largely falls in the camp of 'techies should evangelize/solve it', which is what I'm responding to here.


The Center for Humane Tech's founders, Tristan Harris and Aza Raskin, were the main people interviewed. You can't say they don't know what they're talking about, and then cite their organization.

I think it means a lot coming from people who created the technology. Maybe not to me or you, but it definitely does to non-tech people.

It's like how a thousand people can say K-Cups are bad for the environment, but it's way more shareable that the creator regrets it.


Oh I agree that the story 'I made it and I regret it' is catchy and very useful to add to the conversation. It's a 'man bites dog' type of headline in that it gets the general audience to pay attention. It's valuable to have that perspective.

I like Tristan and Aza and CHT and think they're smart, well-meaning people who are doing their best, and I value what they bring. I especially like their focus on trying to explain some complex stuff in terms and ideas that are simple enough for people to grasp. I'd love to see more well-vetted solutions from them; screen time was a valuable concept they advocated for.

As far as whether tech people are the most effective people to talk about the problems they created, I don't agree. I'd prefer someone who has expertise in the implications of technologies, who can root what is happening in social dynamics, human psychology, etc. Someone whose voice in those areas is highly respected, not someone who read a few articles or a book and then translated it. We need more of a Carl Sagan of sociology/psychology/history who can integrate and explain these things.

Any sufficiently complex topic and area of power needs checks and balances, people with different perspectives and backgrounds who help define the boundaries for discussion and consideration. My concern is we're always hearing from tech people; by default they have a seat at the table because they're in the seat of power in this domain. Their voices need no further elevation nor does their lack of expertise in relevant fields warrant it. We need their counterbalance.


> As far as whether tech people are the most effective people to talk about the problems they created, I don't agree. I'd prefer someone who has expertise in the implications of technologies, who can root what is happening in social dynamics, human psychology, etc. Someone whose voice in those areas is highly respected, not someone who read a few articles or a book and then translated it. We need more of a Carl Sagan of sociology/psychology/history who can integrate and explain these things.

Until you or someone else that fits this personal preference of yours comes along, I guess the problem is that we don't have a lot of choices in terms of interesting, enlightening, clear, and coherent documentaries that describe this problem.

So, yes, you may be right that someone else could describe this situation differently ... but absent that actually existing, this is (in my opinion) a pretty fine documentary, even if some of the talking heads are / were 'tech people'


Do you have some recommended reading? I enjoyed reading Neil Postman’s Technopoly, but any contemporaries you find particularly insightful?



I'd love to give you a long list but I don't have it - I just know what I don't know. I'm actively looking for more sources as I'm interested in working in this area, given its societal importance.


> There were, in fact, people who could have predicted this.

Did they?


Not sure if she explicitly wrote about likes = teen suicide, but I'm sure danah boyd would have had something to say about it, given her research focus on the intersection of social tech and teens' lives.

She was already well respected at that time and known to my team when I worked in Yahoo's online communities division in 2006 and went to her talks.

The relationship between suicide & social status has been well known for decades. It therefore would not have been a leap for someone in the know to conclude that a tool that conferred social status or approval (such as a like button, a list of friends, etc) would therefore have these social implications.

You can read her stuff here: https://www.danah.org/papers/


So, no?


They are the best at this point because they were the ones that created it so the impact of their words is way bigger.

They might not be the best to talk about solutions tho, but I think the purpose of the movie is to make the problem known.


> For example, the inventor of the Like button talked about how the goal was to spread love and appreciation. I believe him that he didn't foresee it would someday cause a rise in suicides among young women. How could anyone?

Every single thing in the world has upsides and downsides. Cars made transportation affordable, but people die in car crashes. Farming feeds billions of people but also pollutes the environment. The internet brings information to people but also disinformation. Medicine has saved hundreds of millions of people, but it has also interfered with natural selection as a way of making our species stronger.

You can (and should!) look for bad things about each technology and try to minimize it and make sure there’s right trade off. But trying to present something as pure evil, just because it’s also doing harm, is intellectually dishonest.


I concur. My friends and relatives, who never cared about what I (a techie) said about online tracking, are all now trying to educate me about the harm done by google and Facebook.


I think the arguments in the film are a bit embellished, but certainly they got interesting people to go in front of the camera and they made some good points.


> "Most people in this movie were early employees, before we knew how bad it could get. Back then, social media was connecting people in new ways. Facebook in the early days was fucking magical."

no, before facebook, there was myspace and friendster (and more primitive social networks before that). we all knew exactly how bad it could get. facebook was not fucking magical. it was a psychological exploit for profit's sake. zuck said as much when he let his guard down. he and facebook are not to be venerated nor absolved for this bare-naked blight on humanity.


I think we had different experiences.

MySpace and Friendster and message boards and IRC were all pretty benign. I don't think MySpace had a single "algorithm". There was no newsfeed; it was just a personal Geocities basically. How were they "bad"?

Facebook WAS magical for anyone who was in college around the time it started. All of a sudden, you could connect with thousands of people around you. You could organize events, stay in touch with friends, share pictures and talk about your life. Back then, it was a very different site than it is now.

If you told me in 2005, when I signed up for Facebook, that someday the election would be swayed by it? I would have laughed at you. The biggest fear back then was a drunken picture costing someone a job someday. Now governments are being crippled.

I'm not venerating anyone. I'm talking about 15 years ago, not now. That was the whole point of the movie. Social media isn't inherently bad; human connection is a good thing. It's bad when it's fueled by a limitless need for growth and profit.


of course everything looks rosy with rose-colored glasses on, but facebook not only encouraged those glasses, but made you believe the glasses didn't even exist. same with google. it only took a little perspective shift and foresight to realize the future dangers under a relentless profit motive. the greatest trick the devil ever pulled was convincing the world he didn't exist.


The first time you loaded up TheFacebook in high school and saw your friend set his status to "Greg is making dinner!", you thought "ah yes, someday this will topple governments due to disinformation campaigns"?

Are you sure you aren't conflating "it's obvious" with hindsight bias?


In 2008, I made an app go viral on Facebook by posting pro-LGBT / anti-Prop-8 ads within a 20 mile radius of SLC. Hundreds of millions of dollars had been invested into the FB platform ecosystem by that point.

Plenty of people understood the implications.


We can call it a failure of imagination, but when you have a tool that knows a lot about your political views and your location, we should have predicted that information would be used to hypertarget political propaganda.

And all advertising has the goal of swaying some election - which car you buy, where you get your groceries, what clothes you choose. You think you made those choices without external influence when you didn't.


That's my point, though – that wasn't true in 2006! Maybe, just maybe, I posted "Gregory is voting for ______" in 2008, but that's not how people used Facebook back then.

There was no realtime location collection (no iPhones), no sharing of articles, no commenting on posts, no groups, no minifeed, etc. There weren't any mechanisms for all the data to be used for targeting. When I started using Facebook, you literally had to go to a person's page to check on them. There was no newsfeed or algorithm or ads. There definitely weren't bots.

It was pretty unimaginable that you could figure out who someone was voting for and influence them based on how FB was in 2006. Now, of course, it's different.


> There wasn't any mechanisms for all the data to be used for targeting.

Facebook's original pitch deck from 2004 disagrees with you.

https://www.businessinsider.com/facebooks-first-ever-ad-sale...

Look at the slide "Our Online Marketing Services" and the box labeled "Targeted Advertising"

> This site allows for targeted advertisement on the basis of any of the parameters listed below.

Then, of course, it lists gender, age, dating interests and... Political Views.

I vividly remember 2002-2005, and I even had a website once about disinformation. Fake news was a thing during the Bush administration.


> "Are you sure you aren't conflating "it's obvious" with hindsight bias?"

no, i avoided facebook at the start because it was obvious what it was going to be. with google, it was 2005 or so (about a year after the gmail launch) that i started weaning myself off of google.

that folks wore blinders willingly doesn't mean it wasn't easy to connect the dots.


> we all knew exactly how bad it could get.

You were very smart then, congratulations.

Back in 2006 there was no mention anywhere of "fake news", "bias bubbles" and such, because there was almost no news sharing on Facebook, and your feed was not curated to only show you what you were most likely to like. There was no political advertising on Facebook for a long time, and no ads at all until 2007 or 2008.

User tracking was almost non existent as well until 2010 or so.

So sure, there was already some concern about privacy back then, but it was mostly about people voluntarily over-sharing things they shouldn't, not about being tracked and manipulated against your wishes.


As I like to say, Facebook should have stayed within the world of universities and schools; a platform born at Harvard, by well-fed white male US Americans, for use and consumption by their peers, was not meant to be used by billions of regular people.

At some point in the documentary, I was expecting someone, faced with the question "what is the real problem here?", to answer: "people are horrible".


Nothing new in it for the HN crowd I suppose.

For me, privacy is the core issue, though "free" is part of the problem. A social network charging a dollar a year wouldn't be great for investors but would essentially pay for itself. (As a Brit, I'd be more inclined to pay that dollar than the BBC license fee, at ~170x the expense, for all the "soft power" it has.)

With the monetary dial removed, filter bubbles are still a problem. I'm thinking it would be good if more algorithms could be used to counterweight a person's predilections. Show me stuff based on my interactions/friends/groups I'm in/people like me/opinions I hold/mood I'm in, but counterweight it with things/people that are potentially opposite or more neutral. Perhaps group ideas together for this purpose, e.g. political belief, people at my place of work, places I visit. It could be totally transparent, viewable, and even possibly resettable.
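
A minimal sketch of what I mean, purely hypothetical (the topic vectors, the affinity/counterweight split and the 0.3 balance factor are all made up for illustration, not anything a real platform does):

```python
# Hypothetical sketch: blend "more of the same" with a counterweight signal.
# All names, topics and weights here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Item:
    id: str
    topics: dict  # e.g. {"climate": 0.9, "cooking": 0.1}

def affinity(profile: dict, item: Item) -> float:
    """How closely the item matches what the user already engages with."""
    return sum(profile.get(t, 0.0) * w for t, w in item.topics.items())

def counterweight(profile: dict, item: Item) -> float:
    """Reward topics the user rarely sees -- a crude 'other direction' signal."""
    return sum((1.0 - profile.get(t, 0.0)) * w for t, w in item.topics.items())

def score(profile: dict, item: Item, balance: float = 0.3) -> float:
    # balance = 0 reproduces a pure "more of the same" ranking;
    # balance = 1 recommends only what the user doesn't normally look at.
    return (1 - balance) * affinity(profile, item) + balance * counterweight(profile, item)

# Usage: a user who mostly engages with climate content.
profile = {"climate": 0.9, "politics": 0.6}
pool = [Item("climate-explainer", {"climate": 1.0}),
        Item("local-history", {"history": 0.8, "cities": 0.4})]
ranked = sorted(pool, key=lambda it: score(profile, it), reverse=True)
```

The point isn't the exact formula; it's that a "balance" knob like this could be exposed to the user, inspected, and reset.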

I think the documentary was good for the general public, to be made aware of how Facebook/Google, and pretty much all "big tech", is aggressively mining data, some more than others.


I'm really not interested in constantly being recommended opposite viewpoints to mine. I'd constantly be getting climate denial content.

Most conspiracy theories I never look into, but the 5G one made me wonder how stupid can you be. So when they thought they had enough evidence to convince a judge and sued the Dutch government, I opened the evidence they supplied and read through it.

I've spent hours and hours trying to figure it out but there just isn't logic to it. It's all misreadings, misinterpretations, misconceptions, etc. Their references were false as well (saying the WHO claimed X in a paper from year Y, complete with a link that leads to a blog post about it but doesn't link the source, and the WHO site has no publications on that topic in the given year; turns out they had a paper about that topic one year later but it doesn't support the claim at all).

Yesterday I was looking something climate-related up and came across a Belgian site, seemingly a newspaper, that denies the whole thing. Confused, I opened it and read a bit to make sure they were really calling it a hoax and I hadn't misunderstood. Yep, they were. They pointed to 500 scientists from around the world who signed something saying we should be less political about it and base it more on science, that the science should include the uncertainties in its reports because all the climate models are super uncertain, and that it turns out the heating is going much less fast than previously claimed. I mean, all proper papers disclose uncertainties. And yes, we should be more science-based and get the fuck on with decarbonizing. As for it not being so bad (not going as fast): it's going super fast; I don't know where they got this.

No, I'm really not interested in this kinda garbage. For me the Youtube recommendations are straightforward: I click a video about maps, next I get more geographic analysis videos recommended to me. I'd rather not have someone feel obliged to make it insert opposite content (what's opposite to maps, flat earther stuff? Or just unscientific blabbering when what I'm mainly watching is objective/sciency stuff?).

Or do we want to only insert opposite viewpoints to those who believe in conspiracy theories? Who decides what's real and what's not? Why have the conspiracy theory content on the site at all if we know it's false? Can we not find the root of the problem: the people who make these videos, where do they get their info and misconceptions from?


I wouldn't take 'opposite' too literally in the black and white sense, I mean providing means of a counterweight or 'something in the other direction(s)' of where the algo is leading you to. Especially so for recommendations. Try private mode on Youtube and see if it's easy to reconstruct your recommendations. I remember the basic 'seeds' of mine, but not how it ended up.

I just think if content is factionalised there should be a means to de-factionalise it.

I'm sure with these engines the scoring is multi-dimensional anyways so the concept of opposite is a bit more fuzzy than "recommend the opposite of stuff I search for and interact with"


Can you give an example? Because if flat earther stuff isn't "'something in the other direction(s)' of where the algo is leading you to" when it's leading me to a video about maps (like Mercator view distortion or so), then I don't know what is.


I'm not that musically orientated, but going into the fifth decade of my life there have been a few genres that were flavour of the moment. I might enjoy a beer one night and look through some old songs. That doesn't mean that on a daily basis going forward I want "more in that direction", i.e. I don't want to close the doors on other music and get recommendations based on what I did one day.

I think in your example that any (say YT) algo would flag flat earth videos as more conspiratorial and less scientific. Even though the content is about the same "thing", there'd be more facets/dimensions to that thing than it simply being the geometry of the Earth, i.e. the profiles of people who watch such things. There's the history of the Earth's geometry, alternative projections, people who have travelled it, etc etc.


I don't remember where I saw this, but I remember reading something about each user costing Facebook about $8 a year due to all the complex DB querying they do.

That'd probably shrink if they didn't need targeted advertising, but a lot of the complex friend graphs might result in some hefty computation at scale.


Interesting. Searched around and found [0] (and others with same info) but I suppose there'd be more information if digging deeper. They mention previous years costing $0.40 and $0.60 per month, not sure if the number includes capital investment rather than YoY costs of hardware/electricity etc. Point taken though.

[0] https://www.technologyreview.com/2012/05/16/186064/the-bigge...


Great movie that I agree with 1000%. I suggest Lost Connections and 10 Arguments for Deleting Your Social Media (mentioned in the movie) as well. I stopped using all social media as of last year and the results have been amazing.

I'm more at peace; I'm no longer getting into stupid Facebook arguments. Online you aren't interacting with whole people, so being absurdly mean is easy.

Most people online are just there for validation (or are bots; see the Ashley Madison hack and the FTC complaint against the Match Group). At this point in my life I'm no longer seeking that. Plus, as the movie points out, constant validation-seeking will wreck your mental health.

I was having an amazing time meeting folks pre Covid and I have a plan to become more active whenever this ends. I'm thinking I'll try improv. Being around people does something for you that internet interactions simply can't.


>10 Arguments for deleting your social media

I thoroughly enjoyed this book. Mostly for this golden quote which I have saved:

>Cats have done the seemingly impossible: They’ve integrated themselves into the modern high-tech world without giving themselves up. They are still in charge. There is no worry that some stealthy meme crafted by algorithms and paid for by a creepy, hidden oligarch has taken over your cat. No one has taken over your cat; not you, not anyone.


No offence, but you are arguing on HN, Reddit, Chan boards, /. now. Where is the difference? I did similarly but keep getting sucked into these communities simply because I'm alone in my head a lot of the time.


As mentioned below, Hacker News is a niche website; the dialogue here is much more intelligent than on Facebook.

For now I don't see talking on hacker news as stressful, but I definitely will stop using this site if it becomes so.


Hacker News is, and has been for years, extremely mainstream. Everyone who ever gets into coding ends up here to voice their biased opinions.


> I'm alone in my head a lot of the time.

Better than not!


Discussion on HN and Reddit tends to be more thoughtful in my experience.

Yes, it's still just wasting time away but it can be somewhat more mentally stimulating than FB drama.


> I'm no longer getting into stupid Facebook arguments

That's the one thing I absolutely loathe about Facebook. There's no constructive dialogue going on. Everyone is so full of themselves that any kind of argument derails pretty fast. And then there is the fact that the platform doesn't provide the structure for long conversations. After a couple of replies all text has the same indentation, so it's hard to see who's responding to which comment. I've been online for twenty years, and I've never seen an environment more hostile to conversation than Facebook.


I have a lot of great conversations on Facebook, typically better than the conversations my blog posts get elsewhere. It's a matter of cultivating a good community.


In my experience, reddit and other online discussion forums have been just as bad as Facebook at times.

Online communities like reddit and, yes, HN, are echo chambers to varying degrees. They can also elicit a sort of inhumane treatment of others by depersonalizing them, sort of like what happens when people get behind the wheel of a vehicle.


Sometimes I have commented on Facebook forgetting it was not HN or another niche community. If you start talking about how your country's weather bureau monitors itself for forecast accuracy, you will almost certainly be met with silence on Facebook.


You may need smarter friends, just saying. Also, facebook does have niche groups for such topics.


> I stopped using all social media as of last year

You're posting this comment on social media.

https://en.wikipedia.org/wiki/Social_media

> I'm more at peace , I no longer getting into stupid Facebook arguments. Online you aren't interacting with whole people, so being absurdly mean is easy.

What makes HN different?


I don't think people here appreciate you pointing out their hypocrisy. HN is different in many ways (not tailoring content based on profiles, not trying to maximize engagement, etc.), but it still has many problems (groupthink, suppression of contrarian ideas, karma whoring, inconsistent fact checking, etc.).


I see what you're getting at, but eh, I'm not entirely convinced. What is a social network? I think there's some gray area there. The Wikipedia article you shared calls out Social Media as being "Web 2.0" (which itself is a nebulous term). One could argue that HN is just a forum, not unlike news groups that predated Web 2.0. Like I said: I see what you're getting at, and maybe HN is social media, but I would still assert that when considered as social media, HN is a far cry from a network like Facebook in terms of platform-functionality/feature-set/behavior/agenda.


It's ridiculous to compare hacker news to Facebook or any other social network mentioned in the documentary.


Used to waste so much time arguing on FB comments -- and now that I'm off of it all of that energy has been reclaimed. It's been a remarkably positive change in every way.


I got off Facebook a couple of years ago. A couple of months later, I got rid of my smart phone and went back to a Nokia 3310 for a full year. Actually was a huge relief and felt really good. I'm like many people here on HN, I suspect: I work a desk/computer job. My attitude: If I want something online, I'm in front of my computer 12-14 hours a day. If I'm not in front of my computer, then I WANT to be offline. If I'm offline, but you need to reach me for something important, you can text or call me.

One of the most interesting observations about not having a smart phone was not how my behavior changed, but how others' behavior changed. As my friends became painfully aware that I don't have a smart phone, I'm not checking my email 24/7, their texts don't get read-receipts, and they don't see the ellipses as I type text replies... they became less demanding/insistent of my time (except for the urgent things, of course). It was really freeing.

I only got a smartphone again when I lost my Nokia 6 hours before I had to catch a plane for a week long trip -- I pulled an old smartphone out of my junk drawer, threw a SIM card in it real fast, and headed to the airport. I have hardly any apps on it now, and it's great.


Is it supposed to be ironic that this show is, in effect, an advertisement for Netflix?

By not mentioning that Netflix’s goal is to get consumers to watch nonsense all day, it appears so...

Netflix would love people to log off of Facebook and log on Netflix :)


Does Netflix make more money when I spend more time on their platform?

I'm sure there will be a day when they do, but right now I think they make a flat subscription fee from me. Maybe they can sell more of my viewing habits the longer I watch? Maybe they have content which they're paid to serve? I don't know. I think that, right now, Netflix really isn't in the same category as the big social media companies.


I'm positive there's a strong correlation between hours spent on the app and likelihood of renewing next month.


Is that a linear relationship though? The revenue of other social media sites scales with my usage time. Netflix just has to provide me with an amount of value exceeding the subscription price and I will continue to be a customer. It actually costs them more if I overindulge on their service. They should be aiming for a certain level of viewing and no more.


It was a persuasive documentary but I felt it conflates two issues which happened together - the rise of mobile phones and the rise of social media. The rise of smartphones has an important role in making social media more addictive. Some people are addicted to watching videos from Netflix or browsing websites like hacker news etc on smartphones too, which isn't necessarily a "social" activity.

All the engagement tricks played by social media would have only gotten so far without smartphones. On the contrary, people might have been addicted to smartphones even without social media apps.


I'm launching an open source initiative supporting several social media platforms, starting with Matrix/Element, a messaging app similar to WhatsApp; and then Mastodon, a microblogging platform similar to Twitter. The overall effort is being called the "Resiliency Network" in reference to creating a social media platform that will foster resilience in relation to the significant social, environmental, and political challenges we're facing today. If you're interested to get involved, please respond to this comment. Thank you!


I’m interested. But you’re not the first person to think of this. What do you see as the reasons why other similar efforts have failed? Why will you succeed?

I’m not trying to be negative. I’m just strongly of the opinion that making this happen will take much more than saying “we should make a better social media”, and I’m wondering if you’ve got that far.


If the distributed social networks ever get enough users to attract spammers, they'll have an even worse spam problem than the centralized ones. They don't collect enough info network-wide to detect spammers.


I feel like that particular issue could be resolved with even the most rudimentary reputation system. It’s how humans have solved the problem amongst ourselves. But I agree it will be a problem.
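
Just to illustrate what I mean by "rudimentary" -- a toy sketch, with all names, weights and thresholds invented, and with no claim that it would survive determined abuse (which, as the reply below notes, is the hard part):

```python
# Hypothetical per-instance reputation check for a federated network.
# All field names, weights and thresholds are invented for illustration.

from collections import defaultdict

class Reputation:
    def __init__(self):
        self.scores = defaultdict(float)  # sender id -> running score

    def record_vouch(self, sender: str, weight: float = 1.0):
        """A local user follows/boosts the sender: raise its reputation."""
        self.scores[sender] += weight

    def record_report(self, sender: str, weight: float = 5.0):
        """A local user reports the sender as spam: lower it more aggressively."""
        self.scores[sender] -= weight

    def accept(self, sender: str, threshold: float = -3.0) -> bool:
        """Deliver messages unless the sender has fallen below the threshold."""
        return self.scores[sender] > threshold

rep = Reputation()
rep.record_report("spammer@example.social")
rep.record_report("spammer@example.social")
print(rep.accept("spammer@example.social"))  # False: blocked after repeated reports
```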


Email has struggled with that problem for decades. Also, reputation systems are easily scammed. See eBay, Yelp, Amazon, etc.


I'd recommend supporting Signal as a WhatsApp alternative. It's open source and has the best crypto.


An interesting thread that points out a couple of issues in the movie: https://twitter.com/vaishnavi/status/1307455391249301504

Summarizing: the people interviewed were not the ones who should have been interviewed. People from in-house policy teams and others are much more knowledgeable about this and have been raising these types of issues all along. It would also have helped to show a much more diverse picture than a bunch of repentant tech bros.

At the end of the thread, two additional posts reinforce and expand the message:

https://slate.com/technology/2020/09/social-dilemma-netflix-...

https://librarianshipwreck.wordpress.com/2020/09/17/flamethr...


This Twitter user seems to be currently working at Instagram, and much of the thread seems to be an awkward search for a way to justify her industry's existence while blaming former colleagues for their ignorance. Some of the points made are interesting, but at the same time quite biased.


I'm a techie who indirectly makes money off the explosion in AI so I can't say how to fix this problem. It's good people are talking about it. There are many problems with social media and tech addiction, some of which were partially or totally covered by this movie. It was a nice production and should get people talking.

I hope we all, as HN commenters, take a moment to consider our participation in social media, including this site. Please be aware that your online communities are not the real world, that people are human, and those upvotes do not validate your opinions. I apologize for all the pointless, angry arguments I've participated in, and hope you do as well. Let your humanity shine through everything you do.


Does anyone see a tipping point or an event that could, at scale, drive people away from social media like Facebook and Twitter? By that I don't mean to a competitor. (Perhaps a naive question.)

Also, is there a common denominator among people here who have stopped using social media? I mean, other than self-awareness, retrospection, etc., which people here generally have more of than the average population.

What was the "I need to reduce or stop looking at Facebook so often" moment? Mine was the fact that I became aware that I'm essentially a dog from Pavlov's experiment, where instead of a bell I have a notification sound/buzz and instead of a biscuit I get a little dopamine shot. Second was the trackers, especially as they got better and faster. I started feeling 'cheap', as I feel in reality when a merchant uses obviously exaggerated claims while acting as a friend to try to convince me to buy something. (Like last time I went to a car dealership, yuck.) Third is that the connection to people I care about, close family and friends, is much better served via private group messaging apps, and news via HN and others.


I could possibly see the upcoming US Election causing a bit of an exodus from Social Media platforms as more people become drained by the fatiguing constant flow of extreme views from both sides (mind, I thought that last time).

I will say the big thing for me that made me put my mobile device down (which I now barely use) was the sheer quantity of hours a day I was wasting away looking at it - this was being amplified by Covid. My thing was getting worked up by stupidity or hateful opinions on Twitter or Reddit (obviously the platforms helped out). The more I looked the more I got annoyed, seeking out arguments if you will. It wasn't until I stepped back and saw that I was seeking out the opinions of a tiny, tiny fraction of the global population that I realised how stupid it was. There are 7 billion people in the world; you can find any opinion you want - it's foolish to spend your day hunting these down, and what's worse is that it starts to warp your own personal view away from who you really are. I think the catalyst was Covid for making me realise this, and I think more people will start waking up to the same reality.


Wow, thank you! You nail down the issue with a very personal and relatable story. I think the only difference between you and me is that I never had a need to go into arguments over posts/tweets I considered wrong (I was a masochist who just read them). What 'got to me' was not the sheer untruthfulness of things shared on Twitter nor the insults that flow back and forth, but the behavior of people that I think could and should know better. Another lightbulb moment was the realization that any nuanced conversation, after 5 to 10 comments, goes into the extremes (usually two) that it can be boiled down to. There are no different shades of gray, just black or white. In the case of covid19, this changed to 3 comments. Another lightbulb moment was the so-called "twitter/facebook isn't real life", where I realized that in my surroundings I have a hard time finding a single person that has ever participated in Twitter storms or Facebook comment fights.


I think there's a common missing perspective in this whole debate about social media. Sure, social media is addictive and it is not easy to live in the modern world without it, but what if social media is not the only culprit today, and the decay of real-life shared spaces is also part of the problem? Everything is a product nowadays, and public spaces are treated like private spaces by local governments: "we don't want problems on our land" is the default attitude, even if unconsciously. They keep spending money on absurd events, but they never realize the social potential they have around them. If streets are not actively promoted as the places where society should live, and we have social media pulling from the other side, what do we expect to happen?

As we have been living with it for a while, the poisoning effects of social media have become more obvious to more people, but with so many people already on social media and public spaces half dead, it's not easy to escape. I actually believe we can use technology to revert this, though. We can use technology to get people who are geographically close to interact with other people, in more local networks. It's not about driving people out of X, it's about giving them better spaces. We are not completely unaware of the poisoning, so we just need to offer people better alternatives. And even if local governments have no vision, we can educate them and build spaces on our own, even if the first ones need to be half-digital for the bootstrapping.

I was working on something like this, and while I've had to stop for a while, both for personal and global-pandemic reasons (and the best ideas are still not clear to me, and I'm still improving my skills in web development and other areas), I'll keep working on this kind of project. People like to be with people if they have a decent place to stay and something to interact around. Everyone has very different preferences, but it's just a matter of working on it.


Can't help but see the anti-social-media backlash as another reincarnation of Luddism. The tech is what it is. Hating the biggest companies behind it won't change it. You not using it won't change the world either, just like refusing to board a train wouldn't have stopped the changes its invention brought about. The world is forever changed, and looking at GDPs, it wasn't for the worse.

Nothing short of a regressive global regulatory crackdown can change the new reality.

I think it's worthwhile for people to examine what it is and how it's different. But an alarmist and hostile tone isn't too conducive to that.


You should watch the movie. You'll find that it's closer to your opinion than you're assuming here.


I don't think you understand the Luddites, to be frank. Theirs was a reasonable reaction to the evaporation of control, individual agency (as skilled workers) and political sovereignty of the laborer. They saw their livelihoods evaporating, their financial security being siphoned away. They worked their asses off for bosses who then built their automated replacements. It was a callous betrayal, and their response was to destroy the technology which enabled that betrayal.

"Hating the biggest companies behind it" will change things. Hatred, skepticism, cynicism, and distrust will change things. In a world where "regulatory capture" is almost complete, only neo-Luddism (properly understood) can hope to change things.

We should be raising alarms, and we should be hostile to those institutions which siphon away our agency.


>They saw their livelihoods evaporating, their financial security being siphoned away. They worked their asses off for bosses who then built their automated replacements. It was a callous betrayal, and their response was to destroy the technology which enabled that betrayal.

Just like old media.


> Just like old media.

The jury is still out on that one, I believe.


i tried to watch the movie and i honestly don’t get the hype. It really seems to be confirmation bias where those who already hate social media just rave about the movie.

and i actually think it did a very poor job at explaining things to people completely unfamiliar with the issues. i wonder if it will change a single mind or just preach to the choir


Speaking from personal experience - I watched the film with my girlfriend and a lot of it was news to her. It's changed her relationship with social media and technology.

I also suggested it to a friend who was pretty addicted to Facebook and believed a lot of conspiracy theories. It's made him realise how FB has given him (and others) a distorted picture of reality. He's now disconnected from Facebook and only keeping in contact via messages.


This movie hasn't been out long, you sure your friends won't be up to their old ways in a month or two?


I don't know, but they've got a hell of a lot more of a chance now that they're actually aware of the issues.


Mindchanging market's tough. Now the choir, the choir dollar's a good dollar.


This is one of the few Netflix documentaries that I regret wasting my time on. Not only is there no new information in it, but it seems to be deliberately done in a way to promote outrage and get attention from the political apparatus to regulate social media to death.


This film was not made for you. Techies like us have been up to speed on these topics for years, but most people have not.

Many people might have a creeping feeling that something is wrong in the world of social media, but almost no one understands the actual mechanics of an algorithm and what it can do when applied on a global scale. This film is made for them, and judging by the reactions to the movie from non-tech people around me, it has had a profound effect and made them reflect deeply on their use of social media.


Maybe not. I am 40+ years old and I clearly remember how the 80s and the 90s were full of stories about satanism. How listening to heavy metal would turn kids into satanists. How there were satanist societies with child sacrifices. That's exactly the feeling I got after watching this documentary. Oh, social media is the next satanism and Zuck is the new Anton LaVey. Got it.


Yeah, I'm not that far from 40 myself, and having grown up in a charismatic church in the '90s, I sure remember the scare around heavy metal (or any rock for that matter) and the idea that most things were in fact satanic. Fun times.

This docudrama explains hard-to-understand concepts by telling emotional stories that most people can relate to, which frankly should be done more often for technical stuff affecting society in not so obvious ways. Again, this film is not for developers and engineers who mostly prefer dry facts. It's for everyone else.


Ha, the hypocrites. Below are the DNS requests uBlock Origin shows on this site:

  thesocialdilemma.com
  www.thesocialdilemma.com
  doubleclick.net
  googleads.g.doubleclick.net
  static.doubleclick.net
  fonts.googleapis.com
  youtube.com
  www.youtube.com

If they truly practiced what they preach, they could have used open alternatives to build the site.



:-), haha not my intention, but i watched the documentary and it seemed pretty intense, so just got curious why they used google things


I thought the movie was well made and did a good job explaining things, especially to people who don't follow the business.

There is one consideration no one seriously thinks about when it comes to social media companies, in particular Facebook and Twitter: they cannot be fixed. They are failed experiments and need to end. The whole premise of the companies is to guide people's emotions for profit, whether where they guide them is good for society or not.

There is no way they will be ended due to the money they make and the fact they do some good, but there is also no way they can be fixed. Regulations, corporate leadership or a set of ethics won't do the trick. The framework is diseased and is a net negative on society.


> There is one consideration no one seriously thinks about when it comes to social media companies, in particular Facebook and Twitter. They cannot be fixed. They are failed experiments and need to end.

If I'm understanding this correctly you're basically calling for them to be shut down. Wouldn't everyone just migrate to alternatives based in other countries? (Perhaps those less friendly to western values)

Would you prefer to have TikTok to Facebook? If so, why?


I'm not sure how you go about ending Facebook and Twitter. Facebook's market capitalization is $720B. Twitter is a relative minnow at $32B. Neither will go down without a pretty enormous fight; the resources at their disposal outflank the most well funded political campaigns in history.

It's not realistic to think that these large social networks can be put back into the box. Ultimately, consumers like them, even if they aren't good for us. In a democracy, it's very tough to implement measures that destroy something that people actually like.

I think that social networks should be regulated in the same way as other harmful, yet enjoyable vices such as cigarettes and alcohol. A first pass would be to create a tax on social network data harvesting that starts to recognize the value they extract from us. The revenue from this tax could be invested into research that helps to understand the impact the social networks have on our lives and on democracy itself.

As with cigarettes, while we did not at first realize they were harmful, once it became clear through research that they caused harm, additional funding into high quality research allowed governments to eventually create science-guided policies that have largely put a lid on the problem. We are in the 1950s of social networking: i.e. doctors still recommend it. That needs to change.


It has nothing to do with consumers. Congress created Google and Facebook with a stroke of the pen in 1996. They can delete them with another stroke of the pen anytime.


If they become liable for user content then yes but you’ll have the entire tech industry lobbying against you.


That's why I'm always talking about this on HN. Trying to spread awareness on the inside :)


It's a bigger problem than Facebook or Twitter. It's the business model. The advertising industry is 90% poison. Regulate it into the ground and protect user data.


You could say the same thing about Fox News..


thoughts on nationalisation or decentralization as ways to solve this?


Or forbid targeting of ads on certain dimensions. Say, limit legal targeting to age and location.


I laughed at how the industry folks were all product and exec. Like zero engineers. Talk about a zero-social-status career choice, and people wonder why people want to be managers.


Hmm time to move to product manager?


I think it's good that documentaries like this are made, so a greater audience gets attuned to these issues, but I for one could not get further than about 30 minutes... I was honestly disappointed that such an important topic was deemed so "complex" and "nerdy" that the creators decided to resort to actors to show us the effects of these technologies... which I guess is in line with the short attention spans caused by social media :/


I was really affected by the first three-quarters of the movie. But then, they started expressing their (personal?) beliefs, for example: "What happens if people begin to believe that government lies to them?" Oh horror! ...Well that happens sometimes, and sometimes with good reason.

They kind of started losing credibility with me after that. I wish the movie had stayed on a single message/issue. It would have been more powerful, at least for me.


Seems like they want to control the facts, and they packaged it as an educational, helpful piece. IMHO that's quite disgusting manipulation.


If an ex heroin dealer (who made lots of money selling the drug) came forward and talked about the problem and how to stop it, it’s more powerful given his previous association.

We shouldn’t discount the people speaking out because they created it.

We should criticize them if all they do this time is continue to make money on the problem, rather than resolving it with the same greed and passion they put into creating it.


There is a major point of difference between online media monopolies and the railroad, infrastructure and commodity monopolies of old: "What happens if they dissolve?"

If Toyota fell off the edge of the disc, next year the world has less capacity to produce cars. A lot of capital will need to be obtained, and that will need to be turned into factories over several years before the world's productive capacity is restored. Meanwhile, people will have fewer new cars.

If Facebook, Twitter, or even (many parts of) Google disappear... Alternatives will emerge almost immediately. Social networking is not expensive to provide, and certainly not capital intensive. It'll be via a different UI, but none of us will lack for social media, messaging and such.

This is a crucial difference, economically. Social media is worth literal trillions, yet the industry could disappear without any consumer shortages.

Something to consider, as the power of these giants starts to resemble the old colonial trading companies'. This may be the least necessary industry ever.


Are you sure social networking is not expensive to provide? What makes you think so?


I think this is one of those "difficult to fully work through in a comment thread" topics. So, I don't expect to convince anyone that isn't already sympathetic to this point. That said...

For a bottom-up argument, consider mostly text based media like twitter or whatsapp. If you're involved in similar tech, I think it's quite easy to roughly estimate both the development and infrastructure requirements for something like this. It isn't too different from an email service or another text-centric media. Also (like email) it lends well to a distributed architecture, so may not even need a centralized infrastructure.

Hosting and serving video is costlier. Again, distributed architecture can make this irrelevant, or a much smaller problem. Even assuming a Facebook clone, though, hard costs like video streaming are a small fraction of Facebook's revenue.

Re: distributed architecture. Remember that this is what the internet/web is fundamentally designed to do. It's not radical, and there are working examples out there that will work at massive scale. The www itself is such an example, but there are also more explicit examples that work more like Facebook does.

For a comparative argument, consider FB or Twitter when they were much (revenue-wise) smaller. Consider minor social networks (like HN). These might have 1/100th of FB or Twitter's usage, but generally operate on orders of magnitude less revenue user-for-user.

Basically, there are enough demonstrative examples to convince me that it is possible, even trivial.


Diaspora thought along the same lines and went the whole way. Not sure how it played out, but IIRC in the beginning they found it more work than they had expected. https://diasporafoundation.org/

I think there may be a lot more aspects to it that need attention than one anticipates at first.


Looks like Nir Eyal (Author of Hooked) was interviewed for 3 hours of this documentary. But, none of his clips made it to the final version.

https://twitter.com/nireyal/status/1307577732448964608


I kind of agree with his point

>The filmmakers attempt to manipulate people into fearing technology that the filmmakers say is manipulating them.

I mean there are real problems with fake news stories getting promoted for example. On the other hand I'm quite happy to have a facebook account which I glance at occasionally to see what friends are up to. Or an HN account which is fun if a bit of a time sink.


I watched it last night. It's pretty good for the non-HN crowd that might not know this already. But I personally felt that they forgot the most important question: what pushed the algorithm to recommend what it's recommending? Why are YouTube's rabbit holes more likely to be conspiracy-theory related and/or inflammatory content? In the beginning, the algorithm was probably pushing equally for this type of content and more educational content. Why can't we make science-related/educational content interesting enough for it to become a rabbit hole recommended by algorithms?


A large amount of my recommended YouTube videos are educational, but I subscribe to a lot of nature docs, history etc, I don't think the algorithm dislikes this kind of content for me. So I guess the answer to your question is, we can?


Money.


i think the more accurate answer is Engagement. we made it maximize paperclips, it maximized paperclips. sadly the most engaging is the most extreme.


I agree with that. I think the question that's left unanswered is "why can't we make educational/scientific/historic content that drives up engagement?"


Probably because reality is complicated, confusing and boring. What drives up engagement are clear, coherent stories and narratives with twists and turns that leave us shocked or awed. If you want educational content to be engaging, you have to find a way to make a good narrative out of truth. That's what this documentary and all other successful documentaries attempt to do.

But sometimes (most of the time, really) truth just doesn't fit this structure, so to make it engaging, you have to deform and twist it a bit. And if reality is really, actually, mundane, there's not much you can do to make it engaging, whereas it's extremely easy to make engaging, convincing falsehoods about nearly any subject.


i mean, we kind of can. 3blue1brown is amazing and popular. Dan Carlin and Hardcore History.

its just that the 7 sins always win hands down. even for someone who's aware of their corrupting influence.


I think money is the more accurate answer really, since no one cares about engagement for the sake of engagement, but rather for the money engagement generates.


Money is the root cause. Engagement is the direct cause. Both are important.


I would like to see the ad driven model removed as an option. I think it would help. And I think there is a separate case to be made to end ad targeting for other (privacy) reasons.

But while watching this I realized that wouldn’t be the entire solution. Social platforms would seemingly still be incentivized to drive up engagement and to keep users addicted to their platforms vs other platforms.

So what are some other solutions? Do we need to turn them into publishers to make them responsible for the content they spread? Or regulate algorithms and like buttons?


Another solution is to break the network effect. Allow people to move to different (less intrusive) platforms without losing their contacts.

For example, I would quit Facebook if "events" were public, and I could easily see which events my friends go to without logging in to Facebook, perhaps through some open protocol.

I think universities should be more active in inventing secure, open protocols.


Would your friends be happy having the events they're attending made public? If not, how would Facebook know it was you requesting the information unless you log in?


I think the broader issue is just engagement metrics, financial and non-financial, that reward wasting time. I think there's a very achievable alternate equilibrium, where social media companies see themselves as stores to browse for great content rather than TV-style engagement factories, although I don't have a great story for how to get from here to there. If some people spent hours every day aimlessly wandering around Walmart, everyone including Walmart would see the problem - they can't find what they're looking for, the store must be laid out poorly!


We can't get rid of the ad-supported model because AML/KYC laws have crippled all other forms of micropayment. But paying with your attention is exempt from AML/KYC, so it dominates due to lower regulatory load.

I wrote a detailed comment about this here: https://news.ycombinator.com/item?id=24006703


The problem is that if you have two competing products and one has no funding and one has billions in revenue, the one with the most money will tend to dominate. So removing the ad revenue from your own product while not being able to remove it from your competitor really just hamstrings yourself.


I find TikTok is the most insidious of all the social media apps out there. If Instagram is like candy, TikTok is like crack.

Worst part is the primary demographic is younger kids/teens.


I just remember when it was called Musical.ly and advertised with videos of very young teens dancing.


While I enjoyed Jaron Lanier's perspectives during this documentary, I'm somewhat disappointed that the solution he proposed in his 3-part video series for the New York Times was omitted: https://www.nytimes.com/interactive/2019/09/23/opinion/data-...


Which is what? From the titles and what I remember from the social dilemma, it's putting some value on personal data and either paying people for it or taxing it?


One thing suggested in the movie is to consume a wide variety of media. Coincidentally, I've been doing exactly this for a few months. I started reading both Vice and Breitbart and their comment sections. And it gave me a way more nuanced perspective than staying inside my filter bubble / echo chamber.


I understand the experiment, but reading the comments on Vice and Breitbart is really not how I would pick to spend my time. They both have quite toxic commenter communities, regardless of their political orientation.


On one hand, I think there is certainly a perspective in this doc that shows where critical views are needed around the consequences of certain business models coming from big tech. On the other hand, it seems to attribute a lot of the recent political divide to the rise of social networks without considering that a lot of these changes happened in the wake of the 2008 economic collapse, which almost certainly has contributed a lot to this political division. The suggestion that things were good before these social networks permeated people's lives overlooks a much longer narrative of wealth inequality in the West, and particularly, in the US, issues with housing, jobs, and the education system. In some ways, the social dilemma is the ultimate #firstworldproblem of our time.


The movie consulted (and interviewed) the guys from this podcast Your Undivided Attention https://www.humanetech.com/podcast

It has nice episodes with deeper discussion of the social problems around social networks.


I wrote a critique of a Verge editor's critique--on top of their critique--of The Social Dilemma.

Check it out here: https://www.kontxt.io/proxy/https://www.theverge.com/interfa...

Or here's a streamlined summary: https://www.kontxt.io/document/d/JdE8ujfZmPy3bIAiv-tZQXVWfVf...


I really liked the information in this movie. I didn't hear anything that I didn't already know, but I thought it was well done. I really hope people turn away from these technologies. I feel like the only way we can heal our information ecosystem is by educating people about the dangers of a social media heavy information diet. I think we also need to be mindful of the identities within us that these algorithms are exploiting. Ask yourself how you feel when you get off these sites. My answer was always "mad", "angry" or "sad". Once you realize the effect these sites have on your mood, you're a lot more motivated to change your behavior.


The moment that got me in this film was when they described how social network monetization algorithms gently move us toward a more radical position. We are better targets for advertisers when we spend more time on the platform. The platform doesn't care _why_ we're on the platform; it just wants us to be there as much as possible.

If the platform nudges us toward radical right-wing conspiracies, so be it. Similarly, if it helps us become a fervent Democrat, so long as we're on the platform, that's all that matters.

I deleted Facebook and Instagram Friday night after watching, and I suspect many of you will as well. I've had a great weekend with so much free time...


Why does it move us to a more radical position? Is it because the more radical you are the more time you spend on the platform?

I haven't seen it, but I'm somewhat confused. FB and Instagram just show me posts from friends and family? Mostly youth sports, social events (weddings, dinners), babies, etc. FB does have some political posting, but IG very little.

And YouTube has virtually nothing political. I never go to the News channel. Its just all sports highlights recommended to me.


Your experience exemplifies one of the problems with social media: the radically different information diet served to different users.

All of the platforms you mentioned can be profoundly political and filled with extreme content, depending on which rabbit hole you have been exposed to and captivated by.

EDIT: I forgot to attempt to answer your actual question, sorry. It's not that the platforms intentionally try to radicalize you. The algorithms are programmed to keep your attention for as long as possible, to keep you engaged and scrolling through your feed. The more time you spend on their platform, the more ads they can show you and the more money they make.

The radicalization is largely (I believe) an unintended consequence caused by the fact that recommending and showing you content that engages you emotionally makes you spend more time with them. This can also be exploited by people, groups or even nations who want to propagate a certain message or worldview.
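
To make the incentive concrete, here's a toy sketch (everything in it is invented; a real ranking system is vastly more complex, but the objective has the same flavor: rank by predicted engagement and nothing else):

```python
# Toy illustration of ranking purely by predicted engagement.
# "predicted_watch_seconds" is a stand-in for whatever engagement model a
# platform might train; nothing here reflects any real system.

videos = [
    {"title": "Calm explainer",       "predicted_watch_seconds": 40},
    {"title": "Outrage-bait take",    "predicted_watch_seconds": 310},
    {"title": "Conspiracy deep-dive", "predicted_watch_seconds": 280},
]

def rank_by_engagement(candidates):
    # The objective never asks "is this good for the viewer?" --
    # only "how long will this keep them here?"
    return sorted(candidates, key=lambda v: v["predicted_watch_seconds"], reverse=True)

for v in rank_by_engagement(videos):
    print(v["title"], v["predicted_watch_seconds"])
```

If emotionally charged content reliably scores higher on that one metric, it floats to the top, whether or not anyone intended that outcome.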


I find it interesting when the individual(s) who ruthlessly engineer the problem(s) take on the role of some sort of savior only after successfully extracting life changing amounts of wealth.


Yesterday’s robber baron is today’s philanthropist.


I'd passed this over on Netflix, assuming it would be too introductory. Is it interesting enough for someone who's already familiar with the premise from having been around on HN?


Definitely. There wasn't a single idea in it that was new to me, and you'll easily pick up on where they're going with certain plot points long before they get there. (And yes, there's a "plot"!)

However, there were so many amazing ways they articulated things for non-technical audiences, in a cohesive way, that you'll get a ton out of it.

I especially recommend watching it with non-tech friends or family! Led to a ton of great discussions for me.


Last month I read a bunch of academic papers on modern surveillance theory (Zuboff, who appears in the film briefly, represents only one strand of contemporary thinking about surveillance; the field is much richer), and, like others, I still found this movie interesting even though it is fairly introductory. The presentation is both entertaining and effective, which I think is a major reason this movie shows early signs of having been far more successful than previous attempts to highlight the same problems.

The fact that many of the voices in the film had a hand in developing the technologies they raise the alarm about also adds an intriguing edge--especially since it's easy to interpret their participation in either a positive or negative light.


I reacted the same initially, yet watched it on a whim.

The answer is sort of yes, in the sense that they provide you with more succinct explanations and analogies. This is useful if you want to explain this stuff to other people (who are not on HN).


Roughly no, if you’ve followed this topic, it will be the same drumbeat. It still might be worth watching since it’s an important issue and this is the most high profile exposé.


I saw it a couple of days ago. I'm not the audience for this doc but I liked the interviews. The dramatizations were boring but I guess they make the point for younger audiences.


I got halfway through this... possibly less. It's really shocking to see that people don't know someone in tech they can trust and chat with about the actual problem--actually, a group of people. I consider it more of a social experiment. The show is great for getting people to consider, but also reconsider. Time to get back into it, I suppose!


This film could have been much shorter without the cheesy subplot which felt like a bad episode of Black Mirror.


The "Nosedive" episode of Black Mirror [0] is a nice thought experiment, where a person's whole life revolves around a social media score.

[0] https://www.imdb.com/title/tt5497778/


> "Changes to your behavior and actions are the product"

This and how they are eroding the attention spans of the consumers of their product.

Now more than ever it is important that thought workers develop their attentional skills; they're in short supply, and attention spans are getting shorter and shorter!


I have an (unverified) hypothesis that the root cause is government spending: government spending -> inflation -> need for high returns on investments -> investor pressure -> growth -> advertising -> need for attention -> sensationalization.


Screened Out is also very good and touches on the influence social media has on child development https://www.netflix.com/title/81306217


I just watched it this weekend. The idea that most resonated with me was the comparison of the FB feed with Wikipedia.

What if Wikipedia showed everyone a different page rather than the same one? That's manipulation. And it's not good in that context.


Previous discussion of the film: https://news.ycombinator.com/item?id=24468533


I wish they had plugged adblockers at all in the movie.


Hmm, I don't think that was the point of the movie. It was more about social media, and no adblocker in the world will save you from Facebook or Google or YouTube tracking your usage and recommending ex boyfriends or "rabbit hole" videos to you.

In fact, with the exception of one scene, I don't think a computer was even shown – all the interactions were apps on phones.

(They did mention, at the end, extensions for blocking recommendations.)


I think your parent's point is that ultimately these platforms run us down the algorithmic rabbit holes that keep us on the platform in order to show us more ads. So, ad blockers might undermine that incentive.


Facebook and many others don't use third-party ad networks anymore due to the rise of ad blockers, so most ad blockers don't even work on them, since they work by blocking DNS lookups for known advertising domains.
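
For anyone curious what "blocking DNS lookups for advertising domains" amounts to, here's a stripped-down sketch (the blocklist entries are just examples; real blockers like uBlock Origin do far more, e.g. cosmetic and URL-pattern filtering):

```python
# Minimal sketch of domain-list blocking, the mechanism described above.
# The blocklist entries are examples only; real lists are far larger.

BLOCKLIST = {"doubleclick.net", "googlesyndication.com"}

def is_blocked(hostname: str) -> bool:
    """Block the hostname if it or any parent domain is on the list."""
    parts = hostname.lower().split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("static.doubleclick.net"))  # True  -> third-party ad request dropped
print(is_blocked("www.facebook.com"))        # False -> first-party ads sail right through
```

Which is exactly why ads served from the platform's own domain slip past this kind of blocking.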


Kind of ironic that a web site that says tech controls and monetizes us has a "please accept cookies" dialog in the same breath.


It's not the technology. It's the people and what they are doing with it. Technology is just an amplifier.


There are only two industries that refer to their customers as users: illegal drugs and software (social media).


What really resonated with me is one of the speakers talking about taxing companies on collecting users' data. They first establish that the sole driving factor for these companies is making more money than the previous quarter. Now, if companies had to pay financially for all the data they store, they wouldn't have as strong an incentive to collect every possible data point that they can.


That would destroy net neutrality. No thanks.


How is this different from the manipulation it pictures?


I came out of it with a fresh hatred of Adtech.


BF Skinner. Variable reward loops. Your programmed brain. It's how Vegas works, it's how adverts work, it's how we study for tests in school.



The only thing worse than a free product that manipulates you, is a paid product that still manipulates you. The height of irony in The Social Dilemma was the shot where LinkedIn and Google (separately from YouTube, which is fair game IMO) were listed among these evil addictive technologies and somehow Netflix, a pioneer of autoplay and binge watching, didn't make the cut.

That aside, the insidious thing about this movie is that many of the issues it brings up are real. There are real problems with social media addiction, ad tech, and filter bubbles. Changes do need to be made. Some might even need to be forced onto the big tech companies. However, these problems are not the root of all evil, as The Social Dilemma would have us believe.

They are certainly not the primary cause of societal upheaval we've been seeing. Political polarization in the US has been rising since the 90s, well before social media became dominant. Racism and police violence have never really gone away. Blaming tech for everything both makes it harder to fix the real problems with tech and easier to ignore the problems that exist elsewhere in society.

Off the top of my head, some things that deserve just as much blame for the problems this movie pins on technology:

1. The death of the Fairness Doctrine in 1987 (https://en.wikipedia.org/wiki/FCC_fairness_doctrine)

People can see distorted views of the world without the help of social media. One can argue that even now, Fox News and other mainstream media have a greater role in shaping public opinion than social media does. Not to mention the conglomerization of local news (driven by capital rather than tech) and the nationalization of media (enabled by tech, but driven by the MSM).

2. The slow death of the middle class, in the US and elsewhere.

An underlying assumption of The Social Dilemma and its line of argument is that people are dumb and easily manipulated. I suspect the educated elite, well-represented in the movie and over-represented here on HN, are particularly susceptible to this kind of thinking, myself included. But I think the reality is that most people care about their livelihoods more than anything else, and the polarization and upheaval we've been seeing (from Brexit to Trump) can largely be attributed to different reactions to this erosion of the middle class. The left might attribute it to unregulated capitalism and a failing social safety net. The right might attribute it to globalization, immigration, etc. The underlying phenomenon is the same, and has very little to do with tech, unless you're talking about factories and shipping costs.

3. Some problems have always been around - social media just makes everything more visible.

The last century saw two world wars, plenty of revolutions and armed conflict outside of that (which seems like a gross understatement, but there are too many to list), the death of the British Empire and the rise of many new nations, lots of social upheaval in the US... the list goes on. There's plenty to say about the 21st century as well, but looking back on history, I don't know if things have really gotten that much worse.

To put it differently, many on HN (myself included) love to tinker with tooling - editors, note-taking apps, what have you. But at the end of the day, we have to get back to the real work, and use those tools to accomplish meaningful things. If you look at social media and Internet technologies as tools for society, then yes, it's valuable to scrutinize them and see how they can be improved. But at the end of the day, we also need to get out into society and get to work, with whatever tools we have at hand.


If you're looking for your next step after this documentary, it's:

https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capita...


This should be required viewing for all high school students.


I wonder who is behind the "social dilemma" propaganda/advertising campaign.

"Ask HN: What do you think about “The Social Dilemma”?"

https://news.ycombinator.com/item?id=24468533


I have the top comment on this thread, and nobody's paying me :)

I think people are talking about it here because it's been a popular topic on HN forever, and this movie brought it mainstream. It's "controversial" to tech people for a lot of reasons (diversity, "apology tour", etc). And it's also the first time a lot of our friends and family are talking about these issues.

[EDIT] I had the top comment... I've changed my mind, definitely a conspiracy!


I started that thread because I was genuinely interested in people's opinions on the film, especially those connected to that industry. Looking back at the answers I got, there were remarkably few comments from the latter.


Documentaries are the worst way to put forth a legitimate argument (in this case, the argument is that social media is bad). The Social Dilemma, like other docs meant to persuade, is shallow and one sided. Entertainment in the guise of information.


This film is full of hypocrites who got rich off social media and then found religion.


They were in their early 20s, trying to build a company without knowing whether it would succeed, and the downsides of social media are still not well understood even now.

People who realize their past actions caused harm should be praised not denigrated.


Please. The free software movement has been highlighting the danger of software that works for its authors rather than its users for 35 years.

If these very intelligent people are going to claim they didn't understand what they were doing, my only response is: "It is difficult to get a man to understand something, when his salary depends on his not understanding it."


These companies are not software. They are services. Services people engage with voluntarily.

When you engage with a company, any company, you can be sure your incentives are not completely aligned. In the case of social networks, the misalignments are particularly troubling.


> These companies are not software.

Oh whoops, you better tell Big Goog so they can stop paying all those expensive software engineers.

I'm sorry but the apologist arguments on this thread are really laughable: "Nobody could have known!", "It's not software!", "Young teens can just voluntarily not use the software (sorry--"service") that all their friends are on!"


Are they selling you any software? I don't think so. They build and use software for their own goals. Your argument on open source doesn't really apply here.


I'm not talking about "open source" (a corporate red herring) -- I'm talking about free software. Free software empowers users to "run, copy, distribute, study, change and improve the software."

Are any of those true of Facebook's software? Do you think these cloud companies could get away with the abuses that they do if users did have these key freedoms?

The problem of software that works for its owners, rather than its users, is not a new one. It's disingenuous to pretend you only built an exploitative panopticon by accident.


> Do you think these cloud companies could get away with the abuses that they do if users did have these key freedoms?

The software doesn't run on the user's computer. You don't run Facebook - Facebook does, on their computers.

The option rests with the users not to engage with their software. Being exploited by a corporation is not a universal right.


You're refusing to engage with the point I'm making (as is your right).

But just to summarize:

- "Nobody could have foreseen people getting sick when we fed them these poisoned cakes! Oh if only I hadn't made these millions of dollars making people sick!"

- "Actually, many people over decades have argued that people should be able to see and modify the recipes that go into their food."

- "No, that's completely different because, you see, these days Facebook is baking the poison cake in their own oven. It's not even a cake. It's a service that offers cake slices, and you're free to not eat them. (ps we also put little bits of our cake in 90% of the other food in the grocery store)"


You fail to see the simple point: Facebook's software runs on Facebook's computers. The four freedoms are not touched by that. People voluntarily use the services they provide.

Besides that, the four freedoms include the freedom to use the software (Facebook's proprietary software, after all, runs on top of free software) for anything the user sees fit, including serving poisoned cakes and making billions of dollars from it.

I am not engaging with your argument because it doesn't make sense. Facebook (and Google, and Twitter, and Hacker News) are services provided to us. We don't have a natural right to inspect the source of their software because the license it's under doesn't give us that right. Would it be better if we could? That's unclear, since a lot of the behavior is tied to the data that is collected and we can't inspect more than the data we have the right to see (our own).


> You fail to see the simple point Facebook's software runs on Facebook's computers.

a) This isn't strictly true. The Facebook app runs on users' phones.

b) I don't know why you think this is a compelling argument. Why shouldn't we demand ethical behaviour from software accessed over a network? I can't tell if you're just a fatalist who can't even imagine expecting better, or if you really believe doing shitty things to your users doesn't count if you do it over TCP.


Yes, and? It's impossible to escape ideological indoctrination: https://www.metamute.org/editorial/articles/californian-ideo...



