Lawmakers Ask Zuck to Drop 'Instagram for Kids' Since App Made Kids Suicidal (gizmodo.com)
257 points by bryan0 on Sept 16, 2021 | 154 comments



Here in the US we think that freedom from government oversight is the only kind of freedom. We make the mistake of thinking this means the free market is the antidote to government, and that as long as we have the free market we will be free.

In reality, companies operating within a market are perfectly capable of imposing intrusive and arbitrary power on citizens, and collective action / government intervention is needed in order to protect our freedoms.

As others have pointed out, we don't need to 'ask', we need to legislate.

Congress will have a hard time passing legislation against this not only because of the filibuster (which encourages minority rule), but because of this fundamental confusion. Why would a Senator or Congressperson who believes only in the first type of free market freedom and is blind to the second type of freedom make moves against a corporation?

Anyway, I just started reading this great book called Freedom From the Market; these points are 100% lifted from there: https://thenewpress.com/books/freedom-from-market


> As others have pointed out, we don't need to 'ask', we need to legislate.

Legislate what, specifically? I always see calls for more legislation around social media, but I rarely see any actionable suggestions.

Frankly, I’m kind of shocked at the degree that tech communities are rushing for legislation on this topic. When I was growing up I remember the big issue being violence in video games. Politicians went on crusades to demonize video games and demand that we pass legislation to protect the kids from any exposure to violent video games, lest the video games turn them into school shooters.

With video games, the online tech communities rallied against any legislation and wrote volumes about how kids were smart enough to manage themselves and not let violent video games influence their thinking. With social media, tech communities are doing the opposite: Writing volumes about how kids are incapable of managing themselves and will succumb to social media destroying their mental health.

The major difference between the two is that tech people (on average) like video games but they don’t like social media. So video games get a pass, but social media must be attacked with everything we’ve got. The secondary subtext buried in most of these articles is that young girls, specifically, are most vulnerable to social media issues, whereas violent video games were largely an issue for young boys. Apparently we’re okay letting young boys handle difficult content but we don’t give the same credit to young girls. The more I watch all of this play out, the more hypocritical it all feels.

I also suspect that if anti-social-media politicians see success that we’ll see a revival of the anti-violent-game politicians. After all, we need to regulate what the kids consume, right?


> With video games, the online tech communities rallied against any legislation and wrote volumes about how kids were smart enough to manage themselves and not let violent video games influence their thinking.

Because there is no evidence for this. In cases where there is, like in lootboxes, we see a push for legislation.

We have enough evidence to at least establish a correlation between social media use and declines in mental health. This doesn't call for banning or legislating the business into oblivion, but for a conversation in society. We need to talk about social media use; our representatives have to discuss measures with the scientific community and promote research so that they can legislate based on data.

One thing is for certain, what we have now is unacceptable. What we will build in the future is up for debate and should be debated.


I agree. Advertising + algorithms without regulation makes the internet a mathematical optimization for lootboxes. Advertising on the internet should be strongly regulated just like it was on television at one point. I remember when subliminal advertising was a huge scandal / scare. It was regulated on TV. There is absolutely no regulation of it on the internet, and the technology and understanding of reinforcement learning in advertising has grown tremendously since then.


What do you think of the connection between video games and unemployment? https://www.economist.com/the-economist-explains/2017/03/30/...


Sounds like a causation / correlation mistake. Unemployed people find a way to spend their free time. Sounds to me like games are one of the most engaging escapes from the misery of unemployment.


I tend to agree with this, but I've started to wonder if that's really true. Is there any way to actually test whether correlation != causation for the link between video games and unemployment? Like, an interventional study of some kind for example.


This is even more interesting in societies where there's little pressure to work, such as Nordic welfare states. A lot of men I went to school with in Scandinavia decided they'd rather stay at home and play videogames than find a job.


I think this fits Marcuse's One-Dimensional Man, not a problem with gaming itself.


There is some evidence, however, that violent media tends to cause people to perceive their surroundings as more dangerous. Certainly not as bad as making people psychos, but worth a second of contemplation.


> We have enough evidence to at least build correlation between social media use and decline of mental health.

Again, eerily similar to the violent video game panic in the 90s.

The original congressional hearings on violence in video games followed record-high gun violence in 1993 ( https://en.wikipedia.org/wiki/1993_congressional_hearings_on... ). Politicians pushed a correlation between increasing popularity of violent video games and increasing gun homicides. It sounds dumb now, but video games were relatively new at the time.

The correlation felt right to many, especially those who hadn't grown up with video games in their own lives. I see this social media debate following the same pattern where adults feel that social media is evil and assume evidence will eventually support their feelings.

> Because there is no evidence for this. In cases where there is, like in lootboxes, we see a push for legislation.

That's rewriting history. Decades ago, there was a huge push for legislation against violent video games long before lootboxes were a thing ( https://en.wikipedia.org/wiki/1993_congressional_hearings_on... )

Violent video games have been called out as recently as 2019 by president Trump ( https://www.hollywoodreporter.com/news/politics-news/trumps-... )

Violent video games have been a political scapegoat for decades. The debate about social media is following in the same footsteps.


Just because violent video games aren't problematic doesn't mean social media and/or lootboxes aren't either. The relative innocuousness of marijuana doesn't dismiss the dangers of tobacco or alcohol.


It's often not mentioned but marijuana use can result in psychosis and early-onset schizophrenia in a substantial subset of the population. It's still relatively innocuous IMO.


Yeah, I just mentioned it because it was an example of a drug that experienced a moral panic (“reefer madness”). But one drug being less bad doesn’t mean other drugs aren’t worse and more immediately harmful.


But doesn't it seem similar to past "moral panics"? I have a 12-year-old and I'm super concerned about social media and kids' wellbeing — there seems to be an obvious relationship. But there also seemed to be an obvious relationship between violent video games and violence — or with comic books and reading.

This is the best study I know of about digital media and adolescent wellbeing, and they find an extremely small effect size (accounting for less than 0.4% of the variation in adolescent wellbeing).

https://www.nature.com/articles/s41562-018-0506-1?mod=...
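
To put that effect size in perspective, a quick back-of-the-envelope calculation (mine, not from the paper): explaining less than 0.4% of the variance corresponds to a correlation of roughly 0.06.

    import math

    # Back-of-the-envelope: if digital technology use explains 0.4% of the
    # variance in wellbeing (R^2 = 0.004), the implied correlation is tiny.
    r_squared = 0.004
    r = math.sqrt(r_squared)
    print(f"implied correlation r = {r:.3f}")  # roughly 0.063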

This doesn't mean we shouldn't be concerned, especially if we are motivated to design better media culture. I can't stand Facebook.


> doesn't it seem similar to past "moral panics"?

No I don't think so.

I've played lots of first-person shooters and they're harmless.

But social media causing damaging self-worth worries is something I slightly experience myself, sometimes. It's a real problem, in my eyes.

(Loot boxes, and computer gaming addiction, are real problems with the games though.)


I don’t even think that violent video games are harmless. I just think the desensitization is more insidious and longer-term, maybe something with cultural and spiritual ramifications, but not something that’s as immediately harmful to public health as depression caused by the social stresses of social media.


Hmm. I'm thinking that if there were a graphic video game based on the book "American Psycho" where one played that character, maybe it could be damaging to some.

I never saw such a game. -- The FPS games I've seen, I'd compare more with watching a WW2 movie.

"The last of us" -- that game maybe that one can cause, like you wrote, a bit desensitization. It's more realistic brutal violence in that game than what I saw in the games i played as a kid.

Any games come to your mind?


Manhunt jumps to mind. There are a few games, some of them from Rockstar, that aim to be edgy and gratuitous for the sake of it. Maybe the Hitman games, since they are literally murder simulators?


I had a look at Manhunt; it's a more brutal game than I would have thought.

For sure I wouldn't want military or policemen to play such games, and not kids either.


A good chunk of evidence has been summarized here: https://ledger.humanetech.com/


That's a cool site. I wish there was something similar for video games. I've always been on the side of "entertainment doesn't cause real violence", but it would be nice if I could easily point to real research to actually back it up. Of course, I'd also be happy to be proven wrong. It's the fact that all I can do is state an opinion that bothers me.



In 1993 it seems like pushing legislation for reduced violence on TV shows (not video games, as the wiki page you linked says) was a good call. And it's a good call to push legislation for kids' social media now as well.


You make a good point, how would we tell the difference though?

If social media as it stands causes unreasonable harm to society, how would we know for sure? And how would we know we aren’t making the same mistakes made with demonizing video games?


Studies have shown that there is a correlation between social media use and decline of mental health.

(https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4183915/)

Not only that, but anyone who has used social media can tell you that, especially younger people.

Meanwhile studies show there was no correlation between violence and videogames. And except for lag you can't really find a lot of people that will tell you videogames made them violent. (https://www.psychologicalscience.org/publications/observer/o...)


There weren't studies back then, just correlation. We have quite a few studies that show the damage done by specific kinds of social media and specific practices by social media companies. Hell, a genocide happened because Facebook fell asleep at the wheel.

There are many ways to regulate social media:

- Determine constraints and objective statistical goals the ranking algorithm must meet

- Block and mark fake content

etc.

Facebook could choose to rank positivity higher but it does the opposite since it increases engagement.
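
To make that first bullet a bit more concrete, here is a minimal sketch of what a constrained ranker could look like. It is purely illustrative: the field names (engagement_score, positivity_score) and the mandated floor are made up, not anything a real feed uses.

    # Hypothetical: a regulator-mandated floor on how much weight
    # the ranking function must give to a positivity signal.
    MIN_POSITIVITY_WEIGHT = 0.3

    def rank_feed(posts, positivity_weight=0.5):
        if positivity_weight < MIN_POSITIVITY_WEIGHT:
            raise ValueError("positivity weight below the mandated minimum")
        def score(post):
            return ((1 - positivity_weight) * post["engagement_score"]
                    + positivity_weight * post["positivity_score"])
        return sorted(posts, key=score, reverse=True)

    feed = rank_feed([
        {"id": 1, "engagement_score": 0.9, "positivity_score": 0.1},
        {"id": 2, "engagement_score": 0.5, "positivity_score": 0.8},
    ])
    print([p["id"] for p in feed])  # [2, 1] -- the more positive post wins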


> We have quite a few studies that show the damages of specific kinds of social media and specific practices by social media. Hell a genocide happened because Facebook fell asleep at the wheel.

Do we actually have these, or are you inferring we do from the general tone of the media conversation around this topic? I'm aware of some correlation studies, and think the topic bears keeping an eye on.

Would you mind sharing one of the "quite a few" causal studies you're referring to?

> Hell a genocide happened because Facebook fell asleep at the wheel.

This is a non sequitur. Facebook didn't cause the genocide any more than Marconi caused the Rwandan genocide or AT&T caused Watergate. Widespread communications technologies and platforms are deeply fundamental to human interaction in the modern age; the only model that marks Facebook as at all causal of Rohingyan persecution requires giving them a nonsensical amount of undue credit for the human interaction that occurs across their platform. Eg, you'd have to make obviously absurd claims like that Facebook is responsible for legalizing gay marriage in the US (activists heavily use FB and other social media platforms to organize).

To be clear, I think criticism of FB from the UN et al on the topic of Myanmar is warranted. The nature of the platform is such that the capability can be built out to direct and constrain the conversations that are had, and it's fair to say that Facebook needs to expand the manner in which it does so.

But this contradicts your point: the salient difference is the ability to control communication so that it stops violence, which in your framing is a _positive_ of social media.

(Note that I'm personally close to an unrestricted speech maximalist, but I'm taking your framing for granted in the above paragraphs)


Um, there actually is a fair amount of evidence that violent video games influence people's thinking.

Just as a quick gut-check, ask yourself this: if they didn't change your thinking, why would you play them? Assuming they don't offer any level of excitement or entertainment, what is the point?

Evidence: the US Army uses FPS games to train people for combat, and there is a longer history of what it takes to train people to shoot people; switching from round targets to human-shaped targets made it a lot easier for soldiers to aim the gun at a person before pulling the trigger.

Now clearly most people who play violent video games don't turn into mass killers, but to say that the deep immersion of a FPS doesn't change your thinking is farcical on the face of it.


> Um, there actually is a fair amount of evidence that violent video games influence people's thinking.

Then feel free to post the research. This was the core topic of my studies, and to my knowledge there simply is zero evidence of a causal relationship.

The US indeed has a big violence problem, but there are actual, established causes, e.g. the high wealth inequality, which is a big predictor for violent crimes in most countries.


Why would you assume video games don't offer entertainment?


> Legislate what, specifically?

Treat it like digital tobacco or similar - a product with known mental health side effects, engineered specifically to modify your behavior in the name of "engagement". Limit children's access, require warnings with every email and notification, etc.

It's not hard, as long as you keep focus on the problem. This isn't about guns.


End the business model entirely! Make all microtargeted advertising illegal!

(I can dream, can’t I?)


I think it's a worthy dream and I'd be interested in hearing from anyone that says otherwise.

Why should we allow microtargeted advertising?


If I'm going to have to view ads, I'd prefer to see relevant ads instead of irrelevant. The issue is with the data collection and privacy issues around it. Not with making ads more relevant by appropriately targeting.


I reject the notion entirely that we "have to" view ads. Advertisements are nothing less than an unwelcome intrusion into my life. Further, I'd rather know that any ads I see are the same exact ads as seen by anyone else in the same space.

Having said that, I agree that _another_ core issue is the data collection and privacy issues. Certainly, viewing advertisers as the value-subtracting rent-seekers as I do, I wish for them to know as little as possible about the world in general and about me specifically.


In a perverse way, I'd rather see irrelevant ads. That way I am not inclined to purchase.

I prefer not to have salespeople come to my door, I prefer to seek them out when I am inclined to.


This is a very apt way to put it! I'm likely to adopt it into my own descriptions, thank you!

Advertisers should be relegated to their own greasy pit, to speak only at my pleasure and where I know to find them. Any other attempt to engage with me, anytime and anyplace else except where I explicitly permit them, is nothing less than an unwelcome intrusion. Begone, rent-seekers! Trouble us with your A/B tests no more.


The problem is that microtargeted ads aren't necessarily more relevant to your interests — they're more tailored to you.

For example, an advertiser might be able to microtarget people with certain characteristics that correlate with voting for a particular political party, and bombard them with ads that discourage them from voting.

That's targeted, but not "relevant".


Agreed. And perhaps if we call them "optimized" instead of "targeted", this becomes more obvious.

The goal of the advertiser isn't to create a win-win situation, where you get just the right ad at the right time, for a good deal on what you actually need. That would be nice, but it's leaving money on the table - and the goal of the advertiser is to make money. So the ads get optimized further: a better ad is one that gets you to buy something regardless of whether you need it, on a deal that's as bad for you as possible without burning you as a future customer[0].

--

[0] - Which does not imply the deal isn't absolutely shitty. There are ways to guarantee you'll buy again, no matter how much you hate it. See e.g. the telecom industry - there are only a few players on the market, they're all equally shitty, and it doesn't matter, because people need phone service, and those abandoning telco A in anger are offset by those flocking to A from the other telcos.


Although I do think you have a point, I find this framing very convenient.

Remembering how incredibly shitty people could be in middle and high school, it's not hard at all for me to see how constantly comparing oneself to one's peers could be detrimental to one's health. It was bad enough having to go to school every day, but (back in the day) at least I didn't have to have that shit follow me around everywhere else too. In short, yeah, I'm having a hard time buying any comparison between Doom and Facebook.

All that said, I also have a hard time seeing a legislative solution to this problem, especially given what the typical legislature looks like these days.


> In short, yeah, I'm having a hard time buying any comparison between Doom and Facebook.

That's kind of my point: The tech community can relate to Doom in the context of their own childhoods because we grew up with it. Adults can't relate to Instagram in the context of childhoods because that came after our time. So without first-hand experience, they substitute whatever feels correct, as prompted by hyperbolic articles like this one.

Every generation thinks the way they grew up was correct, while subsequent social changes are bad. There has always been moral panic about what "kids these days" are doing, from social media to video games to portable music players to watching TV and so on.

It was the exact same story with video game violence: Adults at the time didn't grow up with video games, so they assumed the worst. Politicians and news media stepped in to seed the most dramatic possible interpretations, pointing to correlations with increasing gun violence (at the time). To adults who didn't grow up with the context and saw no personal upside to the video games, heavy-handed legislation felt right.


Absolutely not the same. Social media is an addictive substance in a very real way.

Back in the day we joked about "Evercrack" too, but addiction to it actually ruined lots of people's lives in a way Doom never did.


Video games can be totally addictive; are you saying no one was harmed by Fortnite or World of Warcraft? China is banning video games for kids because it became a national problem. What studies they did to reach that conclusion I don't know, but I don't think it was baseless.


Yes, but that's an entirely distinct argument and a different conversation from "violent games are bad because they're violent".

Also note that both Fortnite and World of Warcraft are entirely multiplayer games. The way I see it, their addictive nature comes from the same set of factors that make social media addictive. In terms of addiction, Fortnite and WoW are like Instagram, not like Doom.


After numerous missteps, of which probably the most famous is either the four pests campaign or the birth control ("one child") campaign, I would be very hesitant to assume that any punitive or corrective policy China enacts has any basis in fact or will even have the desired effect.


"According to one study from the American Journal of Psychiatry, between 0.3% and 1.0% of Americans might have an internet gaming disorder."

"Gaming has also been associated with sleep deprivation, insomnia and circadian rhythm disorders, depression, aggression, and anxiety, though more studies are needed to establish the validity and the strength of these connections"

https://www.health.harvard.edu/blog/the-health-effects-of-to...


If we're gonna dig up every faulty decision ever made, it's not like the U.S. would appear that great.


Teen suicides are up by a good bit. FB/Instagram popped up right around the time the trend started going up, perhaps a coincidence, but perhaps not.

https://www.pbs.org/newshour/nation/suicide-among-teens-and-...


Tons of things have happened lately as well, like traditional families collapsing and a huge spike in single parenting. Also the smartphone. I am not saying this has nothing to do with social media; I am saying this is probably really complicated.


US spending on science, space, and technology correlates at r=0.99789126 with suicides by hanging, strangulation, and suffocation. Is that just a coincidence? Maybe, maybe not. The issue is, you hate FB, so you are happy to jump to conclusions. Now, we can all agree the internet does cause harm to individuals, and that companies in general have a history of skirting responsibility.
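
For anyone who hasn't seen how easily this kind of spurious correlation arises: two series that merely trend in the same direction will produce a near-perfect Pearson r, no causal link required. A quick sketch with invented numbers:

    # Two made-up, unrelated series that both happen to trend upward.
    xs = [10, 12, 13, 15, 18, 20, 23, 25, 28, 30]  # e.g. spending per year
    ys = [5, 6, 6, 7, 9, 10, 11, 13, 14, 15]       # e.g. some unrelated count

    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb)

    print(pearson(xs, ys))  # close to 1.0 despite no connection at all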

And let's get real, this is not just a FB problem. If you really care about this, you have to burn a whole lot more than just FB. And it's probably going to include most internet companies. Cyberbullying is going to happen wherever teens spend most of their time on the internet.

We have important societal issues to solve. c54's comment touched on that in an insightful way. There are other things to balance here. Marginalized groups have benefited a lot from social media much like geeks benefited a lot from video games, D&D, and board games. But we don't like to talk about that because in many minds FB == the devil, and mentioning any benefit means propping up the devil. Soon we'll be talking about TikTok in the same manner.

This should be investigated but it needs to be done with an objective point-of-view. You do not have an objective point-of-view. Facebook does not have an objective point-of-view. The media definitely does not have an objective point-of-view. Let's at least try to be scientific.


An unacceptable number of teens have been bullied to suicide. Suicides cascade. Doom was never that. No way this is a coincidence.


I think the flippant rejection of criticism of (violent) video games was and is very mistaken.

I'm unsure about the direct causal relationships between games and violence, but at the very least they're an expression of a violent culture. The glorification of shooters as 'lone wolves', the fact that a picture of a breastfeeding mother gets banned but someone's head being blown off in a game is no big deal: it makes you ponder what priorities that creates in a culture.

And the criticism of the addictive nature of games, how gaming can isolate people, the general lack of sexual relationships among young men in particular but also young people in general: these concerns, written off as 'boomer mentality' or whatever, are completely valid.

>After all, we need to regulate what the kids consume, right?

I think it's completely obvious that you need to regulate what children consume, they are after all... children. Regulating what they do is kind of the point of raising them. This consumer culture that leaves everything to 'market choice' and where every intervention is seen as overreach is terrible. This is exactly the sort of pseudo-freedom that OP was talking about.


It's rejected because the criticism is unsound.

>the general lack of sexual relationships of young men in particular

That's because everything sexual is declared immoral and banned, including a breastfeeding mother. Men simply behave in accordance with what they are taught.


> The major difference between the two is that tech people (on average) like video games but they don’t like social media

Not something like the fact that "social media" is a (very strange) form of communication, not a type of literal media that is produced, marketed, and consumed either as popular commercial productions or niche fan-productions?

I'm sure you could point out more meaningful differences than "tech bros stopped liking one of these" between:

- a form of fantasy entertainment, largely analogous to the violent novels, comic books, and movies/TV before them (which were complained about in their day, too). Its only novel feature is interactivity - but no more so than even-younger children playing cops and robbers in a playground.

- a ubiquitous new form of peer communication that's full of novel features, from massive network effects and propagation of speech across and in and out of peer groups, gamification of peer approval, infinite-scrolling feeds and other dark patterns, widespread coopting by marketing and degenerate forms of "news" media...


As a first step, how about transparency?

Currently companies can do what they want in secret.

Perhaps they should be able to do what they want, but in the open so it can be examined and criticized.

That means their blacklists, algorithms, word lists, preferred user lists, and reasons for banning users become openly accessible.

Even without compelling them directly to change anything, compelling them to be transparent would help a lot.


> Legislate what, specifically?

I got some suggestions:

- No company with more than 1 million MAU and deriving any income from online advertising may combine any online information about a user with any physical real-world data; they may not sell data to another 3rd party that will combine online and physical;

- Ban the algo: Any user deriving more than $250 in value from an online service may not be banned by an algorithm. They must be given 60 days notice and an opportunity to remove their data from the service

- No company may own more than 1 domain in the top 5,000 (as measured by monthly traffic). All domains must be consolidated to 1

- No more hiding behind scale: expand the carve-outs from Section 230 beyond child porn to definitely include any threats of violence - but especially death threats

That's just a start...


I don't agree with the specifics here, but the first point is key. Scale matters. Emergent effects are real. Large companies need different rules than small companies. It's ok if the line is drawn imperfectly, so long as the bulk of labels are reasonable.


>Legislate what, specifically? I always see calls for more legislation around social media, but I rarely see any actionable suggestions.

I have a radical proposal: reduce social media's value proposition by breaking them up. Then users' interest will naturally decrease.

>So video games get a pass, but social media must be attacked with everything we’ve got.

It applies not to all social media, but only to centralized, giant, monopolistic services. Federated social media, like email, are fine.


I think it's worth pointing out that those threats of legislation are what led to the ESRB, which was created to show the industry could self-regulate without government intervention. While that system is outdated for modern game design, it served its purpose. These calls for regulation are leading down the same road. They tell companies like Facebook, "This is a problem, solve it before we're forced to."


> Legislate what, specifically?

Advertisement targeted at children and using children.

If your advert features children, or offers a product a child might be interested in, might purchase themselves, or might use to influence the purchasing decision of a parent, or is associated with a product children use or a service children frequent, then it's illegal. As simple as that.

You wouldn't believe how much such a ban would clear up the media sphere and improve children's health in all aspects.


I agree it is hypocritical, but I do think there is a difference between girls and boys and what they can handle. Girls are more sensitive and impressionable; that was common sense until not too long ago, but now we all have to pretend that everything is the same. So without getting into the question of whether we need to legislate social media or not, the fact is that gaming is a good social activity for boys which didn't cause any harm, whilst social media is causing a lot of harm to girls. I can see it with my kids and their friends. We need to first go back to the assumption that girls and boys are not the same and move on from there.


One piece of legislation we could adopt in the US is something like Australia's requirement that Facebook pay media providers for their content. It has the effect of at least compensating media providers for their journalism.

https://www.ft.com/content/ad706bd3-2aed-49da-b4f4-862f15a2e...

I think the points about violent video games are good, but the OP here is about people self-reporting that Instagram is negatively correlated with their mental health. That's distinct from the more nebulous and bygone arguments about violent video games.

Funnily, this is also top page HN right now: "Is America Inc getting less dynamic, less global and more monopolistic?" https://news.ycombinator.com/item?id=28547224


I haven't read this book, but this concept sounds very much like the notions of positive and negative liberty/freedom as described by Isaiah Berlin [1].

In short, negative liberty is freedom from external interference, while positive liberty is having the resources and power to actually accomplish things.

[1] http://cactus.dixie.edu/green/B_Readings/I_Berlin%20Two%20Co...


Yeah! Berlin and his ideas are explicitly referenced in the opening portions


You are misinterpreting Isaiah Berlin.

"Such theoretical shifts set the stage, for Berlin, for the ideologies of the totalitarian movements of the twentieth century, both Communist and Fascist–Nazi, which claimed to liberate people by subjecting – and often sacrificing – them to larger groups or principles. To do this was the greatest of political evils; and to do it in the name of freedom, a political principle that Berlin, as a genuine liberal, especially cherished, struck him as a ‘strange […] reversal’ or ‘monstrous impersonation’ (2002b, 198, 180). Against this, Berlin championed, as ‘truer and more humane’, negative liberty and an empirical view of the self." https://plato.stanford.edu/entries/berlin/

The idea of positive liberty is that you are free to become the best man you can be, but there is someone or a group who defines what 'best' is. And that inevitably ends in dictatorships or similar types of abuse.


Thank you for saying this, and saying it well. It’s really scary that all these “freedom” drum beaters and free market worshippers don’t consider this at all. What’s worse is that governments at least have a tacit reason to keep the people’s best interests in mind, although that does seem to be eroding, but corporations have zero incentive to care about anything other than the gain of capital and more power.

Also, thanks for the book reference.


It is scary, especially since there are very invested powers interested in keeping it that way. "Corporations good, government bad" they'll say to their people, all the way to the bank


The “free market” in this context is just a consumer’s ability to make choices for themselves. The “freedom from” argument is just a contrivance that suggests you’ll be more free the fewer choices you’re allowed to make for yourself. It’s just rebranding a clearly anti-freedom idea as a pro-freedom one as a form of rationalization, and hoping nobody thinks about it long enough to notice.

You’re already free from Facebook if you want to be. Nobody will make you use it. You’re also free to parent your children in any way you want to. But freedom isn’t forcing everybody else to make the same decisions you do.


I mean the main problem is there is _NO_ free market (in the sense of a proper self-regulating market consumers can reasonably influence with their buying decisions).

A proper free market requires that:

- People have a choice (no monopolies or oligopolies on any level),

- People can see problems and can therefore decide not to buy from a given company, i.e. transparency. We are very far away from this; with how the world and technology have moved on, the only way to achieve it is to require easily accessible transparency, especially of ownership of companies and land, and especially in the financial/stock market. Even if this means that people privately investing more than small sums there lose privacy (because if you move larger amounts of money you ARE basically a company!)

- If a company messes up, the competition should be able to pick up where the original company left off. I.e. patents and locked-down electronic devices are fundamentally non-free-market.

- probably more stuff I missed.

Many of the points above are often not associated with a free market (at least in the US), potentially the opposite, but they are IMHO essential for a free market to work as originally intended.

Or in other words, the "free market" the US has IMHO cannot work, in the sense that it can't have the positive effects a free market is supposed to have, because the dynamics in their market often outright conflict with the dynamics a free market is supposed to have, leading to a continuously degrading market (from a human/citizen/society POV).


You're making such a pure argument that I can't resist making the contrarian argument.

I believe that legislation got us into the Facebook problem in the first place. If you disagree, please take the time to correct me instead of downvoting my opinion out of existence. I'm open to a change of mind. Here's how I see it:

All of these social media companies have one thing in common: they exist to sell ads. We don't really know what Facebook would have looked like as a subscription service, because it wouldn't have ever become dominant.

For ads to work best, they have to be targeted at users. This incentivizes data collection that helps with ad targeting, which is pretty much everything.

But for ads to work at all, the user has to actually see the ad. This incentivizes engagement, because you need attention on your ad for the ad to have value. If you need engagement, you have to find a way to reach users. This is quite good if your business is selling ads, as you already have all of the data. And so the cycle of collecting and using data for acquisition goes on forever.

Since the very beginning of the ad business, companies like DoubleClick (later acquired by Google) have stretched the limits of web technology to make all of this work. Pixel bugs, cookies, app SDKs, all kinds of tricks, anything they can do to get more data about the user, because that's the gas that makes the whole engine run.
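
For readers who haven't looked under the hood of a "pixel bug", the mechanism is almost trivially simple. Here is a toy sketch, using only the Python standard library, of a tracker that serves a 1x1 GIF, logs whatever the query string carries, and sets an identifying cookie. It is an illustration of the general technique, not any real ad network's code.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs
    import uuid

    # A 1x1 transparent GIF: the classic "pixel bug" embedded in pages and emails.
    PIXEL = bytes.fromhex(
        "47494638396101000100800000000000ffffff21f9040100"
        "0000002c00000000010001000002024401003b"
    )

    class TrackerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            params = parse_qs(urlparse(self.path).query)
            cookie = self.headers.get("Cookie")
            # A real system would write this to a profile database keyed on the cookie.
            print("hit:", params, "cookie:", cookie)
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            if not cookie:
                # First visit: hand out an identifier that follows the browser around.
                self.send_header("Set-Cookie", f"uid={uuid.uuid4()}")
            self.end_headers()
            self.wfile.write(PIXEL)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), TrackerHandler).serve_forever()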

Users, however, are prevented by government regulation from stretching the limits of technology to fight back against these companies.

Have you ever wondered why there isn't a large market for paid alternative Twitter clients that cut out ads and data collection? There used to be, back when Twitter had a very permissive view of their data. Then Twitter changed the rules and now it's a grey area if using their private API with an app-extracted key is a crime or not. It's certainly not something you can build a company on.

So the situation here is that social media companies rely on you not being able to "hack" because there are legal protections on how you can interface with their public-facing internet servers and what you can do with the data that those servers send in reply. Because of legislation like the DMCA, paid for by media companies, users never even got the chance to fight on the same technical battleground where they were already being exploited by ad companies.

As I see it, these companies are using public infrastructure when it suits them. If a computer client is allowing third-party cookies, well, that's just downright reasonable to use for ad tracking. But if you alter a query string, you might be a criminal. That's where regulation has got us.

If Facebook is to be regulated, you can be sure that they'll write the regulation.

My litmus test for if there's a free market situation here: Are you free to make an alternative ad-free Facebook client that uses private APIs extracted from the real Facebook app? Because if the answer to that is no, as it has been since these companies have existed, then no amount of regulation is going to improve the situation.

User rights and privacy is a fight we could win technically without some very specific computer and copyright laws. We need to get back to the idea of the internet being public infrastructure.

If there was some change to the law that suddenly let people build better Facebook clients without fear of being sued into oblivion or even arrested, then I think we could finally get these companies under control.


I fully agree. Adding to this: I think there should be an interface/data divide. Facebook has created an entirely new market by aggregating specific data into a public space. Their only innovation is their interface; every single byte of data they collect from there is not theirs. They did not produce it, they do not own it.

The only new legislation I'd like to see enacted is for these social media and data aggregators to be split into two companies: the interface company, which would exist to serve their interface, and a data utilities company, which would exist to serve the data they don't own. This would open their data market for hacking upon, in the spirit of what you suggest and enabled by the repeal of the legislation you've outlined.

The only way towards greater freedom is to make the world freer.


I think this is roughly the approach that Liz Warren et al. have been suggesting: some sort of platform/participant divide. Applied to Amazon it means they can't sell 'Amazon Basics' products, applied to Apple it'd mean something like they can't offer their own first-party apps with higher priority, and applied to Facebook, it'd be exactly along the lines you've pointed out.

Harks back to the antitrust action splitting train lines from steel producers in the 1920s. You need steel to make rail, and you need rail to transport steel, and the steel barons got rich by integrating across these two domains and giving themselves massive discounts compared to participants who were only in one industry and not both.

I'm not sure exactly what the details or enforcement would look like here, but I do think it's an interesting line of thought. One thing Matt Stoller says with respect to antitrust legislation is that every company is a unique situation and should be handled as such.


I too would like to see more of the internet as public infrastructure, but your second paragraph implies you'd be against regulation?

How would any third-party client win the inevitable arms race this would initiate and what would stop your system from re-centralizing? Also how would any infrastructure become 'public' when it can't marshall the public interest or allocate resources via legislation?


> I too would like to see more of the internet as public infrastructure, but your second paragraph implies you'd be against regulation?

Yes, I believe we lost the public nature of the internet by adopting regulation like the DMCA.

> How would any third-party client win the inevitable arms race this would initiate and what would stop your system from re-centralizing?

Attrition. Facebook would have to realize at some point that they can't stop their client from being reversed and can't prevent clones from popping up. Right now, they can stop that, but they stop it with laws not with technology. Without that protection, Facebook could never keep up with the whole internet collectively reverse engineering their apps out in the open. Also FB engineers would know that being on the DRM team is a waste of time and nobody would want to do the work.

> Also how would any infrastructure become 'public' when it can't marshall the public interest or allocate resources via legislation?

As another commenter pointed out, I shouldn't have used the term "public infrastructure" when referring to the internet, because it's largely privately owned and not regulated as a public utility, which is true.

What I mean is that if you're going to open up a public web server and publish a free app in an app store, I should be able to interact with your servers however that app does it without committing a crime. And I should be able to publish my own app that interacts with your servers too, without your claim to intellectual property standing in the way.

I think this would cause companies like Twitter and Facebook to lose their entire market, but the core of the service would be maintained by the community of people interested in the service.

So for example, if there were 10 Twitter clients allowed to exist and be above-board companies, and they captured 80% of the Twitter client market share, then there could be a serious push to invent or adopt a new protocol to serve their users and de-federate Twitter themselves if they don't want to go along with the protocol.

I think that's a really solvable technical problem and all the privacy activists would have great reason to pick up an editor and join the fight. But as it is now, regulation has made that type of activity illegal, because it would infringe on Twitter's rights.


My interpretation of their suggestion is that the third party apps are merely a UI wrapper built on top of Facebook that removes ads or enforced privacy, and that building such a wrapper is currently illegal. Such a wrapper wouldn't be subject to centralization since there's no network effects, the network effects still lie with Facebook.


It's easy for social media companies to break third party clients on a whim, or ban the users. I don't think any legal gray areas are the problem. You'd have to force companies to allow third party clients if you want them to be viable on most platforms.


It's actually only easy for them to stop this because reverse engineering their code, extracting secrets, and then selling a product with those secrets is forbidden by law.


Doesn't that only apply to copy protection code? And you don't need to get into reverse engineering to make a client.


Bunch of good points here, I appreciate you for writing it up. I'm really interested in this space in general and I don't know what the perfect solutions are.

One thing I'm cautiously interested in is the anti-corporate action taking place in China over the last couple of months. From my view on the sidelines as an American, the Chinese Politburo seems to have just up and deemed certain profitable business models counterproductive for their view of society, and laid down the hammer to say that those companies can't be profitable anymore. Specifically, they did this for private test-prep/tutoring companies--they have to be nonprofits now.

I think the US doesn't have the state capacity or the political will to do something quite so hard hitting. Maybe we say that ad companies can only have X percent profit margin, and above that their tax rates increase? Force companies to invest their earnings in non-ad technologies? Not exactly sure.

Speculation aside, I like your point about a 3rd-party ad-free Facebook client. This mirrors what another commenter mentioned about the platform/participant distinction. It could be similar to the antitrust action taken against Microsoft in the 90s: maybe we say that Facebook has to open up their internal APIs such that third-party people are able to make their own clients.

The weird thing here is that the client app isn't really Facebook's business model, right? The marketplace Facebook creates is for their ad buyers; the users are subjected to that on the back end. Maybe the ad marketplace should be made public in some way? Could imagine at least forcing the ad marketplace to operate like a 'lit' exchange where bids and offers are visible to all participants. Not sure what this would actually solve though.

It's been said before but Facebook should've never been allowed to buy Instagram.

More concretely there's room for small incrementalist pieces of legislation which could help shift the landscape in beneficial ways, like Australia's laws forcing social media networks to pay news media sites for their content. [0]

Matt Stoller's not perfect but I like a lot of his stuff on antitrust news, some links below [0] https://mattstoller.substack.com/p/australia-forced-google-a... https://mattstoller.substack.com/p/facecrook-dealing-with-a-...


Ah, you're so close ;)

Thanks for taking the time to reply, these discussions are much more interesting on HN than anywhere else on the internet.

Your line of thinking really drives me crazy because I feel like we're so close to agreeing. The horseshoe is almost touching in this thread.

I don't know enough about China to know if their societal engineering efforts are considered successful, but I do know enough to see the parallels to the regulatory solutions that you're floating.

I think these are all well intentioned but terrible ideas:

> can only have X percent profit margin

Profit margins can be engineered with creative accounting. You can't practically do this.

> Force companies to invest

I don't even know how you would force a company to invest in something. Generally you'd tax or subsidize them, but in either case you're disrupting market forces.

> maybe we say that facebook has to open up their internal APIs

This would actually help the situation in the short term, but it's still not a good idea. Facebook would continue asserting intellectual property rights on the data they serve and entrench them as a public utility. The goal here should be to create an environment that's hostile to Facebook by protecting them less.

> forcing the ad marketplace to operate like a 'lit' exchange

The ad marketplace doesn't need to be transparent, because Facebook can't actually compete for attention if they lose government protection of their IP or government protection from the harm caused by their UGC. If Facebook didn't have these protections, we wouldn't need to worry about the marketplace because their business would fail.

Though Facebook is currently in a position to easily commit undetectable fraud on ad buyers. The low quality of their market should be another reason to not use Facebook. The reason we don't have a transparent and competitive ad market is because Facebook gets to decide the rules for their marketplace.

> Facebook should've never been allowed to buy Instagram.

I don't think they should have been prevented from buying Instagram. If they didn't own them, they'd just share data anyway for mutual benefit. Who owns the stock is separate to me from the abusive products they produce.

> forcing social media networks to pay news media sites for their content

News media sites are their own problem, but they shouldn't get a special carve out for being paid to be indexed. If they don't want their content public on the internet, they shouldn't publish it. Once they do, they should assume it can be liberally indexed/mirrored/redistributed.

I think we'll ultimately see blockchain solutions take over the space, because fundamentally they work and are an unstoppable technology.

It's also possible that all these regulations don't have any of their intended effects, but are still successful in crippling Facebook by making its operation very restricted. In other words, use regulation to make Facebook a worse product, to the point where the users organically leave.

The best outcome would be if we simply got rid of all these silly intellectual property protections and let the chips fall where they may. That would end a lot of media companies, but it'd be better than the problems they've caused.

A good outcome is that social media companies eventually lose their users organically, through some combination of onerous regulations that can't apply to decentralized projects, and new technologies that don't fit their business model simply being better.

A bad outcome is if governments adopt social media in official ways, it becomes essential to government business and therefore gets regulated into permanence. This blurs the line between what it means to have a Facebook account as opposed to an internet connection. That's probably where Facebook will take us if they can write the regulations. Maybe to vote or do some other government business, I'll need to let Facebook verify my identity, which will be fine, because it's totally regulated and normal.

I've read a lot of posts on HN about the "aha" moment of people using metamask for the first time. I think we're going to see a new wave of internet technologies that obsolete things like Facebook and Twitter very quickly. They might also be way more damaging and worse, but they won't be companies and they won't be able to be regulated. I'm not sure how that plays out, but we'll figure it out when we get there, because that's where we're going.


>>As I see it, these companies are using public infrastructure when it suits them.

I agree with everything you've said except what's quoted above. How is FAANG using public infrastructure, when private companies own everything from the undersea cables to the servers?


Ah, I see the confusion there. I meant public like "public ipv4 address", not a governmental "public utility".


I see. I think you meant to suggest that public accessibility to information is made relatively one-sided (to Facebook's advantage) by the DMCA on top of the software protection schemes employed by FB & co.


Yes, exactly.


I don’t think this argument works well against “Instagram for kids”. Yes, some big tech companies supply services that are close to feeling like essential utilities, but this certainly isn’t one of them; no one needs to use Instagram for kids, and there should indeed be freedom for parents, kids, and people in general to choose what to do here without government intervention.


Taking away power from arbitrary companies arbitrarily isn't very free either... People dislike government oversight because government is fundamentally imposed on all people, and funded by your own money; you can't say the same about Facebook...


The only reason the government doesn't run an Instagram for kids is because it doesn't know how to. Instead, it runs lotteries and liquor shops, which it knows do damage as well, but you don't get government to shut down government in those cases.


The federal government ignores them as these are not tasks of the federal government and therefore are at the complete discretion of the state governments

10th amendment

Now sure, the federal government has subjugated all of the states and is in a position to enforce arbitrary compliance to arbitrary whims, but it does also derive power from the collection of states and so it mostly ignores the discretion provided by the 10th amendment, to prevent that power from dissolving


Mostly agreed.

I know it is tired to say this, but we don't have a free market with the existence of intellectual property. They are also a natural monopoly, mostly due to network effects, but also because of their tendency to acquire the competition. Add to that that we don't put executives in jail for corporate wrongdoing, alongside treating corporations as people in terms of speech and political donations... and you end up with the present situation.


I often lament the loss of the ideal of the internet persona. Much of my growth as a preteen and teen came from online communities where it was easy to simply assume a new identity after learning from your mistakes. I would find it terrifying to grow up now, when your real identity follows you everywhere.


Shouldn't, ideally, a mistake you learn from not require a new persona? Seems like the old system created part of the problem: Removing people that make mistakes, until the community consists of only participants that have always been perfect!


Kids do stupid stuff all the time. It's nice to not be chased by your previous mistakes for your whole life. Especially the mistakes you made when you were a kid.


That ideal leans heavily on the other community members being able to recognise and accept that you have learned and changed.

Especially when the community = everyone on the internet that's a problem.


I am looking at this headline and I do not want to click on the story for further details. How do I get out of this shitty, shitty cyberpunk dystopia I live in?


Well, not directed at you specifically, but for anyone who works for companies that are creating this shitty dystopia and doesn't like it: stop.

One of the reasons I left Silicon Valley was that over the years I came to see that even among the people who were uncomfortable with, or knew, that the things they worked on were a net negative to society, fewer and fewer were willing to leave, usually because of money.

It's Upton Sinclair's "It is difficult to get a man to understand something when his salary depends on his not understanding it."


I have a friend who works in big tech. We worked together before. I moved to a new gig where I work fewer hours and earn a lot less money but I have a huge amount of freedom. From our conversations I gather that he would love to do what I am doing, but he has a house payment and a wife and kids. It seems like people get comfortable for a while with their big tech gig, and then get real used to the cash flow. After a while they can’t make a change because they’re so embedded.


Keep in touch with your friend. A lot of times, they are wracking their brains to find a way. The weird thing about stepping away, is that the sacrifice of supposed "needs" turns out to be the sweetest benefit.


I've come to view the ills of technology as an inevitability... this is how I sleep at night.

I have newfangled ideas for serverless infrastructure, but I know they have a hidden cost, as they would be used for all sorts of evil.


very yes :)

personally I am a freelance artist who fights to get stupid corporate social media platforms to actually show her work to the people who've said they want to see it, so I'm somewhat less complicit in the creation of this cyberpunk dystopia than the average HN reader.


I worked at Instagram for over a year and snooped very heavily while I was there on all the data related to teen well being and even suicides.

The interesting thing about all this is that the #1 competitive advantage of Instagram in the US that prevented it from completely losing the teen demographic to Snapchat was that teens felt less bullied and more safe on IG. I read literally dozens of surveys and studies that specifically asked about wellbeing and choices to use one platform over the other. Because of an intense and intentional push from FB leadership, Instagram was comfortably the safest destination for US teens.

Also there's certainly no evidence that Instagram usage has affected teen suicides in a significant way. The total number of teen suicides in the US per year is far too small for any statistically significant measurement claim. Any extrapolation or anecdote-based claim has to explain why IG doesn't seem to cause any decrease in self-reported wellbeing.


So the data said that Snapchat was worse in terms of safety and sanity? Any details?


Also it would be quite bad for businesses if their future most profitable demographic kept killing themselves before getting to their income-generating years.


But what if it is also profitable? And what if profitability can be measured with hard numbers and “suicidal” is somewhat open to interpretation?

And what if the people making the decisions also share the profit?

I’m sure we can reach some kind of middle ground.

/s


The real question is how many dead kids it takes before shareholder value growth gets limited.

The thing I seriously wonder is whether there is a new evolutionary pressure unfolding, and whether or not we should fight it. Or will it take care of itself?


Maybe they can set up some kind of affiliate program with parents to further increase engagement.


Are these leaked slides available anywhere? Or were they only shared with journalists?

The article buries the actual data in a paragraph toward the end:

> According to one slide, 32 percent of teen girls said the app made them feel worse about their bodies. Of those who’d experienced suicidal thoughts, “13% of British users and 6% of American users traced the desire to kill themselves to Instagram,” the Journal reported, citing another presentation.

We really need to see the survey questions and the data to understand what’s going on here. 6% or 13% of respondents with suicidal thoughts tracing them to Instagram is worth investigating, but I’m really curious what the other 87-94% of self-reported contributors were.


Not to sound callous, but we really need a control group. Magazines, movies, TV, advertising… everything targeted at women pushes an image that would make non-models feel worse about their bodies. I don't see how Instagram is unique in this respect, other than how addicting it seems to be.

I say this as someone who uses no Facebook properties, simply on principle.


The Wall Street Journal published many of the slides in 3 stories this week (and I presume there are more to come).


'Instagram for Kids', because nothing could go wrong with a Facebook product targeted at kids.


Wasn't Zuck's idea quite the opposite: to make Instagram for Kids safer for children by filtering predatory, malicious, and disturbing content that might affect them at that young age?


The provocation of suicidal thoughts in grown-ups often comes from feeling stuck in a shitty life situation while watching the never-ending photo streams falsely portraying happy lives, beautiful products and experiences you cannot afford, and the partners you will never have participating in staged social occasions you will never attend, all while downing your depression meds and feeling numb, lifeless, and without energy.

So even if Facebook succeeds in removing all predatory, malicious and disturbing content, are they going to eliminate all of the above from Zuck's kiddy-insta?

I see absolutely no reason why kids should be any different than grown-ups in this regard. Quite the opposite. Kids are absolutely obsessed with comparing themselves to other kids, and Zuck's Kiddy-insta is an absolute recipe for disaster.


I personally stopped using Facebook 10 years ago and will probably never go back. When I used it, I only used it to connect with friends and family, and I saw no disturbing content or content that would incite jealousy, greed, or lust.

I think the trend of hyper self-centric content did indeed begin with Instagram. For example, photo models and bodybuilders sharing photos of their bodies for no apparent reason besides glorifying and/or selling themselves.

Photos of food make no sense to me; photos of pets I can somehow understand. And yes, I can understand how looking at happy lives and happy experiences can make you feel miserable, but then again, Facebook or Instagram don't really know your mental health or the state you are in, and cannot magically remove "sensitive" content that would disturb you.

The best thing would be to stop using social media altogether, or to use it only to inform yourself about the world around you, but in the form of text or perhaps audio/voice only.


Every now and again I look into this "x causes a rise in the suicide rate in young females" and every time it ends up being only visible in the US. Last time I looked into it, people were blaming Snapchat, but Snapchat is available outside the US, so why do we only see the rise in the US?

So I’m curious whether this was also seen outside of the US. The root cause might not be social media itself, but just the fact that these apps allow amplification of unrelated damaging cultural aspects of being a young female in the US.


Get some laws governing social media. Don't dilly-dally about it like everything else. If you let them, they will act like drug dealers, and not the nice kind of drug dealer who says, hey, you've shot up enough for now, come back tomorrow...


Less letter-writing, more legislating. U.S. senators don't need to politely ask Zuckerberg to stop this app--if they want to put a stop to it, they should be willing to actually make a law against it.


I’m sure Markey would be happy to legislate against it. But thanks to the filibuster he would need to get 49 Democrats and another 10 Republicans on board before that would accomplish anything.


The filibuster sucks but it doesn’t obviate the responsibility to try.


Suicide rates for children are on the rise. There is a clear and obvious solution: repeal Section 230. It is time big social media faced the consequences of its destructive rent-seeking on our collective mind space.


Have we ever collectively put the cat back into the bag? Is there a realistic version of the world where kids use social media less (and not because they've gone on to adopt something worse)?


Are we assuming that parents have no oversight over their children’s devices? This seems like a social problem (i.e., family relations/management) rather than a technology/legal one.


Does it really? We see the same effects in adults.

It seems like a problem of the ability to do x colliding with basic human behavior.


That someone thought this was a good idea is a social problem in itself.


The assumption is correct. Have you met children?


I have children. Their access to and usage of devices is my responsibility.


The fact that you have parental responsibilities does not absolve anyone else of their responsibilities. That would be a non sequitur.

But anyway, it’s not reasonable to expect parents to know if this or that app is more likely to increase the risk of suicide. Fortunately I have a government that I pay for that can research this and help me with that.


They can assume all apps increase the risk of suicide.


Any remotely tech-savvy kid can go around you. I did untold amounts of questionable shit when I was under 16. The only thing my parents ever confronted me about was when I forgot to switch off the “let contacts find me” feature in Instagram when I made an account, and that was far from the worst way I defied their rules.

There’s a balance to be had here. First there’s the social responsibility: if there isn't something targeted towards kids that is proven to harm their mental health, that is a net good.

There’s also parental responsibility. Being a college student, I of course do not have the insight necessary here, but I feel like I would’ve made better decisions if my parents weren’t as controlling with tech. It was almost a game to me to see what I could get away with. Simple things like adding me to a web filter _when I was in high school_ eroded the trust I had with them. Granted, it took me < 5 minutes to bypass it, but I still felt wronged.

Parenting wise, again, I’m completely unqualified, but I think having an open and honest relationship with technology is a better way than what my parents did. Rather than harping about “everything you do is our business,” being allowed to have some degree of privacy would have fostered trust.

tl;dr there are ethics involved with shipping a product. Don’t offload these ethical decisions entirely to parents, because kids generally don’t give a shit.

source: am 19


Tech-savvy kids don't have this problem. Also you don't need to shut it down for good, only add a little friction.


Pretty much every young teen watches porn or violence/gore/etc. online without their parents knowing, either because they search for it or one of their school friends shows it to them. That's the reality.


Children aren't the problem. Parents are.


They do have oversight; they just elect not to use it, because the devices are pacifiers and their individual lives are more important to them than their family's.


Work for them? Quit. Use them? Don't.

They and their ecosystem are individually toxic and societally destructive; they are bad for innovation, bad for the marketplace, and most of all, bad for the humans caught in their sociopathic pursuit of money via "engagement" at any and every cost.

This week alone, the number of truly damning stories is infuriating, yet these are now so continual and commonplace that it is hard to keep track of the myriad flavors of abuse, deceit, deception, double-dealing, and outright lies, including to putative oversight, e.g. the US Congress.

If you work for or with them, you should take a hard look at the real costs and consequences, and see if you find your soul in order.

Abuse and parasitism at the expense of the common good and of individual lives and wellbeing is real, literal evil;

and you don't need to believe in the divine to know that doing evil has a cost.

Get off the ecosystem and work against it.


The law should not govern or restrict younger people (age 18 or below) from using a website... instead, it should explicitly prohibit a website from TARGETING or allowing anyone 18 or younger to use it. The onus is on the website, not the user.


Well... because you asked nicely, we will get started on revealing our new parental controls system we made a while back.

It sends you emails and status reports about your children's moods and emotional changes. We call it Instaface Parent.


Your child scored a lower than average score today on instaface. Maybe you should reach out to your child and show them proper usage of the instaface user interface (user in-yer-face?)


We have detected very slow input speeds as well. Have you considered teaching them to use speech to text? Get started here.


I wonder what effect YouTube has on children?


The US has only 546 suicides per year in the 5-14 age group; the 15-24 age group is 10x greater.[1] Risk of suicide increases with age, peaking at age 45-54.[2] Under-13s, the target group for Instagram for Kids, is not at high risk.

[1] https://www.kidsdata.org/topic/211/suicides-age/table#fmt=12...

[2] https://www.sprc.org/scope/age


Your argument is ridiculous for two simple reasons:

1. Do understand it is the same group of 5-14-year-olds who go on to become 15-24, carrying with them all the life experiences that contribute to suicide.

2. Low rates among 5-14 might be precisely because they are currently sheltered from the shitstorm that is the world and the wider implications of losing that innocence. Mass adoption of social media by that age group might directly accelerate that, and at the least may cause a rise in suicide rates. Even a single life lost to such bullshit is devastating.

Please stop advocating for bs like social media that is really never required in our lives, much less kids' lives.


> Under-13s, the target group for Instagram for Kids, is not at high risk

Genuinely curious: I'd like to know the benefits (not to FB but to society) of Instagram for Kids in order to even contemplate the risk-benefit analysis. Do you have any sources at hand for this?


Mainly to completely segment off their content (like YT kids) so that while they can have an "Instagram" account, they can't have creepyperson69 sliding into their DMs. It would be similar to what they did with messenger kids.


What content would a kid be consuming on IG? There is no children's content on the platform, as far as I'm aware.


It's a platform for user-generated content so I'd imagine they would be able to follow their friends and see their pics/videos/reels. That plus kid-approved verified partner content (like @pbskids or @nick).

That plus the idea around tying it to a custodial "parent" account so that a parent can keep tabs on who they are following vastly improves the current situation.


>able to follow their friends and see their pics/videos/reels. That plus kid-approved verified partner content (like @pbskids or @nick)

Anecdata alert: We tried YouTube Kids in 2019 for about a day (for my then 4-year-old son). We were so shocked to see suggestive content and other crazy stuff from "verified" and reputed channels (100k+ followers) that we uninstalled the app then and there. There's no worse insecurity than a false sense of security.


Yeah. Given the experience with regular social media platforms, the very concept of a platform for kid content, where the consumers are kids and the creators are random adults from the Internet[0], seems insane.

And yes, I've heard all the stories about YT Kids, and even took a brief look. We're staying away from it. Some research + youtube-dl is our way of choice for sourcing songs for our kid to listen to (a rough sketch of that workflow is below).

--

[0] - What do people expect? That content for YTKids / IGKids will be created by 9-13 year olds?
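For anyone curious, here is a minimal sketch of that kind of workflow using youtube-dl's Python API: it grabs the audio track only and converts it to mp3 (needs ffmpeg installed). The URL is just a placeholder; you'd substitute whatever you've vetted yourself.

    import youtube_dl

    # Download only the audio stream and convert it to mp3 via ffmpeg.
    ydl_opts = {
        "format": "bestaudio/best",
        "outtmpl": "%(title)s.%(ext)s",
        "postprocessors": [{
            "key": "FFmpegExtractAudio",
            "preferredcodec": "mp3",
            "preferredquality": "192",
        }],
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=PLACEHOLDER"])  # placeholder URL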


A group not being at high risk is not a good argument that it’s ok to increase their level of risk.


Respected Animats:

Wouldn't it be ideal if our children didn't feel suicidal due to the technology and dark patterns that we make available to their young minds (which in general take until past 25 to actually know what's good for them)?

I suggest that even one suicide - especially in a child - is too high.

-- A Fan


Does your argument boil down to 'just because risk is currently low, it is acceptable to make a change that is known to increase risk'? What if the risk is currently low simply because that change hasn't been made? It doesn't make sense.

Or are you saying suicidal ideation cannot be correlated to anything other than age? Which is also untrue since we have evidence social media (and other factors) can increase this risk.


I haven’t checked your stats, but how are they relevant to this discussion, at least in the way you imply?

If one even ignores the insane fact that that many young kids kill themselves every year, in what world is it okay to gleefully ignore an increase in suicides in any age group and the cause(s) of the increase?


It might be the world where you fabricate descriptions like 'gleefully' just to make the person you're disagreeing with look bad.


I didn't fabricate anything. It's simply a bad word choice; I was really searching for some synonym of "deliberately, in a reckless way" that I can't quite put my finger on.


> Under-13s, the target group for Instagram for Kids, is not at high risk.

Until now it seems


What a ridiculous argument. Are you aware that humans grow older every year?



