
Irrespective of the incidental value "Meta" provides, let's keep in mind that no business has a right to exist independently from its standing in the system of laws and the health of its users - especially children.

Social media companies argue that they can't monitor every post and do appropriate age verification online because it would be "impossible to scale". Well, tough luck. Not being able to scale is YOUR problem, not MINE.

We routinely prevent certain businesses from existing, like a factory that wants to dump harmful chemicals into water people use for drinking. If the factory's investors say that there is no alternative, we will still prevent, to the best of our ability, the construction of the factory, because it's a business that simply shouldn't exist in the proposed manner.




As a parent of a now-13-year-old, I'm disappointed that COPPA hasn't been updated even as companies have gone aggressively after children.

I was shocked over the summer when I received an email saying I was going to lose some of the Xbox controls we have in place, as if 13 were some milestone of maturity and self-control. AFAICT, I'm legally responsible for my children until they're 18, and at least at the moment, they are completely dependent on their parents for food, shelter, and transportation. Just because they're online doesn't mean I abdicate my responsibility as their guardian, but companies think it's a good idea to let kids bypass the restrictions that were in place because they've reached an age that was only ever meant as a minimum age for tracking in a very young Internet.

Unfortunately for our kids, we've become extremely restrictive about the services they have access to; they can only use platforms that allow communicating with people we know, etc. Is that the right move? For us, right now, it seems to be.


> As a parent of a now-13-year-old, I'm disappointed that COPPA hasn't been updated even as companies have gone aggressively after children.

COPPA was the culmination of a series of progressively narrower attempts to restrict internet content and practices using "think of the children" as a justification. The previous efforts were, but for some standalone bits (like Section 230, which survived from the mostly-unconstitutional Communications Decency Act), struck down as unconstitutional. That's why there haven't been attempts to update it with more broadly applicable restrictions: it's already the outcome of a process of backing off to the maximal constitutional restriction. (OTOH, with a socially conservative Supreme Court that is unusually unconcerned with respecting precedent, I suppose now would be the time, if you wanted to do that.)


There is historical precedent for limiting access for children. Children can't legally buy cigarettes, alcohol, lottery tickets, or firearms (ownership of which is a constitutional right), or enter certain prohibited places. Unfortunately, many service providers have abused their access to children and acted as if children can legally enter into some type of one-sided quasi-contract. The bar for working with children at any physical location is huge, and the boundaries are well understood.

I know of exactly zero places I take my children (activities, sports, clubs, etc.) that would speak to them privately to coerce them to use that service more without parental consent. That's what Meta and a million other online services are doing to children.


"Not being able to scale is YOUR problem, not MINE."

Indeed, it is doubtful that these mega-sized websites were intended by the original architects of the www. More likely, they may have imagined the web as a whole would grow (scale) as more people gained internet access. Instead, the "Silicon Six" intend only to scale their own gargantuan websites that they now refer to as "platforms". All web publishing and even internet communications are expected to be intermediated by a handful of surveillance advertising companies. What a disaster.

https://www.washingtonpost.com/outlook/2019/11/25/silicon-si...


>Indeed, it is doubtful that these mega-sized websites were intended by the original architects of the www. More likely, they may have imagined the web as a whole would grow (scale) as more people gained internet access.

This, I think, is roughly true, but the issue is nuanced. The history of the web can be characterized by ever-decreasing barriers to entry. At first you needed a physical host and Apache. Then, you could have a shared host and put HTML files in your user directory. Then, you could run WordPress. Then, you could sign up for MySpace and never have to run anything. The 'like' is the smallest unit of content that can be captured by such systems. TikTok, Facebook, Instagram, Wikipedia, etc. are what you get from this evolution.

The unexpected downside is that with reduced friction to content-creation, the content has become quite awful. In part because people are awful, but also because of money and how effective it is to hire people to say things using these systems. If the friction to create content remained high, it may have been a "better" internet in some measures. It would certainly be a much smaller one.

Who knows how it will turn out? I suspect that AI will have a significant impact on all these systems, to wit, injecting massive amounts of noise that will reduce their utility significantly, if not to zero. No empire lasts forever.


Regardless of what the original architects of the web may have thought, it was clearly not designed for (a) only a small number of mega-sized websites with hundreds of millions of pages that only serve resources produced by others, as a substitute for (b) many websites, each hosting resources their operators have produced, or at least public information.

I find the history interesting. But I started using the www in 1993. There were no so-called "tech" companies like Facebook or Google. Younger web users born into a web intermediated and surveilled by so-called "tech" companies for the purpose of advertising may have no such interest.


> it is doubtful that these mega-sized websites were intended by the original architects of the www

Not to jump to the defense of social media companies, because I honestly think they're not worth the cost of keeping around, but who gives a shit what the original architects of the www intended?

They weren't forming a country and making a constitution. Hell, even for constitutions I don't really care what the original founders wanted, and for the most part most of them thought you shouldn't care either, as they meant for them to be amended.


Not saying I think it's a good outcome but I don't see why it's relevant what the "original architects" thought. This is just a variant of 'argument from authority'.


The mistake we are making here is to draw the problem along the boundary of age.

The problem is not exclusive to children!

Even if you could successfully bar all children from participating on Facebook, the problem would still exist, and Facebook would continue to harm people.


There are many things in life more harmful than Facebook that we don't ban. Just because you don't like something, or it has some negatives, doesn't mean it should be illegal. Indeed, given that it's impossible to specifically define "social media", such a ban wouldn't even really be legally feasible, given First Amendment issues.

Now personally I don't use alcohol, don't do drugs (including caffeine), don't gamble, and barely ever use social media (except HN, my one guilty pleasure). But I still think those things should all be legal.


Very much true. Marketing content should be carefully vetted and filtered. The people you wish to project that content at may want to take back control from addictions they suffer from (e.g. food).


Presumably adults are supposed to be old enough to know better than to use Facebook.


> Social media companies argue that they can't monitor every post and do appropriate age verification online because it would be "impossible to scale". Well, tough luck. Not being able to scale is YOUR problem, not MINE.

1000%. I think there are more complex problems in the world that have been solved than this. If you look at youths' and teenagers' activity on Instagram, their photos and videos, and their social graph, I don't think it is so difficult to detect them with high accuracy.
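
For illustration, here is a minimal Python sketch of the kind of signal-based classifier this imagines. Everything in it is an assumption: the feature names, the numbers, and the threshold are invented stand-ins for the far richer signals (image and video models, social-graph embeddings) a real platform would have.

    # Hypothetical sketch: flagging accounts that look like minors from
    # behavioural features. All feature names and data are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row: [median connection age, daytime-activity share,
    #            share of teen-skewed interests followed]
    X_train = np.array([
        [14.2, 0.10, 0.85],  # labelled minor
        [15.1, 0.05, 0.70],  # labelled minor
        [34.8, 0.60, 0.05],  # labelled adult
        [41.3, 0.55, 0.02],  # labelled adult
    ])
    y_train = np.array([1, 1, 0, 0])  # 1 = minor, 0 = adult

    clf = LogisticRegression().fit(X_train, y_train)

    # Score a new account; flag it for age review above a threshold.
    candidate = np.array([[15.7, 0.08, 0.77]])
    p_minor = clf.predict_proba(candidate)[0, 1]
    print(f"p(minor) = {p_minor:.2f}, flag = {p_minor > 0.8}")

The point is the shape of the pipeline, not the toy model: given labelled accounts and behavioural features, this is ordinary supervised classification.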


I think monitoring every post is a pretty solved problem, and it only gets better (worse?). Censoring has varying degrees of success, whether that is outright deleting a post/user or just hiding it from "not interested" users.
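
As a sketch of what "monitoring every post" looks like mechanically: score each post and pick an action from a threshold ladder. The scoring function and thresholds below are invented placeholders; production systems use large trained classifiers, but the control flow is the same.

    # Hypothetical moderation ladder: a per-post harm score drives one
    # of several graduated actions. The scorer is a toy stand-in.
    def harm_score(text: str) -> float:
        # Toy scorer: fraction of words on a tiny blocklist.
        blocklist = {"scam", "slur", "violence"}
        words = text.lower().split()
        return sum(w in blocklist for w in words) / max(len(words), 1)

    def moderate(text: str) -> str:
        score = harm_score(text)
        if score > 0.5:
            return "delete"  # outright removal of the post/user
        if score > 0.2:
            return "hide"    # suppressed for "not interested" users
        return "allow"

    for post in ["lovely weather today", "this scam promotes violence"]:
        print(f"{post!r} -> {moderate(post)}")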


> a factory that wants to dump harmful chemicals into water people use for drinking

Your example is describing an externality, a cost imposed on an unwilling third party. Facebook is a platform that individuals can choose to avoid or restrict their children from using if they deem it harmful. These situations are not comparable.

> no business has a right to exist independently from its standing in the system of laws and the health of its users - especially children.

I disagree. Businesses and websites ought to have a right to exist as long as they do not inflict harm on third parties and their users participate willingly. Individuals ought to be free to make decisions regarding their own health and that of their children. If we were to consider otherwise, can you fathom the sheer volume of things we'd need to prohibit?


> Your example is describing an externality, a cost imposed on an unwilling third party. Facebook is a platform that individuals can choose to avoid or restrict their children from using if they deem it harmful. These situations are not comparable.

Yeah just like alcohol and drugs, you can easily restrict your children from using them so companies should be free to sell to whomever. Good parents ensure their children aren’t doing this.


Talking from personal experience growing up, I would generally agree. The prohibition from selling alcohol/cigarettes to minors was pretty ineffective. The only effective prevention was having stricter parents and facing consequences. That said, I wouldn't say Facebook is at all comparable to alcohol or drugs...


The prohibition on selling alcohol and drugs has clearly had no effect on their availability. So I'm not sure what you are trying to argue.


But what about all the small, independent forums on the internet? For them it would really be impossible to comply with monitoring every post. Often they are hobby projects.


That's obviously not a primary concern. Similar to how your home kitchen isn't going to have a food inspector show up and close it.


The comparison you're making is nonsensical.

My home kitchen doesn't routinely serve strangers. The forum I host through a server in my cupboard does.

If you want to make a comparison, it's between a small family restaurant and a giant fast food chain. It may surprise you but food inspectors treat both in roughly the same way.

Face it. Nothing meaningfully separates my forum from giant social media companies apart from scale.

Hence, by that logic, small forums hosted by hobbyists and volunteers should be shut down.


> My home kitchen doesn't routinely serve strangers.

Most states have cottage food laws, with basic regulations for small-scale commercial production. E.g. you need to take a course on food safety, you need your water tested, and you can only sell up to $X per year, but you aren't subject to random inspections. There is no reason why the web shouldn't be regulated similarly, with different rules based on the size of the audience.


How does that mesh with the original post's point?

If we have different rules depending on scale, how is it OK for the rules for large-scale social media platforms to be impossible for those platforms to comply with?


Ideally, no matter what the size of the platform is, if it's genuinely impossible for the people running that platform to not harm children, that should be their problem. If they can't figure out how to not be harmful to children, they don't get to exist. That's how it's supposed to work.

In practice though, it's not unreasonable to say that we should expect more from companies that can cause harms at a much larger scale. Perhaps there's some kind of threshold where we can accept some harm being done at a small scale while still not allowing harm at a much larger scale. Seems fair enough.


Anything at scale is something else entirely.


You're making a ridiculous argument


What? How do you figure? I run a small hobby forum for a niche interest and it's very easy to monitor every post. We have five moderators, a couple hundred registered users and we get maybe 3 or 4 posts a day.


There needs to be a line drawn between those who serve a large number of people and those who don't. Either way, only those who are big enough can be actively monitored and held accountable.

Before the obvious argument "where do you draw the line?", I may add that only a select, obvious handful of social platforms reach Meta's level.


The original post claimed every post should be monitored for "harm to children". And that if giant social media companies can't do that, they shouldn't exist.

Now you want to exempt everyone apart from giant social media companies from that same rule?

This is getting a bit Kafkaesque.


Oh, I'm sorry. I didn't notice that extremely important part. That is... more than Kafkaesque. I thought the conversation was around marketing, not user posts.


I frequent the OCAU forum here in Australia.

Moderating the News board became too arduous and carried potential legal liability.

So they turned the board off.

It’s really not that hard.

And anyway, just because you can’t fix everything, doesn’t mean we shouldn’t try and fix something.

All or nothing thinking leads to inaction.


So we can only have discussion forums where the owner is a big company that can afford to have a team of moderators around the clock. Yes, that's going to end well. (For the elites, that is.)


My direct experience is that all social is now Dark Social.

It's invite only Groups on Telegram, FB Messenger, or Apple Messages mostly.

So it has ended well for my social circle ;-)


A lot of discussion forums died that way. They shut down the problematic parts, but those were also the parts people were engaged with.


To paraphrase the GP - Well, tough luck. [This] is YOUR problem, not MINE.

If hobbyists can't comply, maybe it's about time we prevented small independent forums from existing.


> Social media companies argue that they can't monitor every post and do appropriate age verification online because it would be "impossible to scale". Well, tough luck. Not being able to scale is YOUR problem, not MINE.

Nope. It's also your problem, because it has to be enforced somehow. Few websites are age-appropriate for young children (this one certainly isn't), so perfect enforcement would mean essentially no viewing websites without a login, and all logins requiring display of your face and a government-issued ID to a live webcam.

The ID check is already in place in several countries on YouTube if you try to view an age-restricted video.


> no business has a right to exist independently from its standing in the system of laws and the health of its users

I don't understand what that means, or how I can know if my business is inside or outside the law.


> I don't understand what that means, or how I can know if my business is inside or outside the law.

Simply put this sentiment on the landing page of your website, on the labeling of your product, or, if you have a mailing list, in the subject field of your next newsletter. "I do not understand if my business is inside or outside the law" is bound to generate useful feedback.


It's unclear if even the porn site verification laws will be allowed[1], so I'm not sure the chances are that good in terms of requiring age verification for all social media.

https://www.theverge.com/2023/8/31/23854369/texas-porn-age-v...


Look here https://news.ycombinator.com/item?id=38001630 for talk about regulating this new-ish aspect of business.


Glad to see this take here.

So often I've seen the opinion that effectively expresses that some company or business plan has a God-given right to be viable. Not directly, of course, but through an about-face when someone points out that the company's business proposition conflicts with something: but how will the business make a profit if it has to [submit to government regulation] because of [concerns about marketing to children]?

Some people could probably steadfastly support a new business that was selling something dangerous but that was new and "innovative". "Oh well, people just have to take responsibility for their own well-being; regulating crystal dust fumes™ would be way too onerous and impractical."


> Not being able to scale is YOUR problem, not MINE.

Following your reasoning, anything that could possibly harm children should be banned. Cars sometimes kill children, shall we ban them too?

Now, suppose we do ban Meta. Where should we stop? Should we ban open decentralized social media too?

You may not like social media, but a lot of people use them without any particular issues.


I think the lawsuit in the article draws a reasonable, if not conservative, line at knowingly exploiting vulnerable people to boost profits, which a whistleblower is saying Meta is doing. It's not just that the product is harming people, but that it is going out of its way to harm people. Is open decentralized social media doing this?

There are restrictions around advertising for cigarettes, I think for the same reason. To take the car discussion a step further: I think companies that sell certain-sized trucks are aware of blind spots that make it hard to see people in certain positions. Reasonably, these trucks should be banned, or, more conservatively, no new trucks with this flaw should be built or sold, or they should require a commercial license.


> knowingly
> exploitable
> vulnerable

None of these are cut and dried. How vulnerable does someone have to be before you can exploit them? How harmful does the behavior have to be before it's exploitation? What is "knowingly" in an organization? How should events be handled if it's a single engineer operating in the shadows vs a board decision? etc etc etc

The answers to these questions may seem obvious to you, and someone else may have a completely different answer that seems equally obvious to them.


It's not about any of our opinions. As the article says, there is a whistleblower from Meta saying that this is documented. The lawsuit is what's going to sort it out.


The internal study presented by the whistleblower actually found that Instagram made users feel better more often than it made them feel worse: https://about.fb.com/wp-content/uploads/2021/09/Instagram-Te...


That just means it’s about the opinion of the government, which isn’t much better in my opinion (and is arguably worse).


No, it means it's the opinion of the jury. Their job is to determine the facts.


> it is going out of its way to harm people

I'm having a very hard time buying this. I may be wrong, but I'd assume that Meta is audited and well-regulated. Besides, the company is very transparent. I know a bunch of people working there, and they don't strike me as people who would purposely harm others. Every employee has access to the whole code base and freely talks about what they work on. I don't believe in some conspiracy.

That being said, it's perfectly reasonable that we keep these companies under the microscope given the power they have, and they may be making mistakes.

> Is open decentralized social media doing this?

Arguably, decentralized social media without moderation would be more harmful than Facebook/Instagram.


> Cars sometimes kill children, shall we ban them too?

Yeah, if they sold cars with a zero-star safety rating, claiming that it's just too hard to do anything about making cars safer when you sell cars at scale.

If Meta was a low-margin business, we might buy their argument that "it's just too hard".


We also don’t allow children to drive cars, so the analogy is DOA


That's not following their reasoning at all, it's a strawman. His point was that a business shouldn't be able to claim that doing something we want/require them to do is too hard and just not do it, which is what they currently do with content moderation.


How do you know they're not doing it, and that it's unreasonable to ask more, considering the difficulty of the task? His point is that there's no excuse for not doing it, even if the task is impossible. We don't ask the impossible of car manufacturers, so why should it be different with Meta? Who's to decide?


Because it's well documented that they're not doing it. They have had several whistleblowers around content moderation already. The whole of Africa's content moderation was basically being handled by a mid-level employee.


It is for the society in which these companies operate and profit to decide, not the companies themselves. And if it is indeed “too difficult”, which we all know just means it will hurt profits to a degree that is unacceptable to shareholders, too bad, they can either make less money or shut it down.

If a cop can kill a man in broad daylight for selling loose cigarettes, we should be able to kill a company like meta for breaking the law.


We do ban cars. We have safety ratings and regulations that they have to meet, otherwise they are banned from sales. We also have driving tests and age requirements for using them.

Since we still have cars post regulation, maybe we would still have social media as well.


Using a car without a license is in fact banned, because this minimizes harm to kids (and non-kids).


> Using a car without a license is in fact banned

On public roads, but not otherwise.


The poster didn't say ban; this is a huge strawman.

The poster said that laying down requirements, and a company being unable to scale while adhering to those requirements, is not an argument for removing said requirements.

Our priority is the safety and health of society. If the only way they can scale is to put people at risk, then maybe as a society we don't WANT them to scale.


We actually do restrict children from driving cars. It’s pretty common in most countries


> Now, suppose we do ban Meta. Where should we stop?

Google.


Meta is not being banned, though.


Cars aren't a great example; they're a huge problem, and we'd be better off if we weren't all dependent on having a car.


PLEASE PLEASE PLEASE explain how that's in any remote way equivalent??

"hurr durr lets ban cars now". PLEASE.

Unlike social media, to drive a car you need a license, you need insurance, and if you run over a kid you most likely will go to prison. Or maybe you weren't driving your car, but it was unsecured, or had a mechanical malfunction. You are still responsible and you will in one way or another have to pay.

But, you can create ads on social media targeting minors with toxic messaging, and nobody bats a damn eye.

It's about time that online commercial endeavors, like ad networks, get reined in and have rules applied to them, same as for other activities.


> PLEASE PLEASE PLEASE explain how that's in any remote way equivalent??

let's ban Meta because they can't prevent some kid somewhere from creating an account (and possibly seeing really horrible things there)

let's ban cars because some kids are going to get run over, and car manufacturers can't prevent it.

You're welcome


I mean, FB engineers are specifically optimizing for toxicity at an algorithmic level. It's the product. You can't perform controlled experiments to measure the profitability of depression and also feign ignorance.

You’re all very welcome to whatabout but working at FB is a stain on your character.


I mean no, they're specifically optimizing for engagement. It just turns out that toxicity fuels engagement.
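
A toy Python illustration of that distinction: the objective below is predicted engagement only. Toxicity never appears in the score, yet the toxic item ranks first simply because it is predicted to engage more. All names and numbers are invented.

    # Toy feed ranker optimizing for engagement alone. Toxicity is
    # tracked only so we can observe the side effect; it is never scored.
    posts = [
        {"id": "cat_photo",   "pred_engagement": 0.30, "toxic": False},
        {"id": "rage_bait",   "pred_engagement": 0.90, "toxic": True},
        {"id": "family_news", "pred_engagement": 0.40, "toxic": False},
    ]

    ranked = sorted(posts, key=lambda p: p["pred_engagement"], reverse=True)
    for p in ranked:
        print(p["id"], p["pred_engagement"], "(toxic)" if p["toxic"] else "")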


Among Haugen’s allegations was a claim that the company was knowingly preying on vulnerable young people to boost profits.

"Knowingly preying on vulnerable young people to boost profits" is different from blindly optimizing for engagement without understanding what works.


Oh they understood how it works, and they definitely accepted that their efforts to boost engagement caused mental health issues, especially among teenage girls. But toxicity was not a goal in itself.


True, although that's just an "allegation".


Non sequitur. They're optimizing for money. Contrarian status: weak.


Isn't that a bit like tobacco companies "optimizing for engagement", but knowing that the same ingredients cause cancer?


In all seriousness, no, and for obvious reasons. The toxicity is created by the participants, freely and of their own volition. Is it Facebook's responsibility to police that, shut it down, censor it? Honestly... And where, who, and when? According to what laws, in what regions? Also, how do we know Facebook created it, rather than simply making visible the toxicity that everyone knows middle school and high school are full of?

I honestly don't know, but I don't like such disingenuously simple shoot-the-messenger answers. That being said, if Facebook willfully amplified it, already knowing the damage, then it's a different issue. And I think there is some evidence of that.

Either way, it's different from tobacco, which was created by the company itself.


> Either way, it's different from tobacco, which was created by the company itself.

One could argue that Meta willingly boosted toxic engagement themselves.

This didn't happen in a vacuum. When a controversial post or comment suddenly pops up in your feed, out of nowhere, participants may be responsible for the responses to it, but Meta is very clearly seeking, targeting, and amplifying that toxic behaviour for profit.


Again, you don't know that. That can be the result of "show what is engaged with more": you end up with a rage-loop that was in no way purposefully designed into the original system.

You're on Hacker News; you should know about these types of system design issues...


> That can be the result of "show what is engaged with more": you end up with a rage-loop that was in no way purposefully designed into the original system.

That may have been true fifteen years ago. There is now a decade's worth of scientific literature on social network effects on human behaviour, and I think we can agree that this is hardly something not a single Meta executive knows anything about.

> You're on hacker news you should know these types of system design issues...

What I'm suggesting is that this is not an issue for them, but a feature.



