> Zuckerberg vetoed a 2019 proposal that would have disabled Instagram’s so-called “beauty filters,” a technology that digitally alters a user’s on-screen appearance and allegedly harms teens’ mental health by promoting unrealistic body image expectations, according to the unredacted version of the complaint filed this week by Massachusetts officials.
> After sitting on the proposal for months, Zuckerberg wrote to his deputies in April 2020 asserting that there was “demand” for the filters and that he had seen “no data” suggesting the filters were harmful, according to the complaint.
So why didn't the Instagram CEO provide that data? It seems like a huge oversight for someone in that position.
I remember when Facebook did research to show causation between newsfeed items and mental health. Everyone reviled them for “running dangerous manipulative experiments on human beings”.
After that, I bet they wanted to avoid the same bad PR. It would be even worse if they ran such an experiment on children and someone died during it.
The problem wasn't necessarily the A/B testing so much as what came next: Cambridge Analytica. This current headline seems to carry the same message, that being able to manipulate a user's mental health is beneficial to Meta.
I’m sure they had A/B tested the feature, but it’s hard to think of a metric they could measure in those tests that would reliably detect the alleged harm.
I’m not sure they made the right choice in the absence of data, but I do believe that they didn’t have the data or a good way to get it.
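For a rough sense of the statistics involved, here is a back-of-the-envelope power calculation (standard two-sample normal approximation, nothing Meta-specific) for detecting a d = 0.09 standardized effect, the size reported in the Facebook-abstention study cited elsewhere in this thread:

```python
# Per-arm sample size needed to detect a small standardized effect (d = 0.09)
# in a two-arm A/B test, at alpha = 0.05 two-sided and 80% power.
# Standard normal-approximation formula; purely illustrative.
from math import ceil
from statistics import NormalDist

def n_per_arm(d, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sample test of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_arm(0.09))  # roughly 1,900 users per arm
```

For a platform with billions of users, a couple thousand users per arm is trivial, which supports the parent's point: the binding constraint isn't statistical power, it's that there is no trustworthy outcome metric to measure in the first place.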
The idea that we, as technologists, can build or continue doing something so intuitively wrong because it is, in some sense, profitable and there is an absence of data to prove its harm is an indictment of modern society and ethics.
“Intuitively wrong” is subjective. The idea that a self-selected, self-righteous minority should set the moral compass for the majority is the real indictment of modern society and ethics. If there is no way to achieve majority consensus on a decision (there is always a collective consensus on whether to be on a platform at all), then it only makes sense that the platform owners should be the ones to choose.
Yeah I think what you are identifying is also definitely wrong. But what I am saying holds.
You don’t think, for example, that most people would think it’s a bad idea to write software to facilitate showing teenagers unattainable and more desirable versions of themselves?
> You don’t think, for example, that most people would think it’s a bad idea to write software to facilitate showing teenagers unattainable and more desirable versions of themselves?
That ship sailed with the invention of the printing press. This conversation happened in the fashion industry with publications like Cosmopolitan long long ago.
The difference is that in a fashion magazine, editors choose what to project. With apps, it is teenagers themselves who create these versions of themselves, because kids compare themselves to others and it gets very cliquey. The apps aren't the problem per se; it is human behavior that is buggy, and that is the real thing to fix, because the next thing will come around and feed off the same suboptimal pathways in our brains, whether or not it is designed intentionally to make money.
People say this but what, you're going to A/B test a feature removal? Randos being like "hey, where is this feature now?"
At least if you remove the thing wholesale you can do the PR push along with it, and actually do something _good for people_ (at least if you subscribe to this concept in general).
I think it's hard to suss out some stuff quantitatively on this, but ... honestly? If you do qualitative interviews with a bunch of kids across the spectrum, have quotes with them, an info packet with all these articles.... maybe you're wrong, but the upside feels important and the downside feels pretty trivial.
You’re going to use company resources to create a test for your position or pull the data to support it (using resources that could be pushing forward on the business) when the CEO hates your position? That’s not generally a wise strategy for corporate executives.
Even if you have the data, this is likely all about the CEO not really wanting to hear the argument or go in this direction and not his officer’s inability to provide supporting data.
I'm seeing a lot of responses about the morality or PR implications of trying to A/B test this, but this seems fundamentally impossible to A/B test to me and points at a bigger problem with what companies and their marketing departments believe they can know and the limits of what science can actually do.
The hypothesis here is that usage of professional but automated editing tools to make people look more beautiful than they really are promotes unrealistic competition standards and makes people feel worse about how they look in a regular mirror that is showing them the truth.
How do you A/B test this? Giving the feature to some people and not others isn't good enough. The impact is because of seeing other people look more beautiful on the Internet than they actually look in person. How are you going to prevent a user from seeing the photos of other users who use filters? Even if the global social graph had strict partitions, which I imagine isn't the case, these photos leak into web search, news articles, and the same technology gets adopted by other platforms. You can't A/B test something that impacts the entire broad culture. The world is too connected.
> “All the people that I’ve talked to internally about this were like… Mark’s level of proof, in order to be able to take the work seriously and act on it, is too high,” Bejar added. “I think it’s an impossible standard to meet.”
"Meta CEO Mark Zuckerberg allegedly halted proposals aimed at improving Facebook and Instagram’s impact on teen mental health, according to internal communications revealed as part of unsealed court documents.
Zuckerberg allegedly vetoed plans to ban filters that simulate plastic surgery on Meta-owned platforms, according to the unredacted lawsuit filed by Massachusetts Attorney General Andrea Campbell (D), and ignored requests from top executives to boost investments in teens’ well-being."
> “I respect your call on this and I’ll support it,” Stewart wrote, according to a message cited in the complaint, “but want to just say for the record that I don’t think it’s the right call given the risks…. I just hope that years from now we will look back and feel good about the decision we made here.”
Were they saying "for the record" just as an idiom, or did they have a particular paper trail purpose in mind when putting something in writing to the CEO?
"If you're going to go down, take people with you."
If there's an issue, you partner with your superiors to fix it. This is what the Insta CEO is trying to do (because he believes there's an issue). Having done that, yes, you leave a paper trail to cover your ass.
Is there actually strong evidence that social media harms teens' mental health?
From what I can find, the correlation between social media use and mental health problems is small [1] or nonexistent [2]. There are few causal studies, and their results are even smaller, eg: halting Facebook usage for a month decreased depression by 0.09 stdev [3].
[1] "We found a small but significant positive correlation (k=12 studies, r=.11, p<.01) between adolescent social media use and depressive symptoms."
[2] "There was no association between frequency of social media use and SITBs [self-injurious thoughts and behaviors] however, studies on this topic were limited."
[3] "A study using an experimental design measured the effects of abstaining from Facebook for four weeks in an adult population and found that there was a slight decrease (SD=0.09) in depression."
My first criticism would be that intervening only to restrict Facebook would likely just result in a substitution effect, with instagram, snapchat, youtube, reddit, etc filling the void. Likewise, if I remove cake from my diet for 4 weeks but make no restrictions on all other forms of sugary baked goods, I'm not likely to see the same magnitude of effect as I would've otherwise.
Here's an associational study that found almost 3x odds of having depression between the most and least frequent users of social media sites. This was among US adults aged 19-32, adjusted for age, sex, race, relationship status, living situation, household income, and education level.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4853817/
I know the media often cites Instagram's internal research saying "we make body image issues worse for 1 in 3 teen girls", but the actual stat is not too damning IMO: 'Among teen girls with body image issues, 32% said Instagram made it worse, 22% said Instagram made it better, and 46% said it had no impact' [1]
> Zuckerberg vetoed a 2019 proposal that would have disabled Instagram’s so-called “beauty filters,” a technology that digitally alters a user’s on-screen appearance and allegedly harms teens’ mental health by promoting unrealistic body image expectations, according to the unredacted version of the complaint filed this week by Massachusetts officials.
Focusing on beauty filters really undermines the narrative. If you are talking about ways that Meta harms children and the first thing you list is that they didn’t ban beauty filters, it actually makes Zuckerberg seem reasonable.
We don't know what else wasn't proposed or investigated to lessen these issues once the initiatives were all shut down. If even cursory work admits there's a problem, and your profit is in the billions, where you pay engineers to mostly sit on their ass - the onus is on you to reduce harm.
Rejecting all proposals indicates any harms of the platform are inbuilt fundamentally.
But… they didn’t do that? Filters became ubiquitous when they were put into Instagram?
Obviously the bad version of anything is always available at some level of effort, but also obviously “level of effort” is in fact the only deterrent that exists.
It is amazing how much internet bandwidth is wasted by stories like this... Expecting anything less than big tech favoring profits over EVERYTHING else is like expecting the Sun not to rise tomorrow.
Yet somehow each of these stories goes viral, and everyone seems "stunned": "How can this amazing human Zuck (the worst scum of the Earth, as most other big tech CEOs are) do this!? So surprising, given his impeccable record as a decent human being who cares about the well-being of our children!"
It is amazing how you jumped to "favoring profits" conclusion from this article, which doesn't even mention that the plans were rejected because there was no data supporting their effectiveness to any degree.
That seems super reasonable. However, I believe it has the opposite effect. Everyone with a shred of common sense gets numb to these stories, which effectively prepares us to be numb to the even more outrageous stories that come after.
There was a post on HN months back about testimony to Congress. I forget the details except for one graph. Clear and direct correlation between the rise in social media and teen mental health issues.
The number of teen girls that I’ve seen with cut marks is terrifying. The dramatic increase in mental health issues is shocking.
I know with my kids social media has made every one of their challenging teenage moments more difficult.
Before social media we had rap music glorifying criminal activity, 16 and pregnant and other assorted reality TV, before that we had old MTV and metal music that glorified hedonism, before that we had girl magazines with "impossible beauty standards", the pressure to wear make up, before that we had the sexual revolution... It seems that for every generation there has been something corrupting it. The older people get their panties in a wad and the younger people become more self destructive.
America just has a culture of pushing limits, and often those limits were there for a reason. Chesterton's fence and all that. All this social upheaval and people aren't happy. So much progress and everyone seems to be more and more miserable. So what is the solution? Ban Instagram filters? Make self-esteem our golden calf? Begin goose-stepping? I don't think anyone knows, and we are all just looking for someone to blame.
There is solid research out there suggesting that this time is different. Large n, several years, multiple countries, all pointing to an increase in step with the advent of visual social media (Instagram, TikTok; not Facebook or WhatsApp). The time-series analysis makes this more than merely correlational.
Everything is a different beast than what came before it.
Notice, I'm not saying the panty wringers were wrong about all the stuff that came before. I've come to terms with the fact that they were largely right. The corruption of each successive generation actually happened. What I'm saying is, it seems that this is just an artifact of culture rather than standalone negative impacts of new things. I don't know what to do with that information, but it is staring me in the face.
That would be worth something in a debate, where truth is the goal.
But this isn't a debate, this is a decision. The agent is Meta. The field is their products. The goal is reduced teen mental health issues. And the buck stops with Zuckerberg.
"Correlation does not prove causation" alone is not an excuse to avoid an action. It is a principle that can be used to compare actions for how likely they are to achieve the goal, and so it must be combined with some other action and evidence that makes that other action appear more likely to work. Without that, then the action suggested by the correlation remains the action most likely to work, and so it must still be taken.
If, for whatever reason, Zuckerberg doesn't want these changes to be made, then he needs to either dole out resources to gather the evidence he demands and which may free him from them, or he needs to implement some other action which may turn out to work.
If he doles out the resources, and the evidence comes back and establishes causation, then he has to make the changes. It sucks for him, but the goal isn't to please Zuckerberg, it's to reduce teen mental health issues.
If he doles out the resources, and the evidence comes back failing to establish causation, then that's convenient for him. But he still has to try something else, because the goal is reduced teen mental health issues and it has not been achieved.
If he goes straight to implementing some other action, and it at least appears to work, then he doesn't have to make the changes he didn't want to make. That's great for him, and since teen mental health issues were reduced, he can put it behind him.
If he goes straight to implementing some other action, and it doesn't work, then he still has to try something else, because the goal is reduced teen mental health issues and it has not been achieved.
Is there any evidence that other social networks like Snapchat, TikTok, and Twitter don't promote such behaviors? What about communication platforms like iMessage, Telegram, WhatsApp, SMS, etc.? Do we have evidence that the smartphone is not a net harm for tweens, teens, and adults in general? Why isn't the government doing large enough meta-studies on this? Is it the Meta empire, social media, smartphones, or the Internet itself that is responsible for mental health issues?
And why does the buck stop with Zuckerberg, and not parents, schools, or government? Why do under-13s have smartphones? Is their usage monitored? Should governments legislate how much a child's phone can be used for social media and video games? Should bullying be handled at the local level instead of by a corporation controlling everything?
For generations we’ve had this concern that technology leads to poor outcomes, Johnny just wastes his time watching TV/Gaming etc…
Still, parents, due to various factors had to let their kids be kids, and fend for themselves.
The kids didn’t seem to suffer massive mental health issues though, things pretty much worked out in the end.
Fast forward, parents may be more overworked, or may be addicted to the same devices we’re discussing here, so things may be slightly worse, but we’re seeing massive mental health challenges (not solely as a result of insta etc…).
Parents are ill-equipped to cope with this, from my experience; companies are weaponizing their apps to draw eyeballs with a level of competency never seen before. We have few defences, so it needs to be on the companies and governments.
The social sciences are not advanced enough to measure the harm caused by advances in technology. Social science is in its infancy and lacks systemic tools and viewpoints.
The legal use of dark patterns needs to be eliminated. This is the core issue.
This is a disgustingly bad look for Meta. I can't even begin to figure out how they are (read: Zuckerberg is) able to draw even the most tenuous of connections between _plastic surgery filters_ and profit. Nobody is going to stop using Meta products or looking at ads because they can't post selfies with artificially beautified versions of themselves.
TBF, is this not a result of being in competition with TikTok and Snap? I don't do social media but when I read about this issue I tend to think of those platforms first.
My understanding is because of things like this Facebook and Instagram were losing interest in the teen market (ie. 'legacy' social media), so it doesn't surprise me that they decided to incorporate similar functionality.
I'm not saying Zuck is a saint here, but this is more of an issue with social media being a corrosive force in general... the modern version of big tobacco in a way.
> TBF .... I'm not saying Zuck is a saint here, but this is more of an issue with social media being a corrosive force in general... the modern version of big tobacco in a way.
There's nothing "to be fair" about here; it's disgusting no matter how you frame it. Big tobacco did disgusting things to glamorize smoking and hook generations of people on nicotine. The core "issue" you're describing is a bad one: being unable to concede even the smallest sliver of your engagement metrics in the interest of not being a driving force behind children's body dysmorphia. It's just as bad as adding camera filters to let kids see themselves lighting up a Marlboro.
Well sure it's disgusting, the "TBF" was simply saying it's unreasonable to think someone at the helm of a publicly traded ship that has employed ethically dubious means for growth to suddenly take ethical stances that hurt said business.
At the corporate level it's really weighing optics vs features that drive engagement and revenue.
It's a really vague ask and the article doesn't explain the specifics of what constitutes "plastic surgery filters." Are you no longer allowed to have pics with those Snapchat-esque glowing faces or big eyes? Is the detection system able to differentiate between a filter and actual plastic surgery? It's also a bit of a can of worms if Meta no longer allows certain modifications to images of people on its platform "for the sake of the children."
Let's not pretend like young people chasing sex and beauty is some brand new concept only concocted in the last 15 years by the evil developers at Facebook.
> It's a really vague ask and the article doesn't explain the specifics of what constitutes "plastic surgery filters."
They had specific rules in place with a ban, then Zuck removed the ban. It's not like there wasn't a line in the sand here.
> It's also a bit of a can of worms if Meta no longer allows certain modifications to images of people on its platform "for the sake of the children."
That's not what's being discussed here. There's been no proposal to ban images. The proposal is to ban camera filters. Nothing would be stopping you from modifying and posting images. Nobody is having their freedoms taken away (and having Meta offer you certain kinds of camera filters by default is in no way a form of censorship).
> Let's not pretend like young people chasing sex and beauty is some brand new concept only concocted in the last 15 years by the evil developers at Facebook.
Let's also not pretend that Facebook doesn't advertise filters in Facebook and Instagram, pushing kids to use them. Fifteen years ago you could photoshop a picture. Today, kids are being _actively solicited_ by social media to use these filters.
Researchers are actively telling us that these are harmful to kids' self-images (we've known this for many years!). They accomplish no useful outcome, and the largest net effect is harm.
If a kid is experiencing dysphoria to the point that they abandon a social network [0] because they can't use built-in tools to make themselves look like they have had plastic surgery, it's even more disgusting to suggest that keeping the kid engaged and on the platform is the most valuable outcome. If you look at that situation and say "well _someone_ is going to ruin their mental health, it might as well be us" I really weep for our future.
[0] note: they're not switching, because kids are already using all of them
(For non-Italian people, it literally means "this piece of sh*". It's a rude yell at a person who is not behaving morally or ethically, or who is proven to have harmed other people, and is thus indicted as being a literal... you know.)
I have some hope: my niece referred to FB as 'legacy', and from talking to her, these kids are getting smart a lot quicker about the tactics used on these platforms and by the platforms themselves. They also seem very fluid and less prone to lock-in, unlike their parents and grandparents.
A sample size of my niece and her social circle isn't representative, but perhaps it represents a shift.
Where are these dreaded "beauty filters" in Instagram app? I don't remember any from back in the day when I still used it. Just checked again and it seems there are still bunch of typical "hipster colors" old-school Instagram filters, then some basic editing options.
You are missing 90% of the app. These are LIVE filters they are talking about. Not a post-processing photo effect.
Don't select a photo from your phone - that brings up the boring old-school photo editor mode.
You want to start a new post by clicking the "+" at the bottom and then swipe to the left or click on "Story" or "Live".
This gives you the advanced AI filters with live preview. They are created and distributed by third parties through a gallery/store like interface.
These are the type that say add dog ears to your head. Or detect when your mouth is open and makes a scary effect. Or can give you perfect skin and AI-upgrade you or turn you into an anime character.
These are what people are talking about. Not some laughable "color filter"
The genie is already out of the bottle, you simply cannot take away a feature like this that people have already gotten used to and not expect a backlash.
If the feature is removed, unattractive individuals will cry out that this will give genuinely attractive individuals a large advantage over them, widening the beauty gap.
Honestly, who cares? We have recently seen much more damning information come about Facebook's role in facilitating in the Rohingya genocide. This mental health shit pales in comparison.
The first Internet tycoon born of the Internet age. It's ironic how his oil and railroad predecessors escaped scrutiny by buying newspapers and donating money; Bezos is trying that playbook. But my bet is that we will see Zuck's true legacy unfold right now, and he won't be lionized in at all the same way. He doesn't seem so different from SBF in that he justified any action to get his way, from the very beginning, when he started Facebook to rank and spy on women.
There is no evidence that anyone actually needs FB/IG/Threads.
If Meta shut down tomorrow and the entirety of its assets, all the money, all the infrastructure, and the people, were repurposed to do science instead of reels, commercials, cat videos, selfies, and trending videos, that would be great.
I disagree with this. Even though I'm unhappy with Facebook/Meta and I believe the world would have been better off without the current iteration of their products, lots of small businesses depend on their platforms for their marketing/advertising needs and success in general. Their instant messaging platforms like Messenger/WhatsApp are essential communication tools used by literally millions of users. There is also an argument to be made that Facebook has allowed relationships/connections to be preserved that would've eventually ceased. Facebook/Instagram are terrible tools for creating new relationships or social networks though
I’m not a Facebook fan by any stretch of the imagination, but it’s true that Facebook is what a lot of small businesses rely on.
It’s a good platform for certain types of businesses to share information with a lot of people in a channel other than email. Think gyms, restaurants, salons, laundromats, etc.
And then there are people whose businesses run on Facebook. Not that it’s a common thing, but I actually pay a fitness instructor $25/mo to join daily group fitness classes that are available exclusively via Facebook Live.
If Facebook were to go away, the biggest missing piece up for grabs would be a common platform used by a lot of people to broadcast information to people who share things in common.
Small businesses marketed before the existence of Facebook, and people messaged before the existence of Facebook. They aren't doing anything special other than being the beneficiaries of the network-effect-du-jour.
>Their instant messaging platforms like Messenger/WhatsApp are essential communication tools used by literally millions of users.
As if in the absence of this parasitic monopoly an alternative couldn’t be spun up in a matter of days, maybe weeks, given modern frameworks and cloud infrastructure. Hell, they bought WhatsApp. Most of what they have done to it just made it worse or knocked off features from other platforms like Telegram.
Is this based on any sort of understanding of the economy behind FB/IG/Threads, or do you just not like them? Because like it or not, people's livelihoods are being run off those things these days, which amounts to people needing FB/IG/Threads.
How much of that economy actually matters? Who benefits from the transfer of products/services here? It's not the end users for the most part. Some people are able to benefit from it, but is that a large segment of any population? And is that something that we really feel is a necessary market?
I think we need to be able to take a look at ourselves and evaluate what's chaff as tech evolves.
Science can be a money pit, a waste of time, and a threat to health if not directed properly. Based on what is funded now, all that money would go to studying the effects of coffee, or whether red wine is good or bad for health, with some minor studies on how time changes affect sleeping habits, or screen time versus teen vaping.
What's the point of doing the science if we're not going to use the resulting output for things that aren't needs? Why have the needs at all if we don't get to have wants?
No offence but you live in an out of touch worldview. A majority of the world uses instagram/fb. Every single non techy adult I know uses instagram regularly to stay in touch with their social circle. Older generations use FB. Most people value social connection over “doing science”, believe it or not.
I wouldn't be surprised at all. I have several friends who were extremely early FB employees and the horror stories I've heard about his behavior, 2nd hand, if even 10% true are pretty damn horrible.
I'd rather not I don't think many other people were party to some of the things I heard them describe that they personally witnessed. It'd probably be very easy to figure out who they were. This is all from the 2008-2013 era.
There are tons of existing stories of partying, sex, and coke; spying on private profiles, etc. Would your stories add something new, or be along the same lines as what's out there?
I don't know--we're talking about the personal, moral character of the leader of this huge company. A direct quote from the guy, infamous as it is, is at least on-topic.
That may have been true the first thousand times it was repeated, but at a certain point these things become mechanical. That's not interesting in HN's sense of the word—and doubly so for the combo of mechanical+indignant.
I’m actually not surprised. Meta has a history of this because their core business is exploiting that psychology in our social graphs, young and old, and keeping us hooked. Improving mental health would run opposite to their goals, since they’d have to reduce usage, encourage users to take breaks, and promote better mental health.
I'm more surprised anybody still uses Facebook. They show me 1 post from someone I know, then an infinite stream of junk posts with tons of soft porn, like ads for Hooters or a coffee shop in Florida where the waitresses are in bikini.
I don't drink coffee and don't even live in the U.S. I have absolutely no idea why it wants me so bad to go to sexy coffee shops in Florida. I swear, the spam is worse than when they were trying to sell me Christian dating sites and make me buy Russian wives 15 years ago before they started personalizing ads.
The theme on HN of judging others for working at a company one doesn't agree with is tiring. The world is large, expansive, and interesting, with a range of hobbies and interests.
Unless you are a Nobel Peace Prize laureate or curing cancer, let's save the judgment.
And I would 100% work at Philip Morris if it meant feeding my family. Why shouldn't an adult be able to smoke a cigarette?
No amount of personal action is going to matter on that front; all that not working at morally dubious companies would net is a lower income for myself. Just as with climate change, the solution is collective action at the government/national level; anything else is virtue signaling.
C'mon now. I'm not trying to judge you. But every action each individual takes compounds into collective action.
If you think only groups and masses have significance, and individuals do not, then I'd be genuinely curious to hear the worldview & belief system behind this. Like what's the phenomenon behind groups coalescing and forming a critical mass?
If there are enough individuals that the group forms anyway, and being a member of the group is profitable, then any single individual who decides not to join just suffers the loss of that profit.
Only if enough individuals decide together to not do it - enough to not allow the group to form - does it change the math. But unless there is a clear group of folks doing that, communicating that, and co-ordinating with that goal - no individual has any way of determining if they would ‘tip the scales’ or not. So it’s very unlikely to happen ‘naturally’ once a certain degree of momentum has happened.
A type of social prisoner's dilemma, perhaps?
And if you’re noticing this also applied to the Nazi party, you’re not wrong.
Sounds like in either case, someone has to take a stance, and hold the line. And such folks might not have individual profitability that involves going along with the group as the highest order.
The challenge I think is that for any individual the power/information dynamic is vastly against them.
Taken to an extreme, one just ends up poor and ostracized (best case!) and made no difference at all.
Any individual in Russia trying to stop the machine that attacked Ukraine would have accomplished nothing but their own destruction. If all the folks against it had taken action, it would have been stopped.
Which is why coordination to do that was, is, and will continue to be illegal and targeted by extensive government security apparatuses.
Not to say Meta is the Russian gov’t, but rather that for any potential employee, their incentive is always to get paid the most and go with the flow, and hope nothing too bad happens.
Taking idealogical stances can be high risk, low reward.
> [...] all not working at companies that are morally dubious would net is a lower income for myself. [...]
Eh, everyone has a price. The question is just how much.
You pretend that your price is zero or an epsilon away from that. I don't believe that. I don't think you would pick a job offer from a more 'evil' company, if they offered a hundred bucks more.
(I can believe that your price is pretty low compared to other people. But I don't think it's zero.)
Why would he care? He's like a greasy used car salesman who is there to take advantage of your gullibility. Don't see him as anything other than a conman who got lucky being in the right place at the right time.