Creating regulations like this may end up acting as a regulatory moat for existing social network companies. This probably makes the overall social media pie smaller, but it guarantees the incumbents' slice of the pie.
We've seen this happen in other highly regulated areas such as healthcare, telecommunications, etc.
I suspect the way things are trending, in 10 years, very few entrepreneurs would be willing to start a new social media company.
Clearly the regulators see this as a potential downside, which is why the $100 million in gross revenue provision was added.
On the other hand, we want to prevent new companies forming on the premise they can make money from gamifying their product so much it becomes addictive. Now, addictiveness has to have some definition to be useful. It can't hinge on some odd random user who suffers from a condition that prevents them from self-regulating their particular addiction while, for example, other people don't get addicted at all. There have to be some thresholds and milestones which indicate a service is "addictive", and then we regulate against that.
I think when companies consult with behavioral (and other) psychologists in order to craft better ways to wring more engagement out of their users, those companies are intentionally making their products addictive. I don't see it as being much different from exploiting the addictive potential of new nicotine salts, or exploiting additives to cigarettes to do the same.
It can be spun however anyone wants to spin it, but at the end of the day, that's exploiting biology to subvert the will of others. And yes, I know this can describe advertising, as well.
Agreed. This is why the "metaverse" needs oversight as well. It will be a new frontier ripe for gamification, exploitation and general behavioral control.
So, just {Facebook} has to have a timer at the top? Above the fold? Indicating cumulative time spent? What about Amazon?
For context here, there are many existing methods for limiting one's access to certain DNS domains and applications at the client (a rough sketch of one such approach follows the list). A reasonable person can:
- Install a browser extension for limiting time spent by domain
- Buy a router that can limit access by DNS domain and time of day (with regard for the essential communications of others, who could bypass default DNS blocking with a VPN anyway, and knowing the blocking can be expected to regularly fail)
- Install an app (with access to the OS process space of other programs in order to restrict them) to limit time spent by application and/or DNS domain
-- Enable "Focus Mode" in Android; with the "Digital Wellbeing" application
-- Enable "Screen Time" in iOS; and also report on and limit usage of apps [and also to limit access to DNS domains, you'd also need required integration with a required browser]
You can install just an IM (Instant Messaging) app, and only then learn strategies for focusing amidst information overload and alert fatigue.
Some users do manage brands and provide customer service and support through channels that aren't blocked, but that can't be blocked without blocking essential communications, while managing their health at a desk all day. Some places literally require you to check your mobile device at the door. What should the default timer above the fold on just Facebook be set to?
> How are they supposed to know that you don't want to be in the store anymore?
Is the store - who you are not paying - 1) a business of public accommodation; 2) obligated to provide such detection and ejection services; or 3) required to provide service?
You can't cut them off; they're Donny (who the TOS don't apply to). You must cut them off; they can't help themselves. You are responsible for my behavior!
Best of luck suing the game publisher for optimizing the game, suing the author of a book you got for FREE for your time lost, or suing the bar that you chose to frequent; do you think they're required to provide service to you? Did they prevent you from leaving? Did they prevent you from choosing to do something else, like entering a different URL in your address bar at will?
You should instead pay someone to provide the hypothetical service you're demanding fascist control over. CA is not a shareholder; this new policy will be challenged, and the state must apply said policy to all other businesses equally: you may not saddle just {business you're trying to illegally dom} with an obligation to put a countdown or count-up timer above the fold because they keep taking so much of your time.
EDIT: I just can't f'ing believe they thought that only {Facebook} would have to check real-name IDs at the door, run a stopwatch for each user with a clock above the fold, profile your mental health status, allow Donny to keep harassing just whoever, and tell you when it's time to go because you can't bring yourself to leave the store they continued to optimize.
> we want to prevent new companies forming on the premise they can make money from gamifying their product so much it becomes addictive
Not if the existing companies are still allowed to do exactly that, or it's just called regulatory capture and serves the interests of Reddit/Twitter/Facebook/TikTok and co. Either it's illegal for everybody to exploit mental illness for profit or it's legal for everybody.
It's possible for an adult to enjoy these strongly addictive products responsibly, probably up until they figure out a way to audiovisually create heroin in your brain.
That's exactly why Facebook etc. are fully on board with other such initiatives, like fines for hate speech. They want it to be prohibitively expensive to dethrone them.
If this is a concern, lobby for applying the law to companies or social networks over set sizes. Tangential laws are proposed in the EU that apply to companies with market caps over ~$80 billion, for example[1].
If the EU were making a directive on this topic, it would not differentiate between company sizes because it thinks that company size has a relation to how addictive the product is.
The reason why it would differentiate on size is precisely the reasoning you presented in your previous post: because it doesn't want to create moats around big existing players on the market. If you look at the act the GP linked to, you'll see that one of its purposes is to tear down a non-regulatory moat that those big existing players have.
And because a small company, without a doubt, has less potential for harm on a societal level than a big company.
> I suspect the way things are trending, in 10 years, very few entrepreneurs would be willing to start a new social media company.
I suspect that future social media will be even worse than Twitter/TikTok/Reddit and co, just like TikTok is even more unhinged than Facebook or Twitter. Social media is entertainment, and entertainment is always pushing the limits of what is morally acceptable to broadcast, because it's profitable and brings in more eyeballs. It's for exactly the same reason that reality TV perverted television even further, turning TV into some voyeuristic dystopia. That's what social media is.
> very few entrepreneurs would be willing to start a new social media company.
I’m not sure this is a bad thing. Is the world better and happier from social media? Do we want it to keep rising from the ashes and sowing misery, ignorance, and depression? Maybe it is my nostalgia glasses but I feel like everyone I know was happier when the only way to communicate long distance for the average person was a rotary phone that weighed about 20 lbs.
Yes, it's a very bad thing. There are a lot of different types of social media. How about projects with Discourse forums? That could be considered social media. Do you want to force them onto Facebook?
> The proposal would only apply to social media companies that had at least $100 million in gross revenue in the past year
And also:
> Also, companies that conduct regular audits of their practices to identify and remove features that could be addictive to children would be immune from lawsuits.
...if a company is earning $100 million per year from their social media product, I think I'm kind of okay with demanding they spend a person-day every couple of months verifying that they're not adding features classed as addictive to children.
I'm not really seeing where your outrage is coming from, here.
Self-audits never work, which is why there are regulatory agencies everywhere to 'double check' a company's work when ensuring legal compliance.
The real issue with this bill is that "addiction" isn't the same for everyone. The Twitter "trends" paradigm itself could be considered addictive if you really stretched it: maybe a 15-year-old looks at it every waking hour of the day and goes through it all looking for tweets to reply to? Since the law doesn't only apply in a class-action sense, any parent could bring this to court with their anecdote, and the court's main argument would be over whether that specific feature is 'addictive', not whether it is "addictive to all children".
For reference:
> An operator of a social media platform shall be found to have violated their duty if the social media platform is found to have addicted a child user by either of the following means:
> (1) The use or sale of a child user’s personal data.
> (2) The development, design, implementation, or maintenance of a design, feature, or affordance.
Quite literally anything could be grounds for the $25k + 2x attorney's fees in damages.
They send notifications to your phone whenever someone messages you. Those unread icons are addictive; at least for me, I always have to actively think to not open them immediately.
That’s just the natural outcome of blurring chat rooms with social media.
That still applies for non-social media. My iOS IRC client connected to my znc bouncer also shows notifications. Same for iMessage which is more profile-like, since you can send memoji and stickers and play games with your friends (eg. GamePigeon), which is why the bill tries to exclude 'chat' apps and email apps.
How much different is IRC/Discourse vs Twitter? Hashtags are effectively channels. Twitter just has more subscribers and more individuals in each channel.
The question isn't whether you think they have an addictive algorithm, the question is whether a lawyer can convince a jury that they fit the law's definition of one.
I think the person you responded to was referring to Discourse[1] forums, not Discord[2]. Although I agree with you and the gp, both Discourse and Discord have very different incentives than traditional social media FB, IG, etc.
Preventing competition in social media won't eliminate it - it will prevent innovative new entrants from solving its issues, while allowing the status quo to continue and likely deteriorate further.
> Is the world better and happier from social media?
Yes, absolutely. I actually lived in the rotary phone era and it was fucking lonely. It's much easier to stay connected with friends now. That's especially true now that I've moved out of the city and started a family.