Serious question: What exactly do you want to see done? I mean real specifics, not just the angry mob pitchfork calls for corporate death penalty or throwing Mark Zuckerberg in jail.
Amend Section 230 so that it does not apply to content that is served algorithmically. Social media companies can either allow us to select what content we want to see by giving us a chronological feed of the people/topics we follow or they can serve us content according to some algorithm designed to keep us on their platform longer. The former is neutral and deserves protection, but the latter is editorial. Once they take on that editorial role of deciding what content we see, they should become liable for the content they put in front of us.
So Hacker News should lose section 230 protection?
Because the content served here isn't served in chronological order. The front page takes votes into account and displays hotter posts higher in the feed.
Technically sorting by timestamp is an "algorithm" too, so I was just speaking informally rather than drafting the exact language of a piece of legislation. I would define the categories as something like algorithms determined by direct proactive user decisions (following, upvoting, etc) versus algorithms that are determined by other factors (views, watch time, behavior by similar users, etc). Basically it should always be clear why you're being served what you're being served, either because the user chose to see it or because everyone is seeing it. No more nebulous black box algorithms that give every user an experience individually designed to keep them on the platform.
This will still impact HN because of stuff like the flame war downranker they use here. However, that doesn't automatically mean HN loses Section 230 protection. HN could respond by simplifying its ranking algorithm to maintain 230 protections.
I think the best way to put it is: users with the same user-picked settings should see the same things, in the same order.
That's a given on Hacker News, as there's only one front page. On Reddit that would be: users subscribed to the same subreddits would always see the same things on their front pages. Same for users on YouTube subscribed to the same channels, users on Facebook who liked the same pages, and so on.
The real problem starts when the algorithm takes into account implicit user actions. E.g., two users are subscribed to the same channels, and both click on the same video. User A watches the whole video, user B leaves halfway through. If the algorithm takes that into account, now user A will see different suggestions than user B.
That's what gets the ball rolling toward hyper-specialized endless feeds, which tend to push you into extremes, as small signals end up being amplified without the user ever taking an explicit action beyond clicking (or not clicking) suggestions in the feed.
As long as every signal the algorithm takes into account is either a global state (user votes, total watch time, etc), or something the user explicitly and proactively has stated is their preference, I think that would be enough to curb most of the problems with algorithmic feeds.
Users could still manually configure feeds that provide hyper personalized, hyper specific, and hyper addictive content. But I bet the vast majority of users would never go beyond picking 1 specific sport, 2 personal hobbies and 3 genres of music they're interested in and calling it a day. Really, most would probably never even go that far. That's the reason platforms all converged on using those implicit signals, after all: they work much better than the user's explicit signals (if your ultimate goal is maximizing user retention/addiction, and you don't care at all about the collateral damage resulting from that).
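To make that distinction concrete, here's a minimal sketch (purely hypothetical names, not any platform's real API) of a ranker that consumes only global state plus explicit follows:

    # Hypothetical sketch: rank on global state (votes, age) plus
    # explicitly followed topics only. No watch time, hovers, or
    # scroll speed enter, so two users with the same follows always
    # get the same feed in the same order.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Post:
        post_id: str
        topic: str
        total_votes: int   # global state, identical for every viewer
        age_hours: float   # global state

    def rank_feed(posts, followed_topics):
        visible = [p for p in posts if p.topic in followed_topics]
        return sorted(visible, key=lambda p: (-p.total_votes, p.age_hours))

Run it with the same followed_topics and the output is identical for every user, which is exactly the reproducibility property described above.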
But Meta's content ranking would conform to this too: in theory, a user who had the exact same friends, is a member of the exact same groups, had the exact same watch history, etc. would be served the same content. I'm pretty sure there's at least some degree of randomization, but putting that aside, it remains unclear how you're constructing a set of criteria that spares Hacker News, and plenty of other sites, but not Meta.
Even that, I don't think is entirely true. I'm pretty sure they use signals as implicit as how long you took to scroll past an autoplaying video, or if you even hovered your mouse pointer over the video but ultimately didn't click on it.
Same with friends: even if you have the exact same friends, if you message friend A more than friend B, and this otherwise identical account does the opposite, then the recommendation engine will give you different friend-related suggestions.
Then there's geolocation data, connection type/speeds, OS and browser type, account name (which, if they are real names such as on Facebook, can be used to infer age, race, etc), and many others, which can also be taken into account for further tailoring suggestions.
You can say that, oh, some automated system that sent the exact same signals on all these fronts would end up with the same recommendations, which I guess is probably true, but it's not a reasonable standard. No two (human) users would ever be able to achieve such a state for any extended period of time.
That's why we are arguing that only explicit individual actions should be allowed into these systems. You can maybe argue about what should count as an explicit action. You mention adding friends; I don't think that should count as an explicit action for changing your content feed, but I can see that being debated.
Maybe the ultimate solution could be legislation requiring that any action that influences recommendation engines be explicitly labeled as such (similar to how advertising needs to be labeled), and maybe requiring at least a confirmation prompt, instead of it working with a single click. Then platforms would be incentivized to ask as little as possible, as otherwise confirming every single action would become a bit vexing.
People also slow down to look at the flipped car on the side of the road. Doesn't mean you want to see more flipped cars down the road.
Either way. Do you have any points other than that you think any and every action, no matter how small, is explicit, and therefore it's OK for it to be fed into the recommendation engine? Cause that's an OK position to have, even if one I disagree with. But if that's all, I think that's as far as this conversation needs to go. If there's any nuance I'm failing to get, or you have comments on other points I raised, such as labeling of recommendation-altering actions, I'm happy to hear it.
I'm mostly interested in getting concrete answers as to what people mean when they talk about "algorithmically served" content. This kind of phrasing is thrown around a lot, I'm still unsure what people are referring to by it, and I've rarely found anyone proposing fleshed-out ideas for how to define "algorithmically served content".
Some people take the stance that even using view counts as part of ranking should result in a company losing Section 230 protections, e.g. https://news.ycombinator.com/item?id=46027529
You proposed an interesting framing around reproducibility of content ranking, as in: two users who have the exact same watch history, liked posts, group memberships, etc. should have the same content served to them. But in subsequent responses it sounds like reproducibility isn't enough; certain actions shouldn't be used for recommendation even if they are entirely reproducible. My reading is that in your model there are "small" actions users take that shouldn't be used for recommendations, and presumably there are also "big" actions that are okay to use. If that's the case, then which user actions would you permit to be used for recommendations and which would you not? Where is the line between "small" and "big" actions?
As I pointed out, I agree that defining what should be deemed acceptable and what shouldn't is a bit subjective, and can definitely be debated. Reasonable people can disagree here, for sure.
That's why I proposed that maybe the solution is:
1. only explicit actions are considered. A click, a tap, an interaction, but not just viewing, hovering, or scrolling past. That's an objective distinction that we already have a legal framework for. You always have to explicitly mark the "I accept the terms and conditions" box, for example. It can't be the default, and you can't have a system where just by entering the website it is considered that you accepted the terms.
2. explicit labeling and confirmation of which actions alter the suggestion algorithm and which don't (see the sketch below). And I mean in-band, visible labeling right there in the UI, not a separate page like that Meta link. Click the "Subscribe" button, you get a confirmation popup: "Subscribing will make it so that this content appears in your feed. Confirm/Cancel". Any personalized input into the suggestion algorithm should be labeled as such. So companies can use any inputs they see fit, but the user must explicitly give them these inputs, and the platforms will be incentivized to keep this number as low as possible, as, in the limit, having to confirm every single interaction would be annoying and drive users away. Imagine if every time you clicked on a video, YouTube prompted you to confirm that viewing that video would alter future suggestions.
I'm ok with global state being fed into the algorithm by default. Total watch time/votes/comments/whatever. My main problem is with hyper personalized, targeted, self reinforcing feeds.
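As a rough sketch of what point 2 could look like mechanically (all names hypothetical, not any platform's real API), a per-user signal would only reach the recommender after an in-band, labeled confirmation, while global aggregates pass through by default:

    # Hypothetical sketch of the confirmation gate in point 2 above.
    GLOBAL_SIGNALS = {"view_count", "total_watch_time"}  # global state

    def record_signal(store, user_id, name, value, user_confirmed=False):
        if name in GLOBAL_SIGNALS:
            store.append((None, name, value))    # aggregate, not per-user
        elif user_confirmed:
            # the user accepted an in-band prompt such as "Subscribing
            # will make this content appear in your feed. Confirm/Cancel"
            store.append((user_id, name, value))
        # otherwise: implicit signals (hover, partial watch, scroll
        # speed) are dropped and never personalize anything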
So under this regime Meta, or any other social media site, can build pretty much any recommendation system they want, so long as they have a UI cluttered with labels and confirmation prompts disclosing that liking someone, joining a group, or adding a friend will affect your feed and recommendations.
> Imagine if every time you clicked on a video, YouTube prompted you to confirm that viewing that video would alter future suggestions.
In practice, I suspect this will make nearly every online interaction - posting a comment, viewing a video, liking a post, etc. - come with a confirmation prompt telling the user that the action will affect their recommendations, and pretty quickly users will just hit confirm instinctively.
E.g. when viewing a YouTube video, users often have to watch 3-5 seconds of an ad, then click "skip ad", before proceeding. Adding a 2nd button, "I acknowledge that this will affect my recommendations", is actually a pretty low barrier compared to the interactions already required of the user.
The end result: a web with roughly the same recommendation systems, just with the extra enshittification of the now-mandated confirmation prompts.
I really do think that would be annoying enough to snap a good number of people out of the mindless autoscrolling loop. There's a reason companies love 1-click buying, for example. At scale, any extra interaction costs real money. The ad example is a good one where there's already a kind of high-friction interaction, so one extra click is not that much more annoying, but that's not the case for the vast majority of interactions.
Granted, some people will certainly have a higher tolerance for this kind of enshittification. If companies find that the amount of money they can extract from highly targeting a given amount of users is greater than the amount of money they can make from more numerous but less targeted users, then they could choose to go down that path. That's a function of how tolerant the average user is of the confirmation prompts, and how much more money they can make from a targeted user.
We can't control that last variable, but we ultimately could control that first one. If we find that a simple confirmation prompt is not annoying enough for as many people as we'd like, we could make the confirmation prompts more annoying. Maybe make every confirmation prompt have to be shown for at least 5 seconds. Or require a cooldown between multiple confirmations. Or add captcha like challenges. And so on.
In the limit, I think you'd agree that if you had to wait 24 hours before confirming, that would probably be enough to dissuade almost everyone from going through with it, to the point most platforms would try to not have any personalization at all. (I wouldn't be happy with this end result either)
I think even a single, instant confirmation prompt would be enough to cause a sizeable difference. Maybe not enough. Maybe you're right, and it would make barely any difference at all. Then I'd be totally in favor of these more annoying requirements. But as a first step, I'd be happy with a small requirement like this, and progressively making the requirements more stringent if it proves insufficient.
> Doesn't mean you want to see more flipped cars down the road.
It absolutely does mean that, seeing as how everybody wants to see the flipped car on the side of the road. The local news reports on the car flipped on the side of the road, and not the boring city council meeting, for a reason.
That's a mindboggling take, to be honest, to the point I can't help but suspect that you're being contrarian just for the sake of it. I'm absolutely sure that you, yourself, have gone by some terrible scene which you couldn't help but stare at for at least a bit, which you would not classify as something you would like to see more of.
There's a huge difference between something people want to see and something people can't ignore. There is some intersection between those categories, but they are by no means one and the same. And news reports, headlines, thumbnails et al. optimize for the latter, not the former.
Watching the content that is being served to you is a passive decision. It's totally different from clicking a button that says you want to see specific content in the future. You show me something that enrages me, I might watch it, but I'll never click a button saying "show me more stuff that enrages me". It's the platform taking advantage of human psychology and that is a huge part of what I want to stop.
>it remains unclear how you're constructing a set of criteria that spares Hacker News, and plenty of other sites, but not Meta.
I already said "This will still impact HN because of stuff like the flame war downranker...". I don't know why this comment and your reply to my other comment seem to imply that I think HN is perfect and untouchable. My proposal would force HN to make a choice about whether to change or lose 230 protections. I'm fine with that.
It's still unclear what choice Hacker News and other sites will have to make to retain section 230 protection in your proposed solution.
Again, something like counting the number of views on a video is, in your framing, not an active choice on the part of the user. So simply counting views and floating popular content to the top of a page sounds like it'd trigger a loss of Section 230 protections.
You're making me repeat myself multiple times now. I don't know what else I can say. HN would need to rank posts by a combination of upvotes and chronology. That is how they "float popular content to the top". You don't need passive metrics like views to do that.
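For what it's worth, the widely circulated (unofficial) approximation of HN's ranking is already close to that: score = (points - 1) / (age_hours + 2)^1.8, which uses nothing but upvotes and age. A toy version with made-up data:

    # Toy ranker in the spirit of the unofficial HN approximation.
    # Upvotes and age are both global state, so every visitor sees
    # the same ordering.
    def hot_score(points, age_hours, gravity=1.8):
        return (points - 1) / ((age_hours + 2) ** gravity)

    posts = [("A", 120, 5.0), ("B", 40, 1.0), ("C", 300, 48.0)]
    front_page = sorted(posts, key=lambda p: hot_score(p[1], p[2]), reverse=True)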
> I think the best way to put it is: users with the same user-picked settings should see the same things, in the same order. That's a given on Hacker News, as there's only one front page.
Are you sure? The algorithm isn't public, but putting a tiny fraction of "nearly ready for the frontpage" posts on the front page for randomly selected users would be a good way to get more votes on them without subjecting everyone to /new
That's a good point. As I pointed out, I'm OK with global state (total votes, how recent a post is, etc). Randomness could be thought of as a kind of global state, even if it's not reproducible. As long as it's truly random, and not something where user A is more likely to see it than user B for any reason, then I'm fine with it.
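That kind of "randomness as global state" is easy to express: seed the draw on the post and a time bucket, never on the user, so that in any given minute everyone sees the same randomly boosted posts. A hypothetical sketch:

    # Hypothetical sketch: per-post, per-minute randomness. The user's
    # identity never enters the draw, so user A and user B are equally
    # likely to see any given boosted post.
    import random
    import time

    def boosted_this_minute(post_id, boost_rate=0.01):
        minute = int(time.time() // 60)
        rng = random.Random(f"{post_id}:{minute}")  # seeded on post + time only
        return rng.random() < boost_rate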
Another possibility would be to require publishing the algorithm and providing some kind of "under the hood" view that reveals to people what determined what they're seeing. Part of the issue currently is that everything is opaque. If Facebook could not change their algorithm without some kind of public registration process, well, it might not make things better, but it might make them get worse a bit more slowly.
So a simple "most viewed in last month" page would trigger a loss of protection? Because that ranking is determined by number of views, rather than a proactive user decision like upvoting.
>So a simple "most viewed in last month" page would trigger a loss of protection?
The key word there is "page". I have no problem with news.ycombinator.com/active, but that is a page that a user must proactively seek out. It's not the default or even possible to make it the default. Every time a user visits it, it is because they decided to visit it. The page is also the same for everyone who visits it.
To be clear, even the front page of Hacker News is not just a simple question of upvotes. Views, comments, time since posting, political content downranking, etc. all factor into the ordering of posts.
This is an unpopular opinion here, but I think in general the whole "immunity for third-party content" thing in 230 was a big mistake overall. If you're a web site that exercises editorial control over the content you publish (such as moderating, manually curating, algorithmically curating, demoting or promoting individual contents, and so on), then you have already shown that you are the ones controlling the content that gets published, not end users. So you should take responsibility for what you publish. You shouldn't be able to hide behind "But it was a third party end user who gave it to me!" You've shown (by your moderation practices) that you are the final say in what gets posted, not your users. So you should stand behind the content that you are specifically allowing.
If a web site makes a good faith effort to moderate things away that could get them in trouble, then they shouldn't get in trouble. And if they have a policy of not moderating or curating, then they should be treated like a dumb pipe, like an ISP. They shouldn't be able to have their cake (exercise editorial control) and eat it too (enjoy liability protection over what they publish).
Moderating and ranking content is distinct from editorial control. Editorial control refers to editing the actual contents of posts. Sites that exercise editorial control are liable for their edits. For instance if a user posts "Joe Smith is not a criminal" and the website operators delete the word "not", then the company can be held liable for defaming Joe Smith. https://en.wikipedia.org/wiki/Section_230#Application_and_li...
I’d go farther and say that any content presented to the public should be exempt from protection. If it’s between individuals (like email) then the email provider is a dumb pipe. If it’s a post on a public website the owner of the site should be ultimately responsible for it. Yes that means reviewing everything on your site before publishing it. This is what publishers in the age of print have always had to do.
The thing that's missing is the difference between unsolicited external content (i.e. pay-for-play stuff) and directly user-supplied content.
If you're doing editorial decisions, you should be treated like a syndicator. Yep, that means vetting the ads you show, paid propaganda that you accept to publish, and generally having legal and financial liability for the outcomes.
User-supplied content needs moderation too, but there you have to apply different standards. Prefiltering what someone else can post on your platform makes you a censor. You have to do some of it to prevent your system from becoming a Nazi bar or an abuse demo reel, but beyond that the users themselves should be allowed to say what they want to see and in what order of preference. Section 230 needs to protect the latter.
The thing I would have liked to see a long time ago is for the platforms/syndicators to have an obligation to notify any users who have been subjected to influence operations. Whether that's political pestering, black propaganda, or even an out-and-out "classic" advertising campaign should make no difference.
Can’t you just limit scope for section 230 by revenue or users?
E.g. it only applies to companies with revenue <$10m. Or services with <10,000 active users. This allows blogs and small forums to continue as is, but once you’re making meaningful money or have a meaningful user base you become responsible for what you’re publishing.
I think the biggest problem is when we're all served a uniquely personalized feed. Everyone on Hacker News gets the same front page, but on Facebook users get one specifically tailored to them.
If Hacker News filled their front page with hate speech and self-harm tutorials there would be public outcry. But Facebook can serve that to people on their timeline and no one bats an eye, because Facebook can algorithmically serve that content only to people who engage with it.
Most sites that accept user-generated-content are forced to do some level of moderation, lest they become a cesspit of one form or another (CSAM, threats, hate speech, exposing porn to underage users, stolen credit card sales, etc...)
That’s the first reasonable take I’ve seen on this. Thanks for explaining it, I will use it for offline discussions on the subject. It’s been hard to explain.
Yeah, I wonder if the rules should basically state something like: everything must be topical, and you must opt in to certain topics (adult, politics, etc). People can request recommendations, but they must be requested; no accidental pro-ana content. If you want to allow hate speech, fine, but people have to opt in to every offensive category/slur explicitly. (We can call it "potentially divisive" for all the "one person's hate speech is another person's love rap" folks, or whatever.)
Go after the specific companies and executives you believe are doing wrong. Blanket regulations raise costs for smaller competitors and end up entrenching giants like Meta, Google, and Apple because they can afford compliance while smaller competitors can’t. These rules are a big reason the largest firms are more dominant than ever and have become, effectively, monopolies in their markets. And the irony is that many of these regulations are influenced or supported by the big companies themselves, since a small "investment" in shaping the rules helps them secure even more market share.
I think with the harm that these companies are doing, the angry pitchfork mobs are a serious suggestion and not just hyperbole anymore
Keep in mind that not very long ago some random person assassinated an insurance CEO and many people's reaction was along the lines of "awesome, that fat cat got what he deserved"
Don't underestimate how much of society absolutely loathes the upper class right now.
I would bet that many people are one layoff away from calling for execs to get much worse than jail
If my dog bites somebody, I'm on the hook. It should be no different with companies.
We have to create incentives to not invest in troublesome companies. Fines are inadequate, they incentivize buying shares in troublesome companies and then selling them before the harm comes to light.
You just go after the top four or five. It's not about proportional punishment, but about ensuring that those with enough power to actually affect outcomes feel a sense of responsibility over those outcomes whether or not they later divest.
Blindly letting a CEO commit crimes should itself be a crime, but only if there's something you could've done to prevent it--that's not most shareholders.
I don't really get why corporate death penalty and Zuck in jail is not a good idea. It might not be the best idea, but I think it would absolutely be better than what we have now. Even a random-chainsaw-esque destruction of Facebook, Google, Amazon, and Apple would be better than what we have now.
This is what I meant by angry mob pitchfork ideas. This isn’t a real idea, it’s just rage venting.
It’s also wrong, as anyone familiar with the problems in pay-to-play social video games for kids, which are not ad supported, can tell you. These platforms have just as many problems if not more, yet advertising has nothing to do with it. I bet you could charge $10/month for Instagram and the same social problems would exist. It’s a silly suggestion.
literally the opposite of a pitchfork idea; quite simple, relatively easy to implement, and immediately effective. incentives from advertising are the underlying issue with the addictive nature of these platforms (and much more)
The mere fact that commenters think banning advertising is a simple and realistic idea, without any constitutional road blocks or practical objections, is what I mean when I say these comment sections are just angry bloviating with unrealistic expectations.
If you think banning all advertising is “simple” then I don’t know what to say, but there isn’t a real conversation here.
so is it a pitchfork idea? I want Mark’s head? or it’s impractical? you’ve changed your objection to my idea twice in two comments
constitutional roadblock…to banning digital advertisement? please do explain!
I didn’t claim it’s easy to get it done in the real world, but it’s not a reactive/vindictive pitchfork idea. it’s really not that hard, if people wanted it we’ve banned plenty of things at the federal level in this country over the years (the hard part is of course people realizing how detrimental digital advertising is)
it’s a simple solution that’s very effective. obviously any large-scale change, to fix a large-scale problem, is not “simple” to implement, but it’s also not fucking rocket science on this one mate
you’re clearly not having a conversation in good faith. you asked, I answered, I’m done with this
What constitutes an advertisement is not a simple proposition. E.g., is a paragraph describing some facts (phrased carefully) about a product or company an advertisement?
The degree to which speech would have to be controlled to enforce this is unthinkable. While some handwaving is necessary, as anyone can agree (since even the simplest legislation would be corrupted by the US political class), "banning advertising" is not a practical goal.
it’s quite a simple definition of what is or is not an advertisement. run it through real world examples, it’s trivial to say whether something is or isn’t an advertisement
as with any broad regulation there would be grey areas, continued cat and mouse games with bad actors, etc.
but it is not a remotely insurmountable obstacle to define what is and is not advertisement in relation to free speech
(as an aside it’s really funny to me anyone would consider being paid to say something free speech, but I get it)
you’re just doing ad hominems and strawmans. I’m not suggesting banning anything other than digital advertisement. you’re not open to having a productive discussion about it, just misdirection and whataboutism
please stop ascribing intent I do not have and words I did not say in your juvenile attempt to win an argument
p.s. still would love to hear your constitutional argument against it! banning digital advertisement at the federal level is not unrealistic and if you've actually given it the thought you’re pretending to and still reach that conclusion, I do have an ad hominem to throw back at you
> p.s. still would love to hear your constitutional argument against it!
You don’t need to hear my argument against it. The fact that advertising your services is free speech is well established. It’s a major challenge for movements like those trying to tackle pharmaceutical advertising.
Also, if you can’t see how I’ve been addressing your arguments and you think it’s all ad hominem then I don’t think there’s any real conversation to be had here. Between all the downvotes you’re collecting and the weird attempts to ignore everything I say and pretend it’s ad hominem as a defensive tactic, this is pure trolling at this point.
1) downvotes: you’re the one insinuating HN commenters (and presumably voters) are idiots; I’m not sure that I should care if I’m downvoted while correct. and regardless, doesn’t seem like I’m very downvoted (rather the opposite) so not sure what your point is. try making one next time!
2) freedom of speech: lol! I just want to point out I had no fucking clue that’s what you were angling for before. rather than launch into attacks as you do, I actually try to understand things. this argument doesn’t concern me at all, I was worried I wasn’t aware of something in the constitution you’d brilliantly raise
we are beyond having a conversation at this point, but if you actually raised your arguments against banning digital advertisement (freedom of speech and ??? solving real-world problems is hard?) I would have debated them on their merits, you troll
Just FYI. For a very long time, strong alcohol ads were banned on TV, and the same with tobacco.
I don't watch regular TV, anymore, so I don't know if it still is in place.
Mentioning "banning advertising" on HN is bound to draw downvotes. A significant number of HN members make money directly, or indirectly, from digital advertising.
It's like walking into a mosque, and demanding they allow drinking.
There's a large difference between banning strong alcohol ads, and instantly collapsing a whole huge advertisement economy (that indirectly funds most of the free things people take for granted).
Either I misunderstand something or I'm baffled how anyone can consider that easy.
Not easy per se, but definitely doable. It's a relatively new economy, there's no blood oath anywhere saying we have to allow it.
We've banned literally all tobacco ads and it's... fine. I mean, not for the tobacco companies, but who cares?
I'm not gonna advocate making the world worse so some people stay employed. That's so counter productive. Who knows - maybe in a less shitty world, new jobs will emerge!
In this case, the suggestion of banning advertising is drawing downvotes from me because I see it as politically unrealistic.
At least in my state, there isn’t even a ban on advertising online gambling!! It is quite a stretch to think we could move from there to banning any kind of advertising.
It has nothing to do with the fact that a bunch of HN readers make money from ads. I don’t.
Somewhat meta question, do you believe that down voting opinions we don't like is a good way of engaging with one another on HN?
I wish we could discuss the issue here. I would have liked to hear from you why you think it is a politically unrealistic proposal, and what your criteria are for deeming something politically unrealistic.
The parent comment called for banning all advertising, not for banning ads promoting social media platforms.
They don’t want anyone to be able to advertise anything. Not even your local contractors trying to advertise their businesses that you want to find, because that’s advertising.
The tobacco ad ban isn’t relevant to what was claimed.
> The parent comment called for banning all advertising, not for banning ads promoting social media platforms.
This wasn't my reading of it, but it does appear that's what GP meant. I don't agree with that. Even so, if you were interested in having a good faith discussion about solutions here, you might have responded to both interpretations.
You may consider this me putting forth the suggestion as an answer to your question, if you must.
“Just ban everything I don’t like as long as it won’t impact anything I do like” is a frequent take on HN these days.
Then when states start doing things like adding ID requirements for websites it’s shock and rage as the consequences of banning things (even for under 18s) encounter the realities of what happens when you “just ban” things.
I think we can separate the banning of things which affect personal freedom from the rest. Like if oil were "banned", I'm imagining it's not illegal to possess oil, but rather oil companies wouldn't be able to drill it up and sell it anymore. A bit like phasing out asbestos. The ordinary people with asbestos tiles in their basement don't get into trouble, but new house builds can't/won't use that tile anymore.
ID requirements seem like the main burden is being put on ordinary people instead of corporations, and by extension seems clearly bad.
> Like if oil were "banned", I'm imagining it's not illegal to possess oil, but rather oil companies wouldn't be able to drill it up and sell it anymore.
What does that have to do with anything?
It doesn’t matter where you ban it, if you turn off oil overnight a lot of people are left stranded from their jobs, sectors of the economy collapse, unemployment becomes out of control.
Banning things like this is just fantasy talk that only makes sense to people who can’t imagine consequences or think they don’t care. I guarantee you would change your mind very quickly about banning oil overnight as soon as the consequence became obvious.
I'm curious: where do you put the line? For example, leaded gas improved car performance and was arguably key to economic performance. But it was also incredibly neurotoxic and damaging to society. Do you believe banning it was a bad idea because it resulted in a lot of people losing their jobs?
>For example, leaded gas improved car performance and was arguably key to economic performance
This is not true. We currently use ethanol to boost octane, and that additive was known at the time by the company that invented TEL, and they did not use it because they did not control the market for ethanol like they could control the market of a new and patented chemical.
TEL was never actually necessary, and we poisoned ourselves for most of a century to enrich a corporation. Large-scale ethanol (as beer) production was one of humanity's earliest industries.
Indeed, after we banned leaded gas, we tried using yet another stupid poison additive, MTBE, for a decade or so, and that continued to poison people because gas tanks leak and that chemical was toxic. Most of Asia actually still uses MTBE, to their detriment.
Ethanol has never had this problem. Arguably, when Bush required all US gasoline to include 10-20% ethanol, he wasn't even trying to fix the poison problem of MTBE, he might have just been greenwashing and kicking more subsidies to corn growers, but it definitely solved the poisonous additive problem for octane boosters.
Indeed, no additives are "required" for octane at all. You can produce high-octane gasoline just by choosing different refined components, but this results in less gasoline produced per barrel of oil.
The Ethyl Corporation, primarily. They had to quickly diversify and adjust their business model as a result of the US phase-out of tetraethyllead. They managed to stem some of the bleeding by simply... selling the rest to other countries before those countries instituted their own restrictions on leaded gas (which tells you how ethically sound said business was), but this was a massive change at the time, considering just about every vehicle used leaded gas, even if it was a slow rollout.
Who suggested "turning oil off overnight"? What does that even mean?
GP (and I) have given you several examples of stuff society learned was harmful and then phased out with regulations/legislation. No, it didn't and does not happen overnight.
Why are you acting in such bad faith, trying to disregard people you don't agree with as "not being able to imagine consequences"?
I was on board until the end. If we don't have kids, we're wiping ourselves out even faster than with climate change. I also wonder with oil if we'd need it for some things still, though maybe it's fine if it's made from something else. Gasoline has some obvious alternatives in most areas, but oil seems to be more than fuel. It's also a lubricant.
yep! it’d be hard, but we’re already at most people nodding their head when you say “social media is addictive, detrimental to individual mental health, and overall negative for society”
you just got to get enough people to nod at “…and this is caused by the underlying incentives from digital advertisement” then to “and the most effective course of action is to ban digital advertisement”
I truly don’t believe it’s a big leap, especially after a few more years of all this
For one, I'd like the EU to use this as evidence to straight up ban Meta apps. If countries can ban TikTok, why not extend the same privilege to Meta?
But then again, the EU are a bunch of vacuous chicken shits incapable of pulling their heads out of their arses, never mind safeguarding their own children.
Larger fines, more robust methods for Meta to keep children off their platforms, more robust methods to stop the spread of propaganda and spam on their platforms, for Meta to start prioritizing connection between others instead of attention.
If you want a company to do something, you do need to ensure that the fine is bigger than the amount of money they made or will make by doing the thing you are trying to discourage. You need there to be a real downside. I don't think any of the fines that have been discussed are anywhere close to the levels that I am talking about.
Don’t corporate fines often come with requirements that the company also discontinue certain activities, start certain other ones, and be able to prove this or that to a regulator?
Why is the corporate death penalty or Zuckerberg in jail reduced to angry mob ideas? I think both are valid responses to the social harms that Facebook and social media generally have caused.