* Let me adjust the relative weight that the current video gets vs my past views (i.e. do I want to see related videos to this one, or ones that YouTube generally thinks I'll click on)
* Let me adjust the weighting of thumbs up vs. watch time.
* Let me configure the homepage
* Let me make the entire recommender panel be just subscriptions or items from a particular list
* Let me replace the recommender entirely with something of my own devising, called back through a webhook
These things would make YouTube much more useful for me. I'm not going to YouTube just to kill time, and I pay them $13/mo. They're not getting any ad revenue from me, so why am I stuck with their recommender that only cares what it thinks will cause me to spend the most time on YouTube? I am not interested in spending the most time on YouTube. I'm interested in getting the most out of it in the minimum time.
YouTube, Spotify, Snapchat, Google all have horrid recommendation systems. I mistakenly clicked a link to a YouTube clip of a TV show once and closed it immediately before even watching 2 seconds of it, but now my home page and the "watch next" bar are filled with clips from that show. I'm convinced that the dislike button and "I'm not interested in these" option in the hamburger menu are actually defunct because as far as I can tell they have absolutely no effect.
Spotify is just as bad. I once made the mistake of liking an instrumental track on one of the radio stations I listen to. Now a good 25-30% of the songs that come up are just the instrumental versions, not the songs themselves. No matter how many of them I dislike they keep appearing. The same goes for remixes.
There's no notion of "nuance" in these systems and it just makes me think that they were only ever evaluated internally. The models might perform well when they're tested on all 1 billion YouTube users, but on an individual level they're really awful.
As an aside, I've never given them any of my data so this is purely anecdotal, but I frequently hear (in both positive and negative contexts) about how great Facebook's recommendation systems are on both Facebook and Instagram. I think that's why investor confidence in them as an ad company has stayed so high even through their controversies. I honestly don't think I've ever heard anyone praise Google's recommendation systems, by contrast.
> There's no notion of "nuance" in these systems and it just makes me think that they were only ever evaluated internally. The models might perform well when they're tested on all 1 billion YouTube users, but on an individual level they're really awful.
Well, that is all that really matters to them in the end, isn't it? A more accurate recommendation system might be better for many individuals but not drive engagement, whereas what they are going after may be lousy for the individual but drive more engagement overall. It's the classic playing to the lowest common denominator. Getting enough people to keep viewing ads is all that matters.
There has been an explosion of Fanta flavors and canned pasta sauces because they realized that optimizing for the median consumer is a lot worse than optimizing for a small handful of cluster-centroid users.
Recommender systems don't optimize for the median consumer though. They optimize for clusters, much as pasta sauces now do. That's why my recommendations probably include a lot more Super Smash Bros. content than yours do.
At a certain level of volume, more transparency should be LEGALLY required for multiple companies (auto personalization / recommendation engines / etc.). At least the ability to opt-out. Just hard to define. Could you legally define “bubbling” or super duper smart recommendation algorithms?
Big tech knows this is the direction IMO.
It’s my guess why they randomly started pressing so hard for screen time usage reminders.
They’re selling addiction to kids (and adults).
YouTube, FB, Instagram, Twitter Moments all have liability when it starts to click with everyone.
It’s a serious issue that is totally unregulated and not understood by most.
I built Pony [0], an email platform that delivers once a day, to see if it is possible for a platform that doesn't manipulate user psychology to succeed.
In the UI I eschew every traditional user manipulation technique I could think of: there are no notifications, there are no unread message counts. There isn't even read/unread message state. It's launching now, so I finally get to find out: can a platform that doesn't vie for attention, that lets users create their own unguided, unprompted experience, succeed?
> - Weekly/Fortnightly/monthly delivery. I want a way to say to an old friend "hey, let's keep up with each other every fortnight."
This is probably the most requested feature right now. I want this too but decided I had to keep it as simple as possible for the launch. But it will definitely be fleshed out as there's room to evolve the concept. Turning those delivery-time radio buttons into check boxes will be exciting. I, for one, would like a delivery mode that syncs up with sunrise/midday/sunset at your geolocation. That will be fun to explore.
> - Offline mode. I want to be able to take 5 letters to a park near my house, turn off wifi, and just write with a clear head.
This is already in the works. I want this too. I even have a ServiceWorker installed for another reason (https://stackoverflow.com/a/55245528/741970) but will leverage it for proper offline functionality as well.
Thank you. That's a great way to put it. I also liked when someone recently described it as an "anti-technology". Why is it often easy to describe something in terms of what it's not?
Because people think in terms of responding to stimulus. It is a lot easier to see pain points and to say, "I don't like that," because the problem is concrete. To describe something in terms of what you want, you've got to do the work of imagining something new -- and that's hard.
I'm not seeing evidence of that from his profile. I see evidence that he's finding threads that are tangentially related to a problem and posting a solution which he built.
As someone who has been looking for something very close to this and has thought about building it, I'm grateful that he's posting it here.
I want a university or public library to manage the largest online database of videos (which is defining our culture to a great extent), not a big company like Google. A big company can provide the hardware, though.
To play the devil's advocate, isn't technological "addiction" the inevitable future, so why even fight it? The more engaging technology becomes, the more people will naturally (and perhaps rightfully) choose it over reality. Isn't trying to curb the addiction merely an attempt to postpone the future?
A distinction without a difference. Some experiences promote better outcomes than others. As a society we promote those that are demonstrably better vs those that are not. Why should this new terrain - digital - be exempt?
> They're not getting any ad revenue from me, so why am I stuck with their recommender that only cares what it thinks will cause me to spend the most time on YouTube?
I can think of two reasons.
1. If YT recommends good content to people, and makes them watch YT more, they are more likely to keep paying the monthly subscription fee. So I'm not sure that the recommender is completely useless for their subscription business. I do agree that for subscribers, the weight of engagement metrics should probably be lowered in favor of user satisfaction or some other metric.
2. YouTube is an extremely complex system developed over many years, and it's very expensive and error-prone to try and change anything. For many years, it was monetized through ads; so it was designed around that goal. The monthly subscription is relatively new, and it's still probably a tiny fraction of the ad revenues, so many features of YouTube are just left untouched even when they don't make much sense for monthly subscribers.
IMO it's got more to do with the fact that providing a power-user feature such as "run a regex search on the closed captions" would implicitly define an API that needs to be maintained. It could complicate the backend forever or block certain optimizations, even if the feature is used by a tiny percentage of users who'd be outraged if it were removed.
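To be fair, the user-facing part of such a feature is tiny; the real cost is the implied API contract, not the code. Here's a rough sketch of the do-it-yourself version that just greps a caption file you've already downloaded (the filename and pattern are made up, and nothing here touches any real YouTube API):

```python
import re

def grep_captions(srt_path, pattern):
    """Return (timestamp, text) pairs from an .srt caption file where the text matches the regex."""
    regex = re.compile(pattern, re.IGNORECASE)
    timestamp, matches = None, []
    with open(srt_path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if "-->" in line:                        # e.g. "00:01:23,000 --> 00:01:25,500"
                timestamp = line.split("-->")[0].strip()
            elif timestamp and regex.search(line):   # caption text lines follow the timestamp
                matches.append((timestamp, line))
    return matches

# Hypothetical usage, assuming the caption file was downloaded beforehand (e.g. with youtube-dl):
for ts, text in grep_captions("talk.en.srt", r"\bneural network\b"):
    print(ts, text)
```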
Any UX designer just had a heart attack reading that. Yes, having a bazillion knobs and buttons is cool for Hacker News folks, but as a user, it's just confusing. There's a reason iOS is so popular, and Pixel devices are moving toward a similarly simple, iOS-like UI.
I do agree that one or two of the things mentioned above would be nice, but most of it comes off as a programmer trying to tweak the hell out of a system, ignoring that 99.99% of users won't actually benefit from ANY of it.
Being able to hide complexity can easily solve this problem. A user interface can come in multiple flavors. The default interface is typically simple and suitable for 95% of people. The other 5% need things like stats for nerds, advanced options, etc. Making advanced options limited defeats the purpose of advanced options. If you have too many casual users clicking through to advanced options, it means you did a bad job separating concerns in the first place.
But the question is, how many engineering hours and how much effort do you want to put into a feature that will benefit a tiny percentage of your users, versus putting that time into making the experience better for the 99%?
What is even more confusing is having UI elements that don't work as you expect ("not interested", the dislike button, etc.).
Search on YouTube is also completely broken. I haven't monetised any of my videos, and the penalty for that means that if I search for the exact title plus my channel name, the video shows up on something like page 5.
Also, complex != confusing. You can have very simple UIs that are confusing as hell if you're not used to them or you're trying to solve a specific issue. I usually have trouble helping my mother on her iPhone because I'm not used to it, and it has absolutely no guides or help functions if you can't find what you need. Usually it's a "magical gesture" that is documented on some obscure web page.
Considering that pretty much every huge service moves in the direction of less control over time (Amazon and Netflix are other examples), I assume that for other people this must somehow be great and actually increase their engagement. For me (and I assume many others on HN) it simply makes the experience worse and recommendations less relevant.
It is for me and (again, my assumption) probably for most of HN. When I don't like stuff I'm just annoyed by it taking up space, and if I somehow end up watching it, it's only for the few seconds it takes me to close it.
Moreover, YouTube has become increasingly resistant to user configuration, and the discussion doesn't touch on that. I'd say the reason is that there's been a uniform switch across multiple platforms to the auto-recommender model. Why? Part of it may be bureaucracies assuming they need control, part is a "the average user is a total moron who can't configure anything and will just accept defaults anyway" consensus (not entirely false, but not entirely true), and part of it is that the rise of machine learning makes having a supposedly strong recommendation engine equated with having value.
And the problem with a completely unconfigurable recommender is that it can't show people what they want. It can only show people "what people can be conditioned to like, what grabs some average user more," which is going to be polarizing content.
There is no additional cost to Google for playing additional videos. They have interconnects and they aren't full, so there is no reason not to add more content.
It's only the last mile that has bandwidth constraints really.
I think most of the suggestions aren't realistic. You can project the same desire for customizability onto any software product. For Hacker News:
- Let me configure the order of posts based on voting patterns (hot, newest, controversial)
- Let me adjust the weighted score of posts based on the age of the account.
- Let me configure the home page.
> Let me replace the recommender entirely with something of my own devising, called back through a webhook.
Is there any popular consumer product that allows custom recommendation based on a webhook? Opening up a core part of your product to an untrusted and probably unreliable third-party is not a good idea.
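Agreed that nobody popular does this. Just to make the idea concrete, here is a rough sketch of what a user-hosted recommendation webhook might look like; Flask, the route, and every field name are assumptions for illustration, not any real platform's contract:

```python
# Hypothetical user-hosted recommender endpoint a platform could call back into.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/recommend", methods=["POST"])
def recommend():
    payload = request.get_json()
    candidates = payload.get("candidates", [])            # [{"video_id": ..., "channel_id": ...}, ...]
    subscribed = set(payload.get("subscribed_channel_ids", []))
    # Trivial personal policy: subscribed channels first, platform order preserved otherwise.
    ranked = sorted(candidates, key=lambda c: c.get("channel_id") not in subscribed)
    return jsonify({"ranked_video_ids": [c["video_id"] for c in ranked]})

if __name__ == "__main__":
    app.run(port=8080)
```

Even this toy version hints at the problems: the platform now has to define and maintain that payload, handle your endpoint timing out, and trust whatever ordering comes back.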
> why am I stuck with their recommender that only cares what it thinks will cause me to spend the most time on YouTube?
This is addressed by the article:
> The first is this notion that it’s somehow in our interests for the recommendations to shift people in this direction because it boosts watch time or what have you. I can say categorically that’s not the way that our recommendation systems are designed.
I wouldn’t mind having these customization methods you suggest organized under different “smart playlists” with different recommendation methods and levels. Some are just chronological subscriptions, others try to dig up new and interesting/different content.
Part of me feels like these tools with "recommendation algorithms" whose parameters and behavior are opaque and not adjustable by the user are meant to put the people who wrote them in control of the user's mind.
What they are "meant" to do is increase engagement metrics. If the recommender is smart enough, the content diverse enough, and the userbase big enough, the recommender will learn how to exert psychological influence over the user as a means to increase engagement metrics.
None of this is exactly deliberate, it's just that the recommender will figure out that if you can get someone angry about something, or entranced by some conspiracy, they're going to be very engaged.
The problem is, when you go to a social media site with a specific intention, it's an attentional battle. They are going to show you the most entrancing thing they can come up with above the fold. For example, on youtube.com, you can choose to some degree which sections are shown, but the unconfigurable recommender is always at the top and cannot be moved. "Saved for later" is always a couple of screenfuls down no matter what you do.
This is hostile technology. YouTube is not made to be a tool for you to use. It's made so that you're a tool for it to use. It pokes and prods at you with its recommendations to serve its own interests. It does not care at all about yours as long as it doesn't piss you off so much that you stop going there.
Many times a recommendation system is opaque and not adjustable because the people who made it don't know how to make it transparent and adjustable. A good example of such a system was given in Andrew Ng's old machine learning MOOC. I'll describe it below.
Suppose we want to do a movie recommendation system, and we have a bunch of data consisting of anonymous user identifiers and for each user a list of movies they have seen and their rating of that movie.
Imagine that we had a list of movie attributes that we thought might be relevant to whether or not someone would like the movie. This might be things like running time, if it is funny, if it has car chases, if it has romance, if there are horses in it, if there is profanity, and so on. Let's say we've got 50 of these attributes.
Now suppose we had for each movie a vector of length 50 whose components were how well the movie fit each of those attributes, on a -1 to 1 scale.
If we had that movie attribute data, then what we could try is modeling the users as if each user had a 50-component vector saying how important each attribute is to them. For each user we could go over their list of movie ratings and try to find a set of weights for that vector such that the dot product of their vector with a given movie's attribute vector correlates with that user's rating.
That works pretty well, but there are two practical problems. First, we need to come up with that attribute list for the movies. Second, once we've got our attributes someone has to go through each movie and figure out the vector.
So forget about that approach for a moment. Suppose instead we came up with a list of attributes, somehow, but instead of figuring them out for the movies and then inferring the user weights, suppose we told the users the attributes, and asked them how important each was, and assumed that the users are actually right about what is important to them?
Then maybe we could take all the movies, and try to assign attribute weights to them that lead to consistent predictions of user scores!
It turns out that works, but we still have the problem of guessing what attributes matter, and it depends on the users actually somewhat knowing what makes them like a movie.
So...if we knew the movie attribute weights we could infer the user preference weights, and if we knew the user preference weights we could infer the movie attribute weights.
The brilliant solution is to get rid of the attribute labels. All we decide on is the number of attributes! So we might decide that there are going to be 50 attributes, and we can assign the movies random weights for all those attributes. We can assign the users random preferences for the attributes.
Then you can go through an iterative process where you tune the movie attribute weights to better predict user scores, and you tune the user preferences to better match the movies. This ends up converging to a set of movie attribute weights and user preference weights that do a good job of reproducing the users' scores for the movies, and makes good recommendations...
...and you have no idea when it is done what the attributes mean!
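For the curious, here's a minimal numpy sketch of that iterative idea on synthetic data (plain gradient updates, arbitrary dimensions and learning rate; a real system would be far more elaborate):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_movies, k = 100, 50, 5   # k = number of unnamed "attributes" (the comment used 50)

# Synthetic ratings; only ~20% of (user, movie) pairs are observed.
true_u = rng.normal(size=(n_users, k))
true_m = rng.normal(size=(n_movies, k))
ratings = true_u @ true_m.T + 0.1 * rng.normal(size=(n_users, n_movies))
mask = rng.random((n_users, n_movies)) < 0.2

U = rng.normal(scale=0.1, size=(n_users, k))    # user preference vectors, random start
M = rng.normal(scale=0.1, size=(n_movies, k))   # movie attribute vectors, random start

lr, lam = 0.01, 0.1
for _ in range(2000):
    err = np.where(mask, U @ M.T - ratings, 0.0)   # error only on observed ratings
    U -= lr * (err @ M + lam * U)                  # nudge user vectors toward the movies
    M -= lr * (err.T @ U + lam * M)                # nudge movie vectors toward the users

print("RMSE on observed ratings:", np.sqrt((err[mask] ** 2).mean()).round(3))
# The k columns of M are the learned "attributes" -- and nothing tells you what they mean.
```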
All I really want is for Youtube to stop recommending me videos that I've already watched, or at least show the watched indicator on them consistently. It has always annoyed me to no end how Youtube seems to "forget" about videos that I've watched after a while, even though they're all still right there in my history.
Whenever I went to YouTube I would get suggested videos that were either interesting and related to stuff I had watched, or not interesting to me, but no problem. The algorithm just didn’t quite judge what I might be interested in as well as it could.
Occasionally I would try dismissing one of the suggestions in the hope I wouldn't see similar again, but then I'd still see its kind repeated and was just disappointed it wasn't taking my request into account.
Suggestions are hard though. No problem.
One day, though, I made the mistake of following a link to YouTube which turned out to be to a video of an American political figure (not even sure if he is a politician) whose views were of the sort that court controversy - on purpose, I guessed.
What I didn’t expect was that YouTube would then offer me videos featuring the same person - or people with similar ultra-polarising messages - every time I returned to YouTube.
I hadn’t realised that not only does YT take into account what you have watched a _lot_ of, it also seems to massively favour what you have watched once - if that is something especially controversial or perhaps otherwise ‘special’.
While I don’t mind pointless suggestions I will ignore, it’s very concerning that without ‘liking’ or repeatedly watching such material, people are being suggested it time after time, even when they ask to have it taken away.
> it also seems to massively favour what you have watched once - if that is something especially controversial or perhaps otherwise ‘special’.
This is my experience as well. YT thinks I love Jordan Peterson. Four or so months ago, I didn't know who he was, and clicked one clip, found it ridiculous and never clicked another one. But my recommendations are lousy with the guy.
If one isn't sure if they want recommendations related to a video (that they haven't even watched yet!), the only safe approach is to watch the video in incognito mode.
You are the second comment here stating this specifically about Jordan Peterson. Weirdly enough, I had that same experience. I saw one video, didn't like it and yet YT keeps recommending this and that video of him or more right-wing people DESTROYING someone. It's frustrating to say the least, not only because I don't want to see it but because I showed a video once when giving a talk, and after it had finished, all those garbage recommendations showed up for everyone to see, making it look like I am secretly some lunatic.
YouTube is not trying to provide a service to you; it's trying to sell you, the ad consumer, to its actual customers: advertisers. And they have it in their head that the best way to do that is to increase engagement. The algorithms they use to try to do that analyze the habits of the most "engaged" viewers, which include a fair helping of crazy people with unlimited time on their hands, and when the platform thinks it has some sort of opening to turn you into one of those folks, it tries as hard as possible to shove you in that direction. All this is just silly algorithms under the hood without enough human oversight, but that doesn't make it any less sinister.
Imagine this translated to print media, imagine YouPaper: a publisher of magazines. Going by these same engagement-metric-driven algorithmic systems, such a platform would rapidly spiral toward pushing tabloid content to the detriment of everything else.
> Sorry, can I just interrupt you there for a second? Just let me be clear: You’re saying that there is no rabbit hole effect on YouTube?
> I’m trying to describe to you the nature of the problem. So what I’m saying is that when a video is watched, you will see a number of videos that are then recommended. Some of those videos might have the perception of skewing in one direction or, you know, call it more extreme. There are other videos that skew in the opposite direction.
I assisted with data research on YouTube recommended videos (https://www.buzzfeednews.com/article/carolineodonovan/down-y...), and this claim is misleading at best. There may be some videos that aren't extreme, but that doesn't matter if they are a) in the extreme minority and/or b) appear very far down in the queue of 20+ recommended videos when users typically either do "Up Next" or look at/action only the top few recommendations.
I have an unlisted video with about 15 views that when viewed in an incognito tab constantly recommends stuff like "Ben Shapiro owns liberal professor", alongside gun videos.
I just did an experiment where I clicked one of these and clicked "next" a bunch going through their recommended videos. It immediately dove into alt-right stuff, and eventually led to videos of antifa protestors getting beaten up.
The same thing happens to me as well. I don't know who Ben Shapiro is besides knowing he's "not liberal" and have never watched his videos but they are recommended in the sidebar of almost any video I watch.
I looked up the hip-hop artist Mac Miller when he died of a drug overdose, and all of the recommended videos were about his pop star ex-girlfriend being the one responsible for his death. Clearly the engine has learned to go towards the extreme.
My opinion is to push for education in all aspects of these algorithms and keep your kids off Youtube! It is so far from network or cable TV it's a little scary. That said, it's an amazing resource but it's geared toward growth and profitability at any cost (most likely lives).
Seriously, why the fuck is this? I get recommended Ben Shapiro, Jordan Peterson, MGTOW, etc. all the fucking time despite explicitly telling Youtube that I'm "not interested" in these videos.
This is normal, true, but be aware even your starting point gives clues. Your IP address, your browser choice, the fact it’s (probably) the first time YouTube sees you... it all feeds into the algorithm.
Might be the case that Chrome users in your area are going Incognito to watch Ben Shapiro videos. Given your reaction, that might be exactly the case.
Or it might be Ben is a B-level celebrity that might attract a click to display an expensive ad.
Maybe it's true that people matching some of my info are likely to watch Shapiro videos, but by optimizing for engagement they're basically trying to herd similar people towards whatever version of that person watches the most YouTube videos.
Indirectly YouTube wants everyone to be a right-wing fanatic, because the algorithm has determined that that sort of person watches a lot of videos.
YouTube shows videos that are likely to get clicks, but fanatics are more likely to click and watch videos. So things fanatics care about are promoted. YouTube seems to take this stance that it's just natural social dynamics at play, excusing themselves from addressing what they're enabling.
I wonder how much of this is due to sponsorship. Aren't Ben Shapiro, PragerU, etc. billionaire-funded outreach programs? Can you pay Youtube to give your videos higher positioning in recommendation lists without their higher positions being explicitly labeled as "sponsored"?
No, they are not billionaire-funded. It is more likely a result of PragerU's lawsuit against YouTube and YouTube adjusting its algorithm to give them a bit of a boost.
I'm not sure if Ben Shapiro is being directly funded, but it looks like PragerU is mostly funded either directly by billionaires or by think tanks (which are often also funded by billionaires, but I don't have time to dig into all of the different ones mentioned here).
I may be confused here, but the person you’re responding to is saying that Ben Shapiro does not advocate for a white ethno state. Are you saying that he does?
The more concerning thing is that you're an intelligent person: you're likely smart enough to know that tweet is a weird mix of general right wing feelings that would describe most Republicans and bizarre leaps of logic.
You also probably know (or have the knowledge to find out) that Ben Shapiro had publicly stood against the alt right on numerous occasions.
Yet you're presenting this as a "verifiable fact" regardless.
> It's simply a verifiable description of Shapiro's views.
A description that's verifiable only insofar as the sources the author cited are any good. And the author provided sources in the form of URLs on an image, which most people don't click on. I'll link them up here and let readers decide whether this unverified Twitter account's description holds up:
Even just taking the time to look down and compare the statements made on the image and the quotes that were used to justify these statements revealed the kind of artistic license that was taken in the interpretation of the sources at hand.
Shapiro has made many videos against the alt right. He does not advocate for a white ethno state.
As other posters have pointed out there's some laughably extreme logical twisting - "advocates for Muslim concentration camps" - going on in your tweet there.
I really don't know about antifa, but most of what I have heard or seen supports this view (the Berkeley bike lock incident, the "punch a nazi" campaign, the video with the origin of the "repent zoomer" meme, and the new antifa fighting club).
I don't think this is a fair summary of what the Product Chief is saying. When people talk about the YouTube rabbit hole, they're talking about watching video A, which links to B, to C, etc. until they reach content considered extreme.
This is simplistic, but say all videos are on a 1-10 integer scale in terms of politics. YouTube always recommends 80% videos at the same value as the currently viewed video, 10% one value higher, and 10% one value lower. It doesn't push anyone in either direction. Randomly clicking on recommended videos will randomly move you up or down (and usually not move you at all). But, no matter where you begin, you can always reach either end in 4 clicks - cue people talking about the YouTube rabbit hole.
But hey, maybe it's not enough to merely prefer content that isn't extreme; what if we wanted to actively push people to the center? We could recommend 80% videos that are one value closer to the center (or at the same value if the viewed video is at 5 or 6), 10% videos at the same value, and 10% one value away from the center. Randomly clicking on videos _will_ bring most viewers to the center in this system.
But we can still reach the extremes in just 4 clicks. This is why complaining that users can go down a rabbit hole feels disingenuous to me. They always will be _able_ to do so - unless YouTube never recommends videos that aren't more centrist than the video being viewed.
I suppose YouTube could just ban all videos that are at 1 and 10 in our spectrum, but that's basically just calling for censorship. And not to mention, 2 and 9 will just become the new definition of extreme.
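Here's a tiny simulation of that toy model, using the hypothetical 80/10/10 probabilities from above (clamping at 1 and 10 is an extra assumption), just to illustrate how rarely purely random clicking reaches the extremes even though a deliberate path always exists:

```python
import random

def neutral(v):
    # 80% same value, 10% one step up, 10% one step down (clamping at 1/10 is an assumption).
    r = random.random()
    step = 0 if r < 0.8 else (1 if r < 0.9 else -1)
    return min(10, max(1, v + step))

def center_pushing(v):
    # 80% one step toward the center (or same value at 5/6), 10% same, 10% one step away.
    toward = 1 if v < 5 else (-1 if v > 6 else 0)
    r = random.random()
    if r < 0.8:
        step = toward
    elif r < 0.9:
        step = 0
    else:
        step = -toward if toward else random.choice([-1, 1])
    return min(10, max(1, v + step))

def walk(start, clicks, policy):
    v = start
    for _ in range(clicks):
        v = policy(v)
    return v

random.seed(0)
for name, policy in [("neutral", neutral), ("center-pushing", center_pushing)]:
    ends = [walk(5, 20, policy) for _ in range(10_000)]
    extreme = sum(e in (1, 10) for e in ends) / len(ends)
    print(f"{name}: {extreme:.2%} of 20-click random walks from 5 end at 1 or 10")
```

Under the neutral policy only a small minority of random walks end at 1 or 10, which is the point: being reachable in a few clicks is not the same as being pushed there.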
Edit: many of the comments lead me to think that I haven't made clear what is and isn't in scope of my comment. Allow me to point out two things in particular.
1. YouTube already does prohibit hate speech. How well it enforces its rules is a valid point of discussion, but not what I'm talking about here. I'm pointing out that the ability to get to the extremes of what _is_ allowed on the platform through recommendations should not in and of itself be a point of concern.
2. I'm also aware that sometimes the recommendations are very messed up in the context of the current video - e.g. shock videos getting recommended to kids. But this is more about trolls deliberately gaming the recommendation algorithm, which I see as a distinct issue from the alleged rabbit hole pattern of recommendations.
The alternative is that Youtube could choose NOT to recommend videos for large categories of videos, particularly those related to kids or other categories (for example, Rohingya-related videos in Myanmar in the current time period). You need not censor the videos, however you can choose to make their discovery less easy.
There's no law that says they have to recommend videos. There is however sufficient evidence at this time around the rabbit-hole effect causing real-world harm. This isn't 2005. It's about time Youtube, and social media properties adopt the medical principle: "first, do no harm". Perhaps they will do this willingly, but somehow I doubt it.
You're right that YouTube is not obligated by law to recommend videos. They do however follow their internal guidelines, which try to keep the platform open to ideas. Some of those ideas are distasteful to the majority of the employees at YouTube, however they will still in general respect people's rights to present those ideas.
Allowing monetization (where much of YouTube's controversies have been) is a completely different story since it also involves the demands of advertisers. The advertisers do have a right to avoid paying money to extremist content, just like you have a right not to donate to causes you don't believe in.
> first, do no harm
Easier said than done.
Challenge one is to define "harm" without classifying all sorts of alternative political ideas as "harmful".
Challenge two is once you've defined "harm", build a system that works at YouTube scale that can actually filter it. Also consider that it's a moving target - people are able to figure out how to get around YouTube's filtering algorithm, so YouTube needs to constantly update it to keep ahead of these people.
Recommendations, and autoplaying videos in particular based on them, clearly drive growth and profits. But they are clearly the crux of the problem.
Sure, defining harm is not straightforward. But it's possible, and frankly as a private platform, it is their choice.
One could, for example, take a broader definition of harm, and use that to limit recommendations, but still host the video and allow it to return in search results. Is this the right answer? I don't know. But it's silly to just throw up your hands and somehow take recommendations as some sacrosanct part of the system, when it was clearly a choice to have them to begin with.
I'm not sure how kids factor into this, I've always heard the YouTube rabbit hole in the context of pushing people to either ends of the political spectrum.
As far as "do no harm" goes, implementing safeguard against the alleged rabbit hole has a very strong potential to do harm by making YouTube a partisan platform. Bear in mind that concern over the alleged rabbit hole is not equal between liberals, centrists, and conservatives. Many of the recommended videos I've seen people reference as evidence of the alleged rabbit hole are videos I'd considered mainstream. What it takes to placate the critics has very strong potential to violate the principle of "first, do no harm".
I would suggest looking through that previous link regarding examples of where one can quickly end up in white nationalist content through recommendations. Again, recommendations are for Youtube's own growth-related ends. However, it is emphatically a choice to have them.
Your medium post is not directly related to what I was referring to in my first comment. Yes, bad actors game the recommendation algorithms to troll people. That's what Elsagate was about. People deliberately making channels and videos that would appear as and presumably get recommended as kid friendly content when it was actually shock content. I'm well aware that recommendation algorithms can be abused. Perhaps I should have more strongly emphasized that my examples are presented in a very simplified scenario, that is limited to the alleged political rabbit hole and not things like Elsagate.
The point I am making is that pointing out that it's possible to get to the extremes of YouTube by clicking through on recommended videos shouldn't be surprising. In fact, if you couldn't do that, it would be evidence of a deliberate effort to keep certain content from being popular.
The question of how well YouTube polices its terms of service (which already prohibit hate speech) is related to this question about the alleged rabbit hole, because such content presumably exists at the edge of my hypothetical political spectrum. But the fact that clicking through on recommended links can bring people to 1 or 10 even if they start at 5 should not, in itself, be surprising.
What you're saying is true, people often seek extremes in all things without any extra nudging, but the problem is that YouTube deliberately creates several other positive feedback loops. It doesn't just show related videos, it shows popular related videos and additionally selects them based on what you watched before.
It makes sense to show more Lego content on the sidebar of a Lego set review. It's less clear that I should be chased by Lego videos across the site after I clicked on a single one of them.
Also, considering that various extremes tend to attract attention, it would make sense to use an "anti-viral" algorithm that de-prioritizes videos that are getting too many views at the moment. This would have the additional effect of giving better exposure to less popular channels, promoting different content creators.
Realistically, I don't think they will do either, even if they know for sure that it will help, because it will reduce addict... I mean, engagement of their user-base.
Jaron Lanier talked about these issues a lot recently. He makes some interesting points regarding incentives and feedback loops.
Wouldn't this all be easy to prove or disprove using random walks? Just run a thousand or so, from different starting points, and see where they all go, picking recommendations truly at random (without bias towards more extreme stuff).
If Youtube truly pulls you towards extreme videos, then you would always end up at those videos after a long run.
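A sketch of that experiment; `get_recommendations` and `label` are placeholders you would have to fill in yourself (scraping the watch page, a data dump, a hand-built classifier), since as far as I know there's no official way to pull the personalized sidebar list:

```python
import random
from collections import Counter

def get_recommendations(video_id: str) -> list[str]:
    """Placeholder: return the recommended video IDs shown next to this video.
    Would have to be filled in by scraping the watch page or using a data dump."""
    raise NotImplementedError

def label(video_id: str) -> str:
    """Placeholder: classify a video, e.g. 'mainstream' vs. 'extreme' (the hard part)."""
    raise NotImplementedError

def random_walk(start: str, clicks: int) -> str:
    video = start
    for _ in range(clicks):
        recs = get_recommendations(video)
        if not recs:
            break
        video = random.choice(recs)   # truly random pick, no cherry-picking the extreme one
    return video

def run_experiment(seed_videos: list[str], walks_per_seed: int = 100, clicks: int = 20) -> Counter:
    outcomes = Counter()
    for seed in seed_videos:
        for _ in range(walks_per_seed):
            outcomes[label(random_walk(seed, clicks))] += 1
    return outcomes   # hypothetical shape of the result: Counter({'mainstream': ..., 'extreme': ...})
```

If the pull toward extremes is real, the 'extreme' bucket should dominate regardless of the seed video; if it isn't, it shouldn't.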
* Centrist content links to liberal content at over twice the rate it links to conservative content.
* Proportionally more conservative content links to centrist or liberal content than either liberal or centrist content links to conservative content.
* Liberal content links to itself at the highest proportional rate.
Granted, who knows what portion of liberal or conservative content counts as extreme. But concerns over YouTube extremism almost always have to do with conservative extremism, and the data does not suggest conservative content is being promoted by YouTube.
When I say "extreme" I mean "extreme" in all senses, not just politically extremist. Could be just an extremely one-sided view of something ("why X sucks") or even a "top 10" or "worst fail" video.
> When I say "extreme" I mean "extreme" in all senses, not just politically extremist.
I don't consider your opinion invalid, but I do get the sense that the majority of concern over the YouTube "rabbit hole" is about promoting politically extreme content. My comments, since the beginning, have been specific to the allegation of pushing users to the political extremes. I tried hard to be explicit about this, and even edited my comment (prior to your comment, mind you) to more explicitly spell out the fact that I'm not talking about extreme things that aren't political (e.g. the "Elsagate" issue of shock content getting recommended to kids).
If this is the issue you're raising, then bear in mind that it is not in scope of what I am pointing out, and I don't think it's in scope for the majority of people's discussions about the "YouTube rabbit hole".
This is a little off topic, but I watched a Jordan Peterson video once and now my recommendations are fairly extremely right-wing. I'm center-left, and the views Peterson espouses aren't politically extreme (although I've heard some are pretty bizarre); at 'worst' he is a moderate conservative (I say "worst" only because at the current cultural moment "conservative = bad" is the prevailing meme). So I'm not sure exactly how YouTube works, but it feels to me like it's trying very hard to push me in a particular direction. To be clear, I don't think YouTube has a RW political agenda, but perhaps it has an economic interest in pushing people to the fringes (maybe people at the extremes click more ads or something). Or perhaps they have a left-wing agenda and they want me to associate moderate-RW views with extremist views (seems unlikely, but then again lots of prominent media companies are pushing a similar agenda).
> interest in pushing people to the fringes (maybe people at the extremes click more ads or something).
This theme has been explored in quite a few articles on news and advertising. It is not that extreme stories get people to click more on ads, but that they keep people engaged for longer and looking at more content. More views, more engagement with the publishing site, and more ad impressions.
I recall reading that there is an interesting link between moral outrage and dopamine highs. It basically becomes a drug, which publishers see as increased user engagement.
> To be clear, I don't think YouTube has a RW political agenda, but perhaps it has an economic interest in pushing people to the fringes (maybe people at the extremes click more ads or something).
I think it's simpler than what you're suggesting. They want to make money, and the indirect way they do that is to get people to watch more videos. If most people who watch Jordan Peterson tend to also watch extreme right wing content, and you watch his video, then you will get recommended right wing content, because they think that's what you will want to watch.
I work for Google but not on Youtube, opinions are my own.
I considered that, but I can't think of any plausible reason for prominent overlap in viewership of JP's moderate political content and extreme RW content. Maybe Peterson appeals to people who previously thought the only moderate option was quiet deference to progressives? Would be really interesting if this were the case--it might imply that Peterson has a strong moderating effect on RWers and/or that people are being taught a false dichotomy between extreme left and right.
> I considered that, but I can't think of any plausible reason for prominent overlap in viewership of JP's moderate political content and extreme RW content
From my understanding, JP is actually pretty connected with conservative/right wing politics. Not so much that he is necessarily right wing himself, I wouldn't know, but that he has a large right wing following.
I think they like him for his opposition to political correctness/gender pronouns. His philosophy on masculinity also really appeals to conservatives rebelling against feminism.
It's a recommendation engine. People who like JP also like X.
And almost anyone who likes any alt-right video also likes JP, because he provides a veil of respectability for anti-feminist, anti-multicultural views.
That 1-10 seems impossible to gauge correctly without just basing it on the profiles of a video's previous viewers.
And the crux seems to be that YouTube has a single primary interest in generating suggestions: to offer the viewer a video that they probably want to see next. Netflix recs are no different.
So, the impolite or gruesome content becomes clickbait for an ad revenue machine.
> But we can still reach the extremes in just 4 clicks. This is why complaining that users can go down a rabbit hole feels disingenuous to me. They always will be _able_ to do so - unless YouTube never recommends videos that aren't more centrist than the video being viewed.
I generally agree, but maybe an important detail hinges on the "can" (of "This is why complaining that users can go down a rabbit hole..."): are YouTube recommendations a healthy mix that gives users a choice, or do the majority of recommendations push the user down the rabbit hole?
I do suspect that a lot of the alleged rabbit hole comes from people deliberately picking out recommended videos that are more extreme than the currently viewed one. But I'm interested in putting this theory to the test.
I'm actually thinking of making a script or an extension that automates, say, 10, 25, or 50 random clicks within the top 10 recommendations and sees where they end up. Maybe such a system exists already.
I have to admit that the exact definition of the problem is not easy to define:
e.g. if I have on my screen a video about "people-that-hate-bananas", would it be OK to have recommendations that show only, e.g., "why-bananas-are-bad", or should I (ethically) also get videos of the type "are-bananas-really-bad?" or "these-are-good-bananas!"?
Using the above example: if YouTube (and similar sites) show only recommendations of the first kind (which is easy for IT), then yes, I go down the rabbit hole (it reinforces/confirms the idea that "bananas are bad"), even if the recommendations are no worse than the original video. Therefore even recommendations on an "equal" level might have to be evaluated at least partially negatively, as in this example the recommendations don't promote alternative points of view, alternative thinking, or a search for balance or compromise, if you understand what I mean.
In any case, evaluating the results automatically might be very hard if you want to understand the meaning (e.g. it's very hard to tell whether "good-bananas-are-bad" is negative or positive towards bananas)... :P (maybe if you manage it you'll become a Nobel candidate, hehe, or maybe I'm underestimating the current power of AI, which is definitely possible)
I think that the claim you quoted is actually the less interesting claim in the article, the more interesting one is this:
> ...It’s equally — depending on a user’s behavior — likely that you could have started on a more extreme video and actually moved in the other direction.
> That’s what our research showed when we were looking at this more closely..
Now this is also a bit of a weaselly claim. First because of "depending on user's behaviour", which isn't very clear (but for now I'll give the benefit of the doubt and assume they are trying to say "unless the user is actively seeking radical content"). Second because this claim could be true and YT would still be radicalizing (i.e. it could go 1->3, 3->2, 2->4, 4->3, 3->5, 5->4 in terms of extremism, giving a gradual slope towards radical content while still maintaining that you are equally likely to navigate to more extreme content as to less extreme content).
OTOH, if I understand the article you linked to correctly, all it says is that if you keep clicking "Up Next" you'll eventually get from mainstream content to extreme content, which doesn't invalidate the above YT claim if their research shows that users aren't actually doing that. Given that you assisted with that research, maybe you could make it clearer what you tested and how it shows the "rabbit hole" effect?
Honestly both of these are pretty easy claims to check. Not by manually looking and choosing recommended videos, but by following a random walk. If it is true that it all drags you into extreme videos, then if you take a random walk from any video, choosing recommendations at random, you would always end up down the hole in extreme videos.
In a lot of these experiments though, people explicitly pick the more extreme of the recommendations repeatedly, which of course will lead you down the rabbit hole.
The Buzzfeed article quotes the SPLC, a partisan organisation that has labelled anti-FGM campaigners and Muslim reform groups as 'hate' groups (and only admitted the mistake under threat of legal action).
If "fucked up majorly once" means you should be disregarded, Marc Thiessen of "torture is legal and effective" fame should go on your disregard list too.
Why is it an acceptable expectation for the user to self-moderate certain information but not other information? For example, if I watch a video about modding a car, that links me to another video of more extreme modification, and so on, up to the point where I am no longer interested and stop watching. That is OK for the people at YouTube. But when it comes to politics, they feel that people cannot decide what they do and don't agree with and self-moderate. I get that you could make the argument that politics is different, but is it that much different from some other topics that could lead to undesirable outcomes?
Car modding has a) very high up-front costs, relative to developing political ideas and b) arguably very little chance of doing large-scale damage to a society.
If car modding videos led you down a rabbit hole towards "watch how fast I can go if I cut the brake lines with these $3 wire cutters! Those braking requirements are just a conspiracy between big auto and the shadow government to keep you from reaching car enlightenment, so make sure to drive through the front window of your nearest car dealership"... and then people started doing that, we would have a different attitude about them.
Are they going to take the same stance on videos about gaming, because gaming addiction is a real problem and a progressive diet of "Let's Plays" normalizes unhealthy amounts of video games, by showing people whose entire life is video games?
I can't believe I have to say this, but it's different because political beliefs, whether they inspire one to shoot a bunch of people or just to vote for the "burn it all down" party, can have a damaging influence over society as a whole. I.e. it's not just about personal responsibility to oneself so much as the risk of being inspired to terrorism.
The banning of dissent under the pretense of radicalism has harmed more people than terrorism — Stalin’s purges, Mao’s purges, heck, even Hitler was just clearing out the seditious members of society who were undermining a strong Germany.
Your arguments, however you mean them, are also those used by tyrannical oppressors to justify the powers they use to commit great evil.
In an (unintentional?) irony, all of those people you cited rose to power and did great damage to their societies by exploiting radicalism in politics.
Gaming addiction is a systemic problem that’s impacting the social stability of several Asian nations, and possibly Western ones — your argument isn’t principled, merely special pleading.
It was just an example, but who are you to say it’s not already a huge problem? It surely is to somebody. Maybe they work at YouTube. Modified vehicles spew enormous amounts of pollution into the environment which as we all know is a huge crisis. Driving highly modified vehicles on public roads is also a huge public safety hazard. Nobody needs 1000 horsepower on public roads. See how this can get out of hand? It all depends on who is deciding what is problematic.
For me the line-in-the-sand is political views that advocate destroying our entire way of life, especially through violence.
In my view you don’t get to claim the benefits of free speech and tolerance if your manifesto is based on intolerance and marginalisation. If you’re intolerant, don’t expect to be tolerated.
If the algorithm increasingly leads you to videos that are pushing you to create modifications where things are maliciously wrong, dangerous, or illegal, then yeah I would expect YouTube to have some accountability here. It's not about right vs left, it's about malicious intent and misinformation and the algorithm feeding that cycle.
Radicalization aside, is anyone just annoyed with how much worse the recommendations are compared to how they were? Damn near impossible to discover anything given how biased it is towards "popular" content as opposed to, y'know, related to what you're watching
Yup. It's just... shitty. I subscribe to a lot of channels and well more than half of the videos on the recommendations list are just recent videos from my subscription feed. I recognize that they are trying to create some sort of dumbed down landing page experience, but it just sucks. I feel as though there was a period of time when it was much better, but that time has sadly long passed.
> The first is that using a combination of those tools of authoritative content and promoting authoritative content is something that can apply to other information verticals, not just breaking news.
How is this not a clear admission they are a publisher and not just a platform?
So it feels like the way YouTube's algorithms work re suggested videos is heavily based on "what other videos did the people who watched this video watch?". Now imagine you have a video that expresses some conspiracy theory. That video is watched by a bunch of people who are into conspiracy theories. Then you come along, and YouTube recommends you the videos that the people who watched the same video as you also watched. And bam, you get sucked toward those others who were interested in that video.
Now this does work in the other direction, where someone with more normal viewing habits watching a video will steer the conspiracy theorists toward the types of videos they watch, but the size of the pools means it's the polarizing direction that would have a greater effect.
It's worse than that. YouTube tries to maximize engagement, so it will preferentially recommend videos watched by highly engaged viewers. Guess what category of viewers are highly engaged? Indeed: conspiracists, polemicists, etc. Imagine walking into a college campus and asking the registrar for recommendations on classes to take, and being told to go listen to the crazy religious nut shouting from atop a milk crate, because he was so much more engaged with his material than anyone else on campus.
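To see concretely how that skews things, here's a stripped-down co-watch scorer with an engagement weight bolted on. The data and the weighting scheme are invented purely for illustration; this is not how YouTube actually works, just the shape of the feedback loop:

```python
from collections import defaultdict

# Invented data: who watched what, plus a crude engagement proxy (hours per day on the site).
watch_history = {
    "casual_user":  ["lego_review", "woodworking_101"],
    "average_user": ["lego_review", "space_news"],
    "heavy_user":   ["lego_review", "conspiracy_1", "conspiracy_2", "conspiracy_3"],
}
hours_per_day = {"casual_user": 0.5, "average_user": 1.0, "heavy_user": 8.0}

def co_watch_scores(seed_video):
    """Score other videos by engagement-weighted co-watching with the seed video."""
    scores = defaultdict(float)
    for user, videos in watch_history.items():
        if seed_video in videos:
            for v in videos:
                if v != seed_video:
                    scores[v] += hours_per_day[user]   # heavy users count for more
    return dict(scores)

print(co_watch_scores("lego_review"))
# {'woodworking_101': 0.5, 'space_news': 1.0, 'conspiracy_1': 8.0, 'conspiracy_2': 8.0, 'conspiracy_3': 8.0}
# The one heavy user outweighs everyone else, so the conspiracy videos top the ranking.
```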
Yes, recommenders take content you've interacted with and map it to other content that they see links to (people who hit Like on video X also hit Like on these other videos).
But what I'd point out is that I see YouTube respecting _all_ the content I see, rather than just the most recent one or just right-wing content, as some commenters here are alleging. For example my feed usually includes videos from Vox as well as Fox on politics, alongside video game reviews, alongside a NASA stream, alongside woodworking, and so on.
The thing is, this is how algorithmic timelines, lists, etc. work. They're programmed to drive engagement. Fear, hate, outrage, sensationalism, and controversy are what drive it, as has been known since the time of P. T. Barnum.
At one point, if you watched an Asmongold video, you'd be bombarded with suggestions for more of them.
Never seen it happen as strong with any other topic.
I don't know about increasing levels of extreme, but I can definitely see it possible that suggestions could get people to a place where they are feeling like an idea or a thing is more popular than it really is.
My beef with all this is our current Overton window has been compressed to right of center economically.
American opinion and general support for solid, center left economic policy is high and growing.
Medicare For All, and friends.
Our mainstream media simply does not report on labor, nor offer much opinion favorable to labor or framed from a labor point of view.
It used to. When I was a kid, we would compare the various points of view, learn to identify them, and discuss what having them meant for society.
This lack of control is a good thing. I think we've got problems we struggle with, but I don't think they're as bad as what's being represented, nor do I think a nicely centered window makes any sense. I think the people should decide that.
Ideally, the best way to balance online rhetoric is for left of center creators to generate more frequent and more compelling content (without admins needing to finesse algorithms or censor/shadow ban content in pursuit of a digital Fairness Doctrine).
By sheer quantity mainstream media leans center left and online indie media leans center right. You could say that taking the whole landscape into account that constitutes some kind of tenuous balance overall - but it would be nice if both mediums were more balanced.
Interesting aside: conservative talk radio has always been more popular than liberal talk radio - is this just the online expression of the same phenomenon? People on one side of the aisle tending to enjoy listening to and watching political content at greater length than their counterparts?
I happen to have some knowledge of radio. Lefty talk, where it got a hot stick (good antenna and respectable signal), did quite well. In my market, the progressive talk station did just fine.
Advertisers, who got it, would support the station. We had a couple who paid well, and got results.
Dirty secret in talk radio: most owners are conservative. Deffo bias there. In my market, the progressive talk station got next to nothing from the owners. Rightie talk got tons of cross promotion.
Where a leftie talk station was not owned by a major cluster, it also would do just fine.
In mainstream media, there is a basic conflict of interest. Flat out, big business does not air progressives, because progressives would cost them and regulate them.
Online, the majors are making a push to get relevant with younger people, who now call them legacy media. (Gotta hate that if you are CNN)
Access journalism (actually a basic failure to do journalism, for fear of losing access), coupled with the same big-business bias, means we do not produce economically left content in major media in the US.
Some of this "extreme" discussion centers on the fact that big business simply does not want that on the air, or in people's feeds.
Progressives struggle with this constantly.
Labor just is not in most big platform interests, and many younger people simply ignore the major media for this reason.
I, at a much older age, get pushed to watch legacy media online every day, multiple times per day.
I do not want to watch them. I want more about ordinary people, labor and the political movements seeking better.
I think left-wing creators are getting better at making compelling content on YouTube and having it as part of a discussion. Hbomberguy, Shaun, PhilosophyTube, and ContraPoints come to mind.
The important thing for lefties who are interested in resolving this problem is to realize they need to support those voices. And we need to do it rather directly, because the big systems, and the large money, really won't do it.
I think the extremeness of content is just another anti-free-speech talking point. I'm sure there is vile stuff on YouTube, but what it actually comes down to in these threads are videos by Ben Shapiro, Jordan Peterson, Joe Rogan, and Tim Pool. These people are simply not extreme, and smearing them this way is only about shutting down opposing ideas.
I also don't doubt that you can (sometimes) get to really extreme content within four clicks, but we have to keep in mind that the recommendation lists can be 40 videos long. So, just roughly estimating: if 1% of videos were extreme content (because 1% of the population are extremists), you would expect, on average, to see roughly one and a half extreme videos if you sampled 4 * 40 random videos.
In addition, just doing a quick test with Jordan Peterson in incognito mode I get the following very telling result.
If I click a video called "Jordan Peterson EDUCATES College Professors In An Epic Q & A", I get mostly other Jordan Peterson videos with some Ben Shapiro sprinkled in.
If I click on "Jordan Peterson Destroys Q&A | 25 February 2019" I suddenly get a wide range of people suggested with the common thread being that the title also contains "DESTROY", "SNAP", "OWNS" and so on. This makes me think the 'problem' might just be that people who keep clicking on the most outrageous titles will eventually see the most outrageous videos. From a technical perspective I would say that is working as intended.
Just out of curiosity - we have people on this thread and on other parts of the internet, claiming they'll watch one "extremist", typically right wing video, and then have their recommendations blitzed with even more extreme/right wing content. This is the "rabbit hole" effect that's being referred to in the article.
Can anyone say that they have distinctly not encountered this effect? I can recall having watched maybe a couple of Joe Rogan clips, a Jordan Peterson interview, and even some stuff I would label alt-right content. I'm personally fairly left-ish/liberal, so I find these videos boring/offensive and eventually will go back to my normal youtube consumption - music, sports, some tech videos.
My recommendations are all of the latter categories, not of extreme/political right wing category. I guess due to selection bias, most people who don't have a problem with their recommendations won't report anything, while most people who do will leave comments that they too experienced the rabbit hole effect. I'm wondering if that leads to the problem described being overstated. Or am I really the only person who's managed to watch some fairly extreme right wing content, and had my recommended videos stay intact?
The big question in my life is whether an app can succeed in 2019 without manipulating user psychology like this.
I built Pony [0], an email platform that delivers once a day, to see if this is possible. In the UI I eschewed every traditional user-manipulation technique I could think of: there are no notifications, and there are no unread message counts. There isn't even read/unread message state.
I truly wonder whether people can adapt to a totally unstructured online platform, an unguided, unprompted experience that they create for themselves. My bet is they can.
If only someone took the RSS feeds from YouTube channels and made their own front end. Ditch their portal.
Turn YouTube into a type of podcast DB; it's just a media storage site. Then you could create your own portal, parse the views/ratings and provide real statistics.
I don't watch YouTube for YouTube, I watch YouTube for user-created content. And some of that content has already moved off onto 3rd-party sites. Floatplane anyone? Patreon anyone? The list goes on.
I just wish finding video content was as easy as searching for podcasts. YouTube provides RSS links; I use them to add channels to my podcast player.
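For what it's worth, a minimal sketch of that approach, assuming the public per-channel feed URL format (https://www.youtube.com/feeds/videos.xml?channel_id=...) and the third-party feedparser library; the channel ID below is a placeholder:

    # Minimal sketch: list a channel's uploads straight from its RSS/Atom feed,
    # bypassing the recommendation-driven portal entirely.
    import feedparser

    CHANNEL_ID = "UCxxxxxxxxxxxxxxxxxxxxxx"  # hypothetical channel ID
    feed = feedparser.parse(
        f"https://www.youtube.com/feeds/videos.xml?channel_id={CHANNEL_ID}"
    )

    for entry in feed.entries:
        # Each entry carries title, link and publish date; any front end or
        # podcast-style listing can be built on top of this.
        print(entry.published, entry.title, entry.link)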
I've noticed an interesting thing in terms of "youtube rabbit holes" towards extreme content. I'm what American society would generally consider "liberal", and I watch a lot of videos about fixing climate change, medicare for all, etc. Interestingly, I don't usually get "liberal" recommendations on my home page.
However, if I watch a video from a conservative angle, even just one or two videos, I almost immediately get extreme right wing content in my recommendations. Stuff like PragerU, ReasonTV, NRATV, etc. Even watching videos that I wouldn't consider right wing, just critical of certain left wing sects, like h3h3 for example, tend to almost immediately lead me into videos like "DUMB FEMINISTS GET OWNED - COMPILATION".
It's strange the rabbit holes almost always take me deep into right wing territory, but never really into left wing territory.
You are suffering from "annoyance bias": you remember the few annoying videos that you obviously don't want to see again more than the good recommendations (which are hopefully more common).
For example, a few months ago I clicked a link on HN to a video by a moron who claims he can cure type 1 diabetes with a diet that avoids certain kinds of food [1]. Now I get recommendations to watch this moron's videos from time to time, and it's totally annoying, because it's clearly wrong.
I also get recommendations for music I don't like, food channels I don't like, news channels I don't like, channels that are somewhat related to what I like but that I still don't like, and so on. Perhaps a new video appears in the recommendations for a few days. Anyway, they are not annoying enough to be remembered.
[1] It's not sugar, or HFCS or other food additive that is somewhat related to diabetes, or can cause type II diabetes. It just doesn't make sense.
My theory is that we're seeing the "exploration" part of exploration vs. exploitation in action [1]. I use Youtube almost exclusively for music videos and tech lectures. One day I watched this music video created by somebody who has uploaded a lot of Jordan Peterson fan content:
I was getting JP and right-leaning politics recommended to me for a month afterward, even though I never use Youtube to watch JP or political content of any kind. But maybe I was getting that content recommended because I had no prior indications of interest in that conceptual space. The recommender could be exploring previously untouched territory in hopes of finding something with high exploitation value.
A value-blind system trying to maximize engagement is going to find that some niche content performs well on some users, and will try to explore that space for other users to see if it has high exploitation value for them too. So start out watching videos about health and it's natural -- from a data perspective -- to surface some conceptually adjacent videos that include anti-vaccination conspiracies. If you watch one or two of those -- even without saving or liking -- then the exploration may expand into conceptually adjacent videos about other Things "They" Don't Want You To Know, and so on.
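As a rough illustration of that explore/exploit dynamic, here is a toy epsilon-greedy sketch; the topics, click-through numbers, and epsilon value are all made up, and nothing here reflects YouTube's actual recommender:

    # Toy epsilon-greedy bandit: mostly "exploit" the topic with the best observed
    # engagement, but occasionally "explore" a topic the user has never touched.
    import random

    observed_ctr = {"music": 0.30, "lectures": 0.25, "politics": 0.0}  # made-up numbers
    times_shown  = {"music": 200,  "lectures": 150,  "politics": 0}
    EPSILON = 0.1  # fraction of recommendations spent exploring

    def recommend():
        if random.random() < EPSILON:
            # Exploration: favor the topic we know least about (never shown here).
            return min(times_shown, key=times_shown.get)
        # Exploitation: pick the topic with the best observed engagement.
        return max(observed_ctr, key=observed_ctr.get)

    print([recommend() for _ in range(10)])

One watch of a "politics" video would give the explorer its first positive signal, which is enough for a purely engagement-driven system to keep probing that direction.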
The Youtube recommender works pretty well for discovering new music, in my experience. The problem is that the process that gradually takes you from a Top 40 hit to a less popular musician to somebody who recorded their first song last month isn't always benign when "music" is replaced by "health" or "world news."
I think you made his point for him. Your contention is that MSNBC is not extremely left wing. His contention is that the NRA and PragerU are extremely right wing to the same degree that MSNBC is extremely left wing. So if MSNBC is not extremely left wing, then I think GP's point is that the NRA and PragerU are perhaps not as extremely right wing as GGP implied.
Though I think I made it clear that MSNBC is not extreme left-wing, I didn't make it clear that their juxtaposing of MSNBC's centre-left political platform with the platform of PragerU is wrong, and I perhaps initially misconstrued their comment.
PragerU has hardline-conservative positions on climate-change, abortion, religion, immigration, and economics that are certainly more deserving of being called extreme than MSNBC is deserving of being called left.
If you think PragerU or ReasonTV are "extreme right wing" content, respectfully I think you might be lacking perspective. I would describe PragerU as mainstream within the conservative sphere. Their presentation is almost always high-quality in terms of production, respectful in its delivery, and even-keeled. You might disagree with the content (or their values), or find factual errors, but it is easy to find flaws with a wide range of content producers on the left as well. ReasonTV is even closer to center, and is near-right libertarian, and definitely not extreme right.
As for your experience with recommendations - I am not sure why you would not see recommendations similar to the videos you like. I see videos from multiple ideologies regularly and get content recommended from all of them. I haven't had my feed dominated by one side. What I think is happening is that people _notice_ videos that incense them, and incorrectly perceive it as being dominant when it truly isn't.
"Why you should love fossil fuels", "It is easy for feminists to forget that men gave women the right to vote, gave up their monopoly on power, and invented birth control." -- even-keeled?
> it is easy to find flaws with a wide range of content producers on the left as well.
More importantly, it's irrelevant, since the criticism isn't that imperfect right-wing videos are presented instead of perfect left-wing ones.
> I haven't had my feed dominated by one side.
So? It's perfectly possible for you to be in a different testing bucket. Don't assume what you see is what everybody sees, and that they are just interpreting it differently than you do. I might as well say "nah, you only think you see videos from multiple ideologies".
Well, I'd argue that they are not 'absolutely garbage' and it's disingenuous to say it is based on a single video (not even that, just 10 seconds of one video) as well as not providing any kind of context.
The chart is from the video "Why is Modern Art so Bad?" published September 1, 2014 with artist Robert Florczak and is available here (5min 49sec): https://www.youtube.com/watch?v=lNI07egoefc
The chart you've linked is presented at 1min53sec (https://youtu.be/lNI07egoefc?t=113) and it's not a key part of his views or arguments. It's shorter than a talk-show skit; even if you don't agree with what he has to say, I suggest watching it.
They say a lot of things that "feel true" if you're disaffected, surround it with pseudo-intellectualism, and then tell you who you're now better than.
From what I’ve seen PragerU is not really any more “garbage” than the condensed explainers that John Oliver and Stephen Colbert have been doing for much larger audiences.
I find their clips pretty useful for honest summaries of what conservative positions are on an issue and why.
I haven't seen that one, and agree that chart is strange. But across the videos I've seen, I did not see something so egregious, so I have to think this is an outlier that is cherry-picked to paint a narrative about PragerU.
If you call Jordan Peterson videos or Joe Rogan clips extremist videos, then I think this is clearly a step in the wrong direction. Their videos thrive on YouTube not because they hold extreme views; it's because their videos are very engaging and fun, and you learn something from them.
If this is a ploy to push more mainstream narratives through YouTube that is akin to watching CNN/CBS or ABC, then I'll be looking for other platforms.
However, JRE has hosted Alex Jones (conspiracy theorist: "mass shooting victims are crisis actors"), Gavin McInnes (white supremacist), and Milo Yiannopoulos (conservative provocateur who, among other things, supports pedophilia in the gay community).
Joe himself I can't comment on. He chooses to promote controversial and actively extreme people though. Sometimes he disagrees with those people, sometimes he doesn't.
So why keep saying that, when you aren't claiming it's extremist? If I said "a person who plays with their socks every evening, pretending it's the royal family, might also make such an argument", that would also be technically correct, but I doubt I could get away with "just making an observation, not saying that's describing anyone here, just that it might".
One is about extremist content. Extremist content can be interesting. Calling content interesting does not excuse its extremist nature.
There is a parallel conversation specifically about whether JRE is "extremist". The show is certainly interesting. It may also be extremist, but it being interesting doesn't make it not extremist. Because, as I previously stated, calling content interesting does not excuse its extremist nature.
I am not making any conclusions about JRE. I'm only saying that his show being interesting doesn't make it "not extremist". If you're trying to prove that he's not, you've got to do something else.
I also provided some things he has had on his show that probably are extremist content.
>Joe Rogan went on Infowars today, the same day a man who lost his child in the Sandy Hook shooting committed suicide. The grieving father was among the plaintiffs suing Alex Jones for defamation after smearing the murdered children and their parents as part of a hoax.
>In the same Infowars episode that Joe Rogan appeared on, Alex Jones pushed a conspiracy theory that the father of a child killed at Sandy Hook didn't kill himself, but may have been murdered to silence Jones and end the first amendment.
That Twitter thread is interesting, because it contains a link to a 'This American Life' program that gives a platform to Alex Jones to explain himself. I doubt anyone would call 'This American Life' a platform for spreading conspiracy theories or hate.
In this light, is Joe Rogan, who also allows people like Jones to explain themselves, a moderate or an extremist?
You have to look at what both Joe Rogan and This American Life do in a larger context, which should clarify the differences. I do think that TAL should not have given airtime to Alex Jones because it really is that dangerous to give him (and people like him) a platform.
Vic made two factual statements, with the disingenuous implication that they are connected or causally related.
Alex Jones is a wacko. Talking to Alex Jones does not necessarily mean you are also a wacko. This new concept of moral badness transitively propagating through conversation is nonsensical and counterproductive.
thx for letting me know, now I know what I'm doing this evening. Time to kick back, pop some Super Male Vitality pills, and watch everyone's favorite water filter salesman :+1:
> Editor’s note (March 28th 2019): This article has been changed. A previous version mistakenly described Mr Shapiro as an “alt-right sage” and “a pop idol of the alt right”. In fact, he has been strongly critical of the alt-right movement. We apologise.
I'm not in ideological lockstep with them on many topics, but that sort of journalistic integrity is one of the things that distinguishes them from plenty of rags that are purely motivated by partisan ideology.
I'm not the one being disingenuous here. And that's not journalistic integrity. They initially stood by their false slander and only changed it after significant backlash.
Journalistic integrity would be The Economist not lying about and smearing someone they disagree with in the first place. To call an Orthodox Jew an "alt-right sage" means the people working at The Economist are either lazy and incompetent or they are liars smearing a Jewish individual for ideological reasons. You could follow the thread. You could probably also find the deleted tweets by The Economist and their editor if you want.
To label a pathetic smear campaign by The Economist as "journalistic integrity" just shows the terrible state of journalism today. And the only reason the spineless cowards at The Economist apologized is because Ben Shapiro has a big enough following to fight back. Otherwise, they would have stuck to their lies and slander. And I have to disagree with your last statement. The Economist is most definitely one of the "rags that are purely motivated by partisan ideology." But we are all entitled to our own opinions.
The Economist is far left? Have you seriously ever picked up that magazine?
The Economist is socially moderate-left (mostly socially progressive, but less so on certain issues such as Palestine, and generally not devoting time to fringe opinions at all) but very, very free-market oriented. The closest political orientation to them is "classically liberal", or in US terms, "libertarian" or "moderate Republican".
The idea that The Economist is a far-left extremist outlet is outrageously ridiculous. They do not hide their bias, and it's not far left in any sense. That's why free-marketeers like me like them.
>Yeah, so I’ve heard this before, and I think that there are some myths that go into that description that I think it would be useful for me to debunk.
This is no myth, bro: go to YouTube, pick a political video, leave it on autoplay, and watch it go to shit.
It’s almost like the YouTube guy is taking his talking points directly from Zuckerberg’s script.
-----------
1. “of course we take this seriously”
- Mohan: "Having said that, we do take this notion of dissemination of harmful misinformation, hate-filled content, content that in some cases is inciting violence, extremely seriously."
- Zuckerberg 2016: "we take misinformation seriously. We’ve been working on this problem for a long time and we take this responsibility seriously."
2. “we’re simply trying to help people get accurate information”
- Mohan: "when users are looking for information, YouTube is putting its best foot forward in terms of serving that information to them."
- Zuckerberg 2016: “Our goal is to connect people with the stories they find most meaningful, and we know people want accurate information.”
3. “fear not, it’s our users who are in power/in control”
- Mohan: "But YouTube is also still keeping users in power, in terms of their intent and the information that they’re looking for."
- Zuckerberg 2004: “People have very good control over who can see their information.”
- Zuckerberg 2017: “Our full mission statement is: give people the power to build community and bring the world closer together. That reflects that we can’t do this ourselves, but only by empowering people to build communities and bring people together.”
And last but definitely not least,
4. “we’re proud of our progress, but there’s more work to do”
- Mohan: "It’s an ongoing effort. I think we’ve made great strides here. But clearly there’s more work to be done."
- Zuckerberg 2016: “We’ve made significant progress, but there is more work to be done.”
- Zuckerberg 2018: “I’ve learned a lot from focusing on these issues and we still have a lot of work ahead…I’m proud of the progress we’ve made in 2018… I’m committed to continuing to make progress on these important issues as we enter the new year.”
Why anyone would copy Zuckerberg’s script at this point is beyond me.
I work at Google but not on anything related to YouTube. This opinion is my own: that was very disappointing to read.
I use YouTube, I don't browse it. I search for what I want, and then I watch it. Yet YouTube seems absolutely determined to send me down rabbit holes for some reason. For instance my friend sent me a video by some cable news neo-philosopher (a Jordan Peterson type) and since then I can't get rid of low-quality vids in my recommendations that are trying to make me angry. "{Person} totally EVISCERATES {other person}" being the calmest of them.
The recommendation algorithm is also just not even good at its job. I am Jewish and the number of videos I have seen recommended to me that are thinly-veiled anti-semitism is pathetic. Do they really think I am going to watch those?
The one thing from the article I do believe is that these extremist videos (mostly) don't monetize. So why do they dominate? Honestly I think we just suck at building recommendation systems.
I am not a big fan of Facebook, but I applaud their recent ban of white supremacist content. Freedom of speech is one thing, freedom to use someone else's site to speak to millions is another thing. YouTube should just take a harder stance on all of this dangerous crap. The Sandy Hook deniers and the Antivaxxers will have a hard time finding a new home.
From the outside it seems like they don't actually design YouTube with features their users would like.
I pay for YouTube to avoid ads and wish I didn't have to use their home view which shows me a lot of content I don't care about. I want my homescreen to be my subscriptions (instead I have to manually switch to this every time).
I would entirely turn off recommended content if I could, in favor of channels suggesting other channels or something more human curated (like Spotify discovery which in part uses other user's playlists).
People are vulnerable to bad information delivered in a manipulative way - especially when this is reinforced by a bunch of recommended videos.
I'm so tired of people calling the regulation of hate speech and violent speech on private platforms "censorship." These people are more than free to express themselves in any public platform or their own platform. They are not censored in any way. They are being removed from a private platform for not complying with the rules of the platform.
You wouldn't say that a drunk who walked into a nice restaurant yelling hateful things was being censored when they are asked to leave. Neither would you say that a man trying to convince kids to get into his van outside of McDonalds was being censored for being kicked out of the McDonalds. Both of those are private businesses regulating the behavior of their customers, and not in any way censorship.
> Randomly clicking on videos
The entire point is that people don't click on random videos, they click on videos they think are interesting. Call it "rage clicks", curiosity, or some people just being bad people who want to see bad things; just don't call it random, because that's obviously wrong.
You can simply reduce your complaint from ‘hate speech’ and ‘violent speech’ to ‘speech’ to reveal the nuanced perspective of grey thinkers.
Dismissing the societal effect of censorship by dominant public mediums because their profit structure is private is - regardless of your position on said censorship - a very incurious and unserious position to adopt.
The contrived examples to support your position are unlikely to resonate with anyone who spends the briefest moment in consideration of all the game theoretic outcomes that are possible with your attitude.
I presume in your world it is okay for telecommunication companies to deny cell phone access to individuals based on arbitrary, internal and opaque reasons because they’re privately held.
Attempting to deny the reality of societal responsibilities when a service hits societal scale is just silly.
An advertising platform that makes their money selling and targeting ads owns a video platform funded by advertising, but the purpose of the service is speech and not ads?
Hm. I see some obvious holes in that line of thought.
I feel that the answer to “why is this advertising company so much more interested in selling ads than promoting free speech?” is more obvious than some give credit for, as is the answer to “why isn’t this massive for-profit advertiser on board with my narrow absolutist view on speech?!”
Our mission is to give everyone a voice and show them the world.
We believe that everyone deserves to have a voice, and that the world is a better place when we listen, share and build community through our stories.
Our values are based on four essential freedoms that define who we are.
Freedom of Expression
We believe people should be able to speak freely, share opinions, foster open dialogue, and that creative freedom leads to new voices, formats and possibilities.
Freedom of Information
We believe everyone should have easy, open access to information and that video is a powerful force for education, building understanding, and documenting world events, big and small.
Freedom of Opportunity
We believe everyone should have a chance to be discovered, build a business and succeed on their own terms, and that people—not gatekeepers—decide what’s popular.
Freedom to Belong
We believe everyone should be able to find communities of support, break down barriers, transcend borders and come together around shared interests and passions.
Marketing isn’t an obligation, it’s bullshit believed by middle managers and gullible people. I don’t believe that you’re either, or that you make a habit out of buying into this kind of thing.
I’m at the point where dishonest discussions about “free speech” online are a non-starter (having had an illuminating exchange on this very site two days ago). It’s not as though these anonymous chats matter in the slightest; they’re had in bad faith and, most of all, are intellectually stultifying. The only winning move is not to play. You won’t convince anyone of your position who has made the conscious choice to adopt a dynamic series of positions to support their unstated agenda. It’s like arguing with creationists: the best you can hope for is changing their declared position of the day. Meanwhile they win just by having an audience and the semblance of legitimacy, not to mention dominating and railroading every comment section they can to quash reasonable debate.
This is setting aside issues with bots, trolls, and brigades. In real life with people you know and can talk to there can be value in these discussions, but never anonymously and never online. It’s pseudointellectual wank dressed up as reasonable talk. There is nothing wrong with simply saying, “I’m not interested in having this conversation with you, sorry.” and moving on.
In regards to free-speech you might be right, but I actually feel like HN is somewhat of an outlier with online debate/discussion, because I actually have changed my mind on things based on peoples' posts here (not even terribly long ago).
The two posters before this are exactly why people are so riled up about censorship (calling it what it is. Just cuz someone dislikes the speech doesn't warrant removal of the right to speak.)
Basically you have people who demand that others be censored, that they don't deserve to speak freely. And they'll come up with all these mental gymnastics as to why other people's speech isn't acceptable. Most of us understand that the First Amendment is the most important one, and that this is not acceptable.
Just look thru the last 2 years of left leaning websites. Anyone who claimed trump was innocent (clearly...) was a "bot, troll or brigader" and ought to be censored because it was "hate speech".
A world where we censor is a dark world. There's a reason free speech was right number one in this country.
Editorial discretion has always been a thing - you should absolutely be able to publish your own speech on your own terms, but I'm not convinced you should be able to force other private companies to host your content. The ability to publish widely is relatively new; before, if you sent some long screed to the New York Times they just wouldn't publish it (you could publish it yourself, but likely wouldn't have a huge audience).
This gets tricky though when you're talking about things like being blocked by a domain name registrar or something.
Even with speech you publish yourself there are some existing restrictions (libel laws, hate speech targeting individuals, etc.)
Moderation is also what allows communities like HN to exist and remain interesting places - I'd definitely prefer it to every website turning into 4chan.
Infringement of speech is only prohibited for the federal government. The internet doesn't count. The internet is not beholden to physical barriers and there is more than enough room to start your own website. You have a right to speech; you don't have a right to a privately owned platform that isn't yours.
Furthermore, deplatforming has always been a thing. It basically happened to the Dixie Chicks after they spoke out against the Iraq War and GWB in England way back in the early 2000s. I don't think they have ever been on Fox or a Sinclair broadcasting station again.
No one is saying that proclaiming Trump's innocence is hate speech; that is an absurd claim. Where are you getting that nonsense?
Nobody is talking about free speech as a matter of constitutional law in these instances (although even that applies to more than the federal government, i.e. state and local governments.) People are making a normative claim that it’s better for institutions like YouTube to have liberal speech policies than restrictive ones. If you want an example with opposite political valence think of NFL players kneeling during the national anthem. The NFL can legally punish players for exercising their constitutionally guaranteed rights, but it nevertheless has a chilling effect.
I would submit that there is a difference between hosting speech, and amplifying it with your recommendation engine. If your recommendation engine does not promote something, I don't think that is the same as censorship.
I don’t get mad about what YouTube does either (although maybe I would if I was a YouTuber). I do get irritated when people tell me I’m a bot, Russian troll... for honestly stating my opinion (which I don’t think is even radical).
I have personally found the whole "he/she is a bot!" thing to be needlessly dismissive and irrelevant. Even if the entity making these claims is a bot/troll, the claims are still out there, and if there's any evidence that a sizable number of people are going to believe them, then these claims should be addressed appropriately. An example that comes to mind are the flat-earthers; a lot of people say that the ones making the YouTube videos are trolls, and most of them might be, but the fact is that there is evidence that a fair number of people fall for it (or at least are so in on the joke that they spend a lot of money for merchandise, and convention tickets, and funding Patreon accounts).
I have no idea of your political leanings, but I do visit Donald Trump's Twitter account almost daily (because I'm apparently masochistic), and I see the "Russian Bot" insult hurled at any of his supporters, which I always find annoying.
You need to do some actual listening to the people you've listed. Idk all of them but the ones I do aren't hate speech at all. You're being played by propaganda homes.
You asked me a question and because I answered it, that’s a problem? I was trying to be polite, and you could do the same by respecting my choice and not using your (I assumed) honest question as bait.
>These people are more than free to express themselves in any public platform or their own platform.
I don't take this position seriously anymore, because I've seen time and time again that people who espouse it immediately flip 180 degrees when some large platform starts censoring content they want to post or see.
The opposite is also true. Some people who speak against censorship flip 180 degrees when they don't like some content. (Though a bit less often, it seems.)
My conclusion: before saying anything of this sort, you have to be able to present a clear general rule (that other people can apply without asking you) that distinguishes between cases of censorship you're okay with from other cases. That's the only way such conversation can start in a rational manner.
Realistically, very few people truly believe that big companies should be home free to delete anything they want, solely their own discretion (including content from human right activists, opposition parties, corporate whistle-blowers, scientists and critics of the platform in question).
> very few people truly believe that big companies should be home free to delete anything they want, solely their own discretion
Of course they are. If they start showing stuff that scares off advertisers, they will be out of business. Don't see a lot of porn on youtube, do you?
If you think that platforms like youtube, twitter, and facebook should be forced to show all legal content, I think you will find yourself to be in a very small minority position.
And it's a position there is no way the US Supreme Court will back you up on.
> I'm so tired of people calling the regulation of hate speech and violent speech on private platforms "censorship." These people are more than free to express themselves in any public platform or their own platform. They are not censored in any way.
What exactly is "their own platform"?
If I post to a blog site, that's not my platform and can be censored.
If I host my own blogging software on a Kubernetes service, that's not my platform and I can be censored (AFAIK all providers have a TOS on the type of content allowed and largely reserve the right to remove content for any reason).
If I host my own OS with blogging software on a IaaS provider again I may be shutdown/refused to do business with.
If I host my own hardware, with own OS with blogging software in a colocation datacenter it may also be removed for TOS violation.
Same for simply hosting it on my own computer connected to an ISP, the ISP has TOS for the types of content allowed through their networks.
So how exactly do I setup this (mythical?) "own platform" where I can post anything that I am constitutionally/legally allowed to and how do I make it available over the Internet?
I would say it should stop at the ISP level. ISPs are (or at least should be) a utility, and as a consequence completely non-disciminatory in who they host, similar to how phone lines are run.
So, in theory (at least in my perfect world), you could run a server in your basement and host your controversial views there.
No doubt someone at the founding of the US Constitution had the same ideals of such a perfect world, where criticism of Christianity would only be permitted from their basement.
Perhaps you missed the point I'm making. YouTube does not allow hate speech on its platform. How well it polices and enforces those rules is a valuable point of discussion, but not the one that is at hand.
The concern is that people are able to click through to the extremes of what _is_ allowed on YouTube. My point is that this is not in and of itself a valuable thing to be concerned about. Even if YouTube actively tweaked their algorithms to push people towards the center, it'd still be possible to click through to the extremes - unless YouTube totally stops all recommendations that don't lead closer to the center.
I disagree. YouTube obviously allows hate speech and violent images on its platform. Did you follow their struggles in removing the NZ video? If they wanted to have systems to prevent what happened, they could have built them. It is not a stretch, in any way, to think that one day there would be a viral video that they HAD to remove for one reason or another. In fact, they do it every day for the RIAA.
It's not like YouTube is bound by some cosmic constant that makes it hard to police the content on their platform. They are Alphabet, for God's sake; they have so much money they don't know what to do with it. They can, and should, solve these problems, and we can, and should, hold them to account until they do.
The "struggles" of tech companies blocking the NZ video is not giving them enough credit. I only know about Facebook's 80% preemptively blocked rate but to be frank, that figure is impressive. A new video, with millions of not tens of millions of people posting it, editing it, cropping it, flipping the image and doing everything they can to avoid the filtering algorithms and Facebook still managed to preemptively block 80% of attempted uploads. To me, that's an impressive feat and a sign of just how much investment the company had made to block bad content.
Copyright filtering isn't a good example IMO. Lots of people consistently claim that the content ID algorithms are crappy and have high rates of false positives.
> I only know about Facebook's 80% preemptively blocked rate but to be frank, that figure is impressive.
I read an article the other day about the Australian response to the New Zealand shooting, which had this choice gem:
'He said it remained online for 69 minutes. "That is a totally unreasonable period of time and represents a complete failure of Facebook's own systems," he said.' [0]
That was a very senior government minister and they are about to start legislating on the subject. That has to be one of the more breathtaking expectations out there - this is asking a media company to judge what the community standard is and deploy widespread containment measures in under 60 minutes. That is comparable to the response time of our emergency services (~10 minutes) in a life-or-death situation.
I can't really grasp what it is people think that Facebook is doing wrong here that requires a response of that precision; accidentally watching a video simply isn't comparable to having a heart attack or being on fire. I had expected the response time to be something out of the ordinary, measured in hours or days; I'm doubly impressed that Facebook managed to respond in 70 minutes, and surprised at how well developed their filtering mechanisms for controlling communication on their platform are.
I'm not arguing with you any more. I'm a software engineer who works on web applications and I know for a fact what you are saying is bullshit. You are arguing in bad faith. These are the most profitable companies of all time. Your argument is unreasonable. Later days
I'm a software engineer working on web applications as well. My company handles data at the exabyte scale. I've spoken in depth with coworkers tasked with filtering illegal or prohibited content, and have done a limited amount of work on this system (to be fair, mostly refactoring logging and other orthogonal tasks - I've never introduced new heuristics or changed the filtering logic itself).
It's easy to do when it's the same files or data being uploaded, or when there's a way to consistently identify it as prohibited. It's hard when people are actively trying to evade the filtering algorithms. Here is one easy example: a lot of copyrighted work, like TV shows, can be found on YouTube where the copyrighted video is displayed in a smaller section of the screen (e.g. a rectangle in a corner with some animated stuff around the rest of the screen). This is because such editing makes it harder to detect. Maybe YouTube has beaten this countermeasure by now - it's a cat-and-mouse dynamic by nature.
To be fair, this is just my perspective. If someone with greater authority on this subject can give me technical explanation as to why Facebook's 80% preemptively blocked rate is not good I'm happy to hear it. But from what I know, an 80% preemptive block rate on an hours-old video with a significant number of people actively trying to avoid the filtering logic is an impressive feat.
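As a toy illustration of why byte-level hashes fail after a simple re-encode, while even perceptual fingerprints can be defeated by the corner-cropping trick described above, here is a minimal average-hash sketch (purely illustrative, not any platform's actual system):

    # Toy "average hash" on a tiny grayscale frame: robust to compression noise,
    # but easily defeated by shrinking the video into a corner with padding.
    def average_hash(pixels):
        # pixels: a tiny grayscale frame, assumed already downscaled (2x2 here)
        flat = [v for row in pixels for v in row]
        mean = sum(flat) / len(flat)
        return [1 if v >= mean else 0 for v in flat]

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    original  = [[10, 200], [30, 220]]   # toy frame
    reencoded = [[12, 198], [29, 223]]   # slight compression noise after re-upload
    padded    = [[200, 0],  [220, 0]]    # frame shrunk into a corner, black padding

    h = average_hash(original)
    print(hamming(h, average_hash(reencoded)))  # 0: survives re-encoding
    print(hamming(h, average_hash(padded)))     # 4: the simple evasion already breaks it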
I don't see why it wouldn't be legal, it's Comcast's infrastructure, right? If the public says "Hey Comcast, why are you letting extremists use your network to connect themselves to the world and transmit dangerous ideas?" I don't see why Comcast wouldn't be allowed to decide to cave under the pressure and disconnect me.
> The State attempted to analogize the town's rights to the rights of homeowners to regulate the conduct of guests in their home. The Court rejected that contention, noting that ownership "does not always mean absolute dominion." The court pointed out that the more an owner opens his property up to the public in general, the more his rights are circumscribed by the statutory and constitutional rights of those who are invited in.
Another phrase I've heard recently is along the lines of "a company's ability to refuse to do business with you should be proportional to your ability to refuse to do business with them".
It's not too much of a stretch to liken some of the big platforms and service providers we have today to company towns. While, strictly speaking, one doesn't have to (live in a company town)|(subscribe to Comcast), for some large segments of the population who (live in economically disadvantaged regions where the only jobs are available via the Company)|(are forced to choose between dealing with Comcast and cutting themselves off from social and economic activity), that's not much of a choice in practice.
For the moment, I think that line of "inescapable enough to warrant restriction of property rights" falls to include infra providers (think cartel ISPs, some PAAS, DNS, maybe CloudFlare if you squint) but not platforms like YT/FB. That's probably down to my bias towards technical solutions over regulatory intervention - there's at least a reasonable hope of displacing YT/FB with decentralized platforms, but short of mesh radio you're SOL if your only ISP that offers >56k doesn't want you.
Which was subsidized, made scarce, and then sold to Comcast by the government. There's a reason people are calling for broadband providers to be considered common carriers, because the market isn't "free" (as in competitive).
Well sure, but in that case the Washington Post is performing censorship when it decides which "letters to the editor" it chooses to print. I suspect that is not what most people have in mind when they use the word "censorship".
I generally agree, but these networks are the conduit for so much speech that I'm sympathetic to the idea that it constitutes censorship, especially when we're not talking about "hateful views" but moderate views (e.g., advocating for tighter border control) which are deemed 'hateful' as an excuse to justify censoring/deplatforming/etc. Doubly so when bonafide hate speech ("cancel white men") goes unpunished altogether.
> You wouldn't say that a drunk who walked into a nice restaurant yelling hateful things was being censored when they are asked to leave.
This seems like a bad analogy to me. No loud drunks are forcing anyone to listen to them on YouTube. Yet some people are angry that YouTube lets loud drunks post videos of themselves yelling hateful things, and angry that YouTube lets the people who want to listen to that do so?
> These people are more than free to express themselves in any public platform or their own platform.
I think we're at a point (or fast approaching it) where these huge platforms are so integrated into our society that perhaps we _should_ consider them public platforms.
>You wouldn't say that a drunk who walked into a nice restaurant yelling hateful things was being censored when they are asked to leave.
I don't think this analogy holds up because of the sheer magnitude of difference in size. These platforms are bigger than most countries. Countries have governments and laws that serve the citizens and keep society running. Imo, the users of these platforms should be treated like citizens of that platform.
Where in my comment did I say that YouTube doesn't have the right to ban content?
Nowhere. It always puzzles me why this response always crops up, even when I never even remotely try to claim that platforms have any legal obligation to host content.
I did claim that making their site have a partisan recommendation algorithm is not a good idea and may be bad for society. I have zero doubt that this is their right, but people and companies have the right to do plenty of things that aren't good for society. Saying that it's bad for society to implement a partisan recommendation algorithm is not at all the same thing as saying it is or should be illegal.
My theory of HN comments: people prefer to respond to familiar arguments rather than the arguments you actually make in your comments. People prefer to discuss topics they are familiar or passionate about, rather than the specific topics addressed in the articles or parent comments.
I do because I'm stating my own personal views. You do as well, when you are stating your own views. Why do people keep trying to depict people stating their opinions as advocating some sort of government sponsored program forcing their opinions on others? Has discourse really gotten to the point where we just assume that when people state their views they are always implying that they want these views enforced on others?
It's entirely valid to say that tech companies' censorship or forms of manipulating views is bad while acknowledging that this is their legal right. This comment applies just as much to you: https://news.ycombinator.com/item?id=19525998
They already have the power to censor. There's no "giving" involved.
They don't do much because it costs money and they believe it's unprofitable. There's no noble "free speech" goal behind any of these companies, and you're deluding yourself if you think otherwise.
> I can't stand the insane contradictory leftist rhetoric. On the one hand, you claim tech companies are too powerful and evil and shouldn't be trusted. On the other hand, you want to give tech companies power to censor.
This is actually a right-wing authoritarian stance: you're advocating that the government (or do you mean some other entity?) force private businesses to publish speech that they disagree with.
You're conflating private entities freedom to publish or not publish what they wish with censorship, which only comes from a legal authority.
The challenge is that, at least in the US, these platforms have sought to secure themselves the "public space" status through Section 230 immunity under an economic difficulty argument. So, yes, they are private entities, but they have been claiming to be public spaces who shouldn't be burdened with this.
> The challenge is that, at least in the US, these platforms have sought to secure themselves the "public space" status through Section 230 immunity under an economic difficulty argument.
It makes perfect sense that they would be free to refuse to publish content they object to, even with 230 immunity, and the courts agree.[0]
> Do I lose Section 230 immunity if I edit the content?
Courts have held that Section 230 prevents you from being held liable even if you exercise the usual prerogative of publishers to edit the material you publish. You may also delete entire posts. However, you may still be held responsible for information you provide in commentary or through editing.
> This is actually a right-wing authoritarian stance: you're advocating that the government (or do you mean some other entity?) force private businesses to publish speech that they disagree with.
This is absolutely a false dichotomy. It is entirely possible to say that having private corporations dictate what people see and read is bad for society, without claiming that the government should force companies to publish anything.
Yes, it is the company's right to censor whatever content it wants. It's possible to believe in that right while still saying it's wrong for companies to wield it in a certain manner.
Much in the same way you can believe in the right to free speech of the Nazis that marched in Skokie while simultaneously calling their actions reprehensible, it is entirely valid to criticize the censorious actions of these platforms while acknowledging that this is well within their legal rights.
The above poster acknowledged that private platforms are allowed to censor whatever they want. Repeatedly.
I really don't see how your comment is productive, as all it's really doing is putting words in the above poster's mouth that [s]he explicitly denied. What redeeming value am I supposed to see in your comment?
Apologies if this comes off as overly negative but it is very tiresome to keep seeing criticisms of tech companies' censorship met with, "It's entirely within their rights, you're the authoritarian one for advocating government-compelled speech." Doubly so when the previous comment not only didn't advocate government compelled speech of any sort, but repeatedly acknowledged that companies are legally entitled to ban whatever speech they want.
> It is entirely possible to say that having private corporations dictate what people see and read is bad for society, without claiming that the government should force companies to publish anything.
I mean, yes, it's possible. It's also delusional and paranoid to think that you can't post your hate speech, anti-vaccine delusions, and whatnot on your own damn platform.
We're entitled to free speech, but that's where it stops. We aren't entitled to free, global publishing platforms hosted at the expense of some other party.
You might complain that Youtube or whatever other "public" platform is where all the eyeballs are, so it's not fair that you can't publish anything you like there. Guess what, the reason the eyeballs are there is because these places curated their content so they didn't get their advertisers driven away by hosting vile garbage. This allowed them to scale more, host more content, and scale more, in a virtuous cycle.
There are sites that don't curate like this, and they're not successful publishing platforms. What do you need to publish that you can't get 8chan to host, anyway?
People say Jordan Peterson is "hate speech". If you're tired of people defending speech, then get pissed at the people who are abusing the concept of violent speech by slapping the label on everything they dislike.
The stupid idea is that it is the algorithm driving people to extreme content. It is not; people drive themselves to it. Our whole societies are driven by violence, fear, and disruptive stuff.