I installed TikTok to see what it was all about and used it every day for about six months. I saw loads of misinformation about the election, gruesome videos of people feeding live animals to their pet snakes and other animals, and loads of sexualized content. I wouldn't allow my kids to use this app in a million years. Also, I tried reporting a bunch of these videos, but they were never taken down.



It says more about what kind of content you like and watch than anything else.

Mine is filled with skaters, dumb mom-daughter jokes, and silly lesbian/transgender shenanigans.

Sometimes there are weird things, like a priest telling stupid jokes alone in his church, or that Korean guy doing ASMR of bread burning in the oven.

On Facebook I saw the deaths of participants in the events that happened on the 6th, but that was in a journalism/news/media analysis group.

I see a lot of good-vibes content on TikTok, but that's what I am looking for on the platform. No doubt I could soon end up on Indian train deaths if I wanted, just like that time on YouTube when I saw that out of nowhere; I was looking for something totally unrelated to trains or India, though.


People say things like this in an attempt at shaming the original poster, but one thing not considered is the location of the person using TikTok.

I live in a big city and most of the feed is highly sexualised, highly political, or highly prank-related.

My friend who lives in a small town about 100 miles away essentially has to send me fun TikTok videos, and no matter how many of those videos I like, the base diet of sexual, political, or prank-related videos is always there.


>> one thing not considered is the location of the person using TikTok.

I run into this daily. I'm in the military, so my internet connection at work is mingled with a thousand other military people's. OK, it is accurate to say we are predominantly male and younger, but does every ad I see have to be an anime character? Some of us recently gathered to watch the DC riot footage on YouTube. The footage started with a non-skippable ad for some pervy anime game. Try holding a straight face in front of the general as a busty cartoon character in a negligee bounces up and down on a ten-foot screen.


I don't know, there's something reaaaaally dystopian and weird and fun but concerning about the chain of events that led to that situation.

I personally use uBlock to correct my world lens in real time.


> Try holding a straight face in front of the general as a busty cartoon character in a negligee bounces up and down on a ten-foot screen.

Classic! I love real-world tidbits like this. I would suggest in the future using youtube-dl (https://ytdl-org.github.io/youtube-dl/index.html) to download the video to your machine beforehand, to avoid situations like this.
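
A minimal sketch using youtube-dl's Python API (pip install youtube-dl; the URL below is a placeholder, not a real video):

    # download ahead of time so playback is local and ad-free
    import youtube_dl

    url = "https://www.youtube.com/watch?v=EXAMPLE"  # hypothetical video id
    opts = {"outtmpl": "%(title)s.%(ext)s"}  # save as <title>.<extension>

    with youtube_dl.YoutubeDL(opts) as ydl:
        ydl.download([url])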


There are all sorts of ways to do this, but they are difficult in the military context. We gathered in one room because YouTube/Facebook is blocked on our normal office machines. This was an unrestricted podium computer used for totally unclassified stuff like streaming live news feeds. Like most corporate networks, we aren't allowed to just install software; even simple browser extensions have to go through an approval process.


Can you talk a bit about the ambiance and the reactions in the room as the events unfolded? If you can, that is.


There was no reaction. This is normal. The unfiltered internet is full of inappropriately sexual ads.


I mean... reactions to the events that took place at the Capitol.


I love how even the DoD won't pay for YouTube Premium.


> I live in a big city and most of the feed is highly sexualised, highly political, or highly prank-related.

More anecdata: I also live in a big city - London - and I don't see this. I joined TikTok just a few weeks ago and I was surprised at how overwhelmingly positive and fun it all is.


Not a TikTok user, but I know Instagram also fills your Explore feed with the content that your friends like. So even if you're tailoring your likes to appropriate posts, your friends may not be.


That may just be the initial period after a user has signed up and hasn't engaged with enough content for IG to show anything under "Discover". They may use friends' preferences as a placeholder.

I know for a fact that what's on my Discover feed (surrealist art, UX/UI stuff, Newfie dog videos) is unique to me.


> People say things like this in an attempt at shaming the original poster but one thing not considered is the location of the person using TikTok.

That's a good point. I was wondering about the initial seed of the feed.

It wasn't my intention to shame though.

Until TikTok has something like channels or theme/tag subscriptions, it will remain a problem.


I don't think likes are actually a big signal; in my experience it's mostly based on watch time.


Note that you can also hold down on the video and then tap ‘not interested’.


I hadn't found this piece of UI yet, will try and do that too, thanks!


The content your parent comment refers to was (partly) what I saw when I installed it and scrolled through the default suggestions before it had the chance to learn anything about me. Sure there was a lot of harmless and just silly stuff, but the amount of animal abuse like https://twitter.com/noellesdesigns/status/122443946827220992... that scrolled by made me stop using the app.


The OP did say they used it daily for six months, though; that really ought to be enough interaction for the app to personalise to the user's choices.


I don't think it tells you anything about what the commenter likes. When I created a TikTok account it showed me nothing that interested me. I marked videos as "not interesting" for half an hour before giving up.


I got a few animal abuse videos in an hour or so of swiping. It shows you these random videos in case you are interested, and a few of those were enough to make me uninstall. I tried again some months later, and after a while the recommendations got a little better, but there is still the occasional piece of awful content that gets thrown in. In all cases I swiped past as quickly as possible.


Would you trust the algorithms to tailor most content appropriately for a kid?


I think OP's point is that there is plenty of questionable media on TikTok.

Your point about what they like is irrelevant to that.


There is just as much questionable media on YouTube. The difference is TikTok's algorithm works a lot harder to show you more of the content it thinks you will spend more time viewing, even if you are not actively searching for it.

So if OP is seeing "loads" of sexualized content, it's because they spent enough time watching sexualized content before swiping for TikTok to think that's what they want to see, and ensure they see more of it.


> So if OP is seeing "loads" of sexualized content, it's because they spent enough time watching sexualized content before swiping for TikTok to think that's what they want to see, and ensure they see more of it.

Even if the picture you are painting of the previous poster is accurate, being horny once in a while is by no means justification for an endless stream of sexualized content.

That's not a neutral quality of the system but a deliberate feedback loop establishing addictive behavior. We already know the shortcomings of the human condition and should ask for fewer footguns, not more.


I've heard people describe a situation in ML (could be hill climbing) where an optimising algorithm will get stuck in a valley. At that point it will try various things in order to get unstuck, often involving a good dose of randomness. What's to say this didn't happen to OP, or that you really won't start seeing recommendations for Indian train death videos?
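
For illustration (a toy sketch, not how TikTok actually works): plain hill climbing gets stuck at a local optimum, and random restarts are one standard way to inject that dose of randomness.

    import random

    def hill_climb(score, neighbors, start, steps=100):
        # greedily move to the best neighbor; stops when stuck in a "valley"
        current = start
        for _ in range(steps):
            candidate = max(neighbors(current), key=score)
            if score(candidate) <= score(current):
                break  # local optimum reached
            current = candidate
        return current

    def with_random_restarts(score, neighbors, random_start, restarts=10):
        # inject randomness: restart from random points, keep the best run
        runs = [hill_climb(score, neighbors, random_start()) for _ in range(restarts)]
        return max(runs, key=score)

    # toy demo: maximize f(x) = -(x - 3)^2 over the integers
    f = lambda x: -(x - 3) ** 2
    nbrs = lambda x: [x - 1, x + 1]
    print(with_random_restarts(f, nbrs, lambda: random.randint(-50, 50)))  # -> 3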


> What's to say this didn't happen to op or that you really won't start seeing recommendations for Indian train death videos?

We can never be sure of anything. What if HN starts pushing train death videos on the front page and the admins can't tweak the algo because a glitch in the stack prevents them from accessing the backend?


Mine is full of fitness instructors showing off their fitness challenges, dancers showing off their synchronized dance routines, and oddball humour. I had a moment where some BLM stuff trended, but on the whole it seemed to show both sides (albeit somewhat extreme versions of both sides).


If the content is harmful it's harmful...


> It tells more about what kind of content you are liking and watching than anything else.

This sounds very much like victim-blaming, I don't think it's appropriate to side with TikTok on this.


It's not victim blaming. It's widely known that the "magic" of TikTok is that it carefully scans your interactions with content as you consume it, and seamlessly adapts your stream to what it believes you'd like to see. Algorithmic suggestions turned up to 11, and generally as close to mind reading as we've come as a species.


It seamlessly adapts your stream to what it believes will increase “engagement”. That's a different thing to “what it believes you'd like to see”.


TikTok has seemed to present me with shock/outrage content in the name of engagement less than any other social media - far less.

I almost always see content that is only fun.

That has made it much better for long-term engagement, and positive for my mental health, in contrast to other social media's short-term outrage engagement, which is wholly negative.


That means the TikTok algorithm designers are half-way competent.


I intended this in contrast to your point above.

It could mean the algorithm designers have decided that optimising for fun is more effective at increasing engagement than optimising for shock value and conflict.

But this is a different choice from the one other social media designers have made. It seems like there is some more wholesome motivation.


I don't think the other social media designers deliberately optimised for shock value and conflict. I think that was a side-effect of a naïve optimising algorithm: short term (i.e. until people just quit the social media platform entirely), conflict and doomscrolling are two of the most engaging behaviours.


Have you ever used TikTok?


I think there's a problem with this though: people aren't always in the mood for the same stuff. The easiest distinction here is 'inappropriate' content from the rest. Sometimes the user is interested in that content, but not every time they open the app. It might even be that most of the time they don't want to see that content.

I experience this with politics videos on YouTube. Sometimes I'm in the mood for that. One evening I'll watch a few politics videos. The next day, my recommendations are filled with politics. Over time my recommendations get back to normal. Then I'm struck with the politics bug again, but my recommendations don't give me an easy way to find more politics content. They're recommending the type of stuff I'd normally watch. The day after that I'm annoyed again about politics being all over my recommended lists.

It feels like I should have separate profiles based on the topics I'm interested in. The platforms don't seem to really support that, though. Even different users seem to influence each other's suggestions.


This may be true for people with a short attention span. For me personally, sure, I may click on a clickbait article or watch an attractive lady, but it doesn't mean that I even want those suggestions shown.

It's like keeping a lot of sugary chocolate in my apartment: the wise choice is to not buy anything that contains sugar at all, as I can't stop myself from eating it if it's in front of me all the time.


Maybe reptilian mind reading. While we might intuitively be very attracted to sexual and gore content, it doesn't mean that's what we want.


[flagged]


What about when you see the content on first install?


Then it's a problem (for the sake of the argument I consider "the content" to be either illegal or borderline psycho).

And for the record, I consider a lot of TikTok pranks to be borderline psycho (e.g. gaslighting your SO for giggles about divorce, abortion, etc.).


I use the app sometimes. Going through my feed, most videos are DnD or fantasy related; there is no sexualized content, and I haven't seen any misinformation about any elections. The app tailors the feed to your habits and inputs.


Weird, I guess it really does matter where you are geographically, as well as your "likes" bubble.

I did a similar experiment myself and found it to be a very delightful stream of content. While it did have some sexualised content for sure, it wasn't too much in frequency or degree, nothing more than you would see in a normal TV drama anyway.

And it was just so much fun - mostly teens doing amateur choreography, skits, and pranks. I uninstalled it more because it was so effective at capturing my attention than anything else.

I still see teenagers going about trying to mimic one routine or another, and it always makes me smile, as I can see them being creative rather than just consumers. Doing a high-quality YouTube video would be too hard for them, but a quick 10-second video - they feel they can do it and try to participate, which is great in my book.


The app literally makes an algorithmic feed of content that you personally "like". I haven't seen anything like this at all. Mostly dancing and singing people, and comedic sketches.


No, it starts out by showing you popular, random, or slightly profiled content ("you appear to be a male in his 40s, so let's start here"), and then just maximizes the content types that would increase your screen time.

This has nothing to do with your preferences/what you like. It doesn't care if you feel disgust, as long as the app stays open.

Suggestion/auto feeds are designed to abuse human psychology to maximize time wasted, not interest.


If you like something, watch it all the way through, or comment on it, then it tries to show you more things like that. If you skip past content or mark it as not interested, then it tries to show you fewer things like that.

So if you are disgusted by something: Stop Watching It!

Yes, this does optimise for time spent watching, but interest definitely plays into it.
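
As a rough illustration of that kind of signal accounting (hypothetical weights and topic labels - not TikTok's actual system):

    from collections import defaultdict

    # assumed signal weights; watch-through dominates, per the comment above
    WEIGHTS = {"liked": 1.0, "watched_fully": 2.0, "commented": 1.5,
               "skipped": -1.0, "not_interested": -3.0}

    affinity = defaultdict(float)  # topic -> learned score

    def record(topic, signal):
        affinity[topic] += WEIGHTS[signal]

    def rank(candidates, n=5):
        # order candidate (topic, video) pairs by learned topic affinity
        return sorted(candidates, key=lambda tv: affinity[tv[0]], reverse=True)[:n]

    record("dance", "watched_fully")
    record("gore", "skipped")
    print(rank([("dance", "clip1"), ("gore", "clip2")], n=1))  # [('dance', 'clip1')]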


People don't read the news because they enjoy hearing about crime and corruption. They don't read about the Peter Madsen murder case because they enjoy the thought of chopped-up people.

Blaming the user for engaging by subconscious impulse with a machine designed with every psychological trick in the book to force you to do just that, to optimize screen time at any cost, is just wrong. Assigning interest, joy, or even conscious intent to that engagement is very, very wrong.

The only thing you can do is remove the app, but it won't take long before you run into the next abusive machine. And quite frankly, as with any other intentionally addictive interaction, it is not easy to realize the problem and remove it.


Which is a bit like telling a heroin addict to just Stop Doing Drugs!

At our core, we're still stupid monkeys. We cannot help ourselves on an individual level. The platforms must take this responsibility, there is no other way.


One thing that's clear is that these algorithms are here to stay forever. What's not clear yet is what we should do to cope with it. Culture emerges to help people deal with problems - so what cultural factors will emerge to protect us?

I'm starting to think the answer is a sort of "feed hygiene" that we will all have to learn to keep up with. I successfully made my Facebook feed a very calm place by unfollowing toxic friends, hiding posts that made me angry, and joining positive and productive groups.

Maybe we'll be expected to teach our kids how to do this. Maybe it will become part of mainstream internet culture that we know how to curate our own feeds through our activity.


In other words, it shows you more of what you like to watch, not more of what you'd like to like to watch.


No, it shows what you do watch. If you are paralyzed it tries to paralyze you more.


The algorithm does not know whether you're watching horrified or enjoying the video. It can just tell that you stopped scrolling for a few extra seconds.


It is nonsensical to impute anything about a person's character from what shows up in their feed, which is the result of black-box algorithms. This feels like the beginning of a witch hunt.


That sounds like YouTube.


Well, that's just you. After some time TikTok actually learns what type of videos you like to watch and shows you more and more of those. You end up in a bubble. Their algorithm is pretty good at that.

When I check my kids' or my wife's phones, I can see totally different types of videos recommended to them.


This is not a counterargument. What if your kids end up in that bubble?


You’re comparing your wife’s experience with GP’s experience. I’d be curious to see what your feed would look like after some use.


And what if your kid ends up in that bubble? What if your kid likes one animal torture video and then the algorithm decides that they should see that on a daily basis?


Sadly, your experience of reports of offensive content being completely ignored applies universally to other big platforms like Twitter or YouTube. I've seen inappropriate YT content get bombarded with reports and stay up because it generates a lot of clicks. Hypocrisy at its best.


It is actually quite easy, using false reports, to get YouTube to delete content that doesn't violate CG/ToS (doling out a strike in the process).


Yes, last year I had a quick look at TikTok to see if it could be added to my main client's (drinks industry) social media portfolio.

Literally the first recommended video was a young woman in a school uniform - so that's a hard NO!


Some of these comments seem to be complaining that

a) snakes gotta eat, and b) children often wear school uniforms.

The reason you should want to keep children and alcohol apart is that alcohol is a drug, not that you think schoolchildren are inherently sexualised.


Obviously a kids app is a bad place to sell liquor.


Why is a girl in a school uniform a hard no? Sorry, must have missed some key piece of information here.


Not something you can advertise alcoholic drinks alongside, at least in most countries.


Ahh, that makes more sense.


Maybe she accidentally


The stereotype of the Japanese schoolgirl might well be problematic - the sexual element was what I was hinting at.


> I wouldn't allow my kids to use this app in a million years.

And how would you ensure / enforce this? Honest question, because I'm in that situation right now.


(IANAP -- this is not parenting advice ;) I speak mostly as a newly-minted adult who still remembers what it was like to be a child, when my world was small, so small decisions felt very important, especially when they were made without my input)

Children are people too, just with less experience. Your goal really isn't just "ban TikTok", it's "raise my children to develop healthy habits on their own and to recognize when they are being manipulated."

So, even if you ultimately decide to enforce parental controls, I hope you will bring your children into the decision-making process. Have an honest dialogue with your children about your concerns, and develop a space where they are free to share their own thoughts and feelings without fear of judgement. It's important that your mind is not already made up before you sit down at the table, as they'll sense it immediately. Repeat this often, and make it easy for anyone (yourself included) to express that their perspective has changed. Set a good example (by e.g. not using social media yourself) and apologize when you are wrong, so that your children learn that it's safe to do the same.

Encourage them to pursue healthier alternatives. Allow them to experiment and learn from their mistakes.


Upvoted for solid parenting advice.

I pretty quickly realized that kids, even little ones like 3-year-olds, are far smarter than you think. They are basically little humans without much experience, and that means not-great judgement.

So as a parent it’s important to make sure they are exposed to things they understand and can process. Things that are age appropriate. If something isn’t appropriate, well, you try and explain why (as best as they can understand).


Thanks for your comment; I really appreciate you pointing out "learn why".

I try to explain to my 7-year-old the _reason why_ for many things, whenever I can. The key is to not push it when she loses interest. If it's important, we just end up talking about it again at a later date. There's only so much input a child (or adult) can accept before the buffer is full and needs to be mapped. It takes time to put into context.

Also accept that some things can't really be explained to everyone. A guy on YT with a channel on self-defense said, "if you are not a violent person, you'll never understand violence", and I think it has some bearing. Sure, I can certainly understand that, in the case of X, he was beaten as a child and therefore may see violence as a way to handle his feelings, but that doesn't explain "why did X beat up Y unprovoked last Wednesday". There's rarely a single causality that explains things like that.

And ads, YT videos, and social media are most often manipulative - e.g. the "YouTube face", excessive reaction videos, etc. (No, she has no access to social media.)


Exactly this! I was hoping to get more suggestions along these lines. Because the first thing I thought was that if I hard-ban something they'll just learn how to do it without me knowing, especially if it is something their peers are doing.

So, we need to "gracefully" ban this; that is, if it needs banning at all.

The alternative is to embrace it and show how to use it properly.

And then there's its addictive nature - the app is designed and tuned to provide dopamine hits; how do we fight that?


> Because the first thing I thought was that if I hard-ban something they'll just learn how to do it without me knowing, especially if it is something their peers are doing.

Yeah, exactly! Growing up in this age, it's impossible to avoid online interactions of some kind. So, it's better if children learn how to use technology / media / internet responsibly in a controlled environment. You're like the guard rails in a bowling alley :)

One approach might be to let them use TikTok or whatever app is popular, and just try to get them to learn to self-identify 1) how long they spend on the app, 2) how using the app made them feel, and 3) whether or not time on the app took time away from something else they might enjoy. Help them learn to identify the positive aspects and the negative aspects of the platform, and only consider a full ban if you start seeing extremely problematic patterns.

If you have any personal self-improvement goals, it might also mean a lot to your kids if you make a habit of sharing your progress with them (in a way they can understand at their age).

Good luck :)


I use the Family Link feature on Android, so my kids have to ask for permission before they install a new app.


And then they will just use the phone of a friend with no such policing in place, or use a publicly available client with no such controls.

When I have such discussions with friends and family, I tend to say that I'll rely more on trying to make my kids understand what they are doing than on trying to police what they are using. Not saying that I won't do some policing on their devices, but I won't rely on just that.


Neither one works flawlessly in isolation. You can explain to a child why you don't want them doing something until you're blue in the face, but in all likelihood they're going to want to do it more. The backup is then to block the app on their phone.

Sure, they can still use their friends' phones, but you can only protect a child so much. You've made a good start by explaining the issue and banning the app, but you can't track them all day to make sure they're not doing it elsewhere.


Instead of focusing their attention on it by telling them how bad it is, give them something else to do and introduce them to other children raised with similar values.


You can do that all you like; you can give them all the hobbies they could ever need to keep busy. But eventually one of their friends is going to share the app with them when they're playing together.


> and then they will just use the phone of their friend with no such policing in place or use a publicly available client with no such controls in place

That means they won't have that content pushed on them for hours, either before sleeping or while you work and they are at home due to lockdown.

They will have access to it only while they are with that friend.


I deploy a novel virus that makes it impossible for them to get near a friend's phone.


Both iPhone and Android have parental protection that allows you to block apps (though on Android it might be too easy to circumvent)


On my home network, I run a transparent MITM squid proxy with a whitelist for my 8-year-old son. My intention is to gradually loosen the restriction as he gets older, in stages: first switch it to a blacklist, then lift all restrictions but continue to log, then remove the proxy altogether and allow him free, unmonitored internet access. I will change it at whatever times seem appropriate - I'm new to this, ofc, and I don't know when that will be.
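
For the curious, the whitelist part of a setup like that can be just a few lines of squid.conf (a sketch; the file path and domains are examples, and the transparent/TLS-interception side needs additional directives not shown here):

    # /etc/squid/squid.conf -- whitelist-only browsing (sketch)
    acl kids_sites dstdomain "/etc/squid/whitelist.txt"  # one domain per line, e.g. .wikipedia.org
    http_access allow kids_sites
    http_access deny all  # everything not on the list is blocked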

I have no intention of letting him have a mobile device any time soon, but if and when I feel it's appropriate/necessary, I have the option of giving him a "managed" device (i.e. like a corporate device) with always-on wireguard to my home network, routing traffic through the same proxy. I haven't tried setting up a device like this yet, but the necessary capabilities appear to be present in both iOS and Android. This will also allow me to control which apps can be installed on the device.

I am of course entirely open with him about this, including the technical aspects of how it works, and frequently discuss all of the many issues involved. My goal is not to hide reality from him or to instill some unreasonable fear of what's out there - quite the opposite. It's to try and help him arrive at a healthy relationship with the internet as an adult, something that most adults I know (including me) have so far failed to establish.

And yes, of course it's possible to circumvent all of this stuff (although quite a lot harder with what I have than with the vast majority of parental control solutions). And yes, I can only control the technologies he has access to that I manage. But you have to consider the "threat model" here. He doesn't rail against this restriction. He understands it. If he wants access to something he asks for it. If I say no, I explain why, and he accepts it. We'll see how that develops over time, but it's certainly not the case that "It's technically possible to circumvent it, ergo there's no point doing it".

There's a strangely defeatist attitude I see about this, often voiced alongside a false dichotomy: that what we need to do is teach our children responsibility instead of using technology to protect them. As I see it, both are needed - and the latter, while difficult, is possible. Unfortunately it currently requires skills that are far from universal. It would be much easier if people took the need for it seriously and developed better technologies for it.

There are people on this post saying, about dubious content on TikTok, that you "only see it if you like it". That's not good enough. Internet technologies lead you on in subtle ways. As an example, my son is massively into Lego. When he was 6 he discovered Lego videos on YouTube and started watching them on our smart TV. After a while I realised that all of the models he was making were weapons, mostly guns. It reached a particularly bizarre moment when he handed me an (awesome, obviously!) Lego butterfly knife he had made. I checked the videos he'd been watching and all of them were Lego weapon tutorials. Now, my reaction here isn't "omg weapons how horrible!". Not at all. It is, however, to note that through YouTube's algorithms a general interest in Lego became laser-focussed on one, perhaps slightly dubious, genre of models.

Among other things, I want my son to understand this kind of subtle shaping/guiding influence that technology can have on its users, and until I feel he has developed sufficient awareness I want to be in a position to know and to intervene if I think he's being led by it in directions I don't approve of. At this stage in his life, I feel that is my responsibility and it would be wrong of me not to at least attempt to live up to it.


Don't mobile phones have decent parental controls?


Other kids' phones may not.


No. They are pretty crappy.


Why... why did you use it every day for 6 months?

I got bored of it on day 3


It's designed so you'll use it every day for 6 months.


Such a different experience. I keep telling everyone how Tik Tok is the most positive social media experience. All I see is people dancing and comedy sketches. My wife sees some more "mom" jokes, but still the same overall feel.

I've never seen anything like what you're describing. Amazing how the algorithm can make experiences completely different.


I only joined to follow some people from other platforms, so my recommendations were pretty much immediately affected, but stuff from my region has been creeping in lately. Just curious, do you get the feeling that those types of things might be of common interest where you are?


What on earth were you doing to get suggested that stuff? I distrust TikTok as much as the next paranoid computer guy, but my wife uses it and it's tailored around her interests - almost uncannily.


This would indicate that you liked one or more videos containing misinformation about the election, gruesome videos of people feeding live animals to their pet snakes and other animals, and sexualized content.


The app is full of videos of what people want to see. It's not for you, I guess (or for me)


[flagged]


I don't know why you are being downvoted; we already know that the country of origin is manipulating content to deliver certain messaging:

https://www.businessinsider.com/tiktok-censorship-users-deem...

We already know it's a massive privacy risk, deemed 'far more abusive' than American social platforms, which are already bad enough:

https://www.wsj.com/articles/tiktok-user-data-what-does-the-...

A piece of software being from China makes it an instant no for me.


Because an equivalent app is also used in China, where it's wholly illogical that the government would allow an app dedicated to "corrupting the youth".


In fairness, TikTok and Douyin (mainland China’s version of the app) have no access to each other’s content. It’s totally possible that the CCP allows/mandates different managing styles for the different markets.


TikTok and Douyin are made by the same company, but it appears they don't share content across networks. You're comparing apples and oranges.


TikTok is 90% you get what you consume - more so than even Facebook. If you consume right-wing content, you will get right-wing content. You consume queer content, you will get queer content.

This is sorta like complaining that you go to Reddit every day and all you see is /r/the_donald and /r/gore. Well, if you didn't subscribe (consume, in TikTok's case), then you wouldn't see very much of it.



