I do suspect that a lot of the alleged rabbit-hole effect comes from people deliberately picking the recommended videos that are more extreme than the one they're currently watching. But I'm interested in putting this theory to the test.

I'm actually thinking of making a script or an extension that automates, say, 10, 25, or 50 random clicks within the top 10 recommendations and sees where it ends up. Maybe such a tool already exists.
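Something like this would be my starting point. It's a minimal sketch that assumes the watch page still embeds recommended video IDs as "videoId" fields in its initial JSON payload — a brittle scraping heuristic, not a stable API — and a logged-out request won't reflect anyone's personalized recommendations, so treat it as a proof of concept only:

    import random
    import re
    import requests

    WATCH_URL = "https://www.youtube.com/watch?v={}"
    # Video IDs are 11 chars of [A-Za-z0-9_-]; the watch page embeds
    # recommendations as "videoId":"..." entries in its initial JSON blob.
    VIDEO_ID_RE = re.compile(r'"videoId":"([A-Za-z0-9_-]{11})"')

    def top_recommendations(video_id, top_n=10):
        """Heuristic: fetch the watch page and take the first distinct
        video IDs that aren't the current one. Brittle by design."""
        html = requests.get(WATCH_URL.format(video_id), timeout=10).text
        recs, seen = [], {video_id}
        for vid in VIDEO_ID_RE.findall(html):
            if vid not in seen:
                seen.add(vid)
                recs.append(vid)
            if len(recs) == top_n:
                break
        return recs

    def random_walk(start_id, steps=25, top_n=10):
        """Click a random video among the top-N recommendations, `steps`
        times, and return the trail of visited video IDs."""
        trail = [start_id]
        for _ in range(steps):
            recs = top_recommendations(trail[-1], top_n)
            if not recs:  # scrape failed or markup changed
                break
            trail.append(random.choice(recs))
        return trail

    if __name__ == "__main__":
        for vid in random_walk("dQw4w9WgXcQ", steps=10):
            print(WATCH_URL.format(vid))

Running it from a few different seed videos — and ideally from a logged-in session, e.g. via a browser extension rather than raw HTTP — and then eyeballing the trails would be the actual experiment.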




I would love to see such an analysis, hehe :))

I have to admit that the problem itself is not easy to define precisely:

E.g., if I'm watching a video about "people-that-hate-bananas", would it be OK for the recommendations to show only videos like "why-bananas-are-bad", or should I (ethically) also get videos of the type "are-bananas-really-bad?" or "these-are-good-bananas!"?

Using the above example: if YouTube (and similar sites) show only recommendations of the first kind (which is easy to implement), then yes, I go down the rabbit hole (the feed reinforces/confirms the idea "bananas-are-bad") even if the individual recommendations are no more extreme than the original video. So even recommendations at an "equal" level might have to be evaluated at least partially negatively, since in this example they don't promote alternative points of view, alternative thinking, or any search for nuance or compromise, if you see what I mean.

In any case, evaluating the results might be very hard if you want to do it automatically and actually capture the meaning (e.g., it's genuinely hard to tell whether "good-bananas-are-bad" is negative or positive towards bananas)... :P (Maybe if you manage it you'll become a Nobel candidate, hehe, or maybe I'm underestimating the current power of AI, which is entirely possible.)
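Automating that stance judgment is indeed the hard part. A crude first attempt, as a sketch: zero-shot classification with Hugging Face's transformers library. The model choice and the candidate labels here are just assumptions for the banana example, not a validated methodology:

    from transformers import pipeline  # pip install transformers

    # Zero-shot stance labeling of video titles: a crude proxy for the
    # "does this recommendation reinforce or challenge the view?" question.
    classifier = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")

    titles = [
        "why bananas are bad",
        "are bananas really bad?",
        "these are good bananas!",
        "good bananas are bad",  # the ambiguous case from above
    ]
    labels = ["against bananas", "in favor of bananas", "neutral or questioning"]

    for title in titles:
        result = classifier(title, candidate_labels=labels)
        print(f"{title!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")

No Nobel guarantees: a model like this will stumble on exactly the "good-bananas-are-bad" case, which is sort of the point.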



