This is a great question, but it's hard to address fully in a comment. I'll have a go (in brief) :)

There are biases in all algorithms, because every algorithm encodes judgement somewhere, whether in its design or in the selection of its training data.

So our aim here is to control for that bias and take some steps towards giving the user more control, in contrast to an approach like Facebook's, where the algorithm is entirely a black box.

The approach we've taken is, effectively, to try to codify editorial judgement and professional journalistic best practice into both the system itself and the selection of its training data. As well as being a programmer / computer scientist, I'm a professionally trained journalist and was editor of Australia's leading computer magazine.

The knobs aren't direct representations of model predictions themselves; they're weighted summary scores computed from a number of lower-level predictions (from statistical models, ML/DL models, and ensembles). The personas/moods/filter bubbles draw more on the individual attribute predictions.
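
To make that a bit more concrete, here's a minimal sketch of how a weighted summary score can be computed from lower-level predictions. The attribute names, weights, and values are entirely made up for illustration; the real system uses many more predictions and different weightings:

    # Hypothetical sketch of a weighted summary "knob" score.
    # Attribute names, weights, and values are illustrative only.

    # Lower-level predictions for one article, each in [0, 1],
    # produced by separate statistical/ML models or ensembles.
    predictions = {
        "source_quality": 0.82,
        "author_quality": 0.74,
        "sensationalism": 0.31,   # higher = more sensational
        "opinion_vs_fact": 0.22,  # higher = more opinionated
    }

    # Editorial weights encoding which attributes matter for this knob.
    # Negative weights penalise undesirable attributes.
    weights = {
        "source_quality": 0.4,
        "author_quality": 0.3,
        "sensationalism": -0.2,
        "opinion_vs_fact": -0.1,
    }

    def knob_score(preds, wts):
        """Weighted summary score, normalised back into [0, 1]."""
        raw = sum(wts[k] * preds[k] for k in wts)
        lo = sum(min(w, 0.0) for w in wts.values())  # worst possible raw score
        hi = sum(max(w, 0.0) for w in wts.values())  # best possible raw score
        return (raw - lo) / (hi - lo)

    print(f"quality knob: {knob_score(predictions, weights):.2f}")

The normalisation step is just so every knob reads on the same 0-to-1 scale regardless of how its weights are chosen.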

Having said that, in practice it's all a bit of an experimental mish-mash at the moment, and we still have a lot of work to do. Some predictions are way more effective than others, as you can see by browsing through it, and others (like source and author quality and attribute prediction) are learning and improving over time. But we have a lot of iteration and experimentation ahead of us!
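
On the "learning and improving over time" point: purely as an illustration (this is a textbook pattern, not necessarily our implementation), a source-level quality score can be updated incrementally as article-level predictions arrive, e.g. with an exponential moving average:

    # Illustrative only: an exponential moving average is one standard
    # way to let a source-level quality score learn over time from
    # per-article predictions.
    class SourceQuality:
        def __init__(self, alpha=0.05, prior=0.5):
            self.alpha = alpha  # how fast new evidence moves the score
            self.score = prior  # neutral starting point in [0, 1]

        def update(self, article_quality):
            """Fold one article-level prediction into the running score."""
            self.score += self.alpha * (article_quality - self.score)
            return self.score

    source = SourceQuality()
    for q in (0.9, 0.8, 0.85):  # hypothetical per-article predictions
        source.update(q)
    print(f"running quality estimate: {source.score:.3f}")

A small alpha makes the score stable against one-off outliers while still drifting towards a source's true quality as evidence accumulates.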

So far, the initial feedback has been that the predictions are surprisingly good, though sometimes they are way off the mark.



