Building Better Algorithms Requires Human Judgment and Values (continuations.com)
35 points by Osiris30 on Aug 30, 2016 | 5 comments



I agree with the article, but reading it reminded me a bit of this passage from a (late-1970s?) book detailing why computers will never beat humans at chess: https://twitter.com/mrspeaker/status/751493208299929600


Obviously. All algorithms come from human judgment. There is no such thing as "AI". It is all human intelligence (or to be more precise, developers' intelligence), after all.


The idea of the "Opposite View" is so naive that I can't really believe anybody would imagine it can convince anyone of anything. Have they never had an argument in real life?

It also seems to ignore something even more fundamental: people plain and simple do not like engaging with the "opposite view". Cognitive dissonance is painful. People will always avoid it.

Am I advocating people stay in their "bubble"? Kind of.

In fact, it's become a fairly firm belief of mine that the social network to upend Facebook et al. (if any) will be based on a) creating bubbles around different groups according to "mood affiliation" or some other sort of behavioral likeness, and b) creating a smart topology around those groups to minimize intra-group infighting while maximizing overall information serendipity (i.e. filtering "opposing view" information to provide me with just the right information that I'm missing to complete my personal view, not the crap that's going to enrage me).

The underlying issue that never seems to be fully explained here, and the reason why I'm bullish on the "Balkans" model of social networks (a nicely coined term, though hardly a neutral one), is simple: bounded rationality. Unless we come up with brain augmentation devices that let us consume ever more information, we're limited in how much information we can take in, and filtering it intelligently is the only way to maximize its utility. Ergo, devoting half your information consumption to the other side (assuming there are only two sides...) works only if it provides more value than consuming more information from your own side. So far, I've never seen any compelling reason why it should (to echo a deleted comment, I can't see how reading an InfoWars article after reading a HuffPo editorial is going to do much for me). So there will always be a form of "observable information universe" with boundaries defined by our capacity to ingest information.

It seems to me many people have a huge blind spot/bias against "filter bubbles" because they refuse to even entertain the idea that they could be incredibly valuable. That's probably because most of the material written on the subject so far (e.g. the Eli Pariser book) is hopelessly simplistic and seems to ignore how people actually behave w.r.t. the information they consume (to summarize: analyse everything through the lens of "viewpoints" and "opposing viewpoints", politicize all information consumption, etc. What if I'm mostly reading math articles on the web? Do I need to read the ones that claim that 2+2 != 4?).

Now, although the "Opposite view" idea seems, frankly, dumb to me, there may be a way to make it a lot more interesting: give me the "Nearest but slightly different view". That to me would be a far more serious contender for broadening people's view of a topic effectively.

And I have a hunch that such a thing likely won't be shared by someone with an "Opposite viewpoint". It will come from within the "filter bubble".
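
For concreteness, here is a minimal sketch of what "nearest but slightly different" selection could look like, assuming content has already been scored on a single position axis (the function, thresholds, and articles below are made up for illustration):

    # Hypothetical sketch: pick the "nearest but slightly different" item
    # rather than the maximally opposite one. Assumes each candidate has
    # already been assigned a position score in -1.0 .. 1.0; the scoring
    # and the articles below are invented for illustration.

    def nearest_but_different(my_position, candidates, min_gap=0.1, max_gap=0.3):
        """Return the candidate whose position differs from mine by a small,
        non-zero amount: close enough to engage with, far enough to add
        something new."""
        eligible = [c for c in candidates
                    if min_gap <= abs(c["position"] - my_position) <= max_gap]
        if not eligible:
            return None
        return min(eligible, key=lambda c: abs(c["position"] - my_position))

    articles = [
        {"title": "A", "position": -0.8},
        {"title": "B", "position": -0.2},
        {"title": "C", "position": 0.15},
        {"title": "D", "position": 0.9},
    ]
    print(nearest_but_different(0.0, articles))  # -> {"title": "C", ...}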


I hacked together a (very, very, very rough) "opposing view reader" for myself before I had heard of this idea; it pulled data from Pew's 2014 political spectrum of news sources (http://www.journalism.org/interactives/media-polarization/) and served me a randomized news source each day, weighted on a bell curve around where I already was (so I mostly got "nearest but slightly different views"). The problem was... it was boring, not very useful for staying informed, and I basically just ignored it every time Fox News showed up.
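
For what it's worth, the weighting was conceptually just a Gaussian centered on my own position; a rough sketch of that part (with placeholder outlets and scores, not the actual Pew data) looks something like:

    # Rough sketch of the daily picker described above: sample one outlet,
    # weighted by a bell curve centered on my own position on a -1.0 .. 1.0
    # spectrum. Outlets and scores are placeholders, not the Pew 2014 data.
    import math
    import random

    sources = {
        "Outlet A": -0.9,
        "Outlet B": -0.4,
        "Outlet C":  0.0,
        "Outlet D":  0.5,
        "Outlet E":  0.9,
    }

    def pick_daily_source(my_position, sources, sigma=0.3):
        """Sample a source with probability proportional to a Gaussian
        centered on my_position, so nearby outlets dominate but distant
        ones still show up occasionally."""
        names = list(sources)
        weights = [math.exp(-((sources[n] - my_position) ** 2) / (2 * sigma ** 2))
                   for n in names]
        return random.choices(names, weights=weights, k=1)[0]

    print(pick_daily_source(-0.4, sources))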

There needs to be some way to incentivize people to actually read the "opposing view" and take it seriously; in my experience, most people who claim to be reading the opposing view are still using it to prop up their own conclusions, and not dealing with it on its own terms. The abstract benefits that people talk about from seeing diverse media are always outweighed, on an individual level, by how annoying it is to listen to people who seem obviously wrong!

I'm very glad I did debate in high school; as weird and silly a world as that is, it does provide a legitimate, strong incentive to understand both sides of an argument well enough to convince a third party, and practicing at it made me much more skeptical and (I like to think?) nuanced. I don't think there's any way to move an incentive like that to "real life," but it is a very interesting exercise to try to understand another viewpoint well enough that you could convince someone of it, even if it's hard to take the exercise seriously without a shiny trophy riding on it.


That's a very interesting point you are making. It would seem true if one establishes that any off-the-personal-baseline analysis by human wetware costs more the further the analysed position is from one's own.

Your idea would be to sneak in notions that are outside of a personal, or group, bubble. Do it gently and slowly. It would work if the positions were presented with known and familiar analogies. In some setups that could work; we think in analogies.

One could argue that that's what we do all the time: convincing ourselves and others of the value of existing or new positions while ever so slightly changing our own. Do we want to dress that process up with machine learning, though?



