
> Lee Sedol is also a black box, no?

Last time I checked, we can talk to Lee Sedol and ask him to explain things. We can ask him questions. We can have an intelligent conversation with him.




Human explanations for their decisions are often rationalizations after the fact. The explanations don't necessarily accurately represent how the decisions were actually made. Most decisions are made subconsciously, based on intuition and emotion. So that intelligent conversation might not have any real significance.


Ya, that was my thinking. And with the sort of hybrid, constrained neural net setup we're discussing here, you could likewise 'discuss' the constraints, the inputs, perhaps even the thought process to some extent. But like a human, it couldn't tell you the exact causal path taken to arrive at the decision.


We often don't even have appropriate language for many decision processes. See: the research into people who do vs. don't have an internal monologue (a virtual voice, basically) when reading and thinking, and its associations with creative thought.

N.B.: When I was younger I had no idea that others did this, and thought they were fucking with me when they described it.


My general point applies to human thinking in general, not just Go.

Example: Would you fly on a plane designed ultimately by a human vs an impenetrable black box?

Also there is a spectrum. Let us not pretend otherwise.

1. One end: No explanations.

2. Middle: Sometimes false explanations and sometimes true explanations.

3. Other end: Always true explanations.

Are we really saying the middle is completely useless?


Yes, I would fly in any large airplane that is properly certified for scheduled commercial airline service, regardless of how it was designed. The FAA has earned our trust and has a good safety record, so if they tell me the design is satisfactory then I would believe them. I also wouldn't take the risk of flying in any non-certified experimental aircraft, again regardless of who or what designed it.

We have no way to determine whether an explanation is true, false, or simply a post-hoc rationalization. We like to believe that we can, but we're just fooling ourselves.


> We have no way to determine whether an explanation is true, false, or simply a post-hoc rationalization

If we have no way of determining whether something is true or false, then I can say the same thing about your own statement quoted above. I can just say it is false and go on with my life.

I sincerely hope you realize the obvious self-contradiction :D

Logic 101. :)


There's no self-contradiction. I never claimed that we have no way of determining whether something is true or false. I only claimed that we have no way of determining whether the explanation a person gives for how he made a decision actually matches his real mental process or motives. We can't yet install a debugger with breakpoints and variable watches in the human mind; it's very much a black box.

Logic 201. :-)


Logic 101. :)

> I never claimed that we have no way of determining whether something is true or false

Everything can be cast as a declarative statement, including whether an explanation matches the real mental process behind it:

Matches(givenStatement, actualIntention).
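
Spelled out as a minimal sketch (the Matches predicate is just the shorthand above, not anything standard):

  Your claim: for every explanation e and actual intention i,
              we cannot determine the truth of Matches(e, i).
  But Matches(e, i) is itself a declarative statement, so the claim
  asserts there are statements whose truth or falsity we cannot
  determine. That is exactly a claim about determining true vs. false.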


The Black Box that can invent stories to give the illusion it understands its subconscious processes? How comforting.


I remember an article here on HN about somebody who trained a neural network to explain the decisions of another NN. I think it's fitting :)


Do we really need to understand the whole stack that goes into a decision?

That means we have to start with physics.



