Last time I checked we can talk to Lee Sedol and ask him to explain things. We can ask him questions. We can have an intelligent conversation with him.
Humans' explanations of their decisions are often rationalizations after the fact. The explanations don't necessarily represent how the decisions were actually made. Most decisions are made subconsciously, based on intuition and emotion. So that intelligent conversation might not have any real significance.
Ya, that was my thinking. And likewise with the sort of hybrid, constrained neural-net setup we're discussing here, you could 'discuss' the constraints, the inputs, perhaps even the thought process to some extent. But like a human, it couldn't tell you the exact causal path taken to arrive at the decision.
We often don't even have appropriate language for many decision processes. See: research into those that do vs. don't have internal monologues (virtual voice, basically) when reading and thinking, and associations to creative thought.
N.B.: When I was younger I had no idea that others experienced those and thought they were fucking with me when they were describing this.
Yes I would fly in any large airplane which is properly certified for scheduled commercial airline service, regardless of how it was designed. The FAA has earned our trust and has a good safety record so if they tell me the design is satisfactory then I would believe them. I also wouldn't take the risk of flying in any non-certified experimental aircraft, again regardless of who or what designed it.
We have no way to determine whether an explanation is true, false, or simply a post-hoc rationalization. We like to believe that we can, but we're just fooling ourselves.
> We have no way to determine whether an explanation is true, false, or simply a post-hoc rationalization
If we have no way of determining whether something is true or false, then I can say the same thing about your own statement quoted above. I can just say it is false and go on with my life.
I sincerely hope you realize the obvious self-contradiction :D
There's no self-contradiction. I never claimed that we have no way of determining whether something is true or false. I only claimed that we have no way of determining whether the explanation a person gives for how he made a decision actually matches his real mental process or motives. We can't yet install a debugger with breakpoints and variable watches in the human mind; it's very much a black box.