The thing about neural nets is that they are pretty opaque from an analyst's point of view. It's hard to figure out why they do what they do, beyond the fact that they have been trained to optimize a particular cost function. I think Strong AI will never happen because the people in charge will not hand control to a system that makes important decisions without explaining why. They will certainly not hand control of the cost function to a strong AI, because control over the cost function is the axis on which all power will rest.
Our lives are dominated by systems we don't understand. I have some understanding of how my cell phone works at the software level, but when it comes to the details at the hardware level I just trust that the electrical engineers knew what they were doing. I have virtually no understanding of how the engine in the bus operates beyond what I learned in thermodynamics 101. Sure, you might say - someone understands these things. But for some systems, it's hard to pinpoint those people. And for other complex systems, like the stock market, nobody really understands or (completely) controls them. But we still use them every day. I think once AI becomes useful enough, people will gladly hand over control.
Maybe my understanding of neural networks is wrong... but I'm under the impression they work from weighted criteria: inputs are combined according to learned weights, and the answer with the most weight behind it is selected as the most likely. A well-trained neural network has seen enough data to weight the options and pick with high accuracy.
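For what it's worth, here's a minimal sketch of that intuition in Python (a made-up toy, not a real trained network): each candidate answer gets a score that is a weighted sum of the inputs, and the highest-weighted answer is picked as the most likely.

    import numpy as np

    rng = np.random.default_rng(0)

    features = np.array([0.2, 1.5, -0.3])          # made-up input features
    weights = rng.normal(size=(4, 3))               # 4 candidate answers x 3 features (stand-in for learned weights)
    biases = rng.normal(size=4)

    scores = weights @ features + biases            # each answer's score is a weighted sum of the inputs
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax: turn scores into "likelihood" weights

    print("probabilities:", probs)
    print("chosen answer:", probs.argmax())

Real networks stack many of these weighted layers with nonlinearities in between, which is exactly why the result is so hard to inspect.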
Then again, this is essentially black magic to me:
A trained neural network is like a horrible, huge spaghetti-code ball you've inherited after a programmer was run over by a bus, and that for some miraculous reason happens to work mostly correctly.
However, you won't be able to understand why or how it works. That also means you won't be able to modify/improve/fix it using systematic methods, only trial and error, and it will be 'error' most of the time.
This is a common criticism. However, almost all ML methods have some built-in heuristic choices that are the result of finding something that both works and is mathematically nice. Each of these choices restricts us to some family of functions where it's hard to justify why it's really relevant to the problem at hand, e.g. convex loss functions (l1, l2, ...), convex regularizers (l1, l2, ...), Gaussian priors, linear classifiers, some mathematically nice kernel functions, etc.
In the end, people usually statistically estimate the performance of the methods and use what works.
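To make that concrete, here's a hedged sketch (Python, with random made-up data) of how those "mathematically nice" choices show up in a typical objective: a linear model with a convex squared loss and a convex l2 regularizer, chosen partly because the resulting problem even has a closed-form minimizer.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 5))    # 100 samples, 5 features (made up)
    y = rng.normal(size=100)
    lam = 0.1                        # regularization strength, yet another heuristic choice

    def objective(w):
        residual = X @ w - y
        # convex (squared) loss + convex (l2) regularizer
        return (residual @ residual) / len(y) + lam * (w @ w)

    # Because both terms are convex and differentiable, the minimizer has a closed form:
    w_star = np.linalg.solve(X.T @ X / len(y) + lam * np.eye(5), X.T @ y / len(y))
    print("objective at minimizer:", objective(w_star))

None of those choices is justified by the problem itself; they're justified by tractability and by the fact that, empirically, they tend to work.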
It may be the case, though, that companies that relinquish control to neural nets will get better results than companies that don't. In fact, there's a winner-take-all effect in many markets, so in those markets even a slight improvement over humans would yield massive benefits, rapidly pushing human analysts out of the market.
That's the (morally neutral) wonder of the market--it'll beat ideological or emotional objections into the ground, for better or for worse.
And sooner or later, someone might start a company where all decision making is performed by a neural net...
I have kind of drifted into the transhumanist camp: a future where humans are enhanced by all the smart sub-AI problem solvers, but where humans generally make the decisions at the end of the day. Also, I think another problem is that, for strong AI to exist, we aren't sure what "objective function" the AI should be working toward.
I remember wanting to train a neural net for my MSc thesis more than 20 years ago, but my tutor recommended against doing so for precisely this reason, i.e. he said it is very difficult to prove your results. While not being able to prove your results might be a bad idea if you're trying to get your MSc, I don't see it holding back other advances.