I think the misunderstanding comes from confusing "strong AI" with "human-like AI". It may be impossible for a meatbrain to build another meatbrain-like AI, but if we are willing to compromise on that and settle for AI that functions at or above meatbrain level in some of its functions, we already have a couple of examples of very clever parts we could, eventually, combine.
A recent post on LessWrong, http://lesswrong.com/lw/3gv/statistical_prediction_rules_out... , suggests that won't help much, at least for most people. Even when they have superior tools and information available, most people prefer their own, inferior, judgement.
>If this is not amazing enough, consider the fact that even when experts are given the results of SPRs, they still can't outperform those SPRs (Leli & Filskov 1985; Goldberg 1968).
>So why aren't SPRs in use everywhere? Probably, we deny or ignore the success of SPRs because of deep-seated cognitive biases, such as overconfidence in our own judgments. But if these SPRs work as well as or better than human judgments, shouldn't we use them?
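For concreteness, the SPRs in that literature are often nothing fancier than a simple linear rule over a few cues. Here's a rough Python sketch of a unit-weighted rule in that spirit; the cue data and function name are made up for illustration, not taken from the post or its sources:

```python
# Toy sketch of a unit-weighted SPR: z-score each cue, sum with equal
# weights, and rank cases by the resulting score. (Illustrative only;
# a real SPR would use cues validated for the prediction task.)
import numpy as np

def unit_weight_spr(cues: np.ndarray) -> np.ndarray:
    """cues: shape (n_cases, n_cues). Returns one score per case."""
    z = (cues - cues.mean(axis=0)) / cues.std(axis=0)
    return z.sum(axis=1)

# Three hypothetical cases rated on three cues.
ratings = np.array([
    [3.0, 4.0, 2.0],
    [5.0, 2.0, 4.0],
    [1.0, 5.0, 3.0],
])
print(unit_weight_spr(ratings))  # highest score = predicted best outcome
```

The striking part is that something this mechanical is what experts reportedly fail to beat even after seeing its output.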