> No computer will ever demonstrate actual intelligence, which is the ability to bring a new and unforeseen solution to an existing problem. I need only cite a few examples to show this is true: Uber cars run over people. Facial recognition claims black people are criminals just for being black. Facebook's AI software lets in fake news.
The burden of proof is not on your claim but on Kurzweil's. Still, since you made a claim and tried to back it up, I feel compelled to point out that your argument is a non sequitur: the examples show that current systems fail, not that no computer ever *can* demonstrate intelligence. Just because something hasn't been done before doesn't mean it's impossible; many people said going to the moon was impossible. Russell's Teapot is also worth a look here.
In general, that which nature has demonstrated can usually be replicated: a bird (flight), a floating log (ships), a fish (submarines), an asteroid (space travel), and so on. Nature has demonstrated intelligence: a human.
However, just as nature has not demonstrated superluminal travel, it has not demonstrated super-intelligence; so that remains an open question.
The AI singularity is often assumed to require each AI to be smarter "per entity" than a human, but it doesn't really require that, as long as human-level intelligence continues to scale when you speed it up and add more entities working together. In that case we could still get a singularity "just" by reaching a level where an AI can optimize its own performance and resource use, so that it eventually beats us on the sheer number of "human-brain-second-equivalents" dedicated to self-improvement at any moment.
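To make the "human-brain-second-equivalents" argument concrete, here is a toy model. All numbers (speedup, copy count, gain per cycle, size of the human research force) are illustrative assumptions, not estimates from any source; the point is only that compounding speed-and-parallelism gains can overtake a fixed human workforce even when no single instance exceeds human level.

```python
# Toy model: an AI fleet's effective research capacity, in
# "human-brain-second-equivalents" per second, is speedup * copies.
# Each self-improvement cycle multiplies that capacity by a modest
# factor; compounding lets it overtake a static human research force
# even though each instance stays at (or below) 1x human level.

def cycles_to_overtake(speedup, copies, gain_per_cycle, human_researchers):
    """Count self-improvement cycles until total AI capacity exceeds
    the (static) human capacity. Only speed and parallelism grow."""
    capacity = speedup * copies
    cycles = 0
    while capacity <= human_researchers:
        capacity *= gain_per_cycle
        cycles += 1
    return cycles

# Hypothetical figures: 1x-speed instances, 1,000 copies, a 10% gain
# per cycle, versus 1,000,000 human researchers.
print(cycles_to_overtake(1.0, 1000, 1.10, 1_000_000))  # → 73
```

Even a modest 10% compounding gain closes a thousand-fold gap in a few dozen cycles; the contentious assumption is whether such gains keep compounding, not the arithmetic.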