Yes yes, automation will increase the number of people who get paid... say what?
That has never happened in history!
We need actual data rather than thought experiments here, and we could get it quite easily for the last industrial revolution. Wars somewhat break the data, but it is still workable. The economic assumption is that people can retrain. That may no longer hold once the complexity of the job passes a certain level, or once any random person can contribute only a tiny amount of value (as opposed to outliers).
ML results are also logically and formally verifiable, just as with the human brain. There are higher-order structures in both.
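To make that concrete, here is a minimal sketch of one standard verification technique, interval bound propagation through a tiny ReLU network. The weights and the input box below are invented for illustration, but the output interval it certifies holds for every input in that box - a formal guarantee, not a test.

    import numpy as np

    def linear_bounds(W, b, lo, hi):
        """Propagate an input box [lo, hi] through y = W @ x + b."""
        center = (lo + hi) / 2.0
        radius = (hi - lo) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        return new_center - new_radius, new_center + new_radius

    def relu_bounds(lo, hi):
        # ReLU is monotone, so it maps interval endpoints to interval endpoints.
        return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

    # Toy 2-layer network (hypothetical weights; in practice you'd load a trained model).
    W1 = np.array([[1.0, -0.5], [0.3, 0.8]])
    b1 = np.array([0.1, -0.2])
    W2 = np.array([[0.7, -1.2]])
    b2 = np.array([0.05])

    # Input region: every x with each coordinate in [0.4, 0.6].
    lo, hi = np.array([0.4, 0.4]), np.array([0.6, 0.6])

    lo, hi = relu_bounds(*linear_bounds(W1, b1, lo, hi))
    lo, hi = linear_bounds(W2, b2, lo, hi)

    # If this certified range stays below some safety threshold, the property
    # holds for *all* inputs in the region, not just the ones we sampled.
    print(f"certified output range: [{lo[0]:.3f}, {hi[0]:.3f}]")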
Humans can and do visualise at least four dimensions. Time is not as easy, but it is still something anyone can visualise; higher dimensions are handled as projections. We wouldn't get anywhere in maths, and especially in algebra, otherwise.
Even AI rarely works directly with all dimensions; it reduces datasets to interesting features internally. Otherwise, even with a relatively small number of dimensions, the curse of dimensionality makes the calculation intractable, much like with NP-hard problems. Quantum computers could perhaps skip over this, but they're not practical yet.
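As a rough illustration (synthetic data, made-up sizes), here is the usual trick in a few lines: a 50-dimensional dataset that really only varies along 3 directions gets reduced to those 3 before anything expensive is computed.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(1000, 3))                 # the true low-dimensional structure
    mixing = rng.normal(size=(3, 50))
    X = latent @ mixing + 0.01 * rng.normal(size=(1000, 50))  # embedded in 50-D, plus noise

    pca = PCA(n_components=10)
    pca.fit(X)

    # The explained-variance ratios fall off a cliff after the third component,
    # so downstream work can happen in 3 dimensions instead of 50.
    print(np.round(pca.explained_variance_ratio_, 3))
    X_reduced = pca.transform(X)[:, :3]
    print(X_reduced.shape)  # (1000, 3)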
Reprogramming an AI reliably is even harder than understanding it. A hacked system is likely no longer an intelligence, unless it is simply replaced wholesale. (Much like a human impostor, which we can detect reasonably well too.) Such tampering is either easy to detect or requires a very sophisticated understanding of the whole system - which the piece asserted we "cannot have".
Current systems are easy to hack precisely because they are not intelligent at all.
Specifically, intelligence robustly handles unexpected situations, including attempts to tamper with it and many kinds of damage.
For a self-driving car, tampering would be fairly easy to detect with a simulation that was not used to train the ML: a dumb self-check, very similar to what you do with stroke patients. (We can't do that with human minds yet.) The checks could even be supplied by a set of trusted third parties for any given kind of ML system. (Or you could even have formal verification that the low-level algorithms work correctly.)
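A toy sketch of what such a dumb self-check could look like. The planner interface, thresholds, and numbers here are all hypothetical; the point is only that the checker shares no code and no training data with the ML it is checking.

    # Independent, hand-written sanity model that cross-checks the ML planner's output.
    MAX_DECEL = 8.0       # m/s^2, roughly the physical limit of braking
    MAX_STEER_RATE = 0.5  # rad/s, more than this at speed is suspicious

    def sanity_check(speed, cmd_accel, cmd_steer_rate, obstacle_distance):
        """Return a list of violated invariants; an empty list means the command looks sane."""
        problems = []
        if abs(cmd_accel) > MAX_DECEL:
            problems.append("commanded acceleration outside physical limits")
        if abs(cmd_steer_rate) > MAX_STEER_RATE and speed > 10.0:
            problems.append("steering rate too aggressive for current speed")
        # Simple kinematics: can we stop before the obstacle at maximum braking?
        stopping_distance = speed ** 2 / (2 * MAX_DECEL)
        if obstacle_distance < stopping_distance and cmd_accel >= 0.0:
            problems.append("not braking although stopping distance exceeds obstacle distance")
        return problems

    # The ML planner says "keep accelerating" with an obstacle 5 m ahead at 20 m/s:
    violations = sanity_check(speed=20.0, cmd_accel=1.0, cmd_steer_rate=0.0, obstacle_distance=5.0)
    if violations:
        print("planner output rejected:", violations)  # hand control to a safe fallback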
And if sophisticated understanding is involved, the actor is either an unrecognised AI genius, an insider, or a large actor - a megacorporation or a government. That narrows the set substantially. Or, of course, the AI just made a mistake; but for a sophisticated AI, honest mistakes are about as easy to detect as they are in humans.
This piece is actually insulting to the reader: written from a high horse, largely wrong and unsubstantiated, and downplaying what we have already achieved or can achieve.