Obviously, systems have always had to be resilient. But the point here is how dangerous a "set it and forget it" AI can be, because the mistakes it makes, though fewer, are far more dangerous, unpredictable, and inscrutable than the mistakes a human would make.
Which means the people who catch these mistakes have to be operating at a very high level. So we need to resist being lulled into a false sense of security by these systems, and we need to make sure we can still bring people to a high level of experience and education.