There definitely exist "accountable" AI models. Things like decision trees and various types of regressions.
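To make that concrete: a small decision tree can be dumped as literal if/else rules, which is roughly what "accountable" means here. A minimal sketch with scikit-learn (the dataset and depth are illustrative, not anything specific from this thread):

```python
# Minimal sketch: a shallow decision tree whose learned rules
# can be printed verbatim, so every prediction is traceable.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Every prediction follows an explicit if/else path through these rules:
print(export_text(tree, feature_names=load_iris().feature_names))
```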
The thing is though... any sufficiently advanced AI is going to be unaccountable pretty much by definition. It's like, calculus is an extremely useful tool for predicting the temperature of a cooling object over time, but good luck explaining to a 3-year-old how to perform the necessary maths. The fact that they can't comprehend it doesn't mean that calculus isn't useful, it just means that it's beyond the 3-year-old's ability to intuitively grasp. In this simile, we're the 3-year-olds. :)
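(For the curious, the maths in question would be something like Newton's law of cooling: solve dT/dt = -k(T - T_env) to get T(t) = T_env + (T_0 - T_env)·e^(-kt). Trivial once you know calculus, opaque if you don't.)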
Seconded. Another extreme example would be human brains, which I don't think we understand well enough to hold "accountable" in any mathematical sense, yet we trust them to make complex decisions. Statistical characterization of an AI system's behavior is a better objective than ones based on inherently biased symbolic systems. Just because symbolic explanation is how humans communicate, doesn't mean it's the best way to characterize a system.
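To make "statistical characterization" concrete: instead of demanding a symbolic explanation, you can bound a black-box model's behavior empirically, e.g. with a bootstrap confidence interval on held-out accuracy. A rough sketch, where `y_pred` stands in for the output of any opaque model (the data here is fabricated purely for illustration):

```python
# Rough sketch: characterize an opaque model statistically rather than
# symbolically, via a bootstrap CI on its held-out accuracy.
import numpy as np

def bootstrap_accuracy_ci(y_true, y_pred, n_boot=10_000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    correct = (y_true == y_pred)
    n = len(correct)
    # Resample per-example correctness to estimate accuracy's sampling spread.
    accs = [correct[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    lo, hi = np.quantile(accs, [alpha / 2, 1 - alpha / 2])
    return correct.mean(), (lo, hi)

# Fake labels standing in for a real held-out set and a ~90%-accurate model:
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 1] * 50)
flip = np.random.default_rng(1).random(500) < 0.9
y_pred = np.where(flip, y_true, 1 - y_true)

acc, (lo, hi) = bootstrap_accuracy_ci(y_true, y_pred)
print(f"accuracy = {acc:.3f}, 95% CI ~ [{lo:.3f}, {hi:.3f}]")
```

No claim about *why* the model answers as it does, just a quantified bound on how often it's right, which is the kind of guarantee we settle for with humans too.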