Like all proprietary AI software, I'm sure this app will generate battle plans without explaining why they make sense or what intel they were based on. The results will be predictable: commanders will simply blame the system (or the hidden intel) rather than take personal responsibility for failure. That will conveniently become the military's new status quo, the rejection of accountability.
For over a decade, police departments have been partnering with Palantir (and other firms selling decision aids) to investigate and even arrest suspects while providing little or no basis for that action, falling far short of the legal standard required. So far, local courts have routinely turned a blind eye, since the decision to detain or arrest is largely discretionary and may not lead to charges. Of course, those caught up in such naive sweeps rarely have access to competent counsel, so they frequently plead guilty to charges floated by unscrupulous ADAs, despite the inadequate, compromised, or fabricated chain of evidence used to threaten them. Since the case never goes to court, this lack of evidence and abuse of procedure is never revealed.
Companies like Palantir that depend upon hiding in the shadows need at least as much rigorous oversight as any organized criminal gang receives, perhaps more.
"Like all proprietary AI software, I'm sure this app will generate battle plans without explaining why they make sense or what intel they were based on."
AI does what you tell it to do. As you likely know, a NN model never outputs definitive results; we clamp/truncate the outputs to make them look that way. In areas of life and death, you can choose to keep that information and lay it out, so the AI can show the reasons behind its decisions and its level of certainty.
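To make that concrete, here's a minimal sketch of what "keeping this information" means (the labels and logit values are hypothetical, not from any real system): the raw output of a classifier is a score per option, and taking the single best option throws away exactly the certainty you'd want to surface.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs from a model scoring three candidate actions.
labels = ["hold position", "advance", "withdraw"]
logits = [1.2, 1.5, 0.3]

probs = softmax(logits)
decision = labels[max(range(len(probs)), key=probs.__getitem__)]

# Reporting only the clamped decision hides how uncertain the model was;
# here "advance" wins with well under 50% probability.
print("decision:", decision)
for label, p in zip(labels, probs):
    print(f"  {label}: {p:.2f}")
```

Nothing stops a vendor from carrying those probabilities (and the inputs that drove them) all the way to the operator's screen; dropping them is a product choice, not a property of the math.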
You're describing poor practices that likely come out of a poor understanding of the tech, poor training, maybe perverse economic incentives for Palantir to present its technology in a more assertive way than it should, but we're simply guessing.
True for supervised learning and maybe unsupervised clustering. But the same isn't true of all LLMs and other AI systems based on observational mimicry. Nobody trained ChatGPT to hallucinate answers or to suggest that Kevin Roose leave his wife. Until such systems can explain their reasoning and the facts it's based on, it makes no sense at all to trust them, much less depend on them behaving fairly and rationally.