In a computer that critical and that simple, there is no OS, and the compiler (or assembler) is tested for the program in question. The hardware design can be demonstrated to correctly execute every permutation of every instruction.
No, the halting problem says there is no general procedure that can decide whether an arbitrary program halts. It says nothing about any particular program. For example, "return false;" halts. What you can't do is give me a program that, when given another program, returns true or false depending on whether the passed program halts: that's the halting problem.
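To make that concrete, here's the classic diagonalization argument sketched in Python. The decider halts() is hypothetical; the whole point of the sketch is that no correct, total implementation of it can exist:

    # Suppose a halting decider existed (it can't; this is the
    # hypothesis we're about to contradict).
    def halts(program, data):
        """Hypothetical: returns True iff program(data) halts."""
        raise NotImplementedError

    def paradox(program):
        # Do the opposite of whatever halts() predicts for the
        # program applied to its own source.
        if halts(program, program):
            while True:   # halts() said "halts", so loop forever
                pass
        return            # halts() said "loops", so halt immediately

    # Now ask: does paradox(paradox) halt? If halts() answers yes,
    # paradox loops; if it answers no, paradox halts. Either answer
    # is wrong, so a total, correct halts() cannot be written.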
No, it's just impossible in some cases, and you don't know in advance which cases. If the validator fails to reach a verdict, the target software is too loosely specified and has to be re-written.
But you can write software that can be proven to be correct. You can also write software that can be proven to stay within certain limits (e.g. the computer may fiddle with the acceleration a bit, but never by more than a reasonable pre-defined limit, and never while the brake is on).
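Here's a toy sketch of that second kind of proof, with invented names and limits: the clamp and the brake interlock bound the output no matter what the rest of the system requests, and that locality is exactly what makes the limit provable.

    MAX_ACCEL = 2.0  # hypothetical pre-defined limit, m/s^2

    def commanded_acceleration(requested, brake_pressed):
        # Never accelerate while the brake is on.
        if brake_pressed:
            return 0.0
        # Clamp to [0, MAX_ACCEL] regardless of what was requested.
        return max(0.0, min(requested, MAX_ACCEL))

    # The invariant "0 <= output <= MAX_ACCEL, and output == 0 when
    # braking" follows from the two statements above, independent of
    # `requested`; no whole-program analysis is needed.
    assert commanded_acceleration(99.0, brake_pressed=False) == MAX_ACCEL
    assert commanded_acceleration(1.0, brake_pressed=True) == 0.0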
I should rephrase that as "every permutation of every instruction executed by the program". I'm not trying to prove that every program is correct, only that my program is correct. That is possible by using a well-defined and fully-proven subset of the available features of the language and processor.
For example, designing your program and CPU as a set of state machines allows you to define all possible states of the system (which are deliberately limited), define all the state transitions, and then verify that every input condition in each state results in the correct state transition. Even if you simply brute-force your way through every state and every transition instead of using mathematical generalizations, you've still proven that the program is correct.
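A minimal sketch of that brute-force check, with invented states and inputs: enumerate every (state, input) pair and verify that the transition table is total and only ever lands in a known state.

    from itertools import product

    STATES = {"IDLE", "RUNNING", "FAULT"}
    INPUTS = {"start", "stop", "error"}

    TRANSITIONS = {
        ("IDLE",    "start"): "RUNNING",
        ("IDLE",    "stop"):  "IDLE",
        ("IDLE",    "error"): "FAULT",
        ("RUNNING", "start"): "RUNNING",
        ("RUNNING", "stop"):  "IDLE",
        ("RUNNING", "error"): "FAULT",
        ("FAULT",   "start"): "FAULT",
        ("FAULT",   "stop"):  "FAULT",
        ("FAULT",   "error"): "FAULT",
    }

    # Exhaustively check every state/input combination.
    for state, event in product(STATES, INPUTS):
        # Totality: no state may lack a defined response to an input.
        assert (state, event) in TRANSITIONS, (state, event)
        # Closure: every transition must land in a known state.
        assert TRANSITIONS[(state, event)] in STATES

With three states and three inputs that's nine checks; the approach only works as long as the state space stays that enumerable.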
As it turns out, this doesn't actually work in general. State-derived program analysis has been shown not to be provable in all cases, particularly when the state space is very large, and when state transitions are non-atomic in the code, e.g. two or more state transitions in a single code block.
The best research I've seen on this was done with Access Control Matrices in the computer security field. For example, can you prove, in a general sense, that a sequence of atomic state changes to an ACM results in no violations of access control? The answer is that for atomic state changes you can prove they are internally consistent, but not that they don't introduce a flaw into the ACM.
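A toy illustration of that gap (the subjects, objects, and grant rule here are all invented): each atomic command checks its own precondition and is locally consistent, yet a short sequence of them still leaks a right the intended policy forbids.

    # Access Control Matrix: (subject, object) -> set of rights.
    acm = {
        ("alice", "file"): {"own", "read"},
        ("bob",   "file"): set(),
        ("eve",   "file"): set(),
    }

    def grant(granter, grantee, obj, right):
        # Atomic command: an owner of obj may grant any right on it.
        # The precondition check makes each call internally consistent.
        if "own" in acm[(granter, obj)]:
            acm[(grantee, obj)].add(right)

    grant("alice", "bob", "file", "own")   # legal: alice owns the file
    grant("bob",   "eve", "file", "read")  # legal: bob now owns it too

    # Every step obeyed the rule, but the intended policy "eve never
    # reads the file" is violated: the flaw lives in the composition,
    # not in any single atomic change.
    assert "read" in acm[("eve", "file")]

This is the shape of the classic Harrison-Ruzzo-Ullman result: each command is trivially checkable on its own, but deciding whether any sequence of commands can ever leak a right is undecidable in the general model.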
In other words, because proving software correct reduces to proving it consistent with its own specification, it only proves that the software is internally consistent. Basically it's a circular proof. It doesn't prove that the software is without flaw.