Sure, but it's still a statistical model: it doesn't know what the instructions mean, it just produces whatever those instructions statistically correlate with in the training data. It's not doing strict forward logic, and it never will be in this paradigm.
The fine-tuning process isn't itself a statistical model, though, so that principle doesn't apply to it. You beat the model into shape until it does what you want (DPO and variants of it), and you can test that it's doing that.
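For the record, the DPO objective mentioned above is simple enough to sketch. This is a minimal single-pair version of the loss from the DPO paper; the function name, the example log-probabilities, and the beta value are illustrative, not taken from any particular library:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Each argument is the summed log-probability of a full response
    under the policy being tuned or the frozen reference model.
    """
    # Implicit "rewards": how much the policy has shifted probability
    # mass toward each response relative to the reference model.
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    # Negative log-sigmoid of the margin: the loss is low once the
    # policy prefers the chosen response more than the reference does.
    margin = chosen_reward - rejected_reward
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Before tuning, policy == reference, so the margin is 0 and the
# loss is log(2) for every pair.
print(dpo_loss(-12.0, -15.0, -12.0, -15.0))
# After tuning toward the chosen response, the margin is positive
# and the loss drops below log(2).
print(dpo_loss(-10.0, -18.0, -12.0, -15.0))
```

This is the sense in which the tuning process "isn't a statistical model": the objective is an explicit function you choose, and you can measure directly whether the tuned model prefers the responses you wanted it to prefer.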