If you can update it trivially, presumably without using the heuristics themselves, then someone else can update it for you. Now you have a new problem: securing the method for updating the heuristics.
A counterpoint to this would be using more stable points of reference, like the iris or fingerprints: things not likely to change.
So the current heuristics are no longer valid (e.g. you were in a car wreck so you're hunched from back pain and have a black eye), and you're going to use the current heuristics as a gate to updating the heuristics?
Imagine if the "forgot password" link on a website gave you a form that said "enter your current password and then you can set a new password". This is the scenario you're describing.
Think of it like Paxos: as long as a majority of unique data points agree, allow access. That way, if your back is out and your posture goes unrecognized, the other factors can still form a quorum and grant you access. There could be 20 such factors, each weighted by difficulty to forge, so that a fingerprint counts 15x as much as posture, for example. Then, if you successfully authenticate with, say, 80% of the total weighted score, the system can allow you to train a new factor.
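To make that concrete, here's a minimal Python sketch of the weighted-quorum idea. The factor names, weights, and thresholds are all hypothetical; they just illustrate how a weighted majority of factors could gate access, with a higher bar required before retraining a factor.

    # Sketch of the weighted-quorum idea above. All factor names, weights,
    # and thresholds are made-up illustrations, not anything real.

    # Each factor is weighted by how hard it is to forge.
    FACTOR_WEIGHTS = {
        "fingerprint": 15,
        "iris": 15,
        "gait": 3,
        "typing_rhythm": 2,
        "posture": 1,
    }

    UNLOCK_THRESHOLD = 0.5   # a weighted "majority" grants access
    RETRAIN_THRESHOLD = 0.8  # higher bar before retraining a factor is allowed

    def evaluate(scores):
        """scores maps factor name -> match confidence in [0, 1].
        Returns (access_granted, retraining_allowed)."""
        total = sum(FACTOR_WEIGHTS.values())
        weighted = sum(w * scores.get(f, 0.0) for f, w in FACTOR_WEIGHTS.items())
        ratio = weighted / total
        return ratio >= UNLOCK_THRESHOLD, ratio >= RETRAIN_THRESHOLD

    # Back is out: posture scores zero, but the hard-to-forge factors still
    # form a quorum, so access is granted and retraining posture is allowed.
    today = {"fingerprint": 1.0, "iris": 0.95, "gait": 0.9,
             "typing_rhythm": 0.85, "posture": 0.0}
    print(evaluate(today))  # (True, True)

With posture failing entirely, the strong factors still carry about 93% of the total weight, which clears both thresholds, so the system lets you in and lets you retrain the posture model.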
"What do you mean access denied - I've just got a back ache!"