That could produce significant false positives if you update the thing without also updating the estimated memory usage (especially when slow, bounded fragmentation accumulates over a long period). You might also have some expected ratio of memory usage (e.g. process B uses 3x the memory of process A) but want to allow the absolute usage to grow as more data is processed.
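Something along these lines is what I mean by a ratio check rather than a fixed ceiling. Purely illustrative: the process names, the 3x figure, and the slack factor are made up, and it assumes psutil is available.

```python
# Illustrative sketch: compare process B's memory to process A's instead of
# enforcing a fixed absolute limit. Names and thresholds are hypothetical.
import psutil

EXPECTED_RATIO = 3.0   # "process B should use roughly 3x process A"
SLACK = 1.5            # tolerate some drift before flagging anything

def rss_by_name(name: str) -> int:
    """Sum resident set size (bytes) of all processes matching a name."""
    total = 0
    for proc in psutil.process_iter(["name", "memory_info"]):
        if proc.info["name"] == name:
            total += proc.info["memory_info"].rss
    return total

def ratio_exceeded(a_name: str, b_name: str) -> bool:
    a_rss = rss_by_name(a_name)
    b_rss = rss_by_name(b_name)
    if a_rss == 0:
        return False  # nothing to compare against yet
    # Absolute usage is allowed to grow with the data set;
    # only the ratio between the two processes is checked.
    return (b_rss / a_rss) > EXPECTED_RATIO * SLACK

if __name__ == "__main__":
    if ratio_exceeded("process_a", "process_b"):
        print("process B is using more memory than expected relative to A")
```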
Having the wrong limits will cause the wrong thing to die, regardless of whether they're enforced by the OOM killer or by the orchestrator. Better to find that out during your planned rollout window, if you ask me.