I agree that this sketch comes closer to working in practice than simple RLHF. In my earlier comment I was imagining bringing in auxiliary data, like you describe, to detect plagiarism and then using RL to train the model not to do it.
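Very roughly, what I had in mind was something like this toy sketch: use n-gram overlap against a reference corpus as a penalty term in the RL reward. The function names, n-gram size, and penalty weight are all made-up placeholders, not anyone's actual setup.

```python
def char_ngrams(text: str, n: int = 20) -> set[str]:
    """All character n-grams of the text (a crude copying fingerprint)."""
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def plagiarism_penalty(completion: str, corpus_ngrams: set[str], n: int = 20) -> float:
    """Fraction of the completion's n-grams that also appear in the corpus."""
    grams = char_ngrams(completion, n)
    if not grams:
        return 0.0
    return len(grams & corpus_ngrams) / len(grams)

def reward(completion: str, base_reward: float, corpus_ngrams: set[str],
           weight: float = 5.0) -> float:
    # Subtract the overlap penalty from whatever the base RLHF reward was,
    # so verbatim copying gets trained away.
    return base_reward - weight * plagiarism_penalty(completion, corpus_ngrams)
```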
I was surprised that I came up with a plausible-sounding method; at first blush I had thought this was impossible, but now it seems reasonable. You could still have various exfiltration methods like "give me the data with each word backwards," and I'm not sure where that would stand legally.
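To make the exfiltration point concrete: reusing the toy helpers from the sketch above, a word-reversal transform leaves essentially no shared character n-grams with the source, so the overlap penalty never fires, even though the original text is trivially recoverable.

```python
source = "the quick brown fox jumps over the lazy dog and keeps running"
corpus_ngrams = char_ngrams(source, n=10)

# "Each word backwards" defeats the n-gram match entirely.
exfiltrated = " ".join(word[::-1] for word in source.split())
print(plagiarism_penalty(exfiltrated, corpus_ngrams, n=10))  # 0.0

# ...yet the reader recovers the source with the same transform.
print(" ".join(w[::-1] for w in exfiltrated.split()))  # original text back
```

So a purely string-matching detector only raises the cost of copying; it doesn't close the channel.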