Is that even possible? If you use a user's data to train a model and then delete the data while keeping the model, you could in theory use the model to re-identify the user who deleted their data with a high degree of confidence.
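For illustration, here's a minimal loss-threshold membership-inference sketch (toy data and a hypothetical scikit-learn classifier, nothing specific to Google's systems): records a model was trained on tend to produce lower loss than fresh records, which is what makes "delete the data, keep the model" leaky in principle.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)

# Toy stand-in for user data: 200 records, 2 features, a simple label.
X_train = rng.normal(size=(200, 2))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

def membership_loss(x, y):
    """Per-record loss; lower values suggest the record was in training."""
    proba = model.predict_proba(x.reshape(1, -1))
    return log_loss([y], proba, labels=[0, 1])

# Compare a record the model saw during training against a fresh one.
seen_loss = membership_loss(X_train[0], y_train[0])
x_new = rng.normal(size=2)
new_loss = membership_loss(x_new, int(x_new[0] + x_new[1] > 0))
print(f"loss on trained-on record: {seen_loss:.3f}, on unseen record: {new_loss:.3f}")
```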
In general, models trained on individual users' data get rebuilt when the underlying data changes, so that they properly reflect the current state of the source data.
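A rough sketch of what "rebuilt when the underlying data changes" could look like (hypothetical record layout and model, assuming a simple full retrain rather than whatever pipeline is actually used): the model is refit from scratch on only the data that remains after deletions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rebuild_model(records, deleted_user_ids):
    """records: iterable of (user_id, feature_vector, label).
    Refit from scratch on whatever remains after deletions, so nothing
    derived from the deleted rows carries over into the new model."""
    kept = [(f, y) for uid, f, y in records if uid not in deleted_user_ids]
    X = np.array([f for f, _ in kept])
    y = np.array([lab for _, lab in kept])
    return LogisticRegression().fit(X, y)

# Example: user "u2" asks for deletion, and the next rebuild drops them.
records = [("u1", [0.1, 1.2], 1), ("u2", [0.4, -0.7], 0),
           ("u3", [-1.0, 0.3], 0), ("u4", [0.9, 0.8], 1)]
model = rebuild_model(records, deleted_user_ids={"u2"})
```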
Do you have any specific policies around this? I suspect it would be hard to say definitively that there aren't remnants of data I've asked Google to forget still kicking around.