Oh yeah, good idea. That should generally improve the results.
There's a bit of art to it, for sure. You may also have to do some pronoun substitution in the summarized sentences (and decide whether to do that before or after calculating the synthetic score) so they make more sense.
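The only cheap trick I know for the pronoun part is a heuristic; real systems use proper coreference resolution. Here's a toy Python sketch of the idea, where the function name, the pronoun list, and the "most recent capitalized word" rule are all just made up for illustration:

    import re

    # Sentence-initial pronouns we try to resolve, and capitalized
    # words we refuse to treat as names.
    PRONOUNS = {"He", "She", "It", "They"}
    NOT_NAMES = PRONOUNS | {"The", "A", "An", "This", "That"}

    def substitute_pronoun(sentence, preceding_sentences):
        # Toy heuristic, NOT real coreference resolution: if an
        # extracted sentence opens with a pronoun, swap in the most
        # recent capitalized word from the sentences before it. It
        # can't tell a person from any other capitalized word, which
        # is exactly why the real thing is hard.
        first, _, rest = sentence.partition(" ")
        if first not in PRONOUNS:
            return sentence
        for prev in reversed(preceding_sentences):
            names = [w for w in re.findall(r"\b[A-Z][a-z]+\b", prev)
                     if w not in NOT_NAMES]
            if names:
                return f"{names[-1]} {rest}"
        return sentence

    context = ["Alice reviewed the contract.", "The terms looked fine."]
    print(substitute_pronoun("She signed it anyway.", context))
    # -> "Alice signed it anyway."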
I find it's hard to tell how well all this will work until you just do it. Not every kind of text works equally well and the only proof that it's really working is "does the summary make sense or not?"
It's also possible that for long structured documents like laws or contracts, you don't want to summarize the whole thing, but instead treat major sections like separate documents and do intra-document summarization on each one to maintain understandability.
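That's pretty easy to prototype. Here's a rough sketch, assuming sections are delimited by headings a regex can match; the heading pattern, the naive sentence splitter, the frequency scoring, and the toy contract text are all placeholder choices:

    import re
    from collections import Counter

    # Placeholder heading pattern; real laws/contracts need a pattern
    # tuned to their actual numbering scheme.
    HEADING = re.compile(r"^Section \d+\.", re.MULTILINE)

    contract_text = "\n".join([
        "Section 1. Delivery. The seller shall deliver the goods."
        " Delivery occurs at the buyer's site. Risk passes on delivery.",
        "Section 2. Payment. The buyer shall pay within 30 days."
        " Late payment accrues interest. Payment is made by wire.",
    ])

    def split_sections(text):
        # Treat each major heading as the start of a separate "document".
        starts = [m.start() for m in HEADING.finditer(text)] or [0]
        bounds = starts + [len(text)]
        return [text[a:b].strip() for a, b in zip(bounds, bounds[1:])]

    def words(s):
        return re.findall(r"[A-Za-z']+", s.lower())

    def summarize(section, n=2):
        # Naive extractive summary, scoped to this section only: score
        # each sentence by the summed frequency of its words within
        # the section, then emit the top n in original order.
        sentences = re.split(r"(?<=[.!?])\s+", section)
        freq = Counter(words(section))
        ranked = sorted(sentences, key=lambda s: -sum(freq[w] for w in words(s)))
        top = set(ranked[:n])
        return " ".join(s for s in sentences if s in top)

    for sec in split_sections(contract_text):
        print(summarize(sec))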
Here's one that does something kinda like what I was writing about above.
When you lemmatize, what are you doing exactly? Are you merely reducing words to a more common form, thereby reducing IDF? For example, are you reducing "walking" or "walked" to "walk", and then using the IDF of "walk"?
Related is the idea of "stemming", which uses an algorithm to strip inflection, trying to find the common form that the various versions of a word come from. Porter's algorithm is a well-known stemming algorithm. However, you sometimes end up with weird "non-inflected" tokens (e.g. 'enhancement' might become 'enhanc').
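NLTK ships a Porter implementation if you want to see the truncation for yourself; the expected outputs are in the comments:

    from nltk.stem import PorterStemmer   # pip install nltk

    stemmer = PorterStemmer()
    for word in ["walking", "walked", "walks", "enhancement", "enhanced"]:
        print(word, "->", stemmer.stem(word))
    # walking -> walk
    # walked -> walk
    # walks -> walk
    # enhancement -> enhanc   (the weird "non-inflected" token)
    # enhanced -> enhanc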
Lemmatization, though, is considered "better" in that it uses a dictionary of inflected forms that map back to the non-inflected form. So in theory, if the dictionary is comprehensive, you can replace each inflected form with its correct non-inflected form and always get a real word back (e.g. 'walked' -> 'walk', or 'enhancements' -> 'enhancement' rather than 'enhanc').
If your dictionary isn't comprehensive and you come across a token it doesn't recognize, you can fall back on a stemming algorithm.
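To make that concrete, here's a minimal sketch of the fallback using NLTK's WordNet interface (wn.morphy returns None for tokens it doesn't recognize), plus the IDF half of the question above: compute document frequency over the normalized forms, so the inflected variants all feed a single IDF value. The toy corpus and the bare log(N/df) formula are just for illustration:

    import math
    from collections import Counter

    from nltk.corpus import wordnet as wn   # needs nltk.download('wordnet')
    from nltk.stem import PorterStemmer

    stemmer = PorterStemmer()

    def normalize(token):
        # wn.morphy looks the token up in WordNet's dictionary of
        # inflected forms and returns None when it doesn't recognize
        # the token; fall back to stemming in that case. (Caveat:
        # without a part-of-speech hint morphy can pick a surprising
        # reading, e.g. 'walking' stays 'walking' because it is also
        # a noun, so serious pipelines POS-tag first.)
        lemma = wn.morphy(token.lower())
        return lemma if lemma is not None else stemmer.stem(token.lower())

    docs = [
        "the dogs walked home".split(),
        "a dog walks alone".split(),
        "contracts define obligations".split(),
    ]

    df = Counter()
    for doc in docs:
        df.update({normalize(t) for t in doc})   # set, so df counts documents

    idf = {t: math.log(len(docs) / f) for t, f in df.items()}
    print(idf["walk"], idf["dog"])   # one IDF each, fed by both surface forms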