I think people feel differently about contributing to SO or Wikipedia or even Quora than they would about labelling CIFAR images, for instance. Maybe it's a distinction without a difference, but people don't usually contribute to things like Stack Overflow with the objective of training an AI model.
Their efforts can be used to create a silo of information that others are required to pay for (e.g. when SO shuts down, becomes inaccessible, or is otherwise made non-functional).
Their effort might be used to create completely wrong or even harmful content, with the training material serving only to teach the model how to convince people to believe the AI output.
Yes, this was all possible before, without LLMs, done by humans (or machine translation for example).
But not at this scale. SO and the comments there were still the authoritative source, and written by humans (without the need for any proof...)
But there is a growing cohort of people who want to use AI as a knowledge black box, a search engine, an encyclopedia, even an authoritative source.
And with it comes the intent to cleanse this "knowledge" of any individual authorship or traceable source.
It is not an odd stance to oppose this, even if the concrete actions expressing that opposition might be futile in this particular case.