There are two parties that decide what constitutes an intelligent, well-reasoned critique.
The first party is the owner(s) of a platform. For any owner, there is incentive to preserve power, but there is also incentive to not act as an oppressive villain.
The owner sometimes has ideals beyond power, such as cultivating free communication to build ideas for a better world.
But even the most callous, iron-fisted dictator can sniff the danger in being overly oppressive. They also sometimes recognize that unstifled communication serves as a competitive advantage.
The second party that decides what constitutes an intelligent, well-reasoned critique is the viewers of and contributors to a platform.
We have pitchforks, and pitchforks have sharp ends, but they can be dulled by overuse. Furthermore, the citizen's militia grows weary when called upon too often.
And thus, when the rallying cry is sounded for the non-event of a crude slur being censored, we're eroding our own power. We're jumping the gun, which should fire only when we think an intelligent, well-reasoned critique has been suppressed.
Making a slippery slope argument betrays a lack of understanding of the complex dynamic above.
I disagree, since I think all speech* is equally protected speech, including a crude slur. That is because it is far too easy for the powers that be to use claims of "only cleaning out the trash" to censor legitimate thoughts and expressions. Throughout history, humans have proven incapable of making the right call on where to draw that line, so the only option is to not draw one at all.
To refute your points: the owner of the platform has all the power in this relationship, since there are essentially zero repercussions for them over-censoring discussions. Most of the users of a platform won't know, won't care, or will accept the "just censoring the evil speech" argument. The audience itself is also not infallible, since there is such a thing as the tyranny of the majority.
*: of course, there is the category of speech whose very creation requires harming another, such as child pornography or snuff films. But that is not what is at issue here or anywhere in modern discourse.
* Allow all speech, including unintelligent and hateful speech.
This is evidently false, given:
1) Intelligent, well-reasoned dissent against a platform is permitted on virtually all large platforms on the internet, and this has been the case for decades.
2) Virtually all large platforms censor unintelligent, hateful speech.
Your nightmare scenario of intelligent speech disappearing due to censorship of small-minded vulgarity simply hasn't happened.
Quite the opposite: intelligent speech does disappear when small-minded vulgarity isn't censored. See Voat or 4chan or the thousands of other poorly censored, poorly moderated places on the internet. Productive, intelligent communication does not happen in public forums without censorship.
Concerning what-ifs and slippery slopes, call me when someone with a well-reasoned argument is being suppressed by a large platform. All I've ever seen is hate and misinformation being pushed down the drain, and good riddance.
Censoring kids spamming "communist bandit" isn't a problem, not in this universe.
Censoring a professor critiquing communism as a viable form of government would be a problem, but that's neither here nor there.
Meanwhile, legitimate problems in the free speech domain are draconian copyright laws, which suppress science and creativity, and espionage laws, which suppress whistleblowers such as Snowden.
If we've decided that free speech shall be our crusade, and we're in the US, our limited time and resources should be focused on these real, ongoing, and broad-reaching problems.
Those in power have an incentive to categorize anything that threatens them as "hate speech" or "obscene" or "threatening to the social order".