The sample size is small, but the effect is strong; stronger than you almost ever see for this kind of thing.
(e.g. I suspect a tiny dose of antiviral or synthetic antibody would be effective for prophylaxis or early treatment. By the time somebody is seriously ill, they are sick from the cytokine storm, and clearing the virus at that point possibly doesn't change the course of the disease.)
With so many studies being done on COVID (by basically every lab in the world capable of doing so), the selection biases are especially strong. That is, every study that finds a statistically significant effect (often with a small sample size) will be published, while the corresponding studies that fail to reject the null hypothesis are buried. So you get a flood of seemingly compelling early evidence for COVID effects or treatments, most of which will turn out to be false.
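A rough back-of-the-envelope simulation of that selection effect (my own sketch, with made-up numbers for the study count and sample size): run many small trials of a treatment with zero true effect, and about 5% of them will still look "significant" at p < 0.05. Those are exactly the ones most likely to get written up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 1000      # hypothetical number of independent small trials
n_per_arm = 30        # hypothetical sample size per arm

false_positives = 0
for _ in range(n_studies):
    treated = rng.normal(0, 1, n_per_arm)   # no true effect in either arm
    control = rng.normal(0, 1, n_per_arm)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:
        false_positives += 1

# Expect roughly 50 of 1000 (~5%) null studies to clear the significance bar.
print(f"{false_positives} of {n_studies} null studies look 'significant'")
```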
Indeed. Presumably if it is effective, you could study people who are already taking fluvoxamine for its antidepressant properties, and you'd find that, other things being equal (using something like propensity score matching), they have less severe cases of COVID.
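For anyone unfamiliar with the technique, here is a minimal sketch of the propensity-score-matching idea on a hypothetical observational dataset (the column names, confounders, and 1-nearest-neighbor matching are all my assumptions, not from any actual study): estimate each person's probability of taking fluvoxamine from their covariates, match users to non-users with similar scores, then compare outcomes within the matched pairs.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def matched_severity_difference(df: pd.DataFrame) -> float:
    covariates = ["age", "bmi", "depression_score"]   # assumed confounders
    X = df[covariates].to_numpy()
    treated = df["takes_fluvoxamine"].to_numpy().astype(bool)

    # Propensity score: estimated probability of taking fluvoxamine given covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

    # Match each fluvoxamine user to the non-user with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(ps[~treated].reshape(-1, 1))
    _, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
    matched_controls = df.loc[~treated, "covid_severity"].to_numpy()[idx.ravel()]

    # A negative value would suggest less severe COVID among fluvoxamine users.
    return df.loc[treated, "covid_severity"].mean() - matched_controls.mean()
```

This is the crudest version (single nearest neighbor, no caliper, no balance checks); a real analysis would need all of those, but it shows the shape of the argument.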
If you flip a fair coin 10 times, there is only about a 5% chance you will get 2 or fewer tails -- so if you do, that looks like "statistically significant" evidence that you have a weighted coin (which still doesn't guarantee it).
But if 500 people in different places each flip a coin 10 times, the chance that at least one of them gets 2 or fewer tails is essentially 100%. Oops. It doesn't mean that person's chance of holding a weighted coin went up.
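The arithmetic, for anyone who wants to check it (the 500-experiment count is just the illustrative number from above):

```python
from math import comb

# P(<=2 tails in 10 fair flips) = (C(10,0) + C(10,1) + C(10,2)) / 2^10
p_single = sum(comb(10, k) for k in range(3)) / 2**10        # ~= 0.055

# Probability that at least one of 500 independent experiments hits it.
p_at_least_one = 1 - (1 - p_single) ** 500                   # ~= 1.0

print(f"P(<=2 tails in 10 flips)          = {p_single:.3f}")
print(f"P(some experiment out of 500 hits) = {p_at_least_one:.6f}")
```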