As far as I can see (having checked the wiki article and the abstract of the original paper; I'm no expert on this), the DK effect is only the first of those claims. However, it sounds like claim 2 is less significant here anyway.
Re claim 1: the random numbers example is "all noise, no signal", and I can see the objection that a more convincing example would be to demonstrate the "false" DK effect on data that does have some signal (i.e. a positive relationship between actual and estimated skill). That is easy to do, though, and I hope you'll be able to see why if you read my reply at https://news.ycombinator.com/item?id=31042619 and the comments under the article I mentioned there.
The point is that the DK analysis compares two quantities which both contain the same single sample from a noise source. Pure noise like the random numbers in the example displays a powerful DK effect due to autocorrelation, and that effect says nothing interesting (just that a single random sample of noise is correlated with itself), yet it can swamp any actual relationships in the distributions. To avoid it, you have to make sure that if the two things you are comparing contain samples of a single noise source, they are separate, independent samples of it. The experiment with the education level groups achieves this because the education level is "measured" as a separate event from the "actual" skill measurement, so they have separate noise sources (and even if they shared one, it would have been sampled separately and independently for each measurement).
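To make that concrete, here's a minimal sketch of the all-noise case (plain Python, with made-up sample sizes): actual and estimated percentiles are drawn completely independently, yet the standard DK-style quartile analysis still produces the classic "unskilled overestimate, skilled underestimate" pattern, purely because the actual score appears both in the binning and on one of the compared axes.

```python
import random

random.seed(0)
n = 10_000

# Pure noise: "actual" and "estimated" skill percentiles are drawn
# independently -- there is no relationship between them at all.
actual = [random.uniform(0, 100) for _ in range(n)]
estimated = [random.uniform(0, 100) for _ in range(n)]

# DK-style analysis: bin people by their ACTUAL score quartile, then
# compare mean actual vs. mean estimated percentile within each bin.
pairs = sorted(zip(actual, estimated))
quartiles = [pairs[i * n // 4 : (i + 1) * n // 4] for i in range(4)]

for q, grp in enumerate(quartiles, 1):
    mean_act = sum(a for a, _ in grp) / len(grp)
    mean_est = sum(e for _, e in grp) / len(grp)
    print(f"Q{q}: mean actual {mean_act:5.1f}, mean estimated {mean_est:5.1f}")
```

The bottom quartile appears to "overestimate" by roughly 37 percentile points and the top quartile to "underestimate" by the same amount, even though the data contains no self-assessment signal whatsoever.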
I have to say, during the discussion above I hadn't thought through it deeply enough to grok this level of it, and while pondering your last comment I went through a phase of "hang on, am I actually understanding this myself?", so I apologise and retract any suggestion of bad faith.
Thanks! No worries, I appreciate you writing this.
> I can see the objection that a more convincing example might be to demonstrate the "false" DK effect in an example that does have some signal
More than that: as it stands, the argument is meaningless to me. It says that DK is trivial in a world where people have no ability whatsoever to assess their own performance. OK, and finding dinosaur bones is uninteresting in a world where dinosaurs roam free. Both are true, but both are irrelevant in our world (considering my priors). To give a less hyperbolic example, suppose I found some population whose weight and height correlate much less than what we currently measure, through some biological mechanism such as very high variance in bone density. To me this article is like saying "well yeah, but this finding is uninteresting; for example, if you take purely random weights and heights you get an even stronger effect of short people with very high bone density and tall people with very low bone density".
Regarding all the rest, I don't really understand this business of "comparing things which both contain the same single sample from a noise source". I'm currently willing to bet (albeit not too much) that any synthetic-data experiment you come up with that doesn't display an effect under the DK analysis will turn out to be based on assumptions that strongly align with my prior, which is that subjects' self-assessment of their performance is correlated with their performance, with zero bias (on average) and noise that is small (but not negligible) compared to the signal. I would be interested to be proven wrong.
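For concreteness, here is a minimal sketch of the prior I have in mind (the spreads are assumed numbers: signal sd 15, noise sd 5): estimated skill equals actual skill plus small, zero-mean, independent noise. Under this model the same DK-style quartile analysis shows essentially no gap between mean actual and mean estimated score.

```python
import random

random.seed(1)
n = 10_000

# Prior described above: estimated = actual + small zero-mean noise.
# The spreads (signal sd 15, noise sd 5) are assumed numbers for the sketch.
actual = [random.gauss(50, 15) for _ in range(n)]
estimated = [a + random.gauss(0, 5) for a in actual]

# Same DK-style analysis: bin by actual-score quartile, compare means.
pairs = sorted(zip(actual, estimated))
quartiles = [pairs[i * n // 4 : (i + 1) * n // 4] for i in range(4)]

for q, grp in enumerate(quartiles, 1):
    mean_act = sum(a for a, _ in grp) / len(grp)
    mean_est = sum(e for _, e in grp) / len(grp)
    print(f"Q{q}: mean actual {mean_act:5.1f}, mean estimated {mean_est:5.1f}")
```

Because the noise is zero-mean and independent of the actual score, the per-quartile mean estimate tracks the per-quartile mean actual score closely, so the analysis displays no DK-style over/under-estimation pattern.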
> Pure noise like the random numbers in the example displays a powerful DK effect due to autocorrelation that says nothing interesting
On the contrary, finding out that the distribution in the real world is like that ("pure noise") would be very surprising (therefore interesting, in a sense) to me.