Unskilled people are more random with their self-assessment than skilled people are. It has nothing to do with unskilled people thinking that they know everything.
> Unskilled people are more random with their self-assessment than skilled
This is a statistical claim, supported by the DK graph (but not the random data thought experiment from the article).
> It has nothing to do with unskilled people thinking that they know everything
This reads to me as a claim about the psychological reason for the statistical pattern, which I don't think is either supported or contradicted by data, in either the article or the original paper.
"Unskilled people often think they know everything" is pretty much the colloquial use of 'Dunning Krueger' to label people who insist that they know better than the experts from a position of relative ignorance
But I think that's consistent with the statistical pattern: if the distribution of self-assessment [amongst unskilled people] of their relative abilities is random or near random, it logically follows that the set of unskilled people includes a lot of people who significantly overestimate their ability at something. Dunning and Kruger don't really talk as much about the propensity of excellent test performers to underestimate their skill (though the Nuhfer study results somewhat justify their original focus on the ignorant, by finding that more skilled groups like professors and graduate students make smaller average prediction errors of their test scores than undergrads).
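To make that concrete, here's a quick sketch of the "near random self-assessment" premise (my own toy simulation, not anything from the paper; the uniform distributions are an assumption): if bottom-quartile performers guess their own percentile uniformly at random, a big chunk of them end up wildly overconfident.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Premise: bottom-quartile performers (actual percentile < 25) guess
# their own percentile uniformly at random -- the "near random" case.
actual = rng.uniform(0, 25, n)     # actual percentile, bottom quartile
guessed = rng.uniform(0, 100, n)   # random self-assessment

overestimate_big = np.mean(guessed - actual > 25)  # off by >25 points
think_above_avg = np.mean(guessed > 50)            # believe they're above average

print(f"overestimate by >25 percentile points: {overestimate_big:.0%}")
print(f"believe they're above average:         {think_above_avg:.0%}")
# ~63% overestimate by a wide margin and half think they're above
# average -- "random self-assessment" already implies a lot of
# confidently incompetent people.
```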
Dunning and Kruger's contention in the original article is that "incompetence robs people of their ability to realise they're incompetent". Similarity of the prediction errors to a random walk isn't a rebuttal of that (although it's a fair critique of the presentation), because under the null hypothesis people who find a test particularly difficult shouldn't be [almost] as likely to believe they achieved above-average performance as the people who aced it. There might be other reasons for that (like the test being pretty easy for all participants, with raw scores in a fairly narrow range, or test takers wrongly assuming their lack of understanding was being compared against the general population rather than other smart undergraduates), but in general people ought to be able to incorporate knowing that they didn't know how to answer a lot of questions into their self-assessment of how they performed.
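For what it's worth, the "random data reproduces the DK figure" point is easy to check yourself. This sketch (my own illustration, not the article's code) bins purely random self-assessments by actual-score quartile, the way the original DK figure does:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

actual = rng.uniform(0, 100, n)    # actual percentile
guessed = rng.uniform(0, 100, n)   # self-assessment: pure noise

# Bin by actual-score quartile, as in the original DK figure.
quartile = np.digitize(actual, [25, 50, 75])
for q in range(4):
    mask = quartile == q
    print(f"Q{q+1}: actual mean {actual[mask].mean():5.1f}, "
          f"guessed mean {guessed[mask].mean():5.1f}")
# Guessed mean is ~50 in every quartile, so the bottom quartile
# "overestimates" and the top quartile "underestimates": a DK-shaped
# plot out of pure noise. The open question is why real low scorers
# don't self-assess any better than noise would.
```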
The graph reproduced in the article doesn't show the density of points, so it's hard to conclude anything from it. Figure 4 from the Nuhfer et al. paper does seem, to me at least, to support DK's conclusions.
It does show the lack of extreme values for higher-skilled people; surely that has some statistical significance?
Especially in a situation where you would expect the distributions to be of the same type?
Unless they messed up by failing to normalize the number of points per group, in which case this might just come from small samples: sheer randomness has fewer chances to produce extreme values in the higher-qualified but lower-population groups? See the sketch below.
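That small-group explanation is easy to sanity-check: draw the same error distribution for groups of different sizes and look at the extremes. A minimal sketch (made-up group sizes, same normal distribution for everyone):

```python
import numpy as np

rng = np.random.default_rng(2)

# Identical error distribution for every group; only the group size
# differs (made-up sizes: many undergrads, few professors).
group_sizes = {"undergrads": 5000, "grad students": 500, "professors": 50}

for name, n in group_sizes.items():
    errors = rng.normal(0, 15, n)  # identical N(0, 15) prediction errors
    print(f"{name:13s} n={n:5d}  max |error| = {np.abs(errors).max():5.1f}")
# Smaller groups show smaller extreme values purely because they have
# fewer draws -- no difference in the underlying distributions needed.
```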