Right. And they only tested 5 logos picked at random from the first 28 that Yahoo presented. They may not even have picked the "best" 5 out of that lot. So really... this whole blog post was clickbait. I'd never heard of Survata before... but now I have... for better or worse.
The way I understand it, each user surveyed got five random logos, but not the same ones. Over the entire population, all the logos were surveyed and the "winningest" ones are presented as the highest rank (see the last chart with the full matrix of which logos won against which others).
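That kind of aggregation can be sketched pretty simply. This is just a hypothetical simulation of the scheme described above (each respondent sees a random subset, their pick counts as a "win" over the logos shown alongside it); the logo names, respondent count, and preference values are all made up for illustration:

```python
import random
from collections import defaultdict

random.seed(42)  # deterministic for illustration

logos = [f"logo_{d}" for d in range(1, 31)]          # one logo per day (hypothetical)
true_appeal = {l: random.random() for l in logos}    # stand-in preference scores

wins = defaultdict(int)
appearances = defaultdict(int)

for _ in range(5000):                      # 5000 simulated respondents
    shown = random.sample(logos, 5)        # five random logos per respondent
    pick = max(shown, key=lambda l: true_appeal[l])  # simulated favorite
    for l in shown:
        appearances[l] += 1
    wins[pick] += 1

# Win rate = picks / times shown. Ranking by win rate approximates a
# "winningest" ordering even though no single respondent saw every logo.
ranking = sorted(logos, key=lambda l: wins[l] / appearances[l], reverse=True)
```

With enough respondents, every logo gets shown often enough that the win rates become comparable across the whole set, which is presumably how a full win/loss matrix like the one in the last chart gets filled in.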
This still creates a sample-size issue: since today was the 29th day, the newest logo only had from the start of its day through the time the article was written/submitted, while the day-1 logo had a much larger sample window. There is some evidence of this in the results. Secondly, was the allocation fully randomized, or were multi-armed bandit or other A/B-testing methods used to control how much exposure each logo got? The chart skews toward the earlier entries, with the outliers of the really bad logos clustered toward the end.
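The bandit concern is worth spelling out, because it would bias exposure even more than the rolling start dates do. Here is a minimal epsilon-greedy sketch, with entirely made-up arm names and pick probabilities, showing how such a policy funnels most of the traffic to the current leader, producing exactly the uneven sample sizes in question:

```python
import random

random.seed(0)  # deterministic for illustration

arms = {"logo_A": 0.55, "logo_B": 0.50}   # hypothetical pick probabilities
shows = {a: 0 for a in arms}
wins = {a: 0 for a in arms}
eps = 0.1                                  # explore 10% of the time

for _ in range(2000):
    if random.random() < eps:
        arm = random.choice(list(arms))    # explore: pick at random
    else:
        # exploit: show the arm with the best observed win rate so far
        arm = max(arms, key=lambda a: wins[a] / shows[a] if shows[a] else 0.0)
    shows[arm] += 1
    if random.random() < arms[arm]:
        wins[arm] += 1

# One arm ends up with the vast majority of the exposure, so its win-rate
# estimate is far tighter than the other's -- the samples are not comparable.
```

If Survata did anything like this (and the post doesn't say), comparing raw win counts across logos would be misleading; a fully randomized rotation is the only setup where the counts are directly comparable.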
Ah. They could have worded that a little differently then. It did sound as though they picked 5 logos and then surveyed the people. I can see it the other way now as well. I've never known a survey to be run like that. Seems odd to me.