i-think's comments | Hacker News

I don't think this article adds anything to the discussion. It repeats Halderman's earlier point.

> The important point is that all elections should be audited, and not only if you have statistics suggesting that something might be fishy.

It also repeats the conclusion that there are no signs of anything fishy in the currently available data, at least based on initial statistical analyses.


Is it not currently the case that there are already redundancy tests (i.e. overconstraint tests / unit tests) on election results? Or would a manual miscount of a vote have a near-nil chance of getting caught?

If so, I totally agree we need to have samples of all elections verified to at least get bounds on error rates.
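A hedged sketch of the "bounds on error rates" idea: if a random sample of n ballots is hand-checked against the machine count and k discrepancies are found, an exact binomial (Clopper-Pearson) upper bound on the true miscount rate follows. The sample size and confidence level below are illustrative assumptions, not figures from the thread.

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def miscount_rate_upper_bound(k, n, confidence=0.95):
    """Clopper-Pearson upper bound on the true miscount rate, given
    k discrepancies found in a hand audit of n randomly sampled ballots."""
    if k == n:
        return 1.0
    lo, hi = k / n, 1.0
    # Bisect for the largest rate p at which seeing <= k discrepancies
    # is still plausible (probability above 1 - confidence).
    for _ in range(60):
        mid = (lo + hi) / 2
        if binom_cdf(k, n, mid) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return hi

# Zero discrepancies in a 1,000-ballot sample bounds the miscount rate
# at roughly 3/n (the "rule of three"): about 0.3% here.
print(round(miscount_rate_upper_bound(0, 1000), 4))  # → 0.003
```

The point of such a bound is that even a clean audit never proves a zero error rate; it only shrinks the plausible range as the sample grows.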


It's a pity that journalists (and a few self-interested politicians) distorted Halderman's points so dramatically. This stuff's never going to go away, now -- it'll be the foundation of wild conspiracy theories for decades.


> This stuff's never going to go away, now -- it'll be the foundation of wild conspiracy theories for decades.

Eh, the same stuff was said after Bush v Gore, and it died down within a few months of inauguration. You can see it happening already: most people are uninterested in pushing this.


More damaging would be if it implicitly changes the message from "audits should always be done" to "we're doing it because there was monkey business".


It already has. Which is doubly unfortunate since there is no evidence whatsoever of monkey business anyway.


> We just don't provide Linux binaries, except Ubuntu Snaps, because it is extremely difficult to do.

Why is it so hard? Do you mean that it would be extremely difficult to make the large number of binaries required for all the different Linux distros?


Well, it is difficult because, for a media player, you need a correct video stack (which means X11, DRI, Mesa, OpenGL, etc.) and a correct audio stack (which means PulseAudio), and those are quite hard to ship in a cross-platform way.

If you look at the Snap packages of VLC, a LARGE part is not VLC at all, but this graphics stack.


In order to make this claim, Amazon, TechCrunch, and the researcher they cite must be able to accurately identify the population of incentivized reviews. How is that possible?

Incentivized reviews, if I'm using the term correctly, are designed to be indistinguishable from 'real' reviews. The reviewers aren't going to reveal which ones are incentivized.

If you think you can identify them, what you mean is that you can identify the ones that you identify; it's literally that much of a tautology. Without ground truth, you have no idea of your accuracy: how many true and false positives, how many true and false negatives.

What Amazon has done is the same: they remove reviews that meet certain criteria. Amazon claims the criteria are an accurate proxy for incentivized reviews, but I doubt they can confirm that.

At best they are raising the bar so that only better written incentivized reviews remain, and incentivized reviewers will adjust to the new standard. Users, no longer seeing incentivized reviews that they can identify, will assume the situation has improved. Really, they are still being conned but now don't know it.
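The "you have no idea of your accuracy" point can be made concrete with Bayes' rule: the same detector, with the same sensitivity and false-positive rate, has wildly different precision depending on the unknown base rate of incentivized reviews. All numbers below are hypothetical.

```python
def precision(base_rate, sensitivity, false_positive_rate):
    """Fraction of flagged reviews that are truly incentivized, via
    Bayes' rule. Without knowing base_rate (which requires ground
    truth), precision cannot be measured -- only assumed."""
    true_positives = base_rate * sensitivity
    false_positives = (1 - base_rate) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Same hypothetical detector (90% sensitivity, 5% false-positive rate),
# two hypothetical base rates of incentivized reviews:
print(round(precision(0.30, 0.90, 0.05), 2))  # → 0.89 if 30% are incentivized
print(round(precision(0.01, 0.90, 0.05), 2))  # → 0.15 if only 1% are
```

So a claim like "we removed N incentivized reviews" is unverifiable from the outside: the count depends entirely on an assumed base rate no one can observe.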

