It seems like all the hash-table implementations I've seen top out at roughly a million insertions per second. I wonder if it's possible to get at least an order of magnitude faster (single-threaded). At that rate, are we close to being bound by parsing the input strings themselves? Would sorting or caching help a lot?
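As a rough way to check whether string handling dominates, here's a minimal micro-benchmark sketch (in Python, purely illustrative; absolute numbers depend heavily on the language and machine) that separates the cost of hashing the input strings from the cost of the full insert:

```python
import time

# Hypothetical micro-benchmark: compare raw insertion rate into a hash
# table (a dict) against the cost of just hashing the input strings.
N = 1_000_000
keys = [f"key-{i}" for i in range(N)]  # pre-generated input strings

# 1) Full insertion into the hash table.
d = {}
t0 = time.perf_counter()
for k in keys:
    d[k] = 1
insert_time = time.perf_counter() - t0

# 2) Hashing alone, without storing anything, as a lower bound on
#    the per-string cost.
t0 = time.perf_counter()
for k in keys:
    hash(k)
hash_time = time.perf_counter() - t0

print(f"inserts/sec: {N / insert_time:,.0f}")
print(f"hashes/sec:  {N / hash_time:,.0f}")
```

If the two rates are close, the hash/parse step is the bottleneck and sorting or caching on top of it won't buy much; if insertion is much slower, the table itself has headroom.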
Despite the title, I found the details about rodent brains and the kiss-and-spit mechanism much more engaging and convincing than the discussion of effects on humans.
I didn't get much from the linked blurb, but the title made me think: if you sample training data exhaustively and then run a learning process, could you take the training data that maximizes the algorithm's performance and use it to better train humans at the same task?
I was also thinking along these lines as I read, and there have been plenty of articles posted on HN about the diminishing returns of working too long. Basically it boils down to people getting tired, and, especially in creative pursuits, the benefit of stepping away from your project and letting your mind wander sometimes.
What I expected the author might say is that with many accumulated hours you become more efficient such that over time your output per hour grows, but his view seems quite different.
Nice development for trivial patents, but it makes me wonder whether this could lead to people having to defend their patents (e.g., on algorithms) by appealing to computational complexity, or to the physical constraints of human versus computer memory, etc.
Why, exactly? I wondered the same thing--which so-called startups are achieving this, how big a trend is it, is it just a local Silicon Valley thing or is it widespread, etc.? Seems like the author dropped the ball here.
I've read articles about at least two, but sadly can't find them now. They use unlicensed spectrum to deliver low data rates, which is enough for IoT devices.