I used to be a hardcore Stack Overflow contributor back in the day. At one point, while trying to get my answers more appreciated (upvoted and accepted), I became basically a sycophant, prefixing all my answers with “that’s a great question”. Not sure how much of a difference it made, but I hope LLMs can filter that out
Chess puzzles work quite accurately, IME, for assessing my mental capabilities for the day, especially Puzzle Storm on lichess. There are enough puzzles that they don’t repeat too often, and they’re rated, so puzzles at the same rating have similar difficulty. On my good days (lots of sleep the previous nights) I score way better than on my bad days (30-40% better)
A bit unrelated, but try taking L-theanine after a night of drinking: it makes you wake up fresh and without a hangover, because it speeds up the liver’s processing of alcohol and its metabolites and also protects against alcoholic liver damage. All of my friends who tried it say the same thing I do: “I wake up totally fresh after theanine”. There’s also a study supporting this: https://pubmed.ncbi.nlm.nih.gov/16141543/
But once that bubble burst (like any other bubble), it never reached new highs again. The bitcoin bubble, on the other hand, has burst multiple times and come back higher each time
I think “puzzle storms” on lichess can be considered a way of fiddling with chess instinctively. I found it also helps with not giving away pieces for free in full games, and with spotting your opponent’s blunders much faster
TL;DR: the Tornado Cash developer, Alexey Pertsev, was convicted of money laundering by a Dutch court and sentenced to over 5 years in prison for developing Tornado Cash (not for actually laundering any money himself).
I wonder: will they go after the Monero devs next? What about the creators of wallets that don’t require KYC before creating an address?
I read something on twitter some time ago along the lines of “if you run enough A/B tests, you’ll end up with a porn website”, and I guess that’s what the future of the high-engagement web has in store for us
I think this could be implemented without listening: say I physically meet with my friend X, who has been researching his new purchase online (or has already bought it). The more obsessed he is with it, the bigger the chance he'll tell me how awesome such a thing is.
So the algorithms can percolate his interests over to me after we meet, and some of the time it will just happen that we actually talked about it too.
The tech lead I was working with a few years ago was absolutely certain that vanilla JS is a framework, to the point that he dismissed a candidate for not knowing about it.
I guess he was referring to this one: http://vanilla-js.com/
After working for the past ~3 years with a microservice-oriented architecture, I can only imagine the horrors of having to work on a project built with lambdas.
We're a team of only 2 backend engineers who have to touch up to 4-5 repositories to implement some features. Orchestrating the deployment of changes to all those repositories, debugging across all those microservices, and maintaining sane rollback strategies for each feature really drags down the developer experience and speed of development compared to working with a monolith. I don't want to imagine what happens when you have to treat every function as a separate entity.
Yes, these architectures have their benefits, but for 99% of the projects in the wild the burdens drastically outweigh the benefits: every day, for every feature developed, for every deployment, and for every bug being debugged.
I’m a strong proponent of microservices, but only for one thing: the ability to scale the number of engineers working on a product. If you can abstract out a contract between each service, then developers are free to iterate as fast as they can without constant communication overhead.
The idea of a 2-dev org deciding to replace simple, maintenance-free function calls with authenticated, TLS-secured, retryable, service-located, rate-limited remote procedure calls (in the general sense of the term), and replacing simple method parameters with serialisation and deserialisation overhead… it’s all just pretty absurd.
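To make that overhead concrete, here's a rough Python sketch of the same call before and after it crosses a service boundary (the function and endpoint names are made up for illustration, not from any real codebase):

    import json
    import time
    import urllib.request

    # Monolith: a plain, in-process call. Nothing to maintain.
    def get_price(sku: str) -> float:
        return {"ABC-1": 9.99}.get(sku, 0.0)

    price = get_price("ABC-1")

    # Microservice: the "same" call once it crosses the network --
    # serialisation, an HTTP round trip, retries and error handling.
    def get_price_remote(sku: str, retries: int = 3) -> float:
        body = json.dumps({"sku": sku}).encode()             # serialise the argument
        req = urllib.request.Request(
            "http://pricing-service.internal/price",          # hypothetical endpoint
            data=body,
            headers={"Content-Type": "application/json"},
        )
        for attempt in range(retries):
            try:
                with urllib.request.urlopen(req, timeout=2) as resp:
                    return json.loads(resp.read())["price"]   # deserialise the result
            except OSError:
                time.sleep(2 ** attempt)                      # back off and retry
        raise RuntimeError("pricing-service unavailable")

And that's still the optimistic version: the remote path needs auth, TLS, service discovery and rate limiting bolted on before it matches what the in-process call gave you for free.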
The consequence of that is that every release is a fairly big deal requiring a lot of coordination overhead among a huge number of people, which bottlenecks how often they can happen. That's fine for a foundational piece of technical infrastructure like an operating system kernel, but Web apps and other forms of continuously delivered software want to release weekly or more frequently, and there's no way that could happen if every release incorporated 12,000 people's work. That is the problem that microservices solve.
Not that I'm super-familiar with Linux kernel development, but that's kind of a special project: it has a benevolent dictator and worldwide interest from several industries willing to throw resources at it. I think what you said is sound, but I'm not sure how comparable the Linux kernel is to the way most for-profit software firms are run. And maybe that says a lot about FOSS.
I’ve found AWS’s SAM to be a real pleasure to use (though I’ve only used it a bit), and it would solve the pain points you’re talking about here: basically, you keep all your functions in a single repo with a SAM template, and the accompanying CLI makes deployment and orchestration a breeze. May be worth a gander.
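For anyone curious, a minimal SAM template looks roughly like the sketch below; the resource names, handlers, paths and runtime are my own hypothetical choices, just to show the shape of it, not something from this thread:

    # template.yaml -- rough sketch of a minimal SAM template
    AWSTemplateFormatVersion: '2010-09-09'
    Transform: AWS::Serverless-2016-10-31

    Resources:
      CreateOrderFunction:                # one function per handler, all in one repo
        Type: AWS::Serverless::Function
        Properties:
          CodeUri: src/
          Handler: orders.create_order    # src/orders.py, create_order() -- hypothetical
          Runtime: python3.12
          Events:
            CreateOrder:
              Type: Api
              Properties:
                Path: /orders
                Method: post

      ListOrdersFunction:
        Type: AWS::Serverless::Function
        Properties:
          CodeUri: src/
          Handler: orders.list_orders
          Runtime: python3.12
          Events:
            ListOrders:
              Type: Api
              Properties:
                Path: /orders
                Method: get

One repo, one template; "sam build" and "sam deploy" then ship everything in the template as a single unit, and rollbacks go through CloudFormation rather than being coordinated by hand across repos.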