That's hardly a convincing example. All of these points can be solved elegantly with a stream abstraction, which can be cheap or free given a sufficiently advanced language and compiler.
As for legal or policy reasons, those still aren't reasons to write boilerplate code. Your reimplementation can be tight and reuse other abstractions or include its own.
A stream abstraction is a solution to some (not all) of these problems, and indeed some libraries use one, but a stream abstraction powerful enough to solve most of these problems may result in more complex code than just rewriting the checksum algorithm from scratch. And there is a limit to how much compilers can optimize, especially considering that checksum calculation may be critical to performance.
In reality, few people need to write their own checksumming function, but sometimes it is the best thing to do. And it is just an example; there are many other instances where an off-the-shelf solution is not appropriate because of some detail: string manipulation, parsing, data structures (especially the "intrusive" kind), etc. And since you are probably going to have several of these in your project, it will result in a lot of boilerplate. If it were generic enough not to require boilerplate, it would probably have been developed already and you would be working on something else.
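For a sense of scale, a from-scratch incremental checksum really is only a handful of lines. (A generic Adler-32 sketch in TypeScript, purely as an illustration; it is not tied to any particular library or project.)

    // Adler-32, fed incrementally: update() can be called once per chunk,
    // which covers streaming and partial buffers without a stream abstraction.
    const MOD = 65521;

    function adler32Update(state: { a: number; b: number }, chunk: Uint8Array): void {
      let { a, b } = state;
      for (const byte of chunk) {
        a = (a + byte) % MOD;
        b = (b + a) % MOD;
      }
      state.a = a;
      state.b = b;
    }

    function adler32Final(state: { a: number; b: number }): number {
      // High 16 bits are b, low 16 bits are a; >>> 0 keeps the result unsigned.
      return ((state.b << 16) | state.a) >>> 0;
    }

    // Usage: feed chunks as they arrive.
    const state = { a: 1, b: 0 };
    adler32Update(state, new TextEncoder().encode("hello "));
    adler32Update(state, new TextEncoder().encode("world"));
    console.log(adler32Final(state).toString(16)); // checksum of "hello world"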
Abstractions are almost invariably more complex, slower, more error-prone, and generally worse than the direct equivalent. They are, however, reusable; that's the entire point. So one person goes through the pain of writing a nice library, and it makes life a little easier for the thousands of people who use it; generally, that's a win. But if you write an abstraction for a single use case, it is generally worse than boilerplate.
All of the examples show a single use, but the value of destructuring becomes more obvious when you look at an example with multiple uses. Take this variant of one example:
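(A sketch of the kind of variant I mean; the Props shape and the surrounding functions are hypothetical, and only the props.a.deeply.nested.variable chain comes from the article's example.)

    // Hypothetical types and helpers, for illustration only.
    interface Props {
      a: { deeply: { nested: { variable: number } } };
    }
    declare function firstUse(n: number): void;
    declare function somethingThatMightMutate(p: Props): void;
    declare function secondUse(n: number): void;

    // Repeating the chain at every use site.
    function repeated(props: Props) {
      firstUse(props.a.deeply.nested.variable);
      somethingThatMightMutate(props);
      // Unless it can prove nothing changed, the VM has to walk
      // props -> a -> deeply -> nested -> variable all over again here.
      secondUse(props.a.deeply.nested.variable);
    }

    // Destructuring once: a single lookup into a local const, whose value
    // (and narrowed type, if it were a union) trivially stays the same.
    function destructured(props: Props) {
      const { variable } = props.a.deeply.nested;
      firstUse(variable);
      somethingThatMightMutate(props);
      secondUse(variable);
    }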
Depending on what the functions are doing, neither the VM nor tsc may be able to infer that the variable's value stays the same, or, for that matter, that props, props.a, props.a.deeply, or props.a.deeply.nested stay the same.
The VM will likely have to generate code to dereference the whole chain again for the second use, and the compiler might lose narrowing information. Both of these can easily be avoided with destructuring.
(You could use "const variable = props.a.deeply.nested.variable", but then you have many of the same issues the article complains about.)
This is a good example of why I dread Copilot: even if Go specifically couldn't express this any more concisely, there is a language that can, and Copilot's very existence makes it less likely that that other language will be used as much as it deserves.
Besides, the generated example seems to be missing code to gracefully handle the case where len(filtered) is zero. Maybe there's a precondition that prevents that from happening or maybe a division by zero is exactly what you'd want, but at face value it looks like the bot did a rush job.
Zero is gracefully handled; the mean of a zero-sized set is best represented by NaN, and this would be idiomatic in most languages' IEEE754-style handling.
Saturation is not. This is what really bugs me: If I'm going to drag in a billion GPUs of external computation (or a dependency, which is basically the same thing but with human brains), I want it to provide the hard algorithm I can't write, not the easy one I can. I am not limited by typing speed.
Agreed about saturation and the choice of variable name, but the code would trigger a division by zero and not result in NaN: https://go.dev/play/p/vYm4tSNEJ7M
(Also, in, say, Ruby and JavaScript, 1.0/0.0 is Infinity and not NaN.)
Your playground link shows a build error. 0.0/0.0 at runtime will be NaN. And in basically every language, 1.0/0.0 is Infinity. But we're talking about 0.0/0.0.
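(For the JavaScript/Ruby aside above, the IEEE 754 behaviour is easy to check; this is plain TypeScript/JavaScript, not the Go code from the playground link.)

    // Float division in JavaScript/TypeScript: no compile-time constant check,
    // so both expressions are evaluated at runtime under IEEE 754 rules.
    console.log(1.0 / 0.0);               // Infinity
    console.log(Number.isNaN(0.0 / 0.0)); // true: 0/0 is NaN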
Both good examples of coding where you should be thinking instead, though.
I think, if anything, there is far too much thinking going on here for the tiny example I copied from the window I literally already had open, with the function I was working on.
For what it's worth, Copilot (correctly) inferred a loop variable called "detection", I imagine based on similar usage earlier in the function. And there is already a conditional in place to prevent invalid operations; if I remove it I see a new suggestion:
if len(filtered) > 0 {
This tool is far from perfect, but it very much sounds like you folks haven't used it. If that's the case, I would encourage you to research it, as with all tooling, and draw some informed conclusions about its applicability instead of making assumptions.
Reminds me of Cargolifter; they tried something less ambitious and failed... but that was 20 years ago, at the very height of the dot-com boom. Better luck to the Pathfinder 3 team! It would be amazing to see airships revived.
You work with crypto according to your profile. I hope that when you see a random generator return a series of a hundred 6s, you go and check what's wrong with it instead of assuming you just got lucky this time ;-)
This is much more relevant in cryptography than in statistics. If your PRNG always returns the same 4, it's buggy, but the really problematic outputs look exactly like random numbers.
(Looks like my profile is out of date, by the way.)
If you follow the link, it says Americans spend that amount of time daily with "major media", and no mention of news only. So that would presumably include reading/watching fiction, listening to music etc.?
> included are digital (online on desktop and laptop computers, mobile nonvoice and other connected devices), TV, radio, print (offline reading only), newspapers, magazines, radio, and television
Pretty big blunder in an article about media trustworthiness if you ask me.
This article is a freelance health/wellness writer dumping a spec pitch that didn’t sell on Substack to try to recoup a little of the invested work. The article was written toward an expected outcome starting from a Twitter poll, which means by the end of the lede it’s basically worthless because it exists only to confirm the author’s presupposition. That’s the polar opposite of journalism, despite what some folks would have you believe, and it’s why every editor she called presumably passed on this. Blunders throughout should surprise you next to zero because it’s basically a writer with Word open, a point to make (noticeably from a bad personal experience), and no support to keep it on track. And this is the kind of gun you want to make sure you’re aiming well. She isn’t.
Seriously, leading off with a Brandeis quote comes across as so horrifically pretentious it hurts. That the next graf is about a crisp November day almost made me close the tab immediately, and I’ve read some really bad work. The TL;DR being that long and then the article starting like that is a big, neon sign upon which is written “I badly need an editor.” And this is part 1! The horror!
Another blunder: the duplicated scripts are because lazy local producers take fully-produced packages off their network sources to fill time. I know because I’ve done it. If I’m three minutes light in my B block, I’m looking at the network’s stuff or CNN or whatever to get what I can. Something on trade relations? Cool. Throw it in! It’s already done! (Sinclair must-runs use the same mechanism and are just pushed on producers.) It’s not a local news producer watching a competitor’s air and writing it down word for word, as she seems to think it is with her “hence” behind that YouTube embed. An interesting oddity of news, but not the malevolent machine she’s implying.
(Yes, I ended up reading the whole thing. It’s an interest area; I was a local news director in TV and an assignment editor before my start in tech. The author understands next to nothing about the incentives or the economics of newsgathering in particular. I’d rebut it but I’d run out of room here. Why it’s on HN is beyond me, since not even Good Housekeeping would pick this drivel up. She’s onto something, of course, and it’s a subject that deserves better study than this.)
It can be both: that the teacher helped a _gifted_ boy grow into the author we know now, and that the boy wouldn't have become that author without them.
Just because their other pupils didn't turn out to be legendary doesn't invalidate the teacher's contribution, which is to say, doesn't mean the joke was in good taste.
> Researchers have learned that digestive tracts of long-distance migrants work overtime prior to migration. Food intake is increased, and the food is metabolized and stored as fat, which serves as the birds’ flight fuel. Bar-tailed Godwits store fat until it makes up about one half their weight. Then the digestive system shuts down so metabolic energy can be maximized for muscle use.
Brecht had a whimsical take on this. There's a German saying that goes "Wer A sagt, muss auch B sagen": he who says A has to say B, meaning, of course, that you should always try to follow through.
Brecht wrote "Wer a sagt, der muß nicht b sagen. Er kann auch erkennen, daß a falsch war": he who says A doesn't have to say B. He can also realize that A was wrong.