> Programming languages will generally crash at runtime when an unrecoverable error occurs (index out of bounds, memory allocation failed, null pointer, etc).
It is useful with some badly written build systems that think they have to write every gcc invocation to stdout, like pretty much every badly written Makefile out there. Or with `cat myapp/log/yesterday.log`.
You can work around both, but it's clunky. And it's not a must-have, just a nice(r)-to-have.
Maybe, maybe not. Is the potential benefit even worth fighting over? Style is full of so many subjective details that every discussion ends up mired in them, and everybody has a favorite and finds everything else un-fucking-readable. My opinion converged on "Fight it out and tell me what options in $formatter I need to set. Don't even tell me the reasoning, just the options." That leaves room for more important things: I'd rather drink coffee than have the next horizontal real estate vs. parameter alignment debate.
The argument about matching braces to clauses/scoping, for example, is weird in a world where a formatter automatically applies indentation. It might make sense when you have to manually check them on print-outs without an editor. I'd argue GNU style fulfills that role even better, or Horstmann if you care about vertical real estate (or be daring and combine the two). And now everyone hates me.
In a way that I tell the requester "you better have some really, really good reason for wasting my time on this".
Hell, I've even had a new hire prepare an entire presentation to convince us to change the line-length to something different than "The Language Standard".
"We follow the language standard" and if you don't like it, convince upstream (This was Rubocop/Ruby) is another one I've used in the past.
Naturally both people just gave up. There really is a better way to spend time than on tuning and tweaking a linter. Nearly all linters have some way to document and define very local exceptions for that case when some external__dependency_IsMixing_camel_case or whatever.
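To make that concrete with the Rubocop example from above: most linters let you scope an exception to exactly the offending lines with an inline directive, so the global config stays untouched. A sketch only; the wrapper method and `ExternalDependency` module are invented for illustration, `Naming/MethodName` is RuboCop's standard naming cop:

```ruby
# Stand-in for a third-party gem whose API uses camelCase names.
module ExternalDependency
  def self.records
    []
  end
end

# The wrapper name mirrors what the external API's callers expect,
# so silence the naming cop for just this definition.
# rubocop:disable Naming/MethodName
def fetchAllRecords
  ExternalDependency.records
end
# rubocop:enable Naming/MethodName
```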
On the other hand, I understand the data model of git and can't do the most basic shit without looking up which invocation I need via search engine/man pages. Like... deleting a branch is `git branch -d` (-D for forced deletion). Deleting a remote? `git remote rm`. Knowing the model teaches me nothing about the UI.
This seems like a good opportunity to plug two aliases I wrote about a year ago that have been very helpful for cleaning up all the extraneous branches that would show up when I ran `git branch`.
I run `git listdead` after I merge and delete a branch and do the next fetch (`git pull --rebase`). That lists the branches that can now be deleted.
Then I run `git prunedead` and it actually removes them.
Previously if I ran `git branch` it would list every development branch I had ever created in nearly a decade of work. Now it lists maybe ten branches.
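The aliases themselves aren't shown above, but a common way to get the same behaviour is to key off the `: gone]` marker that `git branch -vv` prints for local branches whose upstream has been deleted and pruned. A sketch under that assumption, not necessarily the original implementation:

```
# ~/.gitconfig: a sketch only, not necessarily the original aliases
[fetch]
    # prune deleted remote branches on fetch, so their upstream shows up as "gone"
    prune = true
[alias]
    # list local branches whose upstream is gone (skipping the current/worktree branches)
    listdead = !git branch -vv | grep ': gone]' | grep -v '^[*+]' | awk '{print $1}'
    # delete them; -r (GNU xargs) skips the call when the list is empty,
    # and -d can be switched to -D to force-delete branches git does not consider merged
    prunedead = !git listdead | xargs -r git branch -d
```

With `fetch.prune` set, the ordinary `git pull --rebase` mentioned above is enough to mark the deleted upstreams as gone.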
This is a common way of thinking about it, but nobody really knows. It's purely a conjecture. There is no data to back it up, and it just appeals to common sense.
Should everybody drop C/C++/whatever and rush onto the Rust train because Rust people have a conjecture?
I ask the opposite question. What would happen if the only programming language left is C? Wouldn't we become better programmers and raise the bar so high that the bug count drops to 0?
> What would happen if the only programming language left is C? Wouldn't we become better programmers and raise the bar so high that the bug count drops to 0?
Given that C was the dominant programming language for UNIX applications for over a decade, I think we can look to history for an answer to this question. And I believe the record shows that the answer is "no."
>Should everybody drop C/C++/whatever and rush onto the Rust train because Rust people have a conjecture?
The way these things usually work in practice, the evidence that a new paradigm improves things usually builds up slowly. There will never be a point at which someone proves mathematically that C is obsolete. Instead, the initially modest advantages of other options will get stronger and stronger, and the effectiveness of C programmers will slowly erode compared to their competition. At first only the people who are really interested in technology will switch, but eventually only the curmudgeons will be left, clinging to an ineffective technology and using bad justifications to convince themselves they aren't handicapped. Is Rust the thing that everyone except the curmudgeons will eventually switch to? Who knows, but if you don't want to end up behind the industry then it might pay to try it out in production to see for yourself. If you don't make room for research and its attendant risks, you will inevitably fall behind.
I'm sorry you see things in terms of competition and curmudgeons. But I see your point; it's just another type of FUD: adopt Rust and don't stay behind, or you'll end up a curmudgeon.
Not exactly, I'm saying that if you stay behind and don't adopt something, where that something is whatever the industry switches to after C, you will eventually be left behind. Of course it is also possible (and likely for many people) to die or retire before that happens. It's not like C is going away any time soon.
> What would happen if the only programming language left is C? Wouldn't we become better programmers and raise the bar so high that the bug count drops to 0?
Is this a rhetorical question? Clearly the answer is no.
It seems like there is a big difference between a "mathematical certainty that certain bugs will not occur" if you play by the rules, and just being a better programmer so that you don't write errors. Not that you won't have bugs in Rust, but it seems like we should move towards having our tooling do more of the heavy lifting in ensuring correctness. I don't believe you will ever have the bug count drop to zero. I do believe in mathematical certainties, though.
>Do I really know how many cycles/stack it takes to do std::sort(a.begin(), a.end()); on that specific platform? No, so I cannot trust it.
I also don't know how many cycles it takes for my implementation of quicksort apart from checking the output of a specific compiler and counting instructions. C is not, was not and will never be a portable assembler.
> I also don't know how many cycles it takes for my implementation of quicksort apart from checking the output of a specific compiler and counting instructions.
On any modern out-of-order CPU, that doesn't get you close to determining the dynamic performance. Even with full knowledge of private microarchitectural details, you'd still have a hard time due to branch prediction.
The embedded/micro space is not out-of-order, broadly. Memory access latency may be variable, or not, but I would charitably assume OP knows their subject material and is either using cycles as a metaphor, or actually works with tiny hardware that has predictable memory access latency.
On most (small, I'm not talking about mini Linux) embedded systems, instruction counting will tell you how long something takes to run. In fact, the compilers available are often so primitive that operation counting in the source code can sometimes tell you how long something will take.
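If you want to sanity-check the counted numbers on real silicon, anything Cortex-M3 or newer also has a free-running cycle counter in the DWT block that you can read around the code under test. A minimal sketch, assuming an ARMv7-M part with the DWT present (the register addresses are the architectural ones; the insertion sort is just a stand-in workload):

```c
#include <stddef.h>
#include <stdint.h>

/* ARMv7-M debug registers used for the DWT cycle counter. */
#define DEMCR      (*(volatile uint32_t *)0xE000EDFCu)
#define DWT_CTRL   (*(volatile uint32_t *)0xE0001000u)
#define DWT_CYCCNT (*(volatile uint32_t *)0xE0001004u)

static void cyccnt_enable(void)
{
    DEMCR      |= (1u << 24); /* TRCENA: enable the DWT/ITM blocks     */
    DWT_CYCCNT  = 0;          /* reset the counter                     */
    DWT_CTRL   |= 1u;         /* CYCCNTENA: start counting core cycles */
}

/* Stand-in workload: insertion sort on a small buffer. */
static void insertion_sort(int32_t *a, size_t n)
{
    for (size_t i = 1; i < n; i++) {
        int32_t key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {
            a[j] = a[j - 1];
            j--;
        }
        a[j] = key;
    }
}

/* Returns the number of core clock cycles the sort took. */
uint32_t measure_sort_cycles(int32_t *a, size_t n)
{
    cyccnt_enable();
    uint32_t start = DWT_CYCCNT;
    insertion_sort(a, n);
    return DWT_CYCCNT - start; /* unsigned subtraction tolerates one counter wrap */
}
```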
Depends a lot on the compiler and target arch. You'll miss out on a lot of stack accesses, or add too many. You don't get around looking at the final executable if you want good results. And for more complex targets you ultimately need to know what the pipeline does and how the caches behave if you want a good bound on the cycle count, assuming of course you're on anything more complex than an ATmega, for which op counting might be enough. I work in the domain; lots of people do measurements, which only give a ballpark and are bad because you might miss the worst case (which matters for safety-critical systems, where a latency spike at the wrong moment might be fatal). Pure op counting is bad too, since the results grossly overestimate (e.g. you always need to assume cache misses if you don't know the cache state, or pipeline stalls, or DRAM, or...). Look at the complexity of PowerPC; that should give you a rough idea of what we're usually dealing with (and yeah, I'm talking embedded here).
To me that "sometimes" feels like "I can wrestle some bears with my bare hands, e.g. a teddy bear" ;-)
"Most" embedded ARM? Cortex-A8 and smaller do not have OoO execution. Cortex-A9 is a 32-bit up-to-quad core CPU with clock of 800MHz-2GHz and 128k-8MB of cache. That's pretty big. I guess a lot of this is subject to opinion, but I don't think smartphones with GBs of RAM when I think of embedded systems.
All of Cortex-M is in-order, and only the M7 (still somewhat exotic) has a real cache (silicon vendors often do some modest magic to conceal flash wait states, though).
Alignment requirements are modest and consequences are predictable. Etc.
About the most complicated performance management thing you get is the analysis of fighting over the memory with your DMA engine. And even that you can ignore if you're using a tightly coupled memory...
JS would like to have a word with you.