On the other side of the coin, you can have situations like in 1989 when Jethro Tull beat Metallica for Best Hard Rock/Metal Performance at the Grammys.
The Grammys have notoriously been considered out-of-touch for some time, even when you compare them to the other big awards shows (Tonys, Emmys, Oscars).
This assumes negatives are represented as two's-complement, but that's implementation-defined. Testing whether a negative number is even by ANDing it with 1 won't work in a system that uses one's-complement.
Unisys apparently ships some emulators of an old system that still use one's complement... much as I hesitate to ever link to any ESR material, there's a good article here:
It bothers me when "goto" is assumed to be "a maligned language construct".
People who think "goto" is evil should also give up the other jump statements: continue, break, and return (and also switch, though it's not listed as a jump statement in the C standard, at least not in '89 or '99).
You can see some contradictions in the paper regarding goto. For example, they state that deep nesting should be avoided, but goto should be avoided as well, even though one benefit of using goto is to limit nesting depth. From the Linux Kernel coding style doc:
- unconditional statements are easier to understand and follow
- nesting is reduced
- errors by not updating individual exit points when making
modifications are prevented
- saves the compiler work to optimize redundant code away ;)
This is because certain coding standards are designed to be idiot-proof. Unfortunately, that can result in tasteless code and sometimes undesirable workarounds (e.g. using "goto" to have one exit path for errors is a perfectly valid use).
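The single-exit-path idiom the kernel doc is describing looks roughly like this (function and resource names here are hypothetical, not from any particular codebase):

```c
#include <stdio.h>
#include <stdlib.h>

/* Kernel-style "goto cleanup" sketch: each acquisition that fails jumps
 * to a label that unwinds only what was already acquired, so there is a
 * single exit path to keep correct when the function changes. */
int process_file(const char *path)
{
    int ret = -1;
    char *buf = NULL;
    FILE *f = fopen(path, "rb");
    if (!f)
        goto out;

    buf = malloc(4096);
    if (!buf)
        goto out_close;

    if (fread(buf, 1, 4096, f) == 0)
        goto out_free;

    ret = 0;            /* success */

out_free:
    free(buf);
out_close:
    fclose(f);
out:
    return ret;
}
```

Note how the labels unwind in reverse order of acquisition; adding a third resource means adding one label, not touching every early return.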
When Dijkstra wrote his famous essay "Go To Statement Considered Harmful" (1968), it was a manifesto against unstructured programming i.e. the spaghetti code. However, the use of "goto" per se does not imply unstructured programming. Donald Knuth wrote a wonderful essay "Structured Programming with go to Statements" (1974) to make this point.
Availability of "goto" in C merely gives us more flexibility, but it does not mean that we should start writing unstructured code.
Dijkstra was concerned about being able to reason about code, and spaghetti code can make it impossible to decompose. A single goto within a function is not a big deal, and that's not really what he was worried about.
Few people have worked on real spaghetti code: thousands of lines with no functions, no modules, just spectacular leaps forward and backward, jumps forward into the middle of huge loops, jumps backward into the middle of loops, giant loops nested within and partially overlapping other loops.
I worked on such code, trying to decompose it in order to organize it into subroutines. It resisted my efforts almost completely. Fortran IV I think.
Lua added goto just recently, and it's benign, because it can't escape its calling context.
Users were agitating for `continue` to join `break` for control flow interruption. Lua instead chose to provide all the non-structured control flows you would like as a primitive; scoping it lexically keeps it from breaking composition.
I still hope that we might be able to convince the Lua authors to add continue one day. It would be very convenient.
I think the real reason why it hasn't been added to the language yet is that it has a weird interaction with the way Lua scopes its repeat-until loops. The justification about having a single control-flow structure is more of an excuse.
The closest I've come was my own code: minesweeper on my TI-82 graphing calculator. I separated the program into various sections, then used `goto` to jump where I needed. I was tempted at some point to use the "call program" facility instead, but that would have meant exposing those programs to the end users, so I just lumped everything in one file.
Oh god, I feel for you. My first real language was when I had my TI-84, and you NEEDED goto if you didn't want to clutter up the user's machine with a bunch of things they should never-ever press. God, what horrible yet nostalgic memories.
I’ve seen a few Fortran goto subroutines. I somehow get the feeling that the two following facts are mathematically related: some graphs cannot be drawn on a two-dimensional sheet of paper, and some subroutines cannot be decomposed into smaller subroutines.
> People who think "goto" is evil should also give up the other jump statements: continue, break, and return (and also switch, though it's not listed as a jump statement in the C standard, at least not in '89 or '99).
That makes no sense. goto is maligned because it's unstructured, the other constructs you list are structured and restricted. Much like loops, conditionals and function calls they're specific and "tamed".
Not only that, but the historical movement against goto happened in a context where goto was not just unstructured but also unrestricted (not even confined to the local function).
Even K&R warns that goto is "infinitely abusable", and recommends against it when possible (aka outside of error handling & breaking from multiple loops as C does not have multi-level break).
The unstructured and unrestricted equivalent of goto, then, is a method call in OOP, where a method is basically a label and the object is shared state it randomly messes with.
No, that's not a restriction, but just a behavior. It doesn't keep you from jumping all over the place and modifying shared state by calling methods within methods. You can only use conventions to have some restrictions and structure here. Just like with goto.
You highlighted in this comparison why I hate exception handling in OOP languages, and just generally the common practices prescribed for handling errors.
Yep. Those maligners forget that state machines are useful ways to structure code, are inherently analyzable (they form the basis of modeling languages like PlusCal), and can only be fully expressed in C using goto.
(No, you can't use tail calls to represent state machines in C; that is not a feature of the C language but rather of a given implementation. And yes, you could model state machines with an enum variable and a big switch statement, but that's even harder to follow.)
What trips people up is when goto is used to jump across resource lifetime boundaries (which C++ addresses with RAII), when they maintain too much state in cross-state variables that they forget what their invariants are, and when they use a goto as a replacement for what should be a function call.
Using goto to implement e.g. a data processing loop, a non-recursive graph traversal algorithm, or a parsing state machine are all perfectly valid uses.
Tracing through goto spaghetti is not more comprehensible than a structured switch with clearly defined regions for each state. This is the sort of abuse that gives goto a bad name. The only thing worse is table driven state machines calling function pointers scattered everywhere.
State machines implemented with gotos have very clearly defined regions for each state: the space between each label. Switch-based state machines are fine, but become hard to follow when e.g. you need a loop around a couple states, and are often abused to allow changing states from a non-lexically-local context (e.g. within a function call).
At a high level, this:

    goto NewState;

is no less comprehensible or spaghetti-prone than this:

    state = NewState;
    break;
As a longtime C and goto user, defending the practice many times, I discovered something interesting.
My uses of goto can be replaced with nested functions! The code is nicer, cleaner, and the equivalent code is generated (the nested functions get inlined).
Of course, nested functions aren't part of Standard C, but they are part of D-as-BetterC. (D has goto's too, but I don't need them anymore.)
Walter,
what do you think of nested functions or even normal functions taking an identifier similar to continue / break to be able to jump up the stack precisely: either a certain number of steps, or to a particular calling function.
Yes, and it annoys me no end that clang has refused to implement them, as they were part of my codebase as well... What better way to implement stuff like qsort() callbacks than with a simple, contextual small function just above the call??
YES it is dangerous due to stacks etc etc but hey, we're grown up adults, not script kiddies.
Using a chainsaw without paying attention is how fingers are cut off, using that as an argument against making chainsaws easier to use doesn't make any kind of sense.
Good for you maybe, projecting that on people who have a clue what they're doing doesn't make sense either. They're mostly messing up chainsaws as well these days, for the same misguided reasons.
That adds a lot of boilerplate though. IMO the best solution is destructors and RAII so that you can return early in case of error and not leave your resource half-initialized. And this way you don't have to repeat your cleanup code in the "deinit" method. Of course if you start adding destructors soon you'll want generics and before you know it you end up with "C with classes" and we all know that's the path towards insanity.
In practice, it doesn't. (The compiler inlines the code.) I know because I'm pretty picky about this sort of thing - it was why I was using goto in the first place.
> RAII
RAII can work, but it's a bit clunky compared to a nested function call. Additionally, if the code being factored is not necessarily the same on all paths, that doesn't fit well with RAII.
Boilerplate in term of characters typed, not resulting code size. Adding a bunch of function declarations adds a significant amount of noise IMO.
>RAII can work, but it's a bit clunky compared to a nested function call. Additionally, if the code being factored is not necessarily the same on all paths, that doesn't fit well with RAII.
I think I'd need to see an example of what you're talking about then because I don't quite understand how your method works exactly.
I guess it's a matter of taste, but I often (not always) prefer fewer return statements in a function, with gotos to error handling/cleanup code. Especially in kernel mode drivers.
> It bothers me when "goto" is assumed to be "a maligned language construct".
That's a statement about what people say about the feature, not about the feature itself. The subtlety here is that malign as an adjective refers to something evil or ill-intentioned, while as a verb it means something closer to slander or defame. It's frequently (mostly?) used in a context of skepticism regarding the claims in question, especially in the construction much-maligned.
Sounds a bit like a strawman then; does anybody actually malign "goto" as an error handling construct in C? It's pretty standard in my experience. It's goto "like in BASIC" that's utterly evil and rightfully maligned. And having learned C coming from BASIC, I speak from experience...
Yeah their take on goto is a bit odd given that it's probably the sanest way to do "cascading" error handling in C given that we don't have RAII or exceptions.
For memory allocs sure, but that's only a small subset of resource management. How about sockets, fds, locks, 3rd party library initialization, hardware setup (for device drivers) etc... Alloca doesn't cut it, you need general purpose destructors.
Also when you code something at lower levels, sometimes it’s beneficial to treat some parts of CPU state and thread state as resources: process/thread priority, interrupt mask, FPU control, even instruction set (e.g. Thumb support on ARM needs to be manually switched on/off).
It's negligible, you still need to manage lifetimes somehow beyond the scope of a single function. Like having a context abstraction and tying destruction of resources to it. Introducing something alternative for special cases only increases complexity as now instead of using a single universal and consistent API you have multiple that behave rather differently.
in many cases (I would say most, at least for my programs in non-interactive scientific computing), all the objects can be created at the beginning of the program, and then no further creation happens. Sometimes it takes a bit of effort to refactor your program into that structure, but it is an effort well spent. Then you can use tools like openbsd's pledge, and reason more clearly about your algorithms.
I concur, that tends to be my modus operandi as well but unless your application is completely monolithic you'll probably have 3rd party init code to deal with at some point. And again it won't help if you need to handle cleanup that's not memory-related.
In the case of an operating system (the subject of TFA) pre-allocating everything is obviously completely impractical and alloca won't help since you can't return the memory outside of the stack frame. I'd wager that there are very few uses of goto in kernel code that could successfully be replaced by alloca (the fact that kernel stacks tend to be very shallow wouldn't help either).
Pre-allocation is generally the safe option in an embedded, security-critical environment where you must always handle the worst case scenario and you know all possible inputs. In a user-interaction environment, though, it's usually better to oversell so that the user can choose whether they want to create 1 million A's or 1 million B's, instead of having a pre-created pool of half a million A's and B's each.
With pre-allocation you usually also end up creating your own resource management within the pre-allocated pool, and then you are back to the resource management problem...
>The only language where unconditional jumps make sense is Assembly.
In Scheme, because a lambda expression is a closure and tail calls must be eliminated, a procedure call is considered the ultimate goto. It's goto, but with procedural abstraction.
Using goto for managing cleanup after an error condition results in cleaner, easier to understand and maintain code. This is a specific idiom that is easy to recognize.
On the other hand, using goto to jump back in the program flow, or multiple branching gotos are both asking for trouble.
For people who say goto statements make spaghetti, please consider looking at this goto use case in Linux, sock_create_lite() [1]. Imagine implementing the same thing without goto: would it be more Don't-Repeat-Yourself, more readable, and less error-prone? I don't think so.
I understand how harmful the goto statement is in general; all of us know that. But there is a very specific area in C where it is useful. When Dijkstra wrote "Go To Statement Considered Harmful", even before C was born, people tended to use goto statements everywhere because they were used to assembly jump instructions. But we don't abuse goto statements anymore.
As a C novice, this makes me wonder - wouldn't templates/macros be able to serve a similar purpose to goto statements if the goal is to avoid nested calls but still share code?
EDIT: Or, for that matter, trusting the compiler to inline small-enough function calls?
A gcc extension that gets around macros is nested functions.
> trusting the compiler
99.99% of the time you should just trust the compiler to do the right thing. That 0.01% of the time use a well tested and maintained library.
Hating on goto in C is just cargo cult programming. goto in BASIC and FORTRAN is evil, though back in the 1960s and 1970s programmers often had no choice: they used evil assembly goto hacks just to get their programs to fit in memory/disk.
The problem with Dijkstra was that he was an academic mathematician who despised practical programming, where the program needed to run on the hardware available. Hint: back in the 1970s, professors like Dijkstra had unlimited accounts on the school's mainframe while everyone else had extremely limited accounts.
This sounds particularly useful for customers that spin up instances constantly, because every time then spin up an instance they are paying for 8GB. They could create instances with smaller root volume and save.
A coin with rounded edges won't land on its edge 1 out of 6000 tosses. A coin with a flat edge that is thicker than either of its faces (or just thick in general; doesn't have to be thicker than the faces) will land on its edge a great deal more often.
Also, a coin with a heavier "tails" side will more often land on heads in a spin.
I guess I should read the paper. Maybe it clarifies.
The 1/6000 figure comes from a paper "Probability of a tossed coin falling on its edge" from 1993. I looked for it in the hopes it clarified the type of coin used, but the paper, as far as I could find, is behind a paywall.
Never mind, found a link that talked about the paper. They used a US 5¢ coin, a nickel, which has a flat edge and, I think, the thickest edge of all (common?) US coins. At least it has the thickest edge-to-face ratio in terms of width.