
The thing is that many people today have never encountered the sort of spaghetti code that Dijkstra was talking about in 1968. There's plenty of confusing and messy code around, but true spaghetti code that GOTOs all over the place and is nigh-impossible to follow has been extremely rare for a long time. I can't recall encountering it in the last 30 years.

It's easy to misunderstand what he was even talking about because the paper is so short, assumes you know the context, and has no concrete examples. People quite reasonably assume it's about ugly code they've encountered, but it's actually about ugly code of a completely different kind.

I'm not that old, but I was unlucky enough to have programmed in an unstructured language where GOTO was the only way to use faux-subroutines in my teens. Whatever you think of as code that's difficult to follow: it's nothing compared to this.



> The thing is that many people today have never encountered the sort of spaghetti code that Dijkstra was talking about in 1968.

Can't highlight this enough. The type of spaghetti code "goto considered harmful" was reacting to is basically impossible to create anymore, so anyone who didn't work on that type of code in the 80s or earlier probably hasn't seen it.

And thus is applying the mantra "goto considered harmful" incorrectly. (Such as trying to avoid it in C for clean error handling, when there's no reason to avoid that.)
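For reference, a minimal sketch of that pattern (illustrative names, not from any particular codebase):

    #include <stdio.h>
    #include <stdlib.h>
    /* Classic forward-only cleanup goto: every failure path funnels
       through one exit sequence, releasing whatever was acquired. */
    int process_file(const char *path) {
        int ret = -1;
        char *buf = NULL;
        FILE *f = fopen(path, "r");
        if (f == NULL)
            goto out;
        buf = malloc(4096);
        if (buf == NULL)
            goto out_close;
        if (fread(buf, 1, 4096, f) == 0)
            goto out_free;
        ret = 0;            /* success: falls through the cleanup */
    out_free:
        free(buf);
    out_close:
        fclose(f);
    out:
        return ret;
    }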

To try to replicate that experience, you'd have to write your entire application (including all libraries since libraries were often not a thing) as one single function in C. All of it, no matter how many tens of thousands of lines of code, all in one function. Then label every line. Then picture having GOTOs going every which way to any of the labels. For instance you'd preset the counter variable to some desired value and jump right into the middle of a loop elsewhere. Over time you'd surely accumulate special conditions within that loop to jump out. And so on. It's difficult to even imagine code like this today (or in the past 30 years).
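In C terms, the counter-preset trick alone would look something like this (my own sketch, not code from the era):

    #include <stdio.h>
    int main(void) {
        int i;
        i = 7;
        goto midloop;    /* preset the counter, enter the loop sideways */
        for (i = 0; i < 10; i++) {
    midloop:
            printf("iteration %d\n", i);    /* prints 7, 8, 9 */
        }
        return 0;        /* legal C, and already hard to follow */
    }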


Sure it's possible to have horrible spaghetti today. Just look at any system built on a pubsub architecture and tell me what piece of code executes after another. It's super popular, and it's GOTOs all over again, just with data instead.


This is a good observation, thanks. Yes, some of these cloud-native patterns do become what is essentially a spaghetti flow pattern, even if not in the fragmented pieces of code directly.


Async-await in general has all the same pitfalls. In fact, async programming, being based on reifying program continuations, is a GOTO equivalent in a quite literal sense.
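A toy sketch of that equivalence in C, with a function pointer standing in for the reified continuation (all names invented):

    #include <stdio.h>
    typedef void (*cont_t)(int);
    /* the continuation: the reified "rest of the program" */
    void rest_of_program(int x) { printf("got %d\n", x); }
    void async_op(int x, cont_t k) {
        /* pretend this completes later; when it does, control
           "goes to" whatever k points at, like a computed GOTO */
        k(x * 2);
    }
    int main(void) {
        async_op(21, rest_of_program);
        return 0;
    }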


It's not just cloud; ROS uses this for robotics, for example.


I agree, the closest modern equivalent is code that branches too much from too many places. Modern tools still help a lot to reason about it, but it gets to the point where I, personally, have to draw call-flow diagrams with pen and paper or a whiteboard to actually understand some flows.

Code with extreme branching; state/boolean parameters that determine which branch runs; error handling that can create still other branches of execution: all of that is really hard to keep in mind when reading such nightmare codebases...


> The type of spaghetti code "goto considered harmful" was reacting to is basically impossible to create anymore, so anyone who didn't work on that type of code in the 80s or earlier probably hasn't seen it.

It's still quite possible in assembly, where goto (JMP) is your only way to do control flow. But I doubt there are many people left who write and maintain large assembly programs. I imagine most programmers reach for C or something higher level as soon as the program becomes non-trivial.

I still use this cute online Intel 4004 simulator sometimes when I teach programming:

http://e4004.szyc.org/

It's a fun challenge for novice and advanced programmers alike to write little programs in assembly for a CPU from 1971. The assembly language[1] is only 45 commands, and you only need a handful of them anyway. The CPU interpreter is simple enough you can literally see it think.

[1] http://e4004.szyc.org/iset.html


> It's still quite possible in assembly, where goto (JMP) is your only way to do control flow.

Yes, it's certainly as possible in assembly as it ever was. But as you note, few people are doing large-scale assembly programs anymore. And I'd say that those who still do are sufficiently experienced to avoid unstructured jump explosion in their code, hopefully.


I learned coding in the 90s and I've seen that kind of code. While BASIC itself already advanced to the point where structured conditionals and loops were available pretty much everywhere, plenty of code that was written earlier was still around.


Technically nobody stops you from writing one giant function with labels and goto, it's just not the most obvious path even to the most inexperienced programmers.


> Such as trying to avoid it in C for clean error handling, when there's no reason to avoid that.

Dijkstra would clearly disapprove of this use of goto. But he would blame the C language for making it necessary. Languages with structured cleanup (like the using clause in C#) do not need gotos for resource cleanup.

Dijkstra's argument does not distinguish between short and long gotos, or between long and short functions. His argument applies to any use of goto.


Where would one find examples of such code?


A modern example of code like this would be any game coded in SmileBASIC for the 3DS.


Among others, likely in old introductory books on BASIC or its other flavors.


If you have ever seen someone try to construct a bunch of nested IF statements with complicated conditional clauses, you might think GOTO is not so bad. People have simply become better coders. There are also still GOTOs that are used in specific cases, such as CONTINUE and BREAK - no labels required.

If I look back, it always comes back to naming and managing the names of things. GOTO 100 is meaningless, and one eventually runs out of meaningful names for GOTO labels. For me, OOP addressed the naming issue relatively effectively by using the data type as a namespace of sorts.


CONTINUE and BREAK are quite different from GOTO in that they operate predictably given the current scope: their limitations make them incapable of creating the unstructured nightmare that Dijkstra was talking about. They're similar to a GOTO only in that they compile to a jump, but so do IF statements and FOR loops.

Structured programming wasn't about eliminating jumps, it was about enforcing discipline in their use. The simplest way to do that is to eliminate raw GOTO from the language, but it's also possible to just be careful and use it wisely.


Not too many things make me shake my head harder than folks who consider continue/break to be GOTO equivalents. For the reasons you eloquently said.

Additionally, far more often than not, continue/break allow you to avoid another form of complexity, bugs, and low comprehensibility: deeply nested conditionals.
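A small made-up illustration of the difference:

    #include <stdio.h>
    /* Nested version: each precondition adds an indentation level. */
    void print_even_positives_nested(const int *a, int n) {
        for (int i = 0; i < n; i++) {
            if (a[i] != 0) {
                if (a[i] > 0) {
                    if (a[i] % 2 == 0) {
                        printf("%d\n", a[i]);
                    }
                }
            }
        }
    }
    /* Guard-clause version: continue flattens it back out. */
    void print_even_positives_flat(const int *a, int n) {
        for (int i = 0; i < n; i++) {
            if (a[i] <= 0) continue;
            if (a[i] % 2 != 0) continue;
            printf("%d\n", a[i]);
        }
    }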


CONTINUE and BREAK are simply jumps to the beginning of or just past the end of the current loop context. They are equivalent to GOTOs to particular program offsets without the programmer needing to create labels for those offsets. They do not have any magical meaning beyond that. You could even call them syntactic sugar.
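Spelled out (a sketch; a real compiler's lowering differs in detail):

    #include <stdio.h>
    int main(void) {
        /* with CONTINUE and BREAK */
        for (int i = 0; i < 10; i++) {
            if (i % 2) continue;    /* jump to the increment */
            if (i > 6) break;       /* jump just past the loop */
            printf("%d\n", i);
        }
        /* the same jumps with the labels written out */
        int i = 0;
    loop_top:
        if (!(i < 10)) goto loop_end;
        if (i % 2) goto loop_inc;   /* CONTINUE */
        if (i > 6) goto loop_end;   /* BREAK */
        printf("%d\n", i);
    loop_inc:
        i++;
        goto loop_top;
    loop_end:
        return 0;
    }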


Structured if/then/else is also merely syntactic sugar over if/goto, but that doesn't make it any less useful.

What makes break/continue (including labelled variants a la Java) useful is the fact that the restriction on where they can jump means that the control flow graph is guaranteed to be reducible. That is not the case with free-form goto.
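C lacks the labelled variants, and the conventional workaround there is a goto used in exactly this disciplined way, as a single forward jump out of the nest (a sketch):

    #include <stdio.h>
    int main(void) {
        int found_i = -1, found_j = -1;
        for (int i = 0; i < 5; i++) {
            for (int j = 0; j < 5; j++) {
                if (i + j == 7) {     /* first pair summing to 7 */
                    found_i = i;
                    found_j = j;
                    goto done;        /* a labelled break in disguise */
                }
            }
        }
    done:
        printf("%d %d\n", found_i, found_j);    /* 3 4 */
        return 0;
    }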


They're not syntactic sugar in any language that does not have GOTO, because the semantics of GOTO-free languages don't allow arbitrary jumps, so there is no equivalent syntactic structure that you can compile BREAK to.

The distinction matters because the whole premise of Dijkstra's argument is that if you replace the GOTO keyword with a bunch of more limited versions that cannot be used to produce spaghetti, code quality would go up. The only way for that to work is for the language to be semantically incapable of expressing GOTO.

As I said in my other reply, you seem to have the subtyping relationship wrong: GOTO is a subtype of BREAK (anywhere you find a BREAK you could replace it with GOTO), but BREAK is not a GOTO (you cannot do the reverse).


See my other reply. GOTO is the generic type because it can be used to jump anywhere. BREAK/CONTINUE are sub-types because they are limited in where they can jump. BREAK/CONTINUE can be always implemented using GOTO, but not the other way around.

I agree that BREAK/CONTINUE are not syntactic sugar in languages that don't have GOTO.


No, you're still mixing up the subtype relationship.

Type X is a subtype of type Y if and only if an instance of X can always be used where an instance of Y is required.

GOTO can always be used to replace a BREAK. Therefore GOTO is a subtype of BREAK.

BREAK cannot always be used to replace a GOTO. Therefore BREAK is not a subtype of GOTO.

The inheritance relationship here is not single, it's multiple: a GOTO is a BREAK, but it is also a CONTINUE and a whole lot of other things. It's like a monster class that inherits from every interface under the sun and can do just about anything.

Dijkstra was basically advocating for refactoring our languages to extract those capabilities into smaller, more focused keywords (as well as dropping most of the functionality). Rather than having one keyword implement both the BREAK and CONTINUE interfaces, we break them out into separate keywords.


We apparently disagree on whether the general, flexible construct is the subtype or whether the specific, constrained construct is the subtype. You seem to be thinking from an object-oriented programming class hierarchy perspective, while I am thinking from a set theory perspective (i.e., the set of operations that can be done with GOTO is a superset of those that can be done with CONTINUE or BREAK).

At this point, I don't care which you call the subtype. You can claim that as a win if you want, but having spent so much time on this stupid thread I think we've both lost.


I actually rather enjoyed the conversation and didn't feel it was wasted at all, but I'm sorry you didn't feel the same. I wasn't in it to win, just to explore the idea.

I'm still interested in exploring the idea, but you're welcome to tune out at any point.

> the set of operations that can be done with GOTO is a superset of those that can be done with CONTINUE or BREAK

Yes! And this is actually part of my point. If Y is a subtype of X, then the set of valid operations on Y is a superset of the set of valid operations on X. This is true for any types, by the definition of subtyping.

This means that you're absolutely correct that the set of operations GOTO can perform is a superset of those BREAK can perform, and for this very reason GOTO is a subtype of BREAK.

The reason why I'm focused on the types and not the operations is because the question at hand has been whether BREAK has the same flaws as GOTO. My argument is that this hinges on whether or not BREAK is just a type of GOTO.


Yes, it's all compiled to jumps... The point of the discussion is that things like continue and break are easier to read and reason about because they can't just jump anywhere.


My point was to contest GP's assertion that CONTINUE and BREAK were not equivalent to GOTOs.

I agree that CONTINUE and BREAK are easier to reason about because you can look at them and instantly know what they do without having to look up what label they're jumping to.


My point is that it's meaningless to make the argument that CONTINUE and BREAK can be implemented with GOTO, because every control flow structure can be. That you could use a GOTO to implement them isn't in question, what's in question is if you could do the reverse.

It's a subtyping problem, and you have the is-a relationship backwards: a cat is an animal but not every animal is a cat. GOTO is a BREAK (could always be substituted for one), but a BREAK is not a GOTO.

When you need a BREAK you could implement that in terms of GOTO, but no amount of coercion will allow you to use a BREAK as a generic GOTO.


> GOTO is a BREAK (could always be substituted for one), but a BREAK is not a GOTO.

You wrote this backwards, but you seem to understand the relationship and that GOTO is more general. That is, every BREAK is a GOTO (because you can always substitute a GOTO), but not every GOTO is a BREAK (i.e., you can't substitute a BREAK for some GOTOs because BREAK cannot jump to an arbitrary label).


No, I wrote it in exactly the order I wanted to. Because of the substitution property that you acknowledge, GOTO is a subtype of BREAK. BREAK is not a subtype of GOTO because it cannot always be substituted for GOTO. Thus, "GOTO is a BREAK, but a BREAK is not a GOTO."

A GOTO is just one possible implementation of BREAK, just as a cat is one possible implementation of an animal.

The practical impact of this is that it is incorrect to ascribe to BREAK the same weaknesses as GOTO, because BREAK is not a GOTO.


The moment I received a downvote for a simple opinion/observation about relieving naming overload and the similarities of BREAK and CONTINUE to GOTO, I knew where this was going :)


IF and WHILE are also equivalent to GOTOs in that sense.

The point is that CONTINUE and BREAK jump to exactly one location given their lexical context and cannot jump anywhere else. They are also only meaningful when applied to structured control flow. The problem with GOTO is the unbound nature of its jump target, which leads to control flow that is difficult to comprehend by looking at the lexical structure of a function.


The argument against gotos in Dijkstra's article would apply equally to breaks and continues, and even to early returns.

I don't fully agree with Dijkstra's argument. For example, I think early returns can often improve the readability of the code. But it's worth noting Dijkstra is not primarily concerned with readability but rather with how to analyze the execution of a program.


As far as I could tell, coming in at the end of it, people like Dijkstra were primarily trying to write proofs about programs. That motivated them to ban constructs they didn't know how to analyze. Problem is that some of those things turned out to be trivially tractable, but lots of people never got the memo.


If you read Dijkstra's letter it wasn't about formal proofs. It was about go to statements being very hard to reason about, especially when trying to understand the flow of a program and how you got to a particular point in its execution. The word "proof" doesn't even show up in the letter. It's only a page or so, well worth a read instead of guessing at what he may have been writing about.

http://www.u.arizona.edu/~rubinson/copyright_violations/Go_T...


> CONTINUE and BREAK are quite different from GOTO in that they operate predictably given the current scope

CONTINUE, BREAK, and GOTO all operate predictably because they are deterministic operations. Each continues program execution at the directed explicit (goto) or implicit (continue or break) offset. There is no non-deterministic or unpredictable behavior whatsoever.


"Predictably" may have been the wrong word, because it implies the contrast is with non-determinism. It might be better to say that CONTINUE and BREAK are limited: given a scope, the keyword can take you to exactly one place, while GOTO could be used to take you anywhere, and you have to go hunting for the corresponding label to find that place.


> operate predictably because they are deterministic operations

If I sat you in front of a computer generating numbers using a pseudo random number generator and gave you as context the last number it generated, could you make any prediction about the next number it generates?

Now if it used a prng that was known and standardized to only compute one number could you predict anything about the next number now?


A better rule than "goto considered harmful" is that gotos should only go lower in the function, and should only exit blocks and/or skip over them, never be used to enter them.


Generally true, but I've been known to do `goto again;` for those cases where retrying is a corner case. Sure, you can put the entire code inside a `for(;;)` but if it almost always only runs once, you're not helping the reader understand the code.
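For instance, the canonical EINTR retry (a sketch assuming POSIX):

    #include <errno.h>
    #include <unistd.h>
    /* The common path reads straight through; only the
       signal-interrupt corner case loops back. */
    ssize_t read_retry(int fd, void *buf, size_t len) {
        ssize_t n;
    again:
        n = read(fd, buf, len);
        if (n < 0 && errno == EINTR)
            goto again;
        return n;
    }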


I'm fond of the MISRA C approach, where you have all these bright-line rules (even machine-checkable maxims), but if you have a reason to break one you're just supposed to write up a report on why it's better to do it this way and how you've addressed the risks.

Seems like a reasonable trade for the occasional "goto again".

[Before anyone reads the above as advocating MISRA C -- I think MISRA actually tells you not to do "goto fail;", which is advice I'm kind of dubious about. It also tells you not to do "good = good && side_effecty_thing();" (no short-circuiting operators when there are side effects), so its style has you make a typical function absolutely littered with explicit initialization guards.]


This.

To find an example, I did a search for “Commodore PET Basic programs”.

Here’s a book from 1979 that shows what spaghetti code looks like, in my opinion:

http://www.1000bit.it/support/manuali/commodore/32_BASIC_Pro...

The first program listing is on page 24 of the PDF. Try to follow the logic of the program. Why does line 400 go to 280? What paths can lead to line 400? Who knows! And this is high-quality BASIC by 1979 standards — it’s in a printed book after all.

There’s an auxiliary listing after the program itself explaining the routines and variables used, but many/most programs in those days wouldn’t have this level of rigorous documentation. Deciphering the program would probably have to start by drawing a flowchart of execution paths.


Also worth noting the sort of wild limits BASIC had that are partially responsible for the code being spaghetti, including, effectively, an inability to add new lines between existing statements without resorting to a goto.


What are you talking about?

GW-BASIC had a command (RENUM) to renumber all the lines. And the line numbers skip by 10 exactly for the purpose of inserting lines. Then when you had hit the limit you'd ask the computer to renumber them in steps of 10 again.


So many hours of my childhood. Wasted. Damn you. Why didn't you tell me about this then?


> GW-BASIC had a command (RENUM) to renumber all the lines. And the line numbers skip by 10 exactly for the purpose of inserting lines. Then when you had hit the limit you'd ask the computer to renumber them in steps of 10 again.

GW-BASIC was released roughly 4 years after this book, and there was a huge change over that period:

1976 - Release of Apple I

1977 - Release of Apple II / Commodore PET

1979 - This book

1982 - Commodore 64

1983 - GW-BASIC

This book is pretty much closer to the Apple I than GW-BASIC. Perhaps I should have specifically said developing BASIC in 1979, as referenced in that book (and there isn't just one sort of BASIC - there are so many dialects).


Wow, that takes me back. I learned to program on a PET and would have devoured this book had it been available. As it was, one of our math teachers was tasked with teaching the computer classes but didn't have any programming knowledge beyond input/output, loops, and simple calculations. Books and manuals were hard to come by.


Also learned to program on a PET in BASIC writing code that was much messier than this because I was probably 10 years old.

Might be why I'm good at restructuring horrible to read spaghetti code.


> And this is high-quality BASIC by 1979 standards — it’s in a printed book after all.

I don't think that's true, certainly not for books of that time period. Because the whole field was changing rapidly, writers would often work under tight schedules, and customers would buy just about anything because they only had magazines and books to learn from, and review sites didn't exist.

I also think that’s bad Basic for the time. Certainly, comment lines before subroutines would help.


It looks pretty typical for BASIC of the late 70s to me.

You wouldn't want to waste characters on commenting code: machines of that era would have only a few KB of RAM, as low as 1K. For the same reason you don't want to waste characters on long, meaningful variable names or on well spaced code. Multiple statements per line isn't to save print space, it's to save RAM.

Meanwhile, the program's pretty well structured for such a short bit of BASIC: subroutines start at multiples of 100, for example, and each subroutine starts and ends clearly, no shenanigans like jumping from the middle of one sub to another, no multiple exit points for subs, all as linear as it can be. The use of IF is limited to skipping forward a short way to conditionally execute a line or two only. GOTO only exists in those IF statements.

I'd have been happy to have written code like this, back then.

I am pretty sure that my uncle ran this exact program on his computer and printed out biorhythms on listing paper, in the mid 80s.


I messed with some programs for industrial controllers, written in BASIC in the late 1970s.

There is a simple thing: on a lot of machines, only spaghettified programs would even fit in the memory available. Academic CS researchers with their unlimited accounts on the institution's mainframe didn't have that worry.


I would argue that the average program that got printed in a book was probably quite bad but still of a higher quality than the programs people wrote on their own, simply because the latter were usually written without any education or useful models of working programs.

It’s like an iceberg of bad code: the underwater part nobody saw was astonishingly terrible by modern standards. That code might be running a business, but its author would never get exposed to professional programming. Today Excel often serves a similar purpose. (Excel isn’t spaghetti though since the execution model is completely different.)


> comment lines before subroutines would help.

That code looks normal to me. I have a ton of BASIC books and magazines. You're talking about a time period before full-screen text editors were a thing. It's almost impossible to explain to anyone who didn't have to work with TI-99 BASIC, C64 BASIC, GW-BASIC/BASICA, etc. what it was like. Once you got to QBASIC/QuickBASIC it was done. Life was easy.

A few years before that, and you're printing out pages on a dot matrix printer and going line-by-line to debug. You'll notice a distinct lack of white space between lines and that comments start with "REM" and a line number. You didn't even get labels for lines. The code looks like that because those were technology limits on really rudimentary devices. You were editing code inside the BASIC interpreter, often using some command like "LIST <line#>". It was awful.

We take so much for granted today. Dual monitors. Color. More than 80x25 character display. Multiple screens and multitasking. Just getting to Linux in '95 and having F1/F2/F3/etc. switching terminals was a huge deal.


Wasting RAM on REM’s was just rookie stuff.

I remember chasing bytes with short variable names, abbreviated print statements, reducing spaces as much as possible etc.

I had like 28k to play with and I was 14 years old!


I remember in Commodore basic, finding that a period by itself ('.') was parsed as zero, but was actually slightly faster than using zero, and saved a byte every time it was used. In other words, you could write:

    10 for i = . to 6.28 step 0.1:next
and it would be slightly faster and smaller than

    10 for i = 0 to 6.28 step 0.1:next
Made for some ugly inner loops, but you gotta do what you gotta do. For that matter, we certainly would have removed some of those extra spaces as well. Bytes mattered and whitespace slowed you down.


And we liked it!


> The first program listing is on page 24 of the PDF. Try to follow the logic of the program. Why does line 400 go to 280? What paths can lead to line 400? Who knows!

Without looking at the post-program material, this isn't exactly a difficult question to answer.

Line 400 is preceded by some print statements:

    370    PRINT "PRESS 'E' TO END, SPACE TO CONTINUE"
    380    GET R$:IF R$="" THEN [goto] 380
    390    IF R$="E" THEN [goto] 120
    400    L=0:GOTO 280
So we have a prompt that says "press E to end, space to continue", and then branches one of three ways: if you provide no input, the prompt is shown again; if you provide an E, the entire program restarts from scratch, and if you do anything other than that, the count of lines drawn on screen is reset to 0 and the next 18 lines of the chart are drawn.

We can assume that line 400 will be hit whenever a piece of chart is drawn to the screen.

The program's structure here is a nested loop: there is a loop between lines 280 and 400 (displaying the chart indefinitely, 18 lines at a time) containing another loop between lines 300 and 360 (displaying 18 lines of a chart, one line at a time).

Why is this supposed to be an example of spaghetti code?


Now imagine you have got a typo:

    400  L=0:GOTO 290
This would be almost impossible to debug.


That would correspond to the following C:

    /* 280 */
    do_something();
    for (;;) {
      /* 290 */
      /* display chart in blocks of 18 lines */
      /* 400 */
      L = 0;
    }
when the correct code is this:

    for (;;) {
      /* 280 */
      do_something();
      /* 290 */
      /* display chart in blocks of 18 lines */
      /* 400 */
      L = 0;
    }
The bug is that the call to do_something() precedes the outer loop when it should be inside the loop.

Is that easier to debug in C than it is in the BASIC program? What's the difference?


Pretty sure we had that program on our home computer in the 80s (I was a young kid but I distinctly remember a biorhythms program). What impresses me reading it now is the "y2k" compliance. If the year entered is only two digits, it adds 1900, otherwise it takes the full year.


To give people some kind of an idea of what it was created in response to: imagine writing an entire program in one single main function. The only thing you're allowed to do for flow control is goto. You can do 'goto somelabel;' for an unconditional goto, or you can do 'if (somecondition) goto somelabel;' for a conditional goto. Here are some examples of how it would look if translated to something C-like:

Loops would look like:

    int i = 0;
    loop_start:
    print i;
    i = i + 1;
    if (i < 10) goto loop_start;
Fizzbuzz would look something like:

    int i = 1;
    loop_start:
    if (i % 15 != 0) goto not_fizzbuzz;
    print "fizzbuzz";
    goto done;
    not_fizzbuzz:
    if (i % 3 != 0) goto not_fizz;
    print "fizz";
    goto done;
    not_fizz:
    if (i % 5 != 0) goto not_buzz;
    print "buzz";
    goto done;
    not_buzz:
    print i;
    done:
    i += 1;
    if (i <= 100) goto loop_start;
Often, this wasn't just constrained to a single function; the whole program would be constructed like this, with gotos which jump back and forth across pages and pages of code. Languages wouldn't even have a call stack with subroutines (which is why "procedural" languages -- languages with procedures -- were important enough to be given a special name).

At least that's my understanding of it. I haven't lived through this, and my only experience with this kind of stuff is writing assembly, where we always make use of a call stack, so even that is in practice a procedural language. If I have gotten anything wrong, please correct me.


Could be worse.

https://github.com/Keith-S-Thompson/fizzbuzz-c/blob/master/f...

    #include <stdio.h>
    #include <setjmp.h>
    int main(void) {
        jmp_buf jb[7];
        volatile int j = 0;
        setjmp(jb[0]);
        volatile int i = 1;
        if (j == 0) setjmp(jb[1]);
        if (j == 1 && i > 100) longjmp(jb[6], 0);
        if (j == 1 && i % 15 == 0) longjmp(jb[4], 0);
        if (j == 1 && i % 3 == 0) longjmp(jb[2], 0);
        if (j == 1 && i % 5 == 0) longjmp(jb[3], 0);
        if (j == 1) printf("%d\n", i);
        if (j == 1) longjmp(jb[5], 0);
        if (j == 0) setjmp(jb[2]);
        if (j == 1) puts("Fizz");
        if (j == 1) longjmp(jb[5], 0);
        if (j == 0) setjmp(jb[3]);
        if (j == 1) puts("Buzz");
        if (j == 1) longjmp(jb[5], 0);
        if (j == 0) setjmp(jb[4]);
        if (j == 1) puts("FizzBuzz");
        if (j == 0) setjmp(jb[5]);
        i ++;
        if (j == 1) longjmp(jb[1], 0);
        if (j == 0) setjmp(jb[6]);
        j ++;
        if (j  < 2) longjmp(jb[0], 0);
    }


I am sobbing in pain at that atrocity.


I live to serve.


Procedural or not was more of a spectrum. If you look at early BASICs, for example, they had GOSUB, and it could recurse, so there was a return-address stack. But GOSUB did not have any provisions to pass arguments or return values - it was just a GOTO that remembered where it came from; you had to use globals to pass data around. So if you wanted a data stack (i.e. locals), you had to rig your own with arrays.

OTOH early FORTRAN had procedures with arguments and results, but no recursion.

Structured or not was also not necessarily all-in. FORTRAN and BASIC both had for-loops before they had structured conditionals.


The loop you posted is a do-while loop; the while loop has a somewhat less intuitive translation to GOTO:

  loop_test: IF (NOT loop_condition) GOTO after_loop
    loop_body
    GOTO loop_test
  after_loop: etc...


Tangentially, can somebody point me to what are considered the best fizzbuzz solutions? I'm both an experienced and CS-educated coder, and I know what I would consider to be a good solution, but I have no idea what the rest of "you" are looking for. (My favored solution would be a small number of state machines running in parallel to sieve-of-eratosthenes the correct answers, thus avoiding innumerable divisions, but maybe that's just me and I'm old fashioned?)
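To make that concrete, something like this, with two countdown counters as the "state machines" and no division anywhere (a sketch):

    #include <stdio.h>
    int main(void) {
        int c3 = 3, c5 = 5;    /* the two countdown "machines" */
        for (int i = 1; i <= 100; i++) {
            c3--; c5--;
            if (c3 == 0 && c5 == 0) puts("FizzBuzz");
            else if (c3 == 0)       puts("Fizz");
            else if (c5 == 0)       puts("Buzz");
            else                    printf("%d\n", i);
            if (c3 == 0) c3 = 3;   /* rearm whichever fired */
            if (c5 == 0) c5 = 5;
        }
        return 0;
    }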


I don't claim to have the best, or even good, fizzbuzz implementations, but if you're looking for a lot of fizzbuzz implementations:

https://github.com/Keith-S-Thompson/fizzbuzz-c

https://github.com/Keith-S-Thompson/fizzbuzz-polyglot


Surely your compiler won't actually emit an integer modulus if you tell it to mod by a constant?


Not too far from assembly ...


Almost, except in assembly, we always have a call stack. So even assembly is a procedural language, even though it doesn't otherwise have structured control flow.


I used to work at Microsoft on the Windows team. There it was very common to have a "goto cleanup" for all early exits. It was clean and readable. OTOH, I once was assigned to investigate an assertion error in the bowels of IE layout code. It was hundreds of stack frames deep in a recursive function that was over a thousand lines long and had multiple gotos that went forwards or backwards. That was an absolute mess and would have been impossible to debug without a recording ("time travel") debugger.


Any API that returns error codes has that issue; if it just returns an int you can't just set a breakpoint on the "allocate an error" method.


> but true spaghetti code that GOTOs all over the place and is nigh-impossible to follow has been extremely rare for a long time.

Exactly! Recently I had the "pleasure" to work with some FORTRAN IV code from the early 60s, so I know what you mean. No functions/subroutines, only GOTOs. Even loops were done with labels. There is also a weird feature called "arithmetic IF statements" (https://en.wikipedia.org/wiki/Arithmetic_IF). Luckily the code was pretty short (about 500 lines including comments).
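For those who haven't met it: IF (X) 10, 20, 30 branches to label 10, 20, or 30 according to whether X is negative, zero, or positive. Rendered in C-with-gotos for familiarity (labels mine):

    #include <stdio.h>
    int main(void) {
        int x = -4;
        /* the three-way branch of FORTRAN's  IF (X) 10, 20, 30 */
        if (x < 0) goto l10;
        if (x == 0) goto l20;
        goto l30;
    l10: puts("negative"); goto done;
    l20: puts("zero");     goto done;
    l30: puts("positive");
    done:
        return 0;
    }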


Hmm, it is time to invoke Cunningham's Law, I think:

There’s no good way to do a do…while loop in Fortran, other than a goto.
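In C syntax for familiarity, the shape I mean is body first, then a conditional branch back (labels mine):

    #include <stdio.h>
    int main(void) {
        int tries = 0;
    body:                            /* the post-test loop, by hand */
        tries++;
        printf("attempt %d\n", tries);
        if (tries < 3) goto body;    /* body always runs at least once */
        return 0;
    }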


Early Fortran already had loops, but with labels:

         INTEGER A(4,4), C, R
         ...
         DO 10 WHILE ( C .NE. R )
                A(C,R) = A(C,R) + 1
                C = C + 1
  10     CONTINUE

Note that Fortran has evolved significantly over time. This is how you would write the same in Fortran 77:

       INTEGER A(4,4), C, R 
       ... 
       C = 4 
       R = 1 
       DO WHILE ( C .GT. R ) 
              A(C,R) = 1 
              C = C - 1 
       END DO


Those are just regular loops, though, right? I'm looking for the post-test loop, sometimes known as the do…while or until loop. There seems to be an unfortunate inconsistency in the naming of this thing.


Ah, right, I misread your post. DO WHILE is indeed a "normal" while loop. There doesn't seem to be an equivalent to "do { ... } while", as found in most C-style languages.

> There seems to be a unfortunate inconsistency in the naming of this thing.

Well, Fortran is older than C, so you cannot really blame them :-)


The funny thing is, I mostly program in Fortran (thus the interest in this construct). It is nice for expressing “this iterative method must be run at least once.” Unfortunately at some point I absorbed the name that comes from the C-ism, haha.


A less ambiguous name for those is repeat/until, as seen in Pascal and its descendants - I don't recall any language that uses that syntax for anything other than a postcondition loop.


May I ask what field you're working in? Physics?


I recall having to sort out spaghetti Fortran back in the '80s; numbers as labels (a la basic but with free-form numbering), computed gotos back and forth in the code, stuff like that. Learnt a lot from fixing that mess.

Forty years later, I'll still use a C goto if the situation warrants (e.g. as a get-out from a deep but simple if). Maybe because, having long been an assembler programmer as well, gotos are part of the landscape (if/else is effectively a conditional and unconditional branch/jump).


> The thing is that many people today have never encountered the sort of spaghetti code that Dijkstra was talking about in 1968.

I would say the js/python callback-based frameworks of today, like Twisted (also c++/rust futures), are exactly that.


BASIC?

GOSUB was so much worse than GOTO in that it had a stack for the current line but no stack for variables, so you could not write recursive functions. I think Fibonacci as a recursive function is malpractice, but boy was it a hassle to write Quicksort in BASIC, although I had no trouble coding up an FFT (1950s FORTRAN style) from first principles in BASIC on a TRS-80 Model 100 on a bus ride across Vermont.
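Concretely, the workaround looks something like this in C terms, carrying your own stack of (lo, hi) ranges instead of recursing (a sketch):

    #include <stdio.h>
    /* Iterative quicksort with a hand-rolled range stack, the shape
       BASIC forced on you since GOSUB stacked only return addresses. */
    void quicksort(int *a, int n) {
        int lo_stk[64], hi_stk[64], sp = 0;
        lo_stk[sp] = 0; hi_stk[sp] = n - 1; sp++;
        while (sp > 0) {
            sp--;
            int lo = lo_stk[sp], hi = hi_stk[sp];
            if (lo >= hi) continue;
            int pivot = a[hi], i = lo;    /* Lomuto partition */
            for (int j = lo; j < hi; j++) {
                if (a[j] < pivot) {
                    int t = a[i]; a[i] = a[j]; a[j] = t;
                    i++;
                }
            }
            int t = a[i]; a[i] = a[hi]; a[hi] = t;
            lo_stk[sp] = lo;    hi_stk[sp] = i - 1; sp++;    /* "GOSUB" left */
            lo_stk[sp] = i + 1; hi_stk[sp] = hi;    sp++;    /* "GOSUB" right */
        }
    }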

Funny though I did come to a conclusion that for the Arduino programs I wrote I didn’t need a stack at all.


Yeah I saw a codebase written in Fortran 77 (the program was written in 1988 or 1989 I believe) and geez... it is almost impossible to figure out what is going on. Programming has changed a lot.


> I'm not that old, but I was unlucky enough to have programmed in an unstructured language where GOTO was the only way to use faux-subroutines in my teens.

Was that a Casio calculator? Because it was like that for me, only GOTOs existed. Learning about C and seeing these things called loops was a revelation because I had reinvented them with GOTOs already in my programming.


I've seen full spaghetti in recently written C driver code for an IC. The device contained a 24-bit processor for which no compiler existed, and its one-man dev team was necessarily doing all of the firmware in assembly. He basically wrote all of the C with assembly-style control flow, goto-ing all over the place with zero high-level control statements.


I read the paper when it first came out. At the time I was programming in Fortran, which had the three-branch IF statements. As you pointed out, that was a special kind of hell. The paper rang very true. However, we did all take it to the extreme and go for zero gotos with quite a fervor.


Do you know of a good example you could link?


I dug around a little and found an example [1] on a reddit thread looking for examples of spaghetti code. Most of the examples on the thread were just badly written code. Irreducible spaghetti code tends to be complex state machines that cannot be rendered well in a flat format. People like to flatten those out with a trampoline pattern[2], but that can hinder performance.
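For reference, a minimal trampoline in C (names invented): each state returns the next state as data, and one flat dispatch loop does all the "jumping".

    #include <stdio.h>
    struct thunk { struct thunk (*fn)(void); };
    struct thunk work_state(void) {
        puts("work");
        struct thunk next = { 0 };             /* 0 = halt */
        return next;
    }
    struct thunk start_state(void) {
        puts("start");
        struct thunk next = { work_state };    /* "jump" to work next */
        return next;
    }
    int main(void) {
        struct thunk t = { start_state };
        while (t.fn != 0)
            t = t.fn();    /* the trampoline bounce */
        return 0;
    }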

Malicious spaghetti involves transformations such as

   for (x = 0; x < 10; x++) {
       for (y = 0; y < 20; y++) {
           printf("%d %d\n", x, y);
       }
   }

   |
   | DRY
   v

   x = 0;
   loop_x_head:
   condition_val = x;
   condition_stop = 10;
   condition_var = 'x';
   goto check_condition;
   loop_x_body:
   y = 0;
   loop_y_head:
   condition_val = y;
   condition_stop = 20;
   condition_var = 'y';
   goto check_condition;
   loop_y_body:
   printf("%d %d\n", x, y);
   increment_val = y;
   condition_stop = 20;
   increment_var = 'y';
   goto increment;
   loop_y_end:
   increment_val = x;
   condition_stop = 10;
   increment_var = 'x';
   goto increment;
   loop_x_end:
   halt;
   increment:
   increment_val++;
   if (increment_var == 'x')
       x = increment_val;
   if (increment_var == 'y')
       y = increment_val;
   condition_val = increment_val;
   condition_var = increment_var;
   check_condition:
   if (condition_val < condition_stop)
       goto pass_condition;
   if (condition_var == 'x')
       goto loop_x_end;
   if (condition_var == 'y')
       goto loop_y_end;
   pass_condition:
   if (condition_var == 'x')
       goto loop_x_body;
   if (condition_var == 'y')
       goto loop_y_body;

[1] http://wigfield.org/RND_HAR.BAS

[2] https://en.wikipedia.org/wiki/Trampoline_(computing)


Something like this is the best I could find: https://craftofcoding.files.wordpress.com/2018/01/fortran_go... – the last page has a full listing with arrows to show the jumps.

I grew up with MSX-BASIC: https://github.com/plattysoft/Modern-MSX-BASIC-Game-Dev/blob... – GOSUB jumps to a specific line number (RETURN returns from where it jumped). Even a fairly simple and clean example like this can be rather difficult to follow.


The first part of Guy Steele's talk on Fortress: https://www.infoq.com/presentations/Thinking-Parallel-Progra...


I kind of have: instead of GOTOs, we have RPC calls in the flavour of the day, which, just like GOTOs, only make sense after doing a full diagram of call sequences.

Just like 8 bit BASIC spaghetti code, only refined.


The last time I encountered that kind of code was when I wrote it myself in VB6 as a kid. I’ve never seen it in my decade and a half work life.


You're lucky. If you want to feel comfortable on a plane, don't work on avionics systems written in the 1970s-1980s (and probably a lot from the 1990s). Some horrifically bad code running some planes.


BASIC. The only thing worse than gotos is line numbers.



