Rules for Developing Safety-Critical Code (2006) [pdf] (pixelscommander.com)
143 points by trymas on Oct 18, 2019 | 44 comments



Mostly good rules for C development in these situations, but see John Carmack for a disagreement about rule 6.

http://number-none.com/blow/john_carmack_on_inlined_code.htm...


Interesting to note that the description of how the flags and main loop work in practice makes the resulting program close to ladder logic,[0] which is still used to program PLCs. Having used it, I found the global state annoying at first, but it does make the program easy to reason about.

The ladder's inputs and outputs almost become a set of invariants like you would find in a functional program, but the monad for state is implicit.
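For illustration, here's a rough C sketch of that scan-cycle shape; all names and the start/stop "rung" are invented for this example, not taken from any real PLC runtime:

    #include <stdbool.h>

    /* Global state, like a PLC's input/output image tables. */
    static bool start_pressed, stop_pressed;   /* inputs */
    static bool motor_on;                      /* output */

    extern void read_inputs(void);    /* latch physical inputs  */
    extern void write_outputs(void);  /* drive physical outputs */

    void scan_cycle(void)
    {
        for (;;) {
            read_inputs();
            /* Each "rung" computes an output purely from the flags;
               this one is the classic start/stop seal-in circuit. */
            motor_on = (motor_on || start_pressed) && !stop_pressed;
            write_outputs();
        }
    }

Because inputs are latched once per scan, every rung sees a consistent snapshot of the global state, which is what makes it tolerable.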

[0] https://en.wikipedia.org/wiki/Ladder_logic


Rule #6 increases readability, e.g.:

    int foo;
    // a page of code
    foo = bar + 5;
    // more code
vs

    // a page of code
    int foo = bar + 5;
    // more code
This keeps everything in one place. It's more readable and easier to follow.

I feel like the topic of inlined code is similar but is not quite the same.


I'll add another rule: don't ship behavior you can't test. This includes the behavior of a stateful system after an extended period of uptime.

I once spent weekends and late nights fixing a bug which, if it had shipped, would have eventually bricked an embedded product. It caused the software to hang in a way which evaded every watchdog and reset timer. This bug was only barely caught by the edge of a multi-week stress test. As it turned out, the bug itself wasn't in any of the code I had written, but in an open-source dependency.

There are several ways to approach this problem. Besides simply testing your product over an extended period of time, you can build a test harness which simulates time at an accelerated rate. Another method is to periodically reboot your system from scratch after the longest period of uptime you're able or willing to test. It also helps to use software and hardware watchdogs which automatically reset your system, in whole or in part, if it becomes unresponsive (fails to "feed" the watchdog).

However, watchdogs by themselves aren't foolproof - in my case, the program's main I/O loop kept running, but a critical part of the stack no longer functioned correctly.
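To sketch the gated-feed idea (wdt_feed() and the health checks here are hypothetical names, not any particular vendor's API): feed the hardware watchdog only when every critical subsystem reports progress, so a hung stack still triggers a reset even while the main loop keeps spinning.

    #include <stdbool.h>

    extern void wdt_feed(void);       /* hypothetical: restart the watchdog countdown */
    extern bool io_loop_alive(void);  /* illustrative per-subsystem liveness checks   */
    extern bool net_stack_alive(void);

    void main_loop_iteration(void)
    {
        /* ... one iteration of real work ... */

        /* Gate the feed on application-level health,
           not merely on reaching this line of code. */
        if (io_loop_alive() && net_stack_alive())
            wdt_feed();
    }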



> No function should be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration. Typically, this means no more than about 60 lines of code per function.

Maybe it's because C is a lower-level language, but these limits seem much looser to me than what's discussed for higher-level languages, specifically, in my case, Ruby.

Take Sandi Metz's rules for practical object-oriented [quality] code.

https://thoughtbot.com/blog/sandi-metz-rules-for-developers

1. Classes can be no longer than one hundred lines of code.

2. Methods (functions) can be no longer than five lines of code.

It's a long way from a 5-line constraint to a 60-line constraint.

I agree with Sandi that smaller 'single-purpose' functions lead to less coupled, easier-to-maintain code.

Is the drop from 60 lines to 5 just a progression of thought, with 60 lines being what used to be acceptable, and possibly still acceptable for 'older languages'?

Is one better than the other? How to explain the difference?


A 5-line constraint seems ridiculous to me, even in Ruby, and I imagine that just about every codebase ever breaks it. Do you have an example project following such a style?


My general rule is that a function should help keep things DRY and/or easier to test. Functions should be _reused_.

If your code does a 30-line operation exactly once, it can be inlined. If you want to move it into a function to validate error handling, that is great, and now your unit tests help validate code paths.

The fact that it is 30 or 100 lines is immaterial. Making "micro functions" just to keep each one small makes it harder to follow the stack. Readability is key, and micro functions hurt readability.


DRY is overrated, and lesser engineers can make a hash of your entire codebase in the name of DRY (and don’t get me started on DRY tests. Fuck me.)

In fact the Rule of Three all but says that it’s okay to repeat yourself once, twice is not good, and three is right out. That’s substantially more repetition than DRY prescribes.

Instead, and I admit this sounds a little vague, the code should say what it does. The bigger the thing it’s trying to do, the harder it is to state that clearly, or to verify, so you break it down into separate concepts and string them together. And then you notice how awkward it is to state the same thing many times so you are highly motivated to reuse those statements, refine them, and make them as accurate as possible.

Inlining frequently fails this test because you lose the boundaries of the “thing” that is happening, and people start slipping non sequiturs in, start rambling, which makes it very hard to follow their reasoning.

If you can't follow their reasoning, you can't defend it. You won't defend it. And pretty soon it doesn't say what it used to, and that's where you get regressions. So if you care about that, you want to write your code so that they understand, in which case obscuring your intentions becomes pretty indefensible.


Totally agree


In Ruby, Rails and many projects in the Rails ecosystem aspire to it. The result is (to me) poor code locality and unreadably deep call stacks.


I can't imagine ever choosing RoR for a safety critical system.


What is ridiculous is to have blanket rules & constraints instead of guidelines that can be followed or not depending on common sense and the actual case.

It is idiotic to treat, say, the implementation of some kind of complex 3D rendering algorithm in exactly the same way as you would treat a UI view for a login form.


Nearly all of the front-end developers I've suffered the misfortune of working with over the years have been aghast at the notion that their UI code might be responsible for anything less than the fate of all mankind. Ironically, this has done nothing to raise the shockingly low standard of web development.


A static analysis report of published codebases regarding metrics like lines per function would be interesting to see.

Guidelines should come with a rationale, so that at the very least they can be justifiably set aside when following them would contravene their intent.

If I can't understand and appreciate the rationality behind a guideline and I'm not required by coding standards or tooling to follow it, I'm not going to.

Here's a guideline: don't surrender your common sense and let someone's generalized ideology dictate the design of your program.

While I see advantages to minimizing the length of functions, I can't imagine a well written program following this five line rule.

My concerns are that it increases the length of the source code which damages readability, that it scatters functionality and obscures control flow, and that it unnecessarily requires the formation of many interfaces.

The rule is far too general in its application ("all functions") and at the same time too specific ("5") to be useful.

If you said instead, "in general, try to make your functions do one thing well, and divide and conquer the problem until each function is not very hard for you or someone else to understand," then OK.


Garrett Smith advocates for tiny functions; here are his blog posts about Python[1] and Erlang[2].

Admittedly Erlang's powerful pattern matching makes it easier, but it definitely can be applied to multiple languages. The biggest problems I've found trying to apply it are the lack of pattern matching in most languages, and the problem of naming.

1: http://www.gar1t.com/blog/more-embarrassingly-obvious-proble...

2: http://www.gar1t.com/blog/solving-embarrassingly-obvious-pro...


It's horrible. The next sentence tells why. It's horrible because of something. That something is noise. The noise is because of indirections. No one talks like that.


Some functions are complicated, some are simple, some are very simple. C makes you write out more of the very simple ones by hand.

For example, imagine a very simple bytecode interpreter: in C, you need a function to dispatch each of your bytecodes to the correct implementation function with the right arguments. In Common Lisp, this is a prime target for a macro that turns a configuration file into actual code using fill-in-the-blanks boilerplate iterated a few times. In C, you have to write the switch statement by hand, which means writing a function whose length grows with the number of individual bytecodes your interpreter knows about. Exceeding sixty lines is entirely possible, assuming formatting directives are adhered to such that you can't bunch statements up on a single line. However, that switchyard function is the simplest way to write that code; breaking it up would only make things needlessly complicated and harder to check.
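To make that concrete, here's a minimal sketch of such a switchyard with a made-up three-opcode instruction set; a real interpreter has one case per bytecode, which is exactly where the line count comes from:

    enum opcode { OP_PUSH, OP_ADD, OP_HALT };

    void run(const unsigned char *code, long *stack)
    {
        const unsigned char *pc = code;
        long *sp = stack;   /* sp points one past the top of the stack */
        for (;;) {
            switch (*pc++) {
            case OP_PUSH:
                *sp++ = *pc++;      /* one operand byte follows the opcode */
                break;
            case OP_ADD:
                --sp;
                sp[-1] += sp[0];    /* pop two, push the sum */
                break;
            case OP_HALT:
            default:
                return;
            }
        }
    }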


That's my opinion as well. A function should have one job and do that one job. Not two jobs, not five jobs, and, usually worse, not 3/16ths of a job because of some line-limit rule.

The line-limit rule sounds to me like a CS professor's homework-assignment instruction that escaped into the workplace.


In C, you could do that with an array of pointers to functions, and just index into the array. (The functions would need to know their own number of arguments, though, and consume them from the byte stream. But I think I prefer that to the dispatcher knowing the number of arguments that each function consumes.)
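A sketch of that table-driven shape, reusing the made-up opcodes from the sketch above (handler names and the signature are illustrative): each handler consumes its own operands from the byte stream and returns the new program counter, so the dispatcher stays tiny.

    typedef const unsigned char *(*op_fn)(const unsigned char *pc, long **sp);

    static const unsigned char *op_push(const unsigned char *pc, long **sp)
    {
        *(*sp)++ = *pc++;        /* consume one operand byte */
        return pc;
    }

    static const unsigned char *op_add(const unsigned char *pc, long **sp)
    {
        --*sp;
        (*sp)[-1] += (*sp)[0];   /* pop two, push the sum */
        return pc;
    }

    static const op_fn handlers[] = { op_push, op_add };

    void run_table(const unsigned char *code, long *stack)
    {
        const unsigned char *pc = code;
        long *sp = stack;
        while (*pc != OP_HALT) {
            unsigned char op = *pc++;    /* index into the table...         */
            pc = handlers[op](pc, &sp);  /* ...and let the handler eat args */
        }
    }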


That would defeat some static analysis, so I can see it being disallowed by a careful style guide.


Well - in C, you can use macros too, as in Lisp. Classically, you'd use X-macros, which require a separate file to hold the list of bytecodes; in the modern style you can use Boost.Preprocessor to operate on preprocessor data structures.
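A minimal sketch of the X-macro version (the file name and handler names are illustrative, and op_fn is the handler type from the sketches above): the opcode list lives once in its own file, and each inclusion expands it into a different piece of boilerplate.

    /* opcodes.def - the single source of truth, one entry per bytecode */
    X(OP_PUSH, op_push)
    X(OP_ADD,  op_add)
    X(OP_HALT, op_halt)

    /* interpreter.c - expand the same list twice */
    enum opcode {
    #define X(op, fn) op,
    #include "opcodes.def"
    #undef X
    };

    static const op_fn handlers[] = {
    #define X(op, fn) fn,
    #include "opcodes.def"
    #undef X
    };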


You can use Boost in C? Is this something that's commonly done? I thought Boost was for C++


Boost.Preprocessor is the only part usable from C, though I wouldn't generally recommend using it.


And exactly such a function exists in CPython, the main implementation of Python :) It's about 2000 lines of code, as far as I remember.


In the Joint Strike Fighter (i.e., F-35) C++ Coding Standards, the limit is 200 lines. But there is also a limit on the cyclomatic complexity number (20 or less), so you can have either 200 simple lines or a shorter function that is more complex.

You want something that allows you to implement the smallest unit of complexity without having to artificially break it down. If you implement an FFT with just 5-line functions, the result is probably going to be messier than if you did it with less restrictive requirements.
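For anyone unfamiliar with the metric: cyclomatic complexity is roughly the number of independent paths through a function, i.e. one plus the number of decision points. A hypothetical example:

    /* Three decision points (the ifs), so the cyclomatic complexity
       is 3 + 1 = 4. A JSF-style limit caps this number at 20,
       independently of the 200-line limit. */
    int classify(int x)
    {
        if (x < 0)  return -1;
        if (x == 0) return 0;
        if (x < 10) return 1;
        return 2;
    }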


More messy and with terrible performance.


Not necessarily, if your compiler is smart about inlining. Reading stack traces is an entirely different story, however.


Code made of 5-line functions is more flexible; it can be more easily made to do things its original author did not anticipate. Code made of 50-line functions is more verifiable; its author can more easily anticipate which things it can do.

Flexibility increases when there are more things the code can do. Verifiability increases when there are more things the code can't do.

Of course most code could be simultaneously improved in flexibility and in verifiability, but ultimately there's a tradeoff.


I do think you got the correlation correct, but the causality inverted.

Abstract problems have the complexity removed from them, which leads to smaller code chunks (be they functions, classes, modules, declarations, whatever). Those are also more reusable and flexible, because they are focused on general features.

Concrete problems cannot have the complexity removed from them. That leads to large chunks of code and an implementation that cannot be used for different things. That is the code that is expected to specialize those abstract chunks from above into something that solves a real problem.

All the people naively claiming that they've "seen short code, and it is so much simpler to understand; why can't everybody just write short code?" are completely missing the point.


You're describing indirection, not abstraction. Indirection is bug-prone, improves flexibility, and produces smaller functions. Sometimes it can be a means of getting abstraction, but it is not the only way.

As an example, DGEMM is a very abstract function, applicable to rotation in 3-D space, Markov-chain occupancy computations, finite-state-machine simulation, ANNs, and numerous other applications. But it doesn't use indirection. And it's not a particularly short function. (The same is true of most of the rest of BLAS and LAPACK.)


ime/o, aggressively short functions are painful to debug, understand, and maintain. When there's a bug, you need to walk down the call stack to find the offending code and fix it. The files wind up being much larger and harder to navigate, and you wind up needing tooling to figure out what is broken and why, as opposed to seeing it on inspection.

imo, splitting architecture into functions is more about reusability than some arbitrary metric of the "one thing" a function is supposed to do, since you can make that "one thing" as granular as you want, which leads to a lot of terrible code.


This is from JPL.

For cars, there is an ISO standard for software, and its rules are likely a lot stricter than these. I wonder: will there ever be an ISO standard for space travel?


The FAA/NASA have guidelines for getting spacecraft "human-rated":

https://www.huffpost.com/entry/what-are-the-safety-regulatio...

The software design aspects of these rules have probably evolved in conjunction with DO-178, the industry standard for designing safe computing systems for avionics.


You're thinking of MISRA C:

https://en.m.wikipedia.org/wiki/MISRA_C


No. While MISRA C was created for cars, the ISO standard is ISO 26262. MISRA C is not required by ISO 26262, but it does satisfy some of its requirements, and so most C developers will use it.


ISO 26262 does not cite MISRA, but basically everything it asks for is contained in MISRA.


> ISO 26262 does not cite MISRA

It doesn't require MISRA, but it does actually mention it as an example.



Could you share which standard you mean? MISRA, 26262? Are there others? Sharing with the company :)


26262.


The most important rule is not there: every zombie should be thoroughly justified. Why can't a unit test be written to kill it?


Could you clarify what you mean here?


The test coverage should be so good that mutation testing finds no surviving mutants ("zombies").
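A tiny illustration (the function and tests are hypothetical): a mutation tool makes one small change to the code and re-runs the tests; a mutant that no test fails on survives as a "zombie".

    #include <assert.h>

    /* Function under test. */
    int add(int a, int b) { return a + b; }

    int main(void)
    {
        /* Suppose the tool mutates `a + b` into `a - b`.
           add(2, 0) == 2 holds for the mutant too, so on its own
           this assertion leaves a surviving mutant (a zombie). */
        assert(add(2, 0) == 2);

        /* This one kills the mutant: 2 - 3 != 5. */
        assert(add(2, 3) == 5);
        return 0;
    }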



