That seems like programming 101 for these systems.

In the past, I've worked around this by validating a configuration file before attempting to run it. You bail out safely during validation, but still allow a hard error at run time.

Doesn't prevent all misconfigured files, but it catches cases like this one.
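
A minimal sketch of that validate-then-run idea in C. The config format here (key=value lines with a required "channels" key) is invented purely for illustration; the point is that validation rejects bad input gracefully, while the run path is still allowed to hard-error:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Returns 0 if the file looks sane, -1 otherwise. Never crashes. */
    static int validate_config(const char *path) {
        FILE *f = fopen(path, "r");
        if (!f) return -1;                    /* missing file: bail safely */

        char line[256];
        int saw_channels = 0;
        while (fgets(line, sizeof line, f)) {
            if (!strchr(line, '=')) {         /* malformed line */
                fclose(f);
                return -1;
            }
            if (strncmp(line, "channels=", 9) == 0)
                saw_channels = 1;
        }
        fclose(f);
        return saw_channels ? 0 : -1;         /* required key present? */
    }

    int main(int argc, char **argv) {
        if (argc < 2 || validate_config(argv[1]) != 0) {
            fprintf(stderr, "config rejected; keeping previous config\n");
            return EXIT_FAILURE;              /* safe bail-out, no crash */
        }
        /* ...load and run; a hard error past this point is acceptable... */
        return EXIT_SUCCESS;
    }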



I think it was in the early 90s when I first saw something do A/B-style loading, where it would record the attempt to load something, recognize that it hadn't finished, and use the last known good config instead. Anyone studying high-availability systems has a wealth of prior art to learn from.
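
A rough sketch of that last-known-good mechanism: leave a marker when a load attempt starts, and if the marker is still present on the next start, the previous attempt never finished, so you fall back. The file names here are invented for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define PENDING_MARKER "config.pending"
    #define GOOD_CONFIG    "config.good"
    #define NEW_CONFIG     "config.new"

    /* If the marker survived, the last attempt crashed mid-load. */
    static const char *choose_config(void) {
        return access(PENDING_MARKER, F_OK) == 0 ? GOOD_CONFIG : NEW_CONFIG;
    }

    int main(void) {
        const char *path = choose_config();

        FILE *m = fopen(PENDING_MARKER, "w"); /* record the attempt */
        if (m) fclose(m);

        printf("loading %s\n", path);
        /* ...apply the config; a crash here leaves the marker behind... */

        remove(PENDING_MARKER);               /* attempt finished cleanly */
        if (strcmp(path, NEW_CONFIG) == 0)
            rename(NEW_CONFIG, GOOD_CONFIG);  /* promote to known-good */
        return 0;
    }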


I think all programmers should have the experience of using and developing on a single-address-space OS with no memory protection, like DOS, just to push them to write actually correct code. When the smallest bug can crash your system and lose your work, you tend to think a lot more carefully about what your code does instead of just running it to see what happens.


Suggesting people "be more careful" never solves these issues, because eventually someone, somewhere, will have a momentary slip-up that causes exactly this.

The real takeaway is that we need to design systems so this kind of issue is harder to cause: put less code in the kernel, use tools that prevent these classes of bugs, and build computers that can roll back to a known-good state if they crash.


Perfect example of where instrumentation-guided fuzzing like AFL would almost certainly have found a problem.
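
For instance, a minimal AFL harness might look like the sketch below. The stand-in parser is invented; in practice it would be the real config loader under test:

    #include <stdio.h>
    #include <stdlib.h>

    /* Trivial stand-in for the real config parser under test. */
    static int parse_config(const unsigned char *buf, size_t len) {
        size_t i;
        for (i = 0; i < len; i++)
            if (buf[i] == '=') return 0;      /* looks "valid" */
        return 1;                             /* looks "malformed" */
    }

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;

        static unsigned char buf[1 << 16];
        size_t len = fread(buf, 1, sizeof buf, f);
        fclose(f);

        /* afl-fuzz mutates the input file and watches for crashes or
           hangs; any input that breaks the parser is saved as a case. */
        return parse_config(buf, len);
    }

Compile with afl-cc (or afl-gcc) for instrumentation, then run something like: afl-fuzz -i seeds -o findings -- ./harness @@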

I agree with the amateur-hour observation. But then, most things seem to be.



