> In 2014, Navy officials discovered a flaw in the IBNS. One component could not keep track of more than 150 ships at a time without malfunctioning, according to Navy investigators. The Navy’s solution? Sailors were told to delete tracked ships before the total hit the magic number.
This blows my mind. Can anyone here guess what's going on under the hood? Is this a magic number that a developer came up with during testing to avoid running out of memory/swapping?
I doubt it was a magic number; more likely it was an emergent property of how many targets the system could handle before performance degraded. When they saw problems in testing, a workaround was devised, and the 150 figure probably came from just counting the targets when things started to go haywire. Most likely a bug report was put in the queue, and it never got prioritized because new complaints stopped coming in.
Just a guess, but the system was probably designed using high-reliability / real-time principles. That might involve, e.g., allocating all memory only at startup, which would explain the hard cap: when you program this way, the limit has to be some fixed number.
In a real-time safety-critical OS, you absolutely do allocate memory only at startup, but this kind of OS usually isn't used for any kind of GUI interface, it's used for much simpler systems. Another trait of these systems is that the CPU caches are all disabled, because those prevent determinism. But again, this isn't something you'd do on a GUI system; it's something you'd do on an ABS brake controller or an engine ECU, where the amount of memory ever needed is easy to determine in the design.
This is also done in the signal processors that power these systems. I don't know this system in particular, but the USN has been moving toward commodity computing hardware for them. Essentially, a rack of commercial servers (usually IBM) running stripped-down RHEL with a real-time kernel. The software for these systems is firm real-time, and generally uses the same principles as other real-time software (including, but not limited to, up-front allocation).