Reverse-engineering a three-axis attitude indicator from the F-4 fighter plane (righto.com)
212 points by zdw 87 days ago | 65 comments



Thanks for including ridiculously high res images.

And it amazes me how many analog tricks they used. Nowadays it would be a couple of lines of code.


The 1950s were a time in computing when it wasn't a given that digital computing was clearly "better". We still hadn't developed methods of mass-producing reliable, fast, and cheap microelectronics and controllers. So for high-reliability applications, analog computing was THE solution.

In 1954, Rex Rice wrote this piece about preferring a simple plugboard as the means of programming a computer, versus any sort of abstraction with a programming language (https://dl.acm.org/doi/10.1145/1455270.1455272). So it was still very much up for debate, whether high-level programming languages were even the right solution for the problems being faced.

But I agree with you, our forefathers were simply geniuses to have figured out how to manipulate the physical world to produce mathematical computations. Early in his career, my dad had to disassemble and reverse-engineer some Soviet-made aerospace devices, and he still fondly recalls how superbly engineered and precise the Soviet devices were. I wish there was more information out there about Soviet computing, but the winners do write history after all.


My understanding is that the surprising factor that settled the analog vs. digital computer debate was that digital computers require far less precise electronics. The voltage in an analog computer has to be exactly 5.2648 volts; every resistor, capacitor, and transistor has to be a high-precision part, and the more complex the computer, the higher the precision required. In a digital computer, "close to 5 volts" is good enough. This makes the components cheaper, more reliable, and smaller, and so the digital computer won.
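
A toy way to see the difference: push a value through twenty stages, each with a 1% gain error. The analog value accumulates the error, while the digital one is regenerated to a clean level at every stage. (Illustrative C++ only, not a circuit simulation.)

    #include <cstdio>

    // Toy model: analog stages compound gain error; digital stages
    // regenerate the value by thresholding, so per-stage error resets.
    int main() {
        double analog = 1.0, digital = 1.0;
        for (int stage = 0; stage < 20; ++stage) {
            analog *= 1.01;                                // 1% error accumulates
            digital = (digital * 1.01 > 0.5) ? 1.0 : 0.0;  // snap back to a clean "1"
        }
        std::printf("analog drifted to %.2f, digital still %.0f\n", analog, digital);
    }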

This is why I have my doubts about existing designs for quantum computers: the quantum algorithms we are trying to implement in hardware require an analog design, and we rejected analog computers before mainly because they were unable to scale. This inability to scale is the same problem we see with quantum computers.


Not only is "close to 5V" "good enough", it's so "good enough" that we consider a CPU running a program practically idealized. When programming a computer, you largely don't think about hardware failures, because they're so vanishingly rare.

This is a huge departure from the physical world, and it speaks to what a massive benefit computers bring.


The best part of a computer is its utter disconnection from reality.


I wonder if EMP resistance was also considered, and if analogue computing could have offered better resistance compared to the digital devices of the time.

Impossible to know, though the use of nuclear weapons by an enemy was probably on everyone's mind, and diversity of redundant systems may have been applied in case some hypothetical enemy air-defense capability emerged during the lifespan of the aircraft.


Probably not a factor. There is a backup attitude indicator on the bottom right side of the panel. So a more modern version would have gone digital for the PFD (more options for extra indications) even with EMP risks considered and just kept the analog backup instrument.


It's interesting that you note the unreliability. I always assumed tubes were unreliable, but thought anything solid-state (even those card-based systems) would be "reliable enough" to start taking for granted.

But then you look at it and think: yeah, obviously they're not going to have MTBFs in the millions of hours. It's going to be hundreds of hours - once a week, or maybe every few weeks, between real hard crashes.

How would that change your behaviour?


I've wanted to add such an indicator to my car's dash (I already added a boat compass, which I find quite useful and aesthetic). Unfortunately, electronic indicators of any kind are much rarer than vacuum-powered ones or all-glass cockpits.


I am currently pondering the idea of building a modern-electronics "replica" of one of these, with 3D-printed sphere halves containing stepper motors, magnetic rotary encoders, and a 6DoF compass/gyro IMU. If you put an Arduino or ESP32 inside to drive those, you could have simple slip rings that only needed to supply power through the roll and pitch axes.

(Only pondering though, I have had the same idle thoughts about making my own Russian Soyuz mechanical navigation instrument too from this other writeup of Ken's https://www.righto.com/2023/01/inside-globus-ink-mechanical-... but somehow the idea of making replica soviet vintage tech isn't as appealing as it was a few years back...)
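
If I ever got past pondering, the core loop would be simple: read the attitude from the IMU, compute the error per axis, and step each motor toward it. A minimal single-axis sketch of what I have in mind (hypothetical pin numbers and a stubbed-out IMU read, not a tested design):

    // Hypothetical ESP32/Arduino sketch: slave one axis of a replica
    // attitude ball to an IMU reading via a stepper driver.
    const int STEP_PIN = 25;
    const int DIR_PIN  = 26;
    const float STEPS_PER_DEGREE = 3200.0 / 360.0;  // 1/16 microstepping

    long currentSteps = 0;  // where we believe the ball currently points

    // Stub: replace with your IMU library's fused roll output.
    float readRollDegrees() { return 0.0; }

    void setup() {
      pinMode(STEP_PIN, OUTPUT);
      pinMode(DIR_PIN, OUTPUT);
    }

    void loop() {
      long targetSteps = lround(readRollDegrees() * STEPS_PER_DEGREE);
      long error = targetSteps - currentSteps;
      if (error != 0) {
        digitalWrite(DIR_PIN, error > 0 ? HIGH : LOW);  // choose direction
        digitalWrite(STEP_PIN, HIGH);                   // one step pulse
        delayMicroseconds(500);
        digitalWrite(STEP_PIN, LOW);
        delayMicroseconds(500);                         // crude speed limit
        currentSteps += (error > 0) ? 1 : -1;
      }
    }

(For a continuously rotating roll axis you'd also want to wrap the error to ±180º so the ball takes the short way around.)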


That would be a wonderful project. It'd be cool (albeit inefficient) if there were a way to use induction and remove the slip rings altogether.


That's because vacuum powered is the traditional way in small aircraft and the modern replacements are all-glass based on AHRS.

The number of planes without a vacuum system but with electric mechanical attitude indicators is quite small. Your best bet is the electric mechanical backup instruments used on earlier installations of the all-glass G1000.

Take a look at the Diamond DA40 and DA42 for electric backup attitude indicators; their next models (the DA50 and DA62), for example, use all-glass backup instruments.


What you need, my friend, is a ring-laser gyro.


Boat compass on the dash is awesome, I might have to borrow that. Any issue with interference from the vehicle itself?


Yeah. My compass (a Ritchie) has two-axis calibration at the bottom; I ended up maxing out one of the axes (so it's still a bit off). Also, it tends to shift by a decent amount when the car is pointing up or down steep hills.


Author here if there are any questions...


How accurate do you think this instrument was compared to the IC-based sensors found in a typical smartphone nowadays?


According to a paper on navigation sensors [1], commercial-grade sensors have gyroscope drift of 0.1º/s (which is consistent with iPhone data), while navigation-grade sensors have a drift of <0.01º/hour. (0.1º/s is 360º/hour, so the gap is more than four orders of magnitude.) I couldn't find specific numbers for the F-4's inertial navigation system, but I assume it is navigation grade. So the aircraft gyroscopes would be orders of magnitude better than a smartphone's. For the azimuth, the F-4 used a flux valve compass, which must be much better than the relatively poor compass in a smartphone. Of course, the smartphone sensors are orders of magnitude cheaper and smaller.

[1] https://doi.org/10.1186/s43020-019-0001-5


Drone flight controllers can use 3-axis magnetic field sensors combined with their 3-axis gyros and accelerometers, and use various sensor-fusion algorithms to produce higher-accuracy compass headings which shouldn't have _any_ long-term drift, since the magnetometer readings can compensate over time for any short-term gyro drift.

I'm not sure how accurate the magnetometers are, or how accurate the calculated compass heading can be, but I recall a project where I used a pair of 6DoF accelerometer/magnetometer breakout boards to datalog my coffee grinder by measuring the angle change between the body and the grind-setting adjustment ring, and I was getting angle readings with roughly 10-bit resolution which all _seemed_ to accurately track the small sub-1-degree adjustments when dialing in the grind. Although I'll admit to not taking the time to understand all the math involved to get from the raw 3-axis acceleration and microtesla magnetic measurements to a compass heading, I just copy/pasted sample Arduino code which gave me degrees to one decimal place. But my grinder has 100 "notches" around the 360 degrees of the grind-adjustment ring, and my data logger at least looked like it was accurately/repeatably showing changes down in the quarter-to-half-a-notch range.

So while I don't think $50 worth of stuff from Adafruit will get you a <0.01deg/hour gyro drift rate, with magnetic calibration and sensor fusion you could probably hit at least 0.1deg long-term precision/accuracy, so over 10 or more hours you'd have "navigation grade" accuracy. Which is probably "good enough" for anything traveling at less than a couple of hundred MPH. A 0.1deg error is about 1km every 600km - so if you were sailing from Sydney to San Francisco (with no GPS or astro-nav equipment), which is ~12,000km, you'd be within 20km or so when you got there. Which seems "close enough" for anything except delivering nukes.
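
The fusion itself can be as simple as a complementary filter: integrate the gyro for short-term smoothness and pull the result toward the (tilt-compensated) magnetometer heading to remove long-term drift. A minimal sketch of the idea (stubbed sensor reads, not a real driver):

    #include <math.h>

    // Stubs: replace with real driver reads.
    float readGyroZ()      { return 0.0f; }  // yaw rate, deg/s
    float readMagHeading() { return 0.0f; }  // tilt-compensated heading, deg

    float heading = 0.0f;        // fused estimate, degrees
    const float ALPHA = 0.98f;   // how much we trust the gyro per step

    // Call at a fixed rate; dt is seconds since the last call.
    void updateHeading(float dt) {
      float gyro = heading + readGyroZ() * dt;  // short-term: integrate rate
      float mag  = readMagHeading();            // long-term: absolute reference

      // Blend on the shortest arc so 359 -> 1 degrees doesn't spin the estimate.
      float err = fmodf(mag - gyro + 540.0f, 360.0f) - 180.0f;
      heading = fmodf(gyro + (1.0f - ALPHA) * err + 360.0f, 360.0f);
    }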


This instrument is only a display that shows data coming from another device that does the actual measurements. One of the reasons the three-wire synchro interface is used is that it is surprisingly accurate, as long as you don't care that it is “slow” by modern standards. The same interface was used to direct artillery and similar things that require significant accuracy to be effective.
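
For the curious, the decoding side is pleasantly simple: the three stator amplitudes redundantly encode one shaft angle, and you can recover it with a single atan2. A sketch, assuming demodulated amplitudes proportional to sin(θ), sin(θ−120º), and sin(θ+120º) (the exact convention varies with wiring):

    #include <math.h>

    // Recover a synchro shaft angle (degrees) from three demodulated
    // stator amplitudes a, b, c ~ sin(t), sin(t - 120deg), sin(t + 120deg).
    float synchroAngleDegrees(float a, float b, float c) {
      // c - b = sin(t+120) - sin(t-120) = 2*cos(t)*sin(120) = sqrt(3)*cos(t)
      float sinT = a;
      float cosT = (c - b) / sqrtf(3.0f);
      // atan2 ignores the common scale factor, so no calibration of the
      // excitation amplitude is needed.
      return atan2f(sinT, cosT) * 180.0f / (float)M_PI;
    }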


Well, this is just an indicator; the accuracy of the actual IMU would also need to be considered. The indicator itself probably isn't the main source of inaccuracy once the IMU has drifted a bit.


In civil aviation, specifically jets, your AI pitch on the horizon does actually reference body angle, not level flight. I wonder if it's a function of the precision and scale of mechanical AIs, and also the wider flight regime of military aircraft.


Fun fact: these airplanes are still being used as the backbone of the Iranian Air Force, and this very same unit was in use before they upgraded the avionics on some variants a couple of years ago.


Asking just out of curiosity/ignorance. The author mentions that the F-35 has a completely digital touchscreen to do basically anything on the aircraft (I assume). I can also imagine a powerful gun damaging it; how does the pilot manage if that screen stops working entirely? Compare the same situation in the F-4: the hit would only break/damage the instruments in that line of fire, correct? So in one case you would be totally screwed, while in the other you would only partially lose some instruments, right? I must obviously not be taking into account something (or many things) for the F-35, but in my mind having a 100% digital aircraft seems pretty scary.


Generally, if the cockpit is getting hit with damage to the instruments, there is a very good chance the pilot has also been injured or killed, and doesn't care about the instruments anymore.

In old gun fights (which just don't happen anymore), shots were likely to come from behind (so, they intersect the pilot) or the top (so, through the canopy if they're hitting the instruments). This has to do with the orientation both planes are probably in if one is shooting at the other. Go back farther and you get shots from the front, not from fighters (head-ons are very difficult to pull off outside of videogames) but from bomber tail gunners - very old planes from WWII even had bulletproof glass in front of the pilot for this reason. If the F-35 has gotten into a gunfight, the pilot has fucked up; it's not a dogfighter and wasn't designed to be one.

Even nowadays, if the missile or flak pops next to the cockpit and has managed to damage the instruments, there is a very strong chance the shrapnel has also hurt the pilot to the point that they're not flying home that day. This is the most likely way for the F-35 to be damaged in the modern era.

There are obviously scenarios where the instrument panel gets damaged but the pilot is okay, but it's such a low probability scenario that they likely deemed it to be less harmful than the benefit they foresee in a glass cockpit.


Thanks for replying! As others mentioned, I was missing/not considering the most important point: that the pilot is assumed to be dead, and that the plane is not supposed to take such fire.


The relevant question strikes me as less the extreme-outlier combat scenario and more routine safety of flight upon primary-display LRU failure. See this reply [1].

[1] https://news.ycombinator.com/item?id=41683263


Can't speak for the F-35, but for the fighter I work on, we basically consider the pilot dead if there's shrapnel damage in the cockpit. For instance, the FCS is located behind the pilot. That being said, I would assume the F-35 display is at least dual redundant (think two displays merged together, which can be done seamlessly) for flight safety reasons.


If the displays are merged seamlessly, how will you know if one has failed?


I assume both displays are in use, but fail over to one if a display goes “dead”, and then the single display can still show the most critical information/controls to the pilot; to me that seems like the only logical implementation given one of the two screens failing. It should be fairly easy to set up a redundancy failover process; I've done that many times in embedded coding where we failed over to a backup system.
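
The usual shape of that failover is a heartbeat check: each display publishes "I'm alive", and if the peer's heartbeat goes stale, this unit takes over the critical page. A toy version (hypothetical names, nothing to do with the F-35's actual avionics):

    #include <stdint.h>

    // Toy display failover: take over the critical flight page when the
    // peer display's heartbeat goes stale; hand it back when it returns.
    const uint32_t PEER_TIMEOUT_MS = 250;

    uint32_t lastPeerHeartbeatMs = 0;
    bool showingCriticalPage = false;

    void onPeerHeartbeat(uint32_t nowMs) {
      lastPeerHeartbeatMs = nowMs;  // peer announced itself
    }

    void tick(uint32_t nowMs) {
      bool peerAlive = (nowMs - lastPeerHeartbeatMs) < PEER_TIMEOUT_MS;
      if (!peerAlive && !showingCriticalPage) {
        showingCriticalPage = true;   // peer dead: render critical data here
      } else if (peerAlive && showingCriticalPage) {
        showingCriticalPage = false;  // peer back: restore normal layout
      }
    }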


Reduction in brightness?


I assume they're using LCDs, where the brightness is provided by the backlight.


> I assume they're using LCDs, where the brightness is provided by the backlight.

Presumably that also would be duplicated. Why go to all the effort of having two displays but still have the risk of a single backlight going out and making both displays useless?


Uhh, the pilot will notice that the right half of the display is black and move the displayed data to the left one?


Uhh, that's not "seamlessly blending two displays", that's just having two displays. If you've seen photos of planes, they don't just have an empty screen next to a working one anywhere.


Ah, I misunderstood - it's not uncommon for those "big" screens to actually be put together from two different screens side by side.


This is actually what I meant by "merged together". They are two separate displays controlled by separate computers, with separate power supplies etc. During normal operation they're synchronized somehow and can draw a large image over the entire display area, but in emergency mode you can display crucial flight data on only one of them if the other stops working.

I have never really thought much about it, but I guess the actual LCD thingie is manufactured in one piece, with the control signals just split between different computers somehow, but I don't know anything about how these things are made! :)


Here's a paper on a redundant display for military aircraft. Each LCD panel has redundant inputs, driver circuits, and power supplies. The two displays are built on a single LCD substrate, so there is no line between the two. https://www.mrcy.com/resourcehub/displays/dual-redundant-dis... https://www.mrcy.com/legacy_assets/siteassets/product-datash...


The backup for the display is an integrated standby instrument system (ISIS), which combines several essential instruments into one small digital display. An ISIS typically has its own sensors and a battery backup, so it should stay operational even if the main display fails. https://en.wikipedia.org/wiki/Integrated_standby_instrument_...


Oddly, no one has mentioned the HMDS [1] in the discussion yet.

[1] https://www.collinsaerospace.com/what-we-do/industries/milit...


They're not just regular screens. They're highly hardened, redundant, specialized displays, it's a whole industry.

There are companies that make displays with clear conductors over the screen to heat it, so the display keeps working even on the deck of an aircraft carrier in the Arctic.

There are companies that still make CRTs for specific military purposes.

These screens are safer, more reliable, and more durable than the mechanical systems they replace.


The displays aren't that special. Probably the two main things special about them are color rendition and contrast; the rest is just the certification process. Extrapolating from automotive experience, the color rendition and contrast come down to a team of engineers solely dedicated to simulating various lighting conditions and verifying that the screen remains legible, does not interfere with night vision, and does not cause reflections on other instruments that would make them hard to read. In automotive, these simulations use multiple terabytes of reflectivity data for various mostly “dull” materials (gigabytes upon gigabytes of data on what the driver might wear…), so extrapolate from that to “most advanced fighter aircraft”.



Basic flight instruments almost always have a backup. In the case of the F-35, there's a small square screen in the centre console which shows an attitude indicator and flight parameters. Needless to say, if the main screens are out you are turning around and looking for the nearest airport.


I'd imagine the ejection system is going to be activated by traditional handles, and not a screen. Same with the basic flight controls; there's no reason to move to a touch screen throttle or flight stick.


The F-35 is not meant to be a dogfighter. If it has gotten shot such that the control screen is unusable, something else has gone wrong.


It's a high performance fighter with a gun and SRMs, so...


...so you turn around and go home long before you're forced into a gun fight.

Sure, the F-35 is a multirole fighter aircraft with a gun - so is the F-15E, but that doesn't make it a dogfighter. If you're in a position where a guns solution is your only kill option in an F-15 or an F-35, something has gone terribly wrong. Pretty much everything about both jets is designed to operate in a theater with extremely high levels of air support and friendly materiel. It is assumed that they won't get into a dogfight because it is extraordinarily rare for an F-35 to exhaust two AIM-120s in a single sortie, let alone the 6 it can carry in stealth mode or the whopping 12 AMRAAMs that the F-15 can lug along.

Even as far back as the Vietnam War, forcing a missile truck like the F-4 into a dogfight with a MiG-17 was a death sentence. You don't need a very active imagination to suppose how an F-35 fares in a guns-only dogfight against an Su-27.


FYSA, F-15E and F-35 squadrons train BFM all the time, and it happens in exercises all the time. The enemy gets a vote, and the fog of war is real. I suspect the key piece you are missing is: if an engagement happens when you're in a stern WEZ, you may have to go to the merge.

Also, you are not carrying that many MRMs on an Eagle unless it's the new one with quad packs.


F-35s (I believe from the 134th Fighter Squadron of the VT Air Guard) train over my house regularly, and I can confirm they do quite a lot of slow, close-quarters dogfighting.

EDIT: I believe I've also seen F-22s, but I have no idea where they're from.


Depending on the fighter: redundant systems, i.e. multiple independent ring laser gyros (viewable on multiple independent displays), backed up by analog "round dial" instruments.


If you project a line that crosses an aircraft instrument panel, it's hard to imagine one that doesn't also go through the pilot's body.


To be fair, isn't the purpose of the F-35 fairly different, since it's extremely reliant on stealth and beyond-visual-range engagements? Instead of getting close enough to be gunned down, it is supposed to strike from so far away that the enemy wouldn't know it's there.


kens@ is a treasure we do not deserve.


Thanks!

Wait, you're the Linux/4004 guy, aren't you? That project was truly amazing.


Thank you :)


Crazy to think that all that technology was built by people using slide rules.


I bet the engineers responsible for this would be so stoked that someone figured out how they solved all these problems.


Pretty awesome to see the engineering details involved, thanks! As a software person I always wonder how they handled bugs and QA when building complex pieces of hardware like this.


Physical products require "test engineers" to design and run appropriate physical tests of products. It's an entire discipline worthy of study. Design for Six Sigma is a great place to start if you're ever interested in understanding ultra-high reliability applications.

https://www.youtube.com/watch?v=_g6UswiRCF0


The strangest concept for modern software engineers is that it had to ship bug-free and could never be updated with firmware patches. Shipping under those constraints brings a certain level of focus not experienced in modern design.


My dad used to work on certifying, servicing, and making custom instruments for planes, subs, and prototypes of all kinds from that era (60s to mid-90s).

His “lab” was basically all about testing and simulating environments for the instruments. He had tons of sayings about not having room for error in his line of work. This is as close as you can get to “building bridges”, and to this day I don't think I have seen this level of attention to detail/perfection in any other profession.

His job involved electrical engineering, mechanical engineering, and programming, amongst other things, not to mention a deep knowledge of the physics of these environments.

Back then, the tools and sources of information available to them were also quite crude compared to what we have now.

His spare time was all about flying, pimping his ham radio gear with all kinds of “home made” electronics, building antennas, and messing with computers. I guess he'd qualify as a “hacker” nowadays.


I think the key is that in those days you didn't launch a product until you were absolutely sure it was going to work well; it was prototyped and debugged before it was launched. At least that is the impression one gets with classical tech: solid reliability.


So basically like designing and building a bridge?


Umm... if you ship firmware today, sure it _can_ be updated, but almost nobody actually updates firmware, so yeah, that shit has to work when it ships.

Also, I've never been at a place that tested FW patches as thoroughly as full releases, so... do you _really_ want to install somebody else's random FW patch? I don't, unless I have some known problem with a fix in the release notes...



