>SpaceX also made use of Chromium and JavaScript for the Dragon 2 flight interface. I am not sure how that passed certification. I assume it was allowed because for every mission-critical input on the display, there was a physical button underneath the display as well
I think that's all the validation we need for HTML/CSS/JS as the best tool for UI development nowadays. I wonder if there was actual shared code from the Dragon UI used in their online docking simulator. How neat.
A consensus is starting to form that touch interfaces in automobiles are just a bad idea.
Physical switches, knobs, toggles, buttons -- these things can be activated by hand without needing to coordinate with sight, meaning our eyes can stay on the road.
There is no road to keep your eyes on in space though, so needing to coordinate hands and eyes is clearly not that big a problem for the Dragon, and might even be better than lots of physical inputs: you can cram more virtual inputs into the same area by using menus and whatnot, and that might make it easier to navigate one's way around them. Then again, complex menus might make things worse in an emergency. There's not that much for the astronauts to do in Dragon though, so it's probably all OK.
Disclaimer: I work for GM; any thoughts or opinions are solely my own. I do not work on car user interfaces.
I agree that anything you need to do while driving should be a physical interface, but some of the 3D haptic stuff looks really cool.
I don't think touch screens are going away because the display is so valuable, and there's so much stuff that is non-critical that you can configure from a touch screen.
Touch feedback would be a massive improvement. I've seen countless patents about them but in practice, all we get are vibrators.
The only convincing tactile feedback I've experienced is in the MacBook trackpad, even the much publicized "Taptic Engine" of the iPhone feels like a vibrator. And while the MacBook trackpad is convincing at emulating clicks, it is the only thing it does.
Have you tried the “HD Rumble” feature of Nintendo Switch games? It’s about on the same level as the “Taptic Engine”, but it’s been applied to a far larger variety of use cases, so it gives a real sense of what’s possible at that level of tech.
When I visited the SRI and NASA UI design groups in the early '90s, every kind of feedback (touch, force, audio, even smell) was being considered and implemented. Touch and force feedback were extremely advanced there, but nothing ever made it outside. They even had early precursors of Google Glass, so that they could work hands-free.
But those SpaceX touchscreens are very bad UI design and would not make it into any car or plane. Those buttons need to be physical. They remind me of the Apple Touch Bar, except here you need to use it at 3g.
I don't think critical vs noncritical is the important distinction for cars. It's distracting vs not distracting. The physical knobs and stuff in my car I don't have to look for to use. A touchscreen button requires much more careful finding and pressing since you can't just root around for it.
Why not a voice menu, in addition to a physical interface? For most functions, recognizing just a handful of words would do the job: a wake-up word plus "one", "two", "three", and so on. You could shut off the air conditioner like this:
you: "hello hal"
hal: "say one for radio, two for air cond..."
you: "two"
hal: "say..."
you: "three"
You wouldn't want a voice recognition app to steer or brake, but it would be reliable enough for changing the radio station or toggling overdrive. Are there high-end cars that come with voice control stock?
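The dialogue above amounts to a small menu tree walked one recognized word at a time. Here's a minimal sketch in Python; the menu layout and action names are made up for illustration, and the actual keyword spotting is assumed to happen elsewhere:

```python
# Hypothetical voice menu: a tree of (prompt, options), navigated by the
# handful of words the recognizer needs to know ("one", "two", "three").
MENU = {
    "prompt": "say one for radio, two for air conditioning",
    "options": {
        "one": {
            "prompt": "say one for volume up, two for volume down, three for next station",
            "options": {"one": "volume_up", "two": "volume_down", "three": "next_station"},
        },
        "two": {
            "prompt": "say one for warmer, two for cooler, three for off",
            "options": {"one": "ac_warmer", "two": "ac_cooler", "three": "ac_off"},
        },
    },
}

def navigate(menu, spoken_words):
    """Walk the menu with a sequence of recognized words; return the action."""
    node = menu
    for word in spoken_words:
        node = node["options"].get(word)
        if node is None:
            return "error_reprompt"   # unrecognized word: re-read the prompt
        if isinstance(node, str):     # leaf reached: a concrete action
            return node
    return "incomplete"               # still mid-menu, keep listening

# Shutting off the air conditioner, as in the dialogue above:
print(navigate(MENU, ["two", "three"]))  # -> ac_off
```

The appeal is that the recognizer only ever needs to distinguish a few short words, which is far more robust than free-form speech.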
Of course you can. If it has any form of display, it can display customized ads. Some radio stations manage to put ads on the 14-segment display that is supposed to show the station name and/or frequency.
And if it doesn't, there is always room for a sticker. Buy a new laptop and you are going to see half a dozen ads before you even turn it on: on the box, on leaflets inside the box, and in the form of stickers... It seems that only Apple doesn't play that game, but that privilege is expensive.
> Some radio stations manage to put ads on the 14-segment display that is supposed to show the station name and/or frequency.
We can talk about hardware protection against ads. The hardware would need to recognize allowed data - like station name and frequency - and reject disallowed, like attempts to make a running text. This will limit functionality; maybe it's worth it.
We can talk about software protection, where the user controls some layer of software. We can implement it as a type system, with the same result (only now we can modify the software layer if we need to). A type system which makes ads non-representable looks like a nice solution.
I'm guilty myself of writing a game resembling Tetris on an 8-segment display (blocks moving horizontally), so yes, people can be inventive...
> so needing to coordinate hands and eyes is clearly not that big a problem for the Dragon
Two things. During ascent the g load on the passengers is steadily increasing. By the time you're 6:30 into the burn, you'll be experiencing about 2.5g, and just before MECO at about 8:15 into the burn you're close to 3.2g. I'm not sure using a touch screen in a high-g environment is a good idea.
Second thing, right after MECO you're suddenly very close to 0 g. Which is a fun ride to be sure, but _everything else_ in the cabin is suddenly weightless as well. Items have a tendency to start floating around the cabin and knocking into things.
> There's not that much for the astronauts to do in Dragon though, so it's probably all OK.
When everything is going according to plan... there isn't much to do. That isn't the scope you want to design for, though.
> I'm not sure using a touch screen in a high-g environment is a good idea.
Agreed. All of the events during this time are fully automatic. For the case where the astronauts need to abort for a reason that ground control did not see, I saw what appeared to be a physical abort switch that had to be turned and pulled (but I could be wrong).
Keep in mind you might be operating them with a spacesuit gloved hand. I'm not sure how thick/solid they are, but there would be a decreased sense of physical tactile feedback.
Looking at the left panel, I'm guessing you first select the function you want (Water Deorbit, Deorbit Now, Breakout, Depress Response, Suppress Fire, Fire Response), then click either Execute or Cancel. Lights on the switches help you know what is selected.
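My guess at that two-step select-then-confirm logic, sketched in Python (the function names come from the panel labels above; the arm/execute behavior is an assumption on my part):

```python
# Hypothetical panel logic: selecting a function arms it (lighting its
# switch); only an explicit Execute fires it, and Cancel disarms it.
FUNCTIONS = {"Water Deorbit", "Deorbit Now", "Breakout",
             "Depress Response", "Suppress Fire", "Fire Response"}

class Panel:
    def __init__(self):
        self.armed = None  # currently selected (lit) function, if any

    def select(self, name):
        if name in FUNCTIONS:
            self.armed = name  # light the switch for this function
        return self.armed

    def press(self, button):
        """Execute or Cancel acts on whatever is currently armed."""
        if self.armed is None:
            return "nothing armed"
        fn, self.armed = self.armed, None  # disarm either way
        return f"executing {fn}" if button == "Execute" else "cancelled"

panel = Panel()
panel.select("Suppress Fire")
print(panel.press("Execute"))  # -> executing Suppress Fire
```

The two-step design makes sense in a high-g or gloved-hand environment: a single stray touch can never fire a critical action on its own.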
What consensus? Bad interfaces on low-end touch screens are terrible, but plenty of Tesla owners would take the screen over the plethora of physical knobs.
Volume and vehicle control/steering are the main inputs where physical makes sense.
Pilots have a duty "to see and avoid" in VMC, which means visually scanning outside the plane 90% of the time.
But it's human nature to fiddle with instruments and displays. So even in the steam gauge era it was a tough balancing act, and now it's much worse with acres of cockpit electronics.
It requires more training, and one would hope ADS-B provides sufficient collision alerts, to offset the heads-down tendency.
One of the best illustrations of this is the Qantas A380 in-flight engine failure that resulted in 1,000 or so alerts. They had to be manually acknowledged before the systems were cooperative again. One of the pilots was completely occupied with that task alone.
As a Tesla owner I completely agree. I really like what Ford did on the new Mach-E. They unabashedly 'stole' the touchscreen idea, then iterated. Now I want the smaller, wider LCD in front of the driver, and the big knob at the bottom of the center touchscreen. Maybe Tesla can copy that idea and iterate it some more... add some programmable buttons along the bottom or something ;-)
For some things, absolutely, but for the times when you need to interact with things on the screen that are more complicated than simple volume, a knob is still a pretty great option to have.
> From my experience, the majority of people who drive a car with touch interfaces don't want to look back to old fashioned buttons.
Many automakers that had touch-heavy interfaces are moving back towards physical controls, both because of market demand and evolving industry safety considerations.
I have no problem with touch controls in general, but replacing a volume knob I can find blindly with a relatively small pair of "Vol+" and "Vol-" touch targets is mildly infuriating. It's OK as the driver because there is an actual tactile control on the steering wheel, but downright unpleasant as a passenger.
It is fascinating just how poorly modern touch interfaces do compared to older vehicles. As an example, in older cars with vacuum-controlled HVAC, the sliders and such were very, very "notchy" and you could change all the settings without taking an eye off the road. In the 80s/90s the buttons changed to electrical instead of mechanical and you started needing to glance, due to the lack of haptic feedback. Fast forward to the modern era, and you generally have no way to control anything (other than the volume) without a full look at the touch screen, plus a longer look to make sure your finger hits the touch target.
I will admit that the touch screen in my new VW is fashionable and I love all the data screens -- but I have had near-misses due to having to stare at the screen to do stuff like hit touch targets for changing my Spotify playlist. It has nothing to do with responsiveness either -- the screen has no perceptible lag; it's all to do with the lack of haptics / feel.
edit: I drive both an early 90s truck and a late 2010s car. I don't miss the touch screen when I drive the truck, I only miss Spotify / Google Maps.
One of the issues with those automakers is that their interfaces are in general unresponsive and slow. The one in my BMW sucked. The Jaguar I-PACE is very slow as well.
Tesla got this right with both the touch screen and the physical buttons on the steering wheel for things like volume control.
> One of the issues with those automakers is that their interfaces are in general unresponsive and slow. The one in my BMW sucked. The Jaguar I-PACE is very slow as well.
> Tesla got this right with both the touch screen and the physical buttons on the steering wheel for things like volume control.
Perhaps there's an auto manufacturer or two that got it right. However, I've never driven a car with physical buttons that got it wrong. When I used to shop for cars in those days, I never had to consider whether those buttons were compatible with me. Now when I shop for a newer car, it's an added headache to consider, and one that I've not seen add any real value. Going on a test drive really will not tell me enough about whether the interface is good. And worst of all, whether it is or isn't affects my safety.
>Perhaps there's an auto manufacturer or two that got it right. However, I've never driven a car with physical buttons that got it wrong.
You gotta give it some time, because physical control interfaces in cars had over half a century to evolve to reach this point. Touchscreen interfaces in cars, by contrast, simply feel like they're still stuck somewhere in the pre-iPhone era of phone touchscreens in terms of usability compared to physical controls.
> However, I've never driven a car with physical buttons that got it wrong.
I sure have, the climate control in 2001 VW GTIs is a fiddly mess of small buttons with feedback from a tiny screen almost as low as the gearstick. It's the "high end" system but the lower-end knobs are so much better and safer.
Your comment has nothing to do with what I stated. As I mentioned already, I do have a tactile steering wheel control for volume. But even the experience for a passenger adjusting volume sucks compared to a knob.
And my touch screen, in general, is snappy enough.
I disagree that it "has nothing to do with what [you] stated", as the user experience also influences demand, and I still think Tesla got it right with the touch screen available for passengers to change the volume and the tactile steering wheel control. Do you drive a Tesla?
I lost the ability to have my text messages read to me, or to respond by dictating them. For a short time there was a half-implementation from Tesla that was in no way comparable to CarPlay, but it stopped working for me a couple months ago and I haven't been able to convince it to start working again.
I also used to be able to select music to play, and it didn't require jumping through a bunch of hoops to find it. Now my choices are the streaming that Tesla includes, or Spotify, and the Spotify interface is awful. I bought a subscription and tried it out, thinking I could switch from Apple Music, but after failing repeatedly to get the Spotify app in the car to see playlists I created on my phone, and growing weary of Spotify on my phone or computer defaulting to playing in the car even when I had been inside for hours, I gave up on that.
I also used to use Waze, or Google Maps, or Apple Maps, depending on what I felt worked best. Thankfully the maps built in to the Tesla aren't awful, but the interface is not without its quirks.
Tesla could solve this by supporting CarPlay and Android Auto, but they won't, because they need us to have a reason to pay for the premium data subscription.
> I lost the ability to have my text messages read to me, or to respond by dictating them
Out of curiosity, which Tesla? Because iirc there was a firmware update pushed out closer to the end of last year, which added that functionality (both having texts read and being able to respond by dictating), and I thought it went out for all Tesla cars.
The only reason I asked about the specific model you drive is because I know that some older Model S and X cars receive slightly different versions of firmware updates than newer ones like Model 3, but afaik the differences are usually related to autopilot/FSD features, not to UX features like this.
I have a Model 3 Performance. The text readout did work for a while, then it stopped. I haven't been able to bring it back. Unpair, re-pair, make sure bluetooth notifications are on, etc. Texts just don't show up any more.
My car is a little psychotic, perhaps :-). Probably a coincidence, but I also can no longer get the dashcam to work. Tried an exotic solution first (Pi Zero emulating a drive, with WiFi uploading at home), but it wasn't reliable, so then I switched to just a regular USB drive (adapter + high endurance microSD card), which worked for a month or so, but has stopped working and I haven't been able to figure out why.
I'm hoping the newest update with the Sentry updates will fix that last problem, since apparently it can format the card now. Though I did hear something about the last release also screwing up sentry for some people, so who knows. I'm at 2020.16.2, so there's at least one more point release that I haven't gotten yet.
Have you tried a full power-off, waiting 5 mins, then powering it on? I am not talking about holding the two steering wheel buttons to do a quick reboot, I am talking about a full power-off through settings.
I've read in some recent threads that this method ended up resolving a lot of really weird and random issues like what you described.
Interesting, I had a brief period where my M3 LR AWD would "forget" that I had enabled text message synchronization with my phone (a recent-model iPhone) and I'd have to open the bluetooth menu and re-enable it, but I haven't really had any ongoing issues with either message readout or the dashcam.
I'm not sure what all of this has to do with the touch screen conversation, but I feel you. It will all be solved with software updates eventually if enough people send formal complaints.
> and I still think Tesla got it right with the touch screen available for passengers to change the volume and the tactile steering wheel control.
That's not a super relevant point in response to me mentioning that my car has a touch screen available for passengers to change the volume and a tactile steering wheel control, and that I don't like it.
There is no way for it to be more relevant than it is.
I am telling you that what you don't like in your car, I think that Tesla got it right. How is that not relevant? What kind of response are you expecting?
Yeah, I was a touchscreen skeptic until I got my model 3: all the controls I care about while driving have physical controls on the steering wheel and, if I absolutely need to fiddle with the touchscreen, I can enable autopilot, which is safe enough compared to distracted driving.
I would argue activating autopilot while distracted is not only counter to Tesla's instructions, but criminally negligent. Please either drive, or take an Uber, for the rest of us.
So, when the road is well-marked and I’m adjusting the AC, you’d rather have nothing keeping my car in the lane? My hands are on the steering wheel, and I’m paying attention to the road, I just know that I’m going to be spending some amount of time looking at a screen to change a setting.
This isn’t “I’m tweeting and using social media and thwarting the attention sensors“ it’s “I know my attention is going to be divided between maintaining my lane and adjusting the temperature/picking a new station so I’m going to take steps to avoid drifting out of my lane into other cars”
“Autopilot” here is basically adaptive cruise control + lane assist, which are both technologies available from nearly every manufacturer at this point. I’m not talking about the beta Full Self Driving features.
Would be awesome to use voice commands mapping physical controls.
Say, “Ok, Tesla, put AC control on slider 1”. This also keeps the kids in back from messing with the AC (dads appreciate).
The passenger doesn't need to keep their eyes on the road, though. It seems to me like the benefits of being able to cram a million more things (as needed) into a screen outweighs the downsides of having to glance at a screen if you're not familiar with it.
I prefer touch controls both as a driver and a passenger. I even use a Halo keyboard (effectively just another screen, with no "real" buttons) on my laptop. It takes some getting used to, but eventually becomes second nature -- and then you open up huge new areas of functionality that aren't possible with physical buttons and knobs.
You have to give it to modern car tech though. This is how one would start a Ford Model T:
1) While out of the driver's seat, pull the choke, located near the right front fender, while turning the crank lever one-quarter turn clockwise to prime the carburetor. (The crank lever was located beneath the radiator at the front of the auto.)
2) Jump into the car, insert the key into the ignition, and turn it.
3) Immediately move the timing stalk up and the throttle stalk down to set the idle correctly.
4) Pull the hand brake back to place the car into neutral.
5) Get back out of the car and turn the hand crank a half-turn. This turn of the crank would actually start the engine.
You can use the physical controls on the steering wheel if you don't want to use the screen. This subthread is in response to GP saying "but the passenger still has to use the screen".
My 2006 Honda Civic navigation had a good mix of touch and physical buttons that is almost no-look. What I find funny, though, is that since it's a resistive touchscreen and beeps for every button press (after a lag, of course), I get "haptic feedback".
I strongly dislike voice control. Speech is a high-latency, low-bandwidth, error-prone medium. A button is a button: one bit, clear and unambiguous.
Tactile controls can become so thoroughly integrated in muscle memory that well-designed tools and machines (including cars) feel like an extension of thought. Think "volume up", it just happens automatically from thought to fingers to buttons.
Compare that to a multi-word speech command that requires perfect diction and phrasing over a span of five seconds. That's five seconds of distraction, longer if the command has to be repeated, vs milliseconds for the button.
Part of pilot training in the old days in the Air Force was that the pilot was blindfolded, the instructor would call out a control's name, and the pilot had to put his hands on the control.
Or he wouldn't be rated to fly the airplane.
In an airliner, the critical controls are all uniquely shaped, so the pilot knows by the feel which is which.
I agree, and the voice control implementations I've seen leave a lot to be desired. But they do work without taking your eyes off the road, or hands off the steering wheel.
I think the most common commands can all be mapped to buttons on the steering wheel, which similarly is as close to possible as "negligible reduced driving capacity to use" without the tediousness of voice control.
But I do think the theoretical safety benefits of physical buttons that are not on the steering wheel as compared to touchscreens for features beyond the ones commonly used while driving are probably overstated.
I share your dislike for voice control. But using it for some non-essential controls (like the air conditioning) while driving a car is the one place I've thought of where I would be comfortable using it.
> You have always had the need to look at the buttons, like reaching out to the radio.
When the controls are in fixed positions, you only need to look at them a few times, before you memorize their positions. It also helps if they're shaped differently, like knobs and dials in addition to buttons.
Using the example of a radio, I used to be able to do just about everything without looking, with the exception of direct tuning (but that's what presets are for), in my older cars with traditional head units. Now that I have a touchscreen head unit, there's almost nothing I can do without looking. Basically, I can control the volume from my steering wheel, and that's about it.
> When the controls are in fixed positions, you only need to look at them a few times, before you memorize their positions.
The controls on your phone are in fixed positions most of the time; do you use your smartphone without looking, the way you probably used your feature phone?
No, because with touchscreens come heavy-weight operating systems like Android, which have unpredictable input delays. Without looking, you can't tell if your touch input was correctly registered (vs. ignored because the OS decided to hang for a second), and if there's any mode switching involved, you can't tell when the UI has updated to allow you to press again.
I'm not against touchscreens as a matter of principle (though I come to appreciate the ergonomics of physical controls). But their overall responsiveness is, in general, much worse than those of physical controls.
I can't edit anymore, but I meant physical controls in fixed positions, so you can feel which control you're interacting with in relation to other controls. Even better with a little bump or divot, like the "5" key on a 10-key.
With a touchscreen, there's really no way to know where your starting point is when you reach for it blindly, so you have to look. I suppose you could use the edges, but that wouldn't be so easy on a large touchscreen.
With an old car radio, you can reach for it, know what you touch first, and know where everything else is in relation to it, all without looking, ever.
The buttons on a touch interface absolutely can change positions. What button is currently in scope? Everything is contextual in my experience. With physical interfaces, I don't have to worry that my radio's volume knob is going to wind up on the ceiling after I had just turned on the AC.
Even if you design the logical interface such that all 2-d screen mappings are consistent regardless of context, there is still the problem of tactile feedback. Physical affordances can provide varying shapes, materials and actuation methods which can immediately distinguish them from each other without the need for visual confirmation.
The touch-screen buttons in many cars often do change meaning, even if they don't change position within a given context.
I first experienced this on a Prius back in the mid-2000s, where the touch-screen was in one of several modes (climate, radio, navigation etc) and even if one can remember where the controls are, it's necessary to look before touching. That car had more physical buttons than is generally the case now and was still difficult to operate.
A study from the UK's Transport Research Laboratory showed that all drivers in the study had a critical reaction time degradation when interacting with Android Auto and Apple Car play.
> "The increase in reaction time when interacting with either system using touch was higher than previously measured forms of impairment, including texting and hand-held calls".
It's a small sample size N=20 but their conclusion was that interacting with these touch infotainment systems created a reaction time degradation that was 4x worse than the reaction time of a drunk driver (80mg of alcohol per 100ml).
They followed up this study with a call for urgent shifting in the auto industry to voice controlled infotainment systems.
[0] "Interacting with Android Auto and Apple CarPlay when driving", R Ramnath, N Kinnear, S Chowdhury, T Hyatt, March 2020.
I find it extremely concerning that we allow this kind of marketing pull from consumers to decide the matter. All the evidence so far points to the dangers of touch interfaces when operated by the driver.
If auto marketing/product managers are not responsible enough to correct this, then it clearly is up to road safety authorities to regulate.
Mazda did the right thing, other auto manufacturers should know better. We are truly risking drivers' and pedestrians' lives here.
> You have always had the need to look at the buttons, like reaching out to the radio.
Definitely not true. My last car had all physical buttons, and I could easily, by feel, use all of them with one hand while keeping my eyes on the road and the other hand on the wheel. When you've been driving a vehicle for years you get very used to its controls. Plus, on a lot of these cars some of the most common radio controls (like volume and change frequency) are right on the wheel beneath your thumbs.
You have always had the need to look at the buttons, like reaching out to the radio.
That is definitely not true. Volume on the left, tuner on the right, and of the five preset buttons, WFBQ is the one in the middle. You could feel for all of that without your eyes leaving the road. What might have happened is that you came of age after utilitarian radios. For example, I had to give some serious looking in order to find a DIN2 radio that had CarPlay and a physical on/off/volume control. Or just a physical on/off button, not some software button buried five menu levels deep. Such jackassery used to be the exception, not the new rule.
Climate control? Heat control on the left, blower in the middle, and position (where it blows) on the right. Or some combination thereof. Our utilitarian Mercedes Sprinter RV has it right. Other, less utilitarian vehicles: meh, not so much.
I could go on. But I've never been required to let my eyes leave the road for functionality until the last ten years or so. $DEITY help you if you need to switch Bluetooth devices while driving our Leaf: seven screen touches to pull that off, with plenty of opportunity along the way to make a wrong choice. No wonder it won't allow you to do it while moving.
"I had to give some serious looking in order to find a DIN2 radio that had CarPlay and a physical on/off/volume control"
Share.. Please! I still haven't found any.
I have 2004 wrx with 2din slot but haven't found anything with ergonomics of the manufacturer old setup. I have so much room, yet all I can find is huge screens with tiny or no buttons... :(
See that big honkin' knob on the left? Volume and on/off. It's glorious, especially when driving a 5400 kg vehicle. Bought it last year, or the year before, so there might be an updated model.
EDIT: check out Crutchfield's other radios, too. There's a brand they sell called "Boss" (never heard of them, either), and every model of theirs has a big knob for 1/0/Volume.
Both of my cars are old enough that they don't have a screen. I don't need to look at the buttons to use the radio, turn on defrost, adjust the A/C, etc. It is all muscle memory. I can do it all blindfolded. But on those occasions when I rent a car that has a touchscreen it drives me crazy. There is literally no way to use it without taking my eyes off the road and looking at the screen. It seems like a huge safety issue. It has convinced me to never buy a car with a touch screen. If there are no newer cars without a touch screen, I will just buy older used cars.
I've seen so many of these go by that I had to do the very same search that you could have made instead of asking for sources.
Asking for sources when a trivial search at your fingertips will yield them can come across as rude and lazy. Please make a better effort in the future.
I drove a Prius where EVERYTHING was controlled by a non-tactile touch panel. It was one of the most infuriating experiences I've had with a car, and far more distracting.
In comparison, my car has physical buttons and dials. If I want to adjust the temperature while driving I don't need to look away from the road, I just grab the dial and turn it a few clicks; same with stereo volume. I know approximately where the dials are, so I can reach in the general area and know what I'm doing without looking, because there is something physical to grab.
With physical controls, you can verify using touch (ie without sight) that you are touching the correct control, and then activate it.
With touch controls, as soon as you've touched it, you've activated it.
And touch screens have very few ways to verify your position by touch, since it's a large flat screen. Controls with a plethora of knobs and sliders have all kinds of terrain that you can use by touch to verify your position.
> You have always had the need to look at the buttons, like reaching out to the radio.
Aside from others' comments that one needn't look at buttons & knobs one has memorized, also:
1) one does not need to look as long at buttons and knobs as a screen to locate the desired control, because they do not change position. One also never needs to navigate a menu to perform a basic task. The closest most come is having a button that cycles AM/FM and channel presets.
2) it's possible to combine the fixed-location advantage of buttons and knobs with visual and tactile differentiation that reduces or eliminates the need to look at the controls even further. Some cars do this, but a really great example is the Nintendo GameCube controller, which was clearly designed by someone who'd watched a young child try to remember what all the buttons on earlier consoles (SNES, N64, PlayStation, perhaps) were used for. Size differentiation; the most-used buttons in the most accessible positions, larger, and highlighted by color; less-used ones smaller, with diminishing vividness of color the less important they were. A clear hierarchy of importance differentiated by color, size, shape, and feel. Automakers never fully embrace this because it results in a UI that doesn't look the way they want it to, but for maximum safety, they should.
A very, very consistent touchscreen UI could use some of this to great effect to reduce the harm touchscreens cause, but it would have to almost never screw with the placement, appearance, or behavior of UI elements, all of which would need to be perfectly consistent and just about never change. In practice that probably means never receiving updates, because there aren't a lot of teams that can resist fiddling with looks and behavior. (This problem plagues all web and frequently-updated software, and if you don't think it's a big one, or that it's causing serious irritation and reduced utility in computing in the wild, try watching someone who's not extremely "computer savvy" use their computer or phone for a while.) They'd also need to radically simplify their UIs and work very hard on reducing latency and improving interaction accuracy. Touching, then nothing happening for a second or two, is extremely confusing to non-computer-nerds, and it's 100x worse if it's not consistently that unresponsive: sometimes instant, sometimes a 1s delay, sometimes a 5s delay is the absolute worst way a UI can behave.
> You have always had the need to look at the buttons, like reaching out to the radio.
I never look at those buttons, and I switch stations all the time. In fact, the radio is my standard example of where touch screens fail and are hazardous.
Just like with a phone that has physical buttons - I would dial numbers without looking at the phone.
In the car I have that doesn't have a touch screen, I don't think I ever look for any button - be it climate control, emergency lights, etc.
My car has a holographic HUD that displays on the windshield and responds to hand gestures and voice commands. (The HUD is _awesome_, but the gesture recognition is hit or miss at present.)
I agree with you that touchscreens in autos are garbage. However I don't agree that physical or haptic switches are the _only_ solution. I think touchscreens in autos were just a technical stepping stone towards something new, like this combo of holographic/gesture/voice control UI.
No, not at all. The HUD is positioned so that you're watching the near-field road ahead. The hologram is projected so that it appears maybe 30' ahead of you, on the road. The HUD also has minimal information. If nav is off the HUD shows you your current speed and the speed limit only. If nav is on, the HUD gives you a simple map display showing your next turn, and that only appears as you approach. It's a really excellent system. Not at all invasive or distracting, and MUCH better focus- and attention-wise when using navigation vs, say, staring at a smartphone or console display.
I'm talking about having to move the focus from the road to the windshield. That still takes time. If the HUD is "at infinity" then it's ok, but would you need special glasses to make that work?
I use the term "holographic" for a reason. The HUD appears to be projected at a distance of, say, 20-30', and is positioned low on the windshield so that it essentially overlays on the road. You do not need to refocus your eyes on the windshield. Focusing on the road also focuses on the HUD projection.
Hand-eye coordination is actually quite difficult shortly after reaching microgravity. You're so trained to compensate for the weight of your arm that you'll reach above where you expect when you go for things. The opposite happens upon landing again.
As a fighter pilot, the ability to flip switches without touching them is incredibly important. But most of that is achieved via HOTAS (Hands On Throttle And Stick). Anything outside of HOTAS is already hands-off and a "labor". That's where fast access to data is important. Our checklists have migrated from paper products to iPads, and that has honestly been great. The physical limitation on the number of buttons available to achieve an effect is an issue.
So.... I need physical buttons where I need real feedback fast on touch. I can use touch screens where I need to access a lot of data but have seconds to spare.
Everything is automated, though. There's almost nothing for the astronauts to do except pull an abort handle. Instruments and telemetry are shared with ground control, and anyway, they don't need the astronauts to touch anything unless there is so much instrumentation to present that the astronauts have to page through it.
I wish we could invent some kind of haptic interface for touch screens. The flexibility of a touch screen is great but very hard to find things without looking.
For infotainment, touchscreens are a huge help, no debate really. Anyone doubting this should get a refresher by inputting a street name with a twist dial.
As someone that works in Auto on buttons and works with the head unit team, you are incorrect.
The benchmarking shows nearly every company is moving to screens. Sure you might have a few models that remove the screen, but these are cheap cars with limited features.
You literally can't have a button for every car feature.
And if your car doesn't have all the features, you won't sell well.
And if you disagree with all of this, you probably aren't the type of person to drop 40k-50k on a new car. You'd be happy with a 2014 car for 10k.
That looks like an excellent argument for stronger safety regulations: it is now illegal to drive any vehicle on public roads that does not meet the following safety standards: X, Y, Z. There you go, the market for idiots who like flashy gadgets over safe vehicle controls is eliminated. We did it with drunk driving. We did it with driving on phones (though both the laws and the enforcement are still far from ideal on that one). We can do it with obviously dumb ideas for vehicle controls too, where there is evidence of severe safety problems.
The Navy went big in touch screens and is rolling that back. Mazda, by no means a luxury brand, is abandoning touch screens. I think touch screens will stick around for tasks that aren’t routinely done while driving, but there will be a massive correction away from touchscreen as the only interface to the car.
> The benchmarking shows nearly every company is moving to screens.
Benchmarking? Really.
> Sure you might have a few models that remove the screen
Mazda is removing or de-emphasizing them across their product line if I understood correctly.
> You literally can't have a button for every car feature.
Yes, true, very very true. Also irrelevant. The point is that features like:
- climate control
- stereo volume (and other radio controls)
- defrost
- etc
should be physical buttons, because drivers use them a lot and should not have to take their eyes off the road to use them, and should NEVER have to take their eyes off the road to see whether their inputs took (touch/display latency sucks).
> And if your car doesn't have all the features, you won't sell well.
The features that must have physical input methods, must have them, and the others can be touch screens if you really like.
Also, fewer features is kinda fine, really, if it makes the roads safer.
> And if you disagree with all of this, you probably aren't the type of person to drop 40k-50k on a new car. You'd be happy with a 2014 car for 10k.
I.e., I must be cheap. Or maybe I would be happy with older cars (or newer Mazdas) precisely because they have these physical inputs / lack those dangerous touchscreens that I detest.
More and bigger screens are certainly the way of the future for most cars/SUVs, but I'd advocate strongly for a large set of ergonomic, reassignable hard controls. The best thing about the Tesla Model 3 controls is the configurable control gadget on the steering wheel (though I wish they'd used better materials -- feels absurdly cheap in a $50k car).
> The benchmarking shows nearly every company is moving to screens
You're confusing a benchmark with... something else. And I really dislike my new-ish MP3 player with its damn screen that I have to keep looking at to operate, to do anything.
> And if your car doesn't have all the features
My MP3 player has a ton of features and I don't need 90% of them. I want it to do one job well - play my music and a few other basic controls, just like my old, physically-operated MP3 player did. I don't want awesome UX/UI bullshit to get in the way, I just want it to do its job.
While your comment is a valid reply to the person you responded to, consider:
My older car with no touch screen has a custom stereo installed - everything with physical buttons. And it can do more than my other car with a touchscreen. Its Bluetooth capabilities are superior. I can set it not to auto-play, etc.
Yes, no need to go old school. But no, you don't need a touchscreen to get a radio/stereo with better features.
That is my opinion which is why I kept on saying "I". I see nothing wrong at all with 'old-school'. There are too many self proclaimed UI/UX "experts" who keep screwing things up for people (edit: I'll restate that: fucking over their users). I only want what works.
> The vast majority of customers do not want something that basic.
And you accuse me of 'opinion'. Well, provide evidence of this claim.
This. There will always be a niche of people who want barebones single-purpose old-school experience out of any given device.
Goes for literally anything, from cars to phones to music equipment. I definitely fall into this category for some of the things myself. However, it is important to remember that this is not representative at all of what the majority prefers.
This is a better post than its parent, and in some cases, sure I agree. I can't argue with someone who wants the new & shiny, if it works for them, great. But it's being pushed on us so such luddites as myself have no choice any more - it's all touchscreens now. It's become marketing driven. The choice is gone.
> this is not representative at all of what the majority prefers
I was very careful to not to project my desires on others in my original post, but you're telling me about what "majority prefers". So back this up. I don't think you can.
>you're telling me about what "majority prefers". So back this up.
Do you see dedicated single-purpose barebones MP3 players having a high demand? Or do people just use their smartphones for that purpose? When you walk into a room and ask people if they would find an MP3 player device useful and would like to get one, what answer do you expect to hear?
Also, try asking the same question from people about smartphones vs. single-purpose cellphones. Yes, there is obviously a niche of people who want to "disconnect" and not have to deal with smartphones. But they are in a tiny minority.
While market isn't a perfect representation of what people want, it is a great proxy, in a lot of cases. And for this situation specifically, it looks like the market has clearly expressed what consumers want.
I asked you to back this up with actual figures. Please do so. Now...
> Do you see dedicated single-purpose barebones MP3 players having a high demand?
I can't buy them. When I looked for a new one, there was none available I could find. I did ring the companies too. There's no choice so actual demand is difficult to ascertain.
Smartphones... OK, that's a good point.
> what answer do you expect to hear?
Irrelevant - give me figures, not asking what I expect to get. Facts please. And if you read the comments here, there's quite a few expressing preference for physical controls.
> But they [non-smartphone users] are in a tiny minority.
A minority or a tiny minority? Give me figures please. Don't just talk at me, throwing words around. Facts please. And BTW I'm one of these minorities. FYI.
> it looks like the market has clearly expressed what consumers want
The fact that you called up a bunch of companies, and none of them were producing dedicated barebones MP3 players, kind of speaks for itself. If there was a significant demand, why wouldn't they jump on this easy money-making opportunity, given that they would have pretty much no competitors?
>give me figures, not asking what I expect to get.
I don't have numbers, and neither do you. In the absence of actual numbers, anecdotal evidence is the second best thing. Do you have anecdotal evidence of talking to an average person and asking whether they would be willing to pay for a dedicated MP3 player? I do, which is why I asked you to imagine how that scenario would play out in real life.
If your scenario played out the opposite of mine, then we would be at a stalemate, as anecdotal evidence is nothing against opposing anecdotal evidence; only factual numeric evidence can beat anecdotal evidence. But if it played out the same, I feel like it would only act in support of my hypothesis.
I can also bring out hard factual numbers for the sales numbers of dedicated MP3 players going down as smartphone proliferation increased, if you want, but you probably already know how those numbers look.
"It is fascinating just how poorly modern touch interfaces do compared to older vehicles" with a response of
"I can't figure out how to turn the HVAC system on in a newish car"
This proves that the market is demanding worse interfaces; otherwise why would people have to deal with them? That's how your argument goes, and it's bunk. Remember the cries of pain over Windows 8? That's because people liked pain. The market spoke, right?
> I don't have numbers, and neither do you.
Then again I only spoke for myself. Whereas you "...this is not representative at all of what the majority prefers" & "it looks like the market has clearly expressed what consumers want" believe you can speak for others. Nope. Facts please.
> I can also bring out hard factual numbers for the sales numbers of dedicated MP3 players going down as smartphone proliferation increased, if you want, but you probably already know how those numbers look.
Irrelevant. I spoke about dedicated MP3 players, and if you'd bothered to read what I said, I actually said yours was a good point. Still, dedicated MP3 players have a market because they are still being sold - https://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3... So a market for them still exists. It's not about smartphones vs dedicated MP3 players, this is about interfaces and choice.
>"I can't figure out how to turn the HVAC system on in a newish car"
That says nothing about touchscreen interfaces themselves; it speaks to their poor implementation in certain cars. Just like touchscreen interfaces on phones: they were all various degrees of trash for daily usage until the iPhone came out with a touchscreen-oriented UI and led by example, showing what touchscreen-oriented UIs for phones are supposed to be, as opposed to a regular phone UI with touchscreen functionality bolted on.
A similar thing can be observed in cars. I had so many hellscape-ish experiences with touchscreens in cars, I can rant about those for days. But then I had an opportunity to extensively test its implementation in Tesla cars, and it was extremely pleasant.
Not that your criticism of touchscreens in modern cars is invalid; it totally is valid. Touchscreen interface implementation in modern cars is, on average, totally inferior to the older physical control interfaces. Which makes sense, as we had over half a century to perfect those.
However, as demonstrated by the Tesla interface I experienced (hybrid touchscreen plus physical controls on the steering wheel), those issues are not inherent to all touchscreen interfaces. The other manufacturers just need to catch up (some of them, I can already see, are very close). You cannot just bolt a touchscreen onto an interface designed with physical controls in mind and call it a day. That's pretty much why the touchscreen interfaces in most modern cars are awful to use.
I'm not down on touchscreens, just when they're misapplied, as you've indicated. And maybe markets indeed don't always deliver what the user wants, at least not at first. I think we've reached some agreement here.
It might happen when statistics actually show cars with touchscreens are measurably less safe.
It'd be even safer if we removed the radio altogether, and banned any physical controls that aren't on the steering wheel (to ensure you don't have to take your hand off the wheel to use them).
Why do we only care about public safety in one circumstance but not the other?
>Why do we only care about public safety in one circumstance but not the other?
Outside of armchair theorists online, I'm not convinced anyone in charge of car design actually cares about car safety at all.
Car accidents are the leading cause of death for people age 15-29 and the second leading cause of death for people age 5-14. Nearly 3,300 people die every day in car accidents, and double that number are permanently disabled.
If people actually cared about car safety, it feels like these numbers would have gone down in the last 30 years. They haven't. [1]
They probably have gone down per capita, right? We just have a lot more people than we did 30 years ago. A car built today is certainly safer than one from 30 years ago.
My hope is that driverless cars end up solving this faster than we otherwise could politically, but obviously that may be a bit ambitious.
Not for me. I'm quite content setting a target temperature and leaving the rest on "auto", although I know for many drivers that's not the case. And obviously it's still in my interests that other drivers not be distracted.
But those, too, could be controlled by physical buttons on the steering wheel and/or voice commands.
And some of us would have done. Trying to write anything longer than a brief text message or Tweet using any modern touchscreen phone is excruciating. Swipe-style keyboards have made entering text tolerable, but correcting the mis-reads or even the most basic editing takes orders of magnitude longer than it needs to. Admittedly, modern phones do provide a neat demonstration of various technologies designed to correct the errors caused by an otherwise slow and inaccurate input device.
Because Europe and Japan also had this idea that touchscreens are cool, new, and the future. 2016-2018 were the blunder years, with everything being touch. And soon people realized that touch, no surprise, is shit. So for the most common operations makers have added buttons back: climate and audio controls.
For never-used settings (like whether the interior lights should come on when you open the door, beep loudness, interior light brightness, etc.), yeah, a touch screen is good. For stuff that you actually use, no.
It blows my mind that people think touch screens are good. If they are so good, why don't we use touch inputs for the blinker stalk behind the steering wheel? For steering itself? Swipe left to go left. Why no touch input for shift paddles? Hell, why have I never seen anyone using a touchscreen keyboard instead of a physical one? Silicon Valley devs could easily afford one, and to them touch is supposedly superior to physical buttons. Nope, 1970s-or-whatever physical keyboards for every computer. It's like there's a bubble somewhere where "new = always good" and "old = always bad".
It’s unlikely that the flight interface is anywhere near as complex and flexible as a desktop UI needs to be.
Astronauts do not need a screen reader or other accessibility features; nor does the UI need the ability to resize for different monitors or play nice with the window manager, etc.
Mission-critical UI’s are also deliberately simple control panels that send commands to some other process. In comparison, Electron apps often try to do everything in JS, which can lead to efficiency problems for heavy-duty tasks.
The pilot interface, though, is not just for executing single actions; it's for executing against complicated checklists, some of which may be less familiar to pilots, some of which may even be uploaded asynchronously from the ground (for instance, consider the unorthodox procedures developed for the Apollo 13 astronauts), and some of which may be concurrently ongoing.
This lends itself extremely well to hypertext [0] and to interface components such as collapsible sections of text, being able to display multiple procedures simultaneously, being able to resize them relative to a dashboard of diagnostics (if not a window manager, a split-screen), and placing the button(s) required to execute a step directly near that step without needing to manually resize boxes when this is requested. Browser technology is exactly the stack that has solved all these problems!
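To make that concrete, here's a minimal sketch of how a checklist maps onto collapsible hypertext state. All the names (`makeProcedure`, `visibleSteps`, the commands) are mine, purely illustrative, and not anything SpaceX has published:

```javascript
// A procedure is a titled, collapsible section of steps; each step
// carries the command its nearby "execute" button would send on to
// the flight software (hypothetical names throughout).
function makeProcedure(title, steps) {
  return { title, steps, collapsed: false };
}

// Collapsing/expanding is exactly the kind of state a hypertext UI
// element (e.g. a <details> section) manages for free.
function toggle(proc) {
  proc.collapsed = !proc.collapsed;
  return proc;
}

// Only steps of expanded procedures are rendered, which is how you
// display multiple concurrent procedures without drowning the crew.
function visibleSteps(procs) {
  return procs
    .filter(p => !p.collapsed)
    .flatMap(p => p.steps.map(s => `${p.title}: ${s.label}`));
}

const procs = [
  makeProcedure("Cabin leak", [{ label: "Close vent valve", cmd: "VLV_CLOSE" }]),
  makeProcedure("Nominal entry", [{ label: "Arm parachutes", cmd: "CHUTE_ARM" }]),
];
toggle(procs[1]); // collapse the procedure we are not currently running
console.log(visibleSteps(procs)); // ["Cabin leak: Close vent valve"]
```

The point is that the data model is trivial; browsers already provide the collapsing, resizing, and split-screen layout around it.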
Upon thinking about this, it seems like a different problem, in that the people are moving while the screen is not. So maybe the screen should have a camera and try to adjust its orientation to the way the person is facing? Obviously if the screen is upside down, that's not great.
(I don't think that anyone does this, but it's a neat thought.)
This sort of exists, actually... use hover focus and eViacam (https://eviacam.crea-si.com/) together. Of course, that doesn't give you separate control of the mouse, but I'm not sure that's a bug.
I would suggest some manual switch (software or hardware), so that you can rotate displayed image as required, not yourself. Like a selector with 4 buttons, you click the one which is currently up for you and display respects that.
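As a toy sketch of that four-button selector (purely illustrative, in JS since that's the stack under discussion): the crew member taps whichever edge of the screen is currently "up" for them, and the UI rotates to match.

```javascript
// Map the edge the user tapped as "up" to a rotation of the display.
// The degrees would typically be applied as a CSS transform: rotate().
const ROTATION_FOR_EDGE = { top: 0, right: 90, bottom: 180, left: 270 };

function rotationFor(edgeTappedAsUp) {
  const deg = ROTATION_FOR_EDGE[edgeTappedAsUp];
  if (deg === undefined) throw new Error("unknown edge: " + edgeTappedAsUp);
  return deg;
}

console.log(rotationFor("bottom")); // 180 (user is floating upside down)
```

A manual selector like this avoids the failure mode of a camera guessing wrong while someone is mid-procedure.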
If we're going to use SpaceX to validate our language choices, then we should be moving back to C for the "mission critical" code.
> The certification and correctness part is made easier by using software verification tools. One such tool is Astrée. It is a static code analyzer that checks for runtime errors and concurrency related bugs in C projects. This also leads us to the answer for why a lot of mission-critical code is written in C. Its because there are a lot of static analyzers and software verification tools for C.
The Kernels for all major operating systems are written in C; C is also used for the majority of embedded devices which perform the small miracles around you. C is everywhere.
Well, C actually does have very good verification tools, so if they are used it could be a reasonable choice.
The problem is that commercial SW development is incompatible with the use of such tools or the associated safety-critical processes. The extra time, process, competences would make a company much slower than the typical bullshit move fast and break things start-up.
Those tools are basically the great pacific garbage patch in an ocean otherwise also full of garbage haha. HTML parsers are practically self-aware, and JavaScript, oh, JavaScript.
If it’s the only app, with a known configuration, with presumably no general Internet access, and a simplified UI, with redundant physical buttons for all functionality, yes, it’s probably a good choice.
This is one data point. Saying that this is all the validation we need for web technologies being the best way to make a UI is really only proof that statistical significance and science currently have no place in the software industry. It's just a fashion show.
What are the sources for the claim that they use Chromium and JavaScript? Is it the 5 year old StackExchange post? Things might have changed since then.
I wouldn't agree with that. Nothing has changed. You should still ask yourself why am I using this in my context. Could save yourself a lot of trouble and make more money. If we were to follow SpaceX we would also be using C#, knockout.js and labview for everything.
This is bonkers. The UI did look good; I'm surprised it wasn't something native like Qt. But I guess when I think about it, does it really matter? Chromium has some of the best engineers behind it, and web developer supply has never been higher.
> I'm surprised it wasn't something native like Qt.
The question is what you would really gain from using Qt compared to Chrome/HTML/JS.
Performance won't really matter for this. If it's good enough on the hardware they have, then further improving it makes no difference, provided they are running everything really mission-critical and real-time outside the UI, on an RTOS or specialized hardware anyway (I really, really hope so).
One advantage might be auditability, and the hope for fewer defects thanks to a smaller codebase and fewer dependencies. But I think even Qt is already far beyond that point. And since Chrome is more widely deployed these days, it might have a higher level of maturity than, e.g., the Qt/QML stack.
In the article's comment section, someone from Qt mentioned they were using Qt for something. So it isn't clear which part is Qt and which is HTML.
* The npm ecosystem is a bit overzealous in the dependency department. Not everything is bloated inside npm or out of it but a lot of it is.
* Javascript itself hasn't changed much in the way of making DOM creation/mutation easier. Custom elements, shadow DOM, etc. were nice but at the end of the day you still need to call createElement a lot and do everything manually. You're probably referring to using libraries, of which some are nice but many suffer the above-mentioned bloat problem. Event handling similarly hasn't changed much.
* True enough, there probably haven't been this many of us in a long long time! More people must mean more good people.
It also depends on what these screens are really used for.
The Dragon capsule can operate without any input from the astronauts onboard, so it's not completely clear to me whether these screens are used for anything beyond showing what is going on.
Correct, they can dock to the ISS and return fully automatically without the astronauts being involved. However, the touch controls allow them to take over if needed or if safety protocols require it.
SpaceX has a marketing version of the control interface in this simulator: https://iss-sim.spacex.com/ How closely that resembles the actual interface I don't know. The astronauts claimed the simulator they used matched reality quite closely.
Depends on the environment. If RAM or CPU time is at a premium then no. But that's so rarely an issue these days, especially if you separate UI from the main processing thread.
Finally, we have the world's least-integrated, most poorly accessible, completely inconsistent, most difficult to theme, most resource-intensive GUI stack. Software devs rejoice at the job security of JS framework churn while stealing user data and shoving ads in their users' faces to pay their own salaries.
> Btw if you consider that, there are wars over which JavaScript framework.
None. Simple as that. At least for something like the Crew Dragon control panels, any kind of JS framework would add unnecessary bloat for very little use, because they were all designed with a very different goal, definitely not to write the UI portion of a C++ application that controls a spaceship.
Just stick to vanilla JS, simple HTML and well-written CSS and circumvent the framework wars altogether.
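For example, a framework-free UI can be as small as a plain state object plus a render function; the state shape and markup below are just made-up illustrations, not anything from the actual Dragon UI:

```javascript
// Plain state object: no framework, no store library.
const state = { mode: "orbit", altitudeKm: 410 };

// A pure function from state to markup. In a browser you'd assign the
// result to some container's innerHTML, or build nodes with
// document.createElement; here we just produce the string.
function render(s) {
  return `<section class="panel"><h1>${s.mode.toUpperCase()}</h1>` +
         `<p>Altitude: ${s.altitudeKm} km</p></section>`;
}

console.log(render(state));
```

Re-rendering on each state change is exactly the mental model frameworks sell, minus the dependency tree.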
They all still compile to the same stack, so no problem. I don't have to throw away my working jquery/angularjs/scriptaculous/dojo/react code just because angular.io is pretty cool.
I'm surprised you're allowed mission-critical or even non-critical inputs on that touchscreen.
It might be OK if the entire screen was READ-ONLY. Anything else sounds like a disaster waiting to happen.. I also don't understand why companies race to have their laptop display touch, but then again I'm just a random software engineer.
Military planes - Super Hornet, for example - have been using touch screens (UFC, in this case, the thing just below HUD) for over two decades now, and they seem fine with it, and newer ones use it even more. They don't seem to cause any problems.
Granted, stuff that's absolutely critical - like gear or the arm switch - is still mechanical, but UFC is still pretty mission critical, controlling eg IFF, autopilot, or radios.
F-35 has a 20" wide by 8" tall touchscreen. However, just about everything that you do on the touchscreen can also be done with various switches and selectors on the stick and throttle. There's even an equivalent of a mouse that will let you move a cursor around the screen and select items.
About the only things you cannot do on the touch screen are giving stick/throttle inputs and MASTER ARM on/off.
Physical switches, since they are moving parts, have a high failure rate. On Apollo 11, the crew broke an important switch while donning their spacesuits. On Apollo 14, the lander's abort switch was falsely triggering, so they had to hack the computer to ignore it.
Most of the cars I've owned have been recalled or needed some switch replaced in the first year or two of ownership. The jury is still out on touchscreens, but they seem to have a lower mortality rate (it could be a simple numbers game: several dozen switches vs. one touchscreen).
> I also don't understand why companies race to have their laptop display touch
Try using your laptop in bed, on couch, browsing something with a partner, etc. The flexibility of Yoga style 2-in-1 devices is amazing in this regard.
For productivity I don't find it extremely useful (maybe if I was a designer and combined it with a pen), but for casual use - a lot of UI these days is touch friendly and feels natural with touch screen.
Personally, it seems I touch wrong buttons quite a bit on my phone. I also click wrong buttons when using mouse but that is rarer than pressing wrong button on my phone.
The most annoying part is using touch screen in my car for GPS. It might be bad UI but also viewing angle might be making it hard to press right button. But then again I would not expect a physical keyboard in a car.
In physical land, it is very rare that I press a wrong button. Maybe when playing intense video games, I press jump too soon or use wrong gun. Oh and of course while typing.
Touch screens are too sensitive to be safe. There is also a very high chance of a mis-click (see the HN upvote/downvote buttons). Much safer to use regular old mechanical buttons with tactile clicks.
I noticed during the stream one of the astronauts selecting some input on his screen, pausing for a moment, and then pressing the physical button below the screen to confirm the input. So that's probably their defense against misclicks: any action other than just changing the display mode or opening a checklist has to be confirmed through the physical "Execute" button (there seems to be two of them, one for each astronaut).
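If I had to guess at the logic, it would be a tiny arm-then-commit state machine like this. Entirely hypothetical names; the real Dragon code is not public:

```javascript
// Two-step commit: a touchscreen tap only "arms" an action; the
// physical Execute button is what actually fires it.
function makePanel() {
  return { armed: null, executed: [] };
}

function touchSelect(panel, action) {
  panel.armed = action; // a touch alone never executes anything
}

function pressExecute(panel) {
  if (panel.armed === null) return false; // stray press does nothing
  panel.executed.push(panel.armed);
  panel.armed = null; // re-arm required for the next action
  return true;
}

const panel = makePanel();
pressExecute(panel);                // no-op: nothing armed yet
touchSelect(panel, "DEORBIT_BURN");
pressExecute(panel);                // commits the armed action
console.log(panel.executed);        // ["DEORBIT_BURN"]
```

This structure makes a misclick on the touchscreen harmless by construction: without the separate physical press, nothing happens.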
I'm also curious about this disaster waiting to happen. They've emphasized an escape capability all the way into orbit. They emphasized various types of automation (also used by other spacecraft).
Why is READ-ONLY incorrect? I mean, as in: the inputs on the screen have no write ability, and the screen is only used to control the interface that displays the information.
I might have misunderstood your point. I thought you referred to the fact that the touch screen was based on a non-realtime system rather than it being a touch screen.
Years ago, astronaut Chris Hadfield told an audience of software engineers (including me) that the moment the space shuttle was in stable orbit, the crew would pull out laptops and set up an ethernet network for all the scientific work of their expedition, as the space shuttle's own computers, though limited in raw computing power, ran software that was so thoroughly tested that there was every reason not to "upgrade" them in any way to support the scientific work.
As a child, a friend's mother was a programmer for the Shuttle (mid-80s to early 90s). Her job sounded awful: a 'science person' (I was a child, remember!) designed a piece of math; an engineer (software?) would take that piece of math and reduce it to an algorithm; my friend's mother would program that algorithm in machine code, in parallel to an assembly listing. Each instruction (and all the data) would be reviewed line-by-line. She had to provide reasoning (roughly a proof) that each instruction correctly implemented the algorithm. My guess is the algorithm went through the same equivalence checking to the 'math'. I don't know how much code she wrote, but it was only a few programs (functions) per year.
That was my view of 'programming', and I wasn't disabused of that until very late 90s.
Sounds lovely. Now I write code with vague requirements that interfaces with ill-specified functions, on tight deadlines. It surprises me that it doesn't contain a lot more bugs. Maybe we just haven't found them yet.
And whatever you wrote, even written exactly to the spec, the user says it is not what they want, and demos why it is not working or useful... Of course, that has nothing to do with them; that is not what they said, or your IT guy's analysis was wrong. As for the spec... they do not understand it.
To be fair, it takes a lot of interaction (and understanding) to fix that. And you wonder how these things keep operating.
It’s funny how early experiences color our understanding of a craft. I remember as a boy telling a neighbor that I was interested in art. She worked with computers and, with stars in her eyes, told me how digital plotters could define shapes accurate to n thousandths of an inch. For me that was the worst possible selling point, and may have influenced my decision to eschew anything digital for years. ... and now I teach digital art.
Even today with modern avionics there would be no reason why an aircraft's computer systems would need to interface with a general-purpose LAN. Take a look at the control system of a Cirrus Visionjet for instance. It's quite locked down as its own embedded system. For reliability and resiliency reasons it's good to keep some things compartmentalized.
Good writeup! In general, the direction in modern aerospace is to use COTS (commercial off-the-shelf) parts with redundancy and failback for radiation hardening.
If you’re into this sort of thing, I co-write a weekly newsletter about the space industry and often talk about software. https://orbitalindex.com
I'd worry that the COTS approach might be fine for dealing with cosmic rays within the Van Allen belt but it might not be sufficient to deal with the solar wind if you're going to GEO or Mars.
Just FYI but OI has become a great resource for me and my team to maintain situational awareness on what's going on in the industry. Thanks for all you do!
When you think about it from a first-principles perspective, having multiple touchscreens is better than only having physical switches. When a switch is damaged or fails, you are out of luck. When a touchscreen is damaged or fails, you use the one next to it. On a rocket you do not have the mass or room for more than one of all but the most critical switches.
There have been quite a few missions that nearly caused death or mission failure directly due to a switch getting broken (Apollo 11, lander return engine-arm switch) or going faulty (Apollo 14 abort switch).
What really matters is that there is no single point of failure (touch screens can do everything switches can, no individual touch screen is important, and switches cover the abort/return scenarios that protect the crew). For the software, it only matters that it's been fully tested, including against random bit flips and hardware failure.
From a cost-savings perspective, it's vastly cheaper to verify that 3 touchscreens are working correctly than the 600 switches they replace.
>When a switch is damaged/fails, you are out of luck. When a touchscreen is damaged/fails, you use the one next to it.
This is a trivial problem to solve on a physical interface. One solution could be what is commonly used on hardware synthesizers. A shift button or switch. You engage it and all controls begin to perform their secondary functions. You get redundancy for the price of one extra control and a secondary set of labels in a different color.
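The shift-switch idea above is easy to sketch in code. This is a hypothetical illustration (all function names and the panel layout are invented): each physical control carries a primary and a secondary function, and one shift switch selects between them.

```c
#include <assert.h>

/* Hypothetical sketch of the "shift switch" scheme from hardware synths:
 * every control has a primary and a secondary function, and one shared
 * shift switch selects between them. Names are invented for illustration. */

typedef enum {
    FN_NONE,
    FN_CABIN_LIGHTS, FN_BACKUP_LIGHTS,
    FN_RADIO_MAIN,   FN_RADIO_AUX,
} function_id;

typedef struct {
    function_id primary;
    function_id secondary;  /* engaged while the shift switch is on */
} control_map;

static const control_map panel[] = {
    { FN_CABIN_LIGHTS, FN_BACKUP_LIGHTS },
    { FN_RADIO_MAIN,   FN_RADIO_AUX     },
};

/* Resolve what a control press means, given the shift-switch state. */
function_id resolve(int control, int shift_engaged)
{
    if (control < 0 || control >= (int)(sizeof panel / sizeof panel[0]))
        return FN_NONE;
    return shift_engaged ? panel[control].secondary : panel[control].primary;
}
```

One table gives you the "secondary set of labels in a different color" for the price of a single extra switch.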
Also, using displays to virtually label buttons is common. In that case you can reassign a control if one fails.
In any case Dragon capsule had physical buttons for important functions as a backup.
The touchscreen frees you from the complexity that comes with giving switches alternative modes, and frees up the mass to carry multiple copies of critical switches. Multimode switches also greatly increase the complexity and the number of failure modes, so they need to be designed such that triggering a switch in the wrong mode is recoverable (e.g. the switch for aux radio power isn't also the undock switch).
When you get to the point of having displays for the switches why not go full touchscreen and eliminate all of that cost and complexity of a bunch of tiny displays?
Interesting read. I've wondered about their use of big touchscreen interfaces having heard a friend's experience with the similar setup in a Model 3.
On multiple occasions they've had to pull off the highway to turn their car off and on again to get the screen working. Not really an option on your way to space.
> On multiple occasions they've had to pull off the highway to turn their car off and on again to get the screen working.
Surely not? The touchscreen is run by the media computer which does not control the car. You can reboot it with the 'two finger salute' while you are driving down the highway. Some things will be unavailable (you cannot, for example, engage autopilot while it is rebooting), but the car still runs & drives.
I hope they just miscommunicated the situation to you, otherwise they are really working too hard just to fix a touchscreen. Turning the entire car off is kind of a pain in the ass. I've never done it. And I have only rebooted the touchscreen a couple times ever. Your friend may want to schedule a service appointment if they really do have to power cycle the whole car, because that is super abnormal.
I'm not suggesting it had any effect on automotive functions, but I also wasn't aware of the media computer reboot via the steering wheel buttons. From what he described to me, I don't think he was either; I'd have to check and will pass that along if not!
I'm a little surprised that the media computer doesn't have a built-in heartbeat check and know to reboot itself if it stops responding. I've heard of other cars and embedded systems doing that.
EDIT - asked him about the reboot via steering wheel:
> It’s not great because you lose lots of feedback. No speedometer, no sound from turn signals, etc. But it does work.
It probably does, but you always have issues where the task that feeds the watchdog works fine, but another does not (eg: GPU gets in an odd state).
In single-core systems it's somewhat solvable, but in multicore systems, and if you want to include multiple chips in the crash domain, it gets very hard with off-the-shelf chips.
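The failure mode described above — the task that feeds the watchdog is fine while a sibling task is wedged — is commonly handled by gating the feed on every monitored task having checked in recently. A minimal sketch, with all names, task count, and timings invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of a "gated" watchdog feed: the hardware watchdog is fed only
 * when every monitored task has checked in within its deadline. If any
 * one task (say, a GPU service loop) wedges, the feed stops and the
 * watchdog reboots the system. Names and timings are illustrative. */

#define NUM_TASKS   3
#define DEADLINE_MS 1000u

static uint32_t last_checkin[NUM_TASKS];

void task_checkin(int task_id, uint32_t now_ms)
{
    if (task_id >= 0 && task_id < NUM_TASKS)
        last_checkin[task_id] = now_ms;
}

/* Returns nonzero when it is safe to feed the hardware watchdog. */
int watchdog_may_feed(uint32_t now_ms)
{
    for (int i = 0; i < NUM_TASKS; i++)
        if (now_ms - last_checkin[i] > DEADLINE_MS)
            return 0;  /* a task missed its deadline: stop feeding */
    return 1;
}
```

This only widens the crash domain to tasks on the same chip, though; spanning multiple chips, as the comment notes, is much harder.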
In my experience, the couple of times I've had a touchscreen glitch, the watchdog activated and rebooted it after a bit. It's not very aggressive though; it can take a minute or so.
> It’s not great because you lose lots of feedback. No speedometer, no sound from turn signals, etc. But it does work.
Yep, this is very true. You are driving blind, as it were, because the touchscreen is the dash. Fortunately it doesn't take long to reboot, but it is still much less than ideal.
> On multiple occasions they've had to pull off the highway to turn their car off and on again to get the screen working.
This is false. Teslas have, roughly speaking, two computers. One drives the big touch screen, and the other manages the core automotive functions.
The media computer sometimes hangs/crashes; this has no impact whatsoever on the basic function of the car.
When that happens, it's an easy matter to reset it while the car is in motion; one just holds one button under each thumb on the steering wheel for a few seconds. The big screen will come back within a couple of minutes.
I'm aware there are two computers and the UI crashing has no impact on the basic function of the car, but turning the car off and on makes the media computer restart. Not sure if he was aware about the steering wheel reset function, it was a while ago that he told me that story. Also possible I've misremembered it.
I should not have said 'this is false'; the story very well could be accurate. Please pardon the harsh tone.
It's more correct to say that stopping and re-starting the whole car is not the easiest way to restart the MMU. It's quite possible there are people who aren't aware of the 'two thumb salute' method.
Your friend is dangerously misinformed. There is no need to pull off the road to fix any problem with the screen. There are physical controls for everything essential to driving and they work independently of the screen. Dragon is reportedly the same.
But there’s only a handful of buttons, no? Unless I’m missing something. Might be a dumb question but the Apollo control panels had hundreds. How would you manually fly the Dragon in the event of a screen failure? I’m assuming this question was answered somewhere, since the thing was certified to fly, but I must’ve missed it.
Edit: Is the answer simply "it doesn't, it gets flown remotely from Mission Control"?
Give their docking simulator[1] a go on a phone or tablet - I was dubious at first, but even on an iPhone I managed to complete manual docking on the first attempt. It turns out flying a spaceship is very different to flying a plane, and rapid inputs just aren't massively important. Generally your edit is probably true as well - in standard flight everything is meant to be on autopilot using predefined routines for the mission, computers are just significantly better at performing a 16 second burn at 82% thrust at precisely T+12:48:16.
You managed to dock in a simulation in which it's guaranteed that nothing goes wrong. Most of the switches and buttons are there for when something does go wrong.
I think the main reason there are so few buttons on Dragon is that it evolved from an uncrewed vehicle that was designed to be flown from the ground, so everything is much more connected and software controlled than was even practical in the shuttle era.
That makes sense. Flying a plane is all about responding to perturbations to keep on an even course. A spaceship has essentially zero perturbations. It goes exactly as you direct it to go, within the precision of your burns. So it's all about carefully issuing bang-bang control signals (burn for 2 seconds), then measuring the response against expectations.
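That "burn, then compare against expectations" loop can be sketched in a few lines. This is a toy constant-thrust model with made-up numbers, not how any real flight computer does it:

```c
#include <assert.h>
#include <math.h>

/* Toy model of open-loop bang-bang burn control: command a fixed-length
 * burn at constant acceleration, then compare the achieved delta-v
 * against the plan. All numbers and names are illustrative. */

double burn_delta_v(double accel_mps2, double burn_s)
{
    return accel_mps2 * burn_s;  /* constant-thrust approximation */
}

/* After the burn, check whether we hit the target within tolerance;
 * a real system would schedule a trim burn for any residual. */
int burn_on_target(double achieved_mps, double target_mps, double tol_mps)
{
    return fabs(achieved_mps - target_mps) <= tol_mps;
}
```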
It's close to zero over short distances (ignoring solar wind and LEO atmosphere), although you're still trying to dock with a moving target.
If you've played KSP or read about Gemini 4, the relative motion can be unintuitive due to orbital mechanics. The SpaceX docking simulator puts you right next to the ISS with low relative velocity -- as it would be in real life if everything went as planned -- but it would be much more challenging if you needed to do it manually from a greater distance and relative velocity.
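The unintuitive relative motion mentioned above is captured by the Hill/Clohessy-Wiltshire equations for motion near a circular orbit. A minimal integration sketch (step size, orbital rate, and initial conditions are illustrative) shows the classic surprise: a prograde velocity nudge first drives the chaser radially outward rather than simply "forward":

```c
#include <assert.h>

/* Hill/Clohessy-Wiltshire equations for relative motion near a circular
 * orbit, with x radial, y along-track, and n the orbital rate:
 *   x'' =  3*n*n*x + 2*n*y'
 *   y'' = -2*n*x'
 * Integrated with semi-implicit Euler; values are illustrative only. */

typedef struct { double x, y, vx, vy; } cw_state;

void cw_step(cw_state *s, double n, double dt)
{
    double ax = 3.0 * n * n * s->x + 2.0 * n * s->vy;
    double ay = -2.0 * n * s->vx;
    s->vx += ax * dt;
    s->vy += ay * dt;
    s->x  += s->vx * dt;
    s->y  += s->vy * dt;
}
```

Starting at rest with a small +y (prograde) velocity at roughly the ISS orbital rate (n ≈ 0.00113 rad/s), the radial coordinate x grows positive over the next few minutes — exactly the kind of cross-coupling that makes long-range manual approaches hard.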
You probably don't fly manually; you just use a predefined abort procedure or receive a configuration from ground control. It looks like humans are now just another cargo, and the screens are only for non-critical flight information (otherwise I can't see how they would get away with touchscreen-only controls and software running on Chromium).
It's interesting to wonder whether, if an Apollo 13-style fault happened, modern ships would be easier to reconfigure and rescue, or harder.
It's just a bad article. The whole section about Astrée is based on a quote on HN about the ATV, a European resupply craft developed 15 years ago that has absolutely nothing to do with SpaceX.
I would have expected them to use SCADE, but given their choice of stacks that are entirely unverifiable, due to a combination of their complexity and their semantics, I can only assume that "control panel" was basically decorative, and that actual critical functions and alerts were handled by something with much, much higher safety assurances.
TLA+ is basically an academic tool. It allows you to verify your specification, but it is useless to detect bugs in your implementation. In my experience, the process of writing your specification in TLA is a buggy enough process to make it all a waste of time.
The Crew Displays do not use Knockout.js. As far as I am aware this is possibly used in some other asset management software developed in-house for manufacturing teams but definitely not onboard Dragon.
At the bottom of the article they mention model rockets and the three levels of certification. Each level grants you access to more powerful motors and therefore higher or larger flights. The hobby is self-governed by NAR and Tripoli, who manage level certification.
It's a fun hobby, although large motors get pricey. The largest can be 4-5 figures per launch. However, you can get very advanced and do things you wouldn't typically expect in a hobby.
Here's a two stage ( 4" diameter booster, 3" diameter sustainer ) reaching over 200k feet in altitude. The Karman Line is about 330k feet.
I'd love to work with physical software (software that interacts with the real world through sensors and actuators), as a C developer, how should I move into this space? Every time I try intro to ARM kits I feel like I'm in over my head.
Start out with an Arduino, or ESP32/ESP8266. The ESP boards are probably the best bang for your buck out there if you're just playing around— you can start out with the Arduino environment (C++-ish), or use something like PlatformIO to interface a bit more directly with the hardware. It's not as low level as an ARM board where you have to worry a bunch about setting up clock multipliers and all that jazz, but the power to do so is there when you're ready for it.
The best way to learn is to solve a problem that motivates you. Maybe you want a phone notification when your laundry is done, or when a room in your house exceeds a certain temperature. Work on some little projects like that
If you want to get even lower level, then try out the ARM boards again. Someone else mentioned STM32 boards, which are great in many ways but IMO not very user friendly (their STM32Cube software actually makes me mad). You can also try your hand at FPGA development. Verilog and VHDL are both popular languages, and preferences between the two tend to depend on domain. There are compilers that let you program FPGAs in C, but I'd shy away from that.
If you want to look for work, probably the big industries to check out are:
Consumer embedded hardware: companies that make printers, cell phone radios, etc
I know you said you're a C developer, but the MicroPython boards get you up and running pretty quickly. It's straightforward to get a blinking LED (the Hello World of embedded programming).
Otherwise, I would use a STM32 dev board and their IDEs and tools. It will get you off the ground and blinking LEDs way faster than working out the toolchain, loaders, ASM, setting the clocks and peripherals and all that work that has to be done before you even get to your application.
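At its core, the blink "Hello World" is nothing more than toggling one bit of a GPIO output register in a delay loop. On real hardware that register is memory-mapped (an AVR PORT, an STM32 ODR); the sketch below simulates it with a plain variable so the logic runs on a host, with the pin number invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Host-runnable sketch of the logic inside "blink": flip one bit of a
 * GPIO output register. On a microcontroller, gpio_out would be a
 * memory-mapped register (e.g. PORTB on AVR, GPIOx->ODR on STM32);
 * here it is a plain variable so the sketch runs anywhere. */

#define LED_PIN 5u  /* illustrative pin number */

static uint8_t gpio_out;  /* stand-in for the output register */

void led_toggle(void)
{
    gpio_out ^= (uint8_t)(1u << LED_PIN);  /* flip only the LED bit */
}

int led_is_on(void)
{
    return (gpio_out >> LED_PIN) & 1u;
}
```

Everything the vendor tools set up for you (clocks, pin muxing, timers for the delay) sits underneath this one XOR.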
I worked with AVR microcontrollers, and the avr-gcc toolchain when I was in college. It's basically an abstraction layer below what an Arduino provides you.
Start reading data sheets for these ARM parts. It’s straightforward once you know what peripherals you need. Get familiar with FreeRTOS or another embedded operating system. It’s really not rocket science. You only need to know what you want to build. Otherwise it gets boring very quickly.
I've gotten recruiter email from SpaceX, but I've worked in embedded sensor systems, big radars for the government, and robotics, so those might be stepping stones. Be warned: you might have to be a US citizen.
This article looks like a fine overview but when it comes to follow-up posts, the test is: does the new submission contain enough SNI (significant new information) to support a substantially different discussion? In this case it looks like not, but I can't really tell.
It would be awesome if some SpaceX engineers would give a few presentations at events like CppCon and talk about their software development process including some code examples and demos.
I wonder how they manage not to have accidental taps on the touch screen during liftoff and/or re-entry. As I understand it there are a lot of G's and violent vibrations, and I would assume it's hard to keep a steady hand?
(At least this is my understanding from watching Apollo documentaries/movies etc.)
Liftoff and re-entry are completely automated. The astronauts are passengers and don't interact with the craft during these maneuvers. This was the case even for Apollo.
The Crew Dragon seats also have physical arm-rest controls. Those arm-rest controls are what the astronauts use under acceleration, and IIRC they're primarily comms controls.
I doubt anything on screen can affect flight during lift-off or re-entry. Anything that might, had damn well better be locked out anyway. Lift-off is automated, and I'd think any significant failure of that automation would be a cue to abort (there's a physical control for that); as for re-entry, by the time you're deep enough in the atmosphere that things are vibrating, you're really just falling with style, and there's no "flying" to do nor anything to control.
I'm so relieved to hear all the redundancy and testing in place. I had heard that the touchscreens were built in Chromium/JS and was rather alarmed. Don't get me wrong – I do a lot of web stuff and I love that environment, but I've never seen a web app I would trust two human lives to. This, however, sounds like they really thought it through and made it safe.
A contrary opinion - a tool/framework used and tested by billions of people every day is a lot less prone to crashing when used for its intended purpose than something custom-built. There are tons of developers out there building complex apps by following beginner JavaScript tutorials, but SpaceX is obviously going to enforce better standards. HTML/CSS/JavaScript/V8 are all extremely solid technologies that have stood the test of time, and there is nothing better to build a user interface with today.
I've had cmarshall.[com|org|net] for a long time. I recently also fetched chrismarshallny.[com|org|net|dev], along with a bunch of various social media handles (see my HN ID). Right now, they point to my LLC site, but I'll probably set up a personal site sooner or later. I did notice that a squatter had grabbed the Pinterest name. I don't care.
I've decided that ChrisMarshallNY is my "Google Me" name. It's working fairly well.
Does this article imply that RTCA/DO-178B is used as a means of demonstrating compliance in some way, or otherwise is used to define lifecycle processes for their development/verification/systems teams? Anyone know where this was mentioned by SpaceX?
> The secondary ports go into the primary ports, which are heavy-duty actuators that connect to what’s called a “summing bar,” which is no more than a massive steel rod.
I think we’ve been conditioned to believe that any application of medium complexity requires hundreds or thousands of developers. Snarkiness aside, they have a very concrete set of functional constraints and total control over the hardware, network, and software environment. Issues like resource management, dependency conflicts, security, scale, etc are taken out of the equation, things that are often the biggest time and resource sinks.
NASA's software teams are also remarkably small, and manage to produce code with one of the lowest defect rates of all time. (Space shuttle code had 0 defects in 500k SLOC!)
I can see how it’d be a decent tool for plotting a bunch of raw telemetry streams, but as someone who had to write a moderately complicated program in LabVIEW once, I’m astonished they’ve scaled it up that far, and very glad it’s not me who had to do it. It’s a real PITA to work with.