The sad thing for me, and I see this way too often, is that someone, somewhere, no doubt said "Of course it's secure, you can only READ the CANbus, the software doesn't even HAVE a write capability!" and everyone in the room nodded and went on with the rest of the review.
Manufacturers and engineers have to get it through their heads that IF you can change the firmware, ANYONE can change the firmware. If the firmware is SECURITY CRITICAL then the only way to change it should be through physical presence, loading encrypted and signed firmware, with external validation (something like the car asking a third party to authenticate the operation, à la nuclear launch codes). You can still get screwed, but it will be hard enough to do that otherwise low-value targets will remain relatively safe.
1) By default the car is "locked", and uses a manufacturer-issued public key to verify the signature of updates
2) A car owner can "unlock" the car by creating a key pair and asking the manufacturer to add (or, debatably, replace) the new public key on the car. Right after that, the owner has the ability to modify the firmware, but everything is still secure. And it's up to the adult owners to manage their private keys and be careful with updates.
3) There should be a simple way to factory-reset the car (signed by either the owner or the manufacturer).
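A minimal sketch of how steps 1 and 2 might look, assuming Ed25519 signatures and Python's cryptography package purely for illustration (a real implementation would live in an ECU's boot ROM, and the firmware blobs here are made up):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_update(trusted_public_keys, firmware_image, signature):
    """Accept an update only if some registered public key signed it."""
    for key in trusted_public_keys:
        try:
            key.verify(signature, firmware_image)
            return True
        except InvalidSignature:
            continue
    return False

# The manufacturer signs at the factory; the car ships with only the
# public half registered (step 1).
manufacturer_key = Ed25519PrivateKey.generate()
firmware = b"...official firmware blob..."
registered = [manufacturer_key.public_key()]
assert verify_update(registered, firmware, manufacturer_key.sign(firmware))

# "Unlocking" (step 2) appends (or, debatably, replaces) the owner's key.
owner_key = Ed25519PrivateKey.generate()
registered.append(owner_key.public_key())
custom = b"...owner-built firmware..."
assert verify_update(registered, custom, owner_key.sign(custom))
```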
Exactly! Jumpers with EEPROMs. I miss those. I recommended them often on Schneier's blog as a budget solution for boards with firmware updates. There were also old mainframes that let users update the firmware by physically swapping it out, much like we swap out a drive or SD card today. As cheap as chips are today, there should be even simpler solutions available.
As usual, technology isn't the reason. It's marketing, profit per unit, and profit in maintenance phase.
Under this model, if you buy a used car, how can you be sure it doesn't have keys that are controlled by someone else? You can follow the "factory-reset" procedure, but how do you know the reset mechanism hasn't been tampered with?
They can design it so that the firmware is in two parts.
The first part does minimal initialization and then gives control to the second part, which is where most of the functionality is.
The first part is also the part responsible for implementing the firmware update protocol and the reset procedure. The first part would either not allow any update to replace itself, or it would only allow itself to be replaced with factory-signed firmware.
Alternatively (or in addition) they could have a diagnostic connector that lets external hardware read the firmware memory. You could then do the factory reset on your used car, and then hook something up to that diagnostic connector and have it compute a hash of the firmware that you can check to make sure it is the right firmware. An Arduino should be sufficient as the thing to read and hash the firmware.
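A sketch of that hash check, assuming the dump has already been read out over the diagnostic connector onto a computer; the file name and published digest below are invented for illustration:

```python
import hashlib

# Hypothetical known-good value from the manufacturer's website.
PUBLISHED_SHA256 = "9f2b...<64 hex chars>..."

def firmware_digest(path, chunk_size=4096):
    """SHA-256 of the dumped firmware image, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if firmware_digest("v850_dump.bin") == PUBLISHED_SHA256:
    print("firmware matches the factory image")
else:
    print("firmware has been modified; reflash before trusting the car")
```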
I think the best solution would be your first one, but have the second part be stored on removable media, like an SD card. Make it physically impossible to write changes to anything but the SD card (e.g. use actual ROMs for the first part, and use volatile RAM for temporary storage of the code loaded from the SD card).
Then, if you want to be sure that the firmware isn't hacked, simply remove the SD card, put it in a computer, and verify the contents. If paranoid, overwrite the SD card with the official firmware. If extremely paranoid, throw away the SD card, buy a new one, and write the official firmware to it.
They could still open source the first system's firmware code. FOSS doesn't imply easily modifiable, does it?
E.g., rms says he used a Lemote Yeeloong because of its free BIOS, but AFAIK he couldn't have reprogrammed it unless he removed the BIOS chip from the motherboard.
Similarly, when you buy a used car, how can you be sure the previous owner didn't copy the key (you know, the thing that lets you open the door and drive away) and still has it? That's not a new question.
Similarly, you move into a new apartment and the super gives you the key (to the door, not a cryptographic one). You installed a new lock, right?
In newer cars, keys are unique and registered to the car individually. My Audi has a list of all keys registered, and I can have the dealer unregister any I've lost.
Cursory searching reveals the procedure for manually adding blank keyfobs to Audis, but what you describe above seems... a fairly technically complex system to take on faith. (Given the article we're talking about)
Modern keys are complex things. Gone are the days when you could have a piece of metal re-cut for $5 at Home Depot. A replacement key for modern cars is an expensive proposition: they have to be custom-ordered from the factory (with proof of identity and insurance), then programmed by your dealer. Replacement keys for modern Audis/BMWs/Mercedes run $300-400 all-in. (Remember that scene from Gone in 60 Seconds? And that was 15 years ago. And also a movie.)
You can buy blanks, sure, but without programming, it's nothing more than a hunk of metal.
Is it somehow possible to "clone" a key? Never underestimate hackers, but it ain't easy. Per Brian-Puccio's use-case, unless the previous owner is presenting auto exploits at DefCon, you don't have to worry about someone having a copy of a key you don't know about. (And if they've lost it, you can go to the dealer and have it un-registered.)
Aside: Google around for articles on new car theft: you'll find that no thieves are copying keys or hot wiring or anything like that. In fact, the "new thing" is to get a signal amplifier that allows a car's keyless entry/start system to "find" a key that's sitting 100 feet away inside someone's house or office, then just drive away.
One correction: MMI doesn't list all the registered keys, but it does track them (i.e., you just see the total number registered). I know that dealers can un-register them, and I know they're unique.
Neat! Thanks for the additional details. The part I was curious about was the dealership being involved in the process. In other words, who is authoritative: does the car query the manufacturer for authorization, or does the dealership update the car?
I see at least two ways to implement that, but I certainly don't see how practical they are, if implemented at scale:
1) Every component has unmodifiable update logic that supports just that: factory reset for itself, and propagating it to the subcomponents.
This is still where we can say "the owner does not own the car", but in this case it could be treated as a part of hardware (that could not be modified either), rather than "user-space" firmware.
2) Allow easy reinstallation of the critical electronics components, which can be bought from the manufacturer. I don't quite like this "hard" way, but it's a possibility.
The firmware must be able to create and sign a document listing the fingerprints of the keys inside, so that the car's owner can check the result (the car's maker would publish its public keys).
If the hardware holding the firmware is tamper-resistant, you can be pretty sure that it's unmodified. If the attacker can modify tamper-resistant hardware, I don't think the car is your first concern.
I would assume in this perfect scenario that the factory reset lives in some ROM somewhere that nobody, not even the manufacturer can touch post-release.
This is a strange thing to worry about: the former owner of your used car planning to murder you? I'm perfectly fine with the answer that, as with most technology, it's nearly impossible to prove your car computer hasn't been backdoored.
This assumes the manufacturer's software has no vulnerabilities. Safer to have a hardware airgap that doesn't allow writes no matter what the software says.
There is probably a middle ground here. For example, lock it down like the parent comment suggested, but have a process to unlock it, requiring (for example) physical access and some vehicle specific file from the manufacturer requiring proof of ownership and a scary warning. (I believe some phone manufacturers had a similar process for unlocking bootloaders a while back.)
This would still allow enthusiasts and security researchers to "own" their vehicle and would keep malicious actors out (for the most part).
Put a key on a piece of hardware and provide it with the car. The firmware will not update without that key. Now the owner can update their firmware, or hand the key to their mechanic to do it for them. If they lose the key, they get to pay their mechanic to source and install a new controller that comes with a new key.
Or heck, just put a switch on the controller that must be manually closed for a firmware update to proceed. What matters most is preventing quiet firmware updates over the air.
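A sketch of how that switch gate might look in software, with a hypothetical read_update_switch() standing in for a GPIO read; none of this is a real automotive API:

```python
def read_update_switch() -> bool:
    # On real hardware: sample a debounced GPIO input wired to a
    # momentary switch on the controller. Stubbed out for illustration.
    return False

def flash(image: bytes) -> None:
    # Hypothetical low-level write to the flash part.
    raise NotImplementedError

def apply_firmware_update(image: bytes) -> None:
    if not read_update_switch():
        # The whole point: no physical consent, no quiet OTA flashing.
        raise PermissionError("update switch not closed; refusing to flash")
    flash(image)
```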
Even if the update is not quiet, how do you know that it's authentic? If hackers can trigger an "update available, please flip the update switch and restart your car" notification, you will not be able to tell whether it came from the manufacturer or the hacker.
In my first idea, the hardware "key" holds two cryptographic keys. One is a private key, whose public counterpart is held by the manufacturer. This authenticates the vehicle to the manufacturer. The other key is a public key, whose private counterpart is also at the manufacturer. This authenticates the manufacturer to the vehicle.
My second idea does not accomplish any new authentication (beyond what over the air updates already do). However it does prevent silent updates, which is an improvement over today.
Today, not only can a vehicle owner not tell if an update came from the manufacturer or hacker, they can't even tell that an update happened at all.
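For illustration, here is roughly what the two-key challenge/response from the first idea could look like, again assuming Ed25519 and Python's cryptography package; the key names and protocol framing are invented:

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

vehicle_key = Ed25519PrivateKey.generate()       # lives in the hardware "key"
manufacturer_key = Ed25519PrivateKey.generate()  # lives at the manufacturer

# Public halves are exchanged at manufacture time.
vehicle_pub = vehicle_key.public_key()
manufacturer_pub = manufacturer_key.public_key()

# Manufacturer authenticates the vehicle: "sign my random nonce".
challenge = os.urandom(32)
# verify() raises cryptography.exceptions.InvalidSignature on forgery.
vehicle_pub.verify(vehicle_key.sign(challenge), challenge)

# Vehicle authenticates the manufacturer the same way, in reverse.
challenge = os.urandom(32)
manufacturer_pub.verify(manufacturer_key.sign(challenge), challenge)
```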
We (consumers) had to deal with that over 10 years ago. My old Lexus still has some kind of proprietary diagnostic system that only dealerships can access, and we get stuck with dinky OBD2. Did manufacturers suddenly start getting more transparent and compatible with their vehicle diagnostic and control systems?
Also: virtually every consumer in the world would choose safety from some 'evil hacker force' rather than the ability to maintain their factory entertainment system themselves. There's really zero reason the nav/entertainment system should be on the same communications network as the engine and brakes, anyway.
Keep a suitable private key for signing firmware in the car, locked in a secure box that can only be unlocked with two copies of the car's ignition keys? I'd call that sufficient security, but it'd also clearly allow me to get to the keys if I want.
You could, for example, hook the radio up to a CAN bus that has a mini firewall/proxy in it, allowing the radio only to read from the bus while still permitting reads/writes to the radio itself.
Regardless of the firmware question with the V850 module, I'm surprised that other modules on the CAN bus would accept commands from an entity that 'should' only have read access, without validation.
I'm not up to speed on CAN bus, but if it's not possible to secure it in this manner, then it cannot be considered an appropriate communications bus for safety-critical applications. Even the industrial guys are getting their heads around this with encrypted & validated signalling over ethernet.
CAN isn't really designed for security. The nodes don't even have addresses. They communicate using message IDs without regard to who the sender or responder is. The error detection requires a party line so that any node can flag a corrupt message and the data rate is too slow to make tunneling a secure stream practical.
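To make that concrete, here is a toy model of a classic CAN 2.0A frame: an 11-bit message ID plus up to 8 data bytes, and nothing identifying the sender (the brake-message ID below is invented):

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    can_id: int   # 11-bit message ID: says what the data *is*, not who sent it
    data: bytes   # 0-8 payload bytes

    def __post_init__(self):
        assert self.can_id < 2 ** 11, "classic CAN IDs are 11 bits"
        assert len(self.data) <= 8, "classic CAN payloads max out at 8 bytes"

# Nothing stops a compromised head unit from emitting a frame that the
# brake controller normally emits: receivers match on ID alone.
spoofed = CanFrame(can_id=0x0F0, data=b"\x00" * 8)  # hypothetical brake ID
```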
Thanks for this description (and others below). In that case it's not reasonable to use this for safety critical applications, IMO. Any element on the bus becomes a potential point of entry, and there's no defence in depth once on the bus. Kinda scary from an architectural point of view.
Unfortunately you are casually proposing to trash untold billions, probably trillions of dollars worth of existing automotive engineering infrastructure with that comment. As any system is decomposed into components you can and must reach a level where inter-component communication is dumb, unquestioning, immediate. Obviously by the time you reach transistors (or pistons) that's how it works. CAN is not as fundamental to cars as pistons (or wheels if you prefer electric cars I suppose) but it's not that far removed. It was developed decades ago, at a time when this kind of hacking was strictly science fiction.
I don't think it's necessary to throw away everything (as near as makes no difference) to address this problem. Real air gaps, real read-only circuitry etc. (as described in other comments) could do the job.
I disagree; it's not necessary to throw everything away, but an improvement should, and must be made. Existing protocols can be extended to include security (and have, see Safety CIP for an example).
Any form of engineered design is unlikely to go backwards in terms of the functionality and flexibility that things like bus communications bring, but awareness of these new attack vectors needs to be included in the base design.
This is the path that industrial control systems are taking in light of attacks such as Stuxnet, I expect that the automotive guys (who are usually ahead of the game in terms of systems engineering) will follow suit.
Edit: The problem I have with people proposing air gaps and read-only circuitry, is that they just don't work in real applications. If there is a business case against it, such as the service these jeeps were offering, then air gaps and hardwired circuitry solutions will be overridden by that business case. Further than that, as can be seen with things like the Lenovo hardware fiasco, manufacturers cannot be trusted to abide by the rules (such as: air gaps are now mandatory), when there is a benefit for them to act otherwise. As Chrysler now has the ability to remote into their cars, they are very unlikely to remove this ability 'merely' due to safety concerns.
It's far safer on the whole to offer secure methods of achieving the existing or proposed functionality, than to try and walk backwards and make things harder and more costly to implement, for lesser functionality.
The nature of a bus network makes it very hard to verify that the sender is in fact who it claims to be. The only solution I can see that really works is if the true sender "breaks the bus" when it detects a malicious sender using its source address, as is done with ID collisions. CAN itself has no concept of a source, but many protocols implemented on top of CAN do. In practice this could be very complicated, though, because devices often deliberately pretend to be other devices for compatibility with legacy devices.
The bandwidth of CAN is quite limited, so including cryptographic signatures of proper strength in every message is not an option. Establishing encrypted connections also comes with a safety risk if they fail; current architectures are designed to tolerate glitches in the physical connections and recover instantly once the connection is back. Really, many ECUs are designed to fail in every possible way: you can even pull the power on an ECU and plug it back in while driving, and your car will keep going without you even noticing (don't try this at home). You don't want to waste a second on an encryption handshake for a 100 Hz signal just because you lost a sequence number somewhere. And knowing the complexity and lack of quality in encryption libraries (OpenSSL, anyone?), adding more complexity would just introduce even more risk.
> And knowing the complexity and lack of quality in encryption libraries (OpenSSL, anyone?), adding more complexity would just introduce even more risk.
What you're saying here is fundamentally that because OpenSSL has bugs, everyone should be using HTTP instead. That opinion puts you in the minority of the technical world, to say the least.
> The only solution I can see that really works is if the true sender "breaks the bus" when it detects a malicious sender using its source address, as is done with ID collisions
This is a solution that requires perfect implementation from every component. As long as this is actually a specified behavior, written down somewhere that you have to implement to mark your part as "CAN2-compliant", that would be OK, but I doubt that's going to happen.
In general, though, I oppose requiring that every node on a network operate perfectly in order to maintain security.
> The bandwidth of CAN is quite limited, so including cryptographic signatures of proper strength in every message is not an option.
So don't include a signature in every message. Nobody does that in the computer world; you authenticate a session. There's still no reason for the head unit to be opening up a session with the brakes, and the brakes should reject such a connection attempt.
I realize that this isn't possible with the architecture of the CAN bus, but the CAN bus is just not suitable for nodes which are connected to the internet and broadcast wifi hotspots. The failure case of a CAN bus attack is just not appropriate - once you're on, that's it, you can do anything. There's going to have to be some changes, and the sooner we accept that fact the sooner we can start working on a solution. If we need a network with a cleaner electrical source and more bandwidth, that can no longer be driven with 8-bit micros on an unfiltered alternator, well, that's the price of having nice things. We'll have to buy the $1 part instead of the $.30 part to get the power windows onto the network. When you're connected to the internet, blind trust is not an option.
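As a toy illustration of session-level policy (not anything CAN supports today), each ECU could hold an allow-list of peers permitted to open command sessions with it; all node names here are invented:

```python
# target -> set of (already authenticated) peers allowed to command it
SESSION_POLICY = {
    "brake_controller": {"abs_module", "stability_control"},
    "engine_controller": {"transmission", "stability_control"},
    "head_unit": set(),  # nothing takes commands from the radio
}

def may_open_session(target: str, authenticated_peer: str) -> bool:
    """True only if the authenticated peer is on the target's allow-list."""
    return authenticated_peer in SESSION_POLICY.get(target, set())

assert may_open_session("brake_controller", "abs_module")
assert not may_open_session("brake_controller", "head_unit")  # rejected
```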
Within a year or two we'd see micros with onboard crypto accelerators anyway, and we'd be back to the $0.30 parts.
Thanks for that comment. I have some more reading to do on CAN bus, and admittedly I come from an industry where the default is generally to turn things off in unsafe or uncertain conditions, which simplifies the protocol significantly. That doesn't help when your car is doing 100 km/h on a freeway, though...
We call this "dumb, unquestioning" layer the "link layer" (layer 2) in the 7-layer model.
The thing is, nobody lets their computer act based on arbitrary packets coming in from the internet. We at LEAST make them take place in the context of a TCP session (layer 5). More likely you'd require node authentication (a function of the application layer, layer 7).
I have real doubts that manufacturers can/will implement airgaps and read-only circuitry properly. Jeep tried, and now they're doing a recall. Auto companies are only starting to realize that they are actually in the business of computer security now, and it will be years before there's been enough of these recalls for them to come to terms with the magnitude of this task. The fact that they are still operating on the walled-garden concept - inside of which all messages are taken on face value - is ample evidence of this.
Real airgaps imply a severe degradation of functionality from what both consumers and auto companies have come to expect. The days of your entertainment unit being able to control anything in your car will be gone, as will things like over-the-air firmware updates, remote door unlocks, etc. This is just not going to happen in a feature-driven world. And all it takes is for someone to leave one firmware-flash bit unset and the entire thing is worthless. Expecting that airgaps and read-only circuitry will be implemented, and implemented perfectly, every time, is wildly unrealistic, and with that security model the failure case is brutal. Once they're into the garden, they're in.
Burying your head in the sand or bawling about the cost of the fix is pointless. Cars are now connected to the internet and broadcast wifi networks; they can no longer blindly trust messages on their internal networks. That's the thing auto companies have to fundamentally come to grips with, regarding them now being in the business of computer security. If you want cars to be on the internet, signing and validation for messages, firmware updates, etc. is not optional, regardless of what you would prefer. We need UEFI for car hardware, and we need messages authenticated at the session level. The brakes need to be able to realize that there's no valid reason for the head unit to be sending them commands.
I think CAN maps better to layer 1. You can and do blindly trust messaging at layer 1. In any piece of engineering there exists an abstraction level below which there is no security. Software people are prone to forget this because everything they do is virtual. But it's not turtles all the way down, there is no such thing as a secure transistor.
Whoa there, no, it's totally fine. It has great error and collision detection, is very tolerant of noise, has priority for realtime baked in, and all the controllers implement rudimentary but effective protections against a failed controller. The entire notion of one controller sending a message that a few others are interested in, and of a controller being interested in the same message that a number of different controllers may send, is a central tenet of its operation.
The mistake is making it accessible over a cellular network. This was not careful air gapping. The OEMs will learn a lot from this. They had protections but they were not strong enough. There will be protections that are harder in the future thanks to this.
CAN is really quite good and in fact is used at times in industrial automation as well as automotive where it shines.
CAN bus isn't designed for security, it's designed to transmit bits in electrically noisy environments (such as a car) between embedded devices. It sends frames of up to 8 bytes, which is not even large enough to fit a TCP packet header.
> If the firmware is SECURITY CRITICAL then the only way to change it should be through physical presence, loading encrypted and signed firmware
Physical access? So, hypothetically another car gets hacked, but this time there is no middleman in position to implement a mitigation like Sprint was in this case. How do you suggest a firmware rollout happens? Recalls? Mailing thumbdrives to end users?
Over the last week I've seen the infosec community warm to the idea of OTA updates, and all the baggage that entails, compared to the alternative ways to update car firmware. You're posting pretty authoritatively, though; if you've got some analysis that the rest of us don't, I'd love to hear it.
And if it was a buffer overflow that led to arbitrary RCE instead of a firmware overwrite, and the fix was to upgrade the code to fix the buffer overflow?
I have for some time felt that robot cars absolutely need a physical disconnect lever. Nice and big, break glass, and you're driving off the wire.
But, this hack reminds me that there's no such thing in a modern car; the whole thing is networked.
So, yeah, physical updates of firmware. Although, in the article, it's a subsystem's firmware, which physically protecting that is just not in the minds of engineers right now: think of all the ports you'd need.
Even if the physical kill switch left me with just (unpowered) steering and brakes, I would have a vastly different perspective on the safety inherent.
In lots of cars the key doesn't do anything when the vehicle is moving. In some cars you can't even turn it mechanically unless the gear is in P or R; in others you can't turn it at all because it's just a button. There are very good reasons for this: the recent GM issue, where vibration made keys fall out and killed the power steering, comes to mind. Some people actually died because of it.
> "Of course its secure, you can only READ the CANbus, the software doesn't even HAVE a write capability!"
That can mean one of two things: either the software doesn't implement writing but the hardware is capable of it, or the hardware isn't capable at all. Sounds like the former in this case. It should have been the latter.
That is an answer, except consider the situation where a flaw is detected in the manufacturer's code. The cost to fix is to replace the entire chip. Which is fine, from a process standpoint, but these cars are going to be competing with other cars in the market and will have a higher "service cost" and that higher cost will dissuade people from buying (because historically less than 5% of the market is willing to pay a higher price for something that is "more secure").
So while I agree with you, unless it were federally mandated for all cars, the ones that didn't do it would have lower cost of ownership and get more buyers.
The buyer pays about $600 extra for the "Internet-enabled" version of the car. It's an option; you don't have to buy it, and, after this, probably shouldn't. If you didn't buy it, software updates require a dealer visit. I have a 2007 Jeep Wrangler with no external interfaces, and it's been in three times for software updates for basic automotive functions, such as stability protection and engine restart.
Cars go back for recalls all the time. Having to physically replace a part isn't a big deal. Dealers have a supply chain in place for obtaining parts from the manufacturer.
> It's an option; you don't have to buy it, and, after this, probably shouldn't.
Unfortunately, at least for some cars, all these options are bundled together in packages. Unless you buy the base model, you'll get the Internet as part of a package of other features that you want.
I think that nowadays the better answer would be to go in and physically rip out the cellular antenna. Easier said than done.
I don't think they implied that the chip should be read only, you should still be able to have a flashable firmware of some sort.
But instead of giving this chip, whose read-only status is currently enforced only in software, direct access to the bus, route it through a bit of extra on-board circuitry that physically prevents it from writing data to the bus. This would enforce the read-only client in hardware, not firmware.
Sadly CAN bus doesn't have that option, although I suppose you could put a MITM sort of device which rejected any write packets. Could be an interesting gizmo to market to car makers.
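A sketch of what that gizmo's firmware might boil down to; the bus_port/radio_port objects and their receive()/send() methods are hypothetical stand-ins for real CAN transceiver interfaces:

```python
def gateway_loop(bus_port, radio_port):
    """One-way gateway: bus traffic flows to the radio, nothing flows back."""
    while True:
        frame = bus_port.receive(timeout=0.01)   # hypothetical timed read
        if frame is not None:
            radio_port.send(frame)               # bus -> radio: allowed
        junk = radio_port.receive(timeout=0.01)
        if junk is not None:
            # radio -> bus: dropped on the floor, enforcing read-only
            # access in the gateway rather than in the radio's firmware.
            pass
```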
I don't think that is easily done with CAN communication. You have a clock line and a data line. You don't have to drive the data line if you want to talk on it; you only have to pull it down. So you can't simply leave a wire disconnected (as in an Rx/Tx configuration); you have to make the silicon chip unable to Tx. This requires new silicon chips, and even then I don't know how well they could pull it off. So this solution is far from trivial.
There is no clock, that is one of the genius aspects of CAN. There were earlier ideas about multiplexing before CAN and they needed a clock, and that is why they never worked right.
I had thought the firmware already required physical presence to alter? As per the recall notice, Jeep is mailing everybody thumbdrives with updated firmware. That suggests that the hack doesn't require changing the firmware, but merely exploits what's already there.
They were able to remotely flash the firmware while the vehicle is being driven down the highway and there's not really anything the passengers can do while it's happening. They are able to control other components like the audio system without needing to flash anything.
Charlie and Chris modified the firmware of the V850, which is connected to the CAN bus, in order to control devices on the CAN bus. They flashed the V850 firmware from the main infotainment system, which runs Linux on an ARM board. Chrysler is most likely patching this main firmware. Linux on this ARM board was the entry point for the attack: it is connected to WiFi, 3G, and other stuff, and it had D-Bus listening on port 6667 on all interfaces.
You're sorely mistaken if you think Jeep is the only car manufacturer with this setup. I'd bet that most cars produced within the last few years don't have airgapped CANBus systems. By definition, any car with 'drive' settings you can adjust by the headunit console (suspension, sport modes, charging status, etc) is not airgapped. I mean, in a Tesla you have the option to change suspension settings based on location from a Google Maps -based GPS system. It also can drive itself from your garage to your front door based on meetings scheduled in your calendar.
We literally have the same information for any auto manufacturer that we have from Jeep - empty assurances that 'we design our cars with the utmost cyber-security protections, trust us'.
Car nut here: Your assumption is correct. The system is not air gapped. Even racing control systems suffer from this (except some really expensive ones, although they use commonly sourced sensors with various form of shielding).
Absolutely. For example, I'd imagine that teams participating in major events/series lock down their systems to keep other teams from sabotaging them. The competition is fierce and everybody is looking for an advantage. I've heard of teams stealing or hiding tools/parts during endurance races I've attended. Messing with another team's engine control setup requires nothing but a computer: a simple change to the timing or fuel mixture could destroy an engine and take out a competitor, without putting the driver at much risk.
I'm in the industry. The setup described in the article (multimedia CPU plus connected vehicle controller (V850) which supports the CAN connection) is pretty much the standard headunit architecture. What is different from OEM to OEM is to which kind of CAN bus the system is attached and therefore which signals you can change.
For other OEMs it e.g. might not be possible to directly send brake signals from there, as a gateway connects and filters the different CAN networks.
But of course, once you figure out how to remotely update individual ECUs, you might also be able to modify the gateways and do all kinds of bad stuff.
> ...in a Tesla you have the option to change suspension settings based on location from a Google Maps -based GPS system.
Given Musk's technical background, one would expect Tesla to understand that they're building a network-connected computer-controlled two-ton missile and to have both the desire and know how to properly secure such a thing.
But yeah, AFAIK, all we have from any manufacturer is assurances of security. :(
Tesla takes their security pretty seriously. They were at DEF CON, both in the contest area and in the front row of the talk where someone dumped their firmware. The presenter was overall pleased with their design. Expect to see something on 60 Minutes in the coming months.
If they allow over the air updates, even encrypted, there's a big potential risk. Exactly how are the crypto keys generated and protected? An attack on the download signing server, rather than the car, offers an entry point.
Nitpick on the Tesla: the "drive itself from your garage to your front door" thing is a hypothetical future scenario with no proposed implementation timeframe, and not something you can do right now.
But it's just a nitpick, because the basic controls are certainly all there. The feature doesn't exist yet just because it's hard to do it safely and legally.
As that tweet pretty clearly says, what they're rolling out soon is highway autosteer (a.k.a. automatic lane keeping) and automatic parallel parking. "Summon" mode (having the car drive to you by itself) is not coming any time soon.
> This is the thing that all the manufacturers always refer back to when it comes to IT-security of cyber-physical systems: there is an isolation they say, the air gap between connected and physical parts of these systems.
> [the] multimedia system’s controller itself can’t communicate directly with CAN bus, it actually can communicate with another component which is connected to CAN bus, the V850 controller
That... that's not what an air gap is. Usually I'm forgiving about security that's fudged, since it's hard and marketing and higher-ups rarely understand. That's the entire point of an air gap: there isn't anything to understand, it's a physical disconnection. It's either plugged in or it's not.
When asked "is there an air gap", if the answer is no and you answer yes then you're lying in the most blatant and bare-faced way I can imagine. It's like saying "that car is four wheel drive" when it's only two wheel drive, or saying "that car has an 18 gallon tank" when it has a 7 gallon tank.
> While some of the research could proceed without the diagnostic equipment, many active tests and ECU unlocking require an analysis of the mechanic's tools. After both authors of this paper sold plasma for several weeks, we were finally able to afford the system required to do diagnostics on the Jeep Cherokee (and all other Fiat-Chrysler vehicles)
It's a joke. Miller is loaded, and Alibaba is full of cheap clones of diagnostic tools (and 'special' dongles that do magical things like cloning BMW keys, or bypassing Mazda 6 security in 4 seconds).
Yep, lots of amazing bitsy things for sale from Asia. I liked the very helpful strings output; it's so nice when programmers are thoughtful like that and give helpful -h output.
This was an excellent read, and had the technical meat I was looking for. Too much actually, my eyes started glazing over during the CAN protocol checksum part.
It was a hair-raising experience learning that up until recently thousands of cars' head units could be controlled remotely at any given time by telnetting to port 6667 (IRC, hah!) and sending unauthenticated D-Bus commands over the Sprint 3G IP network.
A simple fix would be to allow me, the owner, to turn off the wifi device and the connection to Sprint. I am not using those features and don't wish to have them.
Most people would not like to feed their location data to "the cloud", or open their car to anyone with an exploit in their pocket.
Traffic information can be distributed to cars via one-way mechanisms without sacrificing privacy -- using mechanisms such as FM RDS and Satellite Radio.
Safety Defect/Non Compliance Description and Safety Risk:
SOME 2013-2015 MY VEHICLES EQUIPPED WITH SPECIFIC RADIOS HAVE CERTAIN SOFTWARE SECURITY VULNERABILITIES WHICH COULD ALLOW UNAUTHORIZED THIRD-PARTY ACCESS TO SOME NETWORKED VEHICLE CONTROL SYSTEMS. A SUCCESSFUL EXPLOIT OF THIS SECURITY VULNERABILITY COULD RESULT IN UNAUTHORIZED REMOTE MODIFICATION AND CONTROL OF VEHICLE SYSTEMS. FCA US HAS NOT MADE A DETERMINATION THAT THIS SECURITY VULNERABILITY CONSTITUTES A DEFECT. ALTHOUGH FCA US HAS NOT DETERMINED THAT A DEFECT EXISTS, IT HAS DECIDED TO CONDUCT A REMEDIAL CAMPAIGN AS A SAFETY RECALL IN THE INTEREST OF PROTECTING ITS CUSTOMERS. EXPLOITATION OF THE SOFTWARE SECURITY VULNERABILITIES COULD LEAD TO EXPOSING THE DRIVER, THE VEHICLE OCCUPANTS OR ANY OTHER INDIVIDUAL OR VEHICLE WITH PROXIMITY TO THE AFFECTED VEHICLE TO A POTENTIAL RISK OF INJURY.
Repair Description:
CUSTOMERS AFFECTED BY THE RECALL WILL RECEIVE A USB DRIVE WHICH THEY MAY USE TO UPGRADE VEHICLE SOFTWARE, PROVIDING ADDITIONAL SECURITY FEATURES INDEPENDENT OF THE NETWORK-LEVEL MEASURES. ALTERNATELY, CUSTOMERS MAY VISIT HTTP://WWW.DRIVEUCONNECT.COM/SOFTWARE-UPDATE/ TO INPUT THEIR VEHICLE IDENTIFICATION NUMBERS (VINS) AND DETERMINE IF THEIR VEHICLES ARE INCLUDED IN THE RECALL. IF SO, THEY MAY DOWNLOAD THE SOFTWARE THEMSELVES, OR VISIT THEIR DEALERS, WHERE TECHNICIANS WILL PERFORM THE INSTALLATION. THERE IS NO CHARGE FOR THE SOFTWARE OR, IN THE CASE OF DEALER VISIT, INSTALLATION.
I'm getting a 404 on the update page [nevermind it's case sensitive and needs to be downcased]. Does anyone know if this update actually fixes the issue? From reading the exploit details it seems more like a systems design issue that can't easily be patched in software.
Chrysler worked with the guys that discovered the vulnerability and no doubt simply patched the current known bug allowing exploit from the cellular uplink. The system could be vulnerable again if a new exploit is discovered. I updated my Jeep a week or so ago and haven't had any issues -- yet.
Edit: After reading the paper, it looks like the update from Chrysler blocks inbound TCP/IP now, and Sprint is also filtering traffic more aggressively.
This whole story reminds me of the Battlestar Galactica reboot a couple of years ago. The premise was that the shiny, new spaceships were all susceptible to hacking and virus injection while the old, "vintage" ones were unhackable.
Those articles say that the idea of a remotely exploitable vulnerability in a car is ridiculous (therefore so is the murder idea), but is that not exactly what happened with Jeep here?
Yeah that was my point. There are always "experts" to be found who can't imagine a particular exploit working. Reality is not limited to some dude's imagination, so we should have read the initial articles for what they were: "nothing to see here, move along." If we knew then what we know now, such articles would have piqued our curiosity. Therefore, the fact that they were published does pique my curiosity.
It reminds me a little of something we say, more or less: "this device does not read its data sheet". It comes up a lot when a physical device does not behave the way you might expect from reading its accompanying documentation.
I'm not so worried about hackers gaining access to change radio stations, but writing to the CANBus is concerning.
It sounds like the weak point in this was the ability to rewrite the firmware of the V850 controller. If that could somehow be signed, I'd feel safer.
Even changing radio stations could be very dangerous, especially if access is also granted to audio controls (which it is). Attackers could easily cause accidents by creating distractions like rapidly changing, loud radio stations.
Don't forget an FM radio signal itself is trivial to hack. I did this when I was a kid so that's not a new problem at all, and probably not a danger. If the driver is going to crash because of a sound on the radio, they probably aren't safe to be driving in the first place.
All you can do with an FM signal is override the audio. The volume is still limited to whatever the driver set, and they can always turn it off.
Turning the volume to maximum and preventing the driver from lowering it or turning off the sound would be much worse.
As for "probably aren't safe to be driving in the first place," that may be so, but I would estimate that about 80% of people on the road aren't really safe, and just muddle through by luck and generous margins. Causing unsafe drivers to crash when they otherwise wouldn't have is still bad.
In the security paper, they highlight that the entertainment system displays radio station images that are broadcast over the air (not sure if this was FM or satellite). So it's conceivable that an FM transmitter could broadcast a corrupt JPEG, causing a buffer overrun in some crappy image decoding software, and pwn your car...
I'm sure many engineers overlook the "guy who knows a guy" situation. The whole car is one giant system that works in unison; practically everything in that car IS connected. Which obviously means everything is accessible one way or another. Isolation is not really isolation when it more than likely depends on something else.
Employing this trick you can find all of Chrysler’s cars equipped with this kind of head unit. Over a million of them were actually recalled by Fiat Chrysler. After that all you need is to choose the right one. Funny thing is that it’s rather hard to do, “it’s much easier to hack all the Jeeps than the certain one,” as the researchers say.
However, picking the wanted Jeep is doable as well, thanks to the option of the GPS tracker.
Better double check that you're not crashing another Jeep which is in the wrong place at the wrong time...
If you're doing it for nefarious purposes, all you need to know is that the target vehicle is on the road. You could have every active vehicle crash at the same time. While doing so blows the hack for good, it would make the motive incredibly difficult to determine.
A lot of automotive manufacturers are migrating to FlexRay (https://en.wikipedia.org/wiki/FlexRay) and Ethernet, so hopefully they will use the extra bandwidth to add a bit more message authentication to their protocols.
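As a rough sketch of what per-message authentication could look like once bandwidth allows it: a truncated HMAC over a monotonic counter (for replay protection) plus the payload. The key handling and sizes here are invented for illustration:

```python
import hashlib
import hmac
import struct

KEY = b"per-link key provisioned at the factory (hypothetical)"

def authenticate(payload: bytes, counter: int, tag_len: int = 8) -> bytes:
    """Frame = 8-byte counter || payload || truncated HMAC-SHA256 tag."""
    msg = struct.pack(">Q", counter) + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:tag_len]
    return msg + tag

def verify(frame: bytes, tag_len: int = 8) -> bytes:
    """Return the payload if the tag checks out; raise otherwise."""
    msg, tag = frame[:-tag_len], frame[-tag_len:]
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:tag_len]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("bad MAC: frame dropped")
    return msg[8:]  # strip the counter, return the payload

frame = authenticate(b"\x12\x34", counter=1001)
assert verify(frame) == b"\x12\x34"
```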