Urgent/11 – Zero Day Vulnerabilities Impacting VxWorks (armis.com)
125 points by phantom_oracle on Aug 13, 2019 | 71 comments



Wow, I was going to comment "IoT, the 'S' stands for 'security'", but this is about VxWorks, a battle-proven (literally) RTOS.

This illustrates the point that even now, in 2019, there is effectively no OS designed for security. I mean, security was never a real goal. Even software specifically written to address security requirements can have gaping holes (see Heartbleed)...


I tilted at the windmill of making and selling a secure RTOS for 10 years. It is a fool's errand, and everybody that likes money should stay away.

What I painfully learned is that hardly anybody, save sometimes the US government, is willing to compromise on features or pay one cent more for a secure RTOS.

Look at it another way. The market for cybersecurity is supposed to hit $300 billion by 2024, according to a recent report. If we only released secure software, that $300 billion market wouldn't exist. In that sense, it is vastly more economically viable to release insecure software than to spend any effort on securing it.


> sometimes the US government

Yeah, but that's a huge opportunity, because they have tons of money and constantly overpay massively for things.

> If we only released secure software, that $300 billion market wouldn't exist

This is the broken window fallacy [1]. In fact, what you are saying is that there is $300 billion of value to be created by making secure software.

1: https://www.investopedia.com/ask/answers/08/broken-window-fa...


The broken window fallacy doesn't apply to software security for a couple of reasons. The first is that it isn't apparent that software is insecure until well after the sale, and depending on use (e.g. deployed on a secured network), it might actually be fit for purpose. In other words, the measurement of window brokenness changes over time and with the situation.

The other important difference is that the software vendor, unlike the glazier, doesn't directly benefit from folks breaking its software.

A better analogy is the introduction of the iPhone creating the iPhone accessories market. For lots of reasons, Apple decided not to make the iPhone crush proof, allowing Otter and others to sell protective cases for it.


To say that it's "vastly more economically viable" to spend $300 billion fixing insecure software rather than simply making it secure in the first place is fallacious.

If you made the software secure in the first place, you would have secure software and $300 billion to spend on something else (guns, butter, or what have you).

That seems like the very definition of the broken window fallacy to me; but hey, if not, it's still a fallacy.

Of course we haven't factored in the extra cost of making the software secure in the first place. If that costs vastly more than $300 billion, it is vastly more economically viable to just make broken software, but I don't think that was the intention of the statement.


Sadly that actually sounds like it could be a really interesting industry to work in tbh.


I have no idea of the details, but this:

> The URGENT/11 vulnerabilities affect the noted VxWorks versions since version 6.5, but not the versions of the product designed for safety certification – VxWorks 653 and VxWorks Cert Edition, which are used by selected critical infrastructure industries such as transportation.

Seems to me to mean that a "certified" "safe" version exists, but that a lot of companies used the "normal" edition (most probably to save money), and, indirectly, that the differences between the "normal" and "certified" editions were known, at least to the developers/company actually making VxWorks.

It would be "queer" that the "certified" editions have "different" mechanisms implemented (for completely different reasons) and only coincidentally they are more secure.


A lot of "Safety-critical" certified versions of operating systems just don't include things like TCP/IP stacks or userspace applications. That's probably what WindRiver is referring to here. Otherwise they might actually have to do rigorous design verification and testing on their network stack which would cost a great deal.

For example you can get a "medical grade" QNX but the certificate only covers the kernel, so you have to write and verify the entire userspace yourself.


Sure, but then I would have expected a "reason" provided by Armis Labs, i.e. something like:

>... which are used by selected critical infrastructure industries such as transportation ...

... as they do not contain the vulnerable TCP/IP stack.


No, that still fits with my experience. In the safety-critical realm, anything that casts doubt on your claims of robustness, reliability, or safety, such as a TCP/IP stack vulnerability, opens you up to a lawsuit.

What will happen is you'll purchase a certificate for the RTOS kernel plus a few critical components. Then you can choose to use any other off-the-shelf components that the vendor or third parties provide. Those parts don't have to be safety-critical, but if a defect is found in uncertified software it's not VxWorks's problem.

VxWorks is very clearly and concisely stating that the safety-critical certified components are not affected. But they're not going to make statements about the systems their safety-critical clients built. That's not their responsibility. And Armis is almost certainly reprinting a statement from VxWorks. Both Armis and VxWorks are leaving it up to each VxWorks customer to determine whether their particular configuration of Safety-Critical VxWorks uses a vulnerable stack as an add-on.


Well, it is possible that Armis Labs is reprinting a statement from VxWorks, but the statement is on Armis's page and sounds like they wrote it themselves. Since it is an announcement of vulnerabilities discovered by Armis, there are, as I see it, two possibilities:

1) Armis tested those certified environments and couldn't replicate the bugs

2) Armis did not test those certified environments and reprinted the VxWorks statement

If #2, they should instead have written "we could not test the vulnerabilities on the 'certified' versions, but we believe VxWorks' assurance that they are not vulnerable (because ... )".


Or maybe it is deployed less, so targeting it is not as attractive...


I don't understand how that is meaningful.

If an entire "family" of versions of the OS is vulnerable to these 11 (eleven) bugs while the "certified" versions are vulnerable to none of them (none of these specific 11, that is; they may still be vulnerable to another 11, or maybe 12, other ones), it means that the certified versions are different.

Small volume, and thus a less attractive target, might explain why no one has found the hypothetical "other 12" vulnerabilities in the "certified" versions (yet): no time has been spent looking for them.


And, as usual, some of them are caused by memory corruption:

> Stack overflow in the parsing of IPv4 options (CVE-2019-12256)

> Four memory corruption vulnerabilities stemming from erroneous handling of TCP’s Urgent Pointer field (CVE-2019-12255, CVE-2019-12260, CVE-2019-12261, CVE-2019-12263)

> Heap overflow in DHCP Offer/ACK parsing in ipdhcpc (CVE-2019-12257)

> DoS via NULL dereference in IGMP parsing (CVE-2019-12259)

While a safer language wouldn't make the remaining logic bugs disappear, there would be seven fewer vulnerabilities.
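For illustration, here is a minimal C sketch of the bug class behind the IPv4-options CVE. It is a hypothetical parser, not VxWorks code: an attacker-controlled option length gets copied into a fixed-size stack buffer, and a simple bounds check is all that separates the buggy and checked versions.

    #include <stdint.h>
    #include <string.h>

    #define MAX_OPTS 40  /* the IPv4 options area is at most 40 bytes */

    /* Hypothetical, simplified option walker with the classic flaw:
       the per-option length byte comes straight from the packet. */
    static void parse_options_buggy(const uint8_t *opts, size_t opts_len) {
        uint8_t scratch[MAX_OPTS];
        size_t i = 0;
        while (i + 1 < opts_len) {
            uint8_t len = opts[i + 1];       /* attacker-controlled */
            memcpy(scratch, &opts[i], len);  /* BUG: len is never checked against
                                                sizeof scratch or the bytes left,
                                                so a crafted packet overflows the
                                                stack buffer */
            i += (len >= 2) ? len : 1;
        }
    }

    /* The same walker with the missing bounds checks added. */
    static void parse_options_checked(const uint8_t *opts, size_t opts_len) {
        uint8_t scratch[MAX_OPTS];
        size_t i = 0;
        while (i + 1 < opts_len) {
            uint8_t len = opts[i + 1];
            if (len < 2 || len > opts_len - i || len > sizeof scratch)
                return;                      /* drop malformed options */
            memcpy(scratch, &opts[i], len);
            i += len;
        }
    }

    int main(void) {
        /* A well-formed two-byte option followed by end-of-list padding. */
        const uint8_t sample[] = { 0x07, 0x02, 0x00, 0x00 };
        parse_options_checked(sample, sizeof sample);
        parse_options_buggy(sample, sizeof sample);  /* harmless for benign input */
        return 0;
    }

A memory-safe language turns the same mistake into a bounds-check error instead of silent stack corruption, which is the point about seven of the eleven CVEs.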


Battle-proven in what context? I think in terms of being real-time, but not security. OpenBSD is designed for security.


as in military control software.


I wonder if there is a company that's really interested in developing a secure (by design) operating system. Apart from you-know-who?


There have been efforts to do this. Back during the first crypto wars I had some code in Java that would give it capabilities similar to the way the language Joule did it (basically, the class loader would elide any methods from the loaded class based on capabilities, so they weren't even available). I got a couple of patents out of it, but sadly the politics inside Sun kept it from going anywhere useful.

Highly constrained OSes and languages, constrained precisely to minimize attack surface, tend to be challenging to work in. As a result they reduce productivity, which increases time to market, and the folks who got something out the door, even something insecure, would "win" the market. It was a sad thing to see happen.


I found that working on Green Hills' Integrity was quite a bit easier than working in userspace on a typical UNIX. The primary reason is that the API for Integrity was designed in the late 90s, whereas the POSIX API was inherited from the first thing folks thought of in the late 70s.

The other aspect was that IPC via message passing is a very natural way to program.


Take a look at seL4 [1].

That it has never taken off is more evidence that there's no money in securing software, just cleaning up the mess insecure software leaves behind.

1. https://sel4.systems/


Is it true that seL4 has never "taken off"? And might it be too early to tell?

I am under the impression that the people behind seL4 managed to successfully commercialize other, earlier versions of L4 before seL4 was created.

Anyway, even if we grant the premise that seL4 has not taken off, that does not seem to justify saying that there is no money in securing software.


seL4 just celebrated its 10th anniversary. seL4 isn't widespread in COTS systems but rather in high assurance government systems as explained in this blog post: https://microkerneldude.wordpress.com/2019/08/06/10-years-se...


seL4 is a small microkernel, not a complete operating system. It is very, very cool, and deserves more adoption, but a customer would need a load of stuff on top of it for it to be a viable option.


I don't know who.


GHS Integrity?



Wow, that's a blast from the past. That's well over 12-year-old (maybe 14?) software, third-party to Integrity.


They have a cool T-rex skull in their office.

https://twitter.com/phil_torres/status/700115845540765697


OpenBSD?


https://www.cvedetails.com/vulnerability-list/vendor_id-97/p...

I might hazard to say that (in my opinion) no OS written in a memory-unsafe language is secure by design.


Tock might fit the bill then (Rust): https://www.tockos.org/documentation/design


THALES has put Linux into battle mode.

EDIT: re: SYSGO GmbH


"Security" is a stupid goal to have: if your specifications (and their implementation) is correct then the software will be secure.

Correctness is a goal of many operating systems.


You can obviously produce a correct implementation of a wrong specification, which then doesn't provide security.

Common Criteria splits security into security functions and assurance (the effort spent on verifying the implementation).


This is a very surprising point of view. I hope you don't work on avionics, or nuclear power plant control software, or something else that has the potential to inflict a lot of harm.

You may not always foresee the requirements for 'correctness'.


Seems WindRiver is also adding[1] VxWorks support to the Rust language. There should be more effort put into bringing safer and more secure languages, toolchains, and even OSes themselves into the IoT, IIoT, and RTOS worlds.

[1] https://github.com/rust-lang/rust/pull/61946


Savvy safety-oriented developers already have a few options; it is not only about Rust, although it is nice to see it making progress there.

https://www.mikroe.com/, Pascal and Basic

https://www.astrobe.com/default.htm, Oberon

https://www.aicas.com/cms/, RealTime Java

https://www.ptc.com, RealTime Java and Ada

https://www.ghs.com, Ada and INTEGRITY RTOS


The scale of this is baffling! And from what I've seen on the industrial side of things, I doubt that everything will be patched anytime soon, sadly.


And there are many medical devices running on VxWorks. Changes to medical devices can take months to years due to quality testing and recertification. (Software changes aren't as bad as a physical change, but they still take a while.)


If we gave them each an IP address, they'd use around half of the IPv4 address space (Armis counts over 2 billion VxWorks devices, against roughly 4.3 billion IPv4 addresses).


Especially when many of the devices have to be returned to the manufacturer for the update... including a whole bunch of RTUs that run electrical transmission/distribution systems.


On the industrial side, all these devices are in completely segregated, air-gapped networks. Obviously someone could wreak havoc via USB, etc., but it's not as bad as it could be.


Why do you believe this? Connections between industrial control networks and corporate internet-facing business networks are ubiquitous [0]. They happen because somebody needed a link for convenience and forgot to tell management, or somebody put a wifi router on the IC network just to get their job done. This stuff happens because people act like people, policy be damned.

So yeah, this is really, really bad.

[0] This is a well-established infosec fact. It's not controversial. The latest case I know of was at JPL a few weeks ago.

https://duckduckgo.com/?q=jpl+infosec


Agreed. Some orgs are better than others at practicing good security hygiene.

The better ones have awareness of their network and have systems monitoring their networks, etc. There are, after all, equivalents of Qualys in the IA world.


I'd expect SonicWall firewalls and Xerox printers, at least, to have network connectivity.


A buckshot approach of mailing malicious USB devices would likely be devastating.


Depends on the org. For some companies, you could drop a few USB devices in the parking lot and they'd be toast. Others fill the USB ports on their computers with epoxy.


> Others fill the USB ports on their computers with epoxy.

How do you plug in a mouse or keyboard?


You plug those in and epoxy them so they can never be removed.


Maybe. But it would have to be smart. These systems are regulated by tight SOPs which don't allow plugging in random USB devices.


Internal networks, yes. Airgapped, probably not.


Everything old is new again: "WinNuke is an example of a Nuke remote denial-of-service attack (DoS) that affected the Microsoft Windows 95, Microsoft Windows NT and Microsoft Windows 3.1x computer operating systems. The exploit sent a string of out-of-band data (OOB data) to the target computer on TCP port 139 (NetBIOS), [...]" https://en.wikipedia.org/wiki/WinNuke
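For anyone who hasn't touched TCP urgent data since then: here is a minimal, illustrative C sketch (placeholder TEST-NET address and port, not an exploit) of how a sender marks a byte as out-of-band with the POSIX MSG_OOB flag, which is what sets the URG flag and Urgent Pointer that the VxWorks CVEs above mishandle.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in dst;
        memset(&dst, 0, sizeof dst);
        dst.sin_family = AF_INET;
        dst.sin_port = htons(139);                      /* placeholder port */
        inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr); /* TEST-NET placeholder */

        if (connect(fd, (struct sockaddr *)&dst, sizeof dst) < 0) {
            perror("connect");
            close(fd);
            return 1;
        }

        /* MSG_OOB marks the byte as urgent: the segment carries the URG flag
           and an urgent pointer, exercising the receiver's OOB handling. */
        if (send(fd, "x", 1, MSG_OOB) < 0)
            perror("send MSG_OOB");

        close(fd);
        return 0;
    }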


We used to WinNuke the computer labs in high school after playing the first 15 seconds of Blur's "Song 2". It got to the point where just playing the song would cause Ethernet dongles across the room to get ripped right out of laptops. Ahh, the old days.


That's what I was thinking when I read "Four memory corruption vulnerabilities stemming from erroneous handling of TCP’s Urgent Pointer field" - the hardline on your desk is ringing, and the caller ID says 1996.


That's how I would free up my friends' phone lines when I wanted to call them.


Completely random aside, but the site's scrolling is horrible. Clicking near the edge randomly starts scrolling when the bar isn't visible, and I can't middle-click and drag to scroll the page at all.


Not at all surprised.

Busy kitting out my place with (consumer, yikes) IoT... and basically just connecting the stuff long enough to get it online via Home Assistant.

...next step... firewall all the IoT IPs. Once they're connected to Home Assistant they don't need internet access.


Just don't use a NetApp or SonicWall firewall...


Off topic:

Why are web developers constantly reimplementing native browser functionality? This site, for instance, has its own scroll implementation that's laggy, adds unwanted smoothing, and of course has less functionality (middle-click scrolling doesn't work, nor does autoscrolling). Fortunately I can get the native implementation by disabling scripts, but I've seen sites that are `overflow: hidden` so you're forced to use their scrolling logic.


It's a huge problem with the current SPA trend. Sites are re-implementing all sorts of basic functionality like scrolling, links, and form inputs and invariably do a myopic job of it where they only implement basic functions or ones that the developer personally uses.

There are so many sites now where you can't Ctrl-click, middle-click or right-click links and get proper behavior, where inputs don't work the way they should, where sites try to hijack keyboard shortcuts (and of course assume everyone's using the default ones), where scrolling with the keyboard messes up layouts because they assume you're going to gradually scroll down with a mouse or swipe, where browser extensions can't affect element types because everything is just a <div>, etc.

It's a gigantic pain in the ass for consistent usability across sites, and a complete disaster for accessibility as well.


Project managers/product designers that don't understand their problem domain paired with developers that won't say no.

There are a couple of extensions that are supposed to target and disable this behavior, but I've found them flaky at best.


aren't you forgetting the 'developers' who want to be 'cool' in this equation?


It's why I use reader mode in FF. All that JS, ads, and other crap just goes bye-bye, leaving you with only the article content you wanted to read in the first place, like it should be.


I couldn't agree more. While it's enticing to show your creativity in something like scrolling, it almost always negatively impacts the site...as it definitely has here.


Flash, the browser plugin, may have died, but the mindset that made people want to make websites in it never did.


VxWorks... have you ever tried to implement something with that ugly hack, and seen how nice it can be with other, proper operating systems?

As many have already said: not at all surprised.


Yes, I have. And why do you call it an ugly hack? For what it does, it does it quite well. If that's what you need, then it's a very nice system. If you need something else, use something else.


VxWorks is well regarded and works extremely well, and yes, I've implemented plenty with it.


There's a reason it's often referred to as "Internet of Shit". I highly doubt anything is going to change until someone figures out how to use an internet-connected power outlet to burn down a house. It's going to be a decade-removed version of the wireless router issue: huge botnets will go on for years and years and maybe eventually manufacturers will slowly close security holes and institute better practices. Even that I doubt, since routers are made by a handful of major companies and IoT devices are made by hundreds of fly-by-night outfits who're likely to be out of business in five years.


VxWorks is a serious, long-term player in the embedded space. This is the operating system you'll find on the moon. Not really related to the fly-by-night internet of shit.


Indeed they are, but I refer to the legions of no-name products that make up much of the consumer industry. Smart lightbulbs, power outlets, rain meters, etc. The original headline was much less specific before it was edited.



