Wow, I was going to comment "IoT, the 'S' stands for 'security'", but this is about VxWorks, a battle-proven (literally) RTOS.
This illustrates a point: now, in 2019, there is literally no OS designed for security. I mean, security was never a real goal. Even software specifically written to address security requirements can easily have gaping holes (see Heartbleed)...
I tilted at the windmill of making and selling a secure RTOS for 10 years. It is a fool's errand, and everybody who likes money should stay away.
What I painfully learned is that hardly anybody, save occasionally the US government, is willing to compromise on features or pay 1 cent more for a secure RTOS.
Look at it another way. The market for cybersecurity is supposed to hit $300 billion by 2024, according to a recent report. If we only released secure software, that $300 billion market wouldn't exist. In that sense, it is vastly more economically viable to release insecure software than to spend any effort on securing it.
The broken window fallacy doesn't apply to software security for a couple of reasons. The first reason is that it isn't apparent that software is insecure until well after the sale, and depending on use (e.g. deployment on a secured, isolated network), it might actually be fit for purpose. In other words, measurement of window brokenness changes over time and with situation.
The other important difference is that the software vendor, unlike the glazier, doesn't directly benefit from folks breaking its software.
A better analogy is the introduction of the iPhone creating the iPhone accessories market. For lots of reasons, Apple decided not to make the iPhone crush proof, allowing Otter and others to sell protective cases for it.
To say that it's "vastly more economically viable" to spend $300 billion fixing insecure software rather than simply making it secure in the first place is fallacious.
If you made the software secure in the first place, you would have secure software and $300 billion to spend on something else (guns, butter, or what have you).
That seems like the very definition of the broken window fallacy to me---but hey, if not, it's still a fallacy.
Of course we haven't factored in the extra cost of making the software secure in the first place. If that costs vastly more than $300 billion, it is vastly more economically viable to just make broken software, but I don't think that was the intention of the statement.
> The URGENT/11 vulnerabilities affect the noted VxWorks versions since version 6.5, but not the versions of the product designed for safety certification – VxWorks 653 and VxWorks Cert Edition, which are used by selected critical infrastructure industries such as transportation.
Seems to me more like it means that a "certified" "safe" version exists, but that a lot of companies used the "normal" edition (most probably to save money), and, indirectly, that the differences between the "normal" and "certified" editions were known, at least to the developers/company actually making VxWorks.
It would be "queer" that the "certified" editions have "different" mechanisms implemented (for completely different reasons) and only coincidentally they are more secure.
A lot of "Safety-critical" certified versions of operating systems just don't include things like TCP/IP stacks or userspace applications. That's probably what WindRiver is referring to here. Otherwise they might actually have to do rigorous design verification and testing on their network stack which would cost a great deal.
For example you can get a "medical grade" QNX but the certificate only covers the kernel, so you have to write and verify the entire userspace yourself.
No, that still fits with my experience. In the safety-critical realm, anything that casts doubt on your claims of robustness, reliability, or safety, such as a TCP/IP stack vulnerability, opens you up to a lawsuit.
What will happen is you'll purchase a certificate for the RTOS kernel plus a few critical components. Then you can choose to use any other off-the-shelf components that the vendor or third parties provide. Those parts don't have to be safety-critical, but if a defect is found in uncertified software it's not VxWorks's problem.
VxWorks is very clearly and concisely stating that the safety-critical certified components are not affected. But they're not going to make statements about the systems their safety-critical clients built. That's not their responsibility. And Armis is almost certainly reprinting a statement from VxWorks. Both Armis and VxWorks are leaving it up to each VxWorks customer to determine whether their particular configuration of Safety-Critical VxWorks uses a vulnerable stack as an add-on.
Well, it's possible that Armis Labs is reprinting a statement from VxWorks, but the statement is on Armis's page and sounds like they wrote it themselves, since it is an announcement of vulnerabilities discovered by Armis. As I see it, there are two possibilities:
1) Armis tested those certified environments and couldn't replicate the bugs
2) Armis did not test those certified environments and reprinted the VxWorks statement
If #2, they should instead have written "we could not test the vulnerabilities on the "certified" versions, but we believe VxWorks' assurance that they are not vulnerable (because ... )".
If an entire "family" of versions of the OS is vulnerable to these 11 (eleven) bugs whilst the "certified" versions are vulnerable to none (of these specific 11 ones, not necessarily not vulnerable to other 11, maybe 12, other ones), it means that the certified versions are different.
Small volume and thus less attractive targeting might explain why - since no time has been spent to find the hypothetical "other 12" vulnerabilities in the "certified" versions - noone found them (yet).
And, as usual, some of them are caused by memory corruption:
> Stack overflow in the parsing of IPv4 options (CVE-2019-12256)
>
> Four memory corruption vulnerabilities stemming from erroneous handling of TCP’s Urgent Pointer field (CVE-2019-12255, CVE-2019-12260, CVE-2019-12261, CVE-2019-12263)
>
> Heap overflow in DHCP Offer/ACK parsing in ipdhcpc (CVE-2019-12257)
>
> DoS via NULL dereference in IGMP parsing (CVE-2019-12259)
While a safer language wouldn't make the remaining logic bugs disappear, there would be seven fewer vulnerabilities.
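To make that concrete, here's a minimal sketch in Rust, with invented names and nothing to do with the actual VxWorks sources, of the attacker-controlled length field behind overflows like CVE-2019-12256, and how a bounds-checked language turns the over-read into a recoverable error:

```rust
/// Hypothetical, simplified option parser in the style of IPv4 option
/// parsing; the names and layout are made up for illustration.
fn parse_option(packet: &[u8]) -> Result<&[u8], &'static str> {
    // The first byte claims how long the option is -- attacker-controlled.
    let len = *packet.first().ok_or("empty packet")? as usize;
    // In C, trusting `len` and reading that many bytes is exactly how this
    // class of overflow happens. Here the slice access is bounds-checked:
    packet.get(1..len).ok_or("option length exceeds packet")
}

fn main() {
    // A malicious packet claiming 200 bytes of options but carrying 2.
    let evil = [200u8, 1, 2];
    // Instead of reading past the buffer, we get an error we can drop.
    assert!(parse_option(&evil).is_err());
    println!("truncated option rejected");
}
```

The four Urgent Pointer corruptions, the stack and heap overflows, and the NULL dereference are all the kind of bug a memory-safe language catches; the logic bugs in DHCP and TCP session handling would survive a rewrite.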
There have been efforts to do this. Back during the first crypto wars I had some code in Java that gave it capabilities similar to the way the language Joule did it (basically the class loader would elide methods from the loaded class based on capabilities, so they weren't even available). I got a couple of patents out of it, but sadly the politics inside of Sun kept it from going anywhere useful.
Highly constrained OSes and languages that minimize attack surface in this way tend to be challenging to work in. As a result they constrain productivity, which increases time to market, so the folks who got something out, even insecure, would "win" the market. It was a sad thing to see happen.
I found that working on Green Hills' Integrity was quite a bit easier than working in userspace on a typical UNIX. The primary reason is that the API for Integrity was designed in the late 90s, whereas the POSIX API was inherited from the first thing folks thought of in the late 70s.
The other aspect was that IPC via message passing is a very natural way to program.
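For anyone who hasn't worked in that style, here's a minimal sketch using plain Rust channels (deliberately not the proprietary Integrity API, just an illustration of the idea), where each "task" owns its state and everything else talks to it through typed messages:

```rust
use std::sync::mpsc;
use std::thread;

// Each "task" owns its state; others interact only via typed messages.
enum Request {
    Read { reply: mpsc::Sender<u32> },
    Write(u32),
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // A server task owning a value; no shared memory, no locks.
    thread::spawn(move || {
        let mut value = 0u32;
        for msg in rx {
            match msg {
                Request::Write(v) => value = v,
                Request::Read { reply } => {
                    let _ = reply.send(value);
                }
            }
        }
    });

    tx.send(Request::Write(42)).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request::Read { reply: reply_tx }).unwrap();
    println!("value = {}", reply_rx.recv().unwrap());
}
```

There's no shared mutable state to lock, which is a big part of why the style feels natural.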
seL4 is a small microkernel, not a complete operating system. It is very, very cool, and deserves more adoption, but a customer would need a load of stuff on top of it for it to be a viable option.
This is a very surprising point of view. I hope you don't work with avionics, or nuclear power plant control software, or something that has a potential to inflict a lot of harm.
You may not always foresee the requirements for 'correctness'.
Seems WindRiver is also adding[1] VxWorks support to the Rust language. There should be more effort put into bringing safer and more secure languages, toolchains, and even OSes themselves into the IoT, IIoT, and even RTOS worlds.
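For the curious: rustc grew tier-3 `*-wrs-vxworks` targets in 2019 (e.g. `x86_64-wrs-vxworks`; see `rustc --print target-list`). Tier 3 means, as I understand it, you build the standard library yourself against the Wind River toolchain. Platform-specific code then gates the usual way, as in this trivial sketch:

```rust
// Trivial sketch: `target_os = "vxworks"` is the cfg value the new
// targets use; this compiles and runs on the host as well.
#[cfg(target_os = "vxworks")]
fn platform() -> &'static str {
    "VxWorks"
}

#[cfg(not(target_os = "vxworks"))]
fn platform() -> &'static str {
    "host"
}

fn main() {
    println!("running on {}", platform());
}
```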
And there are many medical devices running on VxWorks. Changes to medical devices can take months to years due to quality testing and recertification. (software changes aren't as bad as a physical change, but it still takes a while)
Especially when many of the devices have to be returned to the manufacturer for the update... including a whole bunch of RTUs that run electrical transmission/distribution systems.
On the industrial side all these devices are in completely segregated, airgapped networks. Obviously someone could wreak havoc via USB, etc., but it’s not as bad as it could be.
Why do you believe this? Connections between industrial control networks and corporate internet-facing business networks are ubiquitous [0]. They happen because somebody needed a link for convenience and forgot to tell management, or somebody put a wifi router on the IC network just to get their job done. This stuff happens because people act like people, policy be damned.
So yeah, this is really, really bad.
[0] This is well-established infosec fact. It's not controversial. Latest case I know of was at JPL a few weeks ago.
Agreed. Some orgs are better than others at practicing good security hygiene.
The better ones have awareness of their network and have systems monitoring their networks, etc. There are, after all, equivalents of Qualys in the IA world.
Depends on the org. For some companies, you could drop a few USB devices in the parking lot and they'd be toast. Others fill the USB ports on their computers with epoxy.
Everything old is new again: "WinNuke is an example of a Nuke remote denial-of-service attack (DoS) that affected the Microsoft Windows 95, Microsoft Windows NT and Microsoft Windows 3.1x computer operating systems. The exploit sent a string of out-of-band data (OOB data) to the target computer on TCP port 139 (NetBIOS), [...]" https://en.wikipedia.org/wiki/WinNuke
We used to WinNuke the computer labs in high school after playing the first 15 seconds of Blur's "Song 2". It got to the point where just playing the song would cause Ethernet dongles across the room to get ripped right out of laptops. Ahh, the old days.
That's what I was thinking when I read "Four memory corruption vulnerabilities stemming from erroneous handling of TCP’s Urgent Pointer field" - the hardline on your desk is ringing, and the caller ID says 1996.
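For anyone who never met "out-of-band data": it's a byte flagged via the TCP urgent pointer, the very field the new CVEs mishandle. Here's a minimal sketch of what sending it looks like at the socket level (Unix-only, needs the `libc` crate; the address is a placeholder, and this is the legitimate use of MSG_OOB, not an exploit):

```rust
use std::net::TcpStream;
use std::os::unix::io::AsRawFd;

fn main() -> std::io::Result<()> {
    // Placeholder peer; WinNuke targeted TCP port 139 (NetBIOS).
    let stream = TcpStream::connect("127.0.0.1:139")?;
    let byte = [b'!'];
    // send(2) with MSG_OOB sets the URG flag and the urgent pointer
    // in the TCP header, marking the byte "urgent".
    let n = unsafe {
        libc::send(
            stream.as_raw_fd(),
            byte.as_ptr() as *const libc::c_void,
            byte.len(),
            libc::MSG_OOB,
        )
    };
    assert_eq!(n, 1);
    Ok(())
}
```

A receiver whose urgent-pointer handling is buggy, like Windows 95 then or these VxWorks stacks now, is what turns that one byte into a crash.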
Completely random aside, but the site's scrolling is horrible. Clicking near the edge randomly starts scrolling when the bar isn't visible and I can't middle click and drag to scroll the page at all.
Why are web developers constantly reimplementing native browser functionality? This site for instance has their own scroll implementation that's laggy, adds unwanted smoothing, and of course has less functionality (middle-click scrolling doesn't work, nor does autoscrolling). Fortunately I can get the native implementation by disabling scripts, but I've seen sites that are `overflow: hidden` so you're forced to use their scrolling logic.
It's a huge problem with the current SPA trend. Sites are re-implementing all sorts of basic functionality like scrolling, links, and form inputs and invariably do a myopic job of it where they only implement basic functions or ones that the developer personally uses.
There are so many sites now where you can't Ctrl-click, middle-click or right-click links and get proper behavior, where inputs don't work the way they should, where sites try to hijack keyboard shortcuts (and of course assume everyone's using the default ones), where scrolling with the keyboard messes up layouts because they assume you're going to gradually scroll down with a mouse or swipe, where browser extensions can't affect element types because everything is just a <div>, etc.
It's a gigantic pain in the ass for consistent usability across sites, and a complete disaster for accessibility as well.
It's why I use reader mode in FF. All that JS, ads, and other crap just go bye-bye, leaving you with only the article content you wanted to read in the first place, like it should be.
I couldn't agree more. While it's enticing to show your creativity in something like scrolling, it almost always negatively impacts the site...as it definitely has here.
Yes, I have. And why do you call it an ugly hack? For what it does, it does it quite well. If that's what you need, then it's a very nice system. If you need something else, use something else.
There's a reason it's often referred to as "Internet of Shit". I highly doubt anything is going to change until someone figures out how to use an internet-connected power outlet to burn down a house. It's going to be a decade-removed version of the wireless router issue: huge botnets will go on for years and years and maybe eventually manufacturers will slowly close security holes and institute better practices. Even that I doubt, since routers are made by a handful of major companies and IoT devices are made by hundreds of fly-by-night outfits who're likely to be out of business in five years.
VxWorks is a serious, long-term player in the embedded space. This is the operating system you’ll find on Mars. Not really related to the fly-by-night internet of shit.
Indeed it is, but I refer to the legions of no-name products that make up much of the consumer industry. Smart lightbulbs, power outlets, rain meters, etc. The original headline was much less specific before it was edited.