> It also provides partitioning mechanisms that make it possible to simultaneously process public and sensitive information on the same computer, within two completely isolated software environments, in order to avoid the risk of sensitive information leaking onto the public network.
Just don't. Please don't. You can have the most sane architecture, but there's a whole pile of s..tack underneath. You can't even trust a modern CPU.
>> It also provides partitioning mechanisms that make it possible to simultaneously process public and sensitive information on the same computer, within two completely isolated software environments, in order to avoid the risk of sensitive information leaking onto the public network.
> Just don't. Please don't. You can have the most sane architecture, but there's a whole pile of s..tack underneath. You can't even trust a modern CPU.
Wasn't it determined in 1974 that you can't successfully do that using conventionally developed software, even without the bloated modern "stack"?
The way I see it, sensitive information should be offline. But you often need to access it in an automated manner, so set up a secure interface. Secure means that the protocol is minimalistic and you are monitoring unusual access patterns. E.g. a serial interface to the sensitive-data computer, where you ask for X and it provides only X. You almost never need to fetch all the data at once, so you can set up some hourly limits or whatever. You can use a microcontroller to validate data transmission. You can set up a hardware button if necessary. The beauty of it is that the sensitive box could be running Windows 3.11. The attack surface is as minimal as it can be.
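A minimal sketch of that "ask for X, get only X" idea with an hourly quota. All names, the data, and the limit are made up for illustration; in a real deployment this would sit behind the serial link and refusals would trigger an alarm:

```python
import time

# Hypothetical sketch: the sensitive box answers one keyed lookup per
# request and enforces an hourly quota, so a compromised client can't
# bulk-exfiltrate the whole store. Records and limit are placeholders.
RECORDS = {"alice": "555-0100", "bob": "555-0101"}
HOURLY_LIMIT = 100

_window_start = time.monotonic()
_served = 0

def lookup(key):
    global _window_start, _served
    now = time.monotonic()
    if now - _window_start >= 3600:   # roll over to a new one-hour window
        _window_start, _served = now, 0
    if _served >= HOURLY_LIMIT:       # unusual access pattern: refuse loudly
        raise RuntimeError("hourly quota exceeded - time to raise an alarm")
    _served += 1
    return RECORDS.get(key)           # one record per request, never a dump

print(lookup("alice"))
```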
When you have it on a computer that's connected to the Internet, in my eyes you already lost the game. I'm quite convinced it looks the same in the eyes of some 3 letter agencies, and it sure gets an average script kiddie excited.
Anytime someone suggests that they have a "secure" piece of software without caveats about the inherent insecurity of every modern CPU and firmware stack (built on closed-source proprietary blobs) that the software will undoubtedly be running on, it signals to anyone with a meaningful understanding of the complexity of security that they are at best omitting crucially important information and at worst incompetent.
Sure, the size and complexity of the Linux Kernel is a problem, but the crumbling foundation needs to be addressed before problems with the first floor.
I don’t think the goal of the software is to be hardened from all attacks from every angle.
I think they are trying to prevent human error by compartmentalizing the system so they can work with sensitive data and avoid commingling it with public data.
Also, the vast majority of breaches are due to this human error.
Anything that attempts to mitigate that in one way or another is to be welcomed.
From browsing the docs[0], CLIP OS looks like an interesting project. However, I do question how much of it is.
One of the more interesting approaches using containerization in recent years is sandstorm.[0] It's unfortunate it wasn't a commercial success, because a lot of the ideas in there would go a long way to mitigating the majority of breaches that can occur at the userspace level.
In sandstorm, web apps are containerised in a really minimal environment. Each app only has access to files it operates on. There is no procfs or sysfs. All communication with the outside world takes place through a unix socket that is opened by the supervisor.[2]
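As a rough illustration of that supervisor-mediated pattern, here is a toy version of the supervisor side (the socket path and message format are invented; IIRC sandstorm itself speaks Cap'n Proto RPC over its supervisor socket, so treat this only as the shape of the idea):

```python
import os
import socket

# The supervisor owns the only channel to the outside world; the sandboxed
# app can do nothing but talk to this socket. Path and protocol are made up.
SOCK_PATH = "/tmp/supervisor.sock"

def serve_once():
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as srv:
        srv.bind(SOCK_PATH)
        srv.listen(1)
        conn, _ = srv.accept()        # blocks until the sandboxed app connects
        with conn:
            request = conn.recv(4096)
            # Policy decision happens here: the supervisor decides what,
            # if anything, the app is allowed to reach.
            conn.sendall(b"denied" if b"network" in request else b"ok")
```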
I don't understand why it's deemed necessary to work with both public and sensitive information on the same device. This might have made sense when computers were much more expensive. Today, why not buy two computers and keep them physically isolated?
Yeah physical separation... seems like a large and ultimately good step. All the other efforts to logically separate things often have some sort of downside.
I posted a while ago that there was a customer (tied to the military) that forbade ANY outside electronics of any type from entering their facility without approval (the main gate where you parked was OK, but you still turned everything over), and no electronics could ever leave. They went so far as to keep and presumably destroy anything that entered... bring a laptop, it's gone; be stupid and carry a phone, gone (nobody actually did the phone thing, as you were warned plenty of times).
Granted, they paid a ton for it (sometimes many times the cost of a laptop; other times they gave you one), and it is mostly a policy only the military can be strict about.
But having said that, this was before smartphones and the like, and honestly nowadays what seemed like an extreme policy is actually a good policy for many such places. We're already at the point where we know we can't be sure about declaring any equipment "clean".
I was a little shocked to hear the White House was talking about banning smartphones in some places... like how the hell haven't they done that already? A mic and camera on each person that wanders all over the world connecting to random cell towers and Wi-Fi, and OMG, what a great attack vector!
I've heard a lot of stories regarding security and physical separation.
The funniest was about a repair guy from IBM or Cray in the '80s or '90s who was tasked with repairing a broken storage array in a supercomputer doing some sensitive computation.
He expected one of the disks to be dead (at the time those were quite expensive) and brought a new one.
He plugged it in, but that wasn't it; the array was still dead. In the end it was a broken (and much cheaper) controller card. So he fixed it and started heading home. But the guards didn't let him leave with the expensive spare drive, and the brand-new drive had to be destroyed.
There was some conflict afterwards about whether or not the destroyed drive was part of the maintenance contract, and about who should pay for it (the government agency or the company doing the support and maintenance).
Yeah that's pretty common today even at a lot of places, no hard drives leave the premises.
Generally though, there's not much to be confused about in my experience; it is typically in the contract. When I went onsite, if they demanded the equipment for security reasons I just handed it over... you'd be surprised how often the security guy or IT security guy suddenly doesn't like being responsible for some cards or such and lets you go ;)
I don't recall anything in the policy that spoke to that. That's not a good answer, but it's all I got. This wasn't the kind of situation where you got to ask questions.
Because if you don't, your workers will keep processing sensitive information on a public device because bringing data from one to the other is too tedious.
Ignoring user behaviour makes any kind of security solution fundamentally flawed. Good security takes into account what people do and how they behave when they reach obstacles.
Building your security solution in an ivory tower and then pushing it on people will result in utter failure - see PGP, or password post-it notes on monitors.
I agree that working with rather than against people's inclinations to reinforce security protocols is the best method. I'm a bit concerned that it's commonplace that people will route around those protocols in an environment where presumably the staff is all highly trained and disciplined.
This is intended to be used by the French administration (e.g. the DMV), not intelligence agencies or the military. Sensitive in this context means nonpublic, not classified.
(The French administration corresponds to the English government - as in "government shutdown" - while the French gouvernement corresponds to the English administration - as in "the Trump administration".)
Not defending it, but based on descriptions I've heard of government air-gapped networks, I wonder if it's less about the cost of individual computers and more about the annoyance, overhead, and expense of the transfer systems.
Yeah, there is definitely annoyance, overhead, and expense, but you are trying to securely transfer data to an air-gapped network, so just "plugging things in" (what some people expect, like grabbing a USB drive) is a risk in that regard. It's a lot of trade-offs.
To join OS data with TS data you need to do it somewhere.
Though I've always said it should be a one-way / connectionless transfer of the OS data to the sensitive computer, with the human mind as the bridge to a connected machine. Thing is, we actually do this in some places, but air gaps don't work. People say they do, but they don't. Someone fucks up at some point.
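To make "one-way / connectionless" concrete, a toy sketch (the address and port are invented; a real data diode enforces one-way flow in hardware, whereas UDP merely lacks a built-in reply path, so this only illustrates the shape of the idea):

```python
import socket

# Push public data toward the sensitive box and never read anything back.
# 192.0.2.10:9000 is a made-up example address (TEST-NET-1 range).
DIODE_ADDR = ("192.0.2.10", 9000)

def push(record: bytes) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(record, DIODE_ADDR)  # fire and forget: no return channel used

push(b"public feed item 1")
```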
ASIC + no interfaces might work. But sensors are essentially interfaces, though they're much harder to exploit, and if you have no sensors and no interfaces then the device isn't going to be able to do anything useful.
The words "network" and "sensitive information" do not belong on the same computer. We can steal information even from air-gapped computers. Plug anything into a network and forget about any sort of security over the data.
I agree, but as someone who works on air-gapped networks: it effectively doubles (if not more than doubles) the overhead and burden, and not only for infrastructure and administration; you also end up duplicating work and creating a lot more of it for every person.
I wonder if there’s some context lost here with the term “sensitive”. I’ve seen data classifications use words like that. While “secret” and “ultra secret” are accurately described as “sensitive” or “confidential”, the latter could be the names for distinct lower levels of data classification.
So this might check enough security boxes to allow public data and the data classification tier immediately above that on the same box. That might be as innocuous as HR data or something that could be released to the public after redacting some personal information.
Granted, my charitable speculation would require some inflexible or impractical data handling rules but I think a government bureaucracy could more than manage that.
Having worked in IT in France for a few years, my guess is that this is a direct translation of a term (sensible) which refers to personally identifying information such as an address or date of birth, not some military secret. In this case, they want to provide some isolation but don’t expect to protect against groundbreaking attacks like Spectre.
There is no point in debating whether or not this is secure. The inclusion of that word in the title is going to derail all of the comments here. They are using secure from the point of a layman. We know nothing is secure, the folks working on the project know nothing is secure. The government folks and media outlets don’t understand security and their use of the word “secure” should be disregarded.
It would be better (and create more interesting discussion) to pretend that one is reading a title that says “security enhanced” operating system, and evaluate the merits of incremental improvements offered, and not debate whether or not anything can be truly secure.
The original French text (https://clip-os.org/fr/) says "système d’exploitation durci" (hardened operating system) and this has been translated as "secure" in the English version of that same page.
Fascinating! It seems like a cross-over concept of CoreOS and FreeBSD jails.
It seems as if it is just the base system for now (and a bit complex), but I think this kind of architecture is great for “IoT” use-cases where the source of the sensor data should be protected from the containers that are processing the data (i.e. in sealed environments). Can't wait to try this.
> Hardware-based mechanisms and isolation are assumed trusted, properly functional and configured. Here is a non-exhaustive list of hardware-based security and isolation mechanisms: UEFI firmware, Secure Boot ...
Is firmware a "hardware-based mechanism" with comparable isolation claims to a TPM, MMU or IOMMU?
If someone has already looked at the source, would it be possible to get a high-level overview of what's special to this OS? Apparently, authorizations and isolation are not vanilla Linux, but that's all I manage to gather from the description.
I'm not sure if it's a stereotype, or simply outdated.
Much of what happened at the UN in its early decades was in French first, then English and the rest. I could see this happening at all levels within France, since it has such pride in its language. Unless that's an outdated notion, too.
It is most definitely not an outdated stereotype that (a looot of) French people are not particularly fond of English. Go to any university and you’d be surprised by the number of students who don’t speak English, or can muster just a couple of words. It’s ingrained in the culture: movies and series are dubbed, French was previously the lingua franca of academia, culture, etc., a large part of the country is rural, and (in some places) it’s seen as arrogant if you speak English or use English phrases.
Source: French girlfriend, French friends and having been there a lot.
Hi, source: I’m a uni engineer in France; everyone in my 100+ person lab speaks English.
What’s more, there’s a funny reversal occurring where people insert English phrases to be hip, e.g. in the middle of a French sentence you hear a “yes” or a “let’s go” or a “in z pocket” (I never understood that last one).
It seems that ANSSI has sponsored a lot of open-source research and development efforts. One day I was surprised to find that the most complete free and open-source implementation of OpenPGP Card firmware on JavaCard was published by ANSSI on GitHub.
Hmm, they say it's Linux-based. I was kinda expecting/hoping for something more similar to BAE's STOP, which is a non-open/proprietary OS that originally sacrificed performance/speed for security.
1. The main mechanism for environment isolation is different:
a) CLIP OS leverages Linux kernel primitives to create containers with the help of additional features brought by Vserver, Linux kernel hardening (grsecurity for version 4) and a tailored Linux Security Module (LSM). This approach allows fine-grained control over the data exchanges between isolated environments (e.g., handling a notion of files, processes and sockets) and permissions (e.g., restriction to ring 3 features for malicious code, limitation on the allowed system calls - a minimal illustration of syscall restriction follows this list).
b) Qubes OS leverages hardware-based virtualization with a hypervisor (Xen) and a main virtual machine (dom0), which is a GNU/Linux system with services handling data exchange between virtual machines.
2. Administrators have different roles and power:
a) Administrators on a CLIP OS system are not able to compromise system integrity nor access user data. They can only access a restricted set of configuration options.
b) On Qubes OS systems, the main user of each virtual machine is also the administrator of its own environment. The system administrator of the main domain (dom0) can change all the configuration options and may access all user data without any restriction.
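For a feel of what "limitation on the allowed system calls" means in practice, here is a minimal illustration using the kernel's strict seccomp mode via ctypes. This is not CLIP OS's actual mechanism (which is a tailored LSM plus kernel hardening); it's just the simplest demonstrable cousin, Linux only:

```python
import ctypes
import os

# Strict seccomp mode: after this prctl(2) call, the thread may only use
# read(), write(), _exit() and sigreturn(); any other syscall is fatal.
# Constants are from <linux/prctl.h> and <linux/seccomp.h>.
libc = ctypes.CDLL(None, use_errno=True)
PR_SET_SECCOMP = 22
SECCOMP_MODE_STRICT = 1

if libc.prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

os.write(1, b"write() is still allowed\n")
# Anything else - opening a file, getpid(), even a clean interpreter
# exit via exit_group() - now kills the process with SIGKILL.
```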
Saw the same. The thing I like about Qubes OS is that they are trying to move to an OS / hypervisor agnostic model. I personally would love to see Qubes on freenas driving Jails.
Pragmatism pulls in the non-security direction. When security people see "Linux-based", they already know the thing is situated at some particular point on the scale, or worse. Linux has a big TCB surface, so it's intrinsically hard to secure.
Sure, one can roll on the ground hoping to get to a slightly higher place ("making Linux more secure"), or one can dig a hole, put up a foundation (L4, Genode, etc.) and build a tower on it. Rolling on the floor is sure easier and provides a sensation of movement ("yay, we have something 'practical', another Linux-based distro! look, we've spent a lot of effort and careful planning to roll a few cm/inches higher!"). But as for "realism", which is more realistic: to roll on the ground hoping to get higher, or to build something more difficult which can really get one higher?

If I see "Linux-based secure OS", I think of the rolling, and think "right, maybe they got a bit higher; I hope they didn't break something unexpected through spaghetti interactions and actually fall into a hole". Also, there are most probably still a lot of bugs in the Linux kernel they didn't know of, so couldn't secure. So maybe they didn't even get as much higher as they thought they did. But even assuming they really improved things a bit and didn't break anything, they sure didn't get to tower height.
Common hypervisors like ESX and Xen definitely aren't like microkernels, though they'd be more secure if they actually were.
ESX and Xen are more like monolithic kernels. The problem is that in practice they've become mini-kernels with their own driver systems and increasingly large code surfaces, recapitulating the mistake of monolithic kernels.
seL4 actually can act as a hypervisor, and their build framework comes with support for building Linux guests into the bootable system image. I don't know if there are proofs of security with respect to the hypervisor, nor what they would be worth considering how complex CPUs are these days, but I'd have much more trust in seL4 as a hypervisor and driver framework than in something using FreeBSD or Linux as the critical guarantor of system security.
If you're stuck with commodity x86 or ARM hardware and really want a strong architecture, seL4 is the best option unless you want to build from scratch. Apple uses another L4 derivative as the OS for its security chips; an in-house derivative that predates the seL4 project.[1][2]
The terminology is confusing because for a lot of purposes every Linux distro can be considered a separate platform, so it may as well be a separate OS even though it uses the same kernel (and often the same everything, just arranged and configured differently enough to cause headaches).
That said, I am also disappointed that everyone who cobbles together a Linux distro these days insists on referring to it as a new OS, because I remember the days when people actually announced new OSs like Syllable, AROS, SkyOS, and Haiku.
Lately I've seen the term "Operating Environment" emerging as a way to further delineate an actual "new OS" from something that just sits on top of an existing kernel but is more different than just a new Linux distribution.
You don't have to, it's for French government employees who have no other choice. However it's easier to trust an open source government OS than a closed source one.