HardenedBSD Feature Comparison with OpenBSD, FreeBSD, NetBSD (hardenedbsd.org)
99 points by transpute 73 days ago | 77 comments



It's easy to invent a chart that only you can get a nearly perfect score on.

This does nothing to explain what any of these features are. What are "Boot hardening" and "sysctl hardening"?

At least OpenBSD's innovations page makes an attempt to explain the new concepts and features that have been developed over the years, so people can make comparisons for themselves.

https://www.openbsd.org/innovations.html


It says "Please note that this page is out-of-date. For a more detailed and up-to-date guide to HardenedBSD's features, please visit our wiki." at the bottom. So I tried that.

I'm guessing that "sysctl hardening" is https://git.hardenedbsd.org/hardenedbsd/HardenedBSD/-/wikis/...

There's no clue on the wiki what "boot hardening" is, though.


I mean, both of these are really just lists of mitigations. What would really be useful is why each of these has been developed, what attacks it tries to stop, why that was chosen as something worth protecting against, and what is out of scope for each of them.


Good LLM prompt seed :)


I doubt it. Most humans do a pretty poor job at this already, and LLMs train on things other humans have written. If you have examples to the contrary for a novel mitigation I'd love to see it, but I will also only believe it when I see it.


Still useful for a formatted markdown table based on a specified list of non-novel mitigations.


Sure, but that’s not what we want here?


The HardenedBSD list isn't focused on novel mitigations, so LLM summary is possible. For the OpenBSD list, an LLM can help a knowledgeable reader to quickly separate the novel and non-novel mitigations, so that a human can focus on the novel mitigations.


Read these claims with a pretty big asterisk. Implementation quality of HBSD features is often poor or very poor. https://www.fabiankeil.de/gehacktes/hardenedbsd/ is just one example. Specifically some of the changes made to "harden" the system are pretty dubious and introduce new bugs, possibly security relevant, that did not previously exist. No one runs or pen tests HBSD. It's even more niche than OpenBSD.


Self-promotion / advertisement should always be read with a _big_ asterisk. What should probably not be read with a _big_ asterisk is their roadmap, which ends in 2021...

https://hardenedbsd.org/content/roadmap


With small projects, websites tend to be updated infrequently. On the other hand, there seem to have been 11 commits to the main project repo today: https://groups.google.com/a/hardenedbsd.org/g/src-commits-al...


These are all, or mostly all, just automated syncs from upstream FreeBSD.


A positive scenario for forks is when the original project incorporates beneficial downstream changes.

Could FreeBSD implement some of the security features identified by HardenedBSD?


Could implement some of the features (even if off by default), but probably not by using the HBSD diffs.


I can just think of so many better things to do than to write online arguing about which BSD's implementation of features is better.


Write online about how Linux distributions' features are better?


If anything, that's much more pointless. The biggest difference between Linux distros is what package manager and init system they use by default. BSDs develop and maintain their own kernels and userspace, so there are actual non-cosmetic differences between them.


Could you point me to some of the current changes that have made the system buggy? If HardenedBSD's features do introduce bugs, they should be fixed.


Your example is 9 years old (2015), from when the project was less than 1 year old:

https://hardenedbsd.org/content/about

It also sounds like the person wants to promote his own "ElectroBSD", but only in the form of patch sets because of "unresolved license issues": he really, really wants his GPL code in BSD:

https://www.fabiankeil.de/gehacktes/electrobsd/


The date really doesn't matter, I promise. It's representative of the overall quality. And HBSD has been defunct/inactive for some of those 9 years, possibly the majority of them -- the 2nd developer left 5 years ago, for example.


HardenedBSD has never been defunct or inactive.


> Executable file integrity enforcement

I assume but don't know for sure that this refers to Veriexec in NetBSD, and I'm not sure what in HardenedBSD. Anyone know?

https://man.netbsd.org/veriexec.8

My understanding is that Veriexec isn't enabled by default - the manpage says only that "[s]ome kernels already enable Veriexec by default." If you have this enabled, how do you upgrade binaries? The manpage says that in strict mode 1, write access to monitored binaries is allowed but then access is denied. So I assume that after file modification, root then runs veriexecgen and veriexecctl load as mentioned in the manual to update the signatures list. So it seems that strict level 1 isn't functionally different from a read-only /usr or even just root-owned binaries. In either case, you just need root to update targeted binaries. Surely I'm missing something and would appreciate some insight.

At a glance as an outsider, stricter modes appear somewhat functionally similar to "chflags schg" on BSD systems, where more work is needed to get around restrictions. In the case of schg, you have to reboot into single user mode, remove the schg flag, then modify the binary, and continue booting into multi-user mode. You could do this as a remote attacker (as in not having console access) depending on what boot files are or aren't protected with schg, but modifying all the necessary files can be a source of new problems.
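
For illustration, here's a minimal sketch of what "chflags schg" does under the hood via the chflags(2) syscall on FreeBSD/NetBSD-style systems (the path is hypothetical); at securelevel 1 or higher the kernel refuses to clear SF_IMMUTABLE, which is why the single-user reboot is needed:

    #include <sys/stat.h>
    #include <unistd.h>
    #include <err.h>

    int main(void)
    {
        /* set the system-immutable flag; requires root */
        if (chflags("/usr/local/sbin/mydaemon", SF_IMMUTABLE) == -1)
            err(1, "chflags");
        return 0;
    }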


I'm not familiar enough to know if this is particularly well adopted with NetBSD, but the obvious way to do it that I could see is A/B roots, where the active system can only update the inactive root.

I suspect the actual most likely case is that it's meant for appliances where the running system doesn't update itself, and updates are accomplished via actions like "go physically replace the SD card with the new one".


> In either case, you just need root to update targeted binaries.

My understanding is that the difference is you would need to boot with a kernel with veriexec disabled to replace binaries and regenerate hashes. Root alone isn't sufficient, and you can't disable veriexec as root in strict mode.


For a hardened Linux distro, take a look at Kicksecure [0], which uses Debian as its base image.

[0] - https://www.kicksecure.com/


The list of disclaimers doesn't inspire confidence in their claims:

  most of the topics.. are still in development and are not yet used by default
  .. Kicksecure has adopted a best-effort, but admittedly quite weak approach
  CFI, SafeStack, automatic stack variable init..unlikely..due to.. resourcing
  kernel security issues.. are rooted deep within its design
  upstream developers are not very focused on serious security enhancements


Let’s appreciate the transparency and follow the project for progress rather than dismiss them for it.


Hand in glove, the only distro with stack zero-initialized by default. As of yet, still no SSL integration in crypto library.


Can somebody explain to me how anybody could verify or reproduce working address randomization and position independent executables?

This is basically what musl libc does, but with the kernel itself offering a memset / malloc with randomized padding, right?

Also, what does ASLR brute force detection (SEGVGUARD) do? To me this sounds like malloc has a hook that maps process ids and tries to find heuristics of the allocated offsets?

Anybody know where those features are documented so someone can try to bypass/exploit them?
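
Not an answer to all of that, but a starting point: here's a minimal sketch of how one could spot-check ASLR and PIE by hand. Build it as a PIE, run it a few times, and see whether the text, stack, and heap addresses move between runs:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int stack_var;
        void *heap = malloc(16);

        /* with PIE + ASLR, all three should differ across runs */
        printf("text: %p stack: %p heap: %p\n",
            (void *)main, (void *)&stack_var, heap);
        free(heap);
        return 0;
    }

Compile with something like "cc -fpie -pie check.c"; if the addresses repeat across runs, randomization probably isn't being applied.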


"Please note that this page is out-of-date. For a more detailed and up-to-date guide to HardenedBSD's features, please visit our wiki."

I don't see a comparison in the wiki.

https://git.hardenedbsd.org/hardenedbsd/HardenedBSD/-/wikis/...


That checklist is interesting. In what sense does OpenBSD have most of the base OS sandboxed? I'm kind of skeptical of that, but wondering if I missed something.

What any BSD needs in 2024 for its claim to security to have merit is some form of MAC. As long as there is no way to take power away from the all-powerful root user (and pledge and unveil ain't it), they have a long way to go.


> In what sense does OpenBSD have most of the base OS sandboxed?

They're talking about pledge. https://man.openbsd.org/pledge.2


I thought that might be the case. So not actually sandboxing at all.


What does Sandboxing™ give you that pledge+unveil doesn't?


> Sandboxing™

Writing it like this is kind of funny, honestly, but not for the reasons you probably think.

> give you that pledge+unveil doesn't?

Not requiring the cooperation of developers to opt-in, for starters.

You really think pledge and unveil are equivalent to sandboxing? Can you refer to any sandboxing solution or technologies that limit themselves to restricting syscalls and hiding file paths? Unveil is a lot more useful as a component in sandboxing, I'll give you that.

Something like linux namespaces or even capabilities would be a hell of a lot better though.


Most programs have a setup phase, where they need a great deal of access to the environment to set up the resources they use, and a steady-state phase, where they need very little access beyond pre-opened file descriptors.

An externally imposed sandboxing feature can be useful for namespacing, but is necessarily less restrictive than pledge and unveil. For example, in steady state on OpenBSD, most programs can't even read their own configs.


> An externally imposed sandboxing feature can be useful for namespacing, but is necessarily less restrictive than pledge and unveil.

An externally imposed sandboxing feature isn't necessarily less restrictive than pledge and unveil at all, although I'm curious why you think that is the case.

To say nothing of the fact that pledge and unveil are wholly dependent on developer opt-in.

A good sandboxing solution should be robust and not require the cooperation of the programs that require sandboxing.


Those are two different kinds of sandboxing. One protects the application from itself - "here is what I use; if I try anything else, then it's a bug".

The sandboxing you're talking about protects the system from the application. You really need both.

re: restrictiveness

With external sandboxing, you need to restrict to the common denominator of all the states in which you expect to observe the application. An internal sandbox can adjust itself as it goes.


> Those are two different kinds of sandboxing.

I'm arguing that only one example is sandboxing, the other is imposing limitations but doesn't meet the definition of sandboxing.


> An externally imposed sandboxing feature isn't necessarily less restrictive than pledge and unveil at all, although I'm curious why you think that is the case.

With pledge, you can read config files, open a file handle to your logs, and then completely drop the ability to open() files at all for the remaining lifetime of the program. How would you do that from outside the program?


By monitoring and intercepting what the program is doing?


I guess technically you could write a dynamic policy that... I guess you'd give it a list of files that the program can access exactly once? But that seems difficult and brittle. Is anyone actually doing that?


No, I suppose not, but then, is there really an advantage to doing so?

I'm much more concerned with blocking write and execute access than I am about a potential hacker being able to read the config files of the program they leveraged to get a shell.

I think it's a good approach and part of defense in depth, but if we're comparing approaches I'll take the former every time.


> No, I suppose not, but then, is there really an advantage to doing so?

A massive one; not reading the config is just a consequence of dropping all file system access once you're done with the initial program setup. You can also set up listening sockets, and then stop the program from making any new network connections, even after a compromise. And so on, with most resources. There are a lot of resources that tend to be needed to set up a program. There tend to be very few needed once the program is running.

How would you block all file system access after a program is finished reading its config files, the files they include, and the shared libraries and plugins scattered around the file system? How would you turn off all network access after the listening socket is established?

With Pledge, it's trivial:

    load_config();
    load_plugins();
    open_sockets();
    pledge("stdio", NULL); /* drop everything but basic stdio; NULL promises would mean "no change" */
    run();
Pledge makes everything you're talking about work trivially; the only thing that's needed is for the program to opt in to security with one line of code. You don't need to micromanage the permissions and what the program is doing to drop privileges from the outside, with all of the race conditions and fragility that implies.


> There are a lot of resources that tend to be needed to set up a program. There tend to be very few needed once the program is running.

I think this is true for simple programs, and less true the more complex a program is.

What about programs that due to their nature need to frequently make new network connections, or to periodically check config files?

> How would you block all file system access after a program is finished reading its config files, the files they include, and the shared libraries and plugins scattered around the file system? How would you turn off all network access after the listening socket is established?

This could be done with seccomp, although it would be more work than it is to use pledge (although a pledge 'port' also exists). It could also be done with things like SELinux.
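
To sketch the seccomp route (hedged: the helper name is mine, and this uses libseccomp rather than raw BPF), you could keep an allow-by-default policy and turn off new path-based opens after setup, roughly like dropping "rpath wpath cpath" in pledge terms:

    #include <errno.h>
    #include <seccomp.h>  /* libseccomp; link with -lseccomp */

    /* hypothetical helper: call after configs/plugins/sockets are set up */
    static void drop_fs_open(void)
    {
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
        if (ctx == NULL)
            return;
        /* new opens fail with EPERM; already-open fds keep working */
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(open), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(openat), 0);
        seccomp_load(ctx);
        seccomp_release(ctx);
    }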

> Pledge makes everything you're talking about work trivially,

I was talking about more complete sandboxing, and pledge doesn't allow for that. Pledge is substantially more limited in scope.

> the only thing that's needed is for the program to opt in

That's actually a pretty big issue. If all the software you want to use is in the ports tree I guess it's fine, but what about for untrusted or complex code? Say, running an instance of Oracle, or a torrent program that by its nature constantly needs to make network connections and write/read different files? Pledge is little help in these cases, and especially ineffective as any attempt at sandboxing such applications.


> I think this is true for simple programs, and less true the more complex a program is.

Chrome and Firefox have both been successfully pledged and unveiled. What programs more complex than them are you considering?


I gave examples at the end of my previous reply.


> Say, running an instance of Oracle, or a torrent program that by it's nature constantly needs to make network connections and write/read different files?

Yes, those seem relatively simple to pledge (source availability aside); there are a lot of permissions that they should be able to drop once they decide on, say, where the database lives or what files they're saving to. It gets even better if you're willing to privsep the torrent program, though that could take some refactoring.

Note that you can trivially do a looser sandbox around unmodified processes using exec pledges and unveil, even for proprietary code. These kinds of sandboxes need to be permissive, though, since they're not aware of program phases. So they're not nearly as tight as a sandbox written by the developer with knowledge about expected program behavior.
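
As a rough sketch of that kind of looser, externally imposed sandbox on OpenBSD: pledge(2)'s second argument sets execpromises, which an unmodified binary inherits across exec (the binary path and promise list here are made up for illustration):

    #include <unistd.h>
    #include <err.h>

    int main(void)
    {
        /* execpromises are inherited by the exec'd image */
        if (pledge("stdio exec", "stdio rpath wpath cpath inet") == -1)
            err(1, "pledge");
        execl("/usr/local/bin/sometorrent", "sometorrent", (char *)NULL);
        err(1, "execl");
    }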


> It gets even better if you're willing to privsep the torrent program, though that could take some refactoring.

Now you're talking about modifying the code substantially which is out of scope of the thought experiment.

Pledge can't really help with the torrent program since it needs to make new network connections and write and read arbitrary files constantly. Unless as you say, you substantially modify the code.

If substantially modifying the code is off the table, can you give an example of how pledge can prevent an attacker leveraging an RCE in the torrent program? To what extent would they be restricted? You can't, say, limit execution to only certain files/libraries or restrict the ability to delete or overwrite files, right?

> Note that you can trivially do a looser sandbox around unmodified processes using exec pledges and unveil, even for proprietary code. These kinds of sandboxes need to be permissive,

Yeah, I wouldn't consider that to be a sandbox. Imposing limitations on a program isn't by itself a sandbox, nor is every instance of doing so sandboxing.


> Pledge can't really help with the torrent program since it needs to make new network connections and write and read arbitrary files constantly. Unless as you say, you substantially modify the code.

Unveil helps with the "arbitrary files" part. There's a reason Linux is cloning that interface with landlock.


> Unveil helps with the "arbitrary files" part.

How? The torrent program needs read and write access to create whatever files it needs to, which can't be predicted ahead of time.

Imagine a worst case scenario for an RCE in a torrent program, and then what is your best case scenario for pledge and unveil being able to confine an attacker?

Because I'm pretty sure it would be a lot less restrictive than what proper sandboxing can provide.

> There's a reason Linux is cloning that interface with landlock.

Sure, because it has advantages as part of defense in depth. I never said it was useless or without value.

Besides that, from memory Landlock actually preceded unveil, having started development in 2016, so I don't know that it's fair to say Linux is cloning anything if they had a solution first.


> How? The torrent program needs read and write access to create whatever files it needs to, which can't be predicted ahead of time.

The same way it was handled in Firefox, for example; unveil the output dir. At least my torrent program doesn't shit files all throughout my file system. Maybe yours does?
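
Concretely, a minimal sketch of what that looks like (download directory hypothetical):

    #include <unistd.h>
    #include <err.h>

    int main(void)
    {
        /* expose only the download dir, with read/write/create */
        if (unveil("/home/user/Downloads", "rwc") == -1)
            err(1, "unveil");
        /* lock the list: no further unveil() calls allowed */
        if (unveil(NULL, NULL) == -1)
            err(1, "unveil lock");
        /* the rest of the file system is now invisible to this process */
        return 0;
    }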


I meant arbitrary files within the dir, not including any other dirs/files it has to read. So basically it's marginally more effective than a chroot, without any real granularity.

Besides, you avoided the hard question:

Imagine a worst case scenario for an RCE in a torrent program, and then what is your best case scenario for pledge and unveil being able to confine an attacker?

Because I'm pretty sure it would be a lot less restrictive than what proper sandboxing can provide.


Oh, look who is here, Ori!! How is gefs going, and do we have it in the next 9Front release?


> Can you refer to any sandboxing solution or technologies that limit themselves to restricting syscalls and hiding file paths?

FreeBSD has capsicum and Linux has seccomp-bpf & landlock. Of the three, pledge and unveil are the least horrible execution of the same idea (though capsicum is alright, too).

Obviously it's not a "run any random machine code on my system safely" type of sandbox, rather it's used by e.g. browsers as defense in depth so that attackers can't just outright call execl.
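
For reference, the Capsicum flavor of the same pattern on FreeBSD: acquire descriptors first, then enter capability mode (file name hypothetical):

    #include <sys/capsicum.h>
    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("app.log", O_WRONLY | O_APPEND);
        if (fd == -1)
            err(1, "open");
        if (cap_enter() == -1)  /* enter capability mode */
            err(1, "cap_enter");
        /* from here, open() by path fails with ECAPMODE;
           only already-held descriptors like fd remain usable */
        write(fd, "sandboxed\n", 10);
        close(fd);
        return 0;
    }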


> Of the three, pledge and unveil are the least horrible execution of the same idea

I don't think they are the same idea though. pledge and unveil are significantly more limited in scope.

> Obviously it's not a "run any random machine code on my system safely" type of sandbox

That's pretty much what a sandbox is though. Not every limitation or security augmentation is a sandbox, nor does it have to be.


> I don't think they are the same idea though. pledge and unveil are significantly more limited in scope.

The "idea" I'm talking about is privilege dropping. But sure, you can also use seccomp for other things.

> That's pretty much what a sandbox is though. Not every limitation or security augmentation is a sandbox, nor does it have to be.

seccomp-bpf and pledge are both tools used to implement sandboxes in browsers.[1][2] Maybe you don't like this terminology, but it is quite well established at this point.

[1]: https://wiki.mozilla.org/Security/Sandbox/Seccomp

[2]: https://www.openbsd.org/papers/eurobsdcon2022-landry-taming_...


> The "idea" I'm talking about is privilege dropping.

OK. But privilege dropping alone does not a sandbox make.

> seccomp-bpf and pledge are both tools used to implement sandboxes in browsers.[1][2] Maybe you don't like this terminology, but it is quite well established at this point.

A browser sandbox is quite a different thing from an OS level sandbox, which is the context we are discussing sandboxes in.


> Not requiring the cooperation of developers to opt-in, for starters.

This is a good point. My initial thought is that the developers would know best what features their application needs? Furthermore, it's work that every user benefits from, rather than requiring an IT professional to reconfigure for each use case.

What do you see as the downsides?


> What do you see as the downsides?

Legacy software, and new security issues which have not yet been patched. As the operator you have no way to isolate the application; you just have to wait for the developer for both mitigations and bug fixes.

In the real world I don't see that making much of a difference. Many sandbox solutions aren't really being used anyway, and legacy software can go on its own VM and network. Micro-segmentation of networks is in style for those types of companies anyway.

Pledge and unveil seem like more pragmatic solutions to me, but they do require a certain level of care and engagement from the developers. If you don't have that, then things like SELinux, AppArmor, or some type of sandbox that can be applied by the operator can help prevent an attacker from getting a foothold on a server or VM.


> you just have to wait for the developer

If it's open source then it can be patched in the BSD package manager.

Most usage of pledge is there rather than being upstreamed.


I don't think it applies in this exact case (the base OS locking itself down), but in general:

> Furthermore, it's work that every user benefits from, rather than requiring an IT professional to reconfigure for each use case.

A distro could distribute, say, bubblewrap configs that automatically sandbox packages and every user could likewise benefit from that.

> What do you see as the downsides?

I think you're right that devs are best placed to lock down their own code, but if you don't have developer buy-in (or if they just don't want to make the effort or aren't available) then externally-imposed sandboxes are a lot easier for distro maintainers or end users to add after the fact than trying to patch the actual program source to do pledge+unveil or such.


> Not requiring the cooperation of developers to opt-in, for starters.

True, meaningful in the general case, and completely irrelevant in this particular case, which started with specifically the question of OpenBSD applying the protection in question to its own base system. I actually agree that being able to externally impose a sandbox is super useful, but self-imposed restrictions are perfectly applicable in this usecase.

> You really think pledge and unveil are equivalent to sandboxing? Can you refer to any sandboxing solution or technologies that limit themselves to restricting syscalls and hiding file paths? Unveil is a lot more useful as a component in sandboxing, I'll give you that.

I think that pledge and unveil are a type of sandboxing, certainly. And... I'm struggling to think of any sandboxing tech that does anything but limit syscalls and filesystem access.

After rereading https://github.com/containers/bubblewrap?tab=readme-ov-file#... a bit, I suppose there's a case for being able to change what a sandboxed process can see rather than only masking (ex. PID 1 is a different process inside and outside the sandbox), but that strikes me as a slight variation rather than a fundamental difference in what is or isn't a "sandbox" per se.

Likewise, I could see an argument that OpenBSD's approach is coarser than it could be; ex. I think you could restrict a Linux process to keep your real user and be able to read files but not write them even though they're owned by your user and are 644, but that's more of a convenience thing than a true fundamental difference - an OpenBSD process could open files in read mode, keep the socket open, and then pledge away open() altogether, which gives you the same outcome with more legwork.


> which started with specifically the question of OpenBSD applying the protection in question to its own base system.

Agreed, my answers did quickly go beyond the original point being claimed.

> I think that pledge and unveil are a type of sandboxing, certainly.

I think they are limitations and that's about it. They don't fit the metaphor of a sandbox IMO.

> but that strikes me as a slight variation rather than a fundamental difference in what is or isn't a "sandbox" per se.

Fundamentally I see a sandbox as something that is hard for the sandboxed application to escape, or even communicate out from except via limited well defined channels.

I don't think limiting syscalls alone satisfies that.

As a test, I think a robust sandbox should be able to apply to any program, no matter what it is doing.

Do you think pledge satisfies that? What about a complex piece of software that frequently needs to use a number of syscalls that could be leveraged for an attack, and so can't be meaningfully limited with pledge?

If that software has a vulnerability, the attacker now has access to the host system at least to the extent of the user the program was running under. That attacker certainly isn't sandboxed, they don't even have to escape a sandbox because there wasn't really one there, just 'concepts of a sandbox'.

Appreciating this discussion by the way!


It is cute that it only lists security features HardenedBSD has, while ignoring, e.g., many security features OpenBSD has and HardenedBSD does not.


To clarify things a bit: HardenedBSD is a bit like Linux+Grsecurity, but for FreeBSD. Yes, there are differences, but overall it's like Grsecurity+.

BTW: For Linux, OpenPaX is on the horizon:

https://www.phoronix.com/news/Edera-OpenPaX-Announced


OpenPaX looks great, but I can't find a patch anywhere, just an already patched forked kernel tree.



I mean, I don't see anywhere I can download the patches to apply them to a source tree locally.


  git format-patch -29 HEAD --stdout > 0001-last-29-commits.patch
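
(The resulting file is an mbox-style patch series that git am can apply; since each message body is a plain unified diff, patch -p1 generally works too.)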


Interesting, thank you. I don't use git that much, and my own searching turned up ways of creating a patch that seemed like they required git to apply.


"hardening" doesn't mean anything without details. I see that word used several times in the list.


"Security feature comparison", it should be said. Though I would guess it otherwise matches FreeBSD?


Pretty much, yes -

> Introduction

> Founded in 2014 by Oliver Pinter and Shawn Webb, HardenedBSD is a security-enhanced fork of FreeBSD. The HardenedBSD Project is implementing many exploit mitigation and security technologies on top of FreeBSD. The project started with Address Space Layout Randomization (ASLR) as an initial focal point and is now implementing further exploit mitigation techniques.

> Why Fork FreeBSD?

> HardenedBSD forked the FreeBSD codebase for ease of development. Prior to ...


Good to see Secure Boot is on the list. /s



