
Where does CISA/NIST recommend (for software developers) or require (for government agencies integrating software) specific software/operating system hardening controls?

* Where do they require software developers to provide and enforce seccomp-bpf rules to ensure software is prevented from making syscalls it doesn't need? For example, where is the standard that says software should be restricted from using the 'ptrace' syscall on Linux if the software is not in the category of [debugging tool, reverse engineering tool, ...]?

* Where do they require government agencies using Kubernetes to use a "restricted" pod security standard? Or what configuration do they require or recommend for systemd units to sandbox services? Better yet, how much government funding is spent on sharing improved application hardening configuration upstream to open source projects that the government then relies upon (either directly or indirectly via their SaaS/PaaS suppliers)?

* Where do they provide a recommended Kconfig for compiling a Linux kernel with recommended hardening configuration applied?

* Where do they require reproducible software builds and what distributed ledger (or even central database) do they point people to for cryptographic checksums from multiple independent parties confirming they all reproduced the build exactly?

* Where do they require that source code repositories being built contain only data that is 100% inspectable, explainable and reproducible? As the xz-utils backdoor showed, how would a software developer demonstrate how test images, test archives, magic constants and other binary data in a source code repository came to be, and that they are not hiding something nefarious up the sleeve?

* Where do they require proprietary software suppliers to have source code repositories kept in escrow with another company/organisation which can reproduce software builds, making supply chain hacks harder to accomplish?

* ... (similar for SaaS, PaaS, proprietary software, Android, iOS, Windows, etc)
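On the Kconfig point: as far as I know no official hardened Kconfig exists, and the community-maintained kernel-hardening-checker project is the closest substitute. A non-exhaustive sample (my selection, not an official list) of the kind of options such guidance would pin down:

```
# Sample hardening options (non-exhaustive, unofficial selection)
CONFIG_RANDOMIZE_BASE=y
CONFIG_STACKPROTECTOR_STRONG=y
CONFIG_STRICT_KERNEL_RWX=y
CONFIG_STRICT_MODULE_RWX=y
CONFIG_HARDENED_USERCOPY=y
CONFIG_SLAB_FREELIST_RANDOM=y
CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
# Risky legacy interfaces disabled:
# CONFIG_DEVMEM is not set
```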

All that the Application Security and Development STIG Ver 6 Rel 1[1] and NIST SP 800-53 Rev 5[2] offer up are vague statements along the lines of "application hardening should be considered", which results in approximately nothing being done.

[1] https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_ASD_...

[2] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.S...




I don't know how meaningful those countermeasures really are. Like, you're basically looking at the space of Linux kernel LPEs (local privilege escalations), and you're bringing it down to maybe a third of the "native" rate of vulnerabilities --- but how does that change your planning and the promises you can make to customers?


Government agencies have their hands full just getting people to use MFA and not click on phishing emails. You're decades ahead of the herd if you're thinking about coding in seccomp-bpf.


Software sandboxing has a relatively good cost-to-benefit ratio at reducing the consequences of software bugs, which is why it's already implemented in a lot of software we all use every day. For example, it exists in Android apps, iOS apps, Flatpak apps (Linux), Firefox[1][2], Chromium browsers[3][4][5], SELinux-enabled distributions such as Fedora and Hardened Gentoo[6], OpenSSH (Linux)[7], postfix's multi-process architecture with use of ACLs, etc.

Kubernetes, Docker and systemd folk will be familiar with the idea of sandboxing for containers too, and they're able to do so using much higher level controls, e.g. turn on Kubernetes "Restricted pod security standard" for much stricter sandboxing defaults. Even if containerisation and daemon sandboxing aren't used, architects will understand the concept of sandboxing by just specifying more servers, each one ideally performing a separate job with the number of external interfaces minimised as much as possible. In all of these situations, the use of more granular controls such as detailed seccomp-bpf filters is most useful to mitigate the risks introduced by (ironically) security agent software that is typically installed alongside a database server daemon, web server daemon, etc within the same container.
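For concreteness, enforcing the Restricted profile is just a namespace label under Kubernetes Pod Security Admission (the namespace name here is a hypothetical example):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: example-app          # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
```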

Tweaking some Kubernetes, Docker or systemd config is _much_ cheaper and quicker than waiting for software to be rewritten in a safer language such as Rust (a noble end goal). Even if software were rewritten in Rust, you'd _still_ want some form of external sandboxing (e.g. systemd-nspawn applying seccomp-bpf filters based on some basic systemd service configuration) to mitigate supply chain attacks that cause the software to perform functions it shouldn't be doing.
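As a sketch of how cheap that tweaking can be, here is a hypothetical systemd drop-in applying common sandboxing directives to an existing service (the directive selection is mine, not from any standard, and would need tuning per service):

```ini
# Hypothetical drop-in, e.g. /etc/systemd/system/myapp.service.d/hardening.conf
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
SystemCallFilter=@system-service
SystemCallFilter=~ptrace
MemoryDenyWriteExecute=yes
```

`systemd-analyze security myapp.service` will then score how much of the sandboxing surface the unit actually covers.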

[1] Firefox Linux: https://searchfox.org/mozilla-central/source/security/sandbo...

[2] Firefox Windows: https://searchfox.org/mozilla-central/source/security/sandbo...

[3] Chromium multi-platform: https://github.com/chromium/chromium/blob/main/docs/security...

[4] Chromium Linux: https://chromium.googlesource.com/chromium/src/+/0e94f26e8/d... (seemingly Linux sandboxing is experiencing significant changes as this document or one similar to it does not appear to exist anymore)

[5] Chromium Windows: https://chromium.googlesource.com/chromium/src/+/HEAD/docs/d...

[6] https://gitweb.gentoo.org/proj/hardened-refpolicy.git/tree/p...

[7] https://github.com/openssh/openssh-portable/blob/master/sand...



