Desktop Linux Hardening (2022) (privsec.dev)
289 points by pabs3 on June 12, 2023 | 181 comments



I went down this rabbit hole roughly 2 years ago and I quit at the very end, because my focus had shifted from using the OS for something productive to maintaining it, securing it, and being a home sysadmin.

Since then, I’m simply “ignorant” and sane - use it, update it regularly, use official software sources (so official distro repos and Flatpaks), FDE, SecureBoot, don't run random stuff from the net (like scripts, “Git” etc.), stay as close to the defaults as possible, and use a VM to experiment if I really need to.

I am curious - how many of you regular desktop Linux users actually had security issues (or at least suspected something shady)?


As great as Flatpaks are for providing sandboxing, the problem with them is that they are upstream software. When Audacity added telemetry, it only affected binaries downloaded from them, not distro-compiled binaries. When Firefox stops you from loading your own extensions, that's because you're using a binary downloaded from them, not a distro-compiled binary. A decade ago I used to think distro package maintainers were unnecessary middlemen who just introduced failure points (like the Debian OpenSSL fuckup), but today they are truly the last line of defence against malicious upstreams.

Even if something sneaks into a distro package, it's possible to convince the distro package maintainer to disable it, because the maintainer's interests are aligned with yours, not with upstream's.


The example you gave, Audacity, has a flatpak managed by one of the flathub admins, and most definitely does not include telemetry.

https://github.com/flathub/org.audacityteam.Audacity


1. My point was about using upstream software, not about the Audacity flatpak specifically. Audacity binaries downloaded from upstream's website have the problem I mentioned.

2. The Audacity flatpak is maintained by a Flathub dev only because Audacity upstream has not expressed an interest in maintaining it themselves, not because of some Flathub policy to reject telemetry etc. If they wanted to maintain the flatpak themselves they would be allowed to do that, since Flathub policy is to have upstream developers maintain their flatpaks. And that would lead to the problem I mentioned.


But you can get alternatives which have the telemetry removed. Like VSCodium and some Firefox forks. And instead of every distro needing to do those patches themselves, they can share the effort.


Maintaining a firefox fork is a much bigger job than putting `--with-unsigned-addon-scopes=app --allow-addon-sideload` in the package build script.

Distros already share effort implicitly because maintainers of distro X often look at what distro Y is doing for that package. Also users compare distros frequently and will tell the maintainer of distro X that the same thing works with distro Y.


But a user has to know to look for these alternatives. Use a distro you trust and you don't have to.


Conversely Debian broke Firefox's ability to update without requiring a restart.


Can you expand? I thought updates required a restart no matter what platform or distro.


I misremembered. The effect is that you don't get the "Restart required" message, but the implementation is that they just postpone applying the update until your next re-open.

> We also are not trying to force you to install updates before you can keep using Firefox. Firefox's update mechanism downloads updates while it is running and installs them at startup. ... In the case of Bug 1705217, the package manager updates Firefox's files on its own schedule that is outside of Firefox's control ... Firefox had no intention of forcing updates to be installed at an inconvenient time.

https://bugzilla.mozilla.org/show_bug.cgi?id=1761859


That is not a difference in behavior between distro packages and upstream Firefox. If an update gets installed then FF requires you to restart it. So just like the upstream binary's updater doesn't install the update while you're still running it, you shouldn't update the distro package while you're still running it either.


No, if you update upstream Firefox you'll see a prompt inviting you to restart when you want. If you update distro Firefox you'll see a page blocking everything telling you you need to update now.

That's because upstream Firefox separates out downloading and applying the update.


>If you update distro Firefox you'll see a page blocking everything telling you you need to update now.

Correct. Because you installed the update.

>That's because upstream Firefox separates out downloading and applying the update.

Correct. And that's what I'm telling you to do with your package manager as well.


> Correct. And that's what I'm telling you to do with your package manager as well.

Please inform me how to automatically have my distro package manager download updates and install them on the next open. Note that it's critical here that the install is perceptively instant: it means I'm never waiting for my browser to be available.


Now why would I do that? You originally asserted "Conversely Debian broke Firefox's ability to update without requiring a restart" and now you know that Firefox doesn't have the ability to update without requiring a restart. Any further shifting of your goalposts is your problem, not mine.


I did shift the goalposts once because I was wrong. In my second post I expressed that Firefox's built-in update manager still did something much better than the distro package manager.

You asserted that was not the case, so I asked you to prove it. If you don't want to you don't have to, but I'll assume you admit that the distro package manager is in fact inferior.


This doesn't make *any* sense. Even in the example you gave, if Mozilla really wanted to block you from using extensions, they would just remove them from the store entirely.


It doesn't make sense to you because you need to read more carefully. I said "loading your own extensions", i.e. loading extensions not from their store but by dropping into /usr/lib/firefox/browser/extensions a .xpi that you made by running `zip` and changing the filename.
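
e.g. roughly like this - the .xpi filename has to match the extension ID from the manifest, and the ID here is made up:

    cd my-extension/     # the directory containing manifest.json
    zip -r ../myext.zip .
    sudo cp ../myext.zip /usr/lib/firefox/browser/extensions/myext@example.org.xpi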


This is pretty much the best approach, currently, and probably into the far future.

When I need to run a program from a dev I don't fully trust to behave well (e.g. the app is closed source for no particular reason, has known extensive telemetry, or has an unhealthy tendency to fuck with configuration files), I run it in a firejail, container, or reboot to windows.

For everything else I fancy the thought that everything I install being open source and looked at by multiple people including a package maintainer means that there's a significantly lower chance of easily exploitable vulnerabilities (e.g. in system config and general program behaviour), and an almost nonexistent chance of outright malicious code.


I recently shut down my last Linux box and I do not run Linux at home anymore. However, I did run various distros of Linux on the desktop from 1999-2022. Prior to that, I ran Minix and OpenBSD on my desktop machines, starting around 1992. So let's say 30 solid years of Linux on my desktops.

I never ever, never once had any security issue. I never had an issue with malware being installed. I never had an issue with external malicious users accessing my system - not even DDOSing my network.

In fact, I have run Windows as well since version 3.1, and DOS before that, and I've never had a virus or malware on Windows, either. Not a single compromise or glitch at all.

Once, I was a guest at a friend's house and I discovered that he had fallen victim to some sort of replicating virus on his Windows 98 system. I was able to manually eradicate it for him, without resorting to commercial anti-virus software.

A few weeks ago, I did discover that my home router had been compromised and some sort of malicious DNS service had been installed on it. So, I must confess to my first-ever home network compromise, albeit an embedded router OS I had little control over.


> do not run random net stuff (like scripts, “Git” etc.)

Man, half of the cool tools want me to do this

   curl cool-company.io | sh


That's how Rust installs. I'm not that happy with it but at the time (years ago) the distro (Debian) packages were incomplete. From https://www.rust-lang.org/tools/install

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

At least it does not require sudo (unless that's buried in the script.)
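
If you'd rather read it first, it's easy enough to split the download from the execution:

    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs -o rustup-init.sh
    less rustup-init.sh     # read it before running
    sh rustup-init.sh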


It does not require sudo, by design.


Usually I look for an alternative way - like zipped binaries or similar. Most developers offer one.


Stay away from them.


It’s hard to prove a security issue but “shady” describes my experience with most all modern OS’s. I trust Linux the least but it’s my daily driver. I just assume it’s comp’d because at some level (NSA at least) it is. I also try to reinstall a fresh distro at least quarterly, but even on a fresh install don’t trust it. If I can’t trust all of the devices on my network (I don't) then I also don’t trust my network. If that’s true then no device on the network can be trusted. Until I own and administer every device on my network, there’s no way around that. Even if I did trust all the devices I also have to trust the isp and the router manufacturer (which I certainly don’t).


How do you work with a computer you don't trust?


>I am curious - how many of you regular desktop Linux users actually had security issues (or at least suspected something shady)?

Replace Linux with OSX or even Windows in this statement and I wouldn't expect crazily different results from a crowd that follows the best practices you outlined.


Have you got SecureBoot working well on Linux? My last try was with Fedora 34, and I had to manually reinitialize TPM with every kernel upgrade. One of the serious issues that keeps me on Windows.


Don't worry about it: Secure Boot is (currently) 100% pointless on Linux because the initrd is not authenticated.

Once the work described at https://lwn.net/Articles/918909/ lands, this will change, and kernel updates will (hopefully?) no longer require re-initializing the TPM.


Well, all my machines use Arch Linux with custom Secure Boot keys and unified kernel images (essentially, the kernel, the initrd, the command line, and the splash screen fused into one EFI executable and signed as a whole). So on my machines, the initrd is definitely verified. Thanks to Foxboron who made this easy with sbctl.

An entirely different matter is that the default Microsoft keys allow running all other distros, with their GRUB, which allows loading initrds without authentication - which would allow evil-maid style attacks by replacing the whole boot chain and the kernel. So in my world, all builds of Shim and GRUB are malware, and keys that allow booting them are not allowed in the DB.
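
For reference, the sbctl flow is roughly this (the UKI path is just an example; only pass --microsoft to enroll-keys if you actually want Microsoft's keys in the DB, which I don't):

    sbctl create-keys                              # generate your own PK/KEK/db keys
    sbctl enroll-keys                              # enroll them in the firmware
    sbctl sign -s /efi/EFI/Linux/arch-linux.efi    # sign the UKI, re-signed on future updates
    sbctl verify                                   # check nothing bootable is left unsigned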


The TPM isn't involved in secure boot. Could you provide some more details about what went wrong?


It is about full disk encryption with automatic unlock during boot. One needs to make the TPM dependent on a successful Secure Boot before it allows access to the decryption key. The boot completes no problem, but the TPM entry that controls access needs to be manually recreated with each new kernel update.

See https://gist.github.com/jdoss/777e8b52c8d88eb87467935769c98a... , the bit "then auto volume decryption on your next reboot will fail". This makes sense.


Using anything other than PCR 7 is going to make it very fragile, yes - I have no idea why that doc is recommending using PCR 4 as well.
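
With systemd-cryptenroll, binding to PCR 7 only looks roughly like this (device path and crypttab line are just examples):

    # bind the LUKS slot to PCR 7 (Secure Boot state) rather than the kernel/initrd PCRs
    systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p3

    # /etc/crypttab
    root  UUID=...  none  tpm2-device=auto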


To defend against an attacker with physical access to an offline machine you need to verify anything that the attacker can overwrite without the encryption key. Aren't bootloader and kernel on the writable unencrypted partition?


If you have secure boot enabled, how does the attacker replace the kernel or bootloader?


Pull the drive out, insert it into his machine, replace, then insert it back.


And now the signature doesn't match, so the system doesn't boot


Which signature?


The signature that's validated by secure boot. If you don't have secure boot turned on then there's no point in verifying PCR 7, because all PCR 7 contains is the secure boot data.


It is just SecureBoot which is officially supported by many mainstream distros.

I unlock encrypted partitions with a passphrase, not TPM.


I am nuts into linux desktop security, but I cannot imagine recommending this to even other developers, let alone non-dev users. There are just too many usability tradeoffs, the whole thing is a major time waster, and you have to deal with a lot of breakage. Making basic security measures easy to implement is what's going to help in the long run.


Completely agree. I also went through this exercise, having switched from a Chromebook (RIP pixelbook, probably my favourite computer until it died) to Linux on Framework, where I set this up from scratch.

Anyone can say what they want about chromeOS not being a real OS (I disagree, that's off-topic), but it was built to be secure-by-default -- at least with regard to basically everything in this article.

Even with a stock, up-to-date linux kernel (I use arch BTW), there are a ton of gotchas, such as the fact that lockdown mode disables hibernate (!) -- I needed to patch the kernel to tell it I know better (there are some good LWN articles on why they did it; it's because there are far too many gotchas):

https://gist.github.com/kelvie/917d456cb572325aae8e3bd94a9c1...

Chromebook on Framework actually seems like a great value prop in this regard, but of course you have to sell your metadata to big G.


Note that encryption handling on ChromeOS is not that great at all. The wrapped encryption key is protected with the hardware id and user passphrase, and the passphrase gets sent to Google's server without client side hashing.

Essentially, anyone with access to Google's web servers and physical access to your hardware can just unlock and decrypt your device.


Not to derail the thread, but i'm interested in your disagreement. It seemed really restricted for the few moments i played with it.


By default, yeah it's just running chrome on what sometimes seems like hardware that is overkill for it, but then you have:

https://support.google.com/chromebook/answer/9145439?hl=en

It runs a VM that runs LXC on it, and by default installs a Debian container you get root to. They also created an X11/Wayland bridge so that graphical apps work, even 3D acceleration to some degree, as well as volume mounts from the host, and USB passthrough.

I was able to use OpenSCAD on it.

You can even install containers of your choice, e.g.: https://wiki.archlinux.org/title/Chrome_OS_devices/Crostini

You get a full linux container inside your secure Chrome OS, the main limitation being that you have to use whatever kernel their presumably signed VM uses. All this without needing to disable secure boot or booting into a "developer" mode or whatever.


Thank you for the detailed answer. You're right. It is interesting.


Most people disable hibernate because they use ssd’s for boot disks, no?


It's not really boot speed that's the issue, it's just that, like on my work Macbook, when I close my laptop's lid on Friday to catch the train, I expect:

1. The laptop to sleep

2. After some time, it'll hibernate so it doesn't drain the entire battery over the weekend

3. When I fire it up on monday, whatever I was working on is still there.

It's hard to do this without hibernate (or a _really_ efficient sleep).

You actually have to configure suspend-then-hibernate on systemd for this to work right. Again, I still think chromeOS is the future of the linux desktop (again, if you don't mind Google watching over).
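
On systemd that's roughly the following (the delay is a matter of taste):

    # /etc/systemd/logind.conf
    [Login]
    HandleLidSwitch=suspend-then-hibernate

    # /etc/systemd/sleep.conf
    [Sleep]
    HibernateDelaySec=2h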


Hibernate is usually used to allow the computer to shut down completely and not remain in standby, saving battery. It’s not about boot speed anymore.


Yeah, mine too. Getting to the GRUB menu takes probably 90 seconds. Booting from there takes about 10. I'd love to know why that is.


My power on self-test takes an order of magnitude longer than the whole rest of the boot process... so I sleep or hibernate whenever I can.


Lenovo?


> I use arch BTW

Sigh you people...


I feel every privacy & security measure taken, be it on your system or just in your browser config, needs to be understood properly before applying it.

So I think there is value in recommending such articles to everyone interested, as it would open them up to understanding attack/leak vectors and make them aware of those. The fallacy that anything that isn't Windows is safe needs to come with the caveat that blind trust in the basic security policies can lead to disaster.


>I am nuts into linux desktop security

What things do I need to worry about? I'm about to make the switch from Windows to Linux.

Is it generally 'don't run things you don't trust'? Keep things updated? Use good passwords? (Do I need to change a setting to prevent brute-forcing?)

Am I missing anything?


If you run programs that you can't trust (for example because the source is unavailable and/or nobody reads it, no one gatekeeps program updates, etc.), then you are opening yourself to attack, because the default Unix security model is just as bad as the Windows one. (The Windows one used to be much worse, but it isn't anymore; now they are equally bad.)

Nowadays, the web browser is used as a platform to run programs on, and so it has a lot of access to weird things (multiple monitors, local filesystem, ...), so you have to be careful what you enable there--since you probably run a lot of Javascript written by shady people you don't know.

The Linux desktop security model has user accounts, and then files have permissions for those users (yeah, there are also user groups, whatever). That means if you have a misbehaving (or malicious) program, it can read stuff in your home directory since that's owned by the same user account--so it can read your saved passwords, delete all your photos, figure out your banking info--but hey, at least it can't remove your printer (since for the latter you need to use the root user account :P) /s .

There are mitigations against that, and the easiest (and crappiest) one is containerization: just run each program in an isolated virtual-machine-like thing and it can't access other programs' data (i.e. the rest of YOUR data) in the first place.

Then there's SELinux, which flips this entire thing on its head. By default, nobody can do anything (that's the first very good decision!). If a process wants to do things, there has to be a policy installed that mentions that specific thing and when it is allowed. Otherwise, no go. This way of working is much safer, BUT someone has to maintain those policies! And it must not be the program's author, since he could just add whatever lines he wants to the policy--and that's obviously bad. So who does it? Usually the Linux distribution's maintainer. Or, more commonly, nobody--so there's no policy for your favorite program and so it won't work.
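
When a policy is missing or too strict, the usual workflow on SELinux distros looks roughly like this (the module name is just an example, and you should read the generated .te file before installing anything):

    ausearch -m AVC -ts recent                          # show recent denials
    ausearch -m AVC -ts recent | audit2allow -M myapp   # draft a local policy module
    semodule -i myapp.pp                                # install it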


Containers are not virtual machines.


But a container is a form of virtualization.


No. Just a form of setting up namespaces. Nothing is virtualized.


Although I use the term "virtualization" the same way as you want to understand it, containers are still called "virtualization", as this wikipedia page suggests:

https://en.wikipedia.org/wiki/OS-level_virtualization

Good to know about it. Especially once you meet folks with decades of experience (mainframe guys): they use the word "virtualization" in its broader sense, yet they understand the difference between containers and virtual machines.


Userspace namespaces/chroots/jails are NOT virtualization, period. Plan9/9front is composed of/runs on namespaces, and no one would say every window(1) process is being virtualized because it has instances of different Plan9 devices such as its own /dev/draw.


Here's what those IBM guys have to say about container and virtualization:

https://www.ibm.com/cloud/blog/containers-vs-vms#:~:text=Con....

Carnegie Mellon University chimes in as well:

https://insights.sei.cmu.edu/blog/virtualization-via-contain...

Amazon AWS:

is container virtualization


> Amazon AWS: is container virtualization

Well, if you rent containers, you get containers. If you rent bare-metal machines, you get bare metal machines. So I don't see how this assertion makes any sense.

The CMU article skims around the names but calls containers a "virtual runtime environment" instead of straight "virtualization". But by that definition everything in any modern computer is "virtualized" because it runs over Virtual Memory.


Essentially just a very robust deployment of a chroot jail.

It can totally happen on a VM, but it's not a VM in and of itself.


Then you don't have a container. You have a VM with a container.


"read scripts before you run them" is a good bit of advice. Some little things can help you protect your GUI apps if you're extra paranoid (eg. Wayland is more secure, Flatpak offers additional sandboxing with bubblewrap) but not everything is necessary.

If you use a mainstream distro like Fedora or Ubuntu, its default configuration should be safe enough to use like Windows. Just remember that there isn't really any antivirus or Microsoft Defender to save you if you really mess up. Partially for this reason, I do offline backups of important work on all my devices.


> "read scripts before you run them" is a good bit of advice.

I always read the script and also verify that it only uses ASCII chars (and if it doesn't, I check what Unicode shenanigans are going on).

For example:

    ... $ file /usr/local/bin/somescript.sh 
    /usr/local/bin/somescript.sh: Bourne-Again shell script, ASCII text executable

FWIW most of the scripts out there are only ever using ASCII chars.


How do you check it without running something that may execute it? Many scripts will execute if opened with vi or even cat or printf from what I understand.


> Many scripts will execute if opened with vi

I haven't encountered this. You can execute scripts intentionally in vi, and I assume you could configure it to happen automatically, but can this happen without you making some sort of effort to enable it?

What scripts present such a risk? What vi configuration is required to stop this?

Same question wrt cat.


Source for executing files with cat? That certainly defies my understanding of how cat works.


Maybe they mean that it can include control characters that mess up your terminal and make it look like something malicious is happening, which of course is not the same as executing it (and has a warning anyway).


(n)vi(m), cat, less/more do not execute scripts on read. That would be utterly insane.


>Some little things can help you protect your GUI apps if you're extra paranoid

Can you explain this? What are the exploits?

Is it the equivalent of running a .exe or .jar on windows, but with an extra layer of protection? (Or am I trying to shove my windows ideas into linux?)


> Can you explain this? What are the exploits?

Traditional desktop Linux allows different apps to more-or-less freely communicate with each other (via the X11 protocol). This works really well for old apps (and admittedly not too badly for new ones) but is very slow and exposes a lot of bugs. On top of this, you have potential exploits in your filesystem from loosely permissioned important settings and identity files. So, the attack surface for a traditional Linux distro is relatively large.

> Is it the equivalent of running a .exe or .jar on windows, but with an extra layer of protection?

Kinda! Linux software is distributed as packages, which contain a binary (the exe/jar portion), a dependency manifest and whatever static content the app needs. A sandbox like Flatpak will take those packages and put them into a sandbox, to prevent hostile interprocess communication and filesystem exploits.

None of these mitigations are perfect per-se (nor are they on Mac/Windows), so use them wisely if you intend to use them at all. I have used a non-sandboxed system to run old 90s Windows games for years though, and haven't picked up any significant issues. YMMV, it is Linux after all :P
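
If you do use Flatpak, each app's permissions can be inspected and tightened with a couple of commands (the app ID is just a placeholder):

    flatpak info --show-permissions org.example.App
    flatpak override --user --nofilesystem=home org.example.App
    flatpak override --user --nosocket=x11 --socket=wayland org.example.App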


(U)XTerm can lock input to a terminal so snooping on it is impossible.


That doesn't help when a bunch of disparate credentials are entered and cached in a shared runtime environment inside the same bunch of processes (aka: your browser).

Obligatory XKCD: https://xkcd.com/1200/


You have pledge and unveil under OpenBSD.

Or, if you are truly paranoid... just create another Unix user for sensitive web browsing and log in with that account:

    useradd -m webcare

    passwd webcare

Done.


>Traditional desktop Linux allows different apps to more-or-less freely communicate with each other (called the x11 protocol).

Oh gosh. I'm getting heart palpitations :P

Is this a common attack vector then? Seems like an easy way to get an overflow error, or elevate privileges.


There's also a newer protocol called "Wayland" which aims to avoid some issues with X11 (and is also a general architecture overhaul). GNOME and KDE support it out of the box and iirc at least GNOME also enables it by default.

There are some tradeoffs though. Some things like screen sharing might be worse on Wayland (e.g. I can't share a single window, only whole screens) and some apps don't support Wayland at all (including for example all of the IntelliJ IDEs). The latter can be circumvented by enabling XWayland (basically an X server running side by side with the Wayland compositor to handle X11-only apps). In that case you're left with the X11 security model for apps not running natively on Wayland.


And Wayland doesn't support being able to have multiple logons, each with their own independent desktop, simultaneously like X does.

I just like to bring that up even though it's not a common use case, because that's what keeps me from using Wayland.


Interesting. Do you know why? I've only ever tried launching nested instances of Sway (the compositor I use) which each use their own sockets and render as a client to the parent compositor and that worked fine. Does it have something to do with rendering to a real display?


I remember this being brought up early in Wayland development. The stance was taken that it wasn't a popular enough use case to be worth putting work into. I don't think there's any real technical reason why it couldn't be done, but I don't know. I'm not familiar with the architecture of Wayland.


I think seatd allows this now? So it works on the rather small sliver of distros that don't run systemd.


> And Wayland doesn't support being able to have multiple logons, each with their own independent desktop, simultaneously like X does.

Not sure what you're referring to. I'm using wayland and I can open multiple desktops via gdm.


I think he means multiple simultaneous seats, and IIRC this is a limitation of systemd-logind, not wayland itself (it works on seatd, though that makes you manage your own XDG_RUNTIME_DIR, so I guess pick your poison).


multiple seats of the same user?


Sure, if you're willing to multiplex your own XDG_RUNTIME_DIR. Hence "pick your poison".


There's whole classes of software that just don't make sense on Wayland. Most DAWs won't work because the whole point of a DAW is "be a big box that hosts docked windows of out-of-process audio plugins" which just isn't a concept Wayland has; the only way it would work would be to have Reaper or Ardour or whatever be its own compositor. Maybe somebody some day will write a "libsubcompositor" or something; IDK.


Not common at all. Because most software comes from distributions, not from warez.exe.

Sandboxes come with a heavy usability price, so they are lax by default.

For example you protect your files, but then you can't upload your photos to Facebook because your browser doesn't have access.


"more-or-less" is doing a lot of work there. It uses a cookie-based authentication system (in fact I think that's where web browsers got the word "cookie" from) but it was intended for trusted networks; when it's done in other situations it's via kerberos or over an SSH tunnel.

It's a pretty elegant setup, honestly, and one thing I miss when I'm on Wayland. The idea is you're at a small desktop on your University's network and you need to use some of the computing power of the big data servers they have. So you run e.g. a graphing program that does all of its computation on the big data machine but does its display on your little desktop (in this setup the big data machine is called the "client" and the tiny little desktop is called the "server" which confuses people a lot at first but makes sense when you get into the architecture).

X itself had been moving away from this for years by adding some extensions that don't work well (or at all) over the network, but there are absolutely still lab setups that use the old way because it's pretty much tailor-made for it.

The bigger security issue was that X programs could see and modify certain things about each other and about the system as a whole, so for example the program XEyes had a pair of eyes that pointed at the location of the cursor (it sounds silly but this was an accessibility thing). And a screenshot program could take a shot of the screen. And so on. Wayland got rid of all those capabilities, but there turned out to be a lot of baby in that bathwater so they're slowly adding back each of them, one at a time. In another decade a new crop of developers will no doubt say "What's all this needless cruft?! Let's start over!" and the cycle of life will continue.


The core three tenets are drive encryption (easy), regular wipes (10h/year), and sandboxing (crazy expensive at the moment). That gives you decent security.


If you have a basic understanding of mounts, sandboxing a program in bubblewrap is pretty easy. And of course there are pre-made solutions like firejail.
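
A rough sketch for a CLI tool with no network access (paths are examples; assumes a merged-/usr layout):

    bwrap --ro-bind /usr /usr \
          --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
          --symlink usr/bin /bin --symlink usr/sbin /sbin \
          --proc /proc --dev /dev --tmpfs /tmp \
          --bind "$HOME/sandbox-home" "$HOME" \
          --unshare-all --die-with-parent \
          ./untrusted-tool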


>sandboxing a program in bubblewrap is pretty easy

It's not easy. Just getting the mounts right is not sufficient. You also need to consider syscalls. E.g. TIOCSTI is permitted by default and allows an easy sandbox escape. See [0]CVE-2017-5226. TIOCLINUX can also allow sandbox escapes.

This can be prevented with the --new-session option, but that breaks interactive terminals, e.g. it causes ctrl-C to restore input to the parent shell while simultaneously running the child shell on the same terminal with the same inputs, which makes it unusable.

The alternative is compiling a seccomp BPF program to filter the syscalls, and loading it with the --seccomp option, but even this is non-trivial, e.g. see [1]CVE-2019-10063. This is also a case of "enumerating badness"; there's no guarantee that other syscalls are not also exploitable.

[0] https://nvd.nist.gov/vuln/detail/CVE-2017-5226

[1] https://nvd.nist.gov/vuln/detail/CVE-2019-10063


Ah it sure does feel good to be humbled. Glad to know my personal setups are not affected as they do not give access to the outer tty but at the same time - WTF is going on inside developers heads!?

I've noticed a trend of adding N deeply intertwined features which then interact in N! different ways and no one bothers to define rigorous semantics for them. And then you have to get a PhD in Linux just to understand whether your system is vulnerable when you use POSIX message queues inside a FUSE filesystem inside an unprivileged user namespace inside chroot inside setarch running in 9 nested terminal sessions.

I think I'm starting to understand why some people swear by the BSDs.

Edit: just checked and OpenBSD has indeed removed TIOCSTI in 2017.


It's the result of 30 years of tacking on half-assed shower thoughts to grampa's Unix APIs from the 70s.

0% design and 100% scratch-my-itch tack on another feature.


Not to mention, there's a lot of things that you can mount inside a bubblewrap sandbox that will lead to a trivial sandbox escape. The X11 socket, the DBus socket...


Sandboxing a program in Bubblewrap is trivially easy if it's a CLI program that doesn't need network access.

Sandboxing a program that needs graphics performance anywhere near 10% of what the hardware is capable of is, at the moment, as we're stuck with X11 and growing gray hair waiting for Wayland to become a viable alternative, literally impossible.

Firejail does provide pre-made solutions like running nested X servers, which can sandbox graphical applications at insane resource costs, such that you might as well just run a full virtual machine.

Mount namespaces are security theater if you're giving access to X11, Pipewire, the SSH agent, the GPG agent, the dbus session bus, and god knows what else.


How effective do you think the access control mechanisms in Wayland/PipeWire will be? I'm not sure how widely used they are right now but it seems like a step in the right direction rather than allowing all Wayland clients all the privileged extensions (screen recording, input methods) nullifying most security benefits over X11. Also xdg-dbus-proxy exists for dbus though its security depends on what peers you allow it to talk to.


OTOH it is amusing to watch Wayland re-adopt, one piece at a time, all the "cruft" they started the project to get away from. Embedding will probably be next.


Wayland works okay for me so far!

All you need to do is share the compositor's socket and /dev/dri with the "container" and it "just works" for me!

I do not have xwayland enabled either, all of the software I use is wayland native!


What is the 10h/year metric?


All three of those are great things to do. The fact that you know to mention them makes me think you don't need to worry much. When Linux people say they're into security, it usually means research level computer science, not "I don't want my laptop to get malware".


The only caveat I'd add: I think we shouldn't assume every user isn't a VIP.

I'm not a VIP, but I have some huge huge huge secrets that are worth at least a million dollars, maybe 10s of millions of dollars, that I need to access with a computer.

Malware means potentially losing millions or tens of millions of dollars. I'm so afraid of those secrets, I don't even access them on my windows computers unless necessary.

I'd love to be comfortable doing this, but given anyone can seemingly make a python keylogger, I'm not really comfortable using my computer(windows or linux) for this purpose. Gaming/updating my website/etc... all of that, who cares if I'm hacked, I got backups. I just havent found a great way to keep secrets that need to go online.


I'd recommend not taking any security advice from internet forums at all if millions are on the line, regardless of OS. I'd hire a professional to harden the computer I want to use for that and wouldn't use it for anything else. An apple device would probably be my go-to there over any Linux system.


All security (computer or otherwise) is a balance between being usable and being secure. The ideal balance point is a very individual thing.


I would throw hardware security keys in there. I use them for two things (in the context of UNIX security): local PAM authentication through pam-u2f, and the SSH *-sk key types.

The first one allows me to almost never type in my local password (and use a 7-word diceware that is impractical to bruteforce). This should prevent many local privilege escalation scenarios by things like shitty npm packages. I used a trivial password for years because it was tiresome to type in dozens of times a day. Now I just type `sudo -i` and tap the hardware key. (I already use containers and firejail to prevent them from accessing my personal files, so this only leaves zero days).

https://wiki.archlinux.org/title/Universal_2nd_Factor

The second one provides a second factor for my SSH keys that makes them completely useless without a corresponding hardware key. You need OpenSSH 8.2 or later on both sides. `ssh 1.2.3.4` and tap the key (or `ssh-add` it to the key agent — but it's less secure).

https://gist.github.com/Kranzes/be4fffba5da3799ee93134dc68a4...

https://developers.yubico.com/SSH/Securing_SSH_with_FIDO2.ht...
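
For anyone who wants to replicate this, it boils down to roughly the following (paths and the PAM file may differ per distro):

    # FIDO2-backed SSH key (OpenSSH 8.2+ on both ends)
    ssh-keygen -t ed25519-sk

    # pam-u2f: register the key, then require a tap for sudo
    pamu2fcfg > ~/.config/Yubico/u2f_keys
    # and in /etc/pam.d/sudo:
    #   auth sufficient pam_u2f.so cue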


Do you believe that using an external hardware key is actually more secure than using an internal TPM whose short-period unlock is triggered by direct hardware signals (from e.g. a fingerprint reader) that software can't simulate, for your use-case?

Personally, I could only see the security benefit if the hardware key and the laptop are stored separately — if e.g. a three-letter agency robbed my house, they'd get both anyway. But that's entirely impractical if the HSK is used for every PAM auth, rather than just to do "special" actions (in the way that e.g. hardware crypto wallets are used.)


I'd need to get a TPM first, which was worse by every metric compared to external keys last time I compared them.

FIDO keys are already here and can also be used for web authentication (which is their main use case. this is just a nice add-on).

They can also be used to conveniently unlock LUKS volumes which I completely forgot about since I'm not using LUKS:

http://0pointer.net/blog/unlocking-luks2-volumes-with-tpm2-f...


In reality they're probably good enough for most users - but in theory any software-based solution can get exploited, so an external key that requires user action is certainly more secure (and not significantly more inconvenient to use - you need to press something in both cases).


Put another way, if someone has broken into my house and has physical access to my desktop, the hardware token is moot -- they can just take the box. And the token.

But someone can, in theory, break that software from multiple timezones away


But I wasn't talking about a software solution. I was talking about something that works like the touchID on Macbooks, where there are dedicated traces running between the fingerprint reader and the TPM's SoC pins such that "software" can't tell the TPM to unlock. (You'd instead need a malicious signed firmware update for the fingerprint reader.)

(Also FYI, this is why Apple cryptographically "pairs" device fingerprint readers to their TPMs, such that you can't just replace them without having Apple "activate" the new one. It's so that bad actors who acquire your laptop can't just quickly swap out the fingerprint reader for one that always puts "good fingerprint, please unlock" on the signal line.)


No, they cryptographically pair the hardware because it makes repair impossible. If they only cared about security you would be able to use a new fingerprint sensor or camera in an old device after wiping/factory resetting it. They have even started pairing screens and batteries, which are not security devices.


Apple doesn’t care about repairability one way or another. The thing they care about, that makes it seem like they hate phone repair, is that there are gangs of pickpockets who steal phones and send them in bulk lots to China, where they’re scrapped for parts to use to repair other phones, or to build phones or other devices that use phone parts. (Search “my stolen iPhone ended up in Shenzhen” if you don’t believe me. This is a whole thing.)

Apple borrowed the cryptographic pairing system they created for security in the fingerprint reader, and reused it for the display et al, to make stealing iPhones to scrap them for parts pointless. This has massively decreased the value of these phones on the black market (all you can really extract now are the low-value bits like the speaker or charging assembly); which has in turn made iPhones the least desirable target for thieves.

Every hurdle you have to jump to take part in Apple’s self-serve repair program — the “phone Apple to activate the pairing of these parts” step, the only being able to order parts once you have a specific broken device to order them for, etc. — is the way it is precisely so that the people who scrap the stolen phones can’t participate.


Linux security is surprisingly weak, and a mess.

Are users expected to search for hardening guides, spend time learning all these pieces and implement them securely? Even system administrators may not know tools such as SELinux, let alone end users. It takes a lot of time to learn to write SELinux or AppArmor policies.

We need an operating system that is already hardened, preconfigured with secure defaults, and can be a daily driver. It should not require the user to “harden” it.

I don’t mean OpenBSD: it’s secure because it doesn’t enable much.


> We need an operating system that is already hardened, preconfigured with secure defaults, and can be a daily driver. It should not need the user “harden” it.

It'll look like macOS does out-of-the-box. I think everybody here who's not already on macOS hates that, judging from what people post about it. Even though you can disable most of that stuff if you need to.

Probably nigh-impossible to get such a thing working with decent UX all the way up through the GUI layer, on Linux. Too fragmented. The pieces are there, but no-one can say "THIS is how it works". I think we're stuck with the current situation, until/unless RedHat decides to create & push something for this that further cements their control over the Linux userspace (which... yeah, that actually might happen, and it'll probably be pretty bad because they seem to have really poor taste when it comes to software design, but we'll all end up stuck with it regardless). Maybe we'll see most distros shipping and enabling "systemd-security" when RH gets around to making it, and ensures that Gnome is extremely hard to package without it.


Basically, a distribution that implements a system similar to Android (which is based on the Linux kernel) for desktop or MacOS.

It will take some flexibility from the user, and might annoy the user with permission approvals, but that’s what is needed.

I’m surprised that there are so many similar distributions, yet a secure MacOS-like distribution doesn’t exist.


> We need an operating system that is already hardened, preconfigured with secure defaults

There are Linux distros that do this.

> It should not need the user “harden” it.

Well, the problem is that the more hardened a system is, the more of a pain in the butt the system is to use. Most people (even those who are security-minded) don't want to use a fully "hardened" system, at least as a daily driver.


> Well, the problem is that the more hardened a system is, the more of a pain in the butt the system is to use.

That may be true for Linux, but I think macOS proves that it can be done.


Maybe? I don't know. The reason I don't use Apple machines is primarily because I find them a pain in the butt to use.

(Also, MacOS is really BSD, isn't it? So there's not a million miles between that and Linux, security-wise.)


> Maybe? I don't know.

FWIW, macOS is my daily driver, and the last security-related hassle I can remember was related to installing an iSCSI kernel extension. IIRC it added 5-10 minutes to the task.

> (Also, MacOS is really BSD, isn't it? So there's not a million miles between that and Linux, security-wise.)

I wouldn't think so, but my Linux knowledge is shallow enough that I wish there were a secure-by-default Linux distribution that I could safely recommend to neophytes.


> macOS is my daily driver, and the last security-related hassle I can remember was related to installing an iSCSI kernel extension.

There are probably quite a few security-related hassles that you encounter every day, but you're just used to them and they don't bother you.

Which is totally fair. In practice, the most intuitive and easy-to-use systems tend to be the ones we have used for long enough that they are second-nature.

> I wish there were a secure-by-default Linux distribution that I could safely recommend to neophytes.

There very likely is such a thing, honestly. But for the average user, the most popular distros are very likely "secure enough", especially if Windows-level security is considered acceptable.


You either get a mess like iOS/Android that take everything away from user in the name of "safety", or yet another complicated beast like Qubes that has its own tradeoffs.



Something like Genode has the correct security foundation, though I'm not sure how practical it is to use as day-to-day OS yet: https://genode.org/


> I don’t mean OpenBSD: it’s secure because it doesn’t enable much.

If you want it easy for the average joe, it's going to be measurably more insecure. If you want it secure, it's going to be barebones by virtue of intentionally having reduced attack surface. That's just how it goes. Unfortunately, I will concede that default Linux is in an awkward spot of being too insecure to be considered "secure enough", and too difficult for newbies to configure properly.


If you are running Linux then 9/10 times the expectation is that you are the system administrator. This also means any hardening/security burden is placed on you.

And besides, the whole community is built on user freedom and security-for-everyone can sometimes get in the way of that goal.


One security expert a while back recommended grsecurity. From what I gather, it prevents permission exploitation in the first place and does so in a way that is pretty robust (can't remember the principle on how it does so)

Otherwise, I'd say forget about trying to secure Linux and just go with QubesOS unless you need 3D graphics acceleration

Edit: QubesOS allows you to run at least some conventional brands of Linux out of the box, but it allows memory-efficient VM isolation between them, and some other really cool features, overall making securing stuff much simpler than installing a bunch of random tools from different parties


I nuked my Ubuntu Mate laptop and installed Qubes, just to get a feel for it. It's significantly more maintenance than a vanilla distro, because of all the separated environments, but if I ever wanted to return to Linux on the desktop as my main OS, I'd absolutely use it. It's the only BYOD/WFH solution I've found that feels even close to secure enough if you don't want to run physically separate PCs.


Security wise, QubesOS is better than separate PCs since (at least in part) isolating the network card into a separate VM prevents it from having direct memory access to the whole system (I think devices do have DMA?)

It also provides a better way to communicate between VMs through simple RPC commands rather than hoping USB device drivers are not malicious

In terms of maintenance, I'm pretty sure you could have only one templateVM for everything, which means you only have to update dom0 and that templateVM. So in terms of maintenance that's really not that much more I guess?

I think I might try that myself actually

If you need persistence in the root filesystem, that could mean a standalone VM or a new VM. Last I tried I had trouble with their AppVM solution on that


Not a pro, but I'd like to point out that privsec is part of the PrivacyGuides/GrapheneOS mob. I don't want to put them down in any way but that group causes a lot of drama. They are Tommy_Tran/B0risGrishenko and moderator of r/PrivacyGuides.

Accusations have been made about them and by them regarding other privacy advocates. None of that has anything to do with the quality of the post, but all sides of that drama seem to be very stubborn and immature when it comes to criticism which is a red flag in my book and takes all credibility away from them and suggestion they would make.


Are you TAJ? The only people who refer to me as B0risGrishenko (my old Reddit account for a few months) are either TAJ or the crazy people on r/privatelife who have no idea what they are talking about.

Quite frankly, anyone looking at that subreddit will know that it's just a bunch of incompetent and incoherent rambling from a few guys with quite literally no technical understanding. They've been doing this for years now, spreading terrible privacy and security advice and spearing developers with these cheap shots when they cannot argue on the technical front.

PrivSec is not a part of either PrivacyGuides or GrapheneOS. I am currently a moderator on GrapheneOS, and have not been associated with PrivacyGuides for almost a year now.


This is why you have Arch wiki - https://wiki.archlinux.org/title/Security. It is a living and breathing document which is updated and maintained for a long time now. It also has proper warning on usability and practicality effects. And how to do it.

Everything mentioned here should be attainable in any Linux distro.

Linux is like Firefox. Just as Firefox comes with Google as the default search engine and other bad defaults, yet is also the most configurable browser when it comes to providing privacy to end users: Linux defaults suck, but it is also the most configurable when it comes to security.

And anyone recommending ChromeOS or MacOS are trading off privacy for security. Especially if you are recommending ChromeOS. Motivation and agenda > technical merit. Linux provides a good balance here.


> You should make sure your system receives microcode updates to get fixes and mitigations for CPU vulnerabilities like Meltdown and Spectre.

These cause a 10%-20% performance drop and I see no reason to enable them on a desktop. They can be turned off using mitigations=off on your grub command line.
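
If you do want to turn them off, it's roughly this (Debian/Ubuntu shown; other distros regenerate the GRUB config differently):

    # /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"

    # then regenerate the config:
    sudo update-grub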


They were shown to be exploitable by JS (though I think V8 had a mitigation, WASM probably not?)

Anyway, modern chips have hardware fixes, these microcode updates and kernel retpolines etc. should be less impactful if your CPU was made post-meltdown.


I've seen these claims, but I've never seen a publicly available demonstration of these exploits.

Converting random bits into meaningful data would seem to be the sticky point.


I’ve seen a handful of proof of concept exploits over the years for the speculative execution bugs, none of them really worth caring about.

The most impressive was a local exploit for spectre which on certain kernel versions could leak /etc/shadow, and a similar one for windows that sometimes could leak a ntlm hash for local admin user. Both were from a commercial pentesting software company.

There’s claims of more impressive ones in academia, but no fucker in academia is sharing their code as usual.


Can you share a link to those claims from academia?


A modern desktop has all kinds of sandboxes and other privilege boundaries including the browser, system services restricted using seccomp, various application level sandboxes like Flatpak/Firejail/...


I think the main limitation is currently in Flatpak sandbox. Lots of apps assume control over the whole computer (IDEs, Rust, Android tooling). Lots of apps still rely on X11, which allows trivial sandbox escape. And then a lot of apps just ask for full access to your home folder or even to the whole filesystem, just so that they can open documents. Even if you try to restrict it manually, you can only make an access blacklist, not a whitelist. It's like trying to plug holes in a sieve. You are one update away from Flatpak poking another hole in the sandbox at the request of the app developer.


JFYI, firejail can do all of that. It can start each application in a separate xserver (giving you a choice from several implementations), deny access to everything with whitelisting (or do the reverse — your choice). The profiles are really easy to read and write, and it already covers a lot of popular stuff.


Still hoping for something other than Firejail to come along: The way profiles are designed is brittle and an order-dependent override hell, and it's common to use denylists vs. allowlists.

Firejail is a setuid binary with tons of attack surface and has had a number of privescs itself[1], unlike something like the lower-level Bubblewrap tool.

I'm glad it exists, though - it's the only mature solution other than Flatpak that has proper dbus filtering, among other things.

[1]: https://www.cvedetails.com/vulnerability-list/vendor_id-1619...


> And then a lot of apps just ask for full access to your home folder or even to the whole filesystem, just so that they can open documents.

Therein lies IMO the biggest problem with application security on desktop Linux. Just like with SELinux, AppArmor and other similar security tools, *someone* has to globally deny access to permissions, break things in the process, and find the minimum set of permissions required for the app to function.

Many users are not going to do this and (lazy or uninterested?) developers (package maintainers?) don't want to do that work either.


AppArmor policies can be a few hundred lines. Do people write these from scratch, or is there a repository of good policies?

Couldn’t package managers install and enforce such policies?


Debian has apparmor-profiles but it’s too immature to enforce by default.


And the various portals are used to satisfy user consent implicitly. If you are clicking something in the open file portal the permission is granted. I found this pretty weird when I first encountered it.


The link near the top of the article that goes to a separate page explaining security problems with Desktop Linux is quite interesting.

The article itself starts off much more focused on privacy than on security though it does have a fair bit of content on both topics.


>Most Linux distributions have an option within its installer for enabling LUKS full disk encryption. If this option isn’t set at installation time, you will have to backup your data and re-install, as encryption is applied after disk partitioning but before filesystem creation.

If you have a live CD and enough spare disk space, you can create a new LUKS partition and dd / filesystem-specific-backup-restore your existing partition into it. You don't have to backup and reinstall.

If you're already using systemd-gpt-auto-generator to auto-detect the root partition, you just have to make sure the new partition has the expected UUID. Depending on your setup you might have to regenerate the initramfs though, say because it didn't already contain `/usr/bin/cryptsetup` etc.
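
Roughly, assuming /dev/sda2 is the old root and /dev/sdb2 is the new (slightly larger) partition - adjust to your layout:

    cryptsetup luksFormat /dev/sdb2
    cryptsetup open /dev/sdb2 newroot
    dd if=/dev/sda2 of=/dev/mapper/newroot bs=4M status=progress
    # then fix /etc/crypttab, fstab/UUIDs and regenerate the initramfs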

>openSUSE uses a unique ID to count systems, which can be disabled by deleting the /var/lib/zypp/AnonymousUniqueId file.

Deleting it will not help since it'll get recreated by the next command that needs it. Empty it instead.
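
e.g.:

    sudo truncate -s 0 /var/lib/zypp/AnonymousUniqueId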

>Encrypted /boot [...]

>- openSUSE uses LUKS1 instead of LUKS2 for encryption.

>- GRUB supports PBKDF2 key derivation only, not Argon2 (the LUKS2 default).

Yes, the first point is because of the second point. There isn't a security difference between LUKS1 and 2 when using PBKDF2, so sticking with LUKS1 means you can't accidentally switch to non-PBKDF2 and end up with an unbootable system. But yes, switching away from grub as the next point talks about is ideal. And if you switch to UEFI boot with UKIs in /efi then you won't need a separate encrypted /boot anyway.


LUKS/cryptsetup knows how to in-place encrypt and decrypt if you give it a place to put some metadata (16 MB in my experience):

    cryptsetup reencrypt --encrypt -q --header /some/where/file /dev/mydev
    cryptsetup reencrypt --decrypt -q --header /some/where/file /dev/mydev

(Just used that to encrypt a server disk during shipping and decrypt it afterwards once it reached its destination datacenter)

I haven't looked into it, but it might be possible to convert an unencrypted disk to an encrypted one.


> Deleting it will not help since it'll get recreated by the next command that needs it. Empty it instead.

Interesting. Are you sure about this? The Wiki says it can be deleted and I did not see it coming back during my time with openSUSE. I can check it again later though.


Yes I'm sure, since I went through the same process of discovering it myself some months ago. I just tested it again to be sure.

It's also covered in https://forums.opensuse.org/t/zypper-uuid/135752


Oh wow. I will update the post later today. Thanks for pointing it out!


I updated the OpenSUSE wiki page too.


If you want a hardened Linux desktop right out of the starting gate, give QubesOS a try.

Your privacy is already secured for the most part, and it keeps your work and personal stuff separate.

It is what i use to ensure that Firefox is started with empty data cache/file-store but populated with bookmarks.

https://www.qubes-os.org/


Yeah, Qubes feels just a few versions away from being really very nice to use too. But there's still quite a few pain points for me, most notably anything GPU related. And also a general lack of detailed documentation that's quite frustrating. The default security can be improved a lot too. Running a full-blown linux distro in net-vm and firewall-vm is sort of silly, really. Leaves a lot of unnecessary attack surface, and wastes a bunch of memory too.

There's also qubsd, which is a Qubes-like thing for FreeBSD leveraging ZFS, jails, and bhyve. It looks really promising, but is still in a lot of dev flux. Definitely gonna switch to it when it's solid enough, though, because it looks like a better way to set up your own custom Qubes-like system.


I've been daily driving it for 5 years now on an old Lenovo Thinkpad. No major issues the whole time. I think people solve the GPU issue by using multiple GPUs, that way one can be dedicated to dom0 and the other can be passed through to an HVM OS. People play games in a windows vm, within qubes, so it is do-able. I don't have the hardware to test this out myself, all my machines are single GPU.


Yeah, I've been trying to set something like this up with a combination of iGPU and dGPU, which theoretically should be possible, but I certainly haven't gotten it to work as yet, and the documentation (and maybe the tooling too) leaves a lot to be desired. It offers ways to do some things, but very little in terms of a more general explanation of all the steps, and the steps don't always apply as given (in my case, since I have pretty new hardware, they rarely apply at all). So I just have to sort of laboriously figure everything out. And it's quite a bit of dicking around to set up custom PVHs and such, again with lacking documentation.

I like Qubes quite a lot. But there's still a lot of friction if you're doing anything not supported by default.


The lack of GPU support by default, and of Nvidia support specifically, makes it a no-go.


> swap on zram

If I'm going to use memory that I don't have as swap, I might as well just use no swap and take the OOM if I use too much memory.


Not sure I understand. Zram lets you store pages compressed in memory, which prevents, or at least delays, the OOM killer kicking in, and keeps cached files around longer to avoid needless disk reads. Considering the tradeoff is a marginal amount of CPU time, it seems like free extra space and a no-brainer to enable if swap on disk isn't desired, at least in my mind.

I remember having issues with system starvation if it started to make excessive use of zram, but that has been pretty much alleviated with the introduction of MGLRU.
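
For reference, one common way to set it up is the systemd zram-generator; a minimal config sketch (exact key names can vary between versions):

    # /etc/systemd/zram-generator.conf
    [zram0]
    zram-size = ram / 2
    compression-algorithm = zstd

After a reboot, `zramctl` and `swapon --show` should list the zram device.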


Swap on zram is awesome; I get quite good ratios with zstd. Though its effectiveness will obviously depend on what's in memory: if it's a bunch of already-compressed stuff, then it won't get a good ratio on that.


Unfortunately the article fails to mention the most important aspect of security ... WHAT IS YOUR THREAT MODEL?

Who/what are you protecting against? Without a clear answer to that, you're probably wasting your time.

And check out Bruce Schneier for intelligent advice. https://www.schneier.com/


Very useful. One practical thing to add: enabling automatic snapshots (e.g. with https://github.com/openSUSE/snapper), and ideally backing them up separately (e.g., with borg), can help with recovery.
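
A rough sketch of that, assuming a btrfs root and example repository paths:

    # enable automatic snapshots of / with snapper
    snapper -c root create-config /
    snapper -c root create --description "manual checkpoint"
    # keep an independent copy of important data with borg
    borg init --encryption=repokey /mnt/backup/borg-repo
    borg create /mnt/backup/borg-repo::'home-{now}' /home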


Security is a process not a product. Reduce your attack surface and practice defense in depth.

Once you know how, it is a small investment of time to compile a custom kernel with everything you don't use disabled.
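
One low-effort way to do that trimming, as a sketch (run from a kernel source tree on the machine in question):

    # plug in / use everything you rely on first, so its modules are loaded,
    # then start from the running config and drop modules that aren't loaded
    make localmodconfig
    # review the resulting config, then build
    make -j"$(nproc)"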

On any modern machine, just disable swap. It's not useful unless you need suspend-to-disk (hibernation) on a laptop.


It is useful, unless you have an absolute crapton of memory, several times what you actually need. The swap is trivial to encrypt if you're paranoid (the key doesn't need to be persisted and can be generated from /dev/urandom on each boot).

https://chrisdown.name/2019/07/18/linux-memory-management-at...

https://haydenjames.io/linux-performance-almost-always-add-s...

https://wiki.archlinux.org/title/Dm-crypt/Swap_encryption
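
A minimal sketch of that throwaway-key swap encryption, roughly as described in the Arch wiki link above (the partition label is just an example):

    # /etc/crypttab -- re-key the swap device from /dev/urandom on every boot
    swap  /dev/disk/by-partlabel/swap  /dev/urandom  swap,cipher=aes-xts-plain64,size=512

    # /etc/fstab -- point swap at the mapped device
    /dev/mapper/swap  none  swap  defaults  0  0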


absolute crapton of memory, several times what you actually need

Err, you mean every rack-mount system since around the 2000s? Seriously, the commissioning overhead for virtually anything exceeds the "max out the RAM" tax. Unless you're a cloud provider commissioning autonomously and operating at excessive scale seeking some particular unit-cost optimization ... and even then not maxing out RAM (to some current definition of the effective price:performance maximum) would be odd.


> Once you know how it is a small investment of time to compile a custom kernel with everything you don't use disabled.

Having timely zero-effort updates from distros is far, far more valuable for security than the reduction in attack surface from a custom kernel.


That's highly dependent on circumstance, and not usually the case. How many remote 0-days were patched in Linux recently? How many local privilege escalations? How many of those subsystems can already be totally removed for a known workload? At the pointy end, you can be more bleeding edge compiling your own vs. waiting for distro releases. In the "I care enough to break from the crowd and own my own infrastructure" class of situation, this is not unheard of.


"security is a process not a product". YES. (same with thinking you can just buy XYZ DevOps product and you're suddenly going to be "agile")

...at the same time I'm annoyed by the lack of quality and care in most products. Security products having pre-auth RCEs and other dumb things are really annoying.


The supply-chain problems that come with using flatpak so completely outweigh the minimal gains from sandboxing that I find it difficult to take this seriously.


This is a pretty good guide.

I also recommend manually reading/checking the BIOS EEPROM and re-installing the OS from scratch at least every 6 months. This should eliminate most of the advanced threats.

You can set up an Ansible script to re-install everything so it can be automated.
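
For the firmware check, a rough sketch with flashrom (internal programmer support depends on the chipset):

    # dump the SPI flash and compare it against a known-good dump taken earlier
    flashrom -p internal -r bios_dump.bin
    sha256sum bios_dump.bin known_good_dump.bin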


How does re-installing the OS from scratch every 6 months "eliminate most of the advanced threats"? The malware has up to 6 months to do its work. An OS re-install may delete the malware, but the next visit to a bad link may re-install it as well.


It is just a precautionary measure; some malware, like DDoS bots, might persist for more than 6 months.

Honestly, an immutable OS would be more ideal but it isn’t very realistic. If you are adventurous, it would also be possible to set up a system where the host image gets rebuilt every night and persistent data gets pulled from a git repo.


In addition to this site, I'd also highly recommend this resource.

https://theprivacyguide1.github.io/linux_hardening_guide


Thank you! This is an extremely good hardening guide. My next weekend is set aside for this.


I used to be a Linux fanatic (arch, i3-gaps, etc) but then I tried mac. It’s like Linux, but it just works. Highly recommend just getting an M1/2 laptop and enjoying yourself.


Wondering if there is something that allows programs to access files only in certain lists of directories, and requires explicit permission from the user to access anything else.


Really wish there were a wizard that did all this for you. Just click "harden" and wait until the progress bar reaches 100%.


Someone tell me how often threat actors or the angry neighbor actually poke around in your swap partition to fish things out...


Outbound firewalls like Little Snitch are bypassable, but they add another layer of depth. A visual prompt for unknown network attempts is probably good. It would be nice to have similar prompts for the filesystem.


Would also like to add this resource, which is pretty in-depth.

https://theprivacyguide1.github.io/linux_hardening_guide



