FreeBSD 13.0 – Full Desktop Experience (tubsta.com)
320 points by tate on March 17, 2021 | 287 comments



I would love to be able to switch to FreeBSD but the one thing holding me back is support for .NET Core (and lack of VS Code support if that also doesn't run on FreeBSD -- but that's moot if .NET Core support isn't there). Docker support would be nice but isn't essential since code can be checked in to a Linux-based build server and VS Code can attach to a Docker instance on another machine.


(A port of) Visual Studio Code is available for FreeBSD (using it myself); `pkg install vscode`. If you would like to use docker, you could run a Linux distro in a virtual machine with the docker daemon and configure your host docker command to use the daemon inside the VM. That's basically how docker works on macOS.
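
Roughly, assuming dockerd is running in a Linux VM reachable over SSH (the hostname and user below are made up, and this is an untested sketch):

    # on the FreeBSD host, with a docker client installed
    export DOCKER_HOST=ssh://user@linux-vm
    docker ps        # commands now run against the daemon inside the VM

(The client also understands tcp:// endpoints if you'd rather expose the daemon's TCP socket from the VM.)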


For docker on freebsd to get any traction, it would need to be implemented with the VM behind the scenes. The beauty of the macOS/Windows versions is that they require 0 setup/maintenance and are an implementation detail.


It seems like this should be relatively simple; build docker-cli as a native FreeBSD program, make a docker-machine backend for bhyve... port docker-machine to FreeBSD, I guess (I hope that's not hard; it's not doing anything OS-specific), and you should be good to go.


>docker on freebsd to get any traction

Yeah, no thanks... run bhyve with Ubuntu and then your docker... have (no) fun.

That's "docker" for freebsd:

https://bastillebsd.org/


Wow... bastille is a pretty capable container manager!


Yes! I love to work with it.


Instead of _switching_, I just got a used ThinkPad from eBay for a secondary OS, installed FreeBSD on it and I'm having a blast exploring a non-Linux OS and trying things out on that side. You don't need to go all-in, but do things gradually and see if FreeBSD offers you any new insights into what an operating system could look like.

I also have a NixOS laptop and my main Arch Linux workstation for work use.


.NET Core can be built for FreeBSD [0] but it looks like there isn't official support.

[0] - https://github.com/jasonpugsley/installer/wiki/.NET-5.0-Prev...


I guess that is the question: is it the fault of .NET Core or FreeBSD that it is not available for FreeBSD?


Maybe it's nobody's fault?

I don't blame Microsoft for not spending limited resources on providing official support for an OS with (wildly guessing) 0.25% marketshare. And I don't blame FreeBSD for not spending limited resources on maintaining their own packages for an SDK with (wildly guessing) 0.75% marketshare among non-Windows users.


I blame Scott Hanselman for not achieving "dotNet Everywhere." :-)


Funny thing is there was a .NET Framework "port" to FreeBSD a long time ago called Rotor...


Looooool


Well, it's not the hardware vendor who is usually blamed for a lack of a device driver for Linux (or a BSD).


Unless the vendor in question actively takes steps to make this more difficult.



I dunno.. but the very first betas of dot net were actually available for FreeBSD. I remember downloading the source for it way back. I'm surprised if it's not available.


Are you talking about "Rotor" back in 2005 or 2006 right before Silverlight came out?


Uff I think it was actually before 2005. Was a long time ago. I recall it was open source, which was a surprise to me. It was definitely called dot net.

I think it was more towards 2001 or something, with the whole Java dispute between SUN and Microsoft. Visual J++ had some sort of dialect. (loved the IDE though, microsoft has always been good at developer tools).

Before that I was more interested in fahrenheit // openscene graph between Microsoft and SGI.


You mean mono(1)? From Miguel de Icaza & Ximian - started around 2001?

1: https://en.wikipedia.org/wiki/Mono_(software)


Big sticking point for me is I'm doing development targeting Linux. As much as I feel *BSD is superior, I need the right tools to do my job.


> I would love to be able to switch to FreeBSD

Curious what you think you would personally gain other than the much-repeated mantra of "they have organized man docs and a strong set of CLI userland tools because they are developed alongside the kernel"


Well, I was thinking more like “They have organized man docs, and a strong set of CLI user land tools because they are developed alongside the kernel.” But I’m an Oxford comma man myself.


Google Oxford comma. It’s not simply a comma before an ‘and’.


I know, it was a joke on his dismissiveness. I'm only half as clever as I try to be.


Those sound like excellent gains in and of themselves.


ZFS, DTrace and pf... oh, and the mantra.


BSD ZFS I believe uses the same code as Linux nowadays or was at least planning to.

eBPF should cover your other two needs, by way of itself and through bpftrace.


> BSD ZFS I believe uses the same code as Linux nowadays or was at least planning to.

Just because FreeBSD and Linux share the same ZFS upstream it doesn’t mean the experience of running ZFS on Linux is in any way comparable to running it on FreeBSD.

For starters, it's a default in FreeBSD rather than an optional driver you have to install yourself (or even compile yourself on some distros). And that alone makes a massive difference when it comes to maintenance.


What is the benefit of using ZFS on a personal or dev machine? I almost never tinker with filesystem partitions or allocation once I set them up as part of installing an OS. I occasionally do on a production machine if I've misjudged space needed for something, but there again it's pretty rare.


Boot environments for operating system upgrades. They take advantage of ZFS.
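
A rough sketch of what that looks like with bectl (commands from memory, so double-check the handbook before relying on it):

    bectl create pre-13.0                      # snapshot the running system as a new boot environment
    freebsd-update -r 13.0-RELEASE upgrade
    freebsd-update install
    # if the new release misbehaves, fall back to the old environment:
    bectl activate pre-13.0
    shutdown -r now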


Snapshots was the biggie that made me switch to ZFS more than 10 years ago. Never looked back.


Oh, it's massive, especially for kernel devs/OS devs: snapshots and boot environments (bectl)... For personal machines, zfs send... you never have to think about correct backups/restores anymore.
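
Something along these lines, assuming a home dataset called zroot/usr/home and a remote pool called backup (the names are just examples):

    zfs snapshot -r zroot/usr/home@2021-03-17
    zfs send -R zroot/usr/home@2021-03-17 | ssh backuphost zfs receive -u backup/home
    # later, send only what changed since the last snapshot:
    zfs snapshot -r zroot/usr/home@2021-03-24
    zfs send -R -i @2021-03-17 zroot/usr/home@2021-03-24 | ssh backuphost zfs receive -u backup/home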


I run Windows and macOS on my personal and dev machines (so don’t use ZFS), but I’d really appreciate the peace of mind of bit rot protection.


I really like the idea here - I was using a FreeBSD desktop 18 years ago with KDE2.

But in terms of actually being able to get things done as a desktop workstation, and software I can run natively on it, I could replicate almost exactly the same setup starting from a bare bones debian bullseye (testing) install, then adding xorg and xfce4 and customizing xfce4.

That's the setup I'm using now - I ended up adding a ton of gnome and kde related libraries so that I can run software derived from both projects. Yes it uses multiple gigabytes of disk space, but now I have a solid setup that can run just about any Linux GUI application, and Windows 10 inside virtualbox full screen on a second monitor to the side.


I bought a MBP, turned it on and started working.

I'm partly joking here, but overall I've been trying to use Linux as a desktop environment for nearly 20 years now. It's come a long way but it feels fragile to me. I've been able to do something as simple as a reboot, only to find the graphics go wonky and I'm left with a terminal to debug it.

I even ran FreeBSD as a desktop OS too. That worked well but was also fragile (less so than Linux. The BSDs are solid options.)

Maybe as I get older, wiser (ha!) and more patient, I'll give these things a try again (because there are Apple/Mac related issues slowly cropping up) and perhaps they will work better.

I even gave Windows 10 a serious try recently, using all the latest console emulators to get a better (Linux/iTerm like) experience, VS Code, etc. The Windows Subsystem for Linux worked well for a bit before it didn't (file permissions, moving between the two operating systems, etc.)

Ideally I wish I could find the experience I desire within a FreeBSD based desktop environment because then so many hardware options become available to me. Old and new.


It's weird, because at work we're a Mac shop, but I (among a small group of others) buck the trend and run Linux. I have very few issues; I just run Debian stable for 6 months or so after its release, and then switch to testing until 6 months after it goes stable.

Meanwhile, my coworkers complain all the time about macOS. Definitely some of it is due to the poorly-written corporate management software they're required to run, but a lot of it is just them constantly fighting against the OS to get their work done. And then even after IT has finished testing the new major macOS release every year (which takes them like 4 months) and allows people to run it on their laptops, they're still looking at over an hour to upgrade, and a lot of post-install issues.

The one big caveat is that on occasion, say once every 2 years, I hit a big issue on Linux that I can fix, but an average non-technical user would probably not be able to figure out, and would end up having to do a full reinstall (or get their family techie person to do it). I do feel like MS and Apple have mostly eliminated that sort of issue (though I could be wrong).

The other problem I see is that most people have an existing laptop and want to try Linux on it, and run into hardware incompatibilities. Unfortunately that's just how things are going to be; if you really want to try to run Linux on a laptop, you need to make your purchasing decision with that in mind. You never really think about that when you run macOS or Windows, because Macs/PCs are designed to run that particular OS, and the manufacturer guarantees that everything will work. This phenomenon is especially bad for people with Mac laptops; at this point anything after 2016 means a lot of pain and things that just won't work. Even with a 2016 MacBook Pro you're going to be fighting to get things set up.

On the flip side, if you get something that's designed with Linux in mind (like the Dell XPS 13 or something from System76), you're going to have a much better time. But I totally get that most people won't want to buy a new laptop and would prefer to give Linux a spin on whatever they have already. It's just that results are going to be wildly mixed in that case.


I almost never have major issues with macOS, except for the poorly-written corporate management software. :) But, I know a lot of that is simple familiarity. Someone who's used to Windows or a given Linux desktop environment is going to struggle a bit; I get frustrated with Windows laughably easily. I'm better with Linux, but I've run it -- and FreeBSD, for that matter! -- as full-time desktop environments in the past, and even though the "full-time" part of that is many years ago, some of it still sticks. (I also ran BeOS full-time for a bit over a year, and oddly, that's really the one that I miss the most.)

> The other problem I see is that most people have an existing laptop and want to try Linux on it, and run into hardware incompatibilities.

Yes, a thousand times. I ran into that more often than I would have expected just trying to get Linux to run in a VM on various Macs over the years, and once in a while on more recent PC hardware that I was trying to do something with. (Then I would write something that suggested Linux was still hard to install and people would argue with me. If it ever comes up again, I should definitely remember to add your caveat about existing hardware!)


I don't think I've been happy to have upgraded macOS since about 10.4. Something always breaks.


Same. Off the top of my head, one upgrade broke my external monitors (by breaking DisplayLink drivers), one upgrade broke Karabiner (by breaking the way it remaps keys) which entirely broke my remote-work workflow, one upgrade broke my ability to play music (by removing iTunes, and having the Music app replacement not be compatible with the iTunes server capability on my Synology NAS), Big Sur for Apple Silicon broke my ability to install any CLI tools. (Homebrew didn’t have Apple Silicon support for some months after the launch of Apple Silicon Macs.)

I get that these are in large part technically the fault of third-party developers, but it's still pretty inconvenient as an end user to be unable to upgrade my OS without some major portion of my workflow being entirely non-functional for months.


I've rarely had even anything minor break. I don't know how much of a unicorn this makes me. (I also never had an issue with running MacBook Pros in clamshell mode, which I did literally every day for years, but that's something I used to hear lots of complaints about from others.)


On the other hand I’ve only ever regretted one upgrade, the first Mojave beta.


I've done the FreeBSD thing and I'm currently on macOS. Were you running actual BeOS or Haiku? I wish someone would dump a chunk of money into funding another OS for hardware support.


I was running actual BeOS. This was a long time ago. :) I think it'd be harder to do that with Haiku in modern times; BeOS had a nascent commercial software market for a fleeting moment, so I had a credible "Works" style office suite (GoBe Productive), a good text editor (Pe), a good GUI mail client (Mail-It), a surprisingly good object/bitmap hybrid graphics editor kind of like the old Macromedia Fireworks (e-Picture), and probably other things I'm forgetting. I think I had an AIM client, for instance. Which also dates this. But it was important then!


> I think it'd be harder to do that with Haiku in modern times

Probably, but maybe not for the reasons you would think...

> so I had a credible "Works" style office suite (GoBe Productive)

Haiku has a LibreOffice port.

> a good text editor (Pe),

Pe is still around, or you can use Koder (a new Haiku-native app), Notepadqq, GVim, Kate, ...

> a good GUI mail client (Mail-It)

We have a few mail clients; the native one is alright but could use some work; there is also Trojita and some others like it.

> a surprisingly good object/bitmap hybrid graphics editor kind of like the old Macromedia Fireworks (e-Picture)

e-Picture has gone the way of the dodo (though you probably can still use it on 32-bit Haiku which has BeOS binary compatibility.) You can instead use WonderBrush (a native application), Krita, etc.


Nice to see you here! Thanks for all the effort over the years!!


I regret not picking up a boxed copy of the office suite at Frys years ago. I still have my installation media for BeOS 4.5 and 5.0. The software I saw was for stuff like stage management. Thanks for sharing!


As someone who's used a Mac for work while juggling a variety of other systems on my personal devices, my biggest complaint is actually the constant need to realign muscle memory for basic keyboard operations.

Sure, I could find something that would give me more familiar bindings at the system level, but I'd then also have to (hope that it's even possible to) also deal with all of the downstream apps that then have to use different ones, because the globals in macOS have superseded whatever they'd be using on other systems.

It's like dealing with English grammar. Swap Cmd for Ctrl and Ctrl(?) for Alt, except when you're switching tabs or windows and it's the exact opposite, or when whatever operation you need is bound to Option for some reason.


Because I need things that will have a reasonably long battery life and 100% reliable suspend/resume/hibernate functionality, I have a Macbook Air 2020 as my laptop, and the previously mentioned debian+xorg+xfce environment as my desktop.

The laptop is increasingly a thin client these days as everything I want to do is either inside a TLS session in a browser, or I'm controlling remote Xen, KVM hypervisors and virtual machines (either by SSH or VNC-over-SSH) that have much more cpu and ram resources than could ever be reasonably squeezed into a laptop.

The highly refined MacBook touchpad user interface is also a part of it. Nothing in the Windows 10 laptop market matches it.

For me what makes Linux on a laptop a really bad and unreliable experience overall is a combination of poor quality/small touchpads, poor quality microphone built in near the keyboard, poor quality screens (you have to pay a LOT to get a screen anywhere near the quality of a macbook air or pro, and I absolutely detest 16:9 ratio laptop screens), and general unreliability in suspend/resume functionality.


It is kinda funny how we’re going back to a kind of mainframe/terminal model with the way everything works now


I don't know what the macbook touchpad does that you like so much, as I have not owned a macbook for at least 10 years, however, I think nothing can match a ThinkPad trackpoint.


I'm with you on this one, right up until the last two versions of OSX.

Stability seems to have gone right out the window, with my machine crashing on a daily basis. It really doesn't like being plugged and unplugged from its Thunderbolt dock, which was a straight up non-problem prior to Mojave.

It's also slower. OSX never felt as immediate as Linux or BSD, but it's gotten even worse.

When I have some time, I'm going to toss a systemd-free Linux on my old Macbook Air with the garbage keyboard, and see how it feels as a desktop system.

Pretty sure I'll miss the consistency of the user interface. That's my biggest gripe about the FOSS world: the user experience is generally shit. And there are a few apps for which I don't have a good replacement yet (Dash being the biggie).

And I'm not sure what to do on the phone front: the iPhone is the best hardware by far, but I barely use mine for anything other than the web and watching videos. Will play with a Pinephone and see if it does the job.

But everything else I can do between a terminal and a browser, and given that I'm not happy with where Apple is going, it seems like it's time to start getting off the train.


> That worked well but was also fragile (less so than Linux. The BSDs are solid options.)

I seriously doubt you will run into more problems with Ubuntu or Fedora than you would with FreeBSD. It’s been awhile since I’ve tried, but the BSDs are basically optionless if you want to take full advantage of modern graphics cards. You boot into Ubuntu and the graphics stack, WiFi or Ethernet, it all just works. BSD is like using Linux a decade ago at best.


Fedora and the various *buntu derivatives (e.g. Linux Mint) have always been solid. Built multiple computers from parts and slapped 'em on several different laptops from different vendors -- no issues.

I tried FreeBSD on the desktop and it was sparse and a PITA. I'm sure they can catch up to where Ubuntu was a few years ago, but I'm currently running AAA VR games on Steam on Linux -- no reason to change what I'm running now.


Absolutely. It depends on the hardware.

I've found if all you're doing is developing then FreeBSD is fine. If you bring simple multi-media use cases into the mix then things get a bit hairy.

Ubuntu certainly helps, but it's also hairy when things go wrong or get close to going wrong.

I just turn my Mac on and get on with working in a Unix environment.


I am the same way. I have always yearned to be one of those people who daily driver linux - but I have never been more productive than I am on macos.


I'm completely opposite. If I don't get my tiling wm setup with just the apps I need, I'm totally lost and lose all my productivity. Be it Windows 10, macOS or the default installation of Ubuntu or Fedora.

Therefore I always use either nixos or freebsd for all my work.


Same here.

I think sometimes choice is a problem. I like a walled garden for some things.


> But in terms of actually being able to get things done as a desktop workstation, and software I can run natively on it, I could replicate almost exactly the same setup starting from a bare bones debian bullseye (testing) install, then adding xorg and xfce4 and customizing xfce4.

Which is great until they change how it all works. Something as simple as setting up a keyboard layout that applies for any login manager - first it was in xorg.conf, then it was in some HAL XML file, then it was a udev rule file, god knows where you can do it now. Connecting to the network - first it was ifconfig, then it was iwconfig, now it's some dbus NetworkManager incantation that doesn't work. Sound - first you learn a bunch of OSS commands, then everything gets switched to ALSA, then everything gets switched to PulseAudio and blows your eardrums...

FreeBSD may not be as "just works", but they don't keep changing everything. I'd far rather have a system that stays understandable, even if that means configuration is a little more manual.
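
For what it's worth, a sketch of what that more manual configuration looks like: these few lines in /etc/rc.conf have meant the same thing for a very long time (device names depend on your hardware, so treat them as placeholders):

    ifconfig_em0="DHCP"
    wlans_iwm0="wlan0"
    ifconfig_wlan0="WPA DHCP"
    keymap="us.kbd"    # console keymap; the X keyboard layout still goes in xorg.conf.d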


> "Full Desktop Experience"

> Now, write all these obscure commands on terminal, and edit all these cryptic text files.

:/

I love FreeBSD in many ways, but seeing the bar for desktop experience set that low saddens me.


This is Hacker News and the top comment complains that the cited article uses some not-too-strange command line tweaks?

That's sad.


I criticize the article based on its claim, not my expectations.


I'm pretty sure you misread the claim. When it says "full desktop experience" it doesn't mean that FreeBSD ships with a full desktop experience that is already enabled and active by default, but rather that it's possible to get a full desktop experience in FreeBSD (which is very much not the default - it boots to a terminal) and explains how to do that.

"Full burger experience" might be the title on a recipe for making a burger, or it might be part of an advertisement for a restaurant where the burger is already made for you. In the case of this article, it's the former. People already in the know about FreeBSD are generally not expecting a full desktop environment out of the box, so they won't be confused about the title.


Methinks you have a different cultural understanding from the auctor of this article.

Terminal commands are typically not seen as some inferior way to do this, in want of a better solution to eventually be developed, they are given because they are the fastest, clearest way of doing something, especially in online discourse.

Explaining via text, or even images, how to navigate a dialog window to achieve something is quite a bit more involved than telling a man to copy a simple command.

There exists a dialog window based interface to `pkg`, as far as I know; it is simply seldom used as it is considered inefficient compared to the terminal commands.


I think the problem raised is that it should just work, not require tinkering, regardless of how you do the tinkering.


I'm not sure how the philosophy of minimal install vs. full install has anything to do with a “desktop experience”.

The “tinkering” is nothing more than installing optional software which one may or may not want, in this case suspending and installing XFCE.
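
If I remember the article's steps right, the "tinkering" amounts to something like this (rc variable names from memory, so treat it as a sketch rather than the article's exact commands):

    pkg install xorg xfce slim
    sysrc dbus_enable=YES ntpd_enable=YES slim_enable=YES
    service dbus start
    service ntpd start
    service slim start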


Why would someone not want suspending to be enabled if their machine is capable? Why are DBUS, NTP and slim (I don't even know what that one is) not enabled by default when the user signals they want to use XFCE? You have to have a lot of knowledge just to get a basic install onto your laptop. Compare that to Windows, macOS and Ubuntu, which drop you in a desktop-friendly environment as soon as you finish the install.

I know that's not FreeBSD's goal nor am I saying that it ought be. I'm just stating the fact that it is not an easy to use system for the uninitiated and it will require at least a little bit of browsing the internet and trying things for all but the most experienced FreeBSD users, if they choose to adopt it as their desktop system.


> Why would someone not want suspending to be enabled if their machine is capable?

Because suspending is very hardware dependent, for it to work everywhere one must install specific drivers for everything.

Note that the suspending installation steps are about the specific hardware of the auctor as an example.

It's as though one expects not having to install drivers for one's specific graphics card or very specific mouse that has unique features.

> Why is DBUS, NTP and slim (I don't even know what that one is) not enabled by default when the user signals they want to use xfce?

Because installing a package does not automatically start a service, that would be very annoying.

The user did not indicate that he wished to use XFCE, only that he wished to install it. — I would personally be supremely annoyed if by merely installing software, which I might simply do to inspect some of its files, all sorts of services would suddenly be started, especially if this be DBus, which has a reputation with its “DBus activation” mechanism of starting a bunch of other things because it guesses that the user wants them started based on similar heuristics as you suggested.

The next thing I know, DBus has started NetworkManager, which has then suddenly overwritten some configuration files, all because I installed XFCE, without even deciding whether I wanted to run it.

It is very good practice for installation to purely be installation and place files on the filesystem, not start any processes. — the user can do that at any point if he so choose.

> You have to have a lot of knowledge to just have a basic install in your laptop. Compare that to windows, macos and ubuntu that drop you in a desktop friendly environment as soon as you finish the install.

FreeBSD's cards are on the table here with their target audience.

The systems you mentioned indeed take another approach, and I'm sure they have their reasons to, but FreeBSD has very good reasons for its own, and I personally find the idea of a system that starts all sorts of processes because it guessed that the user willed it so to be quite annoying.


> Because suspending is very hardware dependent, for it to work everywhere one must install specific drivers for everything.

That's a solved problem (except for the most exotic devices) in Windows and Ubuntu. Mac OS, obviously, doesn't even have that problem.

> It's as though one expects not having to install drivers for one's specific graphics card or very specific mouse that has unique features.

That's exactly what one expects from a desktop system. That the system just works out what you have and installs whatever is needed to make it work.

> The user did not indicate that he wished to use XFCE, only that he wished to install it. — I would personally be supremely annoyed if by merely installing software, which I might simply do to inspect some of its files, all sorts of services would suddenly be started, especially if this be DBus, which has a reputation with its "DBus activation" mechanism of starting a bunch of other things because it guesses that the user wants them started based on similar heuristics as you suggested.

You would be personally annoyed, but other people would be personally annoyed about not having a graphical interface ready after the install for their desktop system. On top of that, even when they try to install the graphical interface for that system, nothing works unless they understand (albeit not deeply) the inner workings of said system.

I'm not saying you're wrong about not wanting that, but most people expect their desktop system to just work, not require googling around why xfce4 won't start. Remember, we are talking here about desktop computers, where the end goal is to run a browser, a video game, an IDE, a video editor, etc.

> The next thing I know, DBus has started NetworkManager, which has then suddenly overwritten some configuration files, all because I installed XFCE, without even deciding whether I wanted to run it.

NetworkManager does solve some of the problems for desktop users who don't want to understand any more of the system than absolutely necessary. Starting it as soon as possible will just help people.

> FreeBSD's cards are on the table here with their target audience.

This was a later edit on my post and you may have missed it:

> I know that's not FreeBSD's goal nor am I saying that it ought be. I'm just stating the fact that it is not an easy to use system for the uninitiated and it will require at least a little bit of browsing the internet and trying things for all but the most experienced FreeBSD users, if they choose to adopt it as their desktop system.

Not saying that FreeBSD don't have their reasons, just saying that most people expect something else from their desktop systems.


> That's a solved problem (except for the most exotic devices) in Windows and Ubuntu. Mac OS, obviously, doesn't even have that problem.

The problem is solved by simply installing the drivers and modules for everything.

> That's exactly what one expects from a desktop system. That the system just works out what you have and install whatever is needed to make it work.

That's what you expect, that has nothing to do with whether the system is “desktop” or not.

The fact that many desktop-only systems exist that are worthless for servers or phones that do not follow this philosophy makes it clear that this is not what everyone expects, especially when many of these drivers are proprietary, and many users have ideological objections to having them on their system altogether.

> You would be personally annoyed, but other people would be personally annoyed about not having a graphical interface ready after the install for their desktop system. On top of that, even when they try to install the graphical interface for that system, nothing works unless they understand (albeit not deeply) the inner workings of said system.

And they can use the systems they want.

I am merely pointing out that how FreeBSD does this is well thought out, and has its reasons with respect to what its users expect.

> I'm not saying you're wrong about not wanting that, but most people expect their desktop system to just work, not require googling around why xfce4 won't start. Remember, we are talking here about desktop computers, where the end goal is to run a browser, a video game, an IDE, a video editor, etc.

I would be surprised if those were the end goals of most FreeBSD desktop users.

> NetworkManager does solve some of the problems for desktop users who don't want to understand any more of the system than absolutely necessary. Starting it as soon as possible will just help people.

N.M. has a reputation of being most undesirable software among many that not only very often leads to loss of internet, but also takes control of one's configuration and alters it without warning. — many avoid it as though it be the plague.

> I know that's not FreeBSD's goal nor am I saying that it ought be. I'm just stating the fact that it is not an easy to use system for the uninitiated and it will require at least a little bit of browsing the internet and trying things for all but the most experienced FreeBSD users, if they choose to adopt it as their desktop system.

What would any of that have to do with desktops?

I daresay that desktops are probably more likely to be manned by “initiated” users than laptop and phones are.

I fail to see what "desktop" has to do with "initiated"? Are you suggesting that "initiated" users should rather use a phone or laptop?

It is a desktop system for what you call the “initiated”; these two are completely orthogonal axes.


Methinks?


Probably the last remnant in English of Germanic impersonal verbs, a common feature of many Germanic languages where the subject of many nonvolitional verbs of perception is in the dative case rather than the nominative.

It is fossilized now in a fixed expression, but the verb “thinks” here is actually a different verb from the modern verb “think”. This difference is very much alive in, say, Dutch, where one would say “Ik denk dat ...” for “I think that ...”, but “Me dunkt dat ...” with a different vowel for the same meaning as “methinks”, which denotes a less voluntary perception, an observation if one will.

It's not that dissimilar to "To me, it appears that ..." I suppose, with the key difference that the grammar does not demand another subject. It is simply "Methinks that ...", not "Me, it thinks that ...".


In modern English, I don't see any difference between "methinks" and "I think" it's just two ways of saying the same thing. If anyone sees a different shade of meaning between the two, what is it? And "To me, it appears that..." is just more words to say "I think..."


FreeBSD by default installs command-line only. FreeBSD is intended to be a base system suitable for both servers and desktops, so X11 is an optional install. The guide for newbies [1] and the FAQ [2] both point to GhostBSD or MidnightBSD as more suitable for full-desktop.

[1] https://www.freebsd.org/projects/newbies/

[2] https://docs.freebsd.org/en/books/faq/


Yes, no objections there. I think the article's title is overpromising.


I've used NetBSD, FreeBSD, CentOS and Debian as a desktop OS, and they all felt similar in terms of setting up the desktop stuff, i.e. xorg and then more DE- and OS-specific stuff... if you are expecting an Ubuntu-style one-click install straight into a ready-made desktop then you should be looking at DesktopBSD and similar variants... honestly though it's 1000 times simpler to install a desktop from a minimal OS than it was 20 years ago when you had to generate your own xorg config.


Did you like NetBSD?


Kinda difficult to say because it was over 15 years ago and I only tried it once. I was only using it as a desktop for easier development because I was experimenting with it as a replacement for some proprietary embedded software and was trying to port something to it - tbh I was reaching for something way beyond my ability, I was pretty green at the time so consider that most of my progress was highly dependent on the manuals (which I suppose is a testament to the quality of the NetBSD manuals.)

In hindsight I would have used a different OS as a desktop and then done dev over SSH. The reason being, my (probably outdated) memory of it was having to wait a lot for compiling the relatively large desktop ports because it lacked a binary package system, and spending too much time messing with configs, probably due to lack of use as a desktop - partly to blame was my cheapskate employer of the time who would never invest anything in anyone, forcing me to scrounge for spare old very slow machines. Other than that it felt like a much more minimal OS compared to FreeBSD, setting up X felt similar to other OSS systems of the time.

It seems to have a stable community around it and keeps getting updates since then, I'd be interested to see how it compares today. I think OpenBSD has a much stronger following as a desktop OS though and is similarly minimal but not as portable (I mean not "runs on a toaster" portable), its desktop/laptop hardware support is better even with their exclusion of bluetooth, and I suppose those are the ultimate deciders for what works well as a desktop OS today.


Would the same argument work for an article titled "Well Done Steak" that mentions how the meat should be prepared before it is done?


I would not buy a car that comes in Ikea flat packages and you have to assemble it yourself. On the other side, in aviation this is a thing and I really plan to build a plane from a kit when I will have the time and money to do it.


Totally. It's 2021 after all. It should just work out of the box. Who in their right mind would want to buy meat that's not fully cooked yet? And then discuss it on Chefs News? Ridiculous.


If it's sold as a "ready to eat steak" (hence the title), I wouldn't expect cooking instructions, yes.


It's not. It's an article on Chefs News titled "Full Steak Experience".


If it's titled "Fast food steak experience", yes.


IS THIS NOT HACKER NEWS?


Don't hackers dream of smooth UI experience?


I have seen FreeBSD and other non-mainstream OSes (as in not: Windows, macOS, Linux variants) keep going but I have never seen a "reason" to use those except for the bragging rights. Can the developers of these OSes please enlighten me why I or anyone should use them?


I've been a life-long Linux user and took a dive into the deep-end with FreeBSD recently. I find it extremely difficult to articulate the why. For example, you can say things like, the entire stack is maintained by the same entity. Ookkayy... so what? But it matters, and it matters in nuanced ways that are difficult to articulate, but you notice it when using the system.

Kinda feel bad for not being able to articulate it. It's not like I can point at one thing, e.g. the system call function foobar(2) is better. I either don't have enough FreeBSD experience or I'm not smart enough. Here are some things I've enjoyed:

- I didn't realize how terrible iptables was until I used pf on FreeBSD

- I didn't want to master yet another networking abstraction that docker introduces for configuring containers. Granted, I have not used VNETs on Jails.

- I like to _invest_ in the tech stack that I learn and not have it change the next release. I have more trust in the stability of FreeBSD's choices and roadmap.

- I like how light-weight the base OS install is.

- I like that FreeBSD doesn't use systemd and I like the simplicity of rc(8) (see the sketch of an rc.d script after this list).

- I like that jail has stronger integration with the OS, like installing packages into a jail.

- I like that the book, "design and implementation of freebsd o/s 2nd edition" is available and my hope is that it's still relevant.

Some things I don't like:

- There are subtle differences in tooling and outputs that surprise me. Nothing I haven't been able to work around. E.g. I learned that GNU Make has made _vast_ improvements over the make that's available on the BSDs. Not a big deal but I had to install GNU make and invoke it as gmake.

- I miss cgroups.
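
Since I mentioned rc(8) above, this is roughly the shape of a service script (a made-up "myservice" daemon, just as an illustration):

    #!/bin/sh
    # /usr/local/etc/rc.d/myservice
    . /etc/rc.subr

    name=myservice
    rcvar=myservice_enable
    command="/usr/local/bin/myservice"    # hypothetical daemon binary

    load_rc_config $name
    run_rc_command "$1"

Enable it with `sysrc myservice_enable=YES` and start it with `service myservice start`.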


You can usually trust the manpage in bsd land. That’s not anywhere near as dependable in Linux land. The system upgrades are much tighter when every component is maintained by one organization. Its historical killer feature (for a while) was zfs, but Linux has that now too. In FreeBSD though zfs is ‘native’.


FreeBSD is a nice, solid piece of Shaker furniture. It's simple, elegant, and done "the right way," meticulously planned out. There are no extraneous seams.

GNU/Linux is an amalgamation of 2-3 of the highest quality IKEA sets. It's more up-to-date, with more bells and whistles. If you don't like one component from one set, you can (with enough elbow grease and fuckery) replace it with the analogous component from another set. However, there are bespoke seams everywhere, and the theoretical flexibility is often more trouble than it's worth (just try using a mainstream distro but replacing systemd, for example). Also, the documentation is sometimes out of date, causing things to fail in utterly inexplicable ways, forcing you to resort to online forums and mailing lists.

FreeBSD has somewhat fewer features, and doesn't work as well with the latest hardware. But the things that are supported largely Just Work(tm). It is, by and large, just less hassle to run FreeBSD and you don't have to tinker as much.


>GNU/Linux is an amalgamation of 2-3 of the highest quality IKEA sets.

Depends on the distribution :)


What are some of the more cohesive distributions? Elementary comes to mind


I talked about the quality of distributions.


I'll say that the reason I have been considering switching is more ideological, not bragging rights, but after years of seeing free software be attacked and assaulted by supposedly "friendly" companies, who are constantly trying to take away my power to control my machine, it pushes me to want to take some sort of stand. It may not be much but at least I am doing something.

The only other big thing holding me back was the tooling but now that I've switched most of my workflow to Emacs, the option of choosing FreeBSD is becoming more and more appealing.

EDIT: Just to be clear, I have been using Ubuntu as my daily driver since 2012; however I don't love all the decisions Canonical has made, I don't like how the entire ecosystem has grown to a level of complexity that makes it hard to understand what the hell is happening under the hood, and I am beyond peeved about the havoc systemd-resolved has wrought on my system.

EDIT 2: If anyone has any other suggestions for a better, more free distro that I can use I'd love the advice; the thing keeping me from Hurd is worrying that it won't have full driver support.


Why FreeBSD and not Linux?


Personally, I see Linux as a major attack vector. Binary blobs are a big skeleton in Linux's closet. One reason why big business got behind Linux is their ability to have closed source, binary firmware blobs baked into device drivers. You can choose instead to use Linux-libre, of course, but your user experience will suffer about as much as it would by choosing any other *nix with poor proprietary driver support.


FreeBSD is comparatively more friendly to proprietary blobs, as it maintains a stable kernel ABI within major releases. [0]

Even choosing to ignore any blobbed drivers, Linux will still have better software compatibility, and still generally superior hardware compatibility. Certainly not worse hardware support. Linux supports more classes of hardware like wifi adapters, while FreeBSD is stuck trying to bring up newer standards. It also requires the same firmware blobs that Linux does -- in the end, whether on Linux or FreeBSD, if you dislike binary blobs in your drivers, you just have to limit yourself to hardware that doesn't require them.

[0] https://docs.freebsd.org/cgi/getmsg.cgi?fetch=284199+0+/usr/...


Big business like Sony and Apple prefer BSD exactly because of not having to provide everything to upstream.


Google prefers linux on the servers because of not having to provide everything to upstream.


That's a bit of a different case, though. Google isn't distributing it to end users the way Apple is with Darwin.


True, but in the end it's the same, you don't get access to the google nor the sony "magic" development.

BTW: Darwin is opensource:

https://github.com/apple/darwin-xnu


> "years of seeing free software be attacked and assaulted by supposedly "friendly" companies, who are constantly trying to take away my power to control my machine"


https://www.gnu.org/distros/free-distros.en.html

And how is upstreaming things an attack?


Based on the subtext of GP's comment, I'm guessing they're either offended by RedHat, Canonical, and/or Microsoft's various involvements, things like systemd or snapd, and think it taints the whole linux ecosystem.


I'm curious how they would feel about Bell Labs. Sounds like a proprietary OS that happened to be open source wouldn't go over well either.


I use OpenBSD for most of my servers, with FreeBSD mixed in too. It is very simple to administer, easy to upgrade (sysupgrade every six months, syspatch otherwise), reliable, and fits how I like to administer it. Pf on OpenBSD has authpf, which allows unblocking ports for users who have sshed in, which is nice when you are on a dynamic IP. It is generally just a nice system, minimalist, and straightforward. I don't want a million moving parts, just a UNIX platform to build on.
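
For anyone curious, the setup is roughly: give the user /usr/sbin/authpf as their login shell, and pf loads per-user rules into an anchor for as long as their ssh session is open. A sketch (the port number is just an example):

    # /etc/pf.conf
    anchor "authpf/*"
    block in on egress proto tcp to port 8080

    # /etc/authpf/authpf.rules -- $user_ip expands to the logged-in user's address
    pass in on egress proto tcp from $user_ip to port 8080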


I've recently switched my home network jump box to OpenBSD. Still learning the basics. I didn't even know about authpf. I'm reading the docs now and it looks great. Thanks for sharing that.


> sysupgrade every six months, syspatch otherwise

But that won't keep your system up to date with security patches, because OpenBSD does not rebuild binary packages.


They do now.

Binary packages for -stable are rebuilt only for security issues or other major fixes. Simply call pkg_add(1) with the -u flag to get the new files.

https://www.openbsd.org/faq/faq10.html#Patches


But that's just not true, check out how often they rebuild Chromium for their stable branch.


Chromium isn't part of the base system. There's some misunderstanding here between the base system and ports.


Sure they do, and he talked just about the system; remember, packages and the system are two different things in the BSD world.


I doubt anyone manages to get by with just the OpenBSD base system on their desktop. Browsers in particular are a gaping security hole if not updated regularly.



No Chromium, no Firefox.


Because it's already in the official package-tree:

https://ftp.openbsd.org/pub/OpenBSD/6.8/packages-stable/amd6...

firefox-esr-78.8.0.tgz


Not a developer, and if I were I might find such a question puzzling. Here's why I use it:

+zfs +(poudriere & /usr/ports) +(helpful mail lists) +bhyve !systemd

FreeBSD since 1994, Debian since 1998.

But underneath it all is a philosophy about how to live your life. If you want Big Corps deciding your computational experience, good for you, even Linux can provide that for you now. I don't. I don't care if I'm irrelevant. I approach cooking, reading, travel, etc the same way. And there are ranges. OpenBSD goes even further. I respect that.

My partner lives in Big Corp IT land during the day and she loves the in-house experience of our rather extravagant mostly-FreeBSD "cloud". Some Debian in there too; got to run some appliances like unifi-video.

Writing this on a FreeBSD NVIDIA desktop. Which I'm going to use to do my own taxes myself, with org-mode, and I'm going to file here shortly. For that I need chromium, which works fine. Otherwise I use Firefox, which ports tracks releases quite closely.

All this unnecessary effort! Why would anyone do such a thing. Mysteries.


FreeBSD's ZFS implementation is really solid and if the integrity of your data cluster is a top priority, this is one area where it really pays dividends.


Yeah, I run a raidz2 (I was young back when I built it ;-) and have replaced all but one of the drives at least once. Rock solid. And bhyve + zfs, that's really cool. More and more of my Debian (multiple version) appliances are small bhyve instances on my 32 thread 128 GB poudriere build box. Chrome and Firefox and a few rusty or haskelly things get in the queue and it does get a bit laggy... NBD.


> FreeBSD NVIDIA desktop

Any links to guides/docs? My GPU is really the one reason I haven't yet at the very least tried FreeBSD on my desktop.


The Nvidia binary driver is provided for FreeBSD too.

Guide at http://us.download.nvidia.com/XFree86/FreeBSD-x86_64/460.56/... and driver at https://www.nvidia.com/Download/driverResults.aspx/170806/en... for the currently latest one.


I am biased but FreeBSD is very well supported on NVIDIA, so I wouldn't let it hold you back from trying it out. I think you could load NomadBSD on a USB and have NVIDIA drivers set up without any real trouble.


What do you need a guide for? You install the nvidia driver through the normal package manager and it works, same as on Linux.
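
From memory it's roughly this (package and module names may vary a bit between driver versions, so treat it as a sketch):

    pkg install nvidia-driver
    sysrc kld_list+="nvidia-modeset"

and, if X doesn't pick the driver up automatically, a minimal /usr/local/etc/X11/xorg.conf.d/nvidia.conf:

    Section "Device"
        Identifier "NVIDIA Card"
        Driver     "nvidia"
    EndSection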


Previous job was at a University, 15+ years using FreeBSD for almost all services. We've had Jails, ZFS, etc for a good part of those 15+ years, which was a huge asset. Not that it was completely trouble free, but assuming that with anything else it would have been easier, is simply a delusion.

Why we went with FreeBSD in the early 2000s? If memory serves, the ports tree and the package management were completely blowing away anything else at that point. Nowadays it is considered given for any platform, but it was a great advantage to be able to compile packages reliably with your own options back in the day!

The fact is, FreeBSD is (and was for a very long time) a trustworthy and very stable platform to run services in. FreeBSD has a very deep philosophy of trying to minimize surprises, which means that most of the time you can focus on your real tasks instead of fighting what has changed in the last version. And Jails are still a major plus, at least from a certain point of view, since they are very easy to manage, copy, install, administer, etc.

Even today where Linux has caught up in all aspects, if you have a large set of services on FreeBSD, you'll invariably have to pay a huge upfront cost to switch everything to Linux and re-learn stuff all over again. It's not a question of freedom or open-source (FreeBSD and Linux are totally free and open-source, in their own way each), it's a question of overcoming the mountain to get across to the other green side. I'm sure it's exactly the same for Linux shops that would like to try out FreeBSD. Why should they pay the cost, if they cannot really get any significant advantage or profit out of it? So one basically sticks to what's known and trustworthy from before, since the investment has already been made and most people don't like doing the same stuff all over again.


I switched from Mac to Ubuntu 4 years ago and I'll happily say that I won't go back.

Everything does "just work" and I have a development environment that mirrors my production environment because of it. All of the dev tools run smoothly with Linux. As a bonus, I can run it on any laptop I like regardless of whether Apple decides the configuration should exist (RIP 17" MBP).

Corporate environments or personal stuff, I haven't found anything that I personally need that I can't do.

Legitimately, the only tool that I miss from OSX is OmniGraffle for making diagrams. Draw.io is good, but Omni was a special kind of polished. If they ever decided to release a cross platform version I'd buy.

There was an initial learning curve the first month when I committed to it, but I don't spend time having to get into the weeds of the system unless I just want to do something complicated.

All that said, this is something that I'm personally happy with and love. I won't recommend it to family and friends.

I'll recommend Apple for them because I know I can count on Apple Care to handle all of their problems, which keeps me from getting those same tech support calls.

For me though? I'll never go back if I can avoid it.


Hello fellow Ubuntu user!

I made the switch from Windows almost 8 years ago, I actually loved the Unity DE, these days I'm rocking Kubuntu on my work laptop, somehow Gnome 3 doesn't feel the same... but I really like Plasma.

Somehow everything just works, my hardware is probably running on blobs of proprietary software but I really don't care (I care about getting my work done tho).

I actually recommend Ubuntu/Linux to family and friends (granted that they don't do any kind of specialized work); my 8-yo nephews have been using KDE since the beginning of the pandemic and after a week of getting used to it I haven't received a support call.

Totally understand the appeal of configuring/tweaking every single detail of the OS to one's desire (been there, done that); as time goes by and life happens I'd rather spend that time with my family or hobbies.


I would agree Omni's software is the one thing that one might miss moving from MacOS to Linux.

For me, I miss OmniGraffle and also OmniPlan. OmniGraffle lets one make nicer-looking drawings more quickly than anything on Linux that I've seen (xfig, ImageMagick & co.), and OmniPlan is a slick and much simpler and cleaner project management software than e.g. MS Project and even Merlin.

(For the record, I never "moved" from MacOS to Linux, but I've been using Macs on and off at work as secondary machines in parallel to Slackware Linux, SuSe Linux, Red Hat Linux and eventually the Ubuntu LTS Linux of the day).


For diagrams, have you tried PlantUML [0]? I've just recently been trying it out and found it very easy to work with. Being able to compose diagrams with text is so powerful IMO (and excellent for collaboration). Definitely take a look if you haven't already. There are plugins for most IDEs as well so you can live-edit your diagrams. It's open source too which is a huge plus (if not a requirement altogether).

[0] https://plantuml.com/
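
A tiny example of what the text source for a sequence diagram looks like (participants are made up):

    @startuml
    actor User
    User -> WebServer : GET /report
    WebServer -> Database : query
    Database --> WebServer : rows
    WebServer --> User : 200 OK
    @enduml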


I haven’t yet, but I will take a look tomorrow


> All that said, this is something that I'm personally happy with and love. I won't recommend it to family and friends.

For family that uses their computers to access the internet and their email, Ubuntu LTS releases have been great. I'm of the opinion that if someone's use-case can be addressed with a Chromebook, then Ubuntu with Firefox or Chrome would serve them just as well.


I like yEd (1) for charts and graphs. It works in Linux/Win/Mac and is freely available, but not open source.

1) https://www.yworks.com/products/yed


No. The BSD world doesn't work like that. You either use the system because you like it, or you don't. Nobody is going to convince you.

The only "reason" to use any BSD over another system is because you want to. That's why you never get a good answer.


For me, stability. I put OpenBSD on my router and it is really quite nice to administrate it with unbound, dhcpd, and pf. Rc is significantly nicer to work with than systemd.

OpenBSD's documentation in general is very high quality, and FreeBSD to a lesser extent. Other distros are not really documented as well, so I have to rely on adapting information from the arch wiki most of the time.

For FreeBSD, jails are nice but I prefer docker. (which I have to use anyways for work).

Both FreeBSD & OpenBSD have a nicer user experience in general, in my experience. That said, I can't use them as daily drivers due to not having docker and other tools, and gaming. Linux isn't stellar for games but I can at least run a lot of games these days through Proton, etc.


My shortlist would be, PF, jails, and the handbook. PF in particular is much nicer to use than iptables to an incredible degree.
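
For a flavour of it, a minimal stateful ruleset looks something like this (the interface name and ports are just examples):

    ext_if = "em0"
    set skip on lo
    block in all
    pass out on $ext_if keep state
    pass in on $ext_if proto tcp to port { 22 80 443 } keep state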


Stability. I don't have to chase the "next shiny thingamajig". For example, I have code that uses OSS that worked 20 years ago on FreeBSD, and it still works now.


[Not an OS developer] In my (admittedly limited) experience, a BSD somehow works better when you need an "appliance" that you can turn on and forget about.


I'm not a developer of either OS, but I use both.

Pro FreeBSD:

- ZFS, the best file system. Its fully integrated into the OS so it always works and its always there. I've always run into issues using ZFS on linux like cache not getting freed causing processes to get killed when I should have had ram available.

- VNET Jails. I can give each light container (FreeBSD would be jails, Linux would be docker/lxd/nspawn...) its own networking stack so it can bring its own network interface up and down, assign its IP address, get an address from my router over DHCP like any other computer, and run its own firewall. The firewall bit is particularly helpful for running brute force protection like fail2ban/sshguard/blacklistd. (See the rough jail.conf sketch after this list.)

- Additionally, I can delegate a ZFS dataset to a jail and let the jail manage it itself. This lets the jail create sub-datasets with control settings like transparent compression.

- After using pf (its a firewall) on FreeBSD, using iptables on Linux makes me want to walk into the ocean.

- It's really trivial to build everything from source on FreeBSD. The base system comes with everything you need to build the operating system so rebuilding FreeBSD and installing your new build is just a couple of make commands away. For packages, FreeBSD has a tool called poudriere which is the best package builder I've ever used. I use that to compile packages with custom options and CPU optimizations enabled for the specific processor in each of my machines for multiple versions of FreeBSD. It also makes debugging and modifying FreeBSD a breeze. For example, at my previous company we had 802.1x authentication for our ethernet, but I wanted to run a container directly connecting to the network so I had it behind a bridge. Turns out in the 802.1d spec it specifies that the ethernet frames used for 802.1x authentication should not be passed over a bridge, so I found the part of the code that did that filtering, commented it out, rebuilt FreeBSD, and everything started working!

- Bragging rights

Pros for Linux:

- The Arch Linux wiki is really top-notch. The FreeBSD man pages are better than the man pages on Linux but I much prefer the Arch Linux wiki over the FreeBSD handbook.

- You get a lot of features from systemd that aren't in widespread use in FreeBSD. For example, you trivially get process monitoring/restarting and stdout/stderr capture for every service. You can get the same functionality with a built-in tool called "daemon" in FreeBSD but it's up to each service to call daemon as opposed to being built into the init system so it's a lot less common. Essentially, I have to write custom service files a lot more often on FreeBSD.

- systemd user services

- steam


For what it's worth, one can take systemd services and convert them into service bundles that run under a service manager on FreeBSD. Or use one of several hundred service bundles that have been done for you for various softwares. There is also a per-user service manager.

* http://jdebp.uk./Softwares/nosh/

* http://jdebp.uk./Softwares/nosh/worked-example.html

* http://jdebp.uk./Softwares/nosh/guide/converting-systemd-uni...

* http://jdebp.uk./Softwares/nosh/guide/per-user-user-services...


Actually, I was "forced" to go back, as java only did green threads on bsd, and the market was moving towards Linux.

Super sad to do that, I loved my BSD jails back in '98 or before :-)


This got really long. Sorry for that, TL;DR answer is "To understand design choices in OS design."

Now for the full answer, with more than a bit of historical perspective ...

It is a subjective topic, of course. In "theory" there are 2.5 forms of mainstream operating systems: Windows, UNIX, and microkernel UNIX (Mach derivatives).

Microsoft has invested in Windows.

Apple has invested in Mach/UNIX in the form of macOS after the purchase of NeXT (the prior, classic Mac OS line is essentially dead).

Then there is UNIX.

UNIX was a product of AT&T; later it became the property of Novell and then the Open Group. Because it was originally a research OS (before System V was released), its source code was shared with other researchers, and the folks at Berkeley's Computer Science Research Group (CSRG) created their own version of UNIX, which they called the Berkeley Software Distribution (BSD). Sun Microsystems worked with CSRG to turn BSD into a commercially successful OS (SunOS) while AT&T struggled to turn their distribution into a commercially successful OS (System V). This got ugly, legal fights ensued, and AT&T paid Sun about a billion dollars to merge their successful OS with the unsuccessful OS. It also resulted in the Regents of the University of California working to rewrite/remove any "AT&T proprietary code" from their distribution, which left them with a research OS that they controlled and didn't have to worry about getting sued over. That OS's descendant, FreeBSD, is pretty much as close as you can get to being a UNIX OS without owning the trademark (which the Open Group now owns).

Then there is Linux. Andrew Tanenbaum wrote a "toy" OS that could be used to teach operating system principles to college students and based it loosely on UNIX. He called it "Minix", for "mini-UNIX". Tanenbaum is, and this is putting it nicely, off-putting. He is one of those people who seem to rub people the wrong way. He got into an online fight with a young student named Linus, who had written his own "toy" OS that ran on the PC/AT computers of the time, called it "Linux", and in his Usenet announcement suggested he didn't think it would amount to much.

A satellite player, who becomes important later, is Richard Stallman, who was so affronted by the legal shenanigans that AT&T was pulling, and by the increased restrictions on access to SunOS that Sun was imposing (mirroring DEC before them), that he decided he was going to do his own thing and nobody would have any way of hiding any of it. He called it "GNU", for "GNU's Not UNIX." He took a lot of pleasure in the recursive acronym; it is the kind of thing folks in the MIT AI lab would chortle at. He was explicitly calling out "Not UNIX" not because UNIX was bad, but because everyone wanted to run UNIX and AT&T was being a huge pain about people calling things UNIX when they weren't paying taxes to AT&T. So the joke was: it is EXACTLY LIKE UNIX but we're "saying" it is NOT UNIX. See? Fun joke. We get to use all your cool OS abstractions and knowledge and you can't sue us, nyah, nyah, hee, hee! Step one was "we need a C compiler", and so that was the first thing they built: gcc, binutils, and make.

At the time it was created, Linux, and the people who contributed code to Linux, were all UNIX fanbois. That is, they loved "sticking it to the man" by building their own version of something that they wanted and that someone else told them they couldn't have unless they paid for it. At the same time, Windows was a computer science joke. It was for "stupid people who didn't understand multi-tasking", it lived in a single address space where random things could crash the system, and every tiny change required you to reboot the thing from scratch. The joke was "did you try rebooting it", and it was used with much derision.

Now here we get to an interesting fork. The Regents of the University of California could not abide the "copyleft" ideas of Stallman and the nascent Free Software movement. It wasn't because they wanted things to be proprietary; rather, it was because they had gone through a protracted legal battle and knew that their licensing had been litigated and would hold up. As a result, the CSRG was not going to use anything like gcc with its GPL license and stuck with their portable C compiler and later variants. Meanwhile, the rebuffed Stallman found a lot of people who were willing to use GNU's stuff and contribute to Linux[1]. Writing a userland to mimic the UNIX tools was "easy" and done quickly, X was already open source from MIT and so that came too, and many people started writing the myriad of small device drivers that were needed to have this new OS boot on different systems. Many hands make for light work, and it flourished.

In 1996 I had to choose between using FreeBSD 2.x or Linux 2.x in an Internet appliance our company was building. While we loved that new drivers were appearing regularly for Linux, the entire environment churned with change. There was no discipline in the userland from release to release. Command line options changed or were added, behaviors varied, and every new release had to be scoured for random "wouldn't it be neat if ..." kinds of changes that someone had thrown in. FreeBSD on the other hand didn't get drivers as quickly, but it evolved in an easy-to-comprehend way that didn't involve a lot of churn, and it never changed important things in a "dot" (or worse, dot dot) release.

So the environment was that the "cool" companies (Sun, SGI, NeXT->Apple, HP) were using UNIX, so the open source community worked to make Linux as UNIX-like as possible. And the UNIXes (BSD 4.x, FreeBSD, NetBSD, OpenBSD) moved along more slowly but in a very UNIX direction.

Jump ahead now 5 or 6 years, and Sun, SGI, and HP are all dead. The "cool" companies (FAANG) are using Linux in their data centers (or, as in Apple's case, still a UNIX-derived OS), but Microsoft has upped their game and now they have a fully multi-tasking OS that a bunch of teenagers spent their formative years using and learning to tweak. Those same teenagers would love to have an "open source" version of Windows, but Microsoft is not going to accommodate them.

So once again, you've got the "BSDs" which have a disciplined, ordered integration schedule. And you have Linux with its free-for-all user land. And these same folks decide they are going to start integrating features that they like about the Windows OS into Linux, in part because they can, and in part because Windows is no longer the lame besmirched OS that "only losers" use. And as a result of that activity, Linux begins to turn toward a new "north star" which is now Windows rather than UNIX.

We can argue all day and all night if operating system configuration is best done with a registry or a series of text files, and never get anywhere. They are subjective choices and so, like policy choices, arguing them is not going to get you anywhere. There are a lot of such choices that go into OS implementation. But as a result of the new influx of contributors to the Linux user land, and their early computer experiences, Linux is now mimicking Windows design philosophies rather than UNIX design philosophies.

So to answer your question about "Why should anyone use them?" the answer is to broaden your understanding of OS design philosophies so that you might better understand the tradeoffs and make better choices about which OS you might choose to support or not support in the future.

Phew.

[1] Linus has a similarly cautious attitude about the GPL as evidenced by his messaging over the years.


I tried FreeBSD for the first time last year on a network appliance. Never going back to linux (for a server). It's so elegant and well-put-together, and "just works" to a much larger degree as long as you don't need a fancy GUI. Has lots of awesome features like boot-from-ZFS.

For a desktop/laptop computer, I do not use it.


I seem to recall at one point Sony tried to upstream stuff and it was knocked back for not being up to scratch.

Netflix contribute back without any obligation to. But they track the development branch closely, making upstreaming easier than with the PlayStation OS, which presumably is a bit more 'frozen in time', making it much harder to submit patches.


"I've seen some of the BSD folk present that as a good thing. To each his own, I guess."

Yes, this is the whole point of making something Open Source.

With FreeBSD companies like Netflix contribute to it, and I'd imagine it's similar for OpenBSD as well.


I thought the original idea behind free software was to provide more freedom to the users (to tinker with, replace, and learn from said software), and not to pad the bottom line of a multi-billion dollar company by saving them the need to spend millions of man-hours building their own OS from scratch?

If GPL didn't force companies to contribute back, would Linux ever be where it is today?


Technically speaking the GPL doesn't force companies to "give back". It simply says you must give the source to anyone who's been provided a binary. You don't even have to do that proactively; you can comply by making them ask and mailing them physical media (if you really want to be a jerk about it).

A vast majority of the GPL "giving back" I see is Company XYZ dropping a messy tarball in some obscure portion of their web site. The code never goes anywhere, and frequently not upstream. No one else benefits from their work or GPL compliance.

Not that this is a good approach - the real companies that "get it" know they're better off upstreaming anything they want/need to depend on in the future.


Thing is, the decision whether to open source your code comes first, not last. GPL doesn't force it in any way, GPL prevents one from using GPL code if they don't intend to immediately release everything.

In other words, what would happen with GPL is that companies which don't intend to give the code back would simply go somewhere else instead.


"not to pad the bottom line of a multi-billion dollar company"

The freedom in F/OSS licensing applies to me just as much as it does to multi-billion dollar corporations, as it turns out.

Sorry, but this is a fundamental misunderstanding about the purpose and intent of Open Source software. If you want to arbitrarily restrict people from using your code, that's fine, but at that point it no longer is open source. It's completely within one's rights to maintain their intellectual property and license it out to businesses at a cost, but all of us benefit from F/OSS software, so it comes down to a personal decision. Imagine if things like curl were proprietary...

They chose to use BSD licenses, which are fairly permissive. The most common F/OSS license is MIT, which is also very permissive. You could use GPL, AGPL, or LGPL "copy-left" licenses which impose specific requirements, but many orgs won't even look at projects with those licenses.

edit: wording


Not everyone wants to maintain a fork though so there are incentives to giving back.


If you hop in one of the BSD subreddits you'll find that every other post is someone asking your same question. You might find some enlightenment there.


Unfortunately, most of the answers that I have seen come down to saying one and the same thing, namely, that a BSD is better because its userland is "part of the OS." Though it could be true, this is not very enlightening.


Perhaps your question is too generic? It all boils down to "right tool for the job". If you need support for latest hardware (as a Desktop OS) you will have better luck with Linux than the BSDs. On the other hand if you want a free, rock solid UNIX OS to run a network/file server, look no further. Also, ZFS & Jails (before Docker was cool).


I agree. That's not a very compelling argument at all, IMO.

I've read some other stuff, too, though, such as praise for the ports system and FreeBSD's "jails" feature.

But I imagine it's a little like Linux distros. If you don't already use Linux, it's kind of hard to understand why anyone cares about the minor differences between distros.


Perhaps your path to enlightenment is different. Try reading these manual pages and this handbook:

* https://www.freebsd.org/cgi/man.cgi?query=ps

ps has one set of options and has used getopt() since April 1990.

* https://www.freebsd.org/cgi/man.cgi?query=inittodr&sektion=9

Documenting the operating system encompasses documenting the kernel as well.

* https://docs.freebsd.org/en/books/handbook/

"Part of the o/s" is not the best description. The idea is that there is a clear demarcation between all of the programs that come as part of the operating system, and all of the programs that are applications softwares that are installed on top of the operating system.


This is a struggle for any system where the advantages really are in the details, but that's probably the best reason to choose BSD over linux.

At some point you just have to commit enough effort to try it out and experience the difference, or just be happy with Linux. After all, if you're happy with Linux, then "BSD's userland is part of the OS" isn't a fix for you. But if working with Linux leaves you with this weird itching sensation in your brain that there should be a better way, give BSD a shot.


I used to spend hours on getting things like window managers, X11, etc. set up just the way I wanted them to be.

However, the older I get the less enthused I am about having to play around with config files to get basic features like suspend/resume to work on my daily work notebook.


I used to do this for a very long time. I tried to customize almost every aspect of the system. At the time, I also used to bash on Macs a lot.

I later figured out that it all stops being fun whenever I have some work to do quickly and am not in the mood for dealing with my tiling WM having too many windows open, or with some random broken packages.

It finally clicked when I had to collaborate on a UX design project and it was a big pain... My teammates used Sketch and Photoshop. Sketch is not available on Linux (which I used at the time) and GIMP just didn't want to open/save PSDs right (there was always something wrong with layers).

I switched to macOS. It was quite a big change, but I've since figured out I don't need to tweak every aspect of the OS just because I can.

Don't get me wrong, Linux, FreeBSD, OpenBSD... are great operating systems. They do just work for lots of use cases. It's just that customizing your OS often doesn't justify the time spent.


I jumped from Apple to Linux because I had to get work done and the scientific software that I wanted to use was a pain to keep working on Macs. Also, it was impossible to configure MacOS to work the way I wanted it to, unlike Linux.

By the way, I use a tiling window manager (dwm) and I can’t figure out what you mean by “tiling WM having too many windows open”. I also don’t understand why free software such as the Gimp should be expected to support the binary file format of some closed-source program. But I do understand the need to work with other people without making excuses, even if their choice of tools is shortsighted.


I've been running a work Linux virtual machine (or a few of them) on a MacBook. Best of both worlds really -- I don't have to worry about running Linux on the hardware or running dev tools on macOS.

As a bonus, a full system backup is a simple folder copy to a USB stick, and with that stick I can instantly continue working on any machine, Mac, Linux or Windows.


Yeah, I think there is virtue in this approach. If I had figured this out back then (and if it were possible—I don’t know) I might have gone that route.


> I can’t figure out what you mean by “tiling WM having too many windows open”

Tiling WMs only work for me when I have <5 windows open. Afterwards, they just become a huge mess, unless you have a big screen with a huge resolution.

> I also don’t understand why free software such as the Gimp should be expected to support the binary file format of some closed-source program. But I do understand the need to work with other people without making excuses, even if their choice of tools is shortsighted.

I'm not saying they should be responsible for supporting a random (though admittedly quite ubiquitous) proprietary format. But on the same topic, where would Open-/LibreOffice be without support for docx/pptx/xlsx? If they never supported any proprietary formats, there would be no alternative to what everyone else uses.

If you have the privilege of choosing what to work with in all situations then it's of course not a problem. But one can't expect a group of UX designers to completely break their workflow for a project with a strict deadline just because one person holds onto their sacred OS.


> Afterwards, they just become a huge mess, unless you have a big screen with a huge resolution.

Or you can just use several virtual desktops.


Or use tabbed/fullscreen/custom views.

The GP comment is so backwards that I wonder if they even used a tiling wm for any period of time longer than a day or two. Tiling window managers are better at managing many windows than having them all scattered randomly over a desktop with a traditional window manager.


Not all tiling window managers are the same and one of the bigger differences is what they do with new windows, so this is really dependent on the tiling WM you've used.


If you plug every single window into the same workspace no matter how many windows you have, you are basically using it ineffectively.

Let us propose you have a really tiny 12" screen and you are using 5 windows that each really need your full screen area to be useful. It makes no sense to split the screen 5 ways, so instead each window gets its own workspace. This hardly seems to be a situation where a tiling window manager benefits you; you could very well arrange the same thing in any graphical environment by switching to each workspace, opening each application, and maximizing the window.

However, what it did do was keep you from dumping all 5 windows into the same workspace (the result would have been painful), and it ensured that instead of alt-tabbing an average of 3 times per window switch, or taking your hands off the keyboard and clicking a UI element to select the individual app from the taskbar, you go directly to the correct workspace when you want to go from app 1 to 5 or app 2 to 4.

It also automated maximizing the windows as they were created.

To put it succinctly it encouraged a certain workflow and automated the window management steps when using that workflow.

This is also true if you have 18 windows and 3 28" monitors. By far the most common layouts are simple arrangements of 1-3 windows on the same monitor, which can be applied automatically as windows are added.

If we consult one of my favorite infographics

https://xkcd.com/1205/

To save 2 seconds 100 times a day (not depicted), we can spend 96 hours learning to use a tiling window manager and come out ahead over 5 years. In reality the time required is probably on the order of 2-4 hours, and since it is a low-stress, simple activity, it can trivially be done during downtime you would otherwise spend on social media, as opposed to when you ought to be working.

In effect you are trading a few hours of playtime for 96 hours of more effective work time, which, if you think about it, is a pretty good trade.

It's also entirely possible that you don't enjoy this workflow and thus wouldn't benefit, which is OK too.

Insofar as LibreOffice vs. Gimp goes, you are entirely correct. On a related note, Bloom, although not free, claims to have great PSD support:

>You say Bloom imports PSD files. What specifically does it import and does it support layers? We understand why this question comes up. :) A lot of packages claim to import PSD files, but then either end up importing a single flattened image, or layers stripped of styles, masks, and blending effects. We are proud to have created the best-in-class PSD importer for Bloom, which supports not only layers and groups, but also masks on both of them, all layer blending modes, and even layer blending effects such as drop shadows and glows - even on groups! While we can't guarantee the documents will look pixel-perfect compared to Adobe Photoshop (they are completely different software packages, after all), they are very close in terms of their appearance, and all key information is preserved.

https://thebloomapp.com

I haven't tried it personally but it has a free demo.


> It's just that customizing your OS often doesn't justify the time spent.

That really depends on a) what you choose to customize, and b) how adept you are at customizing. In other words, it depends on your skill and foresight.

With enough practice - you get it by simply working with computers over the years - you can recognize the pain points with the biggest payoffs quite accurately. With enough skill, you can fix them relatively quickly. This way you get a "10x" setup tailored for your specific needs.

Of course, trying to indiscriminately customize everything while putting a lot of time into learning how to customize them is a net productivity sink. That's normal. Most people start with such approach, get burned by it, and conclude that the whole customizability-as-a-feature is not worth it.

It's quite possible that it's true for the majority of users. The well-tuned, A/B-tested, in-depth-researched defaults can be good enough for many. I have nothing against such defaults. However, forcing me to use them, while I know precisely what I personally need to be more productive, is something I can't agree to.

My current setup is Linux, AwesomeWM, Firefox, and Emacs. I customized away all the pain points I had with them a decade ago (half a decade with Awesome). The time spent on maintaining the configs across upgrades is trivial, on the order of tens of minutes a year.

To sum it up: customizing your OS can be well worth it if you do it right. You can also go wrong with it, too. But, using software which doesn't allow for customization not only removes the risk of customization going wrong - it also robs you of the possibility of doing it right.


> My current setup is Linux, AwesomeWM, Firefox, and Emacs. I customized away all the pain points I had with them a decade ago (half a decade with Awesome).

Amen brother! Exact same setup here: Linux / AwesomeWM (with a dedicated modifier key on my keyboard only for AwesomeWM related keybindings), Firefox, Emacs and the occasional IntelliJ IDEA for Java stuff.

> The time spent on maintaining the configs across upgrades is trivial, on the order of tens of minutes a year.

Exactly.


As a tech fan, but not a professional programmer, this only took me a couple hours to figure out :-)

Basically, the moment you start to use the Linux desktop, you will immediately start googling how to tweak this and that. Within an hour, you will start going into various folders to change various config files. For me, I quickly realized that 1) I won't remember what I did and I don't know all the implications of those changes I made, and 2) I don't want to spend time on that stuff.


> 1) I won't remember what I did and I don't know all the implications of those changes I made, and

That's why I keep a log of everything. Any error message I encounter, any change I made and why. Neatly organised, in an org (org-mode) file.

I do exactly the same thing as in a programming project or server configuration: keep notes, explaining to my later self (and to others) what I did and why.

> 2) I don't want to spend time on those stuff.

As I commented already: then you're forced to adapt to what others thought would be best for you, instead of adapting the system to your way of working.


Bingo, I just learn to accept the defaults as much as possible. If I can't love them, I accept them.


> It's just that customizing your OS often doesn't justify the time spent.

Of course it does, spending a tiny bit of time optimizing my workflows here and there, now and then, whether OS or text editor, saves hours yearly. Not to mention reducing annoyances and plain friction.


saves hours, yearly? tiny bit of time optimizing?

Hahahaha. Keep telling yourself that.


A Linux fanboy would argue that your problem wasn't Linux, it was proprietary software that vendors won't port to Linux. They might also argue that your team should not choose software which is so restrictive.

For me though it boils down to two things: (1) Linux does not go out of its way to provide stable ABIs, which makes porting proprietary software to Linux and maintaining it there expensive, and (2) if you are serious about doing productive work, the best productivity software is often proprietary. Add those together and there is a sort of gradient over time where, if you work together with non-Linux users, there are always things pulling you over to Windows or macOS.


> They might also argue that your team should not choose software which is so restrictive.

Which in the end is a valid point.

I've seen offices fighting with their own MS Word templates because neither current MS Word versions nor alternatives like LibreOffice can correctly display and format them anymore. Meanwhile, the online Word in Microsoft Teams is not 100% consistent with the offline package, and when you need your PDF export to just work it's not fun when suddenly PowerPoint decides to always invert the colors for no reason.

Then there are cases like the subscription-based Adobe tools which are nice until you happen to be in a country targeted by a US trade embargo and overnight your subscription is cancelled with no way to even access your own files in cloud storage. Oops.

Is Gimp inferior to a billion dollar corporation's top-seller? Sure. But I know tomorrow I'll still be able to open all my files on almost any device running a desktop OS. If you earn your salary with this kind of software, that's still not very convincing, of course, and I get that. On the other hand, when you depend on this kind of software working reliably, it's worth considering how much you really want to depend on some corporation's servers being online when you need it.


> Is Gimp inferior to a billion dollar corporation's top-seller? Sure. But I know tomorrow I'll still be able to open all my files on almost any device running a desktop OS

Not only that, skills you learn on an open platform don't really expire. I spent a while getting pretty good at using both Photoshop and GIMP, but these days I'm not using Photoshop enough to warrant paying a subscription fee to use it when I do need it. Those skills and the time invested essentially go to waste without a license. GIMP, on the other hand? I've been benefiting from the time I've invested in it for 15+ years now without a hitch.


    > (1) linux does not go out of its way to provide stable ABI's, which makes porting proprietary software to linux and maintaining it there expensive
My impression is that the opposite is true for user space - https://yarchive.net/comp/linux/gcc_vs_kernel_stability.html Maybe you're referring to GNU or kernel modules? There are more than a few anecdotes of people running 20+ year old binaries that still work with X11.


Linux keeps the userspace<->kernel ABI stable, sure. But that's largely an implementation detail; what matters is the ABI presented to applications - at the library level, not underneath libc. And this varies between libraries and distributions.


About (1), Linux itself actually does go out of its way to provide stable ABIs, as do some very common infrastructure-level libraries like the GNU C library. X11 itself is also very stable, and both the code and the protocol have been compatible going back to the early 90s.

However, everything built on top of those is not and does not care about ABI or even API stability, and now several desktop projects are actively undermining X11's stability with Wayland. Gtk+ breaks its API and ABI every major version, as does Qt - and IMO even if Qt wanted to remain stable, as a C++ library it is very hard to do. Also, Qt is really middleware and its developers have very different priorities than what you'd need for an actual platform (not to mention how intentionally misleading they have been towards users of their library).

There is a stable desktop API on Linux, Motif, but that is ugly and nothing targets it anymore and the company behind it nowadays actively promotes Qt instead.


Regarding Gtk's frequent breakages, I really can't understand the thought process of whoever is making these decisions.

Gtk is pretty much only relevant on Linux, the Linux desktop is itself a tiny fraction of the desktop market, and the number of developers willing to write GUI apps for Linux, to write documentation or tutorials on the subject, etc. is already extremely low. So you'd think it makes sense to keep things as stable as possible, to make sure that no unnecessary effort is wasted and to encourage people to improve their apps or write new ones.

But apparently the Gtk people don't care. So much effort has been wasted due to stability issues, not even counting the multitude of well-meaning people who got burned in the process and just stopped caring about Gtk altogether. It's really a sad state of affairs.


It's unfortunate about the ABIs - I think this probably adversely affects linux stability even when you have the source and allow a recompile.


Couldn't they trivially use Qt and bundle the libraries needed with the software?

This doesn't seem dissimilar to how one could ship software for windows.


Got a massive jump start to my career by spending high school recompiling X11 and such endlessly.

I have much gratitude for how much I learned. Apple has made great money off my desire to never do that again.


Why would you recompile X11? Back in the '90s and early '00s I did compile kernels to make them lean and to enable functionality that was not in the default kernels. But I never saw anyone recompiling X11 outside of Gentoo and other source-based distributions.


I remember Gentoo linux being quite the rage in the early 2000's (at least in my office). Compiling everything and getting your system up and running was a badge of honor, I guess.


I remember booting from a Knoppix live CD and using that to install Gentoo so that I could use a web browser, IRC client, and GAIM to keep in contact with friends while the full-day process of the stage1 install worked on my old, slow computer. I remember not including GNOME or KDE in my ebuild flags so that the build took less time, and then using WindowMaker as an X11 window manager because it took less time to compile and ran faster than trying to run GNOME or KDE on that old machine (PII 400, 96 MB RAM, 8 MB ATI onboard video back around 2003 or so).


Gentoo taught me so much. When their documentation went through that weird phase where stuff went missing, I dropped off and my Linux knowledge declined. I stopped using it and lost track of what's trendy nowadays.

Compiz times with the cube desktop and compiling kernels overnight in my Pentium 4 kept me away from making out with girls many times.


For quite a while, if you wanted to learn how a Linux system really operated, you'd build a Gentoo system.

Eventually, you'd get tired of all the options and switch to something more stable, especially for servers. I have some fond memories of Gentoo and emerge and compiling all of my software, just so. Sadly, it was never very stable... and not really through any fault of its own. Really, the customization you could do was great... but there was always one more thing to tweak, one more knob to turn...

Badge of honor -- yes. I'd almost call it a requirement for someone to work through once or twice.


> For quite a while, if you wanted to learn how a Linux system really operated, you'd build a Gentoo system.

Because watching 'configure' output scroll across your screen 40 times makes one a computer expert, natch.


It wasn't watching the compiler output... it was choosing the components. You'll need A, B, C, etc. For each category there was often more than one choice. You had to choose which syslogger you'd use, for example. With RedHat or SuSE or other distributions, those choices were already made. You may not have otherwise known what options were available.

Imagine starting out with Linux today and not knowing that systemd isn't the only option for an init system. (Regardless of whether or not you like it, it's helpful to know what alternatives exist).

In the end, with Gentoo, when you had your config set, yes, you'd get hours of compiler messages. And if you were lucky, none of them would be errors.

But you'd also know how the system worked. Honestly, it was also about control. With Gentoo, you could configure the system exactly as you wanted, down to the compiler flags. How many other systems let you really do that? Instead of targeting a well-known arch (ex: i686), Gentoo let you set your compiler flags for the entire system to match your exact CPU. The upside was that it was your system. The downside was that it was your system and if/when it broke, you'd have to figure it out. If your goal is to learn how to use Linux, that's also a feature. If your goal is to have a stable server, not so much.

Like the original parent commenter, I was playing with Gentoo back in the early 2000's, so much has probably changed. But I definitely learned a lot back then.


Configure output isn't compiler output, so I guess we've got to the bottom of this mystery.

> With Gentoo, you could configure the system exactly as you wanted, down to the compiler flags.

This doesn't give you the control you think it does. Most of the customization you refer to is basically adding a USE flag to make.conf and running "emerge whatever" again, which I maintain isn't much of a teaching tool.

You end up learning about Gentoo, not software, compilers, operating systems, or computers.

> Instead of targeting a well-known arch (ex: i686), Gentoo let you set your compiler flags for the entire system to match your exact CPU.

This is a good example. So many people ended up thinking (incorrectly!) that this benefited them in some meaningful way, and thought that they were learning and customizing, when they still don't know, even a decade on, that '-O2' and '-O3 -march=native' are for almost all tasks indistinguishable from a performance standpoint or how to even go about measuring the difference.
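
For what it's worth, actually measuring it isn't hard; something along these lines, where bench.c stands in for whatever workload you actually care about:

  # hypothetical micro-benchmark; bench.c is whatever hot loop you care about
  gcc -O2 -o bench-O2 bench.c
  gcc -O3 -march=native -o bench-native bench.c
  for i in 1 2 3; do time ./bench-O2; done
  for i in 1 2 3; do time ./bench-native; done

Run each a few times, compare, and for most workloads the difference disappears into the noise.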

Gentoo is like a show on the History Channel: it sure feels like you're learning!


Compiling and installing large amounts of system software, a la `emerge world` or `make buildworld`, is great exposure to many system components. `make menuconfig` introduces one to various features of the Linux kernel, and yes, even a humble `./configure` illustrates how the software in question depends on libraries and hardware. I wouldn't casually dismiss the educational value of these experiences, nor the curiosity of those partaking. They're certainly more expository than the digests displayed in a `docker pull`.


I built a Gentoo system once or twice, and I learned a lot that I otherwise wouldn't. Even just following the directions forced me to go to parts of the system I otherwise wouldn't have.

Now I use Macs on the desktop and linux on the server.


I've had my fair share of X11 builds at different work places.

The typical use case is when you have to work on some Linux dev box which does not have any (or a somewhat recent) X11 and the distribution is either too old to get one, or simply you're not root.

In these cases, the simplest (though annoying) solution is to rebuild X11 and a wm from source on the box as user.

Given OP mentioned he was doing his studies, I guess he was required to work on some old boxes and wanted a decent modern environment.


> or simply you're not root.

Maybe I misremember, but didn't X11 require the SUID bit set before systemd-logind if you wanted to use a GPU?

(Of course, if you want to run remote X11 clients, then you don't need elevated privileges.)


Sure, I was talking about remote dev boxes, I would then connect to them using a local X server.


I remember recompiling X11 around the time freedesktop was getting started. Because features like XRender, XFT, etc. were coming online and I didn't want to wait for my distro to update. Having decent fonts was that good.


I certainly remember rebuilding X to get it to work with a new graphics card. Normally a little investigation to find out the changes I’d need to make to the code for identifying the card and sometimes some other small changes.


Slackware. Constantly trying to install things, managing dependencies, breaking the entire system, starting from scratch, etc.

All I really wanted at the time was a Photoshop clone (GIMP). Broke high schooler that didn’t want to pirate.


Installing slackware from a stack o' floppies. Fond HS memories...


I'm guessing at some point in the distant past the distribution you were running didn't build X with the options you needed; this wasn't normal 18 years ago and it certainly isn't now.

A Linux Mint install normally consists of a friendly gui installer followed by installing common software from an app store interface. It's more friendly than installing windows.


> It's more friendly than installing windows.

When's the last time you installed windows? I installed w10 about 3 weeks ago and I:

- Plugged a USB key & Ethernet cable into my PC

- Clicked through a handful of GUI options

- Made a coffee

And when I returned (~15 minutes, I didn't time it), it had installed windows, done the post-install reboot crap, and was ready for me to install my own software. Out of the box I had internet connectivity, power management, semi-modern graphics drivers (< 3 months old) and was ready to rock.


I had a fresh install of windows from 6 months ago that just committed unrecoverable suicide a few weeks past after an update. The filesystem was fine but it wasn't able to boot any longer and absolutely nothing had changed save for the update.

Going through the recovery tools built into windows was pretty easy, but nothing worked, including "refreshing" the OS, which is basically just a reinstall that keeps your files, and ultimately I had to just start from scratch.

When I did the reinstall I decided to switch from legacy to UEFI boot and enabled that. The windows installer tried to get me to link my account with my imaginary microsoft account; opting out of that is designed to be confusing. Then it tried to get me to enable invasive telemetry with promises of functionality I didn't care about. It wouldn't have worked anyway, because without changing another peripherally related option in my motherboard's settings menu, windows consistently malfunctioned when enabling networking, in a way that was not an issue under Linux. One hour later, thinking I had somehow created the windows install USB with the wrong option, I figured out what was wrong and finally had windows working again.

In the course of 6 months windows had to be installed twice and took over 2 hours total time and tried to trick me into tying my ability to use my own computer to their permission and giving up my privacy.

The only reason I bother to keep it around is that it's still easier to game under Windows. Might as well call it XboxOS because it's surely unsuitable for any other use.


I had a non-standard monitor (mid-90s) which would not work with xf86config out of the box. I spent a nice summer trying various settings, and it was such an aha moment when it worked.


I'm there too right now. Went from using BSPWM on Arch with all kinds of custom hijinks to just sitting on KDE because it lets me go about my work without much hassle. Both have their merits, and if I was on weaker hardware I'd have no qualms going back to my WM-only setup. But KDE keeps getting leaner and lighter, and it's smooth and hassle-free for the most part.


And now we get Wayland by default on 5.22.


I have a super petty reason for disliking Wayland. There are no cool retro desktops/WMs for it. On X I can run shit like WindowMaker, CTWM, and, if I want to, CDE.

That and my xdotool scripts don’t work.


Kind of have to agree w/you. I run KDE on Wayland, it's pretty good, but I'd like to have a really slimmed down desktop like XFCE run on Wayland, possibly on BSD as well as Linux. That would indeed be killer. CDE? Man... I didn't even know CDE was still around.


It’s open source now and somewhat fragile / buggy. I haven’t been able to get it working on Linux yet but the FreeBSD port was quite easy to build.


> the FreeBSD port was quite easy to build

Forget easy to build; it's in the official ports tree; you can `pkg install cde` :)

https://svnweb.freebsd.org/ports/head/x11/cde/


Think of it this way: we get to use "cutting edge" things on Wayland now which people will be calling retro in a few years.


Surprisingly usable, but I'm still holding out because one of the things I need (auto-type on KeepassXC) is still not available on Wayland. It's an active issue[1] and will hopefully be sorted soon. MOST of my workflow looks and works just fine on it though, which is very impressive.

[1]: https://github.com/keepassxreboot/keepassxc/issues/2281


> However, the older I get the less enthused I am about having to play around with config files

That's why you leave around your config files from the '90s, you don't touch them and they still work!

Sent from a FreeBSD machine running fvwm ...


Yeah, I do most of my work inside VNC servers running fvwm with emacs, xterm+tmux+zsh, and firefox. My config files haven't substantially changed in 25 years. The desktop login environment has changed many times over this period (enlightenment, sawfish, compiz, metacity, mutter, mutter-on-wayland, even Windows 7 and 10 for work) but I only configure that enough to set up virtual desktops in which I just bring up VNC viewers for multiple hosts.


> running fvwm

That's the secret. I've been running Windows 10 on the desktop for almost the last decade, but ran Linux for more than a decade prior to that. If I want a Linux desktop, I know I can pull out my old archived FVWM configs and be set.


I hear that from people all the time who are more advanced in their careers than me. But I can't imagine finding myself in a place where I don't have my current setup. With my custom i3 and Emacs setup, I am able to do everything so much faster than anyone else I work with. People often comment, "Wow, you are able to do that so fast."

Now that I discovered Vimium, I consider it a UX failure if I ever have to use the mouse. Like, I don't think I could ever go back to having a system that doesn't navigate by jkl;, everything else seems too slow and clunky. Like yesterday I had to upload a file and the interface only supported drag and drop, and I took it personally that I had to use a mouse.

Sorry, I am ranting, but after having experienced the power of the keyboard all the time, and the ease of doing things in the terminal, how can you stand going back? This isn't rhetorical either; I genuinely just wonder how you overcome the additional pointing and clicking required.


>how can you stand going back

For me, like many things, it is a tradeoff. Am I THAT much more efficient in some uber config, balanced against the time it takes to fix/update/tweak/keep it current, and to deal with multiple systems and repeated setups?

And the answer for me is, well, no, no I'm not. So these days I use a much smaller set of "must have" custom configs and mostly go with the defaults.

>a UX failure if I ever have to use the mouse

I can see that for certain systems/applications. But, I have to deal with various webapps - jira, confluence, continuous integration settings, our internal source code instance, etc - and I can't imagine the scenario where spending the time to configure and learn keyboard-only navigation would result in an efficiency payoff.

It's similar to the argument about why dvorak/colemak/workman/etc is "better". Yes, yes they are, but there is no way I'll ever get the time back in efficiency that it would take to become proficient. I'd need some outside motivation, such as RSI or an injury to alter the cost-benefit calculation.

I don't need to turn every webpage I need to deal with into a keyboard optimization puzzle in order to shave a few seconds here and there. That's the time savings we're talking about right?

>I can't imagine finding myself in a place where I don't have my current setup

Do you mostly work on a single system?


> I can see that for certain systems/applications. But, I have to deal with various webapps - jira, confluence, continuous integration settings, our internal source code instance, etc - and I can't imagine the scenario where spending the time to configure and learn keyboard-only navigation would result in an efficiency payoff.

That's the beauty of Vimium, it gets you 90% there, but those 90% work the same everywhere.


I guess it's a spectrum.

I share OP's point of view. I've had my youth years of complete custom desktop experience, every single detail under control and finely customized, on whatever distribution was the apogee of the time (gentoo, arch,...).

Years passing by though, I've grown past it. Now I just install Ubuntu; I don't want to lose time on wifi drivers, keyboard backlight, ACPI suspend/resume, etc.

Doesn't mean that I don't customize my environment though. I've been using i3 for 10 years and would not stand anything else. Same for my vim configuration.

I just prioritize some things (i3, vim) over others (distribution, package manager).


Can you explain why you're so fond of i3? I've played with it but it never stuck.


I think the other comments already made good points, but just to complete it for me:

- There's very little learning curve with i3. After setting/learning 2 or 3 keyboard shortcuts you're good to go. That makes the adoption a no brainer.

- 99% of what I do is done in a shell (code, sysadmin) or a browser (read doc, write doc). That means I often need a lot of shell windows all over the place. The tiling really helps here. I could use tmux/screen for that (that's what I did before i3), but I often already have tmuxes on the remote boxes I ssh to, and the inception makes navigation harder.

- It's fast; there's no animation or latency whatsoever. I can very quickly open a shell, ssh, run a command, close it, etc.

Most of these could be done with any keyboard oriented tiling wm, I just so happen to use i3.


Not the original commenter here but

- i3 treats individual monitors as virtual desktops instead of having them stretch across all monitors, which makes it easy to shift, say, a workspace with a browser or a chat app onto a secondary monitor without losing the current task or having to rearrange everything.

- i3 lets you define keybinding modes that work much like modes in vim (rough example below)

- i3 is very simple and comprehensible; it's about as non-magical as can be, which makes using it predictable and simple
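
For instance, a resize mode in the i3 config looks roughly like this (a sketch; it assumes $mod is already set, e.g. to Mod4):

  # enter the mode with $mod+r, leave it with Escape
  mode "resize" {
          bindsym h resize shrink width 10 px
          bindsym l resize grow width 10 px
          bindsym j resize grow height 10 px
          bindsym k resize shrink height 10 px
          bindsym Escape mode "default"
  }
  bindsym $mod+r mode "resize"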


For me personally, it's about putting the right amount of emphasis on your tools. Using your mouse shouldn't be outright verboten, but I do see your point about having a properly set-up editor. I could probably be 30-50% more efficient with just the right vim config. For reference, here's mine:

  $ ls -l .vimrc
  ls: .vimrc: No such file or directory

However, as a counter-example, I've instead spent time learning how to use Ansible, which lets me automate parts of my job in a way that just wasn't feasible 15 years ago. To me that provides a much larger benefit (easily 10X, maybe even 100X).

I guess my point is that I don't want to spend too much time on the plumbing part of technology and leave figuring that out to someone else - much in the same way we're using libraries nowadays instead of re-implementing hash tables ourselves in every new project.


My setups have never been customized that deeply, but I used to be much more into the process than I am today.

What changed that was the frequency of needing to set up a working environment from scratch, for whatever reason whether that be a fresh OS install or a change of personal or work machines. After a while it becomes tiresome, both in initial setup and in maintenance (regardless of OS, highly custom configurations are more brittle and can break in more ways).

I still customize a fair bit but generally speaking I keep things closer to default and gravitate towards OSes and distributions that come reasonably close to where I want my environment to be out of the box so the amount of setup and maintenance is reduced to something sustainable.


I would love to see a video of how one operates a GUI using a keyboard. I know some vi and zero emacs, so how Vimium works eludes me, but it would be great to see someone do some impressive stuff without ever touching the mouse. I imagine the learning curve must be really steep, and it's not something I'd like to spend any portion of my work day learning.


Well, if you are familiar with Vim, learning Vimium was super easy, barely an inconvenience. I used an Anki deck to become familiar with all of the shortcuts and boom, off to the races.

The biggest key to remember is f, which shows all the links you can click on. I should add a disclaimer, however, that I end up using Vimium probably only 50% of the time; I've noticed that when I am in the middle of working I use Vimium more heavily, while during light browsing I tend to use the mouse a bit more.

I will also say the other big thing that wasn't possible before Vimium is that I can now add a bookmark to pretty much any page I will visit more than once and then that page is only a 'Shift + b' and a couple of keystrokes away. Super efficient when dealing with giant bloated web apps that take 5 seconds to render every state change.


Vimium for gui in two secs:

f - show keys to press to click something and open the link in the current window

F - same except opens in a new window

/<chars> <enter> - search, n for next and p for previous

V - visual cursor selection, y copies

esc - get out of anything

There's a lot more, but you can go a long way just with that.


Check out Luke Smith on Youtube.


> Like yesterday I had to upload a file and the interface only supported drag and drop, and I took it personally that I had to use a mouse.

keynav to the rescue! (As I’ve said before it’s no replacement for proper vi bindings like Vimium or better Pentadactyl, but it is useful as a second-to-last resort before a hardware pointing device.)

Dragging is not bound by default but it is easy to uncomment in the example config (cp /usr/share/doc/keynav/keynavrc ~/.keynavrc and gg72 in vi on Debian):

  ### Drag examples
  # Start drag holding the left mouse button
  #q drag 1
  # Start drag holding middle mouse + control and shift
  #w drag 2 ctrl+shift


I used vim for a decade and now use PyCharm + VS Code. I don't know what you mean about not using the keyboard, as I still use the terminal for everything except editing code, and I don't use the mouse at all. There are keyboard shortcuts for everything in both of the IDEs I use.


> However, the older I get the less enthused I am about having to play around with config files...

I'm in my late forties and I don't play with config files a lot because... once X11 / Awesome WM is set up, it is set up, well, for years. Literally years. It does not move: not an iota.

At times I've had my "workstation" (just a big PC) with an uptime of 6 months. 6 months of uptime, for a desktop. That's how stable things can be. (I had my reasons for leaving my computer up at night; one of its cores was computing something. But that's not the point... The point is these Linux and BSD systems can be so stable that you can, if you want, and kernel security patches excepted, easily reach one year of uptime.)

My current desktop PC is six years old and I'll soon buy a new one and I'll reinstall everything from scratch: I've got notes and may need to "fight" new hardware and whatnots for a few hours (if I'm unlucky) but then hopefully I'll be good for another six years?

The thing is: if you don't like tailoring your system to the way you like it to work, then you're forced to use the way others thought it'd be best for you...

So, sure, it may be a bit more work than a Windows or OS X machine, but the stability and uptime is also on a whole another level.


Same for me.

I have spent way too much time trying different window managers: i3, i3-gaps, sway, bspwm, etc. Usually you also need to find a menu bar, customize it, deal with screen locking, multiple screens with different DPI, and so on.

I stopped trying to create my personalized environment. I just installed GNOME on Wayland (Arch) on my personal laptop with some extensions: dash-to-dock, unite. It is good enough for me, requires almost no maintenance, and has a macOS vibe. It has been quite stable since I made the switch (more than a year).

I still keep an i3 config that I use in a VM running on my work laptop (I prefer it over WSL2), because I wanted to keep a very lightweight WM environment. But I don't really use i3's tiling; I just launch tmux in a maximized terminal window. I do some light development in it with neovim, and ops work from it (CloudFormation, Terraform, etc.). I connect to it over SSH with VS Code.

If the CPU performance gap between Apple's CPUs and Intel/AMD ultrabook CPUs is not reduced, I think my next personal laptop will be an Apple one. I don't want to spend too much time on making the whole setup work.


Similar experience here. I was one of a minority of die-hard Linux users until I had a hardware issue and just said f*ck it, let me get a company-provided Mac. I had a super customized i3 setup, but now on a 2020 Mac I can get by just fine with the OS, and the hardware is a ton better. My previous laptop was a higher-end Dell, though not an XPS, and the keyboard on this is 100x better IMO and the trackpad is 1000x better. And macOS is a bit more annoying in places, but it-really-just-works and I don't think about it much.

I brew installed all the gnu core utilities so now I've got gcat and g-this and g-that. I use many workspaces and fullscreen apps usually with a terminal side-by-side and my productivity is better. I guess I discounted how much good hardware means to me.


i have the opposite use case - i would like a full-fledged desktop environment with all the bells and whistles i can get, but even more than that i want tiling. i settled for mate + i3, since gnome and kde both made it hard to replace just the window manager.


OpenBSD for that: when it works, it works OOTB.

Setup cwm, some XTerms, Otter Browser/Chromium, and done.

Or xfce4, paper-theme and paper-icon-theme if you are lazy.

Setup the theme, edit the panels, done.


> Setup cwm, some XTerms, Otter Browser/Chromium, and done.

Chrome patched three high-priority security vulnerabilities last week. And OpenBSD 6.8 hasn't rebuilt their package since October 1, unless I'm missing something: https://cloudflare.cdn.openbsd.org/pub/OpenBSD/6.8/packages/...


OpenBSD 6.8-stable packages are in a different directory, the ones you linked are -release packages which are unchanged since OpenBSD 6.8 was released.

https://cdn.openbsd.org/pub/OpenBSD/6.8/packages-stable/

The OpenBSD package tools will automatically prefer newer packages from this location.

That being said, this is a best effort, not all packages receive updates, security fixes for chromium cannot be backported to 6.8-stable due to significant changes between versions, and it would be a major burden for the maintainers to update to later versions without potentially also needing to update other ports' dependencies. ABI breakages cannot happen on -stable.

There are newer versions of chromium available for users who follow -current and are running 6.9-beta snapshots.

https://cdn.openbsd.org/pub/OpenBSD/snapshots/packages/


Yeah, well, that's kind of my point. Recommending that new users install stable OpenBSD on their work/home PC/laptop is irresponsible, especially if the lack of updates (presented as stability / ease of maintenance) is explicitly mentioned.


Who's recommending it? It's up to the user to decide whether to stick with -release/-stable, with the understanding that packages won't see significant updates or new features until they upgrade to the next release in 6 months. But they have the option of following -current and testing the same snapshots developers are running on their laptops, and they can even help contribute so that the next release has even more tested and up-to-date packages.


The OpenBSD documentation does not really make that balance clear to the new user though. And of course there is no mechanism for regular updates either.

> New users should be running either -stable or -release.

https://www.openbsd.org/faq/faq5.html

EDIT: Haven't used OpenBSD in a while, but unless I'm misreading https://www.openbsd.org/faq/faq10.html, syspatch & binary patches only apply to the -release branch, in which case you would need to either deal with obsolete packages or compile them yourself. On the other hand, if you were to track the -stable branch, you would get semi-regular binary packages (not everything, e.g. no chromium, but at least you get firefox), but in that case syspatch won't work and you'd need to recompile the kernel & userland.

Also, exactly which packages get updates is completely non-transparent to an end user following the official instructions.


> And of course there is no mechanism for regular updates either.

Not true. There is both syspatch(8) to apply binary updates and sysupgrade(8) to upgrade to the next release or snapshot. And there are regular packages available for -stable and -current.
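
Concretely, routine maintenance on -release/-stable looks roughly like this (both tools are in base):

    doas syspatch      # fetch and apply binary errata patches to the base system
    doas sysupgrade    # move to the next release (or -s for the latest snapshot)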

> New users should be running either -stable or -release. That being said, many people do run -current on production systems to help catch bugs and test new features.

That is the full quote from the page you linked. I won't reply to you further, as it's clear from other replies here that you have an agenda.


But neither syspatch nor sysupgrade applies to the -stable branch, meaning you'll be running -release, and if that's how you're keeping your desktop system updated, you're definitely using a vulnerable browser, as in this scenario neither firefox nor chromium will get updated until the next release.

The -current branch is very clearly not meant for new users; that's mentioned in the FAQs multiple times.


It doesn't matter whether you're running the 6.8 release, have applied the official syspatches, or compiled your own system from source (-stable or release + patches): the binary packages compiled from the -stable ports tree are supported on all of these. Out of the box, 6.8 will always prefer packages from the packages-stable directory. And you can always use sysupgrade(8) to upgrade to the next release or snapshot.

You do have some big misunderstandings of -stable/-release terminology and how they apply to the base system, especially since the introduction of binary syspatch(8). There is no longer any incentive to compile the -stable sources yourself, as the distinction between -stable and -release + errata patches has largely been lost. In the past -stable might contain changes not worthy of an erratum, for instance, but these days that would be exceedingly rare.

You're right, there are no -stable packages for chromium. Boo hoo. What you don't see are the lengths OpenBSD has gone to in order to protect users of these gigantic pieces of software, such as the tight integration of pledge(2) and unveil(2) by default: heavily restricting, or entirely removing, filesystem and network access for every unique process type, leaving only access to the ~/Downloads directory.


What you're describing sounds wonderful. That definitely wasn't the case a few years ago, and the documentation still does not reflect this conceptual merge between -release and -stable.

Also, as far as I can tell, the newest Firefox version available on OpenBSD -stable right now is 82.0.3, which was released in November.


Same; I used to spend a week+ tweaking things ever so slightly. Now I just install a base system and use KDE Plasma 5. Easy to theme, has an app launcher (which Windows also just got, weirdly enough with the same shortcut, Alt+Space).


I was a Linux and FreeBSD user for a decade in my younger days, until I switched to macOS. Now I am approaching my 40s and would never go on the customisation craze of my younger days, when I would spend time selecting wallpapers and themes. My wallpaper is grey; I barely see it, so why bother?

But I still customise – just different things. The Mac lets me customise my workflows with little effort. All the UI scripting, Keyboard Maestro, LaunchBar actions and hyperkey shortcuts help me a lot. Computers are good at tedious and boring tasks, but I am not. I may seldom get back the time spent building the workflows, but they help me stay sane with all the mouse-clicking, keep my RSI in check, and give me some satisfaction in knowing that my craft is not only good for reading sales items out of a database.


How often do you start from scratch when installing an OS? I've changed my laptop 4 times in 5 years for different reasons, and when I do, there are only a couple of things to consider:

If I'm upgrading the drive (for example when I made the jump from HDD to SSD, or from my 2.5-inch SSD to my current M.2 SSD) I need to clone the old drive onto the new storage; otherwise I only need to swap the storage device from my old laptop into my new one.
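
The clone itself can be as simple as a raw copy from a live USB, something like the sketch below (device names here are made up; always double-check with lsblk first):

    # WARNING: everything on the of= device is overwritten
    sudo dd if=/dev/nvme0n1 of=/dev/sda bs=4M status=progress conv=fsync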

With Linux it just works: I don't have to fiddle for my devices to be found, and everything is just where I left it. The biggest change was when I went from an Intel-based PC to an AMD one; I only had to switch the display drivers after the fact (I knew because X crashed, and I had to do this from a tty), but that's expected since the graphics cards are totally different. Btw, all it took was: sudo pacman -S xf86-video-amdgpu.

Having a rolling-release distro helps too, because you really don't have a reason to nuke your install and start from scratch. But even if I decided to do that for whatever reason, since most configuration is done via text files, I can easily save those in a repo, clone them onto my new install, and be done in a few minutes.
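
One common way to do that, as a sketch (the repo URL is a placeholder):

    # keep dotfiles in a bare repo and check them out straight into $HOME
    git clone --bare https://example.com/you/dotfiles.git ~/.dotfiles
    alias dot='git --git-dir=$HOME/.dotfiles --work-tree=$HOME'
    dot checkout
    dot config status.showUntrackedFiles no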

drwx------ 2 root root 16384 Dec 25 2016 /lost+found

^ That's when I last installed Linux. I've been using the same install for 5 years across 4 different devices; it's pretty cool.

I'll be honest though, I do still miss Photoshop and Illustrator. I run Illustrator CS6 in Wine, but it is missing a lot of features that have been added over the years. Krita is a decent replacement for Photoshop in the illustration space, which is what I used Photoshop for in the first place. But nobody is stopping Adobe from making a suite for Linux, I guess.


> I used to spend hours on getting things like window managers, X11, etc. set up just the way I wanted them to be. However, the older I get the less enthused I am about having to play around with config files to get basic features like suspend/resume to work on my daily work notebook.

The most predictable top comment on HN ever.


This is why I use GhostBSD instead of plain FreeBSD.

It’s like a Linux Mint experience but on FreeBSD.


Me too. But that never drove me away from Linux. I just started embracing two specific things:

* Sane defaults.

* Slow moving software that's focused on not changing things constantly (for better or worse).

I.e. Debian Stable :-)

I still don't understand how people who primarily code or wrangle data can possibly prefer Mac or Windows. I'm too old for stuff changing under my feet, but at least on sane Linux distros I have some power in my hands when this happens.


95% of my time re: configuration is dealing with getting suspend/resume to work and not corrupt the desktop environment. Why has this been so damn broken on open source OSes?

The one thing I could do to fully stabilize my env is to ditch the GNOME/XMonad hybrid and go full XMonad. That would probably solve all my config issues. I really wish XMonad with GNOME were a first-class supported setup though.


It's broken in open source because hardware is horrible, but OEMs must make at least a minimal effort to make their hardware work with Windows, lest it be returned to Walmart. Presumably infinite effort to figure out how Windows does it would yield software that works as well, but the real world of imperfect documentation and finite effort yields imperfect results.

There was a thread on this a while back

https://news.ycombinator.com/item?id=25385860

Short answer: buy hardware known not to suck at it.


Same experience from my GNU/Linux zealot days; eventually I settled back on Windows, macOS, and whatever Linux distributions do by default.


Same here. My path was pretty much Gentoo -> Arch -> macOS. There were some short-lived deviations, such as Ubuntu and SUSE.


I am solemnly transitioning from neovim to JetBrains for this reason.

Feels weird to be on a subscription model for my fundamental text editing needs.

Life on the teat.


I have used vim and later Emacs + evil for years. Recently, I subscribed to the whole JetBrains suite. I had a license for IntelliJ 7-8 years ago when I needed to write Java for work. But these days it's hard to beat CLion, PyCharm et al. It's just so much more productive, especially when you have to refactor code.

Magit is still the best git porcelain though ;).


I miss when vim was my full IDE. But on a large project, I could not keep it from thrashing the system while indexing.


I'd plug Doom Emacs, my friend: all the power of a full IDE with all the keybindings of Vim.


If you think Doom Emacs has all the power of a full IDE, you have no clue about IDEs.


I'm an Emacs fan. But when it comes to Java and big codebases (5+ developers), JetBrains or Eclipse are just the way to go. They provide code navigation and fast indexing (Emacs LSP is just so slow), a tightly integrated debugger, and tons of predefined stuff for opening common files; they simply have more intelligence about your code packed in. With Emacs it's all bare bones. So basically for me it's big project == big IDE, and everything else is Emacs (which is a sizeable share!). I'd also say that Emacs makes my life much more pleasant: the community, the license, the endless customization, the millions of packages. That's part of my life too, and the more "pro" IDEs just don't deliver on that side.


Instead of being rude and snarky, maybe you could highlight some IDE-only features that you find compelling?


Have you tried setting up LSC on either vim or emacs? It does everything an IDE does, aside from a debugger: compile-time errors are highlighted as soon as you're done typing, along with many other IDE-like features.


LSP provides the bare minimum semantic support (navigation, completion, inline error annotations and basic refactoring) to make Emacs or Vim worth considering for serious coding at all, and only because the inferior code understanding both have even with LSP is often more than compensated by other advantages: being able to work in a terminal, compelling plugins like magit for Emacs, or, at least in the case of Vim, general snappiness. I like and use all three; they all have their pros and cons. But saying that LSP (via LSC or one of the myriad other LSP-support plugins) does everything an IDE does is about as true as saying that JetBrains + the Vim plugin lets you do everything you can do in Vim.

BTW: emacs does in fact have a debugger: GUD (with a bit of tweaking, it's bearable, too).


LSC is just a language server plugin, right? That's not even close to covering what an IDE like IntelliJ does, e.g. static analysis, code coverage, profiling, etc.


Is this not easily automatable? My Mac(s) is/are provisioned using Strap, and this includes configuration of the terminal/Neovim/general settings. Surely this could all be done for a BSD setup?


>However, the older I get the less enthused I am about having to play around with config files

Ha! I am the opposite: I love to customize my system to perfection and then use it for years.

My newest project:

http://wotho.ethz.ch/tk4-/

Yes, I want my mainframe :)


> sysctl hw.acpi.lid_switch_state=S3

Hmm. This makes it seem like it just blindly sleeps on lid events. I'd prefer sleep-on-lid-close to wait a bit and then check whether any displays are connected and on; that works a lot better for docked setups. On Linux, some part of systemd handles this. Is there an alternative in FreeBSD?
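
I'm not aware of anything in the FreeBSD base system that does the display check for you, but devd(8) does hand you the lid events, so you can script it. A rough sketch, going off devd.conf(5) from memory (the handler script and its display check are hypothetical):

    # stop the kernel from sleeping on its own; handle the event in userland
    sysctl hw.acpi.lid_switch_state=NONE

    # /usr/local/etc/devd/lid.conf -- $notify is 0x00 on close, 0x01 on open
    notify 10 {
        match "system"    "ACPI";
        match "subsystem" "Lid";
        # the script would check for external displays, then acpiconf -s 3
        action "/usr/local/sbin/lid-close.sh $notify";
    };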


The problem with BSD on a laptop has always been wifi drivers. Is it anywhere near usable yet?


Yes.


How much more difficult is it to navigate a FreeBSD install vs. Linux?

I currently use xfce4 with Ubuntu but have been considering putting FreeBSD on an old laptop


I just spent this past weekend playing with FreeBSD. Pretty sure the answer comes down to whether your hardware is supported.

I gave up trying to install it on my XPS due to the lack of support for 802.11ac and issues with video. But it installed fine on my desktop (wired network).


Thanks for answering - an XPS is actually what I was considering installing it on.


Does it run NVidia's CUDA?


Does Steam work on FreeBSD?


Largely. Here's the web page tracking current status: https://github.com/shkhln/linuxulator-steam-utils/wiki/Compa...


Yes. Well enough to play Counter-Strike: Global Offensive.

Beware, as soon as I switched to FreeBSD on the desktop my trust factor tanked.

https://www.freshports.org/games/linux-steam-utils/
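
For anyone wanting to try it, the setup is roughly as follows (package and service names are taken from the port above and the handbook's Linux binary compatibility chapter, so treat this as a sketch):

    # enable the Linux binary compatibility layer
    sysrc linux_enable="YES"
    service linux start
    # install the helper from the port linked above
    pkg install linux-steam-utils

After that, launching Steam follows the project's README.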



Just FYI, that info is now 10 years old. I recently researched switching to FreeBSD and found that the NVidia drivers don't support Vulkan, which is a requirement for DXVK to work, not to mention a lot of Linux ports. Can't comment on the other manufacturers' drivers though.


Nexuiz, OpenArena, World of Padman.

I've got nothing against those games but they're ancient.

They'd run on a 15 year old ThinkPad with integrated graphics.

Steam Proton lets you run new games on Linux, like Final Fantasy XV, The Witcher 3, Death Stranding, Hitman 2, and Doom Eternal.

Actual blockbuster games from the last few years.


It's a 10-year-old article; of course the games are old. Things might be different now, or the same. But it still answers the question of whether Steam works on FreeBSD (yes), and that answer is still accurate (sometimes it's even better).


Considering the age of the article, and the fact that I'm unable to find any information on Vulkan support for any manufacturer's drivers, I'm going to go out on a limb and say that Steam (more specifically Proton) doesn't work as well as it does on Linux.


The documentation is certainly lacking, but sources such as https://github.com/FreeBSDDesktop/kms-drm/issues/130 suggest that Vulkan support is there, at least for Radeon.


Steam was released on Linux 2 years after that article.

Googling "freebsd steam" doesn't really indicate that it's easy to get it running.

The frame rates in those results were 150fps+, 10 years ago. You may as well have said that Tux Racer runs faster on FreeBSD.

I doubt people are running variable refresh rate on 360hz monitors on FreeBSD to fully enjoy the advantage of playing these ancient games.

What's the point in being disingenuous?


It can be made to run, but I can't really recommend it unless you enjoy tinkering/troubleshooting. It's still pretty far from being a "plug-and-play" experience.


And for some of us (not sure how many), there's Debian kFreeBSD.


I used to run it. Initially as a VM on Debian, later inside a jail on FreeBSD. It worked nicely.

The main criticism I have of it is that it is a solution in search of a problem. Why would I use it in preference to either vanilla FreeBSD or vanilla Debian? I eventually made the move and just went to vanilla FreeBSD. It avoids the potential for any subtle incompatibilities you might encounter between the FreeBSD kernel and a foreign userland that was never intended to be used with it.

Don't get me wrong, it's a great technical achievement. But I'm sceptical that it has major value.


AFAIK, that was deprecated around the same time as LSB (2015?), due to systemd and GNOME-isms such as GNOME's login manager, dbus, etc. invading every single package. I guess nowadays the Devuan developers maintain the Debian variant that comes closest to a starting point for a new Debian system running on a FreeBSD kernel.


I would buy a modern, fully supported FreeBSD laptop in a heartbeat, with an 11th-gen Intel CPU or a Ryzen 4800/5900HX...

However, it's a pain to find one that's fully compatible.



