BSD vs. Linux (2005) (over-yonder.net)
457 points by _xrjp on July 5, 2016 | 356 comments



I like FreeBSD, and I use it as a server OS and on a NAS box, but you need only look at https://wiki.freebsd.org/Graphics to understand that if the "Linux Desktop" is a joke compared to MacOS and Windows, then the "FreeBSD Desktop" is even more so.

From my experience of "old" computers, workstations, and then "PCs as Workstations", Windows won the desktop because the UNIX camp could never check their egos at the door and get their act together on a windowing system and graphics architecture. And while it was brilliant that you could do the "heavy lifting" on a server and do the display part on a less powerful machine, that architecture should have been taken out behind the barn and shot the moment 2D and 3D acceleration hit the framebuffer cards. Instead we have the crap that I have to put up with, which is a wonderfully compact and powerful system (a NUC5i7YRH) with a graphics architecture that is not supported at all on FreeBSD, has pretty egregious bugs on Linux, and runs beautifully on Windows 10.

For the conversation at hand though, the Linux graphics pipeline teams are much better at getting things working than the FreeBSD team, even though FreeBSD is much more "friendly" to non-open drivers.

I would love to have a nice, integrated, workstation-type experience with a UNIX operating system, but that music died when SGI and Sun threw in the towel and tried to retreat to the machine room.


> the "FreeBSD Desktop" is even more so.

The funny thing is that there's a reason for that: up until recently, the focus of FreeBSD has been on being a server OS. There's been a push lately to improve the graphics situation, but it's relatively recent, and is mainly focused on getting integrated graphics working on more modern hardware. There are also some efforts to expand the driver compatibility layer so that Linux drivers can be run as kernel modules with minimal change.

Moreover, FreeBSD types aren't quite so fixated on making a "FreeBSD desktop" as the Linux community is. Sure, there are desktop spins like PC-BSD, but the project as a whole has never really had desktops as a focus.

FreeBSD as a project is pragmatic, and tries to focus effort on where the greatest benefit lies.

Of course, there's the larger issue of Linux being insular, with a tendency to reinvent the wheel (epoll vs. kevent/kqueue being a nice example) rather than looking at what the state of the art is elsewhere and adopting that. You just have to look at containerisation: Solaris and FreeBSD have long had this in the form of Zones and Jails[1], but containerisation on Linux is largely an independent effort that didn't look to learn lessons from those efforts or adopt them.

[1] And I remember containerisation being mocked when hypervisors and VMs were all the rage. How things change...


IIRC FreeBSD was making some decent progress on the workstation experience before Apple released OSX. Once that happened, a lot of the people working on that switched over and the efforts kind of died down for a while.

I haven't used FreeBSD on a desktop for years, but I've been told it's getting a lot better recently.


All very true. Part of the reason why the effort flagged was that suddenly there were these high-quality machines with a BSD userland around. Un/fortunately (depending on your perspective), Apple haven't put a tonne of effort into keeping that side of things as up to date as they might, so improving the desktop experience on FreeBSD is becoming worthwhile again.

Up until 7.0 came out, the desktop experience was pretty decent, and things like audio and video (anecdote ahead) worked out of the box better than on Linux. After that, things started to slip in comparison.

The work put into PC-BSD has helped with the out-of-the-box experience, assuming you have the right hardware, but it's good to see more effort being put into improving the general driver situation. I certainly wouldn't be at all against ditching Ubuntu and switching back to FreeBSD as my desktop OS.


The containerisation approach on Linux isn't because of insularity or NIH - it's because the Linux kernel development process does not accept large, far-reaching "big bang" code drops. Instead, changes that have wide impact across the core kernel are expected to be made in a series of smaller incremental improvements, each of which is more limited in impact and more easily reviewed.


Why would borrowing the APIs enabling existing containerisation approaches have led to large, far-reaching big bang code drops?


Such as systemd? :-)


systemd isn't part of the kernel at all.


Yet.

(Does kdbus count?)


Given that they seem to have given up on kdbus, and are instead developing something called bus1 that they don't appear to be planning to submit for inclusion, I find myself thinking that the systemd devs are these days working on enveloping the kernel.

Meaning that you no longer use kernel syscalls, but instead program with systemd in mind and then systemd talks to the kernel as it sees fit.

Just observe their actions regarding session tracking, where they are basically ignoring all data from the kernel about sessions.


The base problem there is drivers. Specifically that we have something like 3 suppliers of desktop GPUs.

One refuses outright to supply information that allows a proper open driver to be developed.

Another has only started providing this information in recent years, and there are still delays between the latest hardware hitting the market and the proper information reaching the relevant hands.

The third only recently started producing hardware, and said hardware is only bundled with other products from the same supplier. Never mind that it generally performs worse than the other two offerings.

There used to be another, one that was supposedly well supported by open drivers. But that one long since left the consumer market, focusing instead on multi-display business setups etc.

So effectively you are complaining about how a group of people, forced to reverse engineer and often working in their spare time, can't match the market gorilla that gets direct support from third-party suppliers.


I had hoped that the FreeBSD folks would put together a locally rendered (composited?) architecture that had a test suite and clear cut API such that vendors could create their own proprietary driver and load it there.

I get that it is impossible for the free software and hardware community to design a GPU. I also get that people who do design them have no interest at all in leaving any money on the table by sharing so much of the design that their "secret sauce" gets stolen (or they get hit with patent abuses).

Because it is genetically impossible for the Linux community to tolerate, much less endorse or encourage any sort of proprietary advantage, I do not expect that Linux will ever be a functional desktop environment of the level and quality of a Mac or Windows environment. FreeBSD could be that environment but they have to design for it. So far it hasn't been a priority for them.


I personally would accept a huge hit in performance for a fully libre system (software and hardware) and would be willing to pay more than the average desktop/laptop price for it. I know I'm not alone but I wonder if the market is big enough for a company to start producing hardware products on that core principle.


Dunno. But I could have sworn I had read about a libre GPU design out there.


Perhaps that is the core complaint. But at the end of the day, support for those cards is what we need for our everyday computing. I don't think anyone is trying to undermine the work that the Linux/BSD communities are putting forth in an attempt to supply end users with software support for uncooperative hardware vendors. But again, they don't have control over that situation. So, in the meantime, it shouldn't come as a surprise that people are looking elsewhere to satisfy their computing requirements.


The thing is that when one aims the complaint in the wrong place, effort goes to waste fixing the wrong problem (or a non-existent problem).


Why not try Apple? Mac OS X is an awesome UNIX that makes for great desktop use. Of course there are all the hipsters and idiots that use it, but put that aside and give it a solid try.


If Apple were to release MacOS for non-Apple hardware I would upgrade to that in a heartbeat. If the Ubuntu 'shell' for the Windows kernel works out I might switch to that as well.


Why not just do a hackintosh?


I tried it 6 years ago. Every OS update used to break the machine. Even the USB ports had issues.


I don't know. That kind of architecture can still be useful, and I personally use it when running some graphical apps inside of a Docker container.


I was a long-time FreeBSD user. Started using it in college and continued for a long time. I started using Linux because I had bought myself a new laptop and BSD didn't recognize the wifi card. I continued using FreeBSD at home for a few more years on my webserver before ultimately moving to dreamhost (I just didn't have the time to keep maintaining my own server).

I like using Linux, but I still miss the predictability of a BSD system - you know where things are, and where they are supposed to be. When I first started using Linux, I was absolutely flummoxed by the lack of distinction between the base system and add-on utilities.

Linux definitely feels more "organic" and "grown" whereas FreeBSD seems like it was architected and planned out. Not that this is a bad thing for Linux. My FreeBSD heritage still shines through when I use Linux; anything I install from source sits in /usr/local :).


> anything I install from source sits in /usr/local

To be fair, this is the norm on Linux too. I have never used BSD as a desktop operating system, but everything I've installed from source also sits in /usr/local. It's the default install directory for most Linux build scripts and I feel dirty if I add anything directly to /usr that the package manager isn't aware of.


Right; the distinction isn't so much "installed from source" as "installed without the package manager's knowledge." If you `apt-get source` a Debian package, apply a few patches, and `debuild` it, the resulting package should install to the /usr prefix like the rest—because it's being tracked like the rest.
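A rough sketch of that workflow, in case it's unfamiliar (nginx is just an illustrative package name):

    apt-get source nginx           # fetch and unpack the packaging source
    cd nginx-*/
    # apply your patches, optionally bump the version with dch
    debuild -us -uc                # build an unsigned .deb
    sudo dpkg -i ../nginx*.deb     # installs under /usr, tracked by dpkg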


> Linux definitely feels more "organic" and "grown" whereas FreeBSD seems like it was architected and planned out.

This is 'The Cathedral and the Bazaar'. I've often seen the phrase used to contrast Microsoft/Apple with 'open source in general', but it's this right here: a fully integrated and designed system contrasted with an organically created system. Which approach is better is up for discussion.


> you know where things are, and where they are supposed to be

Does this not depend on the distro you're using?


In Linux, it does.


It's more than what you see in Linux. For example, in Linux you have system packages, packages from the distro repo, and third-party repos, all storing configuration and startup scripts in /etc.

In FreeBSD, anything you find in /etc is part of the system; applications that you install are fully contained in /usr/local, including etc and rc.d (the equivalent of init.d). /usr/local is also completely empty when you first install the system.
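Using nginx as an illustrative package, the split looks roughly like this:

    /etc/rc.conf                  # base system configuration
    /etc/rc.d/sshd                # rc script for a base-system daemon
    /usr/local/etc/nginx/         # configuration for the installed package
    /usr/local/etc/rc.d/nginx     # its rc script
    /usr/local/sbin/nginx         # its binary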


> It's more than what you see in Linux. For example in Linux you have system packages, packages that are in distro repo,

System packages are packages in the distro repo, so those are the same thing.

> and 3rd party repo all sorting configuration and startup scripts in /etc.

Stuff you compile/install from random sources on the internet will place its stuff wherever the upstream decided to place it. The same will happen if you compile that software on FreeBSD. If you're thinking of ports, those are patched to install to the usual places, exactly the same as the packages in the linux distribution repositories.

FreeBSD's base system is very neat and well-organised, and they have an impressive ports tree with ports that integrate fairly well for the most part. But that doesn't extend to third-party software, and it seems silly to include that in a comparison on the linux side.


I know Linux (Debian) quite well and would like an excuse to try learning FreeBSD, but I just can't find any serious use-cases where FreeBSD would be of any advantage.

I run a small site on a VPS, so:

1. I don't have GBs of free memory for ZFS

2. I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.

Except for an Application Firewall for nginx, what does ports have over deb packages? All in all, how many MB of free RAM or HD space will I gain by compiling everything myself? It takes time, and it means pushing off security updates because I don't have the time to sit and watch the compilation and hope nothing breaks (which _did_ happen once).

3. License - I don't care. GPL is free enough for me.

4. Base vs. Ports - Why should I care? Debian (testing!) is stable enough for me. Except for dist-upgrades, I never ran into issues, and then it may be faster to nuke the server from orbit. Now, had BSD "appified" the /usr/local directory (rather than keeping the nginx binary in /usr/local/bin and the config in /usr/local/etc, it would keep everything related to nginx in /usr/local/nginx), it would have been interesting, but now?

If anything, I like how Debian maintains both base and ports, so I can get almost any software I need from apt-get, and don't have to worry about conflicts.

5. systemd? The reason Debian went with systemd was (IIRC) because they didn't have the manpower to undo all of RedHat's work in forcing all applications to integrate into systemd (such as GNOME). I don't know how FreeBSD is doing in that regard.

I don't mind learning new systems. (see my username :) ). I actually understand what nixos or coreos, for example, bring to the table. But FreeBSD?


One big thing for me is that I trust the FreeBSD devs to not completely reinvent things unless they're sure it's an improvement.

I got sick of the churn in Linux, having to relearn things all the time and then find out when I'm done that it's not any better, just different.

Things I learned how to do in FreeBSD a decade ago still work. For example, to get a NIC configured I can either put one or two lines of text in rc.conf (which I have pretty much memorized at this point), or stop and learn what RedHat thinks is the coolest way to do network config this year.
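For anyone curious, those one or two lines in /etc/rc.conf look something like this (em0 being whatever your NIC happens to be called):

    ifconfig_em0="DHCP"
    # or, for a static address:
    ifconfig_em0="inet 192.168.1.10 netmask 255.255.255.0"
    defaultrouter="192.168.1.1"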


Watching iptables get replaced was a really depressing example of that, after the developers swore that "this time we got it right" when replacing ipchains...


In fairness IPTables has been around and considered stable for, what, more than a decade and a half now?

I'm sure there are great many things that I consider "right" now that will be wrong in 15 years time either due to changes in the environment in which they work or me otherwise learning new information or developing new techniques over time.


I haven't had reason to use nftables yet (beyond the default config in a distro), but if it's in any way closer to PF, then it's vastly superior to iptables in my eyes. I spent a while configuring OpenBSD firewall/VPN gateway boxes about a decade ago, and the all around superiority of PF was astounding.


Thanks - clearly I wasn't paying attention, as I had not noticed nftables. Another exciting task to look forward to, then.


... and ipchains (in 2.2) replaced ipfwadm.


I'm inclined to agree that there probably aren't any great reasons why you would benefit from FreeBSD rather than Debian, but just to correct one misconception:

> I don't have GBs of free memory for ZFS

In the early days of ZFS it required many GB of memory -- it was developed for Solaris, and designed for servers with lots of memory. But it has improved dramatically since it first came to FreeBSD, and people run it with far less memory these days.


How much RAM is required these days?


There are definitely people running ZFS on systems with less than 1GB of RAM. I'm not sure if anyone is running ZFS on a system with less than 512 MB of RAM.

(FWIW, at one point the amount of address space was far more important than the amount of RAM -- amd64 systems with 1 GB of RAM would run better than i386 systems with 2 GB. I'm not sure if this is still the case.)


Back in the days of OpenSolaris, I was absolutely running it on 512MB, including a Gnome 2 desktop. I won't claim it won any records for speed, but it absolutely worked, and the data survived a rather nasty intermittent disk controller failure.

No first-hand experience with ZFS on BSD, but I believe in the early days of the ZFS port there were issues where if the system came under sudden memory pressure, ZFS might not hand its RAM back to the kernel fast enough, leading to (I guess) a panic. So this is a fit-and-finish issue with a specific port, not an inherent ZFS issue, but it seems to have fed into the whole notion of "ZFS needs bucketloads of RAM".


A big user of RAM in ZFS is dedup and compression - which can both be disabled.


Compression in ZFS doesn't use RAM; dedup tables and the ARC and L2ARC mapping tables are all that matter with regard to memory usage in ZFS. The L2ARC is where a lot of newbies end up shooting themselves in the foot: they toss in a mirrored pair of 512GB SSDs for their L2ARC on a system with 32GB of RAM and wonder why performance got worse or it crashed, or they enable dedup on 24TB of data with 64GB of RAM and wonder why it crashed and their zpool is beyond recovery.

If you are using ZFS without an L2ARC or dedup enabled, even a measly 1GB of RAM is "sufficient"; you just won't get the best performance, since it's going to have to constantly fetch from disk depending on your active data set (which is literally no different a problem than OS-level file caching in any other operating system).
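A minimal sketch of the knobs being discussed ("tank" is just a placeholder pool name):

    zfs set compression=lz4 tank            # cheap on CPU, negligible RAM cost
    zfs set dedup=off tank                  # dedup tables are the real RAM hog
    sysctl kstat.zfs.misc.arcstats.size     # current ARC size on FreeBSD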


Ah crap. The advice I got was just about the opposite - I wanted an L2ARC device in my system for performance.

Atom C2758, 32GB ECC RAM, 4x4TB in RAIDZ2 (losing that much storage was painful, and the $1200AUD upgrade path to 4x8TB even more so) with a 128GB SSD as L2ARC.

I found out some time later that I probably wanted an SLOG device instead but I'm really too afraid to touch my config. FWIW it's pretty stable on FreeBSD 10.3 with around 5GB RAM in use.


Your L2ARC shouldn't be bigger than about 5x your system memory. ZFS has to keep mapping tables in memory to determine where data on the L2ARC is stored, and the bigger it gets, the more memory you take away from your in-memory ARC, which can lead to worse performance than before, since more data has to come from your much higher-latency, lower-bandwidth L2ARC or straight from spinning disks.

RAIDZ is a performance killer right off the bat; I would switch to using mirrored vdevs instead if throughput is an issue for you. Parity calculation kills write speeds, especially on slower CPUs like a C2758, and RAIDZ doesn't give you any extra throughput on reads since there's only one usable copy of a stripe to read from. I have 2x4TB disks in my pool as a mirrored vdev (with an additional 2x3TB coming after I get my 128GB flash drive for XenServer to free up my second 3TB disk again) and I get reads over 500MB/sec using 10Gig-E to my XenServer host, and equally fast writes. Disks are fairly cheap, and RAIDZ is not a good solution if you need performance; if you are using four disks in RAIDZ2 you would lose the same amount of storage using two mirrored vdevs anyway (and avoid the increased risk of a rebuild failing due to multiple disk failures, which is increasingly common on higher-capacity drives).
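For reference, a striped-mirror pool of four disks looks something like this (device names purely illustrative):

    # two mirrored vdevs striped together; same usable space as 4-disk RAIDZ2,
    # but better IOPS and simpler resilvering
    zpool create tank mirror da0 da1 mirror da2 da3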

If writes are a bigger problem for you, then yes, a SLOG device will help - but any random SSD is not going to do. If you are using a SLOG, ZFS expects it will not corrupt in-flight data following a power loss; even a "high-end" consumer SSD like a Samsung 850 Pro or a Crucial MX200 will lose data in the event of a power failure, so you need an SSD with proper power-failure protection, like the Intel DC series. You also don't want a SLOG that isn't mirrored; if your SLOG is corrupted, you just lost your entire pool. Also, large writes (>64KB) skip the SLOG entirely, so if you are bandwidth bound (either to the network or to disk) it is not going to help; if you are IOPS bound, it can help tremendously.
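Adding the devices mentioned above is a one-liner each (again, device names illustrative):

    zpool add tank log mirror nvd0 nvd1    # mirrored SLOG on power-loss-protected SSDs
    zpool add tank cache nvd2              # L2ARC; it's only a cache, so no mirror needed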

Also, you mention 4x4TB and 4x8TB, so I assume your issue is the number of available disk bays in your system? I personally ignore high-capacity drives as they are far too expensive, and my HP ML10 that I run FreeNAS off only has 1 internal drive bay (with a $50 add-on I can buy to install an extra 3). Instead of dealing with internal drives as well as limited capacity, I bought a cheap DAS array (Lenovo SA120, here's a picture of mine and my cheap TP-Link layer 3 switch http://i.imgur.com/eEvtP6Z.jpg) that cost me $200USD and connected it with an equally cheap LSI 9200-8e SAS HBA ($40). I now have 12 hot-swap drive bays and can replace my FreeNAS box without worrying about how many bays it has; if I need more, I just buy an additional enclosure and daisy-chain it off the first. This isn't a dirt cheap solution by any means (my homelab gear is easily worth over $1500 at this point, including my Lenovo TD340 acting as my XenServer host), but it will save you money in the long run by allowing you to buy many cheaper drives rather than fewer, more expensive (but higher-capacity) ones.


And at this point I give up and go back to an off the shelf solution.

I do have mirrored vdevs but now you've made me doubt what configuration I put them in.

This stuff is so advanced it's not funny.


Also, where should I read up about DAS arrays? I didn't even know that was an option when I started building.

Edit: thanks for your help here - you don't have an email address listed in your profile but I'd like to discuss this further if you have the time.


DAS arrays are extremely easy to understand; they're just "expanders". SAS has special support built in for this: in servers you typically have the front drive bays connected to a single SAS port on the board instead of individual cables running to each drive, and a DAS is the same thing, just as an external enclosure with separate power. You connect it to a basic SAS HBA with an external SAS port (SFF-8088), like the LSI 9200-8e I mentioned, and it just shows up as a bunch of drives to your operating system. There are some special control services for the enclosure, depending on the model, through a protocol called SCSI Enclosure Services (SES); my Lenovo SA120 uses this to manage things like fan speeds (very important, because every time it loses power it resets to HIGH, which is VERY LOUD - I have a boot script on my FreeNAS box to set this back down to LOW, which is much more manageable).

This seems like a lot, but you can ignore most of the technical details I just posted. Buy a SAS HBA with external ports, buy a SAS DAS array, plug it in and you see a bunch of drives - no fuss. Since SAS controllers can also support SATA drives you can save yourself $10-20 a drive and buy normal SATA disks, or you can get some added reliability (multi-pathing and error handling) and buy near-line SAS drives for a small premium (I don't bother personally, but I only have one controller installed in my SA120 so I have no second path for data to travel in the event of a failure anyway).

Feel free to hit me up anytime, I'm /u/snuxoll on reddit (pretty active on /r/homelab) and you can email me at stefan [a] nuxoll.me.


Doesn't look like I can use an expander with my chassis. http://www.supermicro.com/products/motherboard/atom/x10/a1sr.... Don't know why I was so set on this in the end. Had to import it, and then it wasn't really supported by FreeNAS.


You've got a PCIe 8X slot on there, plenty for a SAS HBA card if it isn't taken already. That's all you need to drive the DAS.


> if your SLOG is corrupted you just lost your entire pool

As far as I'm aware, SLOG loss does not jeopardise the pool, only the transactions that are in the SLOG. The pool may violate synchronous write guarantees in that supposedly committed writes that were in the SLOG effectively get rolled back (or rather, never get applied to the underlying pool), but that's about it.


Are you quite sure that is the best solution long term? All of those extra drives are using additional power.


Compression does not use a significant amount of memory - it's the deduplication which uses a lot of memory. In fact compression can make it run faster as there is less I/O data.


1. zfs runs well on a lot of systems. You might have to change a setting or two but it's not as you say.

2. pkg install and the related pkg utilities have existed for a while.

5. systemd isn't and shouldn't be a requirement for applications going forward. Any application that requires it is limiting its portability for unknown reasons.

The joy of trying a new system is the little things you learn that you weren't even aware of before.


Regarding your point 5: Application developers embrace systemd (and thereby reject portability to some extent) because it makes it easier for them to not do infrastructure work. A description of that phenomenon in KDE can be read here: http://blog.davidedmundson.co.uk/blog/systemd-and-plasma


So it's a feature. I cut myself from shitty software by using FreeBSD.


> systemd isn't and shouldn't be a requirement for applications going forward. Any application that requires it is limiting its portability for unknown reasons.

How should one do it instead? Treat each alternative system specifically? Or is there a way to cover them all at once, including unknown and future ones?


How about this: don't write applications that depend on the init system.

Seriously, who ever thought such a dependency was a good idea? Who even thought it would be a good idea to make an init system that was possible for an application to be dependent on? This is exceedingly poor engineering, and I'm dumbfounded at its acceptance and spread.


Hint: Programming is not engineering.


I've heard this before, but I've seen pretty convincing arguments both pro and contra. I'd actually love to discuss this (not argue -- I honestly don't have a strong opinion on the topic at this time). If you're willing, my e-mail address is on my profile :)


> pkg install and the related pkg utilities have existed for awhile.

So have apt and apt-get. So that takes away one of the advantages of FreeBSD

> systemd isn't and shouldn't be a requirement for applications going forward. Any application that requires it is limiting its portability for unknown reasons.

True, tell RedHat that :(


> So have apt and apt-get. So that takes away one of the advantages of FreeBSD

Not completely. In Debian and others it's mostly a choice between a stable base and stale software or a moving base and up to date software (though backports improve things quite a bit for Debian Stable).

Since in FreeBSD packages are separated from the base system, it's possible to run a stable base system, while using quarterly updates or rolling release (latest) of packages.
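Switching from the default quarterly branch to latest is just a repo override, something like:

    # /usr/local/etc/pkg/repos/FreeBSD.conf
    FreeBSD: {
      url: "pkg+http://pkg.FreeBSD.org/${ABI}/latest"
    }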

(Note: I didn't have many problems running Debian Unstable, but it may depend on your requirements.)


Of course, there's nothing stopping people from using things like pkgsrc or Nix on Linux.


What would you consider the advantage of FreeBSD ports/packages? Why would that be taken away by pkg?

pkg provides a way to manage (binary) packages. There are other tools (like poudriere, last time I dabbled in the FreeBSD world) to manage/create binary packages.

You could run FreeBSD on your tiny VPS, but build packages with all the custom ports options you could want in a jail on your local machine.
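A rough sketch of that setup with poudriere (jail name and release version are illustrative):

    poudriere jail -c -j builder -v 10.3-RELEASE   # build jail matching the target system
    poudriere ports -c                             # fetch a ports tree
    echo www/nginx > pkglist
    poudriere options -f pkglist                   # choose custom build options
    poudriere bulk -j builder -f pkglist           # build binary packages to serve to the VPS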


The advantage of ports is that they are compiled for your machine. The minute you get precompiled packages, you might as well be on Debian


Well, the nice thing is that you can mix and match. Don't care about the configuration of a package? Use pkg. Want to use a set of custom compilation options? Use ports with 'make config'.
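In practice the mix looks something like this (nginx just as an example):

    pkg install nginx                  # binary package with default options
    cd /usr/ports/www/nginx
    make config                        # pick the options you care about
    make install clean                 # build that one port from source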


Linux is fine for ordinary computing and many specialized tasks but some areas are better served by BSDs.

If you require the integrity guarantees offered by ZFS and still want the efficient resource allocation of containers you will not get around FreeBSD or a Solaris based distribution.

If you need a system that is secure by default you will not get around OpenBSD.

If you need the portability of Linux coupled with a license that lets you freely redistribute in order to sell an appliance you will be hard pressed to get around NetBSD.

By the way, ZFS will run fine on small VPS since the RAM cache can be turned off. You still get instant snapshots and rollbacks, efficient replication and integrity guarantees.
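A minimal sketch of what that looks like ("tank/www" is a placeholder dataset):

    zfs set primarycache=metadata tank/www     # keep only metadata in the RAM cache
    zfs snapshot tank/www@pre-upgrade          # instant snapshot
    zfs rollback tank/www@pre-upgrade          # instant rollback
    zfs send tank/www@pre-upgrade | ssh backup zfs recv backup/www   # efficient replication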


NetBSD runs on a subset of the hardware that Linux runs on, other than supporting a very small number of systems built in the 90s that have never been supported by Linux. NetBSD's portability was a strong point in the past, but the massive industry acceptance of Linux means that there are giant numbers of shipped systems that came pre-installed with Linux and which have no support in the NetBSD kernel. There's a bunch of great reasons to choose NetBSD, but comparing its hardware support to Linux isn't one of them.


> If you need the portability of Linux coupled with a license that lets you freely redistribute in order to sell an appliance you will be hard pressed to get around NetBSD.

It's perfectly legal to sell an appliance with Linux, people do it all the time.


You can't "freely redistribute" it though. You have to comply with the viral, decidedly non-freeing provisions.


The only freedom you give up by using the GPL is the freedom to screw your users over by denying them access to the source code. All that is demanded of you is to let others enjoy the freedom you enjoy. That, surely, isn't too much to ask.


Meh, hardware vendors routinely ignore the GPL, and for the money it takes to sue them in court you could in most cases reverse engineer their drivers and write them from scratch.


Yeah, NVIDIA is an "example" of that.


While I can appreciate that point of view (i.e. as a vendor), the definition does depend on whose POV you are looking at it from. From an end-user (individual or business) POV, the GPL is absolutely 'freely redistributable'; what you're talking about is your inability to limit that ability. That's the GPL doing its job.


> That's the GPL doing its job.

It also makes it really hard to earn a living with actual software (not services) that depends on GPL.


Yes, you're absolutely correct. The GPL, and resulting GPL'd software, does not exist for the benefit of commercial software vendors. While some business models are viable under it, that's a side-effect rather than an objective of it.


i.e. it's condemned to be a hobby for almost everyone involved, besides some key players who can live off of donations or those who sell services based on mostly other people's software.


Selling services 'based on mostly other people's software' is not some crime or bad thing. Servicing cars or houses made by other people is an honorable profession. So are software services. If the thing needs servicing, then I salute those who stand ready to do the dirty work.


And I didn't say it is. Only that it's usually the only way of building a business around GPL software. This is fine for serving some people, but it has a huge blind spot for those who mainly want to write software and try to live off of it. IMO this is the main reason why desktop Linux never took off - you just don't have enough userland software that's ready for production use in a professional environment - see desktop publishing, audio, video, etc. This sort of software is not going to be written as a hobby; it needs full-time professional developers and investors who can see an ROI starting at around $1M.


And to many it's fine if it is. The GPL wasn't created to benefit the software industry, it was a reaction to it. That's also part of the reason that some commercial interests (most famously Apple) have been moving in the direction of MIT/BSD-style licenses. It's OK to align your efforts with licenses that match your requirements. The GPL is most aligned with the interests of users, not commercial developers. MIT/BSD is the reverse.


All of the software I have written for money is free software. In some fields it is hard to not do that: Wordpress plugins, for example, are GPL licensed by default. Can you elaborate what problems you have run into?


How would you solve the following use-case: You want users to be able to try your software for non commercial use, but you want them to pay for commercial. I don't see how GPL licenses can do that for you. In fact, I don't know any common OSS license that has non-commercial limitations with the option of purchasing a commercial license.


You can always dual-license the software. The GPL version of your software can always be a basic version, customers could pay for features they want to have.


Yes, but we're at the start again: only when you don't depend on other GPL software at compile time (and even runtime bindings can be tricky AFAIK). So it only works if all people do is write a bunch of silos and don't work as a community at all.

So, looking at Linux system software as an example, it only works as long as you stay far away from the kernel.


If you depend on software, the authors of the dependencies have done a part of your job for you. GPL basically says the authors only agree to help you on the condition that you also help others. You can always choose not to be a part of such a community, then you will not get help. What is the problem?


For me the problem is that GPL mixes up the requirement of "Open Source" (which I'm fine with) with a requirement that anything depending on it has to be GPL as well (which I have a problem with). Basically, if GPL would allow someone to sell licenses to his software as long as he keeps publishing the sources, it would go a long way to enable a userland application market on top of GNU based kernels.

Gnu.org could even demand a percentage of revenues going towards the GNU foundation. IMO that would make Linux even free-er since it would democratise the way kernel development is funded (i.e. redistribute some of the influence away from the big software corporations who currently chip in). Right now kernel development works fine because you have the right gatekeepers at the top (especially LT), but I wonder what happens when he retires - will some committee take over?


You're right, you can't "sell" GPL software. What you mean by "sell" is "grant a license for usage" and that's exactly what the GPL or a similar FOSS license does. So the two things conflict. Really you're describing a situation that the GPL is not designed for.

Your desires are more inline with Microsoft 'shared source' type of situations.

The GPL and the FSF's world-view isn't particularly aligned with what you want (I believe). Either use the GPL and live with the constraints, or use something else. Other options that come up in a GPL-but-commercial context are:

a. Service or Support. You've said you want to be paid for creating the software and not for providing services, so this one is out. Note that SaaS is now the easiest way to achieve what you want.

b. Dual licensing. You said "you can't depend on GPL software then". Well yeah, because GPL is about a commons of equals 'sharing' and you want to charge for sharing - so you don't get to use other people's stuff for free. Seems fair to me.

But, your actual concern isn't really valid - there are lots of things you can write without depending on GPL software. This is probably more realistic than you think.

c. Open core. Someone described this to you earlier, I think.

d. Validation and/or IPR. Probably outside the boundaries of most individual developers. But think of the way that Java is licensed on the basis of it being a 'valid' implementation and then IPR and trademarks.


Creative Commons works that way. So it's possible.


People keep saying CC, but that license has never been intended for software. Did it ever stand up to a trial in court for this use case? See also [1]. Why isn't there something like CC NC geared for software?

[1] https://wiki.creativecommons.org/index.php/Frequently_Asked_...

> We recommend against using Creative Commons licenses for software. Instead, we strongly encourage you to use one of the very good software licenses which are already available.


Is it important whether a court has tested the license? A software license is like a will, in that it describes the author's intention in a legally binding document. An infringer would need to prove in court that their activity does not conflict with the requirements of the license, and the words "non-commercial" would be quite hard to sneak around. The main reason that the FSF and similar actively advise against such a license is primarily that "non-commercial" is ambiguous and there is a clear risk for the recipients/re-distributors if the original author starts to act abusively. A common example I have heard is public schools vs. private schools, where the latter is a commercial activity and the former is commonly viewed as not. How that would translate to a student using it in a student project would be up in the air.

If I were you, I would instead copy the license of the Unreal Engine. That is, you don't look at the commercial status but rather at profits from products that include your work. If someone is earning profits, regardless of the nature of the organization doing so, then a cut is required to be given to you. It's simple, it fixes the school problem above, and there is an "industry" example to point to if it ever became a court case. The big drawback is that it's not an open-source-compatible license.


Buy commercial licenses from authors of GPL code, so they will earn something too.


I can see how that works for code that has zero compile-time dependencies on other copyleft software, i.e. a very limited use case. Or am I getting something wrong?


It is astonishing how many people don't know that they could do it.


You can redistribute it as much as you want, providing you permit recipients to do the same thing. Entire industries have found this to be completely reasonable.


> I don't have GBs of free memory for ZFS

Only ZFS dedup uses gobs of RAM, and use of that is discouraged anyway.

> I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.

Last I checked, Debian now supports ZFS in the form of a source DKMS package. No need to maintain your own setup.

If you go with Ubuntu instead, which provides binary ZFS packages, you can also use it as a rootfs to get full advantage of its capabilities[1].
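Roughly, if memory serves (packages may live in backports depending on your release):

    # Debian: module built locally via DKMS
    apt-get install zfs-dkms zfsutils-linux
    # Ubuntu 16.04: prebuilt kernel module, just install the tools
    apt-get install zfsutils-linux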

That said, I've decided to just stick with btrfs, because it's in the mainline Linux kernel, and works well enough for my needs with absolutely zero hassle.

FreeBSD is nice though. No harm in trying it out to widen your horizon :)

[1] https://github.com/zfsonlinux/zfs/wiki/Ubuntu-16.04-Root-on-...


> I just can't find any serious use-cases where FreeBSD would be of any advantage.

Ditto. I definitely like the idea of FreeBSD (and, even more, OpenBSD), but neither really seems worth the pain. The fact that I can't just rely on third-party software working means that I can't use either for my personal desktop (I like to play proprietary games) or even for servers (without extensive testing ahead of time, anyway).

As for the license, I prefer the GPL over the BSDL. I don't see why third parties should be permitted to forbid their users from enjoying the same rights they themselves received. Nuts to that.

And, honestly, as much as BSDs are a nicely-integrated software suite, they are an old nicely-integrated software suite. As much as the GNU userland is … weird & inconsistent, it's also really powerful. I like having an ls which has added options since 1986.

I'd love to love the BSDs, but … I don't.


ZFS only needs GBs if you need deduplication. 99% don't; almost everyone is better off with compression. 1GB is the minimum for ZFS, though it can run in a little less.


Too much. I've got a 700-800 MB system.

It's unfortunate, as I'd love to have a versioned FS


HAMMER1 on DragonflyBSD runs well with little memory, including running dedup. It also has the binary pkg system just like FreeBSD. Also, the DRM drivers are up to date and support current Intel GPUs.


> It also has the binary pkg system just like FreeBSD.

Not just like, but the very same one. DPorts is FreeBSD ports with a few patches, and pkg in DragonflyBSD is pkg from FreeBSD.

> Also, the DRM drivers are up to date and support current Intel GPUs.

I believe FreeBSD is integrating that work itself. Intel GPU support on Dragonfly is pretty great.


Huh. I had not really looked closely at DragonFly in a while. I had briefly looked at it when setting up my new server, but the fact that HAMMER does not deal with redundancy itself made me stick with FreeBSD/ZFS.

On a laptop, however (with an Intel GPU) that is not an issue, as there is only one disk, anyway. I might have to take a look over the summer.


Redundancy as advanced as ZFS can do would be sweet, but if you're ok with something simple, like a two disk setup, you can put a PFS on each and stream one disk to the other.


Well, if you're willing to tune settings there's an example on the FreeBSD wiki of someone with a laptop "running nicely" with 768M and their settings.

https://wiki.freebsd.org/ZFSTuningGuide

I've no idea what compromises that'll bring you, or how it would react to you pointing it at a multi TB space.
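The tunables in that guide go in /boot/loader.conf; a sketch (values purely illustrative, not recommendations):

    vfs.zfs.arc_max="256M"           # cap the ARC so it can't eat all your RAM
    vfs.zfs.prefetch_disable="1"     # prefetch tends to hurt on small-memory boxes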


VMS could run a versioned FS in <128MB of RAM, so there's really nothing conceptually in the way. I bet with dedup off and some tweaking you can get ZFS to run in very little memory.


That's not really an apples to apples comparison. VMS versioning is a completely different thing than ZFS snapshots and copy on write.


> It's unfortunate, as I'd love to have a versioned FS

btrfs at your service then. Comes with the mainline Linux-kernel. Give it a spin!


[flagged]


$5 _a month_ on a VPS. If it's a project that doesn't make money...


> 1. I don't have GBs of free memory for ZFS

Others have already addressed the memory hog myth, so I'll just say that you aren't required to run ZFS - UFS still works just fine. I just sshed into my freebsd AWS instance (UFS) to see 116d uptime with 130MB ram in use and the remaining 110MB being free. I've made no effort to optimize resource usage on the instance, but if I wanted to I could pretty easily cut that by a third without a noticeable impact on performance.

> 2. I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.

You might be surprised to learn how many build options there are for the software you use daily. I jumped in with both feet and setup a jailed build server after wanting some uncommon build options for vim, ncurses and sqlite3.

> 5. systemd? ... didn't have the manpower ... I don't know how FreeBSD is doing in that regard.

It has made no attempt, which I'm pretty happy about. The only annoying thing from linux that keeps popping up on my freebsd desktop is dbus.


Which FreeBSD desktop do you use? I'm collecting recommendations before I try using one.


I don't use a DE, but I've used the tiling WM i3 for years:

https://i3wm.org


Try Lumina. It's simple and light on resources.

https://lumina-desktop.org/


I was about to ask about its robustness and security. Then the website gave me 404 errors on features, news, and other sections. Hmm. I'll still put it on the list, though, for non-critical use.


I am just a layuser, but I have had no issue running MATE and Gnome3 on my Lenovo Ideapad Y410p.


> 1. I don't have GBs of free memory for ZFS

ZFS does not require large amounts of RAM any more, unless you want to benefit from its impressive caching features and have large amounts of disk space.

> 2. I don't have GB of RAM, CPU and HD space to build everything from ports, and most importantly, no need.

Since FreeBSD 10, pkg-ng has been the standard. It unifies binary packages installable similar to apt and other linux package managers with your custom ports built packages as needed. Now you can mix and match seamlessly and there is no need to have an entire ports tree unless you have special needs.

> 3. License - I don't care. GPL is free enough for me.

Each to their own, but MIT and BSD are, to me, free-er and more friendly to business applications.

> 4. Base vs. Ports - Why should I care? Debian (testing!) is stable enough for me. Except for dist-upgrades, I never ran into issues, and then it may be faster to nuke the server from orbit. Now had BSD "appified" the /usr/local directory (rather than keeping the nginx binary in /usr/local/bin and conf in /usr/local/conf it would have kept everything related to nginx in a /usr/local/nginx) it would have been interesting, but now?

The layout you're discussing is basically what happens by default when things are `make install`ed most times without any customizations. The layout that you're lamenting makes sense enough and has merits, with no need to be "appified" -- that is what packages are for. Again, I implore you to check out pkg-ng, it is a core feature of the OS now.

> 5. systemd? The reason Debian went with systemd was (IIRC) because they didn't have the manpower to undo all of RedHat's work in forcing all applications to integrate into systemd (such as GNOME). I don't know how FreeBSD is doing in that regard

People are experimenting with alternative init systems in the BSD world. Though we will likely never have systemd directly, there are some efforts to support systemd unit files, as well as some new directions that take into account Apple's work and lessons learned from systemd to build a BSD-licensed alternative to all of the above.

> But FreeBSD?

FreeBSD brings to the table a free, cohesive, highly engineered and integrated operating system that is coming into its prime. There are a multitude of advanced features in terms of filesystems, networking, and more. FreeBSD has first class support in AWS and some other major cloud environments. However, FreeBSD has been and continues to be a server focused operating system. Even so, FreeBSD on the desktop is coming along, but it's certainly nowhere near linux in that regard.

In the end, each to their own; if you find a reason to like and use it, then great, but at the least you should look into it and learn what you can from it, like any system.


> Each to their own, but MIT and BSD are, to me, free-er and more friendly to business applications.

True, but I'm not going to sell OSs, and plenty of corporations are working on Linux


I think the best, and most traditional, comparison is the Cathedral and the Bazaar[1]. Pretty much every observation the author makes can be framed in this comparison. BSD has built-in libraries, and has a cohesive architecture. Linux is popular precisely because of its chaos. It is incredibly easy to hack some feature together in Linux. It will probably start out as ugly, but if enough people glom onto it, it will become great and maybe even secure. It is pretty clear that Linux became popular not because it was the best, but because it was the fastest development path.

Obviously BSD has its uses, but if you're looking to develop a new feature and get it out the door (in OS development) Linux is the easiest choice.

[1] ESR: https://en.wikipedia.org/wiki/The_Cathedral_and_the_Bazaar

Edit:

I remember reading somewhere that Netflix uses BSD for all of its net-intensive servers as network performance tuning on BSD is, at least by reputation, better, but they use Linux as their workhorse. They employ some FreeBSD committers though (obviously not everyone can do that).


BSD development versus Linux development is not an example of the Cathedral and the Bazaar. That essay was about development process: under the Cathedral process, development happened "behind closed doors" and the only time the public got access to source code was at release time; under the Bazaar process, development happened "in the open" and the in-development source tree was always available to the public (typically via a source control system).

To see why the various BSDs are not an example of the Cathedral process, you only need to look at their source control. In fact, OpenBSD pioneered the idea of anonymous CVS -- before that, you needed an account to check out from the CVS repository:

"When OpenBSD was created, de Raadt decided that the source should be easily available for anyone to read at any time, so, with the assistance of Chuck Cranor, he set up the first public, anonymous CVS server. At the time, the tradition was for only a small team of developers to have access to a project's source repository. Cranor and de Raadt concluded that this practice "runs counter to the open source philosophy" and is inconvenient to contributors. De Raadt's decision allowed "users to take a more active role", and signaled the project's belief in open and public access to source code." [1]

[1]: https://en.wikipedia.org/wiki/OpenBSD#Open-source_and_open_d...


I disagree with you on both counts. Cathedral development is not about source availability (Wall uses Emacs and GCC as examples of Cathedralism, for heaven's sake); it's about source modifiability. Theoretically, two different projects under the same license could have different approaches. Very few high-level architectural decisions are made by the Linux maintainers (unless they are coding it themselves); they're there to say "yes" or "no". Some of the most important features in Linux (cgroups, namespaces, etc.) came from outside the maintainer team. By comparison, NetBSD has a highly top-down approach. The source might be readily available, but good luck landing a big patch.


At the time ESR wrote the essay, both Emacs and GCC did use the Cathedral development model I speak of. They've only opened up since then, due to forks like XEmacs and EGCS which started to eat GNU's lunch. The FSF eventually had to cave in to the Bazaar model because it works.

Furthermore, people without committer access land patches in the BSDs all the time. In fact, that's how you get committer access: you first start contributing patches to the appropriate mailing list, where existing committers can review them, discuss them, and perhaps merge them. It's not terribly different from sending a pull request on GitHub, and afaik it's pretty much the same process used by the Linux kernel.


My perspective as someone who is mostly a desktop user/dev and rarely dives into server related issues other than setting up a home network and maybe sshing into a server somewhere in the magical cloud:

* The BSDs feel more elegant, the Linux distributions feel more complete/up to date (and are probably more performance optimized due to enterprise embrace but that's only speculation on my part).

* I sympathize with the BSD license philosophy and agree that the BSD-license is in theory freer but don't care too much either way

* OpenBSD is awesome (yes even for desktops, with the right hardware) and I install it every now and then but ultimately I'm too familiar with Linux to switch. I do like that they keep fighting the good security fight, don't care about supposed personality quirks of some devs. Keep up the good work

* At the end of the day I use Linux because that's what I grew up with and it tends to "just work" (for the record all BSDs I installed in recent memory pretty much worked out of the box). I am kind of forced to use OS X at work but other than that it's Linux on all machines. The parents also use it by now.


I tried FreeBSD. Went back to Ubuntu. In this day and age, not having proper dependency resolution for packages is not acceptable. I've more than once had some tool based on ports just go into an infinite cycle when asked to upgrade some packages.

The philosophy laid out in this article seems more like rationalization of historical accidents than anything else. The Linux file system layout is just as predictable as anything else. Configuration goes in /etc, PID files and other such things go in /var, things local to the users go in /usr/local, cron-related things go in /etc/cron.d, and so on and so forth. The FreeBSD file system layout, on the other hand, makes no sense to me, and the rc system is even more bizarre. Do I really need to specify "XYZ_start=YES" when upstart, systemd, and others have this stuff all sorted out already? Well, not systemd so much, but close enough.

Overall the BSDs are too low level for development and play purposes. For deploying proxies, caches, NATs, and other such single purpose things I'm sure they're great but anything else not so much.


It's really not just a rationalization. BSDs control the kernel and the userland, meaning that file system layout is coherent. Check any Ubuntu installation and packages are all over the place. This is perhaps unavoidable.

Also, OpenBSD, DragonflyBSD, and FreeBSD can all be used as server or desktops without much fiddling. Their installers are better than anything in the Linux world. You can have one up and running with a nice wm, browser, etc, in 15 minutes.

They are more finicky about hardware, but when you can buy a laptop for $200, that's not so much an issue.


+1 on the BSD installers being some of the best out there. OpenBSD's and FreeBSD's specifically are amazing, both for different reasons.

OpenBSD's is very minimal: simple ASCII input and output questions with sane defaults.

FreeBSD offers a console dialog based approach similar in some ways to Debian's installer, but in other ways different (and to my mind, better).

Both of them will get you efficiently set up from nothing to installed within minutes, and since they are very simple and well documented, both of them are easily scriptable for installing servers or laptops quickly with a specific setup or your own prompts.

By contrast, linux installers are generally larger, slower, clunkier processes and I am not sure how easy it would be to integrate any random distribution's installer into a scripted workflow.


I don't think the layout is coherent at all. Ubuntu layout is much more coherent. If I need nginx configuration I look in /etc/nginx. Following the same pattern if I need mysql configuration I look in /etc/mysql. If I need to figure out the actual daemon configuration for each I look in /etc/init. It's all very nice and tidy. Actual binaries go in /usr/local/bin or /usr/bin or /bin. That's pretty much all I need to know for any binary or daemon. This is not true in FreeBSD. So where exactly is the incoherence?


It isn't actually as nice and tidy as it would seem because you do not have a direct separation between system configuration and application configuration.

It's a small, subtle detail that for many is less important today than it once was; however, if I have two FreeBSD systems of mostly equal versions and I want to migrate applications and/or users from one to the other, I know ALL I have to do is copy from /usr/local -- try that on your Ubuntu machine. Good luck separating the system-specific configurations from your user and application configurations.
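A sketch of that migration (you'd probably also want the package database under /var/db/pkg and any home directories, depending on your setup):

    tar -C / -cf - usr/local | ssh newhost tar -C / -xpf -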


I don't want that separation in a consistent and coherent system. User vs. base is artificial. What I care about is the end result the overall system accomplishes, and base vs. non-base doesn't really factor into that. What does it matter to me if nginx is part of the base system or not? I need a web server. Arbitrarily choosing apache/httpd vs. nginx to include as part of the base system is incidental complexity from where I stand.


> User vs base is artificial.

No. It's not.

If nginx releases an update, do you also need to update your OS? No, because the user dependency graph isn't shared with the base OS.

If you update your OS, do you also need to update nginx? No, because there's backwards ABI compatibility.

The split is not artificial.


> If you update your OS, do you also need to update nginx? No, because there's backwards ABI compatibility.

Only for minor versions. How is that different from any other package that promises ABI compatibility?

Why are FreeBSD OS components split up at a low granularity, so you can't install the stuff you want without the stuff you don't?

Why are there two separate sets of tools to update them and other packages?

edit: Why separate non-base packages from base packages on the filesystem, but not from each other? If the advantage of separating /usr/local is the ability to copy the non-base configuration/apps without the base stuff, wouldn't it be more useful to be able to copy any given individual package or set of packages? Or if that is possible already (by asking the package manager for a list of files or whatnot), wouldn't that obviate the need to separate base from non-base?
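For what it's worth, both package managers can already produce that per-package file list, which is what the edit is getting at:

  pkg info -l nginx    # FreeBSD: files installed by the nginx package
  dpkg -L nginx        # Debian/Ubuntu: same idea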


> ... non-base packages ... base packages ...

FreeBSD does not work this way. You are projecting Linux onto it.


You're talking about kernel vs userland. That's not base vs non-base. The BSD distributions unintentionally muddy the waters by calling what is an equivalent of a linux distribution a base system and so lose out on the distinction you are drawing.


No, I'm talking about the Operating System vs. Applications.

Drawing the line at the kernel syscall ABI is arbitrary; no other major operating system other than Linux does so.

Consider how difficult it is to ship 3rd-party applications for Linux and all its myriad of distributions; the only thing you can truly rely on is the anemic syscall ABI provided by the kernel (vs. userland).


Hmm, I've never had any problems running something on ubuntu, debian, gentoo, redhat, etc. If it works on one then it works on all the others. If push comes to shove I just install the libraries and compile from source, but this is a rare occurrence and I've only had to do that when I've relied on a version of Ruby or Python that has not come pre-packaged with the distribution.

So I don't really know what you're getting at. If you're not talking about kernel vs userland and you're talking instead about ABI compatibility then the ABI is even more stable and requires even less work to port any piece of software from one distribution to another.



This is an increasingly solved problem in the post-systemd world - /etc contains local configuration, system configuration is in /usr. We (CoreOS) support you doing mkfs on / in order to return to a default configuration built from the /usr config.


If that's your definition, then FreeBSD and Solaris solved this without paying the systemd tax, via ZFS-based boot environments.

https://www.freebsd.org/cgi/man.cgi?query=beadm&sektion=&man...

NeXT->Apple also avoided anything like systemd via a configuration system that supported inheritance across search domains (User, Local, Network, System):

https://developer.apple.com/library/mac/documentation/Cocoa/...
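For the curious, the boot-environment workflow is only a few commands (a sketch; beadm needs a ZFS root, and the BE name is arbitrary):

  beadm create pre-upgrade      # snapshot the current root into a new BE
  beadm list                    # show available boot environments
  beadm activate pre-upgrade    # boot into it (or back to it) on next reboot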


Oh, there's certainly multiple ways to do this. My point is purely that this isn't an inherent Linux/*BSD thing - there are modern Linux distributions that have precisely this property.


And isn't it actually part of the same effort (RH Stateless Linux) which spawned systemd and ostree ?


This is something I actually like about systemd and firewalld, everything in /usr/lib/systemd/ and /usr/lib/firewalld should only be handled by your package manager, anything you customize yourself belongs in /etc/systemd/ and /etc/firewalld, and they will override the corresponding entries in the 'global' folders if you give them the same filename.
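Concretely, the override is just filename shadowing, something like:

  cp /usr/lib/systemd/system/nginx.service /etc/systemd/system/
  vi /etc/systemd/system/nginx.service    # this copy now shadows the vendor unit
  systemctl daemon-reload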


FreeBSD is quite coherent. Anything not part of the base system goes into /usr/local.

If you're having trouble understanding the layout of the filesystem, you only have to man hier: http://www.freebsd.org/cgi/man.cgi?hier(7)


But I thought BSDs were more monolithic and coherent systems. How does drawing a distinction between base system and non-base system make that true? If anything Linux embodies that philosophy much better. There is no ad-hoc and arbitrary decision between what is part of the base system and what is not. The system as a whole is coherently laid out without drawing any base/user distinctions. Daemons are daemons, binaries are binaries, tools are tools, configuration is configuration, and they all go in predictable places to become part of the overall system.


Have you read the article? It isn't an ad-hoc distinction. You get the base system with a FreeBSD install. It includes the kernel and everything else that makes it a "FreeBSD system". This is stuff maintained by the BSD team. Everything else is an add-on.

If you blew away /usr/local, you would be left with a pristine (mostly) BSD install.

An analogy is a base windows install and all the associated tools and drivers. Anything else you install on your own is an add-on.

This distinction is hardly arbitrary.

Edit

Seems to me you have a hard time understanding what a "base system" means. This is what the article is trying to explain, and it seems to have gone completely over your head; I can't help with that.


I read the article. The distinction is indeed arbitrary. How much do you need for a base system really? Do you need more than the kernel, network drivers, and the filesystem? Last time I checked FreeBSD came with way more than those things. So someone, somewhere decided to include a whole bunch of other things as part of the base whereas they could just as easily be add-on packages. I think FreeBSD includes various compilers and perl by default. Why's that? How is a compiler an essential part of a system? There are no a priori reasons to make certain "add-ons" part of the base system and others not. Linux does not draw this distinction and reaps the benefits of a more coherent system.

The equivalent of "base system" in linux land is called a distribution and rightfully so. There is nothing basic about a distribution. It is an arbitrary set of choices made by the distribution maintainers and it is sold/advertised as such.


I think the easier way to think about the base system is to look at Mac OS X. Stuff in /usr is installed along with the Mac OS X system, and when people need other packages (or think some packages shipped as part of the system are too old), they use their favorite package manager to install them to /usr/local or somewhere outside /usr. This is also the case with the FreeBSD base system. The package manager does not manage packages that are installed as part of the system, only the ones installed by the user.

As far as the compiler goes, in my 10.3-RELEASE-p5, I can only see clang 3.4.1. No Python, no Perl, not even Bash. If I happen to need clang 3.8, I can just `pkg install clang38` and it will happily live inside /usr/local, separate from the one in /usr/bin (which is presumably there for building the world). This one will be managed by pkg, but the one in /usr/bin will be updated when I upgrade to 11.0-RELEASE (which will ship with 3.8).

Thinking in Linux terms, it is probably like installing only a minimal Debian stable and using pkgsrc to install the rest of the system to /opt.
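To spell out the clang example above (assuming the port installs its binary as clang38; the exact name may differ):

  /usr/bin/clang --version           # 3.4.1, part of the base system
  /usr/local/bin/clang38 --version   # 3.8, managed by pkg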


You get the base system with a FreeBSD install. It includes the kernel and everything else that makes it a "FreeBSD system". This is stuff maintained by the BSD team. Everything else is an add-on.

This is why I typically find Debian to be a more cohesive system than FreeBSD. If I could run only the base system, or only the base system plus maybe one or two packages, FreeBSD would have a very good story about being developed in a unified, coherent way. But in practice I need a bunch of stuff from ports, and on FreeBSD that much more clearly falls into the "everything else is an add-on" category. There's relatively little integration testing or attempts to make the software in ports do things in a coherent "FreeBSD-style" way; it's a bunch of third-party software delivered mostly as-is. Whereas Debian considers anything in the 'main' Debian repository part of the official release, subject to release-management and integration testing, and ensures it works in a more or less "Debian way". Whether that matters depends on your use case, but for me that makes Debian feel more like a unified system.

I'm thinking mostly about servers here fwiw. On desktop the base/applications distinction works better for me, so I could run "FreeBSD" as a coherent base system and then install some separate applications on top of it, which is all fine. But on servers I prefer the coherent base system to be more "batteries included", including integrated release management of all the major libraries and software packages I'm likely to need. If I deploy on "Debian 8" vs. "FreeBSD 10", for example, the former gives me a much larger set of stable components that work together in a reasonable way, while the latter leaves me to more DIY it outside of the relatively small base system. (Whether this matters of course depends on what you're building.)


> If I need to figure out the actual daemon configuration for each I look in /etc/init.

... except that that was Ubuntu two major versions and some time ago, back when it used upstart. Nowadays, Ubuntu Linux is a systemd operating system.

* https://freedesktop.org/software/systemd/man/file-hierarchy....


Whenever I hear "BSD" I always think: "if you want to use BSD you have to build a machine for it". This isn't really based on any real world experience, but whenever BSD comes up as a topic the only thing that sticks with me is: "it only likes very specific hardware", which makes me very hesitant to ever consider trying it. I've had so many hardware incompatibility problems with Ubuntu, which is supposed to support almost everything, I don't even wanna think about trying to run BSD.


It's strange, I have heard lots of people complain who wanted to give Linux a try (over Windows) and ran into hardware issues.

On the other hand, over the last ~15 years, I have almost never checked if a piece of hardware is supported by Linux (or BSD) before buying it, and I have only run into problems twice, really. One was an Aureal Vortex sound card for which a driver was maintained outside the Linux kernel tree, so I had to download it and build the module manually; the other was a really crappy HP scanner. Ironically, these days it does work on Linux much better (meaning: at all) than on Windows. (One key factor, though, seems to be not using brand-new hardware. After a year or so, chances are much better it is supported properly.)

Apart from that, I have had no problems at all. Maybe I was just really lucky.


GPUs have been the pain of my Linux existence, everything else is generic enough to work, but last time I tried Ubuntu the default settings manager crashed at the mere sight of my GPU, and when I installed a different WM it either couldn't find the resolutions I wanted to run or they didn't stick around after booting. After messing around with some "xorg-edgers" drivers I ended up with a machine which claimed it had no displays anywhere, and I just gave up and installed Windows.

It always seems to be one thing or another. Either it's some program/service I use daily that doesn't work (or doesn't quite work as well as I know it can) or it's some strange issue with sound or, most likely, the GPU.


Hardware can actually be a mixed bag between BSD and Linux. In my experience (which is several years dated by now), BSDs seemed to do better with various less popular WiFi chipsets.


I remember having trouble with wifi chipsets ~5-6 years ago when smaller netbooks started to hit the scene. I had one of those little 8" laptops with a brand new ARM chip and 4-hour battery life; getting wifi working on it was hard, but after that I haven't had much trouble with wifi. Maybe I've just been lucky.


> Also, OpenBSD, DragonflyBSD, and FreeBSD can all be used as server or desktops without much fiddling. Their installers are better than anything in the Linux world. You can have one up and running with a nice wm, browser, etc, in 15 minutes.

When I wanted to try FreeBSD the installation failed multiple times, and after the first success I couldn't install a DE for hours. I've also had problems with the package manager because I couldn't install anything before doing something with some configuration files. OpenBSD is in the same category, but at least it didn't fail at installation as much.


> In this day and age not having proper dependency resolution for packages is not acceptable.

It still doesn't? Haha!

I remember ranting about this 10 years ago, with a friend who went to a conference in France… that was dedicated to package management in BSDs.

I asked him how to "just upgrade all the packages" (apt-get upgrade). I'd found two-three ways, but couldn't get them to work. He said that yeah, there are three ways. They don't work.

I thought surely they'd fixed it by now.


> I thought surely they'd fixed it by now.

They did. pkg has gotten a lot better the past 3 years.


pkg update; pkg upgrade

No different from apt on a Debian system. It's come on leaps and bounds since the initial 10.0 release.


On OpenBSD, just do `pkg_add -Uu`.


That's packages only, and with ports you're on your own, right?


OpenBSD devs recommend using binary packages, even for the ports tree. Even if you compile from source, it generates a binary package then installs it.


FreeBSD has the "pkg" system now, which has binary distribution with dependency management.

Of all things, the lack of a distinction in Fedora/RHEL of "requires vs. recommends" is frustrating, coming from Ubuntu.


This sort of explains why I have chosen OS X. It's a UNIX operating system that makes for a great modern desktop machine. I don't need to worry about poor and/or glitchy hardware support. It's 2016, I don't have the time to play around with finding WiFi drivers or graphics drivers that can hardly support 3D acceleration. Everything has always worked out of the box. Having had two MacBook Pros, I haven't had one hardware or software issue to date. They (OS X/Apple) have the advantage of being unified. People aren't constantly arguing over stupid shit as they seem to with Linux. FreeBSD's development on the other hand is just too slow for my desktop use. Despite what the article says, I have found that there is usually a support gap somewhere after an install; with drivers or whatever else. This isn't to say I dislike Linux, I do love Linux. I just wish they would get their shit together on a few issues. As for FreeBSD, it still has a strong spot for secure, stable servers. I suppose it's all a matter of preference. At the end of the day however I will always choose a UNIX system for my day to day use.


Such a great illustration!

* You can use great hardware, but the choice of it is rather limited.

* You're OK with upstream making a lot of choices for you, though the upstream is considered good at making the right choices.

* You don't mind running a lot of proprietary stuff (anything above the Darwin level).

People who choose Linux (over OSX and even BSD) look at these bullet points differently, thus the divide.


Exactly.

* When I bought my laptop 5 years ago, Apple was a candidate. It lost out because I still wanted an optical drive (and was interested in Bluray specifically, at the time), I wanted a 1080p screen, an HDMI output, and at least 3 USB3 ports. Upgradeability is a plus; I've upgraded the RAM, hard drive, and wifi card since I bought my laptop, as my needs have changed.

* I like having choices (in terms of software and configuration), and they often don't match up with the best options for mainstream users.

* I've got a Windows partition, but it's not my first choice for general computing. I don't have any particular interest in Apple's technologies or services, so I feel like a lot of their ecosystem would be lost on me anyhow. I do generally find Apple's software to be well-designed and aesthetically pleasing. I'd be worried about fighting the software to get what I want done, though.


Guys, this article was written somewhere in between 2002 and 2005 (based on the author mentioning bitkeeper being used for the kernel). Someone edit the title to reflect this please.


It also mentions gcc 3.2.2, which was released on 2003-02-05.

It also mentions that Gentoo is getting popular, which was happening in 2002 and 2003, after which it has been declining.

Also this article has appeared on Hacker News many times before, with titles of various relevancy: https://news.ycombinator.com/from?site=over-yonder.net


> It's been my impression that the BSD communit{y,ies}, in general, understand Linux far better than the Linux communit{y,ies} understand BSD.

Ok fair enough, but the same can be said about manual v automatic transmissions, static versus interpreted languages, etc.

When something is harder to use, you're forced to think about it more and understand it better.


I also think it's somewhat of a fallacy. Most BSD users have come into contact with Linux at some point, whereas the reverse happens not nearly as often (in relative terms), and that's mainly because Linux is simply more popular.


A consistently designed well-documented system is probably easier to use, all else being equal.


In day to day use I agree, although it is harder to get exotic software to work on the BSDs.


That's a consequence of that software being written with only Linux in mind, not a problem with the BSDs as such. Also, somebody needs to write a port, and the only person who'll do that is somebody with skin in the game.

Obviously I'm an outlier, but if I need some software on FreeBSD, I'll contribute a port. Most of the time it's almost trivial, and it's only with very obscure software (maude is one that I'm currently working on) or newish software (RethinkDB, which isn't written in a super portable manner - it doesn't even use pkg-config and has a hand-written configure script!) that there's an issue.


Yes, in fact Torvalds and the community (on the other OS's side) understood it, but personally I have fun playing with both Linux and BSD; both options are strongly valid.


Related discussion on GNU vs. Linux:

https://www.gnu.org/gnu/linux-and-gnu.html (BSD is discussed here as well)

https://www.gnu.org/gnu/gnu-linux-faq.html (According to this page, the title of the article should be BSD vs. GNU/Linux, though the latter was mentioned once in the article)


Disclaimer: I know this is a side issue. However, I'm really wondering what it is that makes people think using such a “creative” color scheme for publishing their texts would do any good. If it weren't for Safari's Reader view, I would be physically unable to decipher this text. Which would really be a shame, because this text deserves to be read.

I guess I'm still hoping that one of these days, I will reach one of those authors and make them understand that the content is of paramount importance and creative coloring can do nothing but detract (except when you're an expert). If you use your own color scheme, please make sure you know what you're doing.


It should be a side issue, but it's why I didn't read past the first page. It was physically uncomfortable to do so.


I just noticed there is actually a color theme selector right below the sidebar. The Light and Dark themes are both fairly readable. The author should just throw away the "Normal" theme.


Fun fact: I always use cyan as a "debug color", e.g. as "border: 1px solid cyan" to locate elements in the layout. Cyan just has no place in any proper color scheme (IMHO) and thus easily stands out against any background.


I would have entitled it Linux AND BSD

This article is really about the objective differences between Linux and BSD. It is not a rant in the form B > L or B < L; it is much more about the differences in a lot of directions.

I personally find it as much a help in guiding people to come as in guiding them to stay away.

I am a former user of Linux (which I used to like) and I am now using FreeBSD. I switched without love or hate. I was on Linux to have video card and latest-hardware support, and I am on FreeBSD to have the ports, stability and easy upgrading.

I did it after years of experience that let me build up a picture of the use cases each OS fits best.

Having this summed up in 10 pages will spare people a lot of time in deciding.

And I think it is as important to have people come to BSD as it is not to delude them into coming for the wrong reasons: disgruntled users are as much a pain as rigorous contributors are a gift.

So information to inform the choice, especially when it is not structured as an aggressive comparison, is very welcome.

The author should be praised.


"A tour of BSD for Linux people" sounds like a genuinely interesting read, but the aggressively defensive tone starting in the second paragraph and continuing throughout the introduction turned me off of reading past the first page. I'm definitely interested in how BSD differs from Linux, but I'm not interested in being lectured at. (So, please tell me if it tones down after the first page.)


This is probably the best comparison of [Free]BSD and Linux, if a bit dated.

In the meantime FreeBSD has changed the release schedule and there are efforts under way to package the base system separately.


Yes, a little bit old, but I consider it (modestly) a little bit opportune to bring it up again ;) In fact, some do not know :D


Very informative article, thank you. This article is missing discussion of Arch Linux (probably because it is very old?). Arch has a minimal base system (no X etc.) and you can build the system you like most.

Also Linux now uses git for development.

> And normally, you do the whole rebuild process in four steps. You start with a make buildworld which compiles all of userland, then a make buildkernel which compiles the kernel. Then you take a deep breath and make installkernel to install the new kernel, and make installworld to install the new userland. Each step is automated by a target in the Makefile.

> Of course, I'm leaving out tons of detail here. Things like describing the kernel config, merging system config files, cleaning up includes... all those gritty details.
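Spelled out (and with gritty details like the kernel config and mergemaster glossed over), the sequence the article describes is roughly:

  cd /usr/src
  make buildworld
  make buildkernel KERNCONF=GENERIC
  make installkernel
  shutdown -r now     # reboot onto the new kernel
  make installworld
  mergemaster         # merge updated config files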

Wow, so painful. For me it's as simple as:

$ pacman -Syu

and watch some movie. I bet the BSD way offers more opportunities to learn but I personally don't like learning for the sake of learning. Learning different ways to do the same thing does not make me a better person. So this is not interesting to me.

> Does Linux support hardware that BSD doesn't? Probably. Does it matter? Only if you have that hardware.

I could obtain that hardware at some point in the future.

> "But Linux has more programs than BSD!" > How do you figure? Most of these "programs" you're so hot about are things that are open source or source-available anyway.

Whether a given application is supported by its provider on your OS is an important consideration that people make before choosing to use an OS, as they should.

> Linux, by not having that sort of separation, makes it very difficult to have parallel versions, and instead almost requires a single "blessed" one.

Isn't this what NixOS (https://nixos.org/nixos/about.html) is supposed to be solving (among other things)?

I might try a BSD for the novelty aspect of it, but so far I have seen no reason why it should be better.


Yes, the article is very old.

> Wow, so painful.

And also something that hasn't been true for ages. By way of anecdote, I've had to do config merges on major version upgrades more on Debian systems than FreeBSD systems, but even then the difference hasn't been all that great: major version upgrades just work. And the last time I had to build a custom kernel was FreeBSD 6, which was about a decade ago.

Any remaining pain is largely a consequence of how freebsd-update works and should mostly go away once the base system is packaged (which should also make security updates a little faster to install).
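These days the binary route is a couple of commands (11.0-RELEASE being just an example target):

  freebsd-update fetch && freebsd-update install   # security/errata patches for the running release
  freebsd-update -r 11.0-RELEASE upgrade           # move to a new release...
  freebsd-update install                           # ...applied in stages, with reboots in between
  pkg upgrade                                      # then bring third-party packages up to date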


I feel this captures a lot of the feel I had for FreeBSD even back in 2001 when I used it quite a lot. It had a sense that you were chiseling out this solid system that would remain for eternity, along with an assurance of security and stability.

I run Ubuntu now on multiple devices. It often has a feel of flimsy binary patches. I don't know that I care too much though as it works well. I have family members too on it so it is good to remain practiced with it to help them.

I miss playing with FreeBSD though! Perhaps I should run it on one of my RPis? Anyway, this is a great site.


yeah, so much homesickness... FreeBSD is a really cool BSD OS, I like it. On the other hand, CentOS is my default Linux server :D Debian is a really cool OS too...


The only thing that's preventing me from switching my desktop (laptop) OS over to BSD is chipset support. I tried 11.0 ALPHA, but couldn't get anything other than VESA video going.

My setup scripts are all in Git:

https://gitlab.com/duncan-bayne/freebsd-setup/wikis/home

I've used a few different OSs on a daily basis for work and recreation: Windows (since 3.0), Linux (various distros since 1995), OSX (for a few years between 2009 and 2013), and FreeBSD.

I have found BSD to be the most comprehensible, simplest and the best 'cultural fit' for the way I think and work. I appreciate that the latter part is a bit vague, but that's because my understanding of it is vague :) BSD just feels ... more like home.

Those wanting to give it a go as a desktop OS should check out PC-BSD, which is built to be usable 'out of the box' for that purpose:

http://pcbsd.org/


I'm a total linux user and have tried *Bsd from time to time. For some reason, the bsds have always taken a very long time to boot up and even to start X. The last time I tried FreeBsd (about 3-4 years ago), a friend asked me to help him debug why X windows didn't run. It was taking 15+ minutes to startx on his brand new server class PC that he built up himself. We didn't realize that it took so long to start up until we went out for dinner and came back, surprised to see X had started. Same with OpenBsd, it took 10+ minutes just to boot up. I thought I didn't install it correctly and re-installed it a couple of times. I'm guessing, the Bsds are meant to be run continuously and not shut down? I've asked a couple of other Bsd people, and they couldn't provide an answer, so I'm left up in the air about that.


I can't confirm that. By virtue of being very minimal out of the box I've found BSDs boot very quickly.

systemd has arguably made Linux bootup faster than before but I would expect a default BSD to be quicker than a bloated Ubuntu default install.


I have run OpenBSD as my primary OS for years and never experienced anything like that. It boots/reboots very quickly.


What hardware was in that machine? Multiple minutes bootup sounds very odd.


Note that most BSDs (with the notable exception of OpenBSD, of course) lack modern anti-exploitation features like ASLR.

FreeBSD is working on it, but it took surprisingly long.


NetBSD has had support for ASLR since 2009 (NetBSD 5.0), but they only enabled it recently by default: https://mail-index.netbsd.org/source-changes/2016/04/10/msg0...


Pity this document is not dated. But judging from the version numbers of things he mentions it probably dates from around 2003 or perhaps 2004.


Ahh, this old article again.

>"But Linux is more popular." >So? Windows is even more popular. Go use that.

This slope is so slippery I broke my neck when I fell.


Another aspect of the "Linux vs. FreeBSD" story is Java. I was almost won over by FreeBSD a few years ago; however, what keeps me back is, so to speak, the "second class citizenship" of the JVM on FreeBSD. Or at least it was so pre-OpenJDK 8, when I looked.

Is anyone successfully using Java on FreeBSD in production?


Depends on what you mean by production.

I've been running Gitblit and multiple Minecraft (Spigot) worlds with Openjdk8 on FreeBSD 10 for more than a year and never had any problems (Jenkins worked fine too but I used it only for a short time).

But my userbase is only about 10 people, mostly in minecraft (to get a sense of the main world's scale: [0]), so I don't know if that qualifies :)

[0] https://world.vanwa.ch/


There was a time when you were required to accept a license using a web browser to build the port… Now you just `sudo pkg install openjdk8` or openjdk7 or openjdk6.

You can even install IntelliJ IDEA with pkg!


  When I say "Linux", I mean Red Hat. I mean Slackware. I mean Mandrake. I mean Debian. I mean SuSe. I mean Gentoo. I mean every one of the 2 kadzillion distributions out there, based around a Linux kernel with substantially similar userlands, mostly based on GNU tools, that are floating around the ether.
So... do you include Android? It's worth asking, because though it uses a Linux kernel, it's a heavily modified one (little chance of Binder ever being accepted into Mainline, or of Android abandoning it), and its userland is very different from anything you're used to from GNU.

Edit: I can't find a date, but from other stuff in it I'm guessing the article is from quite a few years ago, before Android was mainstream or even existed.


> little chance of Binder ever being accepted into Mainline

I hate to break it to you, but...

https://git.kernel.org/cgit/linux/kernel/git/stable/linux-st...

Looks like it's in 4.1+.


I-- it... what??

I really haven't been paying attention O_O


This article is 10+ years old.


2005. Long before it existed actually.


I have a lot of sympathy for the BSDs. I actually learned Unix with NetBSD in 2004. The problem with BSD nowadays, at least for me, is that all supported desktop hardware is really old. The newest still-available laptop that works is the Lenovo X240 (and other x40s....). And when everything works, like on my Samsung NP530, there is strange stuff like a super slow and unreliable Wi-Fi connection. So if you want a great OOTB development experience with the same working OS on desktop and server, Linux is the way to go...

But I cannot help but sometimes try the BSD stuff out, as it feels like "my parents' home".


The lead dev of Dragonfly worked to get it working on:

https://www.amazon.com/Acer-C720-Chromebook-11-6-Inch-4GB/dp...

That's a sub-$200 Intel Haswell machine. It also means Dragonfly (as well as the other BSDs) work fine on similar hardware.

Similarly, you can put together a modern sub-$300 desktop that will run it just fine.

Also see:

http://lists.dragonflybsd.org/pipermail/commits/2016-May/500...


OpenBSD 5.9 runs on Broadwell: http://www.openbsd.org/59.html


I use FreeBSD on the Desktop. It took me a bit to set up my system, so I keep lots of backups. I've been exploring the idea of building an image that contains all the stuff I use.

I find that FreeBSD is generally a lot simpler and more discoverable than Gnu + Linux. If you want a bare-bones UNIX experience free from SystemD and Pulse Audio, I'd recommend giving FreeBSD a try. The FreeBSD handbook is very nicely written, and at the very least using FreeBSD is an educational experience.


I remember checking out http://over-yonder.net/ after being linked from Slashdot circa 2006. I always have a soft spot for the "Homepage" link on Slashdot. I was impressed with his resume generator at the time: https://www.over-yonder.net/~fullermd/resume


A check of the wayback machine indicates this was first posted back in October 2010. Mods might want to tag it as such since some of the information is dated.


I'm not sure I see the distinction between the "FreeBSD's version of tcpdump" and, say, "Debian's version of tcpdump" ( http://sources.debian.net/patches/tcpdump/4.6.2-5%2Bdeb8u1/ )?


Linux is Python, FreeBSD is Scheme ;)


What's Solaris, then?


Rust - lesser understood and known but has incredible power...


COBOL


Algol60


> Darwin is closer to a standard BSD feel, but most of its userbase is people who came from BSD, so it’s a bit outside the scope of this essay as well.

I'm confused by this sentence. Did you perhaps mean "people who came from OS X"?


Actually I'm trying FreeBSD as a web server on a single-core VPS (for development purposes) and I'm happy with the performance and such. But my default workstation is Linux, in this case Fedora 24 (RH-based) with Gnome 3.x.


I love BSDs for their pureness. You feel connected to the machine.


This seems like it was written a decade or so ago. I don't know how much of this has changed in that time.

It's well written and informative, though.


Some call it chaos, others call it 'Freedom of Choice'.


BSD has a wonderful, unified events system that incorporates blocking disk IO. Linux has epoll.

https://www.nginx.com/blog/why-netflix-chose-nginx-as-the-he...


BSD and Solaris (which also has kqueue, and something else IIRC) got this really right while Linux got this really wrong.

I still prefer NT's IOCP, but kqueue isn't bad. epoll… is what happens when you get someone who's never done asynchronous I/O to design a system for doing asynchronous I/O.


Solaris has its own form of completion ports too, I believe. https://web.archive.org/web/20110719052845/http://developers...

NT's IOCP is a good, sensible model overall, having used it a bit recently. It's mildly marred by some "gotchas" (like async calls occasionally returning synchronously if there's nothing else on the queue, so you have to be careful with send, recv, etc), but the actual design is good. Thread pools being a convenient, native API in Vista+ is also a bonus.


FreeBSD also has Netmap and DTrace out of the box. They will also get TLS support in sendfile(2) as soon as the patches from Netflix land in HEAD.
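The DTrace bit really is out of the box; for instance, the classic one-liner to see which processes are making the most syscalls (run as root):

  dtrace -n 'syscall:::entry { @[execname] = count(); }'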


I understand the rationale for this (performance), but it scares me a bit to have something as complex as a TLS stack running in kernel-space, which I assume it will do.


The sendfile/TLS work from Netflix does _not_ rely on the TLS stack running in kernel space. TLS session management, negotiation, and data framing is dealt with at the application layer, via nginx/openssl. Once the TLS handshake is completed and session keys are derived, you bind your session key to a socket. The FreeBSD kernel then sees this, and when you call `sendfile` to push static data out of the socket through the page cache, it does "bulk encryption" at that point (presumably using nothing more than an AEAD or whatever TLS negotiated for you, with the given key). Basically, instead of read()/encrypt()/write() going through userspace, `sendfile` can just directly do something like `encrypt(static_data_in_page_cache_to_send, out_socket_addr)` when you call it, right there in kernel space.

The need for these cryptographic primitives isn't too onerous on its own, either; FreeBSD already needs them for IPSec (among other things probably), so the vast majority of needed, kernel-level encryption code was already there before this.

> ... which I assume it will do

I hate to lament or anything, but: ... why assume at all - they've written about it? I really wish people would just read the paper about this feature, because this is a large misconception about their work that nobody ever seems to get right, and I say this as someone who doesn't use FreeBSD at all, I just found the paper interesting. Everyone hears "sendfile with TLS support" and immediately jumps without actually reading.

Again, sorry to lament. It's just a personal nitpick or something I guess; the paper is very approachable though, so I encourage you to read it before assuming. I encourage you to do this with most papers - even if you don't understand it, since you might still learn something :)

A google search for "freebsd tls sendfile" brought me this immediately:

https://people.freebsd.org/~rrs/asiabsd_2015_tls.pdf

The main points about this are on page 2, last paragraph on the right column, and page 4, paragraph after the bullet points on the left column.

That said, you always have to look at the code to determine if it's really something worth going upstream, for all the usual reasons.


Sometimes I let my biases get the better of me. Thanks for the reference.


I agree, but there's already so much stuff that doesn't belong in kernel space inside Linux and FreeBSD, that this doesn't make it any worse. We will hopefully see the rise of something based on seL4 (microkernel) or Barrelfish's multikernel design (one kernel per core, no shared memory), and fix those issues at the root.



Must... resist... clickbait title... I'm not strong enough!


Given my very limited knowledge of BSD, it comes across as informative and well-balanced. Its HTML title, BSD For Linux Users is more descriptive than its first-level heading.


Linus Torvalds is the benevolent dictator of the Linux kernel; he is very rigid about what code goes into kernel space vs. user space.


Something I don't understand about FreeBSD is why they use the evil-looking daemon for the logo. Whatever the story behind it, I just think there are many other alternatives they could use as the logo.


We detached this subthread from https://news.ycombinator.com/item?id=12034979 and marked it off-topic.


It is a satanic cult. All these free software people are unpatriotic, satanic terrorists.


And communists! Don't forget communists!


It's a daemon. As in a long-running background process.


https://en.wikipedia.org/wiki/Daemon_(classical_mythology)

Also described as neutral forces of nature, and in other ways. Not always as persons, often as if they were mechanical.


What is wrong with beastie? Unlike backwards religious fundamentalists, the cute little devil hasn't harmed a single person.


> Unlike backwards religious fundamentalists, the cute little devil hasn't harmed a single person.

Even if one doesn't believe in the Devil, surely it seems in poor taste to use as a mascot a mythological creature who is reputed to be the source of every single evil, cruel, horrible thing in all of history. It's almost as bad as naming a piece of encryption software 'felony.'


As a practicing Christian¹, I can honestly say the BSD Daemon never bothered me. It's kind of clever. Also, I have it on good faith (heh) that the FreeBSD developers aren't satanists.

Expressions like “speed demon” don't bother me either. The general concept of a demon is somewhat different from a fallen angel; I don't have trouble with demons being painted in a neutral/jokingly positive light, unless the demon obviously refers to the devil or affiliates (in which case “woo the devil is awesome!” makes me moderately uncomfortable).

¹ Odds are I could even get labelled as a backwards religious fundamentalist.


He's not the Devil, he's a Daemon. Which has been used as a mascot of many schools (in the US) as well. But let's suppose we stop using imaginary creatures as mascots, then we should also stop using real ones. GA Tech and other schools use the yellow jacket, a real creature (a wasp) that has killed people.


I hope you complain to Apple for using as a name and symbol the fruit that figured so prominently in the temptation and fall of Man.


> poor taste to use as a mascot a mythological creature who is reputed to be the source of every single evil, cruel, horrible thing in all of history

Well ... what about Pan and Dionysus ? What about all other cultures in which there is no Devil representation that has a tail and horns ? History is long and the world is more diverse than you seem to realize.


What's evil about Pan and Dionysus?


Nothing, this was my point. The OP just implied that the small devil image of FreeBSD represents evil in the entire history of humanity which is false.


You wrote in the context of the devil being

> the source of every single evil, cruel, horrible thing

and used Pan and Dionysus as a counter-example.

If you're saying 'well, Pan and Dionysus both have horns like the bsd mascot' I don't find that compelling either because the identifying characteristics don't line up (different types of horns, the tail, the pitchfork, and so on).


Both Christians and Muslims have been a huge source of actual (vs. childish imaginary) evil over the past 2000 years. Anyone who takes the devil seriously should stick to reading Harry Potter.


…Believe it or not, humans in general have been a huge source of actual evil throughout human history. You could probably reasonably argue that Christians and Muslims¹ have contributed enough good that in spite of their evil actions they've probably had a net positive effect on humanity.

Oh, and quite a lot of math and science dates back to Christian and Muslim scholars, so your last sentence is perhaps dismissive of the general intellect of religious folk. As someone rather appreciative of the work of Isaac Newton, Blaise Pascal, Gregor Mendel, a variety of ancient Muslim scholars, and many more, I'm glad they didn't stick to reading fiction.

¹ Who I don't generally group with Christians (the religions, while superficially similar, are pretty different), but I don't have anything against them and I would not consider them “evil”, or at least not more evil than humanity at large.


> Oh, and quite a lot of math and science dates back to pagan Platonic and Vedic philosophers.

FTFY.


You can't conduct religious flamewars here.

Also, please don't create many obscure throwaway accounts on HN. This forum is a community. Anonymity is fine, but users should have some consistent identity that other users can relate to. Otherwise we may as well have no usernames and no community at all, and that would be an entirely different forum.


> I like FreeBSD, and I use it as a server OS and on a NAS box but you only need look at https://wiki.freebsd.org/Graphics to understand that if the "Linux Desktop" is a joke compared to MacOS and Windows then the "FreeBSD Desktop" is even more so.

I've never heard anything good about the "MacOS desktop" and I'm pretty sure that the "windows desktop" is far behind the "linux desktop" (plugins, performance, menus etc.). Have you tried something other than xfce or unity?


We detached this subthread from https://news.ycombinator.com/item?id=12034708 and marked it off-topic.


> far behind the "linux desktop"(plugins, performance, menus etc.). Have you tried something else than xfce or unity?

You're missing the point, GP is talking about a level below the DEs, namely the graphics stack, which if unavailable or inefficient makes any discussion about the DEs moot since the GUI may very well be unusable at all in practice.

In that regard the "Windows desktop" has plenty of favorable points and "macOS desktop" is just stellar because it has had a comparatively perfect track record of driver support including a compositor since forever.

With a VESA/OpenGL/Quartz software render fallback, many people in the Hackintosh crowd have been using OS X on unsupported cards via software fallback and not even noticed they weren't getting the full Quartz QE/CI, which is nothing short of astonishing.

This is the signature attitude that points to the point being missed: what good is conky or awesomewm or ratpoison or xmonad or openbox if I can't make tearing and rendering artifacts disappear, nor get proper colour management or sane HiDPI support? (PS: Xinerama, I hate you)

As for the DE experience on other platforms, Windows has had a kinda-tiling WM that regular people do use in the form of AeroSnap, while macOS is now getting tabs-in-the-window-manager, and Exposé+Spaces that evolved into Mission Control has been a positively brilliant experience for years.

Meanwhile on Linux, the very fact that you have to massage it into something useful ever so slightly at every level is, to me, a telltale sign of its deficiencies. Things are getting better (wayland, drm2), but they're getting better on the other platforms too. The best part is that nobody's standing still and things are moving forward (hopefully; from the outside it looks like Linux people are running around in circles these days, so I hope the reinvented wheels are getting rounder).


> As for the DE experience on other platforms, Windows has had a kinda-tiling WM that regular people do use in the form of AeroSnap, while macOS is now getting tabs-in-the-window-manager, and Exposé+Spaces that evolved into Mission Control has been a positively brilliant experience for years.

> Meanwhile on Linux, the very fact that you have to massage it into something useful ever so slightly at every level is, to me, a telltale sign of its deficiencies. Things are getting better (wayland, drm2), but they're getting better on the other platforms too. The best part is that nobody's standing still and things are moving forward (hopefully; from the outside it looks like Linux people are running around in circles these days, so I hope the reinvented wheels are getting rounder).

And then again, after using xmonad for almost 10 years at work and at home for everything, using Windows or OS X is definitely not a nice experience. I don't really care about GUI tricks or clutter on my desktop. I just want my terminal, my editor and my browser to appear when I press the key combinations, and I want my desktop to manage the windows so that they're always in the right place.

The other thing I don't miss from the Windows and OS X world is the updates. I don't want to spend a work day updating my system to the next version and possibly fixing things because something changed in Xcode and I need to reinstall it to get my toolchain to work, or something changed in the Windows Linux subsystem and again I'm spending time fixing stuff.

I like things to be simple and a very basic Linux installation with a good wm which you know by heart is miles ahead of everything else. And it doesn't change so you can focus on your work.


Most people, otoh, enjoy playing games, watching videos, that sort of thing. Some people even do work that's dependent on the graphics card. Just because you use your computer for a very minimal amount of tasks doesn't mean everyone else does.


That misrepresents the situation on Linux. You can game on it seriously, since Steam is there, and you have been able to watch videos since forever.


Absolutely, I've been using it for a while (NixOS being my current distro), but I took what my parent was saying as that the graphics stack is irrelevant because they don't need to do anything with the graphics stack.


I do all of those things. My gaming PC is solely for games and netflix and it's connected to our television. I think of it more as a gaming console than a computer. It has Windows 10 on it just for Witcher 3 and to remind me never to install it on my laptop or workstations...

With my arch/xmonad laptop I can handle everything else, except that damn Witcher 3 :)


Everybody does that stuff in the browser nowadays, which works fine in FreeBSD.


"Everybody" certainly doesn't play games in the browser.

Most PC gamers will tell you that the experience usually sucks and is often poorly performant.


Ever heard of Amusing Ourselves to Death by Neil Postman?


I think you are referring to the critique on television? Perhaps you can pay more attention to the last sentence of vertex-four's comment. That being "Just because you use your computer for a very minimal amount of tasks doesn't mean everyone else does."

There are a million and one ways to use your graphics. Be it watching videos or gaming, which while you might have a problem with that, is actually done by people. It can also be used to smoothly decode and render a training video, or to make spreadsheets more pleasant to crawl through without the screen tearing and making everything look like a mess. There are even more minuscule applications you can do with graphics hardware and the software to drive it: transparency to subconsciously still remember what you're returning to when you click on another window, for example.

As nice of you as it is to warn the parent about their information-action ratio, you could use this opportunity to appreciate people's freedom of choice as well as understand there is more to Windows and Mac than Netflix and Call of Duty.


Wow, I am not sure how you managed to infer my lack of appreciation for people's freedom of choice merely from a title of a book. You may understand what I am trying to convey, if you ever read the book.


How about just coming out and saying it, instead of leaving smug hints about some niche book?

You're not adding anything to the discussion.


See, from my point of view enough was added to the discussion with just a comment about a "niche book". Rest of the comments on it are mere reactions of ego:)


ChuckMcM's criticisms seem to be of the X Window System, not of any particular desktop environment. Architecturally, it really doesn't fit with modern systems and has a lot of legacy baggage. It doesn't matter what DE you like, X still powers it, and it's the issue.

Wayland is on the way, which may bring Linux graphics into the 21st century, but I haven't been following that very closely.

Apple's way more on the ball about GUI stuff. While OS X lags behind in OpenGL support, which sucks for games, they nail everything their desktop environment needs to function. But notably, they don't use something like X. They have their own software stack.


Enlighten me - what's the problem with the linux desktops?


I would suggest reading (and comprehending) this excellent post for a review as to some of the tire-fire issues of X: http://blog.mecheye.net/2012/06/the-linux-graphics-stack/

DRI2 is an improvement over DRI, of course (which was already about five to eight years behind SOTA for GPUs when it came out), but it's still a rickety, insecure mess--for one, any application can fire messages at any window in the X environment and read input sent to any window, which has some obvious implications for the usefulness of containers on the desktop--that is sorely in need of replacement. This isn't to criticize X when it was developed, mind you--it was fine then. It should have been defenestrated by 2002.

Full disclosure: I get the feeling from your posts that you're looking to win an argument, rather than learn anything, so I'm not likely to respond to you further unless your tone changes. HTH. HAND.


Thanks for the informative link on the technologies that comprise the graphics stack on a typical GNU/Linux system. I see the author also has a great explanation of the different code components that form part of the Direct Rendering Infrastructure (DRI): http://blog.mecheye.net/2016/01/dri/


To summarize: X is just as insecure as Windows/macOS - we know it; Wayland, Mir and snappy are coming.

For your disclosure - don't be too lofty with me - I'm trying to understand your problems because I've a very different experience.


It's not a security problem really. That's just a side issue. The problem is, today, if I want to watch a movie, play a game, do anything that requires using some slightly advanced GPU stuff, I'll get tearing, artifacts, or downright crashes.

This is mostly because X11 sucks. (It's also partly because the proprietary nvidia drivers suck.) Nowadays I reboot into Windows to watch netflix or play games (even linux-compatible ones) because the experience is just much better. Linux has been relegated to work stuff.

Now wayland is coming (apparently. NVidia is still trying to pull some shit[0]) and should fix all that. That's great ! I tested wayland a few months ago though, and my GPU's proprietary drivers weren't supported. So still no games, still no movies. :(.

0: http://www.phoronix.com/scan.php?page=news_item&px=NVIDIA-Wa...


I think this honestly comes down to driver support rather than the Xorg stack. I dual boot with Windows 8.1 (Which is the OS supported by my manufacturer) and Arch Linux, and Arch is consistently more stable and faster for viewing videos and playing games than W8.1.

For example, I tried watching TNG from my external drive the other day (Using the latest VLC for both). On Linux this isn't a problem, the quality is good and there's no stuttering. On Windows it was hellish: The quality was extremely poor and it froze every (literal) 10 seconds to grab more data from the disk.


VLC is just a shadow of what it used to be. On Windows I recommend you to try something else like MPC-HC


How is VLC a shadow of what it used to be? I'm not trying to be argumentative, genuinely curious.


I've been using a Linux machine (Ubuntu+X11+nvidia+Kodi) as my HTPC for years. What's so sucky about it that I didn't notice during that time? No tearing, no artifacts, no crashes.


If you're using the nouveau drivers, everything's fine, but you don't get hardware acceleration. That means no games, but mostly tearing-free rendering, and no artifacts. The nvidia proprietary drivers is probably what's causing problems.


I'm using the proprietary drivers, 340.96-0ubuntu0.14.04.1 to be exact. The machine is 2009 vintage Atom 330 with Nvidia ION graphics, so I need VDPAU support for Full HD video decoding.


Nouveau has had hardware acceleration for years, although it has been desktop focused, not game focused.


Wayland is not going to make your Nvidia drivers more stable.


I have the suspicion that by the time Wayland is done, it will be as much a mess as X is right now. Meaning that we would be better off just finding a way to treat every program as a root window (effectively killing the information leakage that exists right now).


Back in 2011, security researcher Joanna Rutkowska wrote a very enlightening article on the inherent security deficiencies of the X server architecture, which allows any windowed application to control any other one: http://blog.invisiblethings.org/2011/04/23/linux-security-ci...


Back in the 90's, the high-assurance community fleshed out the issues and solved many of them on top of strong endpoints. Here's an example:

https://www.acsac.org/2006/papers/epstein-paper.pdf

One from 2005 that I sent Joanna in gripe about Qubes not having a trusted path:

https://www.acsac.org/2005/papers/54.pdf

Good to see she added a better graphics system at some point. I suggest looking at prior work, though, as it's by security engineers with knowledge of adding it to every aspect of lifecycle and intent to minimize TCB. For example, GenodeOS is using Nitpicker + a microkernel. The abandoned EROS OS, whose site is still up, combined a different windowing system with a capability kernel and new networking stack. All to minimize risk in system in fine-grained way.



For a start, X11's security model. Every X11 application can read all keystrokes, mouse events, do screengrabs of other window's contents, etc.

In other words, your browser can snoop on what passwords you type into an X terminal if it wanted to or is compromised.
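You can demonstrate this to yourself from any unprivileged X client (the device id being whatever xinput reports on your machine):

  xinput list                 # find the keyboard's device id
  xinput test <keyboard-id>   # streams every keystroke, whichever window has focus
  xwd -root -out snoop.xwd    # and any client can grab the whole screen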


Yup yup. Like, I recall a blog post by a Docker core contributor and employee saying, "Run your desktop apps in Docker containers to sandbox them! Just volume the X socket into the container."

(Of course, the Dockerfiles were also running apps--like, say, Chrome--as root, with no user namespaces, so it wasn't exactly great anyway.)


Sure it's breakable, but that still isn't bad advice. In practice, an application run with the evil bit set won't know how to break out of such a container, nor how to abuse the X socket.

It's only bad advice if you're relying on Docker to provide security against targeted threats. Know your enemy.


I.e. X11 working in same way as Windows, OS/2, BeOS, MacOS, OSX, etc.

But in Linux, I can run an application in a container or VM and then use VNC, RDP, or Spice to connect to it in a secure manner.


I.e. X11 working in same way as Windows, OS/2, BeOS, MacOS, OSX, etc.

No, it's not. OS X has GUI isolation, an application cannot read keystrokes or read other application's contents, unless you explicitly give permission to do so. This is the reason why applications that can do more, such as Alfred, require you to explicitly enable these rights in the Privacy section of the Security & Privacy prefpane.

Windows also has GUI isolation (UIPI), but it's a bit murkier. As far as I understand, lower-privileged applications cannot read events from higher-privileged applications.

https://msdn.microsoft.com/en-us/library/bb625963.aspx


Windows 10 is already recording your every keystroke and data - what's the difference here? Also, Canonical's snappy packages can kinda solve this problem.


Since about the time of Windows Vista, non-administrative windows aren't able to hook keyboard events for administrative windows.[0]

So while it's possible for a program to listen to keyboard events for other non-administrative windows (such as the password for a browser), it isn't possible for a non-administrative window to grab keyboard input for stuff like windows password prompts, or information typed into administrative console windows, etc.

[0] - http://stackoverflow.com/questions/3169675/how-to-use-setwin...


This great argument again...


Based on this thread, I'd say the problem is a lot of people haven't given them the time.

If you select your hardware, they work pretty flawlessly. Certainly right there with Windows or OSX or anything else. In a lot of cases you don't even need to be very selective, you just need to have a modern machine.


> Based on this thread, I'd say the problem is a lot of people haven't given them the time.
> If you select your hardware, they work pretty flawlessly.

Maybe the fact that I have to give them time is one of the major problems? You know how much time I invested in getting my Windows and OSX desktops to work flawlessly? Zero.


If that's your anecdote, here's mine: _Every_ time I install Windows I need some third party driver just to make graphics and networking go. Then I need third party software just to get a development environment or a decent shell.

Upgrading these is hit and miss, and it's more common than not that third-party drivers do not support newer versions of the operating system.

Linux, meanwhile, integrates all of these components. As long as you stay away from non-supported third party drivers it just works, and upgrades are painless. (Until some desktop developers change the GUI again, but that's another story.)


I have been using Linux for a long time, and while all the underlying criticisms of X and the desktop environments are valid, I have to concede things have come a long way.

I think a modern Ubuntu 16.04 Unity desktop, for instance, is actually a bit of a revelation for long-time Linux users because it just works out of the box. I didn't have to install a single package or fiddle with anything for a change. It's fast, smooth, robust, works exceedingly well out of the box and, wait for this, it's even a delight to use.

It's another thing that I am a long-time Gnome user and wouldn't even have tried Unity because of the bad press it has got (which I now think is inexplicable), but it's been quite a revelation. I urge those who have not tried it yet due to the bad press to give it a shot and prepare to be surprised.

I compare it to my Windows machines and OSX laptops and do not feel a particular difference apart from preferences, and of course once you get into specific use cases like Adobe apps or gaming there is still ground to cover. But for a general productivity desktop with full acceleration I think it's there in many ways. If you need a rich ecosystem of dev tools it goes to the top. There is of course always scope for improvement architecturally and otherwise, and I think that is happening with Mir, Wayland and Vulkan.


I do a Windows reinstallation about once every second year, and I routinely get problems.

Network drivers are almost always missing. Looking at the device manager, about 5-10 different devices fail to auto-install, all parts of the motherboard. The default graphics drivers tend to "start", but are limited in refresh rate and resolution, and moving windows around shows a noticeable stutter until I install the official drivers from the graphics card manufacturer. Sound normally works without issues.

On Linux, the problems are almost the reverse. I have yet to have network problems on a fresh Debian installation. Graphics is an all-or-nothing deal, which means either the X server starts up normally or it refuses to start at all. Sound is normally a pain, but seems to have had better default behavior in the last 3 years or so.


> I do a Windows reinstallation about once every second year, and I routinely get problems. Network drivers are almost always missing. Looking at the device manager, about 5-10 different devices fail to auto-install, all parts of the motherboard.

What are you reinstalling, Windows XP every time?

Newer versions of Windows are getting better at finding and installing sane drivers for the hardware. It’s still not perfect, Windows Update doesn’t have all the latest drivers and OEMs ruin everything especially on laptops, but these days I install the latest Windows and everything usually just works.


Windows 7, on a desktop machine, and Windows Update doesn't really work without a functioning network connection.

But if you don't believe me, and choose to believe those reporting issues with Linux, just do a Google search on Windows install and network issues. There are plenty of people reporting the same issue. To cite Windows' own support page:

"If Windows can’t find a new driver for your network adapter, visit the PC manufacturer’s website and download the latest network adapter driver from there. If your PC can't connect to the Internet, you'll need to download a driver on a different PC and save it to a USB flash drive so you can install the driver on your PC"


At this point, Windows 7 is 7 years old. Do an apples to apples comparison here, and try getting your same hardware working on a copy of Ubuntu 9.04 or RHEL 5.3. How hard is it?


Why should I only look at the exact date on which Windows 7 was released, rather than the date when a major release of Debian came out, say 2011 with Debian 6.0? Occasional problems with the X server, problems with sound (as said above), but no network issues - the network works out of the box. On Windows: issues with the network, issues with graphics (until a driver was installed), no problem with sound.

A few months ago I bought a gaming laptop on its release date. Guess what: the network worked without issue. The X server did not start until proprietary drivers were installed. Sound worked.

Windows' release schedule does not match Debian's release schedule. Each year after a release, the default drivers get more out of date, but people who go through the install process can still judge the general experience. The experience I have endured with Windows is issues with new motherboards and especially network drivers (and built-in RAID cards for the installer... good grief, that was a wasted afternoon trying to get the installer to accept the RAID drivers). Second to that, the fallback graphics drivers look crappy and are bad in every way except that they're slightly better than a basic command prompt.


We're literally talking about a 15-minute setup on a variety of *nixes on a wide range of hardware.

Intel chipsets and their graphics drivers have excellent support out of the box. That goes for everything from refurbished Thinkpads to Chromebooks to used Macbook Pros to new desktops.


Are you comparing like with like? Your Windows & OSX desktops were presumably preinstalled, and in the same price brackets too?


I run a Linux desktop for work (typing on it right now) as a dual-boot with my Windows setup. It is a "modern machine", at least if an i7-6700K and a 980Ti are modern, and I couldn't get X to start with Ubuntu 16.04 LTS. And yet it did with Ubuntu 14.04, which is bizarre. My time is valuable enough that I didn't debug it further than that--if I wanted to light my time on fire I'd just go set up a Hackintosh and have a better desktop experience, so for now I'm getting by with 14.04. But that sucks profoundly; it was my first attempt at a Linux desktop in probably three years and I was pretty disheartened by how things have gotten worse for my use cases in that time.

"If you select your hardware," you can definitely have a decent time. But I don't know many people for whom the operating system is more important than what you can do with it, and part of "what you can do with it" is "use your hardware."


That is bizarre. I don't use a lot of Ubuntu.

Having had an issue with Ubuntu and Nvidia in the past myself, I'd suggest googling NOMODESET and setting it at boot, which should let you boot into X/Unity and get the latest drivers.
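
If it helps, on a stock Ubuntu/GRUB setup that roughly amounts to the following (a sketch, not tested against your exact configuration): add nomodeset to the kernel command line in /etc/default/grub and regenerate the config, e.g.

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
    sudo update-grub

then remove it again once the proprietary driver is in place.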

> But I don't know many people for whom the operating system is more important than what you can do with it, and part of "what you can do with it" is "use your hardware."

Absolutely. But if OSes aren't directly equivalent (and the hackability of a nix gives it more power than Windows can ever have), then it's worth sorting out those hardware issues (as frustrating as they are).


> Having had an issue with Ubuntu and nvidia in the past, you might want to google NOMODESET and setting it at boot.

Thanks; I later ran into something that hinted at that. Frankly, though, at this point I don't care. I have something that works and will be patched until 2019. I don't care about my desktops. Every second I spend debugging something stupid on a desktop is a wasted second. This is bad for me and I resent it.

> and the hackability of a nix gives it more power than Windows can ever have

Ehh--if you have to use that "hackability" to get something that is minimally usable, that's kind of a push. (Or, when you consider OS X, a serious negative, because the only thing I have to do to get OS X to where I want is install Homebrew and a short list of packages, none of which require configuration.) I don't care about desktop environments or tiled window managers, the extent of my interaction with my WM, which I could not name, is a sort-of reimplementation of Aero Snap. Again, if I wanted to throw a bunch of time away on an operating system, I would set up a Hackintosh and actually be able to use Photoshop. Which, in keeping with the theme of "Linux desktops are a fractal of unusability", would be a significant improvement. I tried to avoid a reboot and use the GIMP yesterday for something. I think I need counseling now. I ended up using Pinta, whose layered image format ("OpenRaster", which wants to be a standard but it seems like nobody uses it?) is so bonkers and edge-case that ImageMagick doesn't even support it, to say nothing of Photoshop or even Paint.NET.

It turned out, kind of to my surprise, that Linux on the desktop offers very little to me as a Linux developer, sysadmin, and long-time entrails-reading "power user". That's pretty damning, in my book.


> OpenRaster

Another ocean boiled, courtesy of the Free Desktop Project.


Yeah...I went down a pretty bad rabbit hole trying to figure that one out. I mean, XCF already exists? And application support is marginal, rather than nonexistent.


I'm pretty sure the only reason the initials are "FDP" and not "NIH" is because somebody else came up with the latter.


Without a bug report, nobody can tell if your case is unique or general. You spend time ranting here, but with a bit of additional time you could have opened a bug report with the appropriate information and maybe helped people running into the same problem.


I think it should be obvious that I don't have a reproducible case anymore; this was a couple months ago now, after I built a desktop. Nor am I interested in expending multiple hours creating one, because that doesn't benefit me--my stuff works now, if suboptimally. I couldn't leave it in that state then, because I had work to do with that hardware; I couldn't leave it in a trashed state just in case somebody had questions and needed to autopsy it, now, could I? Expending more time than I did would be better served just biting the bullet and setting up a Hackintosh so I have an environment I like, rather than tolerate.

And this isn't a "rant". I promise you, when I'm ranting, you will know.


> if I wanted to light my time on fire I'd just go set up a Hackintosh and have a better desktop experience

As I understand it, installing macOS on non-Apple hardware is a license violation. Your employer is okay with that?


I'm self-employed, so yes. I've got four grand of Apple gear in my office, and if they sold an xMac I'd buy one tomorrow. I don't consider it to be a moral question, and in practice Apple doesn't seem to really care.


I work for a software company and it's pretty strict about honoring license terms because we want our customers to do the same.


That's a fair position. From mine, Apple gets its vig from me in plenty of other ways. There's zero pirated software, music, videos, etc. in my house or my business; giving Apple another channel through which to sell me stuff, from developer licenses to apps, doesn't trouble me.


They're Linux.

How are you supposed to run Adobe InDesign on a Linux desktop?

Or any other useful application, really.

Linux should just kill off its desktop. Mac OS X won the Unix desktop wars, and has the expected use models of a desktop, with a proper modern GUI API as well, instead of the ancient X hacks.


> Linux should just kill off its desktop. Mac OS X won the Unix desktop wars, and has the expected use models of a desktop, with a proper modern GUI API as well, instead of the ancient X hacks.

Mac OSX won the "desktop wars"? That's not even funny - it's one of the most awful DEs. You people clearly have no experience with any OS other than the one from your favorite brand. "Hacker news"... more like "Noob Army News".


> How are you supposed to run Adobe InDesign on a Linux desktop?

With wine...

> Or any other useful application, really.

You search in the software center or in your menu and start the app. Simple. FUD elsewhere, apple-noob.


We've banned this account for repeatedly violating the HN guidelines. If you don't want it to be banned, you're welcome to email hn@ycombinator.com. We're happy to unban people when there's reason to believe that they'll only post civil and substantive comments in the future.


Please stop disrupting conversations.


You have a longstanding pattern of abusing HN, including (I believe) with multiple accounts that we've had to ban in the past. We've given you many warnings and requests to stop. Since it seems you can't or won't stop, I'm banning this account as well.


I've never heard the opposite. I switched to OsX 4 years ago after being completely unhappy and irritated with Windows for over 12 years. Now after months of using Ubuntu (unity, xfce) and Mint (Cinnamon) I'm convinced that OsX is best for average (and power) desktop user by a huge margin. Although I would be much happier with something Open Source.


You can hear the opposite from me then. I have an OS X laptop and a desktop running Gnome (Fedora) and I find Gnome far better. Originally, I didn't like Gnome 3, but after I used it a lot, I got used to it; I added some extensions, and now I prefer it. In my opinion:

* Finder is garbage (alt-tab is broken and non-configurable. I can't figure out how to convince it that the built-in display is always the primary display. It seems to struggle with NFS mounts, which Gnome doesn't at all. I've had just as many, or more, issues with OS X and projectors as I have with Gnome). Where is the centralised location for Finder extensions to tweak it to my preferences?

* The default command line tools that come with OS X are old and basic (reminds me of Sun's tools) and suffer a death of a thousand paper cuts (e.g. top defaults to sorting by pid, lol; they still ship bash 3.2.57, locales are all fucked up).

* brew, compared to dnf or apt, is not good. But that's Open Source people trying to shore up OS X deficiencies so I won't call it "garbage". (F)OSS people put in a lot of effort to keep it running and they deserve praise.

* The programs from Apple are all terrible. I literally don't use any except Safari sometimes since it allegedly doesn't use as much battery as Chrome or Firefox. I guess I sometimes copy paste stuff into Textedit and I sometimes use Calculator.

It's all certainly usable but there is no consensus that OS X is the best. YMMV. My feeling is that each time there is an OS X release, it jumps ahead of Gnome; but Gnome usually trundles along and surpasses it and maintains a lead most of the time.


> Finder is garbage

Try PathFinder: http://www.cocoatech.com/pathfinder/

It can be used for free until you decide to buy it.

> alt tab is broken and non configurable

Cmd+Tab? What exactly do you find annoying?

Try Mission Control (F3, or set a mouse hot corner in System Preferences -> Mission Control) to quickly see all windows (for all processes, or Ctrl+F3 for the current app, Cmd+F3 for the desktop) and just raise the one you want.

Also try TinkerTool: https://www.bresink.com/osx/TinkerTool.html

> brew, compared to dnf or apt, is not good.

Try nix: https://nixos.org/nix/

> The programs from Apple are all terrible.

Which ones, and how, exactly?


>Cmd+Tab? What exactly do you find annoying?

Cmd-` to change between windows within an application is a behaviour I don't like; I prefer Cmd-Tab to cycle through all windows. Gnome defaults to the same thing as Finder, but it's possible to change the behaviour.

>Try nix: https://nixos.org/nix/

I've taken a look at guix and it's promising.

> Which [of Apple's programs are terrible], and how, exactly?

Well, since I stopped using a lot of these programs where possible, my opinions may be out of date. But QuickTime Player doesn't play most files. It's slow. It hangs. It's all around worse than VLC and mpv. And you used to have to pay to upgrade it to do things that other players do for free. And why can't it play DVDs? Or can it? If so, why is there a separate DVD Player? (Rhetorical questions; I don't care since I don't use either program).

App Store is outrageously slow to do anything. 14 seconds to check for updates? wtf? (A server backend issue and not a GUI issue, but part of the UX.) I had an Xcode update fail at downloading the 3 or 5 GB ball of mud. Instead of resuming the download or checking the hash of the file to make sure it wasn't corrupt, it just tried to download 5GB again.

Xcode takes up 5GB and seems to be required for some particular development work. It's not really appropriate for MacBook Airs with their small SSDs. So I just don't do that work or try to use a Linux VM.

Facetime just doesn't work 3/5 times.

Preview is alright for reading PDFs, but you can't look at an image in a directory and press a hotkey to see the next image in the directory (at least, I haven't seen how); eog does this.

Mail is an absolute pile of garbage. The threading is confusing as hell. It's soooo slooooooow. Then it hangs when pulling in sysadmin-style alert folders (with thousands of mails). Deleting mails is also very, very slow (i.e. cleaning out said mailbox takes days). Instead of pushing operations to a background thread where you can see the progress, like in Thunderbird or whatever, it just beachballs. And for some reason, if you want to attach Calendar to an online account you have to do it through Mail (wtf), and if you don't want Mail to be used for the mail side you have to configure the account correctly. Some people may be tricked into subjecting themselves to the pain of using Mail. :(

With Mail being so terrible, one ends up using Thunderbird or Outlook which have Calendars integrated. So Calendar becomes superfluous. Which is a shame because it works alright with online calendars, but it doesn't seem to have the integration to help plan anything with people.

I don't understand why Notes, Reminders, and Stickies all exist. Reminders should be rolled into Calendar. Notes allegedly integrates with Google but I don't see anything from my Google Keep account in my local Notes hierarchy. And is Notes supposed to compete with OneNote? They've a lot of catching up to do.

Then there are a lot of programs that I haven't opened in a decade since they used to be terrible (iTunes can't even play ogg out of the box, wtf) or I just don't have a use for (photobox, Game Center, iMessages). I used to have Pages and Numbers but I don't remember being impressed with them. From what I read, I haven't missed anything by not using them. But if they're free now, why aren't they installed by default for people who might want to write a document?

And if I've signed in to the App Store with my iCloud ID (which needs a credit card to get the free updates, wtf!?), why is there no single sign-on? Why do Game Center and FaceTime and iMessages prompt me for iCloud credentials? I guess so I can sign in with different accounts (corp vs. personal?) But Keychain Access doesn't prompt me. So maybe the keychain isn't backed up automatically to the cloud... :(


I used Linux on the desktop from 2001-2007 or so. Switched to OS X after that. It's no contest: OS X is smoother, more consistent, and lower maintenance. Linux desktops seem to have peaked with GNOME 2.x. I'm not even sure what's going on with Unity.


9 years is an incredibly long time in tech.

Aside from a lot of other very good WMs, there's always MATE if you really like GNOME 2.

http://mate-desktop.com/

Even better, none of those WMs will go away just because Apple decided to change things.


MATE is better than OS X desktop, at least for me.


I tend to agree. My development workstation runs Linux (and it's fine, but on that machine I only need Sublime, a terminal, Chrome, and IntelliJ--anything creative or entertaining I do on my laptop or in Windows, respectively) literally only because I'm not buying a Mac Pro. It works, sort of, after spending a nontrivial amount of time kludging together decent replacements for common functionality (xsel garbage for pbcopy/pbpaste, etc.); it probably helps that I already install the GNU coreutils on my Macs because I know the GNU tools better.
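
(The xsel kludge, for the curious, is roughly a pair of aliases along these lines, assuming xsel is installed:

    alias pbcopy='xsel --clipboard --input'
    alias pbpaste='xsel --clipboard --output'

which mostly works, modulo X's primary-vs-clipboard selection weirdness.)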

In retrospect, a Mac Pro would probably not be that much more expensive than the time I spent getting this fairly minimally demanding environment set up, though on the other hand I did re-learn a decent bit of stuff about Linux in the process. On the gripping hand, things related to X aren't particularly important to my life, so that's kind of a push.


I agree that GNOME and Unity have jumped the shark — I think that I started to notice when GNOME got rid of xscreensaver and replaced it with a screen blanker, with an unfulfilled promise of adding screensavers back someday (cf. https://mail.gnome.org/archives/gnome-shell-list/2011-March/...).

Meanwhile everything went down the path of compositing and otherwise burning way too much CPU to do not a whole lot extra.

I switched over to a tiling WM years ago, and have been extremely happy since. It does exactly what I want; I can connect to a running instance with my editor and reconfigure it, as well rewrite it while it's running. It's pretty great.


Then I'd encourage you to try KDE. 4, not 5; 5 is still being developed.

I don't know what's going on with the Gnome guys, but KDE is flat-out the best desktop manager I've used, and I'm including Windows and OS X in the comparison. It's not without glitches, but the glitches are mostly of the "Google can't be bothered to make Chrome cooperate, and no-one has updated Emacs' DM support in the last ten years" sort.


As a long time Mac user I find KDE hard to enjoy. The UIs of KDE apps are almost universally overpacked, messy, or lopsided (one side of the window has its controls crammed together while the other has awkward white space). It might sound like a silly gripe but it really bothers me. The design of apps from GTK+ desktop environments (GNOME, Cinnamon, Pantheon) generally feels much better, but they have their own problems.


> Linux desktops seem to have peaked with GNOME 2.x. I'm not even sure what's going on with Unity.

Gnome 2.x is a dead-end - it was buggy and ugly. Gnome 3.x is far better. Unity is just another canonical-outrage.


I use Mate (Gnome 2.x fork) daily. Gnome 3.x is ugly and buggy.


Would you mind explaining what you mean by outrage? For productivity, Unity beats any Gnome 3 setup I've tried so far. And we're still not talking about the maintenance/security nightmare that is the Gnome 3 plugin system.


By "outrage" I meant that Canonical developed it because they didn't like GNOME Shell (they like the innovation thing).

> For productivity, Unity beats any Gnome 3 setup I've tried so far.

Unity has problems with multiple monitors, consumes more RAM and CPU and is also hard to customize.

> And we're still not talking about the maintenance/security nightmare that is the Gnome 3 plugin system.

But we should talk about it if we're here...


> By "outrage" I've meant canonical has developed it because they didn't like gnome shell(they like the innovation-thing).

After watching the "we're the only ones who know best, so shut up" antics of the Gnome developers, I can understand that Canonical got cold feet.

> Unity has problems with multiple monitors, consumes more RAM and CPU and is also hard to customize.

In my experience, setting up multiple monitors was much less of a hassle under Unity than under Gnome. I don't have any recent data on resource use, but I wouldn't run either Unity or Gnome on a low-memory setup. Firefox and Chrome dwarf any RAM used for compositing anyway.

>But we should talk about it if we're here...

Ok, gladly. In Gnome 3, a lot of functionality comes from extensions. Even changing the theme (from a black top panel) needs an extension. Installing extensions is done over the web using your browser (this works to a varying degree out of the box). I don't know of any recent changes, but about a year ago I had a look under the hood, because I wanted to make my own extensions, and was shocked by how they work.

First of all, there seem to be pretty much no integrity checks, signing, or hashing to prevent malicious extensions. That's pretty much a no-go if you want to use Gnome in any kind of industrial setting (you will have to maintain them offline and manually at the file system level, sidestepping the supported way).

My second gripe is the stability of extensions. Since you're downloading from a website, the currently offered version may not fully support your slightly outdated Gnome install. However, if you keep your Gnome up to date, expect random failures of your extensions.

From http://lwn.net/Articles/460037/ about the reasoning behind sidestepping distro packaging (emphasis mine): "The second reason is that extensions are not working with a stable API, so have very specific version requirements that don't really fit into the normal assumptions packaging systems make. If a packaged extension says it works with GNOME Shell 3.2 only and not with GNOME Shell 3.4, then that will likely break the user's attempt to upgrade to the next version of their distribution. We'd rather just disable the extension and let the user find a new version when it becomes available."

So, you just updated your Gnome and your productivity extensions fail. Now you have to search for replacements and, should you find them, configure them anew. Sometimes extensions just randomly fail. This was the main reason I finally gave up on using Gnome for work.

And speaking of stable APIs: another shock was to see that extensions don't operate against a plugin API but are basically Javascript code smudged into the existing running code. This makes gauging the effects one extension has on another pretty much impossible.

My final conclusion regarding Gnome 3 was that it is a wobbly work in progress, less configurable than Compiz (there just aren't that many extensions to choose from in the end), and based on questionable design principles and taste. It's okay for hobby use, I guess. But I have yet to find a distro that provides as polished a DE setup with Gnome as Ubuntu does with Unity.

Don't get me wrong, I want Gnome to be great, since many distros use it as the default DE and I want an alternative in case Canonical mucks up Unity with their Mir transition and their focus on smartphones. There's also Cinnamon, which I kind of like, but it has the same problems regarding extensions as Gnome. I will give KDE a closer look in the future.


> After watching the "we're the only ones who know best, so shut up"-antics of the Gnome developers, I can understand that canonical got cold feet.

If only we could exile the GNOME devs, Canonical & the systemd devs to a desert island, the state of the Linux desktop would be … well, probably not as good but at least it'd be a much more collegial community.

To get back to the original topic: the attitudes of the GNOME devs, Canonical & the systemd devs reminds me of that of the OpenBSD devs, with the exception that the OpenBSD guys are generally right, just socially inept in how they convey their message.


> If only we could exile the GNOME devs, Canonical & the systemd devs to a desert island, the state of the Linux desktop would be … well, probably not as good but at least it'd be a much more collegial community.

I wouldn't judge them that harshly, but the kerfuffles had the effect of splitting the Linux community into those who embrace progress and those who shun it. Sadly, most of the experienced users went the latter way because the progressive path was full of cranks.

> To get back to the original topic: the attitudes of the GNOME devs, Canonical & the systemd devs reminds me of that of the OpenBSD devs, with the exception that the OpenBSD guys are generally right, just socially inept in how they convey their message.

Most of this, in my experience, comes from sticking to a very strong opinion that is heavily based on ideals. The further these ideals are from the real, existing world, the more caution should be taken when implementing them. Otherwise you will be placing a huge turd on someone's desk. During work hours. On a deadline.

I think this is the main problem with the attitude that the Gnome and systemd devs have. The Canonical devs at least took their ideals from something closer to a working model (OS X), and they were (I assume) motivated by pragmatism.

The OpenBSD devs probably base their ideals more closely on their own experience. That makes them more likely to be right.


Also, OpenBSD devs are happy staying in their sandbox, rather than trying to turn every sandbox into their sandbox.


I've had a different experience:

- random failures to mount network shares,

- drag-and-dropping files between Finder windows suddenly stops working,

- the file rename edit widget appearing in a random part of the screen.

These are bugs that should not appear in the supposedly best desktop OS.


I can recommend using Forklift. It has its own share of bugs, but you don't get the freezing and general crappy experience of Finder when accessing network shares :)


> Now after months of using Ubuntu (unity, xfce) and Mint (Cinnamon) I'm convinced that OsX is best for average (and power) desktop user by a huge margin.

I recently set up Ubuntu and Mint computers for some folks, and can report that Mint is not bad, but Ubuntu is dreck (seriously, the change-user-password dialogue hangs: do Ubuntu users never change their passwords‽).

I've been running Debian with stumpwm for years now, and am convinced that this is the way of the future: a tiling window manager, extensible in a real language, capable of performing literally any task I ask of it. Most of the time, my main window is full-screen — that's odd to someone used to a classic desktop interface (as I used to be), but it's actually very much like a modern tablet or phone.


I use KDE and can confirm, unfortunately, that (at least on my setup) I have encountered many bugs pertaining to graphics. I also often get PulseAudio issues, like the sound skipping when the system is under heavier-than-usual momentary load. I can't run Chrome developer tools without it crashing inside the driver every few minutes. Mounting an NTFS partition works on the second try, every time. I've used GNOME previously, which was too memory-hungry and often too unresponsive to use.

I love Linux as a developer platform and that's why I'm staying here. But if I could give up the shell, package management, understandable system architecture and the like, I'd move to Windows in a jiffy. Its desktop works flawlessly on my PC.


I stuck with Linux for many years for the very same reason.

The solution that worked for me was getting an SSD.

Now, I always have a Debian VM working in seamless mode. Ctrl+Alt+T opens a terminal window, and all the Linux dev tools work without a hitch.

With an SSD there is absolutely no lag. Virtual desktops in Win 10 and snapping with Win+arrow keys eliminate the need for a Linux DE.

Plus you can use Adobe products, Visual Studio etc. at full speed, without hassle.


There is a VM that supports seamless mode for Linux?

Tell me more. Which one?


VirtualBox has a so-called 'seamless' mode, in which the X11 windows from a VM running on a Windows host appear as separate windows on the Windows desktop. However, they only appear separate; they don't have separate panel buttons and they are not directly reachable via Alt+Tab. One needs to switch to the VM first, then use a keyboard shortcut set up in the VM to access them.


If you Alt-Tab to the VM, you have to Alt-Tab again to switch between Linux programs.

Usually that's not a problem, because I only have emacs and/or a terminal session running. With the right Control key, you switch back to your host OS.

To switch to a Windows program from Linux you simply press RCtrl+Alt+Tab.

Virtual desktops in Win 10 are very handy while using a VM. Ctrl+Win+d opens a new virtual desktop. Ctrl+Win+Arrow Keys switches between them.


> I've used GNOME previously, which was too memory-hungry and often too unresponsive to use.

Maybe you should buy a dev computer instead. Gnome consumes far less memory than the average DE. I always try every DE at every release and as far as I've experienced gnome-shell and cinnamon are the best at customization/plugins/performance.

Edit:

> I'd move to Windows in a jiffy. Its desktop works flawlessly on my PC.

I'm currently at work and I have a Windows and an Ubuntu on VirtualBox - gnome-shell is pretty smooth, while Windows appends the "Not responding" text to every window's title, the search in the menu is much slower than it should be, and apps are often frozen - it is far from "flawless" for me.


I use MATE on Fedora. I see none of your bugs.


I'm sure that's of great value to him.


To me, it looks like a lie, because there's no information about hardware, distro, kernel, etc.

For example, PulseAudio has worked fine for me for at least 6 years on 6 notebooks and 2 workstations from various vendors (HP, Dell, Acer, Medion). But if it works badly under heavy load, then an obvious command will fix that: `sudo renice -n -10 $(pgrep pulseaudio)`.


Assuming that other people are lying because they have trouble with things you like is paranoiac behavior. The almighty Linux desktop is not important enough to lie about.


I am the leader of a Linux User Group in my country, so I am aware of the typical problems with the Linux desktop. The last problem with PulseAudio on a major distro that I heard of was years ago.


Linux on the desktop is still full of issues that were solved by Windows and Mac a decade ago. Fonts still look like crap (Ubuntu is an exception), and even with patches there are still problems with Java, old GTK libs, etc.

I couldn't care less about games, MS Office, and a lot of other things that are deal breakers for some, but I have tried Linux on the desktop and it is still lacking.


I agree that Linux on the desktop has issues but fonts isn't one of them. Linux has had excellent open-source fonts and font rendering for a long time.


My experience with fonts is the opposite; in Win 7 the letters are always tattered and show some color distortion around the letter edges, no matter what I do with ClearType settings, both in the browser and in other programs such as Outlook. In Debian with Xfce or Gnome, the letters look much better and smoother, with almost no color distortion apparent. I do not know if it is because of a different font or a different rendering (hinting) algorithm, but I have never seen text rendering on Windows look as good as on my Debian.


I'm right there with you, I've never had good luck with fonts on Linux. They always look fuzzy, instead of crisp and sharp like on OSX


> Fonts still look like crap (Ubuntu is an exception)

You can change/install fonts on Linux.

> and even with patches there are still problems with Java, old GTK libs, etc

What kind of problems?

> but I have tried Linux on the desktop and it is still lacking.

Can you tell us what you miss?


My latest foray involved Infinality patches on Fedora 23. Honestly, with the exception of Ubuntu, this is the best I've ever gotten with Linux. I think part of this is due to Google having released a lot of great fonts with open licenses (droid mono, e.g.). But still, they just aren't as good as Windows 7. And it's not because I'm just used to Windows. I rarely use Macs, but when I do, I'm more than happy with the fonts (they're probably better than Windows). In both, the display and sizes are consistent in all apps, but Linux has so many different toolkits that don't respect the default - Java, VLC, Chrome, etc. - basically like 50% of the apps I'm using most of the day.

My latest try with running Fedora 23 on my laptop outside of the VM seemed okay... until a recent Kernel / X update turned it into a hot mess - fan constantly running, processor overheating warnings in the logs, etc. Maybe it was my NVidia card or something. The problem is that I have a Dell Precision which is one of THE few models (along with Thinkpads) that are well supported on Linux.

But I just don't have time to deal with these things anymore. If Windows 10 doesn't clean up its act, I'll probably be moving to a MBP, even though I haven't had good luck with them in the past.

Honestly, I'm at the point where I'd pay serious money to RedHat or some other company (maybe Dell or Lenovo) to put out a well supported laptop / Linux combo: nice fonts, supported discrete video, working ACPI, upgradable parts, and no spying or trying to monetize the OS user nor dumbing down the interface a la Apple and MS.


And the hipsters are out in force...



