It's odd, because he says that GNU can't run without Linux (it can: see Debian kFreeBSD, which pairs a FreeBSD kernel with a GNU userland, or the numerous GNU ports to Win32, starting with Cygwin and MinGW). He also says that Linux requires GNU, which AFAIK is not true either (Android uses Linux but replaces the userland with its own Apache-licensed tools).
Not surprised people don't know that - Debian kFreeBSD is pretty obscure, and he had heard of HURD. And Android is still slightly a Linux fork, even if it is gradually converging, while the other non-GNU userspaces mostly aren't entirely GNU-free yet.
It's also not that uncommon to run a largely GNU userland on non-Linux-based distributions. Some of the Illumos distributions default to the GNU rather than the Solaris-heritage userland, for example (I believe SmartOS is one). And you can install most (all?) of the GNU userland on FreeBSD too, if you prefer it.
I suspect the high memory usage OP saw on Linux was a combination of:
- swappiness:
they said they switched swappiness to zero, but my guess is that by that time their mind was made up. Also, from the mention of swapping and of apps being slow to come back to life, I really think this was the main reason for their memory issues. For reference, high swappiness makes the kernel push application memory into swap to the benefit of caches. The default is 60, and in 12 to 15 years I have not seen a single case where that was a good thing. Do yourself a favour: make your first CM rule to switch swappiness to zero! (There's a sketch of how to check and set it after this list.)
- the myriad of small apps in a lot of modern Linux distros (pieces of systemd, pieces of GNOME (even on servers!), D-Bus, etc.). There are minimalistic distros out there which use a very small footprint
- the way memory is reported:
If you look at free, the first line shows free memory as what's not in use by apps, cache, or buffers. If anything, free memory on the first line is memory that the OS is wasting (not using). If you want to know what is available for use by apps, you have to look at the second line (-/+ buffers/cache).
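To make that concrete, here's a minimal Python sketch of how free's two lines relate, reading /proc/meminfo directly (the arithmetic mirrors what older versions of free print on the -/+ buffers/cache line), plus the swappiness check mentioned above:

    #!/usr/bin/env python3
    # Rough sketch: reproduce free's "-/+ buffers/cache" arithmetic from
    # /proc/meminfo and report the current swappiness. Linux-only; kB values.

    def meminfo():
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:                      # e.g. "MemTotal:  8048212 kB"
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0])
        return info

    m = meminfo()
    used = m["MemTotal"] - m["MemFree"]                    # first line's "used"
    available = m["MemFree"] + m["Buffers"] + m["Cached"]  # second line's "free"
    print("used (first line):  %d kB" % used)
    print("available for apps: %d kB" % available)

    with open("/proc/sys/vm/swappiness") as f:
        print("vm.swappiness =", f.read().strip())
    # To pin it at zero: put "vm.swappiness = 0" in /etc/sysctl.conf,
    # or run "sysctl -w vm.swappiness=0" for the current boot.

(Cached in /proc/meminfo also counts things like tmpfs pages, so treat the numbers as approximate.)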
He claims it saved him $3500, but I just read a story about a guy squandering a few grand in billable time screwing around swapping one POSIX OS for another.
You've put it better than I ever could. It's frustrating to see someone share an experience along the lines of "I earned X this weekend by exploiting a niche I completely ignored before and learned loads!", only for a commenter to inevitably come along and say "well, that's all well and good, but if you value your time at Y per hour you've actually lost money, not to mention the wasted opportunity cost of not using that time to become a rocket scientist and earning even more!"
Well, you can't have it both ways. If you (in the general sense, not you specifically) do a project for your own reasons, great. The problem is when you try to make an ROI case that only represents one side of the equation. If you don't want people to poke holes in your ROI argument, then avoid framing your projects in a way that creates opportunities for that discussion.
The biggest difference was memory usage: "FreeBSD is just too good at managing memory. My server earlier used to consume over 1 GB of memory for running PHP, MySQL and Nginx. Now, it doesn’t even touch 500 MB! It’s always less than 500 MB. Everything is just same, configuration, etc. Only OS changed"
Later in the article, he recounts desktop memory shortage issues under Linux and much improvement under FreeBSD. He even reports single applications like Chromium performing better, with no swapping.
One has to be careful when looking at memory usage to judge these kinds of things, though. Linux has a tendency to use memory unused by applications for its own purposes, because it's not being used anyway and it might as well do something useful with it.
Oftentimes this ends up being something like aggressively caching file data so that it doesn't have to write to disk very often. While this does get labeled as "used" memory, it can very quickly be freed up by the operating system by evicting this cached data back to disk if that memory is needed elsewhere.
This can make Linux appear more memory-hungry, but in actuality a better description might be that Linux is more effectively using the resources at its disposal. I'm not saying that this is what's happening in this case, but it's a possibility.
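If you want to see that reclaimability for yourself, here's a small experiment (a sketch, assuming root on Linux, since it writes the standard drop_caches knob; only sensible on a test box, as dropping caches throws away useful work):

    #!/usr/bin/env python3
    # Sketch: show that "used" memory holding page cache is cheaply reclaimable.
    # Requires root: writing 3 to drop_caches frees page cache + dentries/inodes.

    def kb(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])   # value in kB

    print("before: MemFree=%d kB, Cached=%d kB" % (kb("MemFree"), kb("Cached")))
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")
    print("after:  MemFree=%d kB, Cached=%d kB" % (kb("MemFree"), kb("Cached")))

Typically you'll see Cached collapse and MemFree jump by roughly the same amount, which is exactly why raw "used" figures are misleading.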
The main thing that matters is that when I execute a command it runs fast, and the OS is fast... As far as I am concerned the OS could be sending all my free memory to /dev/null. As long as performance and stability stay up, it can do whatever it wants...
They're less labels for purpose and more labels for lifecycle - data you'd consider "cached" can appear just about anywhere. From what I remember (corrections more than welcome; there's a sketch for inspecting these counters after the list):
Active is fully-fledged, in-use memory, mapped into one or more processes.
Inactive is where Active goes when it's less in-use. Cheap to reactivate, but relatively costly to free: can still be mapped into processes, and may be dirty (modified) and thus require writing to disk before being unmapped and cleared for reuse.
Cached is where lesser used Inactive cached data goes before it dies. More costly to reactivate, but cheap to free: No longer mapped directly into any process, and strictly consists of only clean (unmodified) pages that don't need writing back to disk before clearing.
Wired is pinned-down memory that can't be swapped out. ZFS's data and metadata caches are counted here, since it maintains its own (known as the ARC - Adaptive Replacement Cache, after the algorithm it's based on) instead of just relying on the traditional VM page cache.
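For the curious, these queues can be inspected via sysctl(8) on FreeBSD. A minimal sketch (the vm.stats.vm counter names below are the ones FreeBSD 10.x-era systems expose; I believe the separate cache queue went away in later releases, so treat this as illustrative):

    #!/usr/bin/env python3
    # Sketch: dump FreeBSD's page-queue counters in MiB via sysctl(8).
    import subprocess

    def sysctl(name):
        return int(subprocess.check_output(["sysctl", "-n", name]))

    page = sysctl("hw.pagesize")
    for queue in ("active", "inactive", "cache", "wire", "free"):
        pages = sysctl("vm.stats.vm.v_%s_count" % queue)
        print("%8s: %d MiB" % (queue, pages * page // (1024 * 1024)))

Note that ZFS's ARC will show up under wire, so a ZFS box with a warm ARC can look alarmingly "wired" while actually being healthy.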
The TL;DR is really: No idea what I'm doing here, so I might as well blog about it.
An application doesn't magically allocate less memory when running under a different kernel. The underlying idea must be that LLVM wins over GCC in a real-world scenario, which certainly would be interesting, but there is really nothing to suggest it here.
It's definitely not a deep technical analysis as it's not written by a kernel engineer, but it's also plausible. VM management and swapping can and do have implementation quality differences on this scale.
The kernel VM is all about heuristics. In a memory pressure situation, it's constantly discarding cached pages that it thinks are better used for other data, making decisions about whether to swap out some data to make room for more cache, etc.
If you look back at the history of Linux VM and swap-behaviour tuning and algorithms, there have been both large improvements and regressions in this area.
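If anyone wants to watch those decisions happen, a crude way is to poll the swap counters in /proc/vmstat. A sketch (the counter names are standard Linux; the 5-second window is arbitrary, stop it with Ctrl-C):

    #!/usr/bin/env python3
    # Sketch: watch the kernel's swap activity by polling /proc/vmstat.
    # pswpin/pswpout are cumulative pages swapped in/out since boot.
    import time

    def counters():
        with open("/proc/vmstat") as f:
            pairs = dict(line.split() for line in f)
        return int(pairs["pswpin"]), int(pairs["pswpout"])

    prev_in, prev_out = counters()
    while True:
        time.sleep(5)
        cur_in, cur_out = counters()
        print("swapped in: %5d pages, out: %5d pages (last 5s)"
              % (cur_in - prev_in, cur_out - prev_out))
        prev_in, prev_out = cur_in, cur_out

Sustained pswpout while free shows memory still available is the kind of heuristic misfire being described here.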
Absolutely, but the claim here is entirely different: "my application used less memory when running on FreeBSD". While within the realm of the possible, it is more likely that it was running with a completely different configuration, or that "memory used" changed meaning, or both.
I'm sure FreeBSD is probably better at memory management than Linux (considering his desktop issues), but wouldn't it make sense for the OS to nearly max out memory usage as long as applications can make use of it?
Oh come on! This is the second comment in this thread that seems to claim Linux is so special and glorious. FreeBSD definitely does this, maybe even 3BSD or 4BSD did this.
Functionality like this is basic OS design knowledge.
> Oh come on! This is the second comment in this thread that seems to claim Linux is so special and glorious. FreeBSD definitely does this, maybe even 3BSD or 4BSD did this.
The modern VM architecture was definitely introduced after 3BSD or 4BSD, since NetBSD only introduced a unified buffer cache in 2000 and is also derived from 4.4BSD.
While I respect other people's opinion of FreeBSD (I tried it several times), in my ~3 years using Ubuntu Unity (11.04-14.10) I never had low memory or unreasonable swapping (swap is at zero 99% of the time I check). My laptop has 3GB of RAM, and I was always impressed with how little memory my usual session (1-5 apps + Firefox/Chrome with 5-10 open tabs max) consumed compared to Windows: the average memory load was around 33% of the 2.7GB available, so ~1GB. For some time I was running VirtualBox with 1.5GB of RAM allocated to it, and I still didn't use all the RAM available, and swapping was minimal if not zero. My conclusion: I'm highly sceptical that Linux memory management is bad enough to justify giving up the ease of installation and the minimal configuration & tweaking Ubuntu needs after install.
To give a counterpoint, I frequently get into trouble with Linux 3.12.6. If I open more than a few tabs (say 70; with FF I can do an order of magnitude more) in Chrome, it maxes out on memory to the point of the system becoming useless. Funnily enough, it actually maxes out on swap space even though there is RAM available, and then starts thrashing like crazy. Lowering swappiness alleviates this somewhat, but not entirely. It's a box with 1 GB RAM + 1 GB swap. Not big by current standards, but more than plenty of RAM in my opinion, if it weren't for the tendency to write bloatware these days.
Yeah I could get some more RAM, but this box is a good testbed for software with a leaner streak, bloat does not excite me as much, sorry. Funnily enough this seems to offend a whole lot of people, as if I owe it to them to have more RAM on my m/c. It is probably worth it just for that amusement.
Definitely interested in giving *BSD a shot. The thing that has stopped me so far is the difficulty in sharing data between them.
EDIT @DanBC
> 70 tabs is not normal and 700 is just, well, weird.
So I have been told :)
Firefox manages it well enough though; I would assume Dillo would too. Don't want to hijack the thread with why I abuse tabs so, but I still contend that 1 GB is enough if the code is tight. 1 GB is a huge freaking load of memory if you think about it.
70 tabs is not normal and 700 is just, well, weird. It's great to run a machine with "low" amounts of RAM but you shouldn't then complain if it does weird things.
I'd be interested in what happens if you run 700 Dillo instances.
Sidebar, but everyone mistakenly thinks things that they do are therefore normal. I think the difference between "normal" and "common" is a little vague. If you define "normal" the same as "common" then we could do a study and see if it is normal or not (my guess is that it is not). However "normal" can also be defined as "not unreasonable", in which case I would agree, 71 tabs is normal.
70 tabs is only normal in the group of people reading HN. Other normal things for that group of people would be having a compiler installed on their OS. Would you describe having an installed compiler as normal for the general computer-using population?
I'm willing to bet that the vast majority of computer users don't ever have more than 5 tabs open at any time.
"Would you describe having an installed compiler as normal for the general computer-using population?"
Totally off-topic, but Yes :-)
.NET has a compiler built in (you may need the SDK to get the command-line version), many systems have a compiler for a shader language, and your packet filter may ship with one.
I really love FreeBSD and use it for servers, but I wouldn't use it for desktop. It lags behind Linux quite a bit (which itself lags behind proprietary OSes) in things like driver support. I've found it very hard to use with modern hardware.
This guy's issues aside, if you're interested in trying FreeBSD but would like a more turnkey experience, there is the PC-BSD distro, which has modern conveniences like a graphical installer and a preinstalled window manager.
Linux memory management is actually quite good. In particular the unified fs cache.
I would think this guy had weird settings/tweaks and/or more things enabled, since he ran Gentoo, but misconfigured swappiness.
The app's code is very close on both systems, and Linux doesn't magically need to allocate more memory. What changes is just the libc.
I applaud this move, but I do embedded Linux, so I'd need to check out NetBSD, which is typically the embedded BSD; my concern is about support for all those drivers.
Many people are leaving Linux because of systemd (among other issues); but are choosing FreeBSD because of ZFS, pf, geli, jails, dtrace, bhyve, etc. (I haven't used the latter two myself yet, but the rest are fantastic.)
In the end I've gained more than I lost, it's just that a catalyst was needed to get past the inertia of changing operating systems again.
> Many people are leaving Linux because of systemd
Do you have any numbers to justify this statement?
I also note that all the tools you mentioned are of interest mostly in a server context (of course if you manage servers you will happily use them on your desktop too).
My guesstimate would be based on the number of posts on the FreeBSD forum(s) and mailing lists I visit, where there has been a noticeable increase in the number of questions that mention leaving Linux due to the systemd debacle, or wanting something that's not trying to become Windows or an Xbox.
That's weird. I haven't kept up with the latest *nix gossip, but isn't the very principle of the system that it is open and fully configurable/modular/extensible? What does it really matter in the end what system you use if you can make whatever system your own? (serious question)
It matters when it "invades" your distribution of choice. Imagine you've been using a distro for years and grown to love it, and now systemd is forced upon you if you make the next major upgrade.
Right now it isn't really that bleak, because some distros still give you a choice, but people are worried that the choice will be sacrificed in favor of easier maintainability in the future.
> It matters when it "invades" your distribution of choice. Imagine you've been using a distro for years and grown to love it, and now systemd is forced upon you if you make the next major upgrade.
Like ELF, glibc2, egcs, devfs, hotplug (the old script-flavored version), udev, eglibc, etc.
I am mentioning these because all of them caused controversy with a vocal minority. It is evolution. None of these is controversial anymore. Some of them were replaced because they were bad ideas in hindsight (devfs).
By definition any fundamental part of the system (such as init or the C library) that changes is 'forced upon' the user.
Your quotes around 'forced upon' make it seem like it is not really being forced onto users. GNOME users, for example, are getting shafted now that systemd is a dependency for the GNOME DE.
"In layman's terms, the hardware interface is called Linux, while the rest of the part: the shell, core tools, etc are GNU.It's a piece from there, another from somewhere else and merging the whole thing into one collectively known as GNU/Linux. [...] In FreeBSD, the whole thing is a complete unit."
I think the OA will be quite happy with a mature systemd-based GNU/Linux OS. He seems to like complete units.
PS: 'Many' is a hard concept when the OS is freely downloadable and not particularly monitored. One hopes the refugees actually make donations/buy DVDs.
No. And I hope not.
I moved to FreeBSD for positive reasons: pf and ezjail versus iptables and the like.
It makes you feel Linux is still cool, but you want that simplicity and power back (jails vs. VMs).
I must admit Shellshock and the latest security flaws also made me aware of my love for bloatware. I find it sane to question my former choices.
No. This is _bullshit_, and people should really stop saying this.
A lot of components (OpenSSL? GCC? LLVM?) are in that repository merely for convenience but ARE NOT developed internally (except for patches to make them work, eventually).
This IS NOT the same as the "everything developed internally here, from syslog to dhcpcd" mentality of systemd.
One single repository containing many different components of the OS, which are not designed to be interchangeable. Basically the biggest complaint most systemd critics have ('monolithic!').
Monolithic may have a specific technical meaning in the context of kernels which is not opposed to modularity, but in general the two can be considered opposites. Monolithic simply means "of one piece", while modular means something has separate, changeable pieces. If you disagree, it would be more enlightening to tell me your definition instead of a list of exceptions, because right now I don't see why they are exceptions.
But they are in the same repo because they are tightly coupled. Some of the code is shared between kernel and userspace, for example, and things are versioned to match: the BSDs do not have the strict userspace compatibility guarantees that Linux does, so things do change (though there are compat shims).
On the other hand, with the BSDs, you can swap out nearly any userland component for an alternative and it keeps working. While various things share code, it's very rare that one feature depends on an entirely unrelated feature, which is extremely unlike systemd.
OpenBSD (newbie alert, I'm no expert): when following the -stable or -current branch in OpenBSD, the documents advise you to keep the kernel, src, and xenocara (X system) source trees in step with each other, and to only take ports or binary packages from the appropriate repository. You are also advised to compile the updated kernel first, reboot into the new kernel, and then recompile the src and xenocara trees. So although the individual programs are all separate, they are intended to work together as a whole, with common configuration settings &c, and the source is kept in a single CVS tree.
You can't mix and match (say) an Xorg taken from somewhere else and your own special cat program.
My understanding of the systemd project - especially later versions, with kdbus in the kernel and udev integrated in along with networking and the console - is that you may need to 'lockstep' your choice of kernel, systemd packages, and possibly the DE if using GNOME in order to have a functioning system. I can see advantages in this, but it does represent a considerable cultural change in the Linux world, which has previously been a little bit like a Lego set.
Which ZFS on Linux is not? I guess it depends on your definition of stable, given the upgrade from libzfs1 to libzfs2, but the richness of the Debian packaging system is just too good for me to pass up.
Given the licensing issues of combining GPL with CDDL code, ZFS will probably never be included in major distributions (assuming the license does not change), so they will never have the same level of integration that e.g. FreeBSD has in its installer.
Have you used other packaging systems? I've always liked ports (or portage in Gentoo) much more than apt. I'd go as far to say, I find apt infuriating compared to those.
Here we go again, someone telling us one OS is better than the other ^_^ Never listen to anyone saying that. Simply use the OS that works best for YOU, period. A few replies here were correct: Linux uses more memory because it works like a cache, so as not to use the HDD so much, and if you have enough memory, like 8GB+, you don't have to use swap at all.
> enough memory, like 8GB+, you don't have to use swap at all.
...depending on usage. We have 64GB+ machines at work that regularly go swapping. It depends on various factors: how quickly you want the OOM killer to kick in, how slow you are willing to let the system get as a result of swapping, and how much budget is available to buy machines that can take more memory.
Better suggestion: Be careful when listening to anyone making absolute judgments based on generalizations.
Some operating systems, just like some applications, are better for certain purposes (often because they were designed and built specifically to be suitable for those purposes). "You can do anything with anything" is a very idealistic oversimplification that's implied by the idea that no OS is ever better than any other.
> FreeBSD gave my computer a new life, otherwise I was nearly going to get a new desktop because of shitty performance. In other words, it saved me ₹35000+
No, it didn't save you any money. Delaying a purchase is NOT saving money.
It is saving: delaying a purchase that would be financed with debt until you accumulate the cash lets you save on debt interest. Alternatively, if you already have the money, you can invest it for a while and earn interest.
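A toy calculation (the 10% rate is invented purely for illustration):

    # Hypothetical: delay a ₹35,000 purchase by a year with the cash
    # invested at an assumed 10% annual rate.
    price, rate = 35_000, 0.10
    print("interest earned by waiting: ₹%.0f" % (price * rate))  # ₹3500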
Come on, the computer industry's existence relies partially on cyclic purchases for that exact reason. My neighbor got a brand new Sony laptop so she could do... everything I already do on my 2007 ThinkPad. And she can't, so she pays me to fix her system from time to time. It's not far-fetched to say that different OSes can and will save you money.
And you can do things like refresh (say) the one third of the desktop computers that need a higher spec (CAD, video &c) every two years and then 'hand down' the older boxes to the other two thirds in stages. That worked quite well in one college I worked at a few years ago.