When I read things like this, it makes me appreciate OpenBSD (and the rest of the projects under the OpenBSD Foundation) even more, considering that it is still actively and fearlessly maintained by a few individuals (fewer than 40), motivated mainly by their own enthusiasm and passion, investing their own time and money in it, just for the purpose of producing a bloat-free OS focused on security and correctness that can rival GNU/Linux in terms of performance.
Um... how do the ~40 OpenBSD developers make a living? I thought donations to FreeBSD were already tiny compared to Linux, and OpenBSD's seem to be even smaller.
I am amazed at how the BSD community continues to survive when Linux has sucked up virtually all the OS development funding.
FreeBSD's kernel offers better performance (on the server) compared to Linux, and the licensing model makes it more attractive to some industries. WhatsApp (server side), Netflix, and the internals of the PS3/4 all use FreeBSD code. OpenBSD is actively used in networking, as a firewall (where Linux's iptables is a mess) and in other security-oriented roles where stability is also crucial. Linux is popular because it became available to users before the others back in the '90s. OpenBSD's contributors, by contrast, are comfortable not implementing overengineered "features", with the sole purpose of keeping the source clean, stable, and with as few bugs as possible, as a UNIX system should be.
GNU stands for a philosophy of freedom, so GuixSD won't provide official repositories for installing proprietary software. Some users don't like that, even though they might be interested in the technological approach of the system.
GNU utilities are not only unsexy, they are bloated, messy, and prone to failure; the GNU implementations of standard UNIX tools (coreutils: grep, cat, tail, etc.) are not written with simplicity in mind.
But hey, after all, GNU is Not Unix. Those of us who really appreciate the UNIX philosophy still have OpenBSD, which is the only light in a world of chaos, in my opinion.
> GNU utilities are not only unsexy, they are bloated, messy, and prone to failure; the GNU implementations of standard UNIX tools (coreutils: grep, cat, tail, etc.) are not written with simplicity in mind.
I've heard people say how GNU code is bloated and messy many times before, but never that they're prone to failure. I've never had any failure myself with any GNU code. Can you give some examples of failures you've experienced?
Also, I'm looking at the coreutils source right now, and it's not as messy as I was expecting. true.c is only a pageful, 80 lines, many of which are there simply because of the license comment and the usage() function for --help. cat.c and tail.c also seem reasonably understandable. The biggest complaint I can make is that there are cases where spaces and tabs are mixed in the indentation, but I've long resigned myself to expect that in projects with more than one major contributor.
I do, however, think that glibc and gcc are pretty messy. I tried looking for the definition of fopen() in OpenBSD's libc and found it in less than 30 seconds by grepping. I still haven't found glibc's. gcc seems to rely heavily on its own extensions, because I don't understand what's going on here:
That looks like a function prototype in a function definition, but it seems to be an assignment, going by the next line. Then in toplev.c, we have:
int
toplev::main (int argc, char **argv)
{
That looks like C++, but the file extension is ".c"...
You know what? Never mind. Comparing the code for true.c and cat.c between GNU's coreutils and OpenBSD's source tree, I do rather like how clear OpenBSD is in its code. Damn. Sexy is a good word. Now I understand why people speak so well of it. I don't even need grep, the source file hierarchy is so clear. Looking back at GNU's true.c, I don't even understand half of what's going on in those 80 lines, and it turns out that true.c is also the source for false.c; false.c just does #include "true.c".
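From memory, the trick is roughly this (paraphrased, not the verbatim coreutils source), with true.c defaulting EXIT_STATUS to EXIT_SUCCESS when nothing else has defined it:

    /* false.c, approximately: reuse true.c wholesale, just flip the exit status. */
    #define EXIT_STATUS EXIT_FAILURE
    #include "true.c"

Clever, but it does mean the file you're reading isn't quite the program that gets built.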
TL;DR: I agree that GNU utilities are messy. I'm not sure about the bloated aspect, because I do like that the utilities have internationalized documentation built in, though that probably counts as bloat by OpenBSD's standards. And I wouldn't know about them being prone to failure, because I've never had one with them.
EDIT: Huh. I wanted to reply to Hello71, but there's no reply link under his post. Anyone know why? Anyway, yeah, I saw a comment in the file mentioning that over a line that referred to stdout. Can't check now because I'm away from the computer. I didn't really understand the reason though.
It is C++. The file is .c, but whatever. They use a lot of C++.
I agree with you, however. Having worked with the code, GNU relies a lot on macros and a lot of auto-generated code. The code is a big mess, impossible to tackle if you don't spend a huge amount of time on it.
A lot of symbols are generated through #defines and token pasting (X macros), so you can't grep for anything, for one.
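For anyone who hasn't run into the pattern, here's a toy sketch (my own example, not actual gcc source) of why pasted identifiers defeat grep: the definition of handle_add_expr never appears literally anywhere in the file.

    #include <stdio.h>

    /* X macro listing the "opcodes"; each handler's name is built by
     * token pasting, so handle_add_expr only exists after preprocessing. */
    #define OPCODES(X) X(add) X(sub) X(mul)

    #define DEFINE_HANDLER(name) \
        static int handle_##name##_expr(int a, int b) { return a + b; /* stub */ }

    OPCODES(DEFINE_HANDLER) /* expands into three function definitions */

    int main(void)
    {
        /* grep finds this call site, but not the definition above. */
        printf("%d\n", handle_add_expr(1, 2));
        return 0;
    }

gcc's real macros are far hairier than this, but that's the basic reason plain grep comes up empty.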
That reminds me; I wonder how the uutils project [1] is doing. While I still haven't gotten around to giving Rust a shot, I think their idea of reimplementing coreutils in that language has merit.
> I've heard people say how GNU code is bloated and messy many times before, but never that they're prone to failure.
Just have a look at the changelog for coreutils [0]. Sure, it's very long, especially if you're not following its releases, and sure, it's full of weird edge cases you might never have encountered (I'm certainly way too lazy to go digging for the rare bugs I stumbled upon years and years ago, but there definitely were some). But this, IMO, is a great illustration of how GNU (or rather, GNU coreutils) code is "prone to failure": mainly because it sometimes tries to do way too much.
Speaking of true --help, did you know that GNU true can exit non-zero? The exact way is left as an exercise for the reader :)
(If you're actually trying it at home, remember that "true" is virtually always a shell builtin. AFAIK there is no legitimate way to have the shell builtin true return non-zero. (Overwriting the command doesn't count :P))
I agree with you on 16/44.1 audio. I have a transparent DAC that can output 24/96, and in repeated blind tests I have failed to identify which is which, using files from the same source (a 24/96 song converted to 16/44.1). For audio production and mixing, 24/96 makes a lot of sense though, because of dynamic range.
> For audio production and mixing 24/96 makes a lot of sense though, because of dynamic range.
Obviously only the bit depth matters for dynamic range, and a 16-bit signal has 96 dB of dynamic range. That is more than enough for even the most dynamic of audio signals.
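For reference, the usual back-of-the-envelope (each bit buys you roughly 6 dB of range):

    DR ≈ 20·log10(2^N) ≈ 6.02·N dB  →  16 bits ≈ 96 dB, 24 bits ≈ 144 dB
    (add ~1.76 dB if you measure quantization noise against a full-scale sine)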
A larger bit depth is useful for lots of digital summing, as I mention here [1], and I assume that's what you're alluding to, but for most home applications nobody needs more than 16 bits for anything other than improving their noise floor (which is already borderline inaudible at 16 bits).
> A larger bit depth is useful for lots of digital summing, as I mention here [1], and I assume that's what you're alluding to, but for most home applications nobody
Yeah, that's why they said audio production and not home use. 96 dB of dynamic range sounds like a lot, but not when you're stacking 45 tracks. Also, compression and saturation later in the chain will further bring up that noise floor.
And you don't get that full dynamic range, because you want to prevent clipping in a recording, so you give yourself something like 12 dB of headroom below the maximum, and now you've lost a good chunk of your dynamic range. 24-bit recording lets you keep plenty of headroom at the top while also staying far from your noise floor.
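Rough back-of-the-envelope, assuming the tracks' noise floors are uncorrelated and at roughly equal level:

    summed noise rises by ≈ 10·log10(N) dB  →  45 tracks ≈ +16.5 dB
    96 dB − 16.5 dB ≈ 79.5 dB of usable range before any compression or makeup gain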
> Yeah that's why they said audio production, and not home use.
Yes, but the comment was specifically about dynamic range in audio production, not about higher bit depth's prime benefit of reducing artefacts in digital summing. The noise floor at 16 bit is -96 dB; if you're down there, you're doing it wrong.
> 96 dB of dynamic range sounds like a lot, but not when you're stacking 45 tracks. Also, compression and saturation later in the chain will further bring up that noise floor.
It's more than enough if you're doing proper gain staging - the number of tracks is irrelevant.
Any mastering compression of more than 3 dB of gain reduction is mostly excessive, but let's say 6 dB. So a final master will bring up the noise floor by 18 dB at most (if gain staging was done properly and, as you say, you left plenty of headroom for the mastering engineer - although clearly you wouldn't be able to use all the headroom). That's still a noise floor of less than -78 dB, but more realistically around -85 dB.
Saturation usually only adds harmonics, not gain (obviously it depends on the kit you're using).
> And you don't get that full dynamic range, because you want to prevent clipping in a recording, so you give yourself something like 12 dB of headroom below the maximum, and now you've lost a good chunk of your dynamic range. 24-bit recording lets you keep plenty of headroom at the top while also staying far from your noise floor.
Myth. Watch this [1] and you'll realise that there's plenty of range with 16 bit. People were producing with 16 bit for years perfectly fine. Before that, high-end studio tape machines (like the classic Studers or Telefunkens) were equivalent to about 14-16 bits (based on their noise floor), and we had decades of music produced and recorded on that format.
I am not saying that 24 bit isn't better than 16 bit. Of course it is; there are tangible benefits in summing, and it gives you more headroom to work with. But I'm saying it's a myth that there's not enough dynamic range, or that you're somehow just bumping along above the noise floor. If you are, then you're doing it wrong.
Another much-appreciated advantage of 24 bit is that it allows so much extra headroom: when recording, levels can be set so the peaks hit maybe -18 to -12 dBFS, which helps eliminate any chance of clipping in unexpected circumstances.
> Yes, but the comment was specifically about dynamic range in audio production, not about higher bit depth's prime benefit of reducing artefacts in digital summing.
But audio production often involves lots and lots of digital summing. I don't see how that's not related.
> Any mastering compression of more than 3 dB of gain reduction is mostly excessive, but let's say 6 dB. So a final master will bring up the noise floor by 18 dB at most (if gain staging was done properly and, as you say, you left plenty of headroom for the mastering engineer - although clearly you wouldn't be able to use all the headroom). That's still a noise floor of less than -78 dB, but more realistically around -85 dB.
That's only counting mastering compression.
What about NY-style parallel buss compression on drums, where you want to hit 20 dB of gain reduction as an effect, which will inevitably add some saturation too? 6-10 dB of compression on an individual drum track is common too if you're doing metal or electronic music. So you've got 10 drum tracks, each with several effects, plus a parallel track with distortion/compression.
> Saturation usually only adds harmonics, not gain (obviously it depends on the kit you're using)
If it's not adding amplitude to your signal it's almost certainly bringing up the noise floor. Saturation generally works by applying gain until the device/plugin distorts. Just because amplitude isn't going up doesn't mean gain isn't increasing.
> Myth. Watch this [1] and you'll realise that there's plenty of range with 16 bit. People were producing with 16 bit for years perfectly fine. Before that, high-end studio tape machines (like the classic Studers or Telefunkens) were equivalent to about 14-16 bits (based on their noise floor), and we had decades of music produced and recorded on that format.
This is true when you're operating on friendly signals, like a sine wave from a signal generator at a fixed amplitude. Not so nice when you're tracking with 20 dB of headroom and the drummer somehow manages a flam 24 dB f*cking louder than everything else, which clips the overheads (this actually happened using 24 bit, but now with crazy dynamic drummers I give myself TONS of headroom because there's no reason not to). At least when tape "clips" it doesn't ruin the take.
I do agree with you that 16 bit can be good for recording (and for home audio I agree you'll never be able to tell the difference), but I don't believe there are literally zero differences between 16 bit and 24 bit, and hard disk space is so cheap that there's no reason not to use the extra bits.
If you are into headphones, you could start hanging out on the subreddit. It's a big community, and you'll see many of the current technologies. It can get really expensive (the new Sennheiser closed-back HD820 has a ~$2k price tag, ouch!), but you could certainly call yourself done with a $400 setup, including DAC and amplifier.
EDIT: You could also try IEMs; Etymotic makes some of the most clear, coherent, natural, and neutral-sounding ones. Their TOTL (top-of-the-line) is the ER4 line, which has an MSRP of $350, but you could get the same value with the ER3 line at $179.
Here I am too. I stick to dwm, though I'm currently migrating to wmutils for portability, and no compositing on my side. Firefox is the only GUI program I use (besides st), and it will probably remain that way, since the web has become such a chaotic and complex environment, unusable without a major web browser.
This GRUB bug you are talking about is not a kernel problem, though. On a side note, I'm going to read up on the links you provided, as I want to see whether encrypted root partitions could also be compromised; I suspect not.
This looks good; I think it would be worth giving it a shot. My main problem is XML: it is always incredibly slow to parse, and there is no accepted, well-defined way to express the content. S-expressions would do a much better job; perhaps a combination of C parsing Scheme definitions of UIs could work.
I'm not sure why the OP thinks that parsing XML is any slower than any other data representation method that allows arbitrary nesting.
Love SAX, by the way. I just switched over one of our sites to use a SAX-based parser because the default Rails .to_hash was choking on 100 MB+ XML files.
I don't see libxml2 being slower than any other markup-language parser. Compared to how slow some JSON parsers are (and JSON is an easier language to parse), libxml2 is actually quite fast.
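If anyone wants to try it, here's a minimal sketch of the streaming approach using libxml2's xmlReader interface (the file name and the element counting are just placeholders); memory stays flat even on 100 MB+ documents because no full tree is built:

    #include <stdio.h>
    #include <libxml/parser.h>
    #include <libxml/xmlreader.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s file.xml\n", argv[0]);
            return 1;
        }

        /* Pull nodes one at a time instead of building the whole tree. */
        xmlTextReaderPtr reader = xmlReaderForFile(argv[1], NULL, 0);
        if (reader == NULL) {
            fprintf(stderr, "cannot open %s\n", argv[1]);
            return 1;
        }

        long elements = 0;
        int ret;
        while ((ret = xmlTextReaderRead(reader)) == 1) {
            if (xmlTextReaderNodeType(reader) == XML_READER_TYPE_ELEMENT)
                elements++;            /* do real per-node work here */
        }

        xmlFreeTextReader(reader);
        xmlCleanupParser();

        if (ret != 0) {
            fprintf(stderr, "parse error in %s\n", argv[1]);
            return 1;
        }
        printf("%ld elements\n", elements);
        return 0;
    }

Builds with something like: cc reader.c $(xml2-config --cflags --libs)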
I'm not a scientist by any means, but ever since childhood, when I was first exposed to the fact that life on Earth needs oxygen (and many other things, in extremely basic terms) to exist, I've been wondering whether there are other distant planets where life has bootstrapped itself from completely different (or maybe opposite) components and conditions. I'm sure someone with more knowledge of biology and physics than I have could offer a good explanation or thought.
I think the problem there is simply the fact that it isn't knowable. If there is a way for life to be that different, how do you look for it?
My first statement is likely stronger than it needs to be. My main question back to you is, how do you look for something that you imagine may exist, but couldn't say how?
> My first statement is likely stronger than it needs to be. My main question back to you is, how do you look for something that you imagine may exist, but couldn't say how?
I certainly agree with you; thus, searching for life similar to ours is the only feasible way to spend time on deep-space research. We cannot look for something different, because we don't know what to look for.
I know. I mean, I do this. I go to a lot of effort and expense to play music for small audiences, solely for the pleasure that they're there to listen to my songs. Which is why it's so sad when someone is busy reading social media rather than listening.