It’s not unusual to do it in C or C++, just because the dependency management story sucks so badly (there are a lot more options than there used to be, but still no single de facto standard like pip or cargo).
It is well known that committing dependencies is a bad thing. See sibling posts. It's worth considering what ignoring that best practice gets you.
For example, you've been handed a bug. The customer is important and is running a version of your code from three years ago. You have source control, so you check it out and try to build it. What stuff might you expect?
1/ It uses docker and the image isn't online any more. I've had this one.
2/ One library dependency you used to use has been deleted from the internet. Also had this.
3/ Another dependency is still available, but it uses a dependency which isn't. Not yet.
4/ You managed to gather all the code and it refuses to compile with a modern toolchain
5/ As above, but this time the modern toolchain produces a different program than it did last time
6/ Another dep has dubious ideas of semver and the current copy doesn't behave like the old
7/ Actually anything using semver is considered deeply suspicious in itself
That's off the top of my head. I think there's probably a long list of variants on "the source tree isn't sufficient information to recreate old versions". The reliance on old compiler bugs feels particularly realistic to me, but then we C++ people mostly check in our dependencies. I've definitely checked out npm projects from a few months earlier and discovered they don't run any more.
Compare to the silly, paranoid, I've-checked-in-gcc-and-linux alternative. You check out code from N revisions ago and it all builds and runs, exactly like it used to, provided you can find hardware that looks adequately the same as it used to. I've heard rumours of warehouses of new-in-box Sun workstations waiting for their time to replace the current ones too.
On balance, I reckon the industry best practice of grabbing whatever code some server gives back with an associated version number is nonsense, and obsessively committing the entire dev and run state into source control is the right thing. But I'm clearly in a minority.
It depends. You may want to protect against the dependency disappearing from a public repository, or being changed by a malicious actor, or your internal repo is faster to clone and build, or... I'm just saying there are very valid reasons to vendor a dependency. There are also drawbacks: some folks vendor and then make small modifications... that's forking, good luck keeping it up to date. You also have more work to do to vendor new versions but that's easily automated.
Common? Perhaps in less than ideal or legacy situations. Best practice, definitely not.
A lot of website projects I have worked on in the past included composer or node dependencies in the repository. It really slows down the whole git system.
> still don’t have contents insurance (a hangover from poverty, I just wasn’t in the habit of insuring things and now keep putting it off, because paperwork terrifies me)
The author seems to be from the UK. Is contents insurance a thing in the UK? How many renters really buy contents insurance? Genuinely interested to know how many think that contents insurance is worth it.
I've had contents insurance as a renter before, and it was useful (my flat was burgled; my flatmate did not have insurance).
That was a while ago, when things like CDs were still targeted by thieves, and I got everything replaced, which would have cost me thousands.
I still keep contents insurance; I haven't claimed on it more than once in a couple of decades though - it's there for the case where my apartment is destroyed (I live in a block of flats), and it's nice to have the lost property/damaged devices cover for travel and general use.
Is there a service that actually gives MP3 downloads? Call me old fashioned but I'll happily pay for a service that actually gives me MP3 files that I can download and keep offline.
MP3 isn’t as popular due to its inherent poor quality but the iTunes Store (not Apple Music) still gives you non-DRMed AAC files (I don’t believe any lossless files are available that way).
It’s okay for non-demanding music but even at high bit-rates there are types of sound (e.g. cymbals) which it can’t reproduce well. Smaller, better sounding files are usually going to win and indeed they have for the most common ways people listen to music.
Isn't V0 MP3 basically indistinguishable from lossless, per a bunch of ABX tests from the folks at Hydrogenaudio (surprisingly, even slightly better than CBR 320k)? Sure, AAC can achieve perceptual transparency at a lower bitrate, but compressed 320k is still a lot cheaper storage-wise than lossless, and if you're limited to what's available then it doesn't make sense to turn it down.
Not for all types of sound: I mentioned one of the common problems with sounds which ramp up sharply, which is not something VBR can solve. AAC also has pre-echo issues (all DCT-based formats do) but the design was improved to reduce them, since this was one of the known problems with MP3 which you couldn’t just throw bandwidth at.
My larger point was simply that most services use AAC now because it saves them money, fits more music on your phone, and has better sound quality. MP3 is good enough for a lot of things but there’s no point in supporting two formats when both are widely supported and one of them is better across the board.
I don't disagree that AAC is better, but other than iTunes most people buying from the stores mentioned on this subthread are buying MP3 (if not lossless).
Perhaps - I haven’t bought MP3s since the 2000s, as sites like Bandcamp almost always offer lossless or AAC — but I was also thinking about how almost all of the streaming services use AAC, too, and that’s a huge chunk of people’s listening.
Well.. yes but don't re-encode audio from Youtube to MP3, that's adding further quality loss to already not-great quality files. Download AAC or OPUS or whatever it's serving now.
> In many years of coding C++, very rarely I experienced a stack overflow or segmentation fault. It is literally not an issue in every codebase I have worked with.
Is this true for most people though? What do you think about this quote?
Frankly, I've seen a lot of segmentation fault issues with C++ projects. Granted, much of it can be detected pre-emptively with good tools, and when a segfault happens I can debug it quickly using a good debugger.
But I have to say that they still do happen, and I still see them happening often enough while developing and testing that I cannot simply dismiss segfaults as unimportant.
As someone said above in the thread, segfaults are what you get if you're lucky. If you're unlucky, you get silent memory corruption that you will eventually notice through bugs popping up in seemingly unrelated parts of the code.
A few things I've found notoriously difficult to do in C++:
1. Add two signed 64-bit integers without overflowing on either side (negative or positive) and without using 128-bit integers. If the sum is going to overflow, generate an error. If the sum is not going to overflow, evaluate the sum.
2. Multiply two signed 64-bit integers without overflowing on either side (negative or positive) and without using 128-bit integers. If the product is going to overflow, generate an error. If the product is not going to overflow, evaluate the product.
Essentially under no circumstance evaluate something that might overflow. Detect the overflow and generate error. Evaluate only if the detection algorithm says there is not going to be overflow.
We can't use 128-bit integers because some hardware may not have 128-bit registers. It's probably possible to solve these, but it gets complex pretty soon once you begin handling all the failure modes.
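To make that concrete, here's a minimal sketch of the pre-check approach for addition (written in Rust syntax for consistency with the answers below; the function name is mine, and the same comparisons translate directly to C++). The trick is to compare against the type's limits before evaluating, so the checks themselves can't overflow:

    // Hypothetical sketch: detect i64 addition overflow before evaluating,
    // without widening to 128 bits. The comparisons cannot themselves
    // overflow: MAX - b is safe when b > 0, and MIN - b is safe when b < 0.
    fn add_checked_manually(a: i64, b: i64) -> Result<i64, &'static str> {
        if b > 0 && a > i64::MAX - b {
            Err("sum would overflow")        // positive overflow detected
        } else if b < 0 && a < i64::MIN - b {
            Err("sum would overflow")        // negative overflow detected
        } else {
            Ok(a + b)                        // safe to evaluate
        }
    }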
Does Rust make these kinds of problems easy to solve?
Option<T> is the Rust equivalent of std::optional, but with all the nice ADT features, so it's ergonomic to use. You can transform it into a Result<T, E> if you want, or even just panic and exit if it overflows. In any case, the overflow handling is built in and doesn't use 128-bit ints.
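A minimal sketch of that transformation (the function name and error type here are just placeholders):

    // checked_add returns Option<i64>; ok_or_else converts None into an Err.
    fn add(a: i64, b: i64) -> Result<i64, String> {
        a.checked_add(b)
            .ok_or_else(|| format!("{a} + {b} overflows i64"))
    }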
> 2. Multiply two signed 64 bit ints without overflow
> Does Rust make this kind of problems easy to be solved?
For this specific case (specified behavior on overflow), Rust makes it trivial. All of the primitive integer types have methods such as `checked_mul()` or `saturating_add()`, which provide deterministic arithmetic on all platforms.
Rust often has convenience methods in its standard library for things like this that are common, tend to produce bugs when re-implemented over and over, and have a reasonable way to do it without making tradeoffs that will annoy some users more than others.
In this case, it’s:
    10i64.checked_add(20i64)
    10i64.checked_mul(20i64)
Both expressions return an Option type, which is effectively a tagged union with type safety on top so you can’t misuse it. You can either call “unwrap” to get the value out and trigger stack unwinding on error, or pattern match on the Option and its inner value in a more functional style.
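For example, a small runnable sketch of both styles:

    fn main() {
        // Functional style: pattern match on the Option.
        match i64::MAX.checked_mul(2) {
            Some(p) => println!("product = {p}"),
            None => eprintln!("multiplication would overflow"),
        }

        // unwrap() panics (unwinding the stack by default) on None;
        // fine when overflow would indicate a bug.
        let sum = 10i64.checked_add(20i64).unwrap();
        assert_eq!(sum, 30);
    }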
When you have wrappers over these that produce optional types, and use statement expression macros or monadic member functions to handle the control flow, I think it's not too hard.
Rust has i64::checked_add and i64::checked_mul, which return None on overflow. Not only is it easy to do, the compiler helps ensure you check the result.
1. About 15 years of experience in C++, 2-3 years in Rust.
2. I'm retired now, but I would have considered Rust for any greenfield project that didn't have a compelling reason to pick C++ (e.g. mandatory interop with an existing big C++ library).
3. Rust nearly every time. I sometimes use a layer of C++ as an FFI shim, such as when writing a gRPC client/server (the Rust libs there aren't mature).
4. The library ecosystem. Rust's crates.io seems full, but if you scratch the surface it's just a bunch of half-baked hobby projects at v0.x versions. And Cargo (the most popular build system for Rust) has essentially no support for multi-library packages, so instead of something like GLib (a single dep with lots of functionality) it's common to see even trivial programs depending on dozens or hundreds of half-baked hobby projects at v0.x versions.
5. Most of the Haskell/ML-ish functionality, such as traits and proper enums. I'm currently in the middle of a Rust project that's gotten much bigger than I expected (parsers are hard! compilers are hard! zero-copy is hard!), but it's still possible to make progress by leaning heavily on the type system.
I often describe Rust as "a dialect of C++ where the compiler forces Google Standard C++ on the world", or alternatively "the language you'd get if you hired a team of Haskell developers to write firmware".
Your thoughts about crates.io mirror mine exactly. I'm becoming more and more interested in just ditching it entirely and using vendored, checked-in (or submoduled) dependencies and managing the transitive dependency tree myself. Cargo and crates.io seem to have been inspired too much by npm. Lots of half-baked, abandoned libs that wantonly pull in piles of other third-party deps and create a sprawling graph of dependencies that ends up with binary bloat, multiple versions of the same package, etc.
Somewhat OT, but what happened to Haskell? A few years ago it was all over HN - now you hardly see any mention of it. Surely this won't happen to Rust?
Entirely different snack bracket. Haskell is great stuff, but not something you'll be able to practically staff a team and build with.
Rust is not nearly so "brainy." It's a much more pragmatic and conservative language.
And it's targeted in an entirely different domain -- systems programming. Haskell is a garbage collected pure functional programming language. Rust is not anything like that.
> And it's targeted in an entirely different domain -- systems programming.
> Haskell is a garbage collected pure functional programming language.
At the time there were many people trying to make a go of doing systems programming in Haskell. I remember lots of arguments about libraries using ByteString/Text vs String in their public APIs, where the research people were like "linked list of u32 is so conceptually pure!" and the systems people were like "what, no".
In retrospect, starting with a low-level systems language and adding safety turned out to be a better idea than using `ST a` to implement borrow-checking in a high-level GC'd hosted language. I can only say "it seemed like a good idea at the time...".
There was an era of stability in the Haskell project during which it was possible to write code for industrial purposes. A lot of people were attracted to the idea of a memory-safe language that compiled to native binaries, and there wasn't much competition in that area at the time.
Then the research-oriented nature of the language and community re-asserted itself with Haskell 2010, and there was a lot of churn in both the base libraries and Hackage. Around this time was when Swift, Go, and Rust had their first releases, all of which offered memory safety and varying levels of ML/Haskell inspiration. So Haskell was suddenly less appealing, and there were other options -- why file another perf-regression GHC ticket into the void when Go has an HTTP server built right into the stdlib?
I don't think it's got anything to do with Haskell 2010. Haskell is far more widely used in industry today than in 2010 by at least an order of magnitude. Heck, Mercury alone has about 250 Haskell programmers I think. That's probably more industry Haskell programmers than there were at all in 2010.
As far as I remember, most of the pillars of Haskell were hired to work on projects that were, at some level, competing with Haskell (by Microsoft, Facebook, Epic).
> 1. How many years of C++ programming do you have under your belt? How many years of Rust?
12-15 years of C++, 8 years of Rust. I generally used C++ because it was the only tool that could do the job at the time, not because I actually liked it. At best, I had a love/hate relationship with C++.
> 2. For new work projects, do you choose C++ or Rust? Why?
Almost always Rust. The main reasons:
- Cargo is awesome, and the crate ecosystem is incredible.
- I barely trust myself to write rigorously correct C++, despite having plenty of C++ experience and being extremely paranoid about code correctness. And I definitely don't trust most other people to do it correctly. (I have maintained more than my fair share of other people's C++ code, and the average C++ code is buggy garbage.) C++ is like fishing around in a drawer of extremely sharp knives in a dark kitchen; you're going to get cut.
- I trust the average Python or TypeScript developer either to write correct Rust, or to be defeated by the borrow checker. This isn't just the borrow checker—it's also bounds checking on arrays and slices, the choice to panic instead of corrupting memory, the rareness of undefined behavior (and "nasal demons", and irresponsibly aggressive optimizations) in safe Rust, the safe multithreading support, etc.
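A tiny sketch of the bounds-checking point: an out-of-range index is a deterministic panic rather than silent memory corruption, and there's a non-panicking accessor when you want to handle it yourself.

    fn main() {
        let v = vec![1, 2, 3];
        println!("{:?}", v.get(10)); // prints "None": the checked accessor
        let _x = v[10];              // panics with "index out of bounds"; never UB
    }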
However, I would consider using C++ in domains where correctness and security weren't important, and where I wanted to use mature frameworks. Games would be an obvious case!
> 3. For new hobby projects, do you choose C++ or Rust? Why?
Rust. Way more fun, personally. :-) I tend to like using "functional" architectures, so I don't fight the borrow checker much.
> 4. Is there something about C++ that you wish Rust had?
Well, now that const generics are in, I'm pretty happy with Rust's feature set. But for quite a while that was the best C++ feature Rust was missing.
I currently work in a mixed C++ & Rust shop -- embedded Linux, autonomy systems for tractors -- and know C++ very well. I choose to work exclusively in Rust.
1. 10-20 years of C++, depending on your definition. I used C++ part-time casually/open-source from the mid-90s to 2012 or so, and then mostly full-time from that point on @ Google, with a big chunk of that working in the Chromium source tree.
2ish years Rust, including the last year or so fulltime professionally.
2. Rust. Because of the more expressive type system, simplified/cleaner tooling, consistent syntax and style. The safety features are nice, too. C++ projects at work tend to get wrapped in Rust for the project I work on.
3. Rust. Same reasons as above.
4. C++'s const generics and constexpr support are much richer and better than Rust's. Placement new and custom allocator support in the STL have no answer in Rust yet, and the efforts to fix that (allocator_api etc.) seem terminally stalled. Same with really good cross-platform SIMD library support. C++ still has a better story generally on embedded devices, in terms of the sheer number of toolchains.
5. I am not sure why people are focused on the borrow checker and safety as somehow the only distinguishing feature -- it's really not. C++ is missing a whole boatload of things from Rust, but most notably would be ADTs/sum types/pattern matching. This is the "secret weapon" of all languages inspired by the ML-series functional languages (StandardML, OCaml, etc.) and something C++ has no real answer for, though you can "sort of" approximate it with some template metaprogramming wanking I guess. Rust doesn't do this as well as ML or Haskell or F# or Scala because it confines it to its enums, but it's still really lovely and makes programs far more expressive and readable.
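To make that concrete, here's a minimal sketch of a sum type with an exhaustive match; leave out a variant and the compiler rejects the program (the enum here is made up for illustration):

    // A sum type: each value is exactly one of these alternatives.
    enum Event {
        Quit,
        KeyPress(char),
        Resize { width: u32, height: u32 },
    }

    fn describe(e: &Event) -> String {
        // Exhaustive pattern match, checked at compile time.
        match e {
            Event::Quit => "quit".to_string(),
            Event::KeyPress(c) => format!("key '{c}'"),
            Event::Resize { width, height } => format!("resize to {width}x{height}"),
        }
    }

    fn main() {
        let e = Event::Resize { width: 80, height: 24 };
        println!("{}", describe(&e));
    }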
Not nearly as expressively though. You can destructure a struct, but you can't express alternatives on anything but an enum. Languages like F#, Scala, Haskell support more ways of creating ADTs than the kind of enums that Rust has.
2) C++, my employer's choice. Their rationale: it's easier to hire people with experience, and the internal codebase is mostly C++, so the tooling and people are more versed in it.
3) Rust. For me it's just easier overall to work with. It takes me an hour or so to get my head back into Rust's standard library and idioms/quirks, but afterwards it feels much better to work with: dev. workflow, testing, expressiveness. Plus it catches programming errors for me every once in a while.
4) I wish Rust had wider adoption by the industry. It's always compared to C/C++/Java/Go and the usual arguments boil down to "not enough people use it".
5) I wish C++ had better compiler error messages. When I'm not scrolling through nested template/SFINAE errors, I have to rely on instinct to understand what's happening. Rust tells me "here's what's wrong" 99% of the time.
1. I had about 7 years of professional C++ experience, a few more as a hobbyist. By now, I have ~10 years of Rust.
2. I would definitely choose Rust. For better or for worse, the teams I work with tend to consider developer velocity as their highest metric. Between cargo, clippy, crates and the type system, I'm orders of magnitude more productive in Rust.
3. Same thing. My life is too short to spend it debugging memory or concurrency errors.
4. There are a number of C++ libraries that have no equivalent or good bindings in Rust yet. But if you're talking about the language, no. There are C++ features that have no equivalent in Rust, but I don't miss them.
5. Off the top of my head: affine types (e.g. after a `std::move`, the type system ensures that you can't use the value anymore), proper enums, proper pattern-matching, better concurrency operations in the standard library (which pretty much require affine types), a linter as good as clippy, built-in support for writing new linters/gradual type systems, derive macros (that have access to the AST).
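A minimal sketch of the affine-types point above: after a value is moved, the compiler rejects any further use of it, which is the guarantee C++'s use-after-move lacks.

    fn consume(s: String) {
        println!("{s}");
    }

    fn main() {
        let s = String::from("hello");
        consume(s);        // ownership of `s` moves into consume()
        // println!("{s}"); // if uncommented: error[E0382], use of moved value
    }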
1. Hard to count. Started as a youth on game modding, but in terms of serious projects, probably 5-7 years.
2. Rust. It's a Rust shop, and the only C++ we have is in dependencies that we wrap with Rust interfaces.
3. Rust. I've probably shaved a not-inconsiderable amount of time off my life debugging C++ issues at both compile time and runtime (inscrutable behaviours, memory safety, broken/unstandardized tooling, platform-specific nonsense, template metaprogramming sins that I've had to debug and - worse - add to, and much more).
4. Hmm. Template specialisation and richer compile-time (const) evaluation, I think. There's probably a few others, but that's what comes to mind.
5. Uh. Pretty much everything?
- Consistent tooling that works across all platforms (I will be happy if I never have to look at a line of CMake ever again)
- Built-in dependency management
- A robust engineering culture (your code should account for failure!)
- Consistency of code itself, made possible through rustfmt and clippy
- Generics that surface issues at point of definition and not at point of instantiation
- ADTs
- Reduced reliance on human strictness / "getting it right" to, well, get it right. This isn't just the borrow checker - most APIs in Rust are designed to make misuse difficult. Even things like `Mutex<T>`, which combines a mutex and a value, so that you can't access the value without locking, and you can't lock without knowing what you're locking (see the sketch after this list).
- Easy multithreading through invariants that the compiler tracks (the Send+Sync traits); they're not perfect, but there's nothing like the first time you change an `iter` to Rayon's `par_iter` and your code is magically eight times faster.
- A value-based, typed, rich error-handling scheme, so that you can see what errors a given piece of code might produce and handle and propagate them with little fuss. (Similar to checked exceptions, but much more convenient! Adding some context to an error is a function call and `?` away, not a per-statement try-catch-rethrow.)
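As a concrete illustration of the `Mutex<T>` point above (std only, a minimal sketch): the value lives inside the mutex, so the only way to reach it is by taking the lock.

    use std::sync::Mutex;

    fn main() {
        let counter = Mutex::new(0u32); // the data is owned by the mutex
        {
            let mut guard = counter.lock().unwrap(); // locking yields access
            *guard += 1;
        } // guard dropped here: the lock is released automatically
        println!("{}", counter.lock().unwrap());
    }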
---
Honestly, I could keep going for a while, but my general point is that Rust has had the ability to learn from its predecessors, and it shows. Many of the mistakes, bad ideas, or poorly-fitting features of past languages just aren't present in Rust because the language was deliberately designed to avoid them.
It's not a perfect language by any means - async Rust is the first and most obvious pain point - but it solves my problems in a much more ergonomic - and dare I say it, more fun - way than C++ ever did.
Well, they already did [0]. It's not very fun to use; something that was less miserable would probably see more use. Retrofitting them to a language isn't impossible, either; there are a few languages that have managed to do it by adapting to their ecosystem's mores, like TypeScript and Kotlin.
That being said, I don't disagree with your general point. C++ is already incredibly complex and impossible to understand as a single individual; I don't think that can be fixed without removing things from the language.
My gut feeling is that the future of C++ is not C++, it's cpp2 or Carbon.
> if those are bad or clickbaity titles, they probably could just not be upvoted much
I am an active /new page lurker. You'd be surprised by the number of times I've seen clickbaity titles getting upvoted and making it to the front page.
Sure the clickbaity titles had good content too, otherwise they wouldn't get upvoted again and again. But when there are two posts with equally good content, the one with clickbaity title gets the needed votes soon enough to get into the front page and the other one keeps languishing at /new. It's sad but that's what I see happening again and again at /new.
I try to offset the unfairness by voting for the non-clickbaity interesting posts but adding one vote to a post that nobody else is voting for does not make any meaningful difference.
> Have you looked at for instance Khan Academy's Grant Sanderson (aka 3Blue1Brown) Math videos?
I have. I went through Khan Academy, Brilliant and 3Blue1Brown. After spending hundreds of hours, I started getting the feeling that these are all good for elementary-level math.
But for any serious math (think real analysis, complex analysis, group theory and beyond), all these platforms did was leave me with a warm fuzzy feeling of having learned something cool. In reality, that warm fuzzy feeling was not enough to solve the actual exercises that come in textbooks or to really understand the material deeply.
I've given up on these online learning media. Back to textbooks. The difference is like night and day.
Note that 3B1B often warns you that his videos bring perspective for a book/class that you're doing or are already done with. And Khan Academy's focus is for K-12.
If you want anything past Analysis 1 I think you'll find that universities guard their content.
> If you want anything past Analysis 1 I think you'll find that universities guard their content.
Not so; there's an absolutely vast amount of freely available undergraduate mathematics material at all levels. Honestly, so much that it makes it confusing to choose and not get distracted by the options -- perhaps AI-mediated distillation could be helpful in the future.
Really? Can you link some for Analysis and beyond?
I wanted to find good analysis video lectures from a real university complete with problem sets, homeworks and their solutions. I couldn’t. I think MIT OCW now has one analysis course like that, but it’s relatively “recent”.
> I think you'll find that universities guard their content.
Hmm. All the way back to when I was in college there was advanced content available from the Open University. You had to be awake at 2am and it was in black and white, but it was there.
I didn't mean to say videos are better - just as far as I can tell that's where the most creative new teaching techniques are on display. I'd definitely prefer they were in the written word. Especially if they were an open collaborative effort. Books are great for flipping back and forth with. You have an "ah-ha" moment and skip back several pages and reread something you misunderstood on a first read-through. It's somehow clunky and takes you out of the flow when you do it in a video.
Critically, you can read/listen to something and come away with the false impression that you understand it. Sitting down and doing problems is... not always fun... but can be critical for the concepts to sink in. I think this is the main point of what you're saying.
I could see in the future it being something like watching a video and then doing a programming exercise
> I've given up on these online learning media. Back to textbooks. The difference is like night and day.
Are there people who think this is an "either/or" choice, as opposed to a "use both" thing?? I ask, because it's pretty well established that learning is enhanced by use of multiple media types and it seems self-evident to me that books and videos are complementary.
> it seems self-evident to me that books and videos are complementary
Can't speak for others but for me it is more about efficient utilization of time rather than complementing multiple learning methods.
I've found that time spent learning math from videos has a poor return on investment. That time is better spent re-reading a chapter or the thing I couldn't fully understand the first time, and doing more exercises.
Fair enough. For me personally, I find great value in jumping back and forth between different modalities, where the different presentations reinforce each other. But what works for me may not work for everyone, and vice-versa.
I think you're just parroting things you've heard other people say. 3b1b's videos are universally agreed to be excellent, and it's baffling that you think it is a choice between watching them and using textbook and doing the exercises. Anyone with the intellectual capacity to study that sort of material is not going to have a hard time comprehending that they are intended to be complementary, as Grant Sanderson makes very clear at numerous points.
Since FreeBSD is on the front page today, I thought this would be a good time to learn from the collective wisdom of this community. 5 simple questions to those who use FreeBSD, to stir up a conversation.
1. Where do you use FreeBSD? On your laptop? Remote servers? Routers?
2. Why do you use FreeBSD instead of Linux?
3. Why do you use FreeBSD instead of OpenBSD or another *BSD?
4. Do you find something lacking in FreeBSD? Is there something that is good in another OS that you'd like to see in FreeBSD?
5. What is that one thing about FreeBSD that you would hate to lose if you were forced to use another OS?
2) Poor interactions with Linus, DaveM, and the Linux community (see below)
3) Performance
4a) A different package system, more akin to Ubuntu LTS with PPAs, so we could have stable packages for 99% of things, but could get the latest versions of software when needed.
4b) ebpf/xdp
5) Missing ^T (SIGINFO) support, simple init system, a kernel that makes sense
2 extended:
In 1994, I was a Linux fanatic, with a stack of floppies I'd use to install Linux on any PC I could get my hands on. I landed a job as a sysadmin for a stats dept, where we were using ancient DECstations running ULTRIX. Disk space was expensive, and TeX fonts were large, so we kept the TeX fonts centrally located on NFS. The DECstations were old and slow compared to PCs (12.5MHz MIPS vs 66MHz 486), so I wanted to replace them with PCs running Linux. The problem was that it took the DECs ~1 second to render a LaTeX .dvi file using xdvi. It took a Linux box a minute or more. This was because the DECs did NFS file caching, while Linux didn't, and xdvi seeked around at random in the font files.
I went to the 1994 Boston USENIX and met Linus at the Linux BOF. I asked him about NFS file caching. He said something like "bah, nobody uses NFS, we have no plans to implement any sort of NFS caching, it's not important". So I got up, went to the FreeBSD BOF, and they said it should work just fine. I've been a FreeBSD user ever since.
Then ~10 years later, working for a company making one of the first 10GbE NICs, I had submitted our base driver and got it accepted via the netdev list. I then completed work on TCP LRO in our driver, submitted that, and got it NACK'ed for the reasonable reason that they didn't want to have multiple LRO implementations. EVEN THOUGH OUR COMPETITOR'S DRIVER HAD ITS OWN LRO, AND ANOTHER VENDOR SUBMITTED A DRIVER WITH THE BASE DRIVER AND LRO IN THE SAME PATCH AFTER US AND GOT IT ACCEPTED, AND THEY DIDN'T RIP LRO OUT OF THE OTHER DRIVERS. So our driver had dog-shit performance out of the box due to the lack of LRO, as compared to 2 of our competitors. At that point I was pretty much done with Linux.
Compared to FreeBSD, where we had the best LRO in our driver, and a few other vendors copy/pasted it, and it wound up eventually being centralized as tcp_lro.c and driver-specific LRO ripped out.
1. 2 laptops and quite a few servers, some remote, some local
2. While I prefer FreeBSD, I use Linux too. I think it's important to know more than one OS. Also my employer standardized on RHEL, not much choice there
3. It seems to be the one where most of the (non-strictly-security-related) innovation takes place, and many of OpenBSD's security-related innovations trickle down reasonably soon.
4. Docker (hate it, love it, but so much software is distributed in this way now), recent WiFi adapters/standards
5. ZFS (I know it's possible on Linux too now, but good luck convincing my employer, anything not covered by support is a tough cookie there), ZFS, ZFS, dtrace, its very nice update/upgrade process, a sane and quite rich base system, the Ports collection, its very nice handbook and documentation.
2. As of right now, mostly curiosity. I have dabbled in FreeBSD on and off for years, and every time I install something from scratch I give it another go. In addition, the experience is more cohesive and "it just works" compared to almost every Linux distro I've ever used (the closest I've found is Slackware.) Let me explain.
When I say cohesive, I mean mostly that it has extremely well thought out documentation, and most common things you might want to do are in the FreeBSD handbook. Just go to the relevant chapter and read up.
When I say "it just works", I don't mean that the out of the box experience is just magically perfect and pre-configured like you might see on Windows or Ubuntu. What I mean is, the ports (and the pre-compiled packages made from them) are extremely reliable. You install them, and they work. Not to pick on Ubuntu specifically, but I have found this not to be the case more often on Ubuntu. Sometimes you install a package on Ubuntu, and it doesn't just work, and you might have to hunt down a config file to make some adjustments, its location may or may not be different from vanilla and may or may not be documented, or maybe it's a bad package and you're SOL.
Furthermore, I've found that upgrading ports and packages in FreeBSD doesn't break things anywhere near as often as on most Linux distros I've used. It's very reliable in that sense, and it just works.
3. This is older hardware for personal use, and I don't really need the enhanced security of OpenBSD, especially the enhanced security that comes with a performance cost.
As for NetBSD or DragonflyBSD, I have the impression, which may or may not be correct, that FreeBSD has better hardware support than either of them, and I want all of my peripherals to work.
4. I do most of my gaming on dedicated hardware these days (Steam Deck, Nintendo Switch), but it would be super great if the experience of playing games was more smooth. Thanks to the magic of Linux binary compatibility and Proton, you can totally play Windows and Linux games on FreeBSD with decent success, but the process of setting that up is awkward and ad-hoc.
5. See 2. Also, I like ZFS a lot, and I get the impression that getting up and running with ZFS on Linux is more of a hassle.
2. FreeBSD offers a coherent base system with sane tools. pf is a godsend, and the ports collection is a good trade-off of configuration and simplicity. ZFS used to be a killer feature but is being integrated into Linux systems.
3. FreeBSD is designed around "general purpose" use and has a lot of backing from corporations like Netflix.
4. FreeBSD is missing a true container solution that is compatible with Docker (jails don't count). There has been some work on this as of the most recent BSDcon, so I'm hopeful.
1. Servers. I'm partial to OpenBSD because of a saner (IMO) /etc and because it works out of the box (for me). My coworker is more the FreeBSD kind, and since he does the work, his opinion prevails.
2. I moved houses 3 days ago. I had installed MX Linux on my desktop computer prior to moving. Today, no DHCP IP on my computer. man and apropos didn't help much. ifconfig and arp don't exist; they require an apt install. I'm clueless as to what's happening. GUI tools didn't help much. So yeah, here's to all predictable systems, including Windows.
3. VSCode (which, when I last checked a week ago, didn't work on FreeBSD either) and a lot of other programs which aren't there on OpenBSD. I haven't touched NetBSD, so I won't comment.
4. Userland stuff. BSDs in general pitch themselves as complete OSs, but getting X working is like assembling a GUI stack yourself, IMO.
5. Continuing from the previous point (yeah, I'm a hypocrite): a few hundred MB of RAM and very few GHz of CPU get you a fully functional desktop environment. If a browser is needed, add a bit more RAM and maybe some CPU.
I might be wrong, but VSCode didn't work (for me) on 13.x and I ran across a few forum posts for others who couldn't get it done either. I had very little time to figure out the right "distro", and VSCode was a requirement. Went to distrowatch, and installed the top choice (please don't roast me about it).
Hmm weird. Did you try to just do "sudo pkg install vscode" ?
Don't try to download it from the website; that won't work, as FreeBSD is not a supported platform. But it's simply in the package collection, and it works great installed that way.
PS: I wasn't trying to roast you at all. I'm happy you found a solution, even if it's not FreeBSD <3
In general that's one of the things I like about the FreeBSD community. We don't really have this push to make it mainstream or to advocate it. If you like it welcome to the club. If you don't, that's fine too. We have no desire to see "the year of FreeBSD on the desktop" generally speaking.
I really like that lack of evangelism which is so common on Linux especially because of the distro wars.
ifconfig (on most Linuxes) has been deprecated because it no longer supports all the network features; you're meant to use "ip" instead. ip also does some of what used to be netstat/route, and there's "ss" for the rest of the netstat things.
1. I'm using FreeBSD 13 running on an RPi2 (armv7) as a DNS server for the SOHO network. I'm using dnscrypt. I'm serving 1-5 rps; from a stress test, the system can serve up to 43 DNS rps without problems.
2. Small overall footprint in terms of resources.
3. I first went with NetBSD because of its even smaller footprint (see 1); NetBSD, according to the docs, requires just 40 MB of RAM to run. I hit many walls with NetBSD and switched to FreeBSD.
4. PF is not working on armv7 for 13.2[^1]
5. The rc system, the complete control over the small-ish number of processes running, and the pre-compiled binaries are nice. I must say that I enjoy the overall simplicity and clear-cut documentation; no need to jump through hoops to understand _how_ DNS resolution works :-)
1. Servers
2. I use both, but use FreeBSD for bare metal nearly exclusively. Linux for virtual hosting because you can't always get FreeBSD.
3. I use both FBSD and OBSD. OpenBSD I use for router-type applications and for some standalone things. I use these instead of other *BSDs because I know them better.
4. Not really.
5. The consistency and stability of the OS from a usability perspective. Linux moves fast, e.g. systemd, while FreeBSD isn't fundamentally that much different from 20 years ago.
Also, FreeBSD has deeply entrenched support for ZFS, which is a game changer. It's available for Linux, but it's not quite in the same first-class citizen state as it is on FreeBSD.
I'd miss the documentation as well. Both Free and OpenBSD have astoundingly good documentation.
3. I'd actually rather use some mythic OS that was the combination of DragonflyBSD + OpenBSD ... but since that doesn't exist, FreeBSD just nudges ahead of either Dfly standalone or OpenBSD standalone.
4.
- turning all services (except ssh) off, by default. OpenBSD does this.
- move all non-core things out of the base, like sendmail (now DMA, what a nice import from DFly btw). Minimum base (OpenBSD)
- the base should only have one way to do things (don’t have 3 different firewalls in base like today)
1. old computer found in a dumpster. Using it to learn about sysadmin and networking.
2. I installed it when CentOS died and I was looking for a stable OS for a web server. My main computer runs Linux Mint.
3. It’s the first one I tried. Might check out OpenBSD and Dragonfly when I get into hosting and virtualization.
4. I’m just limited by lack of knowledge. But I like that there’s a more common path to follow, with great documentation. For the things I have working now, I feel like I have a better handle on my config choices vs in Linux distros I don’t always know if I’m doing things the Linux way or the Ubuntu way.
5. Several nights and weekends playing with ancient hardware and alternative software.
1. I use it on a NAS system. For years this was vanilla FreeBSD from 10 to 13. A few months back I replaced the system with TrueNAS Core which is based on FreeBSD 13, retaining the ZFS pools from the original installation. This system hosts storage and network shares, services hosted in jails such as databases, build slaves and artefact storage, and Windows virtual machines also hosting services and remote desktops.
2. First-class ZFS support, full NFSv4 ACL support which works with Samba and Windows ACLs and is a massive improvement upon POSIX.1e DRAFT ACLs.
3. No real preference, but ZFS support is (for me) the killer feature. Next are jails and Bhyve.
4. The main thing lacking is the more comprehensive selection of drivers found on Linux. That said, it's pretty decent, and I do find the overall quality of the drivers, and the system as a whole, is better than Linux.
5. The system is engineered as a cohesive whole. While other BSDs might be similar, and perhaps Debian was attempting this a couple of decades back with its core design principles, most of the alternatives are lacking in this essential cohesiveness.
> 1. Where do you use FreeBSD? On your laptop? Remote servers? Routers?
Everything headless and sometimes kiosks.
> 2. Why do you use FreeBSD instead of Linux?
There's no such thing as "using Linux". You should ask why use FreeBSD over Debian, or why FreeBSD over Arch. Much easier questions with often quite obvious answers.
> 3. Why do you use FreeBSD instead of OpenBSD or another *BSD?
OpenBSD has no usable filesystem.
> 4. Do you find something lacking in FreeBSD? Is there something that is good in another OS that you'd like to see in FreeBSD?
Not really. Maybe swappiness and zram would be nice sometimes, but no biggie.
> 5. What is that one thing about FreeBSD that you would hate to lose if you were forced to use another OS?
1. Used it on my desktop for approx 8 years, but migrated away a year or two back when I ran into one too many things that didn't work well for the desktop experience. Now it's only running on my NAS.
2. I still use it on my NAS because I'm familiar with it, it has good documentation, it's very stable, and it has first-class ZFS support.
3. OpenBSD and DragonflyBSD both look quite fun to mess with, but neither offers the greatest desktop experience, so that's why I always used FreeBSD in the past. Now I keep using it out of familiarity, and because I know it will do what I need for my NAS (mainly serving files and virtualization).
4. The desktop experience. It's doable if you're willing to give up some features and spend a bunch of time configuring some other ones but I've got other stuff to do these days.
2. Centralized documentation, board of directors versus benevolent dictator for life, faster network stack, fewer GNU tools in the base install, ports tree, license.
4. Hardware support, especially power management (ACPI, SpeedStep, etc.) on laptops that are not ThinkPads or Dell Latitudes. Wayland.
5. The FreeBSD handbook.
The biggest problem with the BSDs is not the operating systems themselves, but the network effect surrounding GNU/Linux, which causes developers to completely overlook them and go on to create bodies of code that are not easy to port, or in some cases impossible to port (systemd, Wayland).
2. ZFS, jails, stability (although the root reason at the time was "because people smarter than I recommended it"). In hindsight I can see how running some services works better on this system.
> 1. Where do you use FreeBSD? On your laptop? Remote servers? Routers?
Right now, I run FreeBSD on a few machines: my rented server; my two home servers, which also do redundant PPPoE + NAT; and a NAT machine at my MIL's that I also use for offsite backup of my home servers (I don't trust the rented server enough to run backups there). Additionally, I have two mini PCs set up to run a MythTV frontend (but the backend is not currently running, because our TV habits have changed). My desktops are Windows. My wireless access points and, I think, my managed network switches are Linux, as are the ISP-provided DSL modem and LTE-to-network device (which I believe also runs embedded Android on the actual modem). I do have an old Acer Chromebook that boots FreeBSD, but I haven't used it in a while (my son used it for Minecraft until an update changed a library and made it too hard to get working).
2. Why do you use FreeBSD instead of Linux?
I had great experiences with FreeBSD at Yahoo and WhatsApp, and bad experiences with systemd when Debian adopted it, so I decided to switch, and I've been mostly happy. Going between Yahoo and WhatsApp, I jumped about three major versions of FreeBSD in a weekend, and everything felt the same, but better. Jumping between major releases of Debian, lots of things feel different: sometimes better, sometimes worse, sometimes the same. I felt a lot of the major changes I was dealing with were complicating my system to handle use cases I didn't care about, and that a lot of the worst offenders were coming from the same developers, and I was tired of dealing with their software and tired of trying to get things to work without it.
3. Why do you use FreeBSD instead of OpenBSD or another *BSD?
IMHO, FreeBSD is focused on being practical and also has a performance focus. OpenBSD has a very opinionated security focus, which I appreciate, but features needed for performance are missing and unlikely to arrive; I'm more willing to compromise security than performance, so there you go. I mentally associate NetBSD with portability, but I'm not running FreeBSD on exotic equipment, and I'm not interested in fighting with my embedded Linux devices to get them to run software of my choice: been there, done that, I'd rather tilt at different windmills now.
4. Do you find something lacking in FreeBSD? Is there something that is good in another OS that you'd like to see in FreeBSD?
Hardware support is hit or miss. I have USB 2.5G NICs I can't use that I thought I'd be able to (but I didn't check, and I have other uses for them, so no big deal). HDMI audio on certain generations of Intel chips is difficult because the graphics and audio drivers need to coordinate on clock settings, but there's no mechanism for that in FreeBSD; the Linux GPU driver is shimmed into the FreeBSD kernel, so that makes it trickier than it maybe already was. I had trouble getting FreeBSD installed on the Chromebook I mentioned earlier, but it did get fixed.
Not having a large user community makes it hard for hardware/software providers to get excited about providing support, because they don't get a sense of return on investment.
5. What is that one thing about FreeBSD that you would hate to lose if you were forced to use another OS?
Stability of experience. I know that 90-95% of the skills and knowledge I develop on FreeBSD will apply to future releases. I've had to learn firewalls on Linux three times (of course, FreeBSD also has three firewalls, but they are all supported and all somewhat different; I simultaneously use pf, because pfsync provides for seamless NAT failover, and ipfw, to do traffic shaping and network delay simulations; I've never used ipf, but I assume it provides value to some), and I work with mixed versions of Linux for work and have to deal with sometimes ip, sometimes ifconfig, sometimes netstat, sometimes ss on a regular basis, and it makes no sense. FreeBSD has the same needs for interface config, routing, and socket listing, and made the existing tools do it, rather than adding new tools that work the same but different. Who has time for that? If FreeBSD ends, which I don't expect it to, I'll just go live in a cave with the final release and the best computer I can put together that's supported by that release, and that will be my computing for the rest of my time. Honestly, while I'm sure computing will continue to develop, a big beefy box purchased today should get me a long time of continued use, so I'll be fine. When it falls apart, maybe there will be a retro marketplace, or maybe I'll move to assisted living.