Learning modern Linux: A handbook for the cloud native practitioner (oreilly.com)
201 points by teleforce on May 7, 2022 | 56 comments



I could probably use this. My muscle memory still reaches for “ifconfig”.


And the command-line options for netstat, route, etc. But usually I just install net-tools :) I know it's outdated, but my daily driver is FreeBSD, where these tools are still the normal toolchain.


Me too. I just, yet again, rediscovered the ip command. This and moving to systemd from init are two things that always trip me up.
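For anyone else making the switch, the rough mapping (from memory, so double-check the man pages):

    # net-tools habit          iproute2 equivalent
    ifconfig                   ip addr     (or just: ip a)
    ifconfig eth0 up           ip link set eth0 up
    route -n                   ip route
    netstat -tlnp              ss -tlnp
    arp -a                     ip neigh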


A couple of days ago I needed to set up a scheduled job for something. I typed "crontab -e" and got "no such command or filename" and remembered "Oh yeah, this distro doesn't even install cron by default anymore."

I could have done a quick "dnf -y install cronie", but I decided "F it, might as well go ahead and learn to write systemd timer definitions". It wasn't bad, but I feel like I've turned to the dark side or something.
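For anyone else making the jump, a minimal timer is a pair of small units (a sketch; the names here are made up):

    # /etc/systemd/system/nightly-backup.service
    [Unit]
    Description=Nightly backup job

    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/backup.sh

    # /etc/systemd/system/nightly-backup.timer
    [Unit]
    Description=Run nightly-backup.service at 02:00

    [Timer]
    OnCalendar=*-*-* 02:00:00
    Persistent=true

    [Install]
    WantedBy=timers.target

Then "systemctl enable --now nightly-backup.timer" activates it, and "systemctl list-timers" shows when it will next fire.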


Good news - a new version of Slackware was released just for you! ;) (And me as well, actually)


But then, a book won't fix your muscle memory.

I'm also one of the many people suffering from the loss of the old tools. Maybe we should create a support group?

Anyway, I would keep them installed, but the new tools' interfaces just make a lot more sense, so even though it's hard to write the correct command (they could be better documented), it is much easier to decide whether a command is correct after it's written.


> My muscle memory still reaches for “ifconfig”

Indeed. I finally got over the transition from screen to tmux.


I’ve read some chapters from the linked book to complement “How Linux Works” from No Starch Press, and while the information was useful and seemed correct, in the end I decided to stick with HLW, since it is both broader and deeper at ~460 vs 250 pages.


What is the best book on the design/architecture of the Linux kernel?


Although it's not a book, the Linux documentation [1] is actually very good and does the job of explaining the design and architecture of the kernel.

I'd also suggest reading "Unix Internals" by Uresh Vahalia. While this book is from 1995 and it doesn't cover Linux specifically, it's an exceptional resource for understanding how *nix kernels work in general.

[1] https://docs.kernel.org/


"A Heavily Commented Linux Kernel Source Code" by Zhao Jiong and "The Linux Programming Interface" by Michael Kerrisk are the best I'm aware of.


TLPI, while amazing, doesn't exactly deal with the internals or architecture of the kernel. Robert Love's Linux Kernel Development is probably what OP is looking for.


Also Maxwell's "Linux Core Kernel Commentary", which likewise annotates the kernel. It's old, so it's based on 2.2, but it's a gold mine.


+1 for “The Linux Programming Interface”


One very in-depth book I have is: The Linux Programming Interface: A Linux and UNIX System Programming Handbook


Have you read it? I recently purchased a copy, and the exercises look fun/useful.


As someone who does not write C regularly and doesn't need to read stacks of docs regularly, I've found it immensely useful. I have not gotten very far yet, but I have already found real-world use for flag combinations that avoid race conditions.
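My guess is they mean flags like O_CREAT|O_EXCL, TLPI's classic atomicity example; for the curious, the shell analogue is bash's noclobber:

    # Sketch: with noclobber set, ">" fails if the file already exists
    # (bash opens it with O_CREAT|O_EXCL), so lock creation is atomic.
    lockfile=/tmp/myjob.lock
    if (set -C; : > "$lockfile") 2>/dev/null; then
        trap 'rm -f "$lockfile"' EXIT
        echo "got the lock, doing work"
    else
        echo "another instance is already running" >&2
        exit 1
    fi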


The author [1] appears to have some substantial hands-on experience.

[1] https://mhausenblas.info/


I skimmed this book the other day while looking for O'Reilly books. It was alright. It doesn't offer enough over other similar books like How Linux Works, the ship book, or TLPI. Maybe one advantage is the networking chapter, which is more substantial than in the ship book or HLW. Still, it is not enough for understanding modern cloud infra.


I have been using Linux since 1992, and Solaris and UNIX (DEC, etc.) before that. I run Gentoo now on my box, compile my own kernels, etc., and I admin 100s of Linux boxes at work. But there is verbiage that different people read totally differently:

What does "Learn critical components such as the Linux kernel" mean to the author? I have not read this book, but do they mean we are going kernel hacking, or "I will show you how to compile a kernel for cloud native applications" (whatever that would mean, really)?


The book's table of contents is viewable at the link, if that helps at all.


We're doing book adverts now?


Yes, and with quite a lot of defense for some reason.


Everything looks okay except for the section regarding other “friendly” shells: that’s really the part where we as an industry don’t need more fragmentation.


I don't see the problem. I like zsh and I can live with bash. I've tried fish, but it just didn't click with me.

Others swear by fish and how it revolutionised their terminal. Hell, I've seen people fall in love with PowerShell of all shells because of its interactivity and a cross-platform scripting language that doesn't rely on Unix tools being present in $PATH. I don't get it, but that's okay; they can have their preferences and I can have mine.

You've got to find what works for you. Nano, emacs, .*vim, vi, using a magnetic needle to manually flip bits on a spinning platter, it doesn't matter, whatever keeps you happy and/or productive.

The vast majority of Linux users seem to use bash. Zsh has been picked up by Apple because of licensing and the "threat" of open source (newer bash is GPLv3). I don't think shells like fish are a threat to the ecosystem as long as they keep the most basic form of compatibility with bash (not sh). You need bash installed anyway, otherwise you can't curl2bash to install "modern" Linux software!


The problem comes in when you need to manage hundreds of machines and there isn't a single shell that's in the muscle memory of everyone on the team.


Typing directly into a shell is a smell for teams that manage unwieldy-without-automation amounts of machines.

Shells are just REPLs; use your editor to drive them. This helps document what exactly was done on a machine, and encourages code re-use (yank-put from last time) and other tidbits you get when editing (git repos, git search, etc.).


That's why you can pick your favourite shell for your user account.

If you all log in to the same account, you'll need to pick some standard. It doesn't really matter if the system decides on that standard or if you just ask the team what they prefer and pick the most popular shell. When in doubt, install all shells, log in to sh and let people start their shell of choice.
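The per-account mechanism is chsh, for example:

    # set fish as your login shell; the path must be listed in /etc/shells
    chsh -s "$(command -v fish)"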


You shouldn't have a user account on prod machines. Typically, there's just the user the service runs as. If it's in the cloud, the "machines" are containers. If your company sells on-prem stuff, there's a good chance it's an appliance that's owned by someone else.

If you operate your own data centers, then you need to do configuration management of the machines, so their configurations are immutable/reproducible.

(These problems all arise once you have a large fleet and a large ops team.)


If your fleet is large enough that user accounts are no longer feasible then you probably shouldn't be managing these servers by hand anyway, so the preference of shell becomes irrelevant.


> That's why you can pick your favourite shell for your user account.

Nobody cares about user accounts. Use whatever you want on your laptop; nobody cares.

However, if a script reaches a production server, it should be either sh, bash, or a real programming language (python/ruby/perl/whatever).

The ugly stuff I've seen: some snowflake user drops their scripts written in ${shell_of_the_week} and leaves, the script breaks, and now there's this thing that has to be not only fixed but possibly rewritten from scratch.


I'd caution against python in prod. In addition to supply chain issues with pip ("pip install" in prod is foolish on the best of days), python libraries love breaking compatibility.

The "one off" script from two years ago that you really, really need right now probably won't work. Perl/bash are less likely to hit this (they stopped changing years ago). It's an open question whether golang will have this problem.


You don’t use a shell for that, directly. Ansible, puppet, docker, etc.


For that you have to standardize on bash compatible. It's ok if the exact command/syntax isn't in muscle memory. I've worked with a number of "alternative shell" fans and one thing that nearly everyone agrees on is that "bash compatible" is the lingua franca and the standard.


I worked at a place that was forcing zsh on its developers, mainly because bash has been "deprecated" on macOS. It's not like you couldn't install the newest bash off of MacPorts or brew. Production ran on Linux, and zsh wasn't installed there. Better to be consistent and stick with the least common denominator: bash.


Many years ago I had to write a script that every developer in the company would occasionally have to use. After running into compatibility issues with python and bash, I went with zsh and never heard a single complaint. I still use bash as my main shell, out of habit (and because I know it'll be installed on whatever remote server I connect to), but zsh isn't so different. Its array syntax, for one. And figuring out what directory "this script" is in, regardless of `pwd` output, is easier.
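For example (a sketch; zsh's :A modifier also resolves symlinks):

    # zsh: absolute directory containing this script
    script_dir=${0:A:h}

    # bash: a common (not bulletproof) equivalent
    script_dir=$(cd -- "$(dirname -- "${BASH_SOURCE[0]}")" && pwd)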


That's what sh or bash is for, a least common denominator present on every machine.

Sure, it is not the same as your home machine, but even with the right shell, your customized config files would still be missing. So it is easier to get used to the default setup of the default shell when managing many machines.

(This is the reason I had to learn vi back in the day: sure, my machine has a lovingly customized emacs... but that old Sun box that needs debugging? Vi only.)


I would argue it's sh (Bourne shell, or rather the POSIX shell) rather than bash, as bash is not present by default on some notable Linux distros (Alpine Linux, OpenWRT etc.) and especially not present out of the box on many popular *BSD variants.
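The classic way this bites people is a "#!/bin/sh" script full of bashisms, which works on distros where sh is bash and then breaks on Alpine's ash. A minimal example:

    #!/bin/sh
    # [[ is a bashism; dash/ash (Alpine's sh) will fail with "[[: not found"
    if [[ $1 == foo* ]]; then echo match; fi

    # POSIX-portable equivalent
    case $1 in foo*) echo match ;; esac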


and not even vim, but vi.


What I love about PowerShell is that it allows you to pipe structured data between tools without needing to 'screen scrape' the output like on Linux.

I still don't use it on Linux but I do think it's a big step forward for scripting.


FreeBSD has integrated libxo[0] into a lot of its base utilities, so you can use something like `netstat --libxo json` to get structured data that can be sliced and diced with other tools. Unfortunately, it hasn't been extended to all of the base system yet (ls, for example, doesn't support libxo output).

For example:

    $ w --libxo json|json_pp
    {
       "uptime-information" : {
          "days" : 46,
          "hours" : 22,
          "load-average-1" : 0.19,
          "load-average-15" : 0.13,
          "load-average-5" : 0.12,
          "minutes" : 11,
          "seconds" : 34,
    [...]
Obviously it's not as extensive as PowerShell, but it definitely gets rid of screen-scraping.
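And once it's JSON, jq can slice it directly (field names taken from the output above):

    $ w --libxo json | jq '."uptime-information"."load-average-1"'
    0.19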

[0] https://www.freebsd.org/cgi/man.cgi?query=libxo&sektion=3&fo...


Ooh this is really cool, thanks for sharing. FreeBSD is actually my daily driver so that'll really come in handy.


Actually, it is a step backwards: unless you control both sides (perfect version compatibility), passing text and extracting only the parts you care about is more resilient (think SOAP vs. REST).


Right, how dare people use tools that they find comfortable and enjoy using? What if they choose a tool that goes against your preferences, or worse (bad word coming, consider yourself warned) makes you have to learn something?


As a fish user myself, I’ll admit there is something to be said for standardization. It’s easy to poke fun at somebody for being afraid to learn something new - but with the explosion of different tools it’s becoming impossible to stay abreast of all the changes and feel proficient in anything anymore.


Standardization builds industries and economies of scale. Having competing standards is great for technologists, but it makes the current problem particularly painful: in job descriptions, employers now specify not just general technologies but also particular frameworks, CI/CD tools, and on and on. There are so many tools a person must be competent with that only a handful of people might know all of them on day one, which makes hiring much more expensive. You either pay over market rate for the perfect person, or hire someone decent and pay for them to learn the job. Ultimately, I think we're in a better place than we were without modern tooling, but it makes it hard to hire…


Disagree. In my experience it's way easier to use something like fish out of the box than to spend hours configuring bash to your liking, assuming you don't already have your dotfiles set up. If someone is just starting out, I'd absolutely recommend something like fish or zsh over bash or csh.


I've been using Linux daily for almost twenty years, and the only thing I ever changed in .bashrc is the history size. What kind of customizations are we talking about? Maybe I'm missing out by using the stock bash configuration and should start customizing it to my liking (serious, no sarcasm).


Autocomplete from history is something I really enjoy from my config. I can live without the nice colors and extra data, though having the execution time of the previous command shown has been very handy a few times.
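In bash you can get a cheap version of this by binding the arrow keys to prefix history search:

    # ~/.bashrc: up/down search history for what you've typed so far
    bind '"\e[A": history-search-backward'
    bind '"\e[B": history-search-forward'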


fzf for command history searching

custom PATH for your own programs

options to programs like LESS
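A few illustrative ~/.bashrc lines for those (paths are guesses and vary by distro):

    # fzf keybindings: Ctrl-R becomes fuzzy history search
    [ -f ~/.fzf.bash ] && . ~/.fzf.bash

    # personal scripts first on PATH
    export PATH="$HOME/bin:$PATH"

    # less: pass colors through, quit if it fits one screen, smart-case search
    export LESS="-RFi"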


History-based autocompletion is a productivity game changer. But you can get fish-like niceties with trivial effort in zsh.

The deal breaker for me with fish is abandoning decades of bash syntax. With zsh, I can have my fancy, productive prompt and still write bash-compatible shell scripts.
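For example, two widely used plugins get you most of the way there (install paths vary by distro; these are Debian's):

    # ~/.zshrc: fish-style autosuggestions and syntax highlighting
    . /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh
    . /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh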


And who is gonna decide which shells stay or go?


So is this a book mostly on Linux or Linux with systemd?


The latter


I am so sick of the word practitioner.

I don't have a better alternative, but something about "practitioner" just feels so forced and stuffy.

On the flip side I do like viewing operation of technical tooling in support of businesses as a practice to be continually improved. Most businesses don't really embrace that in my experience. If they did, even with it being a cost center, they would spend more time making sound decisions around tech being used (which would hopefully reduce the k8s mania).


I have a similar reaction when people are called consumers. They should just cut to the chase and call them what they really want to call them: useless eaters.



