
The most interesting thing is the timeline at the end, which shows what they were successful at and promoted for (management-type roles) and what happened when they tried to transition from SRE management to SWE IC (they fell back into management).

I don't see that reflected in the rest of their postmortem learning - other than them being dissatisfied with doing what they were good at / promoted for - so that kind of helps me ignore the rest of the postmortem. :)


To be fair to her, those contributions are quite old and probably not representative of current skills.


I'm not sure what you mean by "junior" here.

L3 is early career, L4 is mid-career, L5 is senior. You can hit L5 on the strength of pure technical contributions regardless of business/org needs, usually.

L6+ is staff, and tends to involve a very different skillset. (If you're not looking to lead a team, you're probably not going to have the kind of impact that gets you to L6, let alone L7 or higher at Google.)

This is all to say that ICs in the L3/L4/L5 bucket generally show a clear progression in technical skills but beyond that it's fuzzier.


My definitions are basically: Junior developers need supervision because left to their own devices they'll screw things up horribly; normal developers can produce good code independently; and Senior developers are able to catch the mistakes the Junior developers are making and set them on the right path.


I see; that's the L3, L4, and L5 progression in a nutshell at Google - although leaving L3s alone doesn't _guarantee_ something will go wrong; it was more that there was no way for them to figure out optimal solutions without help, thanks to the sheer complexity of Google.

I'd say the same held true at Amazon but I was in groups which were, at the time, at the periphery of the company's engineering efforts - we didn't have any associated principals to talk to, and maybe one SDE3/L6 to 10 SDE2/L5s mixed with SDE1/L4s.


I would say* under these definitions L3 is junior; what the industry calls senior is somewhere between L4 and L5. L5s at Google are expected to mentor L3s and L4s, but also to design systems, break work down into tasks, and coordinate those tasks across teams and engineers.

If you were a senior engineer at a 50-person startup you would commonly get hired at L4.

* I left Google 18 months ago; also, Google is a large company, and while they strive for uniformity across teams, the levels aren't really quite the same company-wide.


Yeah, as an ex-SWE, seeing this person take just over two years to get from L3 to L5 is kind of shocking.


https://www.levels.fyi/?compare=Amazon,Google&track=Software... is largely accurate. An L7 at Amazon would have an easy time getting an L6 interview at Google; an L6 at Google would not have an easy time getting an L7 interview at Amazon, barring prior experience and other modifiers.

Of note: the person who wrote this article spent the vast majority of their tenure as an SRE TL/M, per their timeline. That's not going to map cleanly onto any career track at Amazon, and when this person tried being an L6 SWE, they transitioned back into management.

At Google, I knew L6/L7/L8 managers who were fantastic engineers; I knew L6/L7/L8 managers who were pure-management, excellent at it, but hadn't written code in a decade and change. It varied dramatically with what the org needed - the engineer-managers tended to have a lot of lower-leveled engineers reporting to them, and the pure-managers had more highly leveled engineers.

Anyways, while I was at Google, L5 was the lowest level where you could officially have a direct report (not counting interns), so yeah, anything of cross-team note was generally led by an L6 or higher. (L5s routinely led things that were critical _inside_ of a given group, but if you were having cross-team impact, well, that's L6 work.)


Off the top of my head:

1) glibc doesn't static link, and musl requires you to understand how musl differs from glibc (DNS resolution being a favorite), so you always end up with at least that dynamic dependency, at which point you might as well have more dynamic dependencies

2) static binaries take significantly more time to build, and engineers really hate waiting - more than they care about wasted resources :)

3) static linking means having to re-ship your entire app/binary when a dependency needs patching - and I'm not sure how many tools are smart enough to detect vulnerable versions of statically linked dependencies inside a binary, versus those that just scan hashes in /usr/lib and so on. If your tool is tiny this doesn't matter, but if it's not, you end up in a lot of pain

4) licensing of dependencies that are statically linked in is sometimes legally interesting or different versus dynamic linking, but I'm not sure how many people actually think about that one

I've also personally had all kinds of weird issues trying to build static binaries of various tools/libraries; since it's not as common a need, expect to have to put in a bunch of effort on edge cases.
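
As a rough illustration of point 1 - assuming you have gcc plus the musl-gcc wrapper installed; hello.c is just a throwaway example:

    cat > hello.c <<'EOF'
    #include <stdio.h>
    int main(void) { puts("hello"); return 0; }
    EOF

    gcc hello.c -o hello-glibc               # default: dynamically linked against glibc
    musl-gcc -static hello.c -o hello-musl   # fully static against musl

    ldd hello-glibc   # lists libc.so.6, the dynamic loader, ...
    ldd hello-musl    # "not a dynamic executable"

The second binary carries its libc with it, which is exactly when points 3 and 4 start to matter.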

Resource usage _does_ come up - a great example of this is how Apple handled Swift for a while: every Swift application had to bundle the full runtime, effectively shipping a static build, and a number of organizations rejected Swift entirely because it led to downloads large enough that Apple would push them to Wi-Fi or users would complain. :)


You don't need to run bleeding edge versions unless you feel like it; there's a stable release with rolling security patches every 6 months (current is 24.05, next will be 24.11).

You don't need to keep multiple copies of each library - but you _can_ when you find out that an update broke something you care about while still updating everything else on your system. You aren't rolling back your entire system state, just the...light-cone of the one tool that has issues.

The problem with SO numbers is that your Python/Ruby/Java/NodeJS packaging and tooling doesn't respect them at all. If you can satisfy all of your dependencies using the Debian-maintained repositories, great! When you can't, Nix provides a harm-reduction framework.

Nix also makes certain hard things trivial - like duplicating the exact system state that someone else used to build a thing some months/years ago, or undoing the equivalent of a `dist-upgrade` gone awry.
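
Concretely, the rollback story is just this (NixOS-flavored commands; plain nix-env behaves the same way for user profiles on any distro):

    nix-env --list-generations            # every previous state of this profile
    nix-env --rollback                    # step one generation back; nothing else changes
    sudo nixos-rebuild switch --rollback  # the whole-system equivalent on NixOS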

> And then when there's a security problem, who goes and checks that every version of every dependency of every application has actually been patched and updated?

The nixpkgs maintainers, same as the Debian maintainers. Repology's down right now but nixpkgs seems to do quite well on a CVE level.

> Why would I want to roll a system back to an (definitely insecure) state of a few months ago?

Insecure is sometimes preferable to down. Being able to inspect an older/insecure state with new/secure tools is neat.

> I have many of the same questions about Snap and even Docker.

Snap and Docker solve similar problems that most people don't have. Same with k8s. You might just not have these problems - I have a screwdriver on my desk that's specifically for opening up GameCube consoles (so it's longer than the one I use to open up N64 cartridges, even though it's the same shape); unless you have that specific need, it'd be completely pointless in your toolbox and cause you trouble every time you tried to use it.


NixOS aside, Nix manages state _outside_ of, and independently of, your operating system, which is why it's so damn useful.

With Nix, I can build OCI images the exact same way every time; with Docker, I have to hope that the `apt update` thrown in at the top doesn't accidentally put me on a new major version of some dependency that breaks the rest of the script. I tend to deal with Dockerfiles written five or more years ago, so I will admit to bias here.
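
To make the first half concrete, the Nix side is roughly this - a sketch, where the nixpkgs revision is a placeholder you'd pin yourself and curl just stands in for your app (newer nixpkgs calls the attribute copyToRoot, older ones contents):

    let
      # <rev> is a placeholder - pin whatever nixpkgs commit you've tested against
      pkgs = import (fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz") { };
    in
    pkgs.dockerTools.buildImage {
      name = "my-app";
      tag = "pinned";
      copyToRoot = [ pkgs.curl ];
      config.Cmd = [ "${pkgs.curl}/bin/curl" "--version" ];
    }

nix-build that and `docker load < result`; as long as the pinned inputs haven't changed, you get the same image back, with no `apt update` underneath to drift.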

I'll also admit that I don't really enjoy NixOS. It's neat enough on my headless devices but not something I'd want to try to daily drive; I'm more a fan of the Universal Blue / Project Bluefin approach.


Chiming in to say that direnv is one of the greatest projects I've ever come across and it gets damn near everything right out of the box - you can also use it without any Nix at all. (It makes a nice gateway to Nix, though; once you have your directory-based env vars, it's a shorter hop to directory-based package configuration...)
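
If you haven't seen it, the whole interface is a tiny `.envrc` per directory (values here are made up; the last line is the optional Nix hop and assumes a shell.nix sitting next to it):

    # .envrc - applied automatically on cd, after a one-time `direnv allow`
    export DATABASE_URL=postgres://localhost/dev   # example value
    PATH_add ./scripts                             # prepend a project-local bin dir to PATH
    use nix                                        # optional: load the environment from ./shell.nix

Everything gets unloaded again the moment you cd out, which is the part that keeps it sane.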


Absolutely. I use it both with and without Nix.

Thanks to direnv's design, this vscode extension restores sanity to the vscode env-handling mess†: https://marketplace.visualstudio.com/items?itemName=mkhl.dir...

† Depending on how you (re)start vscode (terminal vs launchd), it either has certain project env vars or it doesn't. E.g. run `code /some/path` in a terminal and it inherits env vars from that terminal, which is nonsense on macOS: reopen the project later and the env vars are gone, because vscode has been relaunched by launchd. Dunno if it has been fixed, but it used to be even worse - a vscode process initially started via a terminal would carry those inherited env vars into all subsequently opened projects, even different ones.


Nix and direnv is such an insanely good combo. I use them together, typically via devenv: sometimes as a library on top of a plain flake.nix, other times with the full devenv CLI and experience. I love both for different use cases. Really pleasant.
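
For anyone curious, a devenv.nix is roughly this - a sketch from memory, so double-check attribute names against the devenv docs:

    { pkgs, ... }: {
      packages = [ pkgs.git ];          # extra tools available in the dev shell
      languages.python.enable = true;   # per-language toolchain modules
      enterShell = ''
        echo "devenv shell ready"
      '';
    }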


Nix, and NixOS, are designed for those of us who have to clean 10,000 proverbial litter boxes every day. I use Nix fairly extensively at work; I use it very little at home, where I don't need to worry about what dependency someone took on a specific version of Python five years ago, etc.

It's like k8s, imo - it solves some real problems at scale but is rarely going to be a good idea for individual users.


It's also nice in the small. At home I like using datasette to search SQLite databases. One day it decided to stop working; reinstalling it with pipx didn't work either. `nix-shell -p datasette`, and it works.

