As a happy user of AWS Linux 2, I'm extremely disappointed that they're no longer providing a drop-in RHEL replacement for EC2. I don't see any mention of the things we, and many large shops like ours, care about: long-term support and RHEL compatibility.
We've been very vocal to AWS product managers and solution architects about our need for an Amazon Linux 3 that is a refresh of AWS Linux 2 (at least 5 years of support with RHEL 8 compatibility, free kernel patching w/o reboots, official support from Datadog, VMware images). Sad that we haven't been heard. We'll now need to plan to move over 20k instances to Rocky Linux.
I suspect that the move to using Fedora has something to do with changes to the CentOS project that AWS Linux 2 forked from. Let's hope the beancounters at IBM don't have other plans for Fedora.
You could pay Red Hat for licenses if it's that important to you. But as far as I can tell, Amazon is a major supporter of Rocky Linux, so it sounds like they heard you and are giving you exactly what you want.
I don’t understand your mindset at all. You’re moving thousands of instances that are likely responsible for millions of dollars over to a community-run project that is effectively in the hands of random people? Why don’t you just pay Red Hat for their work?
If it's only thousands of instances, that's unlikely to involve enough money for Red Hat to actually provide anything like meaningful support (from what I can tell from talking to RH customers of various sizes) - but it is likely to involve enough money that selling it to management as basically a moral license isn't going to work either.
Plus, honestly, per-system licensing plus cloud autoscaling isn't really anybody's idea of a good time.
> Let's hope the beancounters at IBM don't have other plans for Fedora.
What do you think they would potentially do? Fedora is the upstream for RHEL; it's integral to it. Part of the reason they dropped support for CentOS was that it didn't benefit RHEL very much.
> I suspect that the move to using Fedora has something to do with changes to the CentOS project that AWS Linux 2 forked from. Let's hope the beancounters at IBM don't have other plans for Fedora.
I work at a large bank and we are also sad they didn't release another RHEL compatible distro. We prefer not to pay RH subscription fees and are now looking at both Rocky and Oracle Linux. Can I contact you to share notes on approaching such a migration from AL2?
In addition to RHEL compatibility, AL2 worked well because our existing support plan with AWS covered it. It also came with free kernel 'live' patches unlike others (except Oracle).
I don’t imagine this is really the kind of thing you can get into publicly but I’d love to know more about why a bank of all places doesn’t want to pay for RHEL and just go with the free fork instead.
They may be willing to pay for RHEL for some critical services, but there are a lot of things where a full RH sub doesn't make sense: dev/test environments, non-critical services, web applications, and applications with high parallelisation. You want to support as few OS variants as possible, so by using RHEL clones for those, you can have "can't tell by the taste" compatibility across all your systems. Plus it makes hiring easier.
I work at AWS but not on the AL team. I’m curious what parts of AL2022 don’t meet your needs? It’ll have 5 years of support and live kernel patching. Just like AL2, I’m assuming official partner support is coming soon (AL2022 is still in preview).
It seemed to read pretty clearly to me that "being its own Fedora derivative and therefore not as easy to trust for RHELish workloads as something like Rocky" is the sticking point, though 'mst needs more coffee' is always a possibility here.
Similarly, had Amazon created their own Debian derivative, I'm not sure it'd be that tempting to users whose baseline is Ubuntu.
If compliance is an issue, I doubt whether there is such a thing as a "suitable substitute". Certainly none of the commercial applications I interact with accepted CentOS as an alternative to RHEL, so they certainly won't allow a substitution for Rocky or Alma.
While AL2022 isn't a drop-in RHEL alternative, it seems much more likely that vendors will accept it as a commercially supported install base given that it is a first-class citizen on the biggest cloud.
So this is a weird thing to think, because Amazon Linux 2 was never a drop-in RHEL replacement. It was a bastard child between RHEL 7 and Fedora stuff, combined with a custom kernel. You didn't have RHEL kABI and you didn't even have complete RHEL userspace compatibility either.
If anything, Amazon Linux 2022 clarifies things by indicating they directly track Fedora Linux, branch and stabilize that, and offer their own lifecycle guarantees for that branch.
I'm confused about what Datadog has to do with running an operating system, and about what RHEL 8 compatibility means. Kernel patching is a feature of a good number of free operating systems. As for LTS, I really wish that every company that gripes about long term support for an OS would volunteer to pay 2-3 engineers to contribute to OS development. You'd probably see a much more sustainable OS ecosystem if that happened.
On AWS, I always now use Amazon's Linux distro. They also maintain their own version of OpenJDK.
As skeptical as I am about huge tech corps like Amazon, Google, etc., I have to admit I enjoy being their paying customer - nice experience. I find GCP and AWS a pleasure to use.
Just be aware that it isn't a drop in replacement. We were using AWS Corretto (https://aws.amazon.com/corretto/) and had to back out because we had all sorts of connectivity issues in combination with Mulesoft Mule ESB. I suspect it was because Corretto deprecated a number of cipher suites, but we weren't able to determine for sure.
How do you develop for it though? Do you install it locally as well? Or do you only do interpreted languages and/or Java? I suppose Go would work across distros also (because it doesn't use libc), but that's all I can think of.
You should just develop your apps in/with/for containers. The container contains all the dependencies for your app. This way you never have to think about the host OS ever again; your app "just works" (once you hook up the networking, environment, secrets, storage, logging, etc for whatever is running your container). That sounds like a lot of extra work, but actually it's just standardizing the things you should already be dealing with even if you didn't use containers. The end result is your app works more reliably and you can run it anywhere.
Some of us are systems/infrastructure engineers who have to build the intermediate layer. You can't just lay a Dockerfile on top of a kernel and hope the system learns how to run it by osmosis.
Yes there are services like Fargate but they're not cost efficient for many cases.
The person was asking how they should develop their app to run on a particular host. If they need to run/deploy it, they can use the EC2 Instance Launch Wizard to set everything up in the console, log in and install Docker, pull their container from Docker Hub, and then run it.
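A rough sketch of that manual path on an Amazon Linux 2 instance (the image name and ports here are placeholders, not anything from the thread):

    sudo amazon-linux-extras install -y docker   # AL2's way of getting Docker
    sudo systemctl enable --now docker
    sudo docker pull myorg/myapp:latest          # hypothetical image
    sudo docker run -d --restart unless-stopped -p 80:8080 myorg/myapp:latest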
Or, like you suggest, they could use an AWS service to manage their container, like App Runner, or Lightsail, or EKS, EKS Fargate, EKS Anywhere, ECS, ECS Fargate, ECS Anywhere, ROSA, Greengrass, App2Container, Elastic Beanstalk, or Lambda. There are plenty of guides on AWS's website on how to use them.
Cost is mostly irrelevant to the conversation, as you can run containers anywhere (other than, say, a Cloudflare Worker); pay for any infrastructure you want and then run the container there.
This is true, but people focusing on only these benefits often miss the fact that they still have to update the image contents and re-deploy as soon as security patches are available.
This is like updating the direct dependencies of your service itself (e.g. cargo audit -> cargo update), but anecdotally I'm seeing many people neglect the image, sometimes even pinning specific versions and missing potential updates when they do later rebuild it.
We take unattended upgrades for granted on Debian-based servers, and that will likely help the Docker host system, but I'm not aware of anything nearly as automated for rebuilding and redeploying the images themselves.
It could be part of your CI/CD pipeline, but that in itself is a lot of extra setup and must not be neglected - and it must make sense, e.g. pin in a way that will still pick up security patches, and have a dependency audit as part of CI/CD to report when the patching hasn't been enough (e.g. due to semver constraints).
Docker's website has pretty sweet automation that you can use to re-build your containers automatically when the base image changes.
What you describe isn't hard to achieve. Write a one-line cron job that gets the latest packages for your container's base, writes them to a file, commits it to Git, and pushes it. Then set up a Git webhook that runs a script you have to build your container with a new version and push that to a dev instance. Add some tests, and you have an entire CI/CD process with just one cron job and one Git webhook.
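Expanded into an actual script, the cron half might look something like this (base image and repo layout are assumptions, not a tested pipeline):

    #!/bin/sh
    # snapshot the base image's package list; any change triggers the build webhook
    docker pull debian:stable-slim
    docker run --rm debian:stable-slim dpkg -l > packages.txt
    git add packages.txt
    git diff --cached --quiet || { git commit -m "base image refresh"; git push; }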
Why? I develop C++ servers for Linux. I have a script that can build a production server from nothing with all the dependencies needed, deploy the database, and then pull down the source, build the executable, run the tests, and install it as a daemon. I test it from scratch every once in a while just in case, and haven't had any trouble for years.
> you never have to think about the host OS ever again
This is literally one of the only things that is not included in a container image. The Linux kernel is the Operating System and you are subject to differences in its configuration depending on where the container is running. You are referring to the distribution.
> You should just develop your apps in/with/for containers. The container contains all the dependencies for your app. This way you never have to think about the host OS ever again; your app "just works" (once you hook up the networking, environment, secrets, storage, logging, etc for whatever is running your container). That sounds like a lot of extra work, but actually it's just standardizing the things you should already be dealing with even if you didn't use containers. The end result is your app works more reliably and you can run it anywhere.
This is a false sense of reproducibility. I've encountered cases where a container worked well on one machine and crashed or had weird bugs on another one.
This happens, but is pretty rare. Using containers generally leads to much more reliable portability than trying to manage all the dependencies by hand.
If I remember correctly, Go does use libc by default if you link with the net package (you can set CGO_ENABLED=0 to disable it, but then you won’t get NSS). On OpenBSD it also switched back to using libc.
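Easy to check for yourself, assuming a Go program that imports "net":

    go build -o app-dynamic . && ldd app-dynamic    # links libc for the cgo resolver
    CGO_ENABLED=0 go build -o app-static . && ldd app-static
    # "not a dynamic executable" - pure-Go resolver, no NSS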
Well, it is generally more likely to be tuned for AWS, with the right drivers and tools installed, than a default distro image you would download from a website - but the images available on AWS would likely be tuned similarly. If there are issues where another image is noticeably worse, they would look at Amazon Linux and apply the changes from it.
I would say that Amazon Linux is likely to have fewer issues with the latest instance types (if they change something "hardware"-wise; for example, when AWS started exposing EBS using NVMe there were some driver issues originally).
Enabling it won't in itself secure your company's applications, as the default policies in Fedora only apply to installed services (e.g. ssh) that have a policy written for them.
This is probably right on the boundary of the shared-responsibility model, but I think it would be great if they also offered easier ways for application developers to leverage the advertised feature.
FWIW, Docker, podman, LXC, and Kubernetes will apply SELinux policies to containers automatically if you have that support enabled at build time (many distributions do have it enabled, esp Fedora family) and SELinux enabled at runtime. Likewise for AppArmor.
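Easy to see on a Fedora-family host with SELinux on:

    getenforce                                           # Enforcing
    podman run --rm alpine cat /proc/self/attr/current
    # system_u:system_r:container_t:s0:c123,c456 - per-container label (categories vary)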
Most servers use Debian or Ubuntu. I think this will be great, and maybe even a killer feature that changes the landscape a little, but I don't think it will have as much impact as we wish, at least in the next 5 years.
You're not wrong, but writing SELinux policy isn't that complicated. You can easily look at ausearch output to understand why a constrained process failed, and brute-force a policy using audit2allow. As the policy writer becomes more familiar with SELinux and their app, they can write better policy.
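The brute-force loop looks roughly like this (the module name is just an example):

    ausearch -m avc -ts recent                        # read the denials
    ausearch -m avc -ts recent | audit2allow -M myapp_local
    semodule -i myapp_local.pp                        # install the generated module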
I do know this, I'm currently putting together a training course on authoring SELinux policy.
Surely the fact that 'disabling SELinux' is the top result on the subject in Google or Stack Overflow tells you that you'd be in the minority of developers who like working with it and find it easy to do so.
I think there's more to it than simply running an app without receiving an AVC complaint in auditd: you need to be able to test that the controls you put in place actually protect the application in some way, and that does not come for free with audit2allow and other such generative tools.
The problem I found (on CentOS 8) is that audit sometimes denies but nothing is logged. I found this to be the case when an Apache script tries to kill another process. It required 2 separate policies: one which audit2allow came up with, and another I had to figure out myself after a whole bunch of time scouring Stack Overflow. After that I just gave up on SELinux and turned it off, as I just couldn't trust it.
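For what it's worth, "denied but nothing logged" is usually down to dontaudit rules in the shipped policy; they can be switched off temporarily to surface the hidden AVCs:

    semodule -DB    # rebuild policy with dontaudit rules stripped
    # ...reproduce the failure, then check ausearch for the newly visible AVCs...
    semodule -B     # restore the dontaudit rules afterwards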
If it actually did what it was supposed to do in a reasonable manner, people would use it.
But for applications with a large feature set - e.g. a web browser - if the policy author didn't use a particular feature - e.g. U2F security key support - you might be introducing a new source of problems that only advanced users can easily solve.
Not that I imagine Amazon Linux is used for web browsing very often....
I don’t understand why this is based on Fedora. Isn’t that more of a desktop distro…? And this seems more aimed at virtual machines running on EC2…? Or am I missing something?
It’s also interesting that at the same time Amazon is sponsoring Rocky Linux: https://rockylinux.org/sponsors/ (Which is based on Red Hat Enterprise Linux.)
"Our release cadence (new major version every 2 years) best lines up with a highly predictable release cadence of an upstream distribution such as Fedora."
"We believe that having Fedora as upstream allows us to meet the needs of the customers that we talked to in terms of flexibility and pulling in newer packages."
It depends on how you define stability. Fedora packages are very stable in terms of bugs, but changes between versions might cause extra work. However, many run their services in containers anyway, and you can use the latest packages on your host.
This is the whole reason people use CentOS and Debian server-side: old, stable, and most importantly, security-patched. If you need the newest version of your dev stack, just install it on top of the old stable OS base, so you only have to worry about your dev stack.
> CentOS and Debian server-side: old, stable, and most importantly, security-patched
Historically, CentOS has been very slow to release security patches, compared to upstream (RHEL).
And for anything non-critical (but still often high severity), Debian stable tends to receive fixes a lot later than unstable, and sometimes never, due to the need to backport.
Fedora is the source and integration space for many things these days, not just Fedora Workstation any more. It's the upstream for RHEL/CentOS, but it also has a ton of editions and spins, including Fedora CoreOS, Fedora IoT, Fedora Silverblue, etc.
Fedora is the upstream for RHEL, and was the upstream for CentOS. While many folks use Fedora as a desktop OS alternative to Ubuntu, Fedora was not designed with desktops in mind.
You are correct that it is designed for EC2 instances; it's the de facto default image for EC2, though many folks choose an Ubuntu image instead.
Looks interesting. SELinux by default is certainly a win; it seems that Linux has finally hit a tipping point where SELinux is a reasonable option (i.e. someone else is going to do the work for you).
Unfortunately I'm just way more used to debian based systems, and I feel like having a mismatch in production would just lead to friction.
RHEL running with SELinux enabled has been a thing since I worked at Red Hat 12 years ago, and Amazon Linux 2 was based on a CentOS upstream that had the capability of running that way. All certification had to happen with SELinux enabled, any distro-provided service was set up to run with full restrictions, and it was on by default for all Professional Services work.
However it became a problem once you used 3rd party software as step 1 of most install guides was to disable SELinux.
In Red Hat or CentOS it was enabled by default as well for a long while. The problem was that if you installed custom software (not packaged by the distro) you had two options:
- create and install SELinux rules for it
- disable SELinux
Unfortunately most did not bother to learn how to do the first option, so they went with the second.
Besides the sibling answers, it has been enabled by default on Android for quite some time now; it is one of the mechanisms by which they enforce the NDK being mostly about extending the Java/Kotlin userspace with native code and nothing else.
The irony with Android is that, from the userspace point of view, it doesn't matter that it runs on top of the Linux kernel.
So while Android is the Linux distribution that takes advantage of almost all the security knobs available - SELinux, seccomp, eBPF, userspace drivers, ... - all of that is transparent to apps unless they try to see behind the curtain.
SELinux has always been a reasonable option, but it's just scarier than what people are used to. I used Fedora for a couple of years and was surprised by how straightforward it was once I understood it.
I work alongside a small team maintaining quite a lot of machines on AWS. They're struggling (IMHO) to manually apply all of the security patches their scanning tool identifies. My theory is that Amazon Linux gets patched frequently, and so they'd be better off spending time normalizing our EC2 infra so that every instance is running Amazon Linux, and then work on an easy rollout mechanism to deploy the latest version.
Has anyone got any thoughts on this? It wouldn't obviate the need for patching completely, but I feel like AWS is already doing some of this work for us, so we should take advantage.
For those few AMIs that are long-lived, AWS SSM Patch Manager is your friend. Naturally, take care to roll out patches in rolling batches; you don't want to apply a broken patch everywhere on the same day :)
I second this, we use it to manage a bigger fleet with a few hundred machines. One thing to keep in mind though is that it will not apply kernel updates (as those require a reboot) so you still need to account for it.
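Hedged sketch of how a run gets kicked off (the tag key/value is a convention of ours, not anything AWS mandates):

    aws ssm send-command \
      --document-name "AWS-RunPatchBaseline" \
      --targets "Key=tag:PatchGroup,Values=staging" \
      --parameters 'Operation=Install'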
Every mainstream upstream Linux vendor is continuously pushing updated AMIs. It shouldn’t really matter whether you solve this with Ubuntu or Amazon Linux or RHEL/CentOS.
Sounds like you need a better process / automation for rolling updates. Either continuously rebuilt golden images, rolling security patches, or turning on your distro's unattended-upgrade mechanism could be solutions depending on your environment.
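For the unattended-upgrade route on Fedora/RHEL-family hosts (presumably including AL2022, though I haven't verified that), dnf-automatic is the usual mechanism:

    sudo dnf install -y dnf-automatic
    sudo systemctl enable --now dnf-automatic.timer   # behavior set in /etc/dnf/automatic.conf

On Debian/Ubuntu the equivalent is the unattended-upgrades package.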
For the life of me I never got Image Builder working in a decent state.
I opted for Packer and I've been very happy with it. Though with that said I'm still using AWS SSM Patch Manager for a few outliers that are long lived.
Looking at you, Okta AD agent that can only be programmatically installed using AHK. :-/
It was a little strange to set up, I remember it taking a while/a lot of experimentation... But in the end it's just running userdata, and/or "component" scripts, and baking that into the AMI. It's been happily updating and switching out Launch Template versions for our ASGs (for reasons each pipeline can only push to 5 LTs).
I guess I should write up a blogpost, because... the documentation is kinda garbage.
I never got around to using packer properly so can't compare.
Yes, one of the core benefits of a provider like AWS is that they provide tooling to treat individual instances as immutable entities that you simply replace without any interruption to your users. You should focus on expressing the infrastructure as code and using mechanisms like ASGs to roll out new instances based on the latest Amazon provided AMIs.
If you can, definitely standardize on as few distros as possible. It'll make applying patches (and learning when things go wrong, because they will) much easier.
We used to have all sorts of distros that people just felt like using without worrying about their maintainability. We kept fighting fires to keep everything running. Once we standardized on a single distro (CentOS at the time), everything started working much more smoothly. We could have picked Debian, Ubuntu, it doesn't matter.
That being said, Amazon Linux 2 is pretty well maintained. Most things (all?) that work on RHEL will work on it. You may need to use 3rd-party repos if you want really new stuff (e.g. PHP), but that's inherent to such LTS releases. That situation is expected to improve with what adopting Fedora brings in AL2022, but I need to catch up.
Yep, we do this and it works well - you can either trigger a server refresh from SNS (AWS notifies you of certain AMI updates) or, as we do, just rebuild the underlying fleet each week with the most current AL2 AMI.
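The latest AL2 AMI ID is published as a public SSM parameter, which makes the weekly rebuild easy to wire up:

    aws ssm get-parameter \
      --name /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
      --query 'Parameter.Value' --output text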
There are really two primary camps - Red Hat-based (CentOS, Rocky Linux, Amazon Linux, etc.) and Debian-based (Debian, Ubuntu, etc.). There are of course many other bloodlines, but these are the most common in production environments, and more specifically cloud environments. If you are familiar with one version of Linux that is RH-based, you will tend to gravitate to others with similar DNA. Likewise, if you come from Debian/Ubuntu you will tend to stick with those. At the end of the day they are both Linux, but each has its own approach to configuration, where things go, package management, etc.
You really can't go wrong with either - use what you prefer.
FWIW, the real brunt of my question was why one would go with a cloud-provider-specific operating system over one from a group like Canonical or Red Hat: I would naively expect it to have less support, and particularly less ecosystem-wide understanding and experience, while not being available for other systems - so it would seem like an easily-avoidable source of vendor lock-in. If I were part of Camp Red Hat I would personally use CentOS, not "Amazon Linux", unless there were some extremely good reason why Amazon Linux specifically was awesome.
AWS’ flavor of Linux is open source, though. You can run it anywhere, not just Amazon. I don’t see this as a vendor lock in issue, personally.
Ideally you build your software such that the OS is just an implementation detail that’s abstracted away. In the server world a switch from RHEL to Ubuntu is not as hard as a move from, for instance, Google BigTable to AWS DynamoDB
Besides going the path of least resistance in AWS, possibly to get OS & package support from AWS if there's already an enterprise-level support plan in place, rather than needing to buy other support subscriptions (eg, RHEL)?
AL2 is able to run in other places, so there doesn't seem to be much vendor lock-in compared to a service like DynamoDB, though.
It provides newer versions for a few key packages, e.g.: Docker 20.10, PostgreSQL 13, Ruby 3.0, Kernel 5.10, nginx 1.20, PHP 8.0, Python 3.8, Redis 6.2, Go 1.15, Rust 1.47, etc.
Some newer packages, e.g. OpenSSL 1.1.1 and zsh 5.7 are provided in the main repo.
Outdated packages weren't a major pain point in my experience. The bigger issue is the relatively small selection of packages.
These are either available via 3rd-party repos (e.g. NodeJS) or EPEL (e.g. libsodium), or by recompiling Fedora SRPMs. That can be an inconvenience, but it's not a big deal overall.
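The SRPM route, roughly (libsodium just as the example from above; needs rpm-build, dnf-plugins-core for the download plugin, plus whatever build deps the spec wants):

    dnf download --source libsodium
    rpmbuild --rebuild libsodium-*.src.rpm
    ls ~/rpmbuild/RPMS/x86_64/              # resulting packages land here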
I hope the situation will improve once AL2022 is out, as Fedora comes with a much wider selection of packages.
...and with this new version, great SELinux support too (because of Fedora). Some also don't like Ubuntu's push for Snaps.
I think SELinux is one of the biggest differences and the hardest to adapt to (as changing apt to dnf is not hard).
If you want a good starter on SELinux, my whole book on deployment[0] is SELinux ready with a full dedicated chapter on SELinux and a SELinux cheatsheet. Today also with a 33% off for Black Friday ("blackfriday" discount code at checkout).
I would prefer Fedora, but if I am on AWS and Amazon Linux is the one that gets the awesome Amazon support, then choosing Amazon Linux might be compelling.
Of course I would prefer they use Fedora and contribute to Fedora directly.
I can see Linux eclipsing all the current OSes; it already happened with smartphones, IoT, and the other little things (I forget what they're called).
The only remaining piece is the desktop segment.
macOS has a Unix environment, so it'll stay relevant (for how long?).
Windows has WSL, but it's slow, and I don't see myself using it since the host OS is a giant piece of crap.
MS missed a chance with Win11: they could have gone full steam on ARM with a Linux distro, 100% native Android support, 100% cloud-native support, 100% Unix support as a host OS. I wouldn't use it myself because I despise the company and its culture, but I can see the potential, and I smell a huge missed opportunity.
Amazon is getting it right, even though it's exclusively targeting cloud usage.
Marketing-wise it's great, and consistent with their offering.
For Linux to conquer the desktop you'd need it to beat out Windows or Mac for market share. For it to do that, you'd need it to have competitive usability for everyday people, and this is still far off. When I use my Linux box at work I have to google for things like "how do I enable this resolution for my monitor which isn't showing up" and then punch a bunch of commands into the terminal.
The advantages of Windows and Mac these days are that a lot of stuff 'just works', and that due to their 20-30 year history their desktop application ecosystems are much richer and much more widely used. Their user interfaces are also friendlier in general.
There's no technical reason why these faults cannot be overcome, but there are significant hurdles in getting a third OS to gain major market share from the incumbents, MS and Apple. The reason Linux has market share in servers is that the experience is better to develop on and it is free. The reason it has market share in mobile is that it was free and Google could build on top of it, and that mobile was a brand new market, so there were no incumbents with decades of history. I don't see those conditions replicated for desktop, especially when you consider that desktop is in decline and therefore less attractive to spend capital on.
If you want to make Ubuntu as nice as MacOS I think you need a private company willing to spend money and time in a concerted effort to get it to that point, which IMO won't happen.
> If you want to make Ubuntu as nice as MacOS I think you need a private company willing to spend money and time in a concerted effort to get it to that point, which IMO won't happen.
You mean because they "failed" with Unity? (Which was technically not a failure, I'd say - just a political/funding failure.)
It seems to underline the necessary financial backing. Canonical barely pulls in 100 million in earnings.
Why there aren't billions of dollars available for Linux alone from governments worldwide, for securing and improving Linux as it becomes the backbone of the worldwide data processing and security infrastructure, is mystifying.
Why NVidia, Nintendo, AMD, Intel, Sony, Samsung, all the Chinese handset makers, Compaq, Dell, and the rest don't provide a billion a year to the Linux desktop - so they can have market leverage against Microsoft, and so they can push their hardware out to the OS for use by the public at large within months (as opposed to decades with Microsoft) - is beyond me.
The funny thing with the M1 architecture and macOS: it highlights how the entire PC stack, by tying itself to the Microsoft behemoth, has been cornered. How do you move to a competitive PC ARM architecture without waiting about 5 years for MS to move its bloated carcass to support it properly at the OS level (even if it has an ARM-compatible Windows, let's face it, it doesn't have the software support or organization behind it)?
Meanwhile, Linux can support an ARM arch right now, with practically all the necessary software.
If someone smart were steering Linux, they'd have long ago been coordinating a desktop alliance, secret or not, and actively selling it to the major powers who could fund it with chump change.
Intel makes 20 billion a quarter. AMD 4 billion. NVidia 7 billion.
And then there's the US military.
And then there's Google Cloud, AWS, and all the other cloud wannabes besides Azure. Linux is the reason your platforms mint money. Wouldn't you like companies and people to move to cloud-based desktops?
Why the EU doesn't fund this for economic competition with American software mystifies me. Why Africa, Asia, and South America don't fund it for language support and an affordable computing ecosystem for their countries is beyond me.
I'm not a Torvalds hater, but while a super-technical person leading Linux was fantastic for the first 10 years, the last 15 really needed a different skillset.
100% agree. Getting an M1 Mac has solidified this feeling for me too; I still want to be a Linux desktop user on philosophical grounds and will never go back to Windows, but as I age I have rapidly lost interest in tinkering and now want things to Just Work. Actually, it isn't really the distro which needs fixing, but rather the package management space: AppImage/Snap/Flatpak need to be consolidated into a single open standard which is as easy to use as Homebrew.
Essentially none of the organizations you've mentioned care about desktop Linux. They may care about the kernel, and many of them do help fund aspects of kernel development. But desktops? It's not relevant to them, or their plans.
You're mostly right, but most government orgs care about desktops, Intel/AMD care about desktops, IBM should care about desktops if they could get a foot in the door.
Amazon should be interested in providing a great remote desktop solution. IMO that is a huge untapped market, especially in BYOD orgs: sure, bring your crap device, but you RDP in for anything you need business-wise that needs to be secure.
Really? And what would they say? Sorry for the blocking "App Store"? For the BS we started with Mir and never finished with Ubuntu Touch? That all our "home-made" code is proprietary?
If Microsoft actually follows through with their plan to desupport Windows 10 in 2024, then (guessing) over half of their installed base will be orphaned.
Linux now bundles an NTFS driver.
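(The in-kernel ntfs3 driver, merged in 5.15, mounts read-write without ntfs-3g; the device path below is illustrative:)

    sudo mount -t ntfs3 /dev/sdb1 /mnt/windows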
What if a rescue distro existed at that desupport date - one that erased \Windows and ran all the applications under Wine or appropriate emulation?
Microsoft's desktop dominance would be over on that date. They would cling to corporate desktops, but the general consumer market would be lost.
> Compared to Plan 9, Unix creaks and clanks and has obvious rust spots, but it gets the job done well enough to hold its position. There is a lesson here for ambitious system architects: the most dangerous enemy of a better solution is an existing codebase that is just good enough.[1]
Linux might be "better"; unfortunately Windows is just "good enough" for most people to not care.
While I would love for this to happen, I just don't see it at all.
There are literally millions of Windows workstations or laptops across corporate campuses in the US, because at the end of the day what 95% of white collar workers need to do their jobs is MS Office and *maybe* one industry-specific LoB app.
You have been able to run WSL2 on Windows 10 Insider builds for a long time now (since 2019?). Is it still not possible to switch WSL versions for a Linux distro running on Windows Server?
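For reference, on client Windows the switch is just the following (the distro name "Ubuntu" is an example; whether Server accepts it I can't confirm):

    wsl --list --verbose          # shows each distro and its WSL version
    wsl --set-version Ubuntu 2    # convert a distro to WSL2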
I used it from 2012-2015 on my main machine, and now, after a mix of Windows, macOS and Chrome OS, I'm back to Fedora as of this week. I enjoy Fedora, and I really like GNOME.
It was also the only real option for me given Ubuntu's continued push for Snap, which I just don't like.
I used Arch Linux for 10+ years, then briefly macOS and Windows, and moved to Fedora as of yesterday as well. Red Hat is employing and paying for most of the desktop and server userspace developers (GNOME, systemd, pipewire, podman, ostree), so I might as well use their official distro. It's the closest thing to a bleeding-edge, forward-looking distribution that dictates the pace for everybody else to follow. I like that.
I liked Ubuntu when it first released, but modern-day Canonical is a shell of its former self. Snap is their sad attempt at EEE from Microsoft's playbook. And I have never liked Debian, even though it's been running on my servers since forever.
> Red Hat is employing and paying for most of the desktop and server userspace developers (GNOME, systemd, pipewire, podman, ostree), so I might as well use their official distro.
Yeah, that's pretty much my methodology for why I went Fedora too. Although I didn't know Red Hat was that deep into upstream work (awesome!), but it makes total sense.
> I liked Ubuntu when it first released, but modern-day Canonical is a shell of its former self. Snap is their sad attempt at EEE from Microsoft's playbook.
Yup. It's sad all around. Ubuntu was my first ever non-Windows OS back in 2006 or so. I feel like every few years I mentally shout "YOU WERE THE CHOSEN ONE!" to Canonical, haha.
An LTS distribution based on Fedora (and NOT RHEL) is something I've been wanting for a long time, but I don't think this is really gonna be for the non-cloud general use case?
Same here. It's not clear to me whether this distro is viable for the desktop, or how exactly they'll support the Fedora packages past Fedora's lifetime (or whether they'll even supply all of the Fedora repos).
But I can't do direct upgrades between RHEL versions. I want Ubuntu LTS style support but with Fedora and direct upgrades between LTS versions.
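Fedora itself does have that direct path, for what it's worth - the missing piece is the LTS window, not the mechanism:

    sudo dnf install dnf-plugin-system-upgrade
    sudo dnf system-upgrade download --releasever=35   # target release is an example
    sudo dnf system-upgrade reboot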
For what it's worth - I'm currently running openSUSE Leap and MicroOS (the immutable openSUSE Tumbleweed variant) - and they hit a nice middle ground, but I still have to do major upgrades for my Leap systems every year or so between their point releases, which is kind of a pain. I just wish I could use Fedora with ~3 years of support, because that's the system and tools I'm most familiar with (we used CentOS at work, recently migrated to Alma).
Interesting. For me the amazon-linux-extras part was the most annoying bit. Using automation tooling (Terraform for instance deployment and Ansible in the User Data of instances), it was so annoying to work with. Falling back to the shell executor in Ansible to get something installed is a PITA.
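The shell fallback in question looks something like this (the topic name is just an example):

    amazon-linux-extras enable postgresql13
    yum clean metadata
    yum install -y postgresql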
Also good that they're finally getting off a Python 2 default in Amazon Linux - only 2.5 years after its EOL.
Cloud images are harder to find as ISOs... They use this 'cloud image' concept where they distribute something very similar or identical to an OVA, i.e. the hard disk plus a manifest.
3) AWS will provide Application Binary Interface (ABI) compatibility for all other packages in core unless providing such compatibility is not possible for reasons beyond AWS’s control.