Software Infrastructure 2.0: A Wishlist (erikbern.com)
321 points by klintcho on April 19, 2021 | 195 comments



Years and years ago I saw an advertisement by a SAN array vendor where their gimmick was that they supported very cheap read/write snapshots of very large datasets, in combination with a set of transforms such as data masking or anonymisation.

Their target market was the same as the OP's: developers who need rapid like-for-like clones of production. The SAN could create a full VM server pool with a logical clone of hundreds of terabytes of data in seconds, spin it up, and blow it away.

The idea was that instead of the typical "DEV/TST/PRD" environments, you'd have potentially dozens of numbered test environments. It was cheap enough to deploy a full cluster of something as complex as SAP or a multi-tier Oracle application for a CI/CD integration test!
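To make the copy-on-write idea concrete (this isn't the vendor's specific product, just the same mechanism as exposed by e.g. ZFS; pool and dataset names are made up), a clone of a multi-terabyte dataset is metadata-only and takes seconds:

  # snapshot production data, clone it read-write for a numbered test env
  zfs snapshot tank/prod-data@ci-run-42
  zfs clone tank/prod-data@ci-run-42 tank/test-env-42
  # ... run the integration test against the clone ...
  zfs destroy tank/test-env-42
  zfs destroy tank/prod-data@ci-run-42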

Meanwhile, in the Azure cloud: Practically nothing has a "copy" option. Deploying basic resources such as SQL Virtual Machines or App Service environments can take hours. Snapshotting a VM takes a full copy. Etc...

It's like the public cloud is in a weird way ahead of the curve, yet living with 1990s limitations.

Speaking of limitations: One reason cheap cloning is "hard" is because of IPv4 addressing. There are few enough addresses (even in the 10.0.0.0/8 private range) that subnets have to be manually preallocated to avoid conflicts, addresses have to be "managed" and carefully "doled out". This makes it completely impossible to deploy many copies of complex multi-tier applications.

The public cloud vendors had their opportunity to use a flat IPv6 address space for everything. It would have made hundreds of points of complexity simply vanish. The programming analogy is like going from 16-bit addressing with the complexity of near & far pointers to a flat 64-bit virtual address space. It's not just about more memory! The programming paradigms are different. Same thing with networking. IPv6 simply eliminates entire categories of networking configuration and complexity.

PS: Azure's network team decided to unnecessarily use NAT for IPv6, making it exactly (100%!) as complex as IPv4...


Your comment makes me realize how lucky I am at my job: we do this production environment replication via “SAN magic” and it is so insanely useful.


How do you deal with replicating customer data / data the developer / tester should not have access to?

Do you copy that into the ephemeral test environment as well? How are permissions managed for accessing that data? Does it have the same restrictions as in prod? (i.e. even in this env the developer / tester has no access to systems / data they would not have in prod).


Unfortunately I don't have a good answer for you - I work for a trading firm so there is no customer data. The minimal data that developers aren't allowed to see is restricted in both environments, but in practice the software we build doesn't touch this data anyway.


You know how you can build an app's tests to generate new tests based on a schema, to test all possible permutations of a data structure using all possible data inputs? That, but for databases.

In Security this is known as "fuzzing", but instead of looking for security vulnerabilities, you're generating test data for databases. Your test data should cover all possible customer data inputs. Then you not only don't need real customer data for testing, you catch bugs before a customer does.
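A minimal sketch of what that can look like (assuming Python with the hypothesis library; insert_customer and fetch_customer are hypothetical stand-ins for your own data layer):

  from hypothesis import given, strategies as st

  customer = st.fixed_dictionaries({
      "name": st.text(min_size=1, max_size=100),
      "email": st.emails(),
      "age": st.integers(min_value=0, max_value=130),
  })

  @given(customer)
  def test_customer_roundtrip(row):
      # round-trip generated "customer" rows through the real schema
      cid = insert_customer(row)
      assert fetch_customer(cid) == row

Hypothesis then shrinks any failing input to a minimal reproducer, which is often more useful than a real customer record anyway.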


That isn't useful when you have a reported bug and need to investigate and debug. Which is the scenario I am assuming this "quickly and easily copy prod to ephemeral test environment at the press of a button" setup is designed to make easy.


Using real person-related data for testing was made illegal in Europe by GDPR. Even with explicit consent of each affected person, the tester needs to make sure that the test really requires real person-related data, otherwise it would violate the principle of "privacy by design" from GDPR. Without consent it is entirely forbidden, as per my understanding of the GDPR.

Due to GDPR I had to implement an anonymization feature for the production database anyway. So I am able to clone the production database for testing and run the anonymization function on every entry in the test database before using it.
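For what it's worth, the anonymization pass itself can be quite small. A rough sketch of the idea (Python + psycopg2; the table and column names are invented for illustration):

  import uuid
  import psycopg2

  conn = psycopg2.connect("dbname=test_clone")  # the cloned test DB, never prod
  with conn, conn.cursor() as cur:
      cur.execute("SELECT id FROM customers")
      for (cid,) in cur.fetchall():
          fake = uuid.uuid4().hex[:12]
          cur.execute(
              "UPDATE customers SET name = %s, email = %s, phone = NULL WHERE id = %s",
              ("user-" + fake, fake + "@example.invalid", cid),
          )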


This sounds like a needlessly strict interpretation of GDPR. Taken from the UK regulator's site:

> The lawful bases for processing are set out in Article 6 of the UK GDPR. At least one of these must apply whenever you process personal data:

> (a) Consent: the individual has given clear consent for you to process their personal data for a specific purpose.

> (b) Contract: the processing is necessary for a contract you have with the individual, or because they have asked you to take specific steps before entering into a contract.

> (f) Legitimate interests: the processing is necessary for your legitimate interests or the legitimate interests of a third party, unless there is a good reason to protect the individual’s personal data which overrides those legitimate interests. (This cannot apply if you are a public authority processing data to perform your official tasks.)

...

> Legitimate interests is the most flexible lawful basis for processing, but you cannot assume it will always be the most appropriate. It is likely to be most appropriate where you use people’s data in ways they would reasonably expect and which have a minimal privacy impact, or where there is a compelling justification for the processing.

> The processing must be necessary. If you can reasonably achieve the same result in another less intrusive way, legitimate interests will not apply.

> You must include details of your legitimate interests in your privacy information.

I included the legitimate interests bits because they seem most relevant to testing, but even if testing is not considered "necessary" in a particular use case, there still remain at least two more criteria that might satisfy the use of live data in testing, including explicit user consent. Much of the focus of GDPR is on privacy-invasive processing and the prevention of harm; I think a lot of the fuss around it can be dispelled when viewed from this angle.


I admit that using real person-related data in a test database for ordinary software development (not considering fixing a production system here) might be legitimate according to GDPR based on contract and consent. But the hurdles are so high that I cannot think of any realistic use case that involves more than a couple of people at the maximum. Contract and consent are subject to specific requirements here. The consent must typically be given voluntarily, i.e. without any pressure or coercion, it must not be coupled to another condition, and it must be given for each specific purpose individually. And I doubt that it is permitted to completely exclude the principle of "privacy by design" in a contract. And even if this were the case, you would need this consent from each customer whose data is in the database, while you must not make your service to the customer dependent on their consent.

As to legitimate interest, Recital 47 of GDPR states: "The legitimate interests of a controller, including those of a controller to which the personal data may be disclosed, or of a third party, may provide a legal basis for processing, provided that the interests or the fundamental rights and freedoms of the data subject are not overriding, taking into consideration the reasonable expectations of data subjects based on their relationship with the controller. Such legitimate interest could exist for example where there is a relevant and appropriate relationship between the data subject and the controller in situations such as where the data subject is a client or in the service of the controller. At any rate the existence of a legitimate interest would need careful assessment including whether a data subject can reasonably expect at the time and in the context of the collection of the personal data that processing for that purpose may take place. The interests and fundamental rights of the data subject could in particular override the interest of the data controller where personal data are processed in circumstances where data subjects do not reasonably expect further processing."[1]

Typically one does not have a contract about software development with a customer whose personal data is stored in a database, but about a specific service, such as selling something to him via a Web-shop. So if Uncle Joe is buying something from the Web-shop, can he reasonably expect that his personal data is used in developing the Web-shop software? Most likely not. Ergo, there is no legitimate interest to use his data for that purpose.

[1] https://gdpr-info.eu/recitals/no-47/

[Edit for clarity.]


This is an interesting question.

I've seen 1:1 copies of prod databases running in test environments. Very convenient from a dev perspective but a no-go for many customers.

I've also seen test envs using fixtures only. More privacy-compliant, but it makes it harder for in-depth testing or to uncover weird edge cases.


> Meanwhile, in the Azure cloud: Practically nothing has a "copy" option. Deploying basic resources such as SQL Virtual Machines or App Service environments can take hours. Snapshotting a VM takes a full copy. Etc...

To add to the list of etc, I can't query across two databases in azure SQL.


You can't even copy an Azure SQL Database across subscriptions! This is something that comes up regularly at my day job: there's an Enterprise Agreement production subscription, and an Enterprise Dev/Test subscription for non-production that's much cheaper. This would be great, except for stupid limitations like this.


I believe you can now query across DBs in Azure via a Linked Server. It still feels a little clunky to me though.

https://docs.microsoft.com/en-us/sql/relational-databases/li...


GCP doesn’t even give you IPv6 at all. Talk about limitations...


> There are few enough addresses (even in the 10.0.0.0/8 private range) that subnets have to be manually preallocated to avoid conflicts

I hate to be that guy, but it's 2021 and we should start deploying stuff on IPv6 networks to avoid this kind of problem (besides other kinds of problems)...

edit: ok I just saw the PS about ipv6 and Azure... sorry.


Do you remember the name of the original SAN array provider?


It could be NetApp, they had a huge marketing wave 2-3 years ago. In fairness, their appliances are very good, I've seen them in action both on-prem and in the cloud (on AWS). The pricing however... They're very expensive, particularly so in the cloud (since you're paying licensing on top of cloud resources).


ONTAP Cloud? That's what their system was called.


Yup, we were beta testers or something at the time


I still have the 8.1 manuals laying around here. Nifty technology but pricey, indeed.


I often want a copy of our Azure SQL database dumped to my local dev MS SQL instance. At the moment I do this using SSMS, where I generate a script of the Azure SQL database and use sqlcmd to import that script. It is a really painful process, and sqlcmd sometimes just hangs and becomes unresponsive.

Why is there no simple export/import functionality?


You can export Azure SQL Databases to a "BACPAC" file, which is much faster. This can then be restored onto on-prem SQL equally fast.

It's about 20-30 lines of PowerShell to do this, and then you can just double-click the script and drink a coffee.
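For a rough idea of the shape of such a script, here is the core of one way to do it, using Microsoft's SqlPackage export tool driven from Python (a hedged sketch; the server, database and credentials are placeholders, and the real thing needs error handling plus a restore step on the on-prem side):

  import subprocess

  subprocess.run([
      "SqlPackage", "/Action:Export",
      "/SourceConnectionString:Server=tcp:myserver.database.windows.net;"
      "Database=mydb;User ID=me;Password=...;Encrypt=True;",
      "/TargetFile:mydb.bacpac",
  ], check=True)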

Alternatively, for data-only changes you can use Azure SQL Sync, but I've found that it's primitive and a bit useless for anything complicated...


> Azure's network team decided to unnecessarily use NAT for IPv6, making it exactly (100%!) as complex as IPv4...

People like the NAT. It's a feature, not a bug. If your selling point for IPv6 is "no more NAT" then no wonder it never went anywhere!

P.S. No, "you're doing it wrong" and "you're not allowed to like the NAT" are not valid responses to user needs.


> "you're not allowed to like the NAT"

Well, the bigger question might be why do they like NAT?

If it's about having a single /128 address so they can do ACLs then that's easily fixed by just lowering the CIDR number. (unless you have an ancient version of fortigate on prem, which likely doesn't work with ipv6 anyway).

If it's about not having things poking at your servers through the NAT then the "NAT" really isn't helping anything, it's the stateful firewall doing _all_ the work there and those things are entirely independent systems. -- They're just sold to consumers as a single package.


Again: "you're doing it wrong" and "you're not allowed to like the NAT" are not valid criticisms.

People like NAT because it's an easy batteries-included way to manage, secure and understand your LAN.

Taking it away and forcing them to migrate to an incompatible zoo of firewall technologies for no benefit is asinine.

> They're just sold to consumers as a single package.

Exactly. How in the world is this a bad thing now? Do we really want to make network security for the average do-it-yourself home LAN harder?


It's less a "you're doing it wrong" or "you're not allowed to like the NAT" and more a "you've incorrectly attributed your protection to NAT".


Trying to separate "NAT" from "stateful firewall" is pointless and only causes pain to the end user.


If this helps you understand, NAT and stateful firewalling are often the same code base. NAT is basically just a sub-feature, purely for translation, that is it.

The NAT feature solves the problem of:

  - public ipv4 exhaustion
You get in return:

  - higher latency

  - higher user & administrator mental overhead

  - limited public ports

  - protocol breakage (TCP & IPSec)
If you _remove_ NAT:

  - users no longer have to figure out why they can’t connect to their local web/minecraft server using the public address/domain even after a port forward (the very common NAT hairpin problem)

  - do not need to google “what’s my IP” as the machine knows its address in any context

  - do not have to tunnel to a VPS, or use a VPN in cases of CGNAT (ISP provided double-NAT) to expose a server
 
  - no longer have to maintain a split DNS zone for public/private addresses of a *public* server

  - no longer have to google “how do I enable NAT traversal for this VPN”

  - no longer have to learn what addresses are OK and not OK to use locally (rfc1918)

  - no longer get confused into thinking NAT is a hard-requirement for any and all routing (ex: green admins whose only experience is a basic pfsense install adding 3 layers deep of NAT in labs)
These are all problems I’ve either been paid to help figure out, or done for free on forums/discord, regularly.

Removal of NAT requires no user interface change, the same “port forward” terminology AND user interfaces can remain. There can still be a port forward tab where you enter an address and port. SOHO firewalls/routers that use stuff like iptables ALREADY add both NAT and normal firewall rules, and hide that fact. The security of user devices with a proper firewall/router does not change, the exposed interfaces do not need to change, everything gets simpler. I repeat, the basic abstraction presented to users does not need to change. The only reason to insist on NAT is ignorance; removal of NAT is removal of complexity.
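As a concrete illustration of that last point (interface names and addresses invented), the "port forward" button on an iptables-based box generates roughly the pair of rules below; drop the translation and only the plain firewall rule remains:

  # IPv4: one NAT rule plus one firewall rule
  iptables -t nat -A PREROUTING -i wan0 -p tcp --dport 25565 -j DNAT --to-destination 192.168.1.50:25565
  iptables -A FORWARD -i wan0 -p tcp -d 192.168.1.50 --dport 25565 -j ACCEPT

  # IPv6: the same "port forward", minus the translation step
  ip6tables -A FORWARD -i wan0 -p tcp -d 2001:db8::50 --dport 25565 -j ACCEPT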


Nothing changes; the firewall instead says "do you want this port open to this device" [y/N]:

Personally I think this is much easier to reason about.

And all I'm saying is that the NAT part of it absolutely does nothing to defend you, it's trivially defeated, people just conflate the two.


> the firewall instead says "do you want this port open to this device" [y/N]

That's exactly what I don't want to do. I have over 20 devices in the home LAN at any given point in time; why do you want to make my life difficult for no good reason?


then don't click the "I want to open this device to incoming traffic" button?

Honestly I think you don't understand what NAT is.

Stateful firewalls basically work by watching your connections and then allowing the return traffic through. All firewalls are stateful; there are "stateless ACLs" in networking, which are stupid and don't watch things, but we're not talking about those... in fact 99% of internet users will never interact with a stateless ACL.

What happens when you make a connection is that your router adds your state to its "state table" _and_ pops open a port on your gateway to allow return traffic through; if you did not have a stateful firewall in place then the whole internet would be able to poke that port.

If you remove the NAT the only thing that happens is that your router doesn't have to pop open a port and route traffic from that port to your device, the stateful firewall stays in place, meaning that random devices on the internet CANNOT TALK to your internal network at all, unless you manually allow that, which is the same as what happens with port forwarding today.

The only thing you "lose" is that your whole house looks like one device.

You gain a significant reduction in latency, online games will work better, and p2p networking (such as VoIP) will have significantly fewer problems, because the whole internet was designed without NAT in mind; NAT is genuinely a terrible hack.
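To make the "stateful firewall does all the work" point concrete, a minimal illustrative IPv6 ruleset (not copy-paste material; interface names invented) already gives you the "house looks closed from the outside" behaviour with zero NAT involved:

  ip6tables -P FORWARD DROP                                                  # unsolicited inbound is dropped
  ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # return traffic for your own connections
  ip6tables -A FORWARD -i lan0 -o wan0 -j ACCEPT                             # anything outbound from the LAN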


> Honestly I think you don't understand what NAT is.

Yes, I do. It's the "masquerade" rule in my router's firewall rules table.

> The only thing you "lose" is that your whole house looks like one device.

That's a feature, not a bug.

> You gain a significant reduction in latency, online games will work better and p2p networking (such as voip) will have significantly fewer problems

99.99999% of those problems are caused by shoddy Wi-Fi. IPv6 does nothing to fix it. (Directional antennas and a standard way to bridge L2 over Wi-Fi are the real solution; expanding the IPv4 address space does nothing.)

Again: what's the benefit of IPv6 to me? So far I only see downsides.


My dude. Please, I beg you, look at what NAT actually is.

You are so mistaken here I can’t help think you’re intentionally trolling us.


What do people like about NAT? I am guessing that the perceived increase in security. But perhaps there are more real or perceived advantages.


What I find common is people conflate NAT with stateful firewalling, and believe that if you lose NAT you lose all forms of edge/perimeter network security. They don't understand that you can still filter and prevent unwanted packets from reaching hosts without NAT.


Of course you can. But why would you? You're replacing something that is simple, easy to understand and works perfectly well with a nebulous something that invites user error and security nightmares.

For example, my (modest) home LAN is five routers, a NAS/media server, a media player, two "smart TVs" and dozens of notebooks and phones connected via Wi-Fi.

What do you propose? Manage a firewall on each of those devices?

I suppose you mean setting up a firewall on the WAN link to block all incoming traffic? How is that different from a NAT? Merely a lack of 'masquerade' setting on the firewall rule? What's the benefit to me and why should I care?

Or do you propose some sort of hybrid scheme to intelligently block traffic while making all my countless devices pingable from the Internet? Not in this timeline, sorry.


Your home network and a cloud datacenter aren’t comparable. Many clouds have host level firewall policies as a core feature, and anyone competent is managing them profile-style using Terraform or an equivalent. It’s really quite easy from that perspective.


> Your home network and a cloud datacenter aren’t comparable.

Of course they are. I didn't need to think about firewall automation before, and now I do. For what gain?

> anyone competent

Not an option for most people. Let's make networking and security things more foolproof, not less, okay?


You really are refusing to listen to what people are telling you.

I have an Internet router, which uses NAT for IPv4, same as everyone else. If I want to punch a hole through for something like RDP or SSH, I have to use unique port numbers because I only have one Internet-facing IP address. Because there are fewer than 4 billion IPv4 addresses for the whole world, all of them are regularly scanned by Bad People for open ports, making this RISKY.

I also have IPv6 enabled on it. No NAT. If I want to punch a hole through for RDP or SSH, I can use the standard ports because each device has a unique Internet-facing IP address. My router alone has 2^64 (millions of trillions) of unique addresses, of which a random 5 or 6 have been allocated. There is no way anyone ever is going to be able to scan these. I can SAFELY open standard port numbers and not have to worry about drive-by attacks.
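A quick back-of-envelope to make that concrete (assuming an attacker who can probe a million addresses per second, which is generous):

  2^64 ≈ 1.8 × 10^19 addresses in a single /64
  1.8 × 10^19 / 10^6 per second ≈ 1.8 × 10^13 seconds ≈ 580,000 years to sweep it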

THERE IS NO OTHER DIFFERENCE.

The router works the same, the firewall works the same, the Internet works the same, the GUI is the same. IT IS ALL THE SAME!

NAT is not magic. It is not a firewall. It is not necessary. It is not beneficial.


Security by obscurity?

This problem is (correctly) solved by VPN.


Nobody is stopping you from running a NAT gateway until 2120 at home. IPv6 solves specific problems in a datacenter context, namely address exhaustion. You'll never run into that problem at home.


Nothing new needs to be proposed. Nothing is being replaced. The current state of things is that whatever your edge device is provides actual security with stateful firewalling and translation with NAT, already. It's simple to understand because most home router products and projects like pfsense make them look like inseparable things that perform the same function. Removal of NAT won't even require a UI change for consumers because all these port forwarding UIs add both a DNAT/PNAT rule and a firewall rule already. You can keep the exact same user interfaces and "port forward" terminology when removing NAT.


> What do people like about NAT?

NAT allows people to be very dumb about networking. Either you "open a port" or you don't.

It works very well as long somebody else is managing the network for you and you just ask for stuff to happen and then that somebody else has to actually make it work.


Replace the word NAT with Firewall and all security-related statements are the same.


At home I really appreciate the NAT. I'm glad not every device has a public IP and gets hammered with attacks 24/7. It's not fail-safe but it definitely adds some security..

As for cloud, no idea what the benefit should be there.


This is what a firewall is for.


Most people use ISP provided routers at home, firewalls in those aren't necessarily great.


It would be literally the EXACT SAME firewall with IPv6 as with IPv4 NAT.

Most IPv6 routers are also IPv4 routers and behave exactly the same for both. They share the same router code, the same firewall, etc...


Virtualization eventually will be seen as the unnecessary layer added to make up for operating systems that lack capability based security.

It's going to take a decade to refactor things to remove that layer. Once done, you'll be able to safely run a process against a list of resources.


Agreed. Operating systems abstract resources; the fact that we needed containers and VMs points to the first set of implementations not being optimal and needing to be refactored.

For VMs, security is one concern; the others would be more direct access to lower levels and greater access to the hardware and internals than just a driver model.

For containers I'd say that the abstraction/interface presented to an application was too narrow: clearly network and filesystem abstractions needed to be included as well (not just memory, OS APIs and hardware abstractions).

I imagine that an OS from the far future could perform the functions of a hypervisor and a container engine and would allow richer abilities to add code to what we consider kernel space, one could write a program in Unikernel style, as well as have normal programs look more container-like.


> one could write a program in Unikernel style, as well as have normal programs look more container-like.

Projects similar to this exist today with https://nanos.org/.

You can run any typical web app (without having to re-write it) in a POSIX compliant Unikernel on most major cloud providers and bare metal. Each service runs in its own Unikernel.

I had its creator on my podcast the other day at: https://runninginproduction.com/podcast/79-nanovms-let-you-r...


Cool. Thanks for the Nanos pointer!

Sadly, two generations of programmers probably need to die first, before we adopt the R&D that's been done in the last decade or three, or before what's been done in the systems that languish in the shade of UNIX gets done again in the mainstream.


Most people use containers as a packaging format - instead of .tar.gz.

People run containers with host networking and bind-mounting everything and nobody minds.

(The story for hosting providers is a bit different, but they're not the main driver for Docker adoption.)


(I'm not holding my breath for the refactor though... We're stuck at a local maximum and we're better at adding stuff rather than removing/refactoring it.

Also- just about EVERYTHING is built on top of these current layers)


Genode is a project in Germany I've been watching for a while... right now I think they are the best chance of getting there. They've been making it closer and closer to the point where I can latch onto it and start using it.

There has to be a microkernel at the bottom of everything, which rules out anything Linux based. Genode does use Linux drivers though, in a contained way.


Yeah, Genode looks awesome, I've been rooting for it for a while.

I love their approach to hierarchical distribution of resources.

I'm embarrassed to admit that I haven't actually booted and played with it yet.


You just subsume layers. Maybe as simple as Xen++ with a containerd frontend.


Yeah, sounds like a good approach.

I liked the direction Joyent was taking with illumos and Triton.

Problem is the fact that the tech is tied to a market, which has different rules and rates of evolution.

Best bet for a refactor is with a new market/platform.


It's not just security. There's incompatible dependencies. Virtualization allows you to run Windows and Linux on the same server, 40 different C libraries with potentially incompatible ABIs, across X time zones and Y locales, and they'll all work.

You can provide the same level of replication and separation of dependencies in an OS, but at a certain point you're just creating the same thing as a hypervisor and calling it an OS and the distinction becomes academic.


I think that eventually every OS will converge on the same API; for now there's APE (Actually Portable Executable) from the Cosmopolitan C library to get one code base working everywhere without a ton of ifdefs.


We've had capability based security frameworks aka MAC (ex: AppArmor) in Linux since 1999 or earlier. Containers (which also existed long before docker) have been popularized for convenience, and virtualization would still be useful for running required systems that are not similar to the host. If anything it looks like we're going towards a convergence with "microvms".


Capabilities are like taking a $5 bill out of your wallet, handing it to your child, and letting them buy ice cream with it.

You delegate $5, nothing more than $5 can possibly leave your wallet as a result.

AppArmor is like putting a vault around the ice cream truck and giving a strict list of who is allowed to buy what ice cream. Hardly the same thing.


Ehh, isn't MAC nearly the opposite of capability based security though? At the core of capability based security is that you don't separate authority to access a resource from your designation of that resource. MAC though seems to go all-in on separate policies to control what gets access to what.


A convenience that only exists when the target hardware and underlying kernel are compatible with the development environment; when that isn't the case, oopla, you need a VM layer in the middle to fake the dev env, or in the dev env to fake the server env.


It already exists in the IBM mainframe. But nobody wants to write apps for it..


Who would want to use those weird, outdated languages and technologies, though?

It's better to use a modern DevOps stack involving Kubernetes, Ansible, YAML, Jinja2, Python, cron jobs, regexes, shell scripts, JSON, Jenkinsfiles, Dockerfiles, etc.


Yep. At this time it wouldn't be great to just imitate the best mainframes' compartmentalization, because we have much better experimental systems. But they are still much better than anything common on commodity hardware.


Likely because IBM mainframes are outside of the reach of hobbyists and SMEs.


No, it's because the "capabilities" built into mainframes aren't accessible by users, only the system administrators. The granularity just isn't there.


That too. But I'd argue that not being able to access or set up a mainframe due to its unaffordability or esotericism is a much more powerful barrier, that makes granularity a moot point.


Virtualization, no. A hypervisor running a Windows kernel and a Linux kernel side by side is not about capability-based security, though you could see it as a cap-based approach of sorts: VMs only see what the hypervisor gave them, and have no way to refer to anything that the hypervisor did not pass to them.

Containers, yes. They are a pure namespacing trick, and can be replaced by cap-based security completely.


Also, VMs make multitenancy really easy.

It allows you to divide up physical computing power across multiple people/organizations etc.

Containers make this kind of distinction far more hazy.


Emulation and API compatibility layers can be done in more ways than a hypervisor-based VM. And many of those other ways perform better.


What are they? Why is the hypervisor approach the standard then?


You can already make Linux run any binary you want for the same CPU architecture; for example, Wine uses this functionality. You can make it natively run binaries from other architectures with a rewriting VM like qemu (although I'm not sure qemu itself can port binaries between architectures, just that it's a rewriting VM). Your VM can enforce security with a sandbox instead of a hypervisor, like the Java, C#, and Node ones do.

Hypervisor is not the standard, it's one of the many ways people do it right now.
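For instance, user-mode qemu will run a foreign-architecture Linux binary as an ordinary process, with no hypervisor or guest kernel involved (the binary name and sysroot path here are made up):

  qemu-aarch64 -L /usr/aarch64-linux-gnu ./server-arm64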


I am fairly certain that virtualization is going to be a core tenet of computing up to and including the point where the atoms of the gas giants and Oort cloud are fused into computronium for the growing Dyson solar swarm to run ever more instances of people and the adobe flash games they love.


I highly doubt that. If so inclined, you can do that today with SELinux, as the granularity and capability are there, but how long would tuning it take? What about updates - will it still work as expected? What about edge cases? What about devs requiring more privilege than is needed?


Agreed 100%, but you still are going to get the old-timers who insist that virtualization and containers just make things more difficult and that their old school approach is the best.


Good, that's a very universal interview question that will get me a practically direct answer to the actual question. It's not easy today... I'm asking about JS/HTML/CSS if the candidate isn't web-oriented, but what do I ask the JS guys? Perhaps a view layer bait?


going by how the web is doing today, the refactor you refer to will never be done.


I love the test-in-production illustration. This is a fairly accurate map of my journey as well. I vividly remember scolding our customers about how they needed a staging environment that was a perfect copy of production so we could guarantee a push to prod would be perfect every time.

We still have that customer but we have learned a very valuable & expensive lesson.

Being able to test in production is one of the most wonderful things you can ever hope to achieve in the infra arena. No ambiguity about what will happen at 2am while everyone is asleep. Someone physically picked up a device and tested it for real and said "yep its working fine". This is much more confidence inspiring for us.

I used to go into technical calls with customers hoping to assuage their fears with how meticulous we would be with a test=>staging=>production release cycle. Now, I go in and literally just drop the "We prefer to test in production" bomb on them within the first 3 sentences of that conversation. You would probably not be surprised to learn that some of our more "experienced" customers enthusiastically agree with this preference right away.


You can test in production by moving fast and breaking things (the clueless guy).

You can test in production by having canaries, filters, etc, and allowing some production traffic to the version under test. This is the "wired" guy.

For many backend things, you can test in production by shadowing the current services, and copying the traffic into your services under test. It has limitations, but can produce zero disruption.
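A toy sketch of that last approach (Python with Flask and requests; the service URLs are hypothetical): serve every request from prod, fire-and-forget a copy to the version under test, and never let the shadow affect the response:

  import threading

  import requests
  from flask import Flask, Response, request

  app = Flask(__name__)
  PROD = "http://prod-service:8080"
  CANDIDATE = "http://candidate-service:8080"

  def shadow(path, body, content_type):
      try:
          requests.post(CANDIDATE + path, data=body,
                        headers={"Content-Type": content_type}, timeout=2)
      except requests.RequestException:
          pass  # the candidate must never impact production traffic

  @app.route("/<path:path>", methods=["POST"])
  def proxy(path):
      body = request.get_data()
      ctype = request.content_type or "application/octet-stream"
      threading.Thread(target=shadow, args=("/" + path, body, ctype), daemon=True).start()
      upstream = requests.post(PROD + "/" + path, data=body,
                               headers={"Content-Type": ctype}, timeout=10)
      return Response(upstream.content, status=upstream.status_code)

In practice you'd do this at the load balancer or service mesh rather than in app code, but the shape of the idea is the same.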


Until you discover that prod has an edge-case string that has the blood mother of cthulu's last name written in swahili-hebrew (you know it as swabrew, or SHE for short), which due to a lack of goat blood on migration day is one of the ~3% of edge cases that aren't replicated, and now you've got an entirely new underground civilization born that is expecting you to serve them traffic with 5 9's and you can only do 4 because of the flakiness.


IMO this still keeps production a precious thing. Next comes multi-version concurrent environments on the same replicated data streams. There is only production, and delivery to customers is access to version 3 instead of version 2. This can be preceded by an automated comparison of v2's outputs to v3's and a review (or a code description of the expected differences).


you still have unit tests and similar right? and maybe test in a staging env lightly first? or, right to Live?


Yes we have a 2 stage process with the customer. There is still a test & production environment, but test talks to all production business systems too. Only a few people are allowed into this environment. We test with 1-5 users for a few days then decide to take it to the broader user base.


> but test talks to all production business systems too

I'm not sure I understand this, would you mind explaining more? Do you mean you have multi-tenancy in databases and application (the "tenants" being stage/test and prod)?


I think what op’s referring to is a staging app version that talks to production services and databases. E.g. you have a “clone” of your production UI, accessible only to devs. This clone is configured to talk to the same DB and call the same dependencies as the production service, but since its access is limited it’s used by devs to test their new feature(s).

This pattern is used where I work too. It’s been incredibly powerful. Some of our services allow for multiple staging replicas so that nobody is blocking one another; the replicas are ephemeral and nuked after testing.


Right but I'm wondering how you avoid polluting your production dataset with test data, hence the question about multi-tenancy in the database. Do you use a different schema? Discriminator column? How do you know the data came from the "clone"?


Yes. This is essentially the idea.


This is how to build a scaling DoS issue that bombs prod iirc


This post mingles different interpretations of "serverless" - the current fad of "lambda" style functions etc and premium / full stack managed application offerings like Heroku. Although they are both "fully managed" they have a lot of differences.

Both, though, assume the "hard" problem of infrastructure is the one-time setup cost, and that your architecture design should be centered around minimising that cost - even though it's mostly a one-time cost. I really feel like this is a false economy - we should optimise for long-term cost, not short-term. It may be very seductive that you can deploy your code within 10 minutes without knowing anything, but that doesn't make it the right long-term decision. In particular, when it goes all the way to architecting large parts of your app out of lambda functions and the like, we are making a huge architectural sacrifice, which is something that should absolutely not be done purely on the basis of making some short-term pain go away.


what would you say is the architectural sacrifice of pushing things out to the edge, or using lambda functions per se, in your opinion?

There are issues about resolving state at the edge, though Cloudflare has solved some of that with durable objects, and Fastly is working on something too. Think there are memory constraints as of now however. Security too is better in these platforms at the edge - true zero trust models.


It's similar to the issues with microservices. You've broken something that could be a "simple" single monolithic application into many pieces, and now there is huge complexity in how they are all joined together: fragmented state, latency, transactional integrity, data models, error handling, etc. Then your developers can't debug or test them properly; what used to be a simple breakpoint in the debugger, stepping into a function call and directly observing state, now requires huge complexity to trace what is happening.

All these things got much harder than they used to be, and you're living with that forever - to pay for a one-time saving on setting up some self-updating, auto-scaling infrastructure.


My god you've given me hope again


Not the person you asked but for me, the discussion of Edge computing often misses if there are actually enough benefits. Slow response times are nearly always because the application is slow, not because the server is 100ms away instead of 5ms. Saving 100ms (or less for most probably) by deploying logic at the edge introduces much more complexity than shaving that time off through optimization would. Take this page as an example. I get response times of ~270ms (Europe to US-West I presume) and it still loads instantly. On the other hand I have enough systems at my office where I'm <1ms away from the application server and still wait forever for stuff to load.

I'm not saying that you don't need edge computing. But it'll always be more complex to run compared to a centralized system (plus vendor lock-in), and performance isn't a valid reason.


Serverless is more than functions. Consider S3, LogicApps, or DataFlow.

Serverless is about starting with highly available, auto scaling, et cetera... assets and concentrating more on delivering business value.


> but it takes 45 steps in the console and 12 of them are highly confusing if you never did it before.

I am constantly amazed that software engineering is as difficult as it is in ways like this. Half the time I just figure I'm an idiot because no one else is struggling with the same things but I probably just don't notice when they are.


1) It's just as shitty for everyone except the person who's selling themselves as a "consultant" for X. And actually—they're lying, it's shitty for them too.

2) A bunch of other goddamn morons managed to use this more-broken-than-not thing to make something useful, so this goddamn moron (i.e. me) surely can.

Both almost always true. After the second or third time you realize something made by the "geniuses" at FAANG (let alone "lesser" companies) is totally broken, was plainly half-assed at implementation and in fact is worse than what you might have done (yes, even considering management/time constraints), and has incorrect documentation, you start to catch on to these two truths. It's all shit, and most people are bad at it. Let that be a source of comfort.

[EDIT] actually, broader point: the degree to which "fucking around with shitty tools" is where one spends one's time in this job is probably understated. This bull-crap is why some people "nope" the hell out and specialize in a single language/stack, so at least they only have one well-defined set of terrible, broken crap to worry about. Then people on HN tell them they're not real developers because they aren't super-eager to learn [other horribly-broken pile of crap with marginal benefits over what they already know]. (nb. I'm in the generalist and willing-to-learn-new-garbage category, so this judgement isn't self-serving on my part)


I'm slightly afraid of all the negativity here, but still kinda agree with the sentiment.

However, I still want to note that there are many cases where smart people simply don't have enough time to handle all the stupidity in their products. Often times, it's just little issues like communication cost and politics. But, also often, one should care about the revenue of one's own company or clients', which slows down changes a lot. Even a simple feature can take weeks and months to roll out.

In short, even without stupid people, life sucks. :\


Oh, yeah, world-imposed restrictions are a factor. Problems aren't all because people are idiots. I, pointedly, don't exclude myself from the "people are idiots" judgement, either. But I think it's also true that the Super-Serious Real World's not half as competent, or capable, or polished, as one might hope. That organization that seems impossibly amazing? They're not, actually. Look closer and they produce bafflingly-bad crap more often than not. The institution with The Reputation? It's because they sometimes get things mostly right, and are good at networking and/or marketing, but if you got a look at how "the sausage is made" you'd be absolutely shocked. The Authority on The Thing? OMG you don't want to know.

Which is horrifying or liberating, depending on one's perspective.


The smartest people can build utterly useless crap if the teams they work on don’t prioritize UX. I think of it as “emergent stupidity”; individually they will optimize for and build the best components but if they aren’t well put together the UX is horrible and the product sucks sucks sucks.

On the contrary, users will be ok with suboptimal components if the overall UX is good.


> On the contrary, users will be ok with suboptimal components if the overall UX is good.

Until they -absolutely- need quality components; then no matter how good the UI/UX is, you're out of luck.


many things are confusing if you've never done it before.

my first HAproxy setup wasn't great, my 10th is rock solid, just needed a bit of context-specific learning.

same with PG replication (logical and archive) but the 3rd go was awesome.

it's a flawed expectation to get it right the first time.


> its a flawed expectation to get it right the first time.

If your toilet overflows when you push the lever down and flushes when you pull the lever up, you'll learn pretty quickly how to use it. And other users will too, after they screw up or after you carefully explain so they're sure to understand. But it's just a shitty design! The fact that you've learned to deal with it doesn't excuse it or mean it shouldn't be fixed for all future users.

I want to get it right the first time; I don't expect to only because I've been hurt too many times by shoddy UIs. A good UI, whether a GUI, API, CLI, or other TLA, guides users through their workflow as effortlessly as possible by means of sensible defaults, law of least surprise, and progressive reveal of power features.

Relevant http://xkcd.com/1168


There's still no way around having to learn complex features. For most software, building a really simple first try is very easy, but almost certainly useless.

Once you need to actually do something useful, unavoidable complexity sets in and all of a sudden defaults and such don't really help anymore.

Some people try to push defaults further with templates and wizards, but that often still doesn't really work for your particular use case, and also makes you first search through many options before realizing you actually need to go for the scary blank one.


I'm so happy to just develop desktop apps in C++ / Qt. I commit, I tag a release, this generates mac / windows / linux binaries, users download them, the end.


I'm very happy in the same camp using C# and SQL (though Windows only).

Thank you for making desktop apps.


I want to live in that world again. It ended for me about 19 years ago when I left my last "desktop software" job. How do you manage to avoid "web stacks" in this day and age?


Most people still use desktop software for large projects? Adobe stuff, music software, basically any advanced authoring software - the performance of the web platform is just not there for projects measured in dozens of gigabytes of media, hundreds of AVX2-optimized plug-ins, etc. (and it's a field with a long tradition of benchmarking software against each other to find the most efficient, see e.g. DAWbench)


I write backend code, and it's basically Java + Spring. No web technologies beyond HTTP in sight.


There is hardly an app that I use these days that doesn't require internet connection. I guess once your app needs it, you will have the same problems.


> There is hardly an app that I use these days that doesn't require internet connection.

well, most of the ones I use (my note taking app, my text editor, my music player, my file manager) don't so YMMV


Some of this I agree with, even if the overall is to me asking for "magic" (I wouldn't mind trying to get there though).

For me the one I really want is "serverless" SQL databases. On every cloud platform at the moment, whatever cloud SQL thing they're offering is really obviously MySQL or Postgres on some VMs under the hood. The provisioning time is the same, the way it works is the same, the outage windows are the same.

But why is any of that still the case?

We've had the concept of shared hosting of SQL DBs for years, which is definitely more efficient resource-wise, so why have we failed to sufficiently hide the abstraction behind appropriate resource scheduling?

Basically, why isn't there just a Postgres-protocol socket I connect to as a user that will make tables on demand for me? No upfront sizing or provisioning, just bill me for what I'm using at some rate which covers your costs and colocate/shared host/whatever it onto the type of server hardware which can respond to that demand.

This feels like a problem Google in particular should be able to address somehow.


> Basically, why isn't there just a Postgres-protocol socket I connect to as a user that will make tables on demand for me? No upfront sizing or provisioning, just bill me for what I'm using at some rate which covers your costs and colocate/shared host/whatever it onto the type of server hardware which can respond to that demand.

How isn’t RDS Aurora Serverless from AWS, available in both MySQL-and Postgres-compatible flavors, exactly what you are looking for?


Yes and no. Aurora Serverless isn't really "serverless"; it's a PG- or MySQL-compatible wire protocol with an AWS custom storage engine underneath.

Provisioning of the compute part of the database is still at the level of "ACUs", which essentially map to the equivalent underlying EC2s. The scale up/down of serverless V1 is clunky, and when we tested, there were visible pauses in handling in-progress transactions when a scaling event occurred.

There is a "V2" of serverless in beta that is much closer to "seamless" scaling, I assume using things like Nitro and Firecracker under the covers to provision compute in a much more granular way.


Huh, I missed that particular announcement I guess?

I'll look into it next time I'm building something.


All cloud providers offer exactly what you are describing, just not for SQL DBs.

In my experience shared servers for MySQL (and I assume Postgres) never work because I always have to set at least some custom configurations very early into the development cycle.


There is AWS aurora serverless https://aws.amazon.com/rds/aurora/serverless/


I know this perspective is not relevant for most companies, but for side-projects and the like the cold-start time for Aurora is like 15-30 seconds, and the first query always times out. Having it always on (even with just 1 "compute") will cost you 30 USD a month. I'm hoping for Aurora to eventually get closer to DynamoDB pricing and startup (I'm fine with a 5-10 second cold start as long as it doesn't time out the first request every time.)


Have you evaluated aurora serverless v2? It is supposed to have much faster cold starts.


Interesting, thanks for sharing.

How does data get loaded? Can it use s3 as an external source similar to foreign data wrappers?


So, XorNot, do you want to run a YC startup? You've got an idea to run with.

But, if you're going to run with it, can you make it as fast as a "real" database on a cloud instance?


Google Cloud offers Spanner, which is a proprietary distributed database that uses SQL. Does that fit with the kind of thing you're thinking of?


They did; it's called Bigtable. It's closed behind their cloud offering, which few use.


Have you tried hosted CockroachDB, aka CockroachCloud?


Testing in 'production' has been possible for a while: Heroku has Review Apps, and Render (where I work) has Preview Environments [1] that spin up a high-fidelity isolated copy of your production stack on every PR and tear it down when you're done.

[1] https://render.com/docs/preview-environments


There are a variety of other, similar Preview Environment offerings out there, and most PaaS offer their own options built in. If you're using a PaaS with a built-in solution, you'll almost always be better off using theirs. A lot of the difficulty comes when you've "outgrown" a PaaS for one reason or another, and need to try to back-fill this functionality in a production-like fashion.

If you're interested, I wrote a thing attempting to cover some of the options ~6 months ago, but recognize that there has been a fair amount of change in the space since then (I really should update the post). [thing: https://www.nrmitchi.com/2020/10/state-of-continuous-product...]


Great article and certainly worth an update! Specifically for Render, you can now:

1. Specify a TTL on inactive PRs

2. Pick less expensive plans for preview services.


Hey, a question about Render — do you run your own base infra/colocate or do you use a VPS/bare metal setup or do you sit on top of a cloud? I love the service but I’m wondering if you’ll expand outside of the two available regions anytime soon. I understand this might be hard based on your infra setup.


> I love the service

Thanks!

We will absolutely expand beyond Oregon and Frankfurt later this year. Our infrastructure was built to be cloud agnostic: we run on AWS, GCP, and Equinix Metal.


This is cool - does it also copy over the data and schema of the production db? For instance on an eCommerce site, will I get my production product data to test on in stage?


Given how much market capitalization AWS and Azure account for, it's surprising how little progress has been made in many areas. A truly serverless RDBMS, priced sanely, that basically charges for query execution time and storage, without the need to provision some "black box" capacity units, and that would be suitable for a large-scale app. Worst of all are workflows that require contacting support, and arbitrary account limits that force you to split solutions across multiple accounts, wtf.


Both Dynamo DB and Cosmos DB offer effectively what you're asking for.

Limits are a risk that must be mitigated, but they aren't arbitrary; they exist to avoid interference of one customer with others. They protect the service which, to providers and reasonably so, is more important than any one customer.


DynamoDB is a really crappy key-value store with a poor feature set and really bad limits. I am talking about a real SQL RDBMS like Aurora Serverless, but not built on top of a legacy codebase and shoehorned into the use case. As far as account limits go, if I consume X+X things across 2 accounts vs 1 account, what did the limit protect exactly? An org can have any number of accounts and add them at any time.


Cosmos DB is not sanely priced IMO!


I'm sympathetic to the motivation, but the author is a bit naive in some of the specific requirements --- we absolutely must spin things up locally to retain our autonomy. Sure, allow testing in the cloud too, but don't make that the only option.

The cloud should be a tiny-margin business; that it isn't shows just how untrustworthy it is. The win-win efficiency isn't there because there's so much rentier overhead.

Oh, and use Nix :).


While it’s a good idea to lay out a vision for the future of infrastructure as a service, it’s annoying to read this sort of article that seems to believe it ought to be simple to provide true serverless magical on-demand resources for random code. Fact is it’s a lot more complicated than that unless you’re just a small fry without any really hard problems to solve.

It’s silly to complain that AWS doesn’t have a service that allows any random programmer to deploy a globally scalable app with one click. For one thing, you aren’t AWS’s primary target audience. For two, the class of problems that can be solved with such simple architectures is actually pretty small. But for three, it’s not actually even possible to do what you’re asking for any random code unless you are willing to spend a really fantastic amount of money. And even then once you get into the realm of a serious number of users or a serious scale of problem solving, then you’ll be back into bespoke-zone, making annoying tradeoffs and having to learn about infra details you never wanted to care about. Because, sorry, but abstraction is not free and it’s definitely not perfect.


> It’s silly to complain that AWS doesn’t have a service that allows any random programmer to deploy a globally scalable app with one click. For one thing, you aren’t AWS’s primary target audience.

I won't be surprised if AWS makes disproportionately more money off smaller operations which don't want to care about honing their infra, but want something done quickly and with low effort. They quench the problem with money, even though the absolute amounts are not very high. So targeting them even more, and eating a slice of Heroku's pie, looks pretty logical for AWS to do.


They don't need to eat a slice of Heroku's pie. Heroku pays AWS for each of their customers already. Their infrastructure runs on top of AWS.


So I'm creating something that's exactly like the author is describing: https://darklang.com

He's right that there's no reason that things shouldn't be fully serverless and have instant deployment [1]. These are key parts of the design of Dark that I think no one else has managed to solve in any meaningful way. (I'll concede that we haven't solved a lot of other problems that still need to be solved to be usable and delightful, though.)

[1] https://blog.darklang.com/how-dark-deploys-code-in-50ms/


That's super cool. There is a lot to be said about simple things, and I think the language approach makes a ton of sense.

I'm building a serverless document store with mini-databases using a programming language I designed for board games: http://www.adama-lang.org/


> The word cluster is an anachronism to an end-user in the cloud! I'm already running things in the cloud where there's elastic resources available at any time. Why do I have to think about the underlying pool of resources? Just maintain it for me.

Because very often, software that runs on clusters requires thought when resizing.

In fact, I'm trying to think of a distributed software product where resizing a cluster, doesn't require a human in the loop making calculated decisions.

Even with things like Spark where you can easily add a bunch of worker nodes to a cluster in the cloud, there's still core nodes maintaining a shared filesystem for when you need to spill to disk.


My biggest problem with modern infrastructure is how many resources it uses for simple things. It scales upwards, but it doesn't scale down.

I tried OpenFAAS in Kubernetes, to see if I could run simple lambda-like loads. The default setup took north of 8 GB of memory!

I just want efficient and simple microservices infrastructure that doesn't hog 8 GB of memory to run simple functions or small microservices.


Might I recommend running simple (Go) binaries on a couple of hosts running FreeBSD? (Running them in a jail if you want.)

Proper isolation that is battle tested, a rock-solid operating system that doesn't try to reinvent the wheel every 3 years, and as an added bonus you get ZFS, which makes development, upgrades and maintenance a breeze thanks to proper snapshotting.

We did this for a simple side project, originally written in PHP, then Python and now Go, and it has been rock solid and easily scalable for at least a decade.

Heck, you can even run Linux binaries in Linux-compatible jails nowadays[0].

0: https://wiki.freebsd.org/LinuxJails
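
Something like this is all the app-side code you need (a minimal sketch; the port, the handler body, and the daemon(8) suggestion are just examples, adjust to taste):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // One static binary: `go build`, copy it into the jail,
        // and keep it running with daemon(8) or an rc.d script.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("ok\n"))
        })
        log.Fatal(http.ListenAndServe(":8080", nil))
    }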


I am currently working on a project attempting to achieve something like this called Valar (https://valar.dev). It's still quite experimental and in private beta, but feel free to take a look and put yourself on the waiting list. I'm targeting small, private projects so it's not really suited for full-on production workloads yet.

The core of it is a serverless system that shrinks workload quotas while running them (similar to the Google Autopilot system). This way you don't have to rewrite your entire codebase into lambdas and the end user can save a lot of costs by only specifying a resource cap (but probably using a lot less, therefore paying less).


Been there. Dreamed that. Built my ideal experience -> https://m3o.com

Honestly I've spent the better part of a decade talking about these things and if only someone would solve it or I had the resources to do it myself. I'm sure it'll happen. It's just going to look different than we thought. GitHub worked because it was collaborative and I think the next cloud won't look like an AWS, it'll look more like a social graph that just happens to be about building and running software. This graph only exists inside your company right now. In the future it'll exist as a product.


>Truly serverless

>The beauty of this is that a lot of the configuration stuff goes away magically. The competitive advantage of most startups is to deliver business value through business logic, not capacity planning and capacity management!

Is this exactly what Cloudflare's Workers is doing? [0]

I love the fact that I only need to think of the business logic.

Development without the need for VMs and obscure configuration scripts feels, and in fact is, very productive. I don't think I will want to use anything else for hosting my back ends.

[0]:https://workers.cloudflare.com/


I'm really excited by Cloudflare Workers. I hope to get a chance to build something tangible with it soon. The developer experience with Wrangler is superb, and paired with Durable Objects and KV you've got disk and in-memory storage covered. I'm a fan of AWS and have used it for 10+ years, but wiring everything together still feels like a chore, and the Serverless framework and similar tools just hide these details, which will inevitably haunt you at some point.

Cloudflare recently announced database partners. Higher-level options for storage and especially streaming would be welcome improvements to me. Macrometa covers a lot of ground but sadly it seems like you need to be on their enterprise tier to use the interesting parts that don't overlap with KV/DO, such as streams. [0]

I played with recently launched support for ES modules and custom builds in Wrangler/Cloudflare Workers the other day together with ClojureScript and found the experience quite delightful. [1]

[0]: https://blog.cloudflare.com/partnership-announcement-db/

[1]: https://dev.to/pilt/clojurescript-on-cloudflare-workers-2d9g


I think we assume too much about where infrastructure is headed. Serverless, containers... they're fueling this notion of increased productivity, but I don't think they themselves are keeping up with where software is headed. Take AI: in a lot of cases you're going to be breaking through every level of abstraction with any *aaS above the physical cores and attached devices. Traditional CRUD simplified? Sure, but I feel like these "infrastructure" dev opportunities will be eaten by digital platforms and cloud solutions in the coming years.


This is a lot harder than the author wants to realize. Elastic infrastructure deployment at sub-second latency is not likely to ever happen. The problem is simply a lot harder than serving HTTP requests. It's not "receive some bytes of text, send back some bytes of text." VMs are just files and can be resized, but optimal resource allocation on multitenant systems is effectively solving the knapsack problem. It's NP-hard, and at least some of the time it involves actually powering devices on and off and moving physical equipment around. There is still metal and silicon back there. To be able to just grow on demand, the infrastructure provider needs to overprovision for the worst case, and as the year of Covid should have taught us, the entire idea of just-in-time logistics for electronics is a house of cards. Ships still need to cross oceans.
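
To make the knapsack point concrete, here is a toy sketch of just the memory dimension of a placement decision (the VM sizes and the 64-unit host capacity are made up, and placeVMs is a toy function); real schedulers also juggle CPU, network, anti-affinity, maintenance windows and live migration on top of this:

    package main

    import (
        "fmt"
        "sort"
    )

    // placeVMs packs VM memory demands onto hosts of a fixed size using
    // first-fit-decreasing, a classic bin-packing heuristic. Optimal packing
    // is NP-hard, so real systems settle for heuristics along these lines.
    func placeVMs(demands []int, hostSize int) [][]int {
        sort.Sort(sort.Reverse(sort.IntSlice(demands)))
        var hosts [][]int // demands placed on each host
        var free []int    // remaining capacity per host
        for _, d := range demands {
            placed := false
            for i := range hosts {
                if free[i] >= d {
                    hosts[i] = append(hosts[i], d)
                    free[i] -= d
                    placed = true
                    break
                }
            }
            if !placed {
                hosts = append(hosts, []int{d})
                free = append(free, hostSize-d)
            }
        }
        return hosts
    }

    func main() {
        fmt.Println(placeVMs([]int{48, 32, 32, 16, 8, 8, 4}, 64))
        // prints [[48 16] [32 32] [8 8 4]]
    }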

My wife was the QA lead for a data center provisioner for a US government agency a few years back, and the prime contractor refused to eat into their own capital and overprovision, instead waiting for a contract to be approved before they would purchase anything. They somehow made it basically work through what may as well have been magic, but damn, the horror stories she could tell. This is where Amazon is eating everyone else's lunch: by taking the risk and being right on every bet so far, so that if they build it, the customers will come.

But even so, after all of that to complain that deploying infrastructure changes takes more than a second, my god man! Do you have any idea how far we've come? Try setting up a basic homelab some time or go work in a legacy shop where every server is manually configured at enterprise scale, which is probably every company that wasn't founded in the past 20 years. Contrast that with the current tooling for automated machine image builds, automated replicated server deployment, automated network provisioning, infrastructure as code. Try doing any of this by hand to see how hard it really is.

Modern data centers and networking infrastructure may as well be a ninth wonder of the world. It is unbelievable to see how underappreciated they sometimes are. Also, hardware is not software. Software is governed solely by math. It does exactly what it says it will do, every single time. Failures of software are entirely the fault of bad program logic. Hardware isn't like that at all. It is governed by physics and it is guaranteed to fail. Changes in temperature and air pressure can change results or cause a system to shut down completely. Hiding that from application developers is absolutely the goal, but you're never going to make a complex problem simple. It's always going to be complex.

Remember my wife? Let me tell you one of her better stories. I can't even name the responsible parties because of NDAs, but one day a while back an entire network segment just stopped working. Connectivity was lost for several applications considered "critical" to the US defense industrial base, after they had done nothing but pre-provision reserved equipment for a program that hadn't even gone into ops yet and wasn't using the equipment. It turned out there was some obscure bug in a particular model of switch that caused it to report a packet exchange as successful even though it wasn't, so failover never happened, even though alternative routes were up and available. Nobody on site could tell why this was happening because the cause ended up being locked away in chip-level debug logs that only the vendor could access. The vendor claimed this could only happen one in a million times, so they didn't consider it a bug. Guess how often one in a million times happens when you're routing a hundred million packets a second? Maybe your little startup isn't doing that, but the entire US military industrial complex or Amazon is absolutely doing that, and those are the problems they are concerned with addressing, not making the experience better for some one-man shop paying a few bucks a month who thinks YAML DSLs make software-defined infrastructure too complicated for his app that could plausibly run on a single laptop.


It’s true that we’ve made a ton of progress. But I LOVE it when developers demand more, and I will bet good money that it’s only a matter of time before what the author describes actually happens.

We’ve been making tremendous strides in making production software systems easy and accessible for developers (and not sysadmins) to understand and operate. We’re currently in the stage where the devops principle has been somewhat realized; devs love the ability to deploy when they want but absolutely hate that infrastructure provisioning takes so long compared to the tools they use for software development. This is a good thing! I want more devs to demand this; someone may actually solve it.


> We’ve been making tremendous strides in making production software systems easy and accessible for developers (and not sysadmins) to understand and operate.

In my experience, developers are terrible at doing operations (for good reason, mind you).

DevOps only really became possible because a lot of operational problems were outsourced to cloud providers, but those problems still persist. Or do developers also want to build the networks which provide fault tolerance, redundancy and scalability while also building software?

Running a large, multi tenant system for different workloads takes a mind boggling amount of work and skill, not to mention deep understanding of how networks and systems operate in detail and at scale.


> The vendor claimed this could only happen one in a million times, so they didn't consider it a bug. Guess how often one in a million times happens when you're routing a hundred million packets a second?

This reminds me of people on HN who claim BGP is outdated and should be replaced.

They have no idea how mind-bogglingly large and complex global internet routing is. I would even argue that BGP in the default-free zone (aka the public internet) is the largest distributed system in the world.

> VMs are just files and can be resized, but optimal resource allocation on multitenant systems is effectively solving the knapsack problem.

As someone who works in a datacenter/cloud environment: optimal resource distribution is simply impossible to get 100% correct. There is always some waste or over-/under-allocation. The same goes for large-scale networks as well.


A trainer once told me YAML is like JSON but easier to read; he then said to count the whitespace on every line... Sure, an IDE makes it easier to see, but the same can be said about JSON.


I have been observing this at my work: the production systems were getting more and more complicated, and it was harder and harder to set up dev and test environments. It also started to break the underlying system when you had to install old libs for the project, and then maybe you needed two versions of the libs for different branches of the project. So I started to develop in virtual machines. Then people started using containers for that. That felt like a step back, because it was a different abstraction, and you had to do things differently on the dev machine than on production. Then I learned that people use Terraform or Ansible to set up production, and I hoped that I could run the same scripts on my dev machine in a VM and have an environment just like production. But no: when I encountered a project with Terraform and Ansible, the scripts contained lots of AWS-related stuff, and it was impossible to untangle it and run it on dev machines. I have never learned Kubernetes, and I don't do much programming any more; with containers both in production and dev that would make sense, but I still don't see scripts that would just spin up a dev environment. Maybe dev environments on a dev machine are really a dead end now, and what we need is a way to spin up dev envs in the cloud.


>> Software infrastructure (by which I include everything ending with *aaS, or anything remotely similar to it) is an exciting field, in particular because (despite what the neo-luddites may say) it keeps getting better every year

The fact that many people think software infrastructure is getting worse is a very bad sign in itself (it cannot be ignored). There have always been religious wars over software development practices, but never to the extent we have today.


The author is literally asking for something in the cloud which is abstracted away, fast, easy to use and easy to debug. This is not possible imho. There are always trade-offs. You can't get all of them for "free". E.g. serverless services will, as always, be slower than your own setup. As for debugging, I'm not sure, but I don't think it will ever be as easy as with your own setup.


I don't know. What he's describing sounds very close to what I've been working on, all AWS Serverless with lambdas and SQS and DynamoDB, with config (CloudFormation) via code in BitBucket. I can launch a new lambda in less than a minute with a couple of aws-sam-cli commands, and get responses from curl in less than 2ms.

I mean, nothing's perfect, but AWS Serverless seems to be the closest thing I've ever seen to truly abstracting away everything but the actual code.
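
For what it's worth, the code side of one of those lambdas really is just a handler. A rough sketch using the aws-lambda-go library (I'm assuming an API Gateway proxy trigger here; the event type depends on your trigger, and the handler name is arbitrary), after which `sam build` and `sam deploy` do the rest:

    package main

    import (
        "context"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // handler is the only code you own: API Gateway and Lambda handle the
    // HTTP plumbing, scaling, and process lifecycle around it.
    func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        return events.APIGatewayProxyResponse{
            StatusCode: 200,
            Body:       "hello from " + req.Path,
        }, nil
    }

    func main() {
        lambda.Start(handler)
    }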


Well-written article. It's exactly in this spirit - making AWS, GCP, and Azure accessible to any developer - that Qovery has been built. The idea of Qovery is to build the future of the cloud: letting developers deploy their apps on AWS in just a few seconds. No joke. 2200+ developers from 110+ countries are deploying hundreds of apps per day. And this is only the beginning. Thanks again for sharing this article.

You can give a try to Qovery here: https://www.qovery.com


> The word cluster is an anachronism to an end-user in the cloud! I'm already running things in the cloud where there's elastic resources available at any time. Why do I have to think about the underlying pool of resources? Just maintain it for me.

You still need to know the geographic location of your physical cluster for multiple reasons though: legal (the Cloud Act is a thing!), performance (you need proximity to your end users), and sometimes redundancy when talking about multi-cluster architectures.


> I'm not asking for milliseconds! Just please at least get it to less than a second.

Is there a word for an oxymoron but instead of contradictory words it's sentences?


Code not configuration

We have this already -> https://www.pulumi.com/
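
For anyone who hasn't tried it: a Pulumi program is just ordinary code calling a library. A rough sketch in Go (the SDK import paths and versions change over time and the bucket name is arbitrary, so treat this as approximate):

    package main

    import (
        "github.com/pulumi/pulumi-aws/sdk/v5/go/aws/s3"
        "github.com/pulumi/pulumi/sdk/v3/go/pulumi"
    )

    func main() {
        pulumi.Run(func(ctx *pulumi.Context) error {
            // Declare a bucket the way you'd construct any other object;
            // loops, conditionals and functions come for free from the language.
            bucket, err := s3.NewBucket(ctx, "my-bucket", nil)
            if err != nil {
                return err
            }
            ctx.Export("bucketName", bucket.ID())
            return nil
        })
    }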


I encourage you to follow developers like https://twitter.com/tjholowaychuk/ who are making the cloud / infra far better from a developer experience perspective.


I've been doing this for many years. Just not serverless. Rollout to production, including the build, took at most 20 minutes for a fat-client relational-database application.

The author is mixing together two things here that don't depend on each other.


> "but there was nothing on the checklist to make sure the experience is delightful."

This is the key point.

AWS is alright once you wrap your head around it. But it seems it was "designed" / cobbled together without any form of empathy for its poor users whatsoever. The best you can say about it is that it works as documented. But you'll be reading a lot of that documentation to figure out what and how, or even why. Absolutely nothing is straightforward, ever. I've indeed had the pleasure of figuring out how to host a static website on AWS via HTTPS. It took me several attempts to get right.

Calling that process complicated does not do it justice. It's basically a series of hacks to integrate components that were clearly not intended to be used like this. Think S3 redirects just to ensure www goes to the main domain, and then another set of redirects at the CDN level to ensure HTTP goes to HTTPS. Make sure you name your resources just right or you will have a hard time pointing to them from Route 53 (DNS). And of course you want auto-renewing certificates to go with that. All possible. But it's indeed a long process with many pitfalls, possible mistakes, gotchas, etc.

This lack of empathy is a common failing with engineers, who typically lack the end-to-end perspective on what users/customers actually need (they don't care, or if they do, they lack key information) and are actually discouraged from thinking for themselves by managers who are not a whole lot better, who in turn report to boards populated by sociopaths who just want their minions to do whatever the hell it is they do. Exaggerating, of course. But decision making in large companies is problematic at best. Sane results are almost accidental once you reach a certain size.

SAAS software in general has this problem. Usability stops being a concern once you've locked enough customers into paying you in perpetuity.


My impressions TLDR:

1. "Built for delight" Author wants everyone to suddenly start making nicer things.

2. "Truly serverless" Author wants to throw code at cloud, without thinking about performance or resource usage.

3. "Fast" Author wants every AWS command to take less than a second.

4. "Ephemeral resources" Author wants to test in the cloud with less effort.

5. "Code not configuration" Author doesn't like static config, everything should be an API.

6. "Built for productivity" Author wants everyone to be more productive.

All in all, reads a bit like the random musings of a frustrated cloud server user.


> Author wants to throw code at cloud, without thinking about performance or resource usage.

Not really what Erik is saying. Taking AWS S3 as an example, you absolutely need to think about performance and resource usage when dealing with S3, but you never have to think about the servers. You never think about some cluster that is making your S3 usage possible.

S3 is such a great success that there are very few (if any) examples of companies starting with S3, getting too big for it, and needing to build their own system (thus wrangling servers and clusters etc). Dropbox migrated off of S3 to 'Magic Pocket', but I'm pretty sure this was for $$ reasons, not because AWS simply couldn't handle Dropbox's massive read+write requirements. AWS S3 is a marvel of serverless computing, and we should try to get more services like it.


Maybe I'm confused by all those doubleplusgood buzzwords that almost deliberately say the opposite of what's happening (like serverless servers).


The point about the AWS console being thoroughly un-delightful is well made; it's a nightmare.


Google App Engine was released in 2008...


The desire to go "truly serverless" and pretend the computer does not actually exist is absolutely delusional.

The refusal to acknowledge that software will never be anything beyond executable data on some kind of computer, somewhere, is why most web-based software is so slow and shitty.

Having someone build the server and assign your code to run on it does not change that one (pun intended) bit.


I don't think that's really the point, but the term "serverless" is definitely confusing. It took me a while to understand it.

By "serverless", what's usually meant is that your code doesn't need to concern itself with implementing a server on a socket. All you have to do is have your code return the right value so that the server in the cloud can do the right thing. No need to set up and configure Express/Fastify. Because cloud providers manage the server aspect of serving a web app, your code just needs to focus on execution rather than delivery, or on staying alive until the next request is received.
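
In other words, something like this is the whole program from your side (the request/response types here are hypothetical; the real shapes depend on the platform invoking you):

    package main

    import (
        "context"
        "fmt"
    )

    // Hypothetical shapes; Lambda, Cloud Functions, Workers etc. each define their own.
    type Request struct{ Path string }
    type Response struct {
        Status int
        Body   string
    }

    // You write only this: no listener socket, no Express/Fastify setup,
    // no process that has to stay alive between requests.
    func Handle(ctx context.Context, req Request) (Response, error) {
        return Response{Status: 200, Body: "handled " + req.Path}, nil
    }

    // The platform plays the role of this main: accept the connection,
    // decode the request, call your function, ship the return value back.
    func main() {
        resp, _ := Handle(context.Background(), Request{Path: "/hello"})
        fmt.Println(resp.Status, resp.Body)
    }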

It's still a very misleading term, however. Every time I hear it, I cringe a little.


But that's exactly what the GP is railing against. Hoping you don't need to concern yourself with setting up the server and thinking about the resources it needs, etc., is unlikely to be a good idea, or even to work in real terms. AWS Lambda, for example, only really works if you understand the limitations of the VM and cluster that your code will be running in, and design your system accordingly.

I once worked on a product that didn't, and it had ~10s "cold start" latency for any request because of its size and its reliance on Lambda. And of course this "cold start" was actually seen all the time, because it was a new niche product that people only accessed occasionally, and because it's actually a "cold start" for every concurrent request.


> By "serverless", what's usually meant is that your code doesn't need to concern itself with implementing a server on a socket. All you have to do is have your code return the right value so that the server in the cloud can do the right thing.

Jesus, the "they just reinvented PHP/CGI and gave it a cute name" folks were more right than I thought.


The desire to go "truly assembly-less" and pretend the machine code does not actually exist is absolutely delusional.

The refusal to acknowledge that software will never be anything beyond executable machine code on the computer is why most systems software is so slow and shitty.


It is not.

The JVM is something that comes quite close, actually, but it lacks a few environment features and configuration/specification options to fully cover the space most people need, which is why everyone is using Docker now. But if you think about it, most people don't really want to use Docker. It's just the best option currently.

Many don't care what OS they run on, because they don't rely on any specific OS functionality. This is exactly what should be improved: a way to run code that can operate at a higher level of abstraction.


BEAM, the Erlang VM, comes even closer to that. Everything is a process and has the same semantics. Erlang and Elixir use this to full advantage.


I have little experience with BEAM, but I'm curious about it. Is there a good description of the scope of the VM and what it defines and what it does not?



"serverless" is the equivalent of living "carless" and paying for taxi.

...but with a locked-in contract with one taxi company. And less privacy.

Unsurprisingly, in the long term, it will cost you more.


There will not be a massive productivity boost in the next 10 years. We haven't had a productivity boost in the past 20 years. In fact, we're much less productive now. But I digress.

We're still largely making and running software the same way we did 20 years ago. The only major differences are the principles that underpin the most modern best practices.

"What the fuck does that mean," you say? It means that I can take a shitty tool and a shitty piece of infrastructure, and make it objectively more reliable, faster, and easier, by using modern practices. Conversely, I can take the latest tools, infrastructure, etc, and make them run like absolute dogshit, by using the practices of a decade ago.

Let's take K8s for example. It's like somebody decided to take the worst trends in software development and design-by-committee and churned out something that is simultaneously complicated and hard to use [correctly]. Setting it up correctly, and using it correctly (no, Mr. I'm A Javascript Developer, however you think you're supposed to use it, you are wrong), for a medium-scale business, requires banging your head against a hodge-podge of poorly designed interfaces and bolting on add-on software for at least a month.

This is ridiculous, because what you actually needed was something whose principles match those of K8s. You want immutable infrastructure as code. You want scalable architecture. You want (in theory) loosely joined components. Do you need K8s to do that? No! K8s is literally the most complicated way that exists to do that.

Next let's take Terraform. A clunky DSL that is slowly becoming sentient. Core functionality that defeats itself (running terraform apply is unpredictable, due to the nature of the design of the program, even though it was created with the opposite intention). Kludgy interfaces that are a chore to cobble together. Dependency hell. Actual runtime in the 10s of minutes (for the infra of a single product/region/AWS account). And no meaningful idea if the state of the infrastructure matches its state file matches the code that generated it all. To say nothing of continuous integration.

Do you need all of that to do infrastructure as code? Of course not. A shell script and Make could do the same thing.

So, you have a kludgy Configuration Management tool (terraform), and a kludgy Cluster Management tool (k8s), and all just to run your fucking Node.js webserver, make sure it restarts, and make sure a load-balancer is updated when you rotate in new webservers. Did you need all of this, really? No. The end result is functionally equivalent to a couple of shell scripts on some random VMs.

To get the best out of "Software Infrastructure" you don't need to use anything fancy. You just need to use something right.

---

But there's a different problem with infra, and it's big. See, Cloud Infrastructure works like a server. Like a physical server, with a regular old OS. To manage it, first you "spin up" the physical server. Then you "configure" the server with some "configuration management" software. Then you "copy some files to it". Then you "run something on it". And while it's running, you change it, live. This is how all Cloud Infra is managed. This is known as Mutable Infrastructure. Modern best practices have shown that this is really bad, because it leads to lots of problems and accidental-yet-necessary complexity.

So what is the solution? The solution is for cloud vendors to supply interfaces to manage their gear immutably. And that is a really big fucking problem.

Say I have an S3 bucket. Let's say I have a bucket policy, lifecycle policy, ACLs, replication, object-level settings, versioning, etc. The whole 9 yards. Now let's say I make one change to the bucket (a policy change, say). Production breaks. Can I "back out" that change in one swift command, without even thinking about it, and know that it will work? No.

Assuming an AWS API call even has versioning functionality (most don't) I can use some API calls to query the previous version, apply the previous version as the current version, change some other things, and hope that fixes it. But there's no semblance of `rm S3-BUCKET ; cp -f S3-BUCKET-OLD S3-BUCKET`. You have to just manually fix it "live". Usually with some fucked up tool with a state file and a DSL that will randomly break on you.

There is literally no way to manage cloud infrastructure immutably. Cloud vendors will have to re-design literally all of their interfaces for a completely new paradigm. And that's just interfaces. If trying to do the S3 bucket copy fails for "mysterious reasons" (AWS API calls fail all the time, kids), then your immutable change didn't work, and you still have a broken system. So they actually have to change how they run their infrastructure, too, to use operations that are extremely unlikely to "misbehave".

Do you know how much shit that covers? It took them like 20 years to make what we have now. It may take another 10 to 20 years to change it all.

Therefore, for a very, very long time to come, we will be stuck with shitty tools, trying to shittily manage our infrastructure, the way we used to manage web servers in the 90s. Even though we know for a fact that there is a better way. But we literally don't have the time or resources to do that any time soon, because it's all too big.


I wonder what you think is the alternative to Kubernetes that makes it easier to deploy a piece of code to run on a cluster. I hope it's not some kludge of cobbled together shell scripts and ssh.

In general these ideas of magical 'immutable' infrastructures seem to pan out much worse than just plain VMs with some orchestration solution inside. Anything that tries to capture the state of the world and reify it usually has to severely limit what you can actually do to avoid the halting problem, or just copy some things and hope for the best in terms of external dependencies staying the same.


A Kubernetes cluster already runs on cobbled-together shell scripts: the K8s build and setup files, Dockerfiles, inlined commands in the files you feed to kubectl. Shell scripts are just small, simple programs.

Immutable Infrastructure isn't magical at all, it's a paradigm shift which has real world effects. And you can do it with plain VMs and some software inside. But the VM has to be immutable, and the software inside should be as immutable as possible. Otherwise you are constantly fighting a battle to "fix" the state over and over again, and you can't clearly reason about the operation of the system.

Immutable Infrastructure doesn't solve the halting problem, it avoids the halting problem, because it depends on not having general algorithms and unlimited input. It instead uses specific operations with specific criteria in a predictable way, the way safety systems are designed.


Keep your S3 config in version control and terraform apply; if it doesn't work, revert the commit and terraform apply again. The DSL isn't that hard: https://registry.terraform.io/providers/hashicorp/aws/latest...


It would be nice if it worked that way.

What happens if your network is interrupted during apply or your STS credential expires? If you are lucky, you may be able to upload the errored.tfstate and plan/apply again. But if somebody else plan/applies before then, the state will be out of whack. Resources will already exist with names that you want to use, and you will have to manually edit the state file, or import the existing resources, or delete them.

What happens if someone changes the bucket without using Terraform? The bucket's changes are blown away on the next terraform apply. Or the apply will sometimes just fail, and you'll have to fix the state or resource by hand.

What happens if someone has applied a different change after your change? Reverting the commit may now impact something somebody has since removed or added, run into naming conflicts, need custom code changes, and sometimes require moving state around (this happens often when a commit changes whole modules).

Or say a code change changes the provider version. A revert may not work and you may need to remediate the infra and code yet again.

Or say someone applied Terraform with a newer version of Terraform. Now you need to upgrade your version of Terraform, and change whatever code you wanted to revert to the new Terraform version.

Or say the resource + action require delete before create, and your reverted commit tries to do create before delete. You'll need to modify your Terraform code after reverting it with a new lifecycle block, or apply will keep trying to do something impossible.

All of this is happening in production. How much downtime is ok while you fix all of those problems by hand?


> running terraform apply is unpredictable, due to the nature of the design of the program, even though it was created with the opposite intention

Citation needed. Run _with a plan_, terraform apply is entirely predictable as to the operations it will carry out and the order in which it will carry them out.


You are incorrect. Edit: according to your wording, you are sometimes correct. But terraform plan often can't show values that it can't predict ("Values known after apply"), so you often can't predict what apply will do. And you can never know for sure whether the operations will succeed.

First of all, what does terraform apply do? It runs API calls. Those API calls may or may not work, based on criteria outside of the control of Terraform. It may be supposed to work, but that doesn't mean it will. Use AWS enough and you will run into this weekly.

Second, Terraform has semantics to do things like "create before delete" or "delete before create". This behavior is controlled by the provider, per-resource. But Terraform has no earthly fucking clue which it's supposed to do, only the provider knows. And the provider does not check before terraform apply runs whether an operation will conflict with a resource, even if Terraform is managing that resource.

So you run terraform plan -out=terraform.plan, and the whole output seems fine! Then you terraform apply terraform.plan, and Terraform happily attempts an operation which is literally impossible, and fails. Then you go find the code that has been working just fine for 2 years and change it to fix the lifecycle for this particular scenario, and then you can terraform apply correctly.

Thirdly, a lot of terraform plan runs show output with unknown values, because the values won't be known until after you run terraform apply. But those values can conflict with existing resources with the same values. Terraform makes no attempt to verify if conflicting resources exist before apply. Therefore, your terraform apply can fail unpredictably (to be correct: you could predict it if Terraform were designed to actually look for conflicting resources before applying).

There's about a hundred of these edge cases that I run into every day. I already perform all operations in a non-production environment, but as we all know, the environments naturally drift (because cloud infrastructure is mutable infrastructure) so there are times production breaks in a way that didn't in non-production.


> edit actually, according to your wording, you are correct.

That’s uhh... why I chose that wording.

> Use AWS enough and you will run into this weekly.

I _really_ wish you understood how amusing this exhortation is.

I’m under no illusion about the reliability of AWS APIs - much less Azure or any of the other major public clouds.

The easiest way to reduce drift is to prevent read/write access via anything except a designated service account for Terraform. Per one of your earlier replies about a coworker having upgraded TF underneath you, it sounds like you aren’t doing that.

I’m with you on some aspects of your rant - specifically precious resource consumption such as names - for what it’s worth (and am a former core maintainer of Terraform, and still work with that codebase daily elsewhere), but if you’re going to rant about a project whose maintainers (or former maintainers) may be listening, you’d better be technically correct.

I’d suggest issues (yes I know there are a lot of them, I don’t know how they get triaged anymore) or mailing list posts, or <gasp> contributions as a more helpful way to engage rather than complaining on an internal Slack channel.


Been there. If Mitchell or anyone else on the core team doesn't want a feature to work a certain way, or feels that a bug isn't a bug, or maybe they want to wait until the next big release to look at something, tough luck for us. I suppose I have to become a Terraform developer in my spare time, and then I only need to spend years fighting the corporation that owns it over how to design it to actually function the way humans need it to.

Terraform doesn't look for existing resources. It doesn't properly plan for new values. It doesn't have a convention to define resources that can or can't be used idempotently. It doesn't support modern deployment methods. It doesn't make a best effort to fix things even when it clearly can detect what the solution or user's intent is. It can't auto import resources, even when it knows that something already exists, and it won't remove something in its own state when you're trying to import it, even though you clearly want to replace what's in state with what you're importing. And it's a monolith, so it's not like we can change small parts to fix our use cases.

There's no amount of GitHub issues or mailing lists that will fix all this crap. Some of it is poor functionality, some of it is buggy functionality, and some of it is just bad design. It needs to be redesigned, and its components separated so they can be replaced or improved as needed, without editing Go code or a DSL.

But mainly, it needs to work the way humans need it to work in production. Not the way a bunch of idealistic devs think is theoretically the best way. "Just don't ever touch an AWS account at all by any means other than Terraform. What's so hard about that?" Nobody who has an actual live product used by customers [for money] could possibly survive this way without hours of outages and weeks of delays. If your tool is designed for an impossible scenario, then it's impossible to use it "properly".


> Terraform doesn't look for existing resources.

This is 100% at odds with your stated goal of predictability.

> It doesn't make a best effort to fix things even when it clearly can detect what the solution or user's intent is.

As is this.

> It can't auto import resources, even when it knows that something already exists, and it won't remove something in its own state when you're trying to import it, even though you clearly want to replace what's in state with what you're importing.

I’m noticing a pattern with your stated desires being at odds with your stated goals.

FWIW, I manage dozens of accounts which no one touches outside of TF (or other similar tools) and have worked this way since 2016 with basically no issues.


Why does this rant start by talking about optimising things for the user, and then end up talking about optimising things for the developer?

I get that they're talking about productivity tools for developers. But that misses the point that optimising these tools for developers will mean they're not optimised for the actual user. We're not the customer. The stuff we write should be fast when run in production. Having it be nice to develop on is a bonus.

Having generated configuration that the developer doesn't have to think about means that the system can't be tweaked for the particular use case to make it faster in production. Yes, it's easier. No, that's not better.

Our job is to handle the complexity. To take "simple" business requirements and deal with all the complexity that is needed to make those work. That's the gig.


Looks like someone who has not spent enough time reviewing modern computing history.

Heck, who voted up this article...

This article is like the NFT frenzy: it's fine, but it's going to be baseless because it's looking too far into the future.


Why? I don't see much unreasonable here.

Make infrastructure changes faster? Why not.

Make it easier to create ephemeral resources? Why not.

Make scaling completely automatic? Would be nice. Though here I'd of course like to have some knobs to control the automatic scaling, else it could easily run out of reasonable budget.

My only disagreement is client library vs declarative configuration. I'd prefer the latter, but using some better language than YAML; maybe, say, Dhall.

And yes, it's now common wisdom that the cloud is the new mainframe; e.g. [1]. Well, okay; if this is what is required to run services at scale without breaking the bank, so be it. If deploying to millions of ZX-81 Spectrums happened to give a 3x reduction in opex, the industry would adapt to that, too.

[1]: https://news.ycombinator.com/item?id=26857859


Yeah, I just love his breathless dismissal of the incumbents (Guess I'm a neo-luddite!)

In fact I think a lot of this optimism is in dire need of a couple of baseball-bat hits to the kneecaps.

> But things are rarely built to optimize for developer productivity.

Well, these a-holes abandoned Rails in favor of serverless-react-lambda-script. They made their bed, now they must lie in it.



