
> For the 80% or more, serverless works just fine.

Cite this. I don't believe you. I'm across a pretty broad slice of industry and can only draw on anecdotes from colleagues, but the majority of people with actual hands-on experience are disillusioned and say that the biggest (only?) driver for serverless at this point is top-down organisational pressure, created by technically incompetent strategic management that looks to 'thought leaders' and Gartner to tell them what technologies to use.

If serverless works for 80%, then why is that 80% not even remotely reflected in the sentiment here? HN is usually a pretty good gauge for how the wider engineering community feels about a particular technology.




Hey there, I lead Developer Advocacy at AWS for Serverless (https://twitter.com/chrismunns).

I'll give you that this 80% number seems pretty out there. I don't know how that is measured or what it would be referencing.

If you step back and remove all the commercial software from the argument (something like 50%+ of enterprise workloads; the kind of thing you buy from a 3rd party and just run, like Sharepoint, SAP, or similar) and then look at how many business applications take on a trivial amount of load over time, then the author's post becomes more of an outlier. Realistically, few folks have apps that do 100 rps. And so for data processing/streaming/batch or web/API workloads, serverless actually does work out pretty well. Is this 80%? I'm not sure.

There is 100% an inflection point where, if your operator cost is low enough (human work + 3p tools + process + care and feeding), then the "metal to metal" costs can be comparable. Even the author admits that's leaving something on the floor, and so it really comes down to what your organization values most.

I would love for most of our serverless app workloads to be top-down organizationally driven, but the reality is that in most organizations it comes from developers themselves and/or line-of-business groups with skin in the game of seeing things move faster. That will then typically require buy-in from security and ops groups. If these folks you know have the trick to driving incompetent strategic management towards serverless top-down, I'd buy in on that newsletter.

In terms of HN sentiment: having been a member of this community for almost a decade, I don't know if I'd say it widely represents most of the dev world, as it tends to lean way more open-source and less enterprisey. I think there's also a larger number of people here that represent IT vendors that would love to see AWS fail :)

Thanks, - Chris Munns - AWS - Serverless - https://twitter.com/chrismunns


> And so for data processing/streaming/batch [...] serverless actually does work out pretty well.

This is my field of expertise. Serverless in the sense of Lambda/functions is not usable for serious analytics pipelines, because the maximum allowed image size is smaller than the smallest NLP models or even lightweight analytics Python distributions. You can't use Lambda on the ETL side, and you can't use Lambda on the query side unless your queries are trivial enough to be piped straight through to the underlying store. And if your workload is trivial, you should just use ClickHouse or straight-up Postgres, because it vastly outperforms serverless stacks in cost and performance. [1]

For non-trivial pipelines, tools like Spark and Dask dominate. And it just so happens that both have plugins to provision their own resources through Kubernetes instead of messing around with serverless/PaaS noise.

And PaaS products, well.

https://weekly-geekly.github.io/articles/433346/index.html

>One table instead of 90

>Service requests are executed in milliseconds

>The cost has decreased by half

>Easy removal of duplicate events

Please explain.

[1] https://blog.cloudflare.com/http-analytics-for-6m-requests-p...

IaaS is the peak value proposition of cloud vendors. Serverless/PaaS are grossly overpriced products aimed at non-technical audiences and are mostly snake oil. Change my mind.


The issue of application artifact size is definitely real, and it blocks some NLP/ML workloads for sure. Consider that a today problem, though, not one that's hard to solve in Lambda.

But we've 100% got customers doing near-realtime streaming analytics in complicated pipelines feeding off of things like Kinesis Data Streams. This FINRA example is one datapoint: https://aws.amazon.com/solutions/case-studies/finra-data-val... and this Thomson Reuters one: https://aws.amazon.com/solutions/case-studies/thomson-reuter...

These are nontrivial and business critical workloads.

Thanks, - Chris Munns - AWS - Serverless - https://twitter.com/chrismunns

edit:

-------------------------------------

Missosoup, I see you making changes to your comment, and it greatly changes the tone/context. I won't adjust my own reply to suit but will leave it as it was for your original comments on this.


I'm not going to elaborate on my comment any further now. Please feel free to edit yours or post another to answer anything I raised. Your original reply containing some generic sales brochures isn't what I expected from someone representing AWS stepping into this discussion.


That article appears to be discussing a migration from Redshift to Clickhouse. Redshift is a managed data warehouse, not a serverless solution in the same vein as Lambda.

I don't understand the point you are trying to make.

Edit: The comment I am replying to was originally just 'Please explain' and a link to the article in question, and contained no other context or details.


Sorry I have a bad habit of making a comment and then actually writing it in full. I should stop that.


Quite a lot of ETL ends up being some minor transforms + a query or two.

Not all of it is massive ML models doing a lot of computation, and I've had a lot of success using pandas and numpy in it (and GCP Cloud Functions).

Serverless has its niche and is a great little tool to smooth the impedance mismatch between data stores.
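A hypothetical sketch of that kind of glue: a small function doing a "minor transform" on a CSV with pandas, of the sort you might drop into a cloud function between two data stores. The column names and the filter logic are purely illustrative, not from any real pipeline:

```python
import io

import pandas as pd


def transform(csv_bytes: bytes) -> bytes:
    """A typical 'minor transform': parse, filter, derive a column,
    re-serialize. Column names here are illustrative assumptions."""
    df = pd.read_csv(io.BytesIO(csv_bytes))
    df = df[df["amount"] > 0]              # drop refunds/noise rows
    df["amount_usd"] = df["amount"] / 100  # cents -> dollars
    return df[["order_id", "amount_usd"]].to_csv(index=False).encode()


# In GCP this would typically hang off a storage trigger, e.g. a
# function that downloads the uploaded object, runs transform(),
# and writes the result to the destination bucket.
```

The whole function is stateless, runs in well under a second for small files, and needs no server of its own, which is exactly the "impedance mismatch" niche described above.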


ClickHouse is a really strange thing to compare to Lambda here. One is a method of performing small compute jobs, the other is an analytics database. They serve vastly different functions, and saying "ClickHouse or Postgres is cheaper and more performant than Lambdas" is nonsensical.


and I totally posted this from the wrong account... sigh..

this is me. Thanks, - Chris Munns - AWS - Serverless - https://twitter.com/chrismunns


Kinda wild that in your post history you're advocating for AWS as a cheaper, superior platform without disclosing that you work there.


Yup! Haven't done it in years and created this different account to be more clear/direct in who I am. That is also why I called it out at the start and bottom of all my responses.

Thanks, - Chris Munns - AWS - Serverless - https://twitter.com/chrismunns


[meta] Please stop downvoting this. Clearly those comments were years ago and clearly the author has been doing the right thing for years since.


Maybe that's how he got the job.


No. I've been at AWS for over 7 years in a few different roles. Came to the serverless space >2.5 years ago because I felt passionate about it (could have literally done almost anything). Again, sorry for mis-posting under my older personal account, it was rarely used fwiw.


I wasn't criticizing you. I was pointing out that an equally likely and more charitable interpretation is that you posted as a fan of AWS before you started posting as an employee.

Turns out I was wrong in this case, but you've explained the situation and everything is hunky dory.


> In terms of HN sentiment and in being a member of this community for almost a decade, I don't know if I'd say it widely represents most of the dev world as it tends to lean way more open-source and less enterprisey.

They also like to join the latest hype more often than not, so that should even out the anti-enterprise sentiment. I don't think serverless is past that point yet.

General opinion regarding the topic: I haven't done serverless in any way yet, but if it's similar to "regular"/other cloud services, then in my experience it only makes sense if you're so big that building a scalable infrastructure yourself is too expensive (unless you're Facebook or Google). The other use case is if your load actually fluctuates a lot, to the point where having enough resources available at all times just to handle the peaks is too expensive.

Whenever you can somewhat predict your load, having your own infra is almost always less expensive (at least here in Europe/Germany).


Hugs Chris!


Hug back at ya big guy!


Really? Hahahha


Miles and I know each other, what's wrong with a friendly hug?

Do you need a hug hesburg?


I recall a certain uptight code of conduct which explicitly forbids unasked for virtual hugs.


Yah, but I know him and we hug.


An example of where serverless is lovely for us:

We run on AWS. We log lots of stuff to CloudWatch. CloudWatch allows you to scan for regexps (more or less) and send matching lines to a destination of your choosing. There are about 5-10 such matching events per day that we care about, and when they happen, we want an alert in a Slack channel (or email or text or PagerDuty or...).

Option A:

Stand up a server. Deploy a web service on it that accepts POSTs from CloudWatch and acts on them. Update the box as OS vulnerabilities come along. Pay 24/7 for a server that accepts 10 incoming requests per day.

Option B:

Write a plain Python function that takes the contents of a POST that's already been processed and has no other idea that it's living behind a web server. Drop it in Lambda. Pay for 1000ms of processing time per day.

I wouldn't want to run our entire stack on Lambda and friends, but for certain specific situations it's brilliant and I wouldn't want to give it up.
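For concreteness, a minimal sketch of what the Option B function might look like: a Python Lambda handler that unpacks a CloudWatch Logs subscription payload (which arrives base64-encoded and gzipped) and forwards each matching line to a Slack incoming webhook. The webhook URL is a placeholder, and the payload handling assumes the standard subscription-filter format:

```python
import base64
import gzip
import json
import urllib.request

# Placeholder -- substitute your own Slack incoming-webhook URL.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"


def decode_cloudwatch_event(event):
    """CloudWatch Logs subscriptions deliver a base64-encoded, gzipped
    JSON blob; unpack it into a plain list of matched log lines."""
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    data = json.loads(payload)
    return [e["message"] for e in data.get("logEvents", [])]


def handler(event, context):
    messages = decode_cloudwatch_event(event)
    for message in messages:
        body = json.dumps({"text": message}).encode("utf-8")
        req = urllib.request.Request(
            SLACK_WEBHOOK_URL,
            data=body,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
    return {"forwarded": len(messages)}
```

At 5-10 invocations a day, the per-request and duration charges on something like this round to zero.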


If you have spare capacity on a server you can probably squeeze in such functionality at no extra cost.


But that makes operational expenses go up. For instance, now you have to tweak the Nginx config to support the service that normally lives on that machine and the new system. And you still have to write and maintain the web service driving it (which may not be hard, like a Flask app with a single route, but still more than zero). And you still have to maintain the server itself, which should involve regular OS updates and any work you have to do to automate deployment of the web service, unless you really like doing all that stuff by hand.

Or you can upload a single function definition to Lambda and let it do everything else.

Again, I wouldn't use Lambda for everything, but I'll gladly let it handle the stuff I won't want to bother with.


>But that makes operational expenses go up

On the other hand, you're probably all set up to do that already. It's no big deal. Introducing a new tech requires training, new deployment procedures, new documentation, etc., and now you probably have a one off component out there. There's a line somewhere, but if I were just adding a webhook for log events I'd throw it on a server I already have.


It's almost never about server capacity. It's usually about who's maintaining them, who controls the updates and versions of software on there, and which dept the server's billed to. All stuff that's reasonable for things of a certain size, but unreasonable for things too small or too big to fit the process well.


Ah so it's serverless as a tool to work around corporate red tape, then ;)


I would love to see a study of how much IT spending in general is just for tools to work around corporate bureaucracy. I suspect the figure would be staggering.


A market that will never die!


> ...at no extra cost

Complexity and dependency are costs too, and I would argue often more significant costs than the $30/month to run a server.


[flagged]


Yep, you figured me out. Drat!


How can cgi-bin in Nginx help in his case?


Option A above requires standing up a new server. Option B involves just writing a short piece of code that handles a form POST and uploading it to a server (in that case, Amazon's). Option C, "If you have spare capacity on a server you can probably squeeze in such functionality at no extra cost," has the following disadvantages, according to him: "Now you have to tweak the Nginx config to support the service that normally lives on that machine and the new system. And you still have to write and maintain the web service driving it (which may not be hard, like a Flask app with a single route, but still more than zero)."

But a CGI program, assuming you have cgi-bin already configured, gives you the benefits of option B without the drawbacks of option C: you don't need a Flask app or a route or a configuration tweak. You just write a short piece of code that handles a form POST and upload it to a server. The difference is that it's your server, not Amazon's, and you have to put this at the top of your code:

    #!/usr/bin/python
    import cgi
    qs = cgi.parse()  # parses the query string or POST body for you
Then you can version-control it in Git so that when you need to figure out what broke last week you don't depend on AWS deployment logs.

Like, literally, the drawbacks of "serverless" that are being cited in this discussion (startup time and performance cost) are the reasons we moved away from CGI in the late 1990s, and literally every single advantage being cited for "serverless" in this discussion is also an advantage of CGI. "Serverless" also has disadvantages of vendor lock-in, lack of programming-language choice, and difficulty of monitoring and debugging that don't exist with CGI, but those aren't coming up much in this discussion.

The only real advantage seems to be if you can get by with literally zero servers of your own, thus saving the US$5 a month you would spend on a low-end VPS to run your potentially dozens of CGI scripts under Nginx, maybe inside Docker. In that case maybe you should look into shared web hosting. Or possibly get a job, if that's a possibility.

I'm not saying "serverless" isn't a good idea or that it isn't an improvement on CGI. The CGI interface is kind of sketchy and it's produced a lot of bugs in the past. It would be easy to do better, especially with a native-language binding. But the reasons we abandoned CGI are also reasons to abandon "serverless", and the actual advantages of "serverless" (if they exist) are not coming up in this discussion.


Narrator: It can’t.

Well, it can, but that’s the trivial part of all this. Tweaking a web server config is like 1% of the problem.


>HN is usually a pretty good gauge for how the wider engineering community feels about a particular technology.

HN is a pretty good gauge for how a subset of a very startup focused portion of the engineering community feels about technology.

I don't know that the same can be said about it reflecting the opinions of the engineering community on the whole, or if the community is even unified enough for anything to be a representative sample. We're a pretty diverse and opinionated lot.


I agree. All I have done for the past 15 years is startups and I love HN, however in the last few years it has become very apparent there is a selection bias to people in startup/valley type tech. I spend a ton of time explaining to my CEO and CTO why a customer cannot just dump that old way and embrace the new hotness. The valley culture is a bubble and does not reflect that of real world IT in general. Still love HN.


This! There are lots of people in tech that don't know/visit HN!


I honestly doubt that anyone else in my team of 7 reads HN. There's one guy who might. The rest, it just doesn't seem in their character.

While we might like to think the opposite, I feel like the majority of people in our profession are like the majority of people anywhere -- going to work to work, and going on FB/twitter/reddit to waste time. I don't know many devs personally who take a big interest in the goings-on of the startup world or open-source world in the large. Do they care about spending some of their free time honing their craft or messing around with a new tool? Absolutely. But few seem like the type who would enjoy engaging in a quasi-philosophical discussion on the strengths of various design paradigms. They spend all their day thinking at work; not many want to continue thinking about the same sort of stuff during their off hours.


I spend six figures a month in AWS, on a mix of servers and serverless.

Serverless is more cost effective for sporadic, bursty things where the cost of an instance sitting idle outweighs the more expensive per-operation costs of serverless. On sustained load, it is almost never cheaper.
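That break-even is easy to put in rough numbers. A sketch under assumed prices (a small always-on instance at ~$0.02/hour; Lambda at $0.20 per million requests plus a per-GB-second duration charge; all figures are illustrative, so check current regional pricing before trusting any of it):

```python
# Rough monthly cost: always-on instance vs Lambda, assumed prices.
HOURS_PER_MONTH = 730
EC2_PER_HOUR = 0.02              # assumed small on-demand instance
LAMBDA_PER_MILLION_REQ = 0.20    # per-request charge
LAMBDA_PER_GB_SECOND = 0.0000166667
MEM_GB = 0.128                   # 128 MB function
DURATION_S = 0.1                 # 100 ms per invocation


def lambda_monthly(requests_per_month: float) -> float:
    requests = requests_per_month / 1e6 * LAMBDA_PER_MILLION_REQ
    compute = requests_per_month * DURATION_S * MEM_GB * LAMBDA_PER_GB_SECOND
    return requests + compute


def ec2_monthly() -> float:
    return HOURS_PER_MONTH * EC2_PER_HOUR


# Sporadic load (10 requests/day): Lambda is fractions of a cent,
# the idle instance still costs its full ~$14.60/month.
# Sustained load (100M requests/month): Lambda overtakes the instance.
print(lambda_monthly(300), lambda_monthly(100_000_000), ec2_monthly())
```

The crossover point moves with memory size, duration, and instance choice, but the shape is always the same: per-invocation pricing wins when the box would mostly sit idle, and loses once utilization is steady.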

Another price to pay is complexity; it's simply more difficult to build, deploy, and monitor serverless systems.


> Another price to pay is complexity; it's simply more difficult to build, deploy, and monitor serverless systems.

What do you mean by serverless systems?

My experience isn't with AWS, but with Azure, so maybe it isn't an apples-to-apples comparison.

My experience has been that building, deploying and monitoring even moderately complex, interdependent Azure Functions has been less difficult than building the equivalent services to deploy to IaaS or on-prem. FWIW, a big part of that reduced complexity has been offloading infrastructure management to Azure instead our IT guys, but deployment and monitoring has also been practically painless. Are there some complexity "gotchas" that I've missed?


I too use a mixture of server and serverless solutions on AWS and you hit the nail on the head. Serverless is a really great solution for certain things that don't need to run all the time. If you are hosting a web server that needs to run all the time then just set up a traditional server.

We have a few lambda functions we use that send notifications or perform very specific tasks that are sort of on their own. Running on serverless is perfect for these. These functions are only invoked occasionally (a few times a month). So it is great that the Lambda only boots up the few times it is needed and I don't pay when it sits idle (most of the time). It costs me 10 cents a month instead of say $10 a month with a small EC2 server that might otherwise do the same thing.

Another great use case is bursty or inconsistent traffic/invocation patterns. For example, we have a use case where users can text/SMS in and receive a promotion code. We have this running on serverless because it runs very erratically. A nationwide commercial will run and ask users to text a code, which triggers hundreds of thousands of Lambda instances within a few seconds. But then it might not run again for a month.

On a traditional server platform, we would need a load-balanced solution with several servers set up for redundancy against a huge influx of traffic, only for those servers to sit idle for who knows how long. You can't shut it off because people might text at other times too, but you need a system that is available for the "worst-case scenario" at all times (aka when a national commercial runs). You also don't know exactly when the commercial will run, so you have to always be ready. This setup might cost thousands of dollars a month on a traditional server system, but on Lambda it could cost $100, because we only pay during that huge influx of traffic.

What I also like is that you can just trust that the Lambdas will boot up under nearly any traffic load, so you don't have to worry about making sure your server is "ready" or "the right size". If we get 1 hit, then we pay a fraction of a penny. If we get ten million hits, then we pay $20 (really - it costs $0.20/million). Try to get the same reliability with a $20 server that can do that... spoiler: you can't!

So I don't look at it as a slower solution that is more expensive. If you are running data analytics or a web server that is always on, then this is surely the case, in which case I would say to use a traditional server.

But serverless has a role and it is great for infrequent usage, or for burst traffic. We love that we know we can get nearly infinite traffic and just pay per request without worrying about the infrastructure being ready. It is infinitely scalable, so that we don't need to worry about it.

There is always a pro and a con. I love serverless because it opens up new options for developers and system admins. It is another tool in our tool belt. It is not the perfect solution for everything. We still run many vanilla EC2 instances for internal CRMs and web servers that run constantly, or for booting up long computational processes. These are cases where servers are still far better than serverless. If we all learned to understand it as another tool available to us, and not a "one or the other" requirement, then there is a wonderful world where both tools can coexist.

Note: I use the word "infinite" very loosely in this post. I mean "infinitely scalable" in that it is scalable up to any reasonable amount, beyond what I would ever have to worry about. If the day comes that my servers are getting more traffic than Amazon is able to handle in their data servers then I will be far richer than anyone else here and I will come up with a new solution at that point.


I like serverless for my own use cases, but I don't usually comment about it. People would probably not be interested. I have an HTTP call happen once every five minutes or so and just record the response. That's it. About 100 ms of compute time. I would be paying more if I just had an always-on server doing this type of work. The sentiment we normally hear on HN is from people who really love or really hate something. I appreciate the option, but other than that I'm not going to be telling the world about my simple use case.


Hear, hear. I have a use case where I need to run some data extraction a few times a week that takes a few minutes and a few GB RAM to complete. I could simply squeeze it in on one of our other servers but that means the whole setup there would get slightly less elegant and would pollute the logs with weird load spikes.

Lambda is just perfect for such a use case, and the number of processing minutes makes it almost free.


Because comment sections are not good measures of opinion - they are places for discussion.

People make more effort to engage if they have strong positive opinions and minor negative opinions. Minor positive opinions, which (you hope) are the average in an audience, go underrepresented if the effort required to give feedback is great. That's why metrics measuring average opinion - like NPS - aim to require as little effort as possible.


We’re using Serverless where it fits and carrying on with our lives. I don’t feel the need to try and defend tools, because it’s exhausting. There are certain load graphs for which regular servers can’t follow the line in time, some for which regular servers can follow the line but would result in overprovisioning, and some in which request isolation is just a great thing to have. For these loads we use Serverless, for the rest we use normal EC2 or ECS.

Just understand the job and use the right tool for it. This fighting over Serverless and servers is like arguing over whether hammers or screwdrivers are better tools.


Datapoint:

I'm using serverless because it lets me deploy a new API endpoint in a matter of minutes.

There is zero configuration, and deployment is a single command. I also have my own servers running some of my APIs; the difference in how much work I have to do is huge. Serverless on Google Firebase is literally "export a function, now you have a new endpoint!"

There is, quite literally, nothing else to do.

I can move 5x-10x the speed with serverless. There is less (almost 0) infrastructure to maintain.

Yes, it costs more, although the math with Google Firebase Functions is pretty trivial, everything is bundled under 1 service, and sending data around between Google cloud services is free.


> because it lets me deploy a new API endpoint in a matter of minutes

I've seen many projects that do the same without serverless just fine.

It's always been there, unless you had some enterprise-grade stacks. I was able to do this in early 2000s, with PHP and FTP, in a matter of seconds. Just upload that api/new_endpoint.php.

It's not like this timing is the big deal, unless you really need to add those API endpoints quickly.

> I can move 5x-10x the speed with serverless.

I'd argue this is only applicable to the initial setup. There is one-time investment in designing your own infrastructure, but you can move fast on anything well-designed.

And it's not like serverless designs are immaculate. Just recently I was listening to Knative introduction talk and my obvious question was "what happens to event processing if servers that power this system fail and processing job just dies without a trace". Turned out, there are no delivery guarantees there yet. My conclusion: "uh, okay, blessed be the brave souls who try to use this in production, but I'll check out in a few years".

> There is less (almost 0) infrastructure to maintain.

It doesn't run on some sci-fi magitech in a quantum vacuum. The servers and networks are still there.

The benefit is that you don't have to design them. Saves you some (possibly, lots of) bootstrapping time when you're about to make your first deployment and ask yourself "how" and "where to".

The liability is that you can't - so when things break you can only wait, or try to re-deploy in the hope that it gets scheduled onto servers that aren't affected.


Is your API so large and/or unstable that optimizing for time needed to add an endpoint is your lowest hanging fruit?


> Is your API so large and/or unstable that optimizing for time needed to add an endpoint is your lowest hanging fruit?

Startup, I am adding endpoints on a very regular basis as I build out the app. I probably add a couple a month.

As an example, I am writing one right now. It'll take me ~30 minutes to write+deploy it to test.

My long running and CPU intensive stuff is running on a VM of course.

I'm sure if I had spent several days learning yet another tech stack that I could create a system that let me deploy endpoints to a VM that fast, but I have Better Things To Do(tm).

I could also shove everything behind a single endpoint that has a bunch of options, but that brings in a separate set of problems.


Almost any stack you were competent in would let you write and deploy an endpoint in half an hour (particularly in the early stages of app development), so I'm not sure that this is a positive for serverless.


The amount of infrastructure I have to setup/maintain to do all of that is, well, nothing.

That is where the mental savings comes in.

When I first started, I went from "never wrote an HTTP endpoint" to "wrote a secured HTTP endpoint that connects to my DB" in under 20 minutes.

There is nothing for me to maintain. Setting up a new environment is a single CLI command. No futzing with build scripts. Everything will work and keep working pretty much forever, without me needing to touch it.

The sheer PITA of maintaining multiple environments and teaching build scripts what files to pull in on both my website and mobile app is enough work that I don't want to have to manage yet another set of build scripts. I don't want to spin up yet another set of VMs for each environment and handle replicating secrets across them. I just want a 30 line JS function with 0 state to get called now and then.

(As an example, trying to tell my mobile app's build system that I want a build with production code but pointed at my test environment? Hah! Good luck, the build system is hard coded to only have the concept of dev/prod! I've had enough of that.)

It is frustrations like that Serverless solves.


>When I first started, I went from "never wrote an HTTP endpoint" to "wrote a secured HTTP endpoint that connects to my DB" in under 20 minutes.

You could have had the exact same experience with Heroku or many other similar products that have been around for almost a decade and a half--it has nothing to do with serverless.

>The sheer PITA of maintaining multiple environments and teaching build scripts what files to pull in on both my website and mobile app is enough work that I don't want to have to manage yet another set of build scripts. I don't want to spin up yet another set of VMs for each environment and handle replicating secrets across them. I just want a 30 line JS function with 0 state to get called now and then.

It sounds like your architecture is far too complicated if you're managing all this by yourself. You don't need "a set of VMs" per environment if you're a small start up.

>Everything will work and keep working pretty much forever, without me needing to touch it.

If you believe that, you haven't been doing this long enough.


> You could have had the exact same experience with Heroku or many other similar products that have been around for almost a decade and a half--it has nothing to do with serverless.

See below with Ander.

I don't have much infra, I don't need infra.

(I do in fact have a couple of VMs because I do have a service that is stateful.)

Not having to think about anything aside from "here is my code, it runs" is nice. Debugging consists of "did I write bad code?"

> It sounds like your architecture is far too complicated if you're managing all this by yourself. You don't need "a set of VMs" per environment if you're a small start up.

For the few things I run on VMs I still have to manage secrets so they can connect to the right DB. I also need to point my apps to the proper environment. The app being dev'd on my phone needs to point to the dev server with the dev DB secrets stored on it. Same goes for the website on localhost, or the website on the development server.

In an ideal world I'd have a script that created new environments on demand and it'd wire up all the build scripts automatically. Every developer would trivially get their own VM and could switch parts of the software (website, app, services) to point to various environments easily.

Why in the bloody world some build systems hard code only 2 options in is beyond me.

> If you believe that, you haven't been doing this long enough.

Sure old runtimes will be deprecated, but let's compare that to:

1. Setting up nginx 2. Setting up letsencrypt 3. Figuring out why my certs auto-renew but nginx doesn't pick them up (Turns out default setup doesn't actually force nginx to pick up the certs!) 4. Learning about various process management tools for JS to keep my services running 5. Setting up alerts for aforementioned daemons. 5b. Probably involves configuring outbound email. 6. Realizing something went horribly wrong in 5b and my inbox is now flooded with alerts. 7. Writing scripts to automate as much of the above as possible. 8. This legit happened, there was a security fix to SCP on Windows that changed how one of the command line parameters worked, breaking my deployment scripts when running them from Windows. 9. Does Google really want to charge me per decryption to store my API keys in their cloud API key storage service? They have to make money, but ouch. 10. Security updates not just for code, but for the VM images.

Like, I get it, if my entire job is back-end development. But I want to write some functions that maybe hit double digit millisecond run times and throw away all their state after.

Firebase says "here, export some functions, we'll give you an HTTPS URL you can call them at."

That is the sum total of my relationship with the tooling. "Run my code." "Sure."

About the only other thing that works that simply is a headphone jack, and those are going away.


Again, none of this has anything to do with serverless architecture vs non-serverless architecture. There are many PaaS providers that would eliminate all of your complaints that aren't "serverless". They just manage the server for you. But you still write a normal app that you can easily run locally or port anywhere you want.

If you decide you don't like the service, your app isn't tied to a proprietary API or architecture.

And you don't need to manage additional VMs for stateful services.


Only 2 endpoints per month? What exactly are you comparing against, such that adding an endpoint twice a month would waste way more time than serverless?


With Firebase if I need a new env I just type 1 command and get it.

With VMs I have the joy of managing my own keys.

VMs need security patches, updates, and I have to maintain images. I have to think.

It is hard to explain how simple firebase functions are.

There is literally no mental overhead. There is no worrying. There is nothing to maintain or think about, no setting up quotas or managing what machine something is running on or getting user auth set up. No certs that can expire or daemons that can crash.

I can come back 2 months later and count on everything I set up functioning just as I left it.

Some months I wrote a bunch of endpoints; I average maybe 2 a month, but if I'm adding a new feature I can pop out 3 or 4 a day, and then go on my way and work with the front end again.

Getting debugging up and running for them can be interesting (painful and bad), but everything else is just smooth.

I don't want to maintain a bunch of infrastructure, I want my users to ask for some data and get some data back.

If someone is doing non-trivial processing, serverless is probably not for them. But I have a bunch of 20-50 line functions that are all independent of each other and have 0 state.

I realize I'm paying a premium, but it is worth it to have this stuff that just works.


Ah, ok, so you're 100% serverless, so functions are saving you all the headaches of running your own infra? That makes sense. It sounded like time-to-add-an-endpoint was your main criterion, which just seemed strange.


The time saved is time thinking about doing anything else other than "I want to write code". :)


> top-down organisational pressure created by technically incompetent strategic management

Do most people work in this kind of environment? At my company our management couldn't care less what technologies we use. They want to see reasonable cost and solid uptime, and they ask engineering to deliver that.


Imagine a new CTO or middle manager coming in, with a small ego, and a big need to prove himself, and initiating a make-work project.

What defense mechanisms does your firm have against this? Unless your team leads are golf buddies with the C-suite, it is unlikely that their opinions are going to be taken seriously.


How would your hypothetical defence mechanism tell which C-suite initiatives are worthwhile, and which are ego driven make-work?


That's the thing. The two are indistinguishable, unless you give people with strong technical skills veto power on projects. If you don't work in such an environment, the only reason you haven't run into a dangerous, idiot manager is luck.

For some reason, in most firms, the c-suite isn't super keen on delegating that power.


> The two are indistinguishable, unless...

So are they indistinguishable or not? Two things can't be a bit indistinguishable.


Here's an anecdote: I'm a one man shop who has been planning, developing and deploying Django applications for six years. I have not got to the point where my deploys are automated because I am swamped with work just keeping my clients up to date and my servers ticking along. I usually deploy one client per $5 DO VPS.

I just deployed my first project to PythonAnywhere and I am blown away at how much easier it is. I'm going to pass the cost on to clients and that's that. I'm going to be less stressed and can concentrate on what I want to do: build applications.

I don't know if it's "serverless" in the same sense as AWS Lambda or whatever, but I can imagine many solo/small shops seeing less ops work as a godsend.


That just looks like a standard PaaS to me--Heroku has been doing that forever.


FWIW. It has worked great for us, saving us mid six figures per year or more. The savings keeps increasing as we figure out ways to move more of our workload over to Lambda, we are even running PHP there. Step functions, lambda, sns, sqs and api gateway are pretty cool tools for a lot of projects when used properly. I think the paradigm shift in how you need to make things work is frustrating for some, but once you get it, it works nicely.

I have also helped engineers at two other companies start making the switch and they are both happy so far... so, in my (obviously limited) first hand experience it works 100% of the time. :)


mid-six figures is $500k?? Are you factoring in salaries to get to that number or merely AWS spend? I'd love to see a spreadsheet with this analysis.


Fair question... $500k in measurably lowered AWS spend (cancelled RIs at end of term and never renewed). If I figured in salaries and was fair about it, I'd say about 80% of that was actual savings; 20% was related to the costs involved with rewriting or porting code. I wouldn't count the cost of new features against that, as that is still forward progress, just written for Lambda rather than for life on a traditional web server.

To put this into perspective: we have grown about 10x and our monthly AWS bills have actually gone down. So I have no way of knowing what this workload would cost on EC2; going down that rabbit hole would produce a "savings" number easily into the millions.


Man, that makes no sense to me if it's just AWS cost -- very basic back-of-the-envelope calculation, assuming you had $500k per year to spend on CPUs on AWS:

A c5d.2xl (8 cores) on AWS is about $1875 per year reserved; $500k per year would buy you over 2100 cores. If you only did 100 req/s per core, you'd be able to handle something like 213k req/s on average.

Just the ALB costs on Lambda at this request rate would cost you $10MM annually... 20x the cost of doing it on VMs.

    >>> 500e3 / (1875.0/8) * 100 * 60*60*24*365 / 1e6 * 1.51
    10158796.8

    $500k to spend
    $1875 per box / 8 cores per box
    100 req/s
    60*60*24*365 seconds in a year
    1e6 requests per $1.51 of alb spend
It's staggering how much work you could do with just the money you saved; it seems impossible to me that Lambda could account for this?
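Restating the one-liner above with named variables (same figures and assumptions, nothing new):

```javascript
// Back-of-the-envelope: how much EC2 capacity $500k/year buys,
// and what the same request volume would cost in per-request fees
// at $1.51 per million requests.
const budget = 500e3;              // $/year of claimed savings
const boxPerYear = 1875;           // reserved c5d.2xl, $/year
const coresPerBox = 8;
const reqPerSecPerCore = 100;      // conservative throughput
const secPerYear = 60 * 60 * 24 * 365;
const perMillionReq = 1.51;        // $ per 1e6 requests

const cores = budget / (boxPerYear / coresPerBox);        // ~2133 cores
const reqPerYear = cores * reqPerSecPerCore * secPerYear; // ~6.7e12 req/yr
const albCost = (reqPerYear / 1e6) * perMillionReq;       // ~$10.16MM/yr

console.log(cores, albCost);
```

Which is where the ~20x figure comes from: ~$10.16MM in per-request fees versus the $500k of reserved instances.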


I'll color the picture in a little more, maybe that will help make sense of it.

We have a unique workload that includes very large bursts of work that need to be worked on ASAP. Spinning up instances on demand is not fast enough, so we had a lot of servers sitting idle. Lambda allows us to execute this same work much more cheaply because we aren't paying for idle CPUs. Lambda also allows us to run at much higher concurrency than EC2 would, for the cost. 15,000 threads spread across hundreds of IPs is easy and cheap on Lambda; not so much on EC2.

We went from ~250 customers per cluster (our term for a set of dedicated workload servers) to 2,600 (haven't pushed it past there yet). The work that was done on those instances is slowly being moved to Lambda, and all new work/features are added to Lambda directly. This has effectively allowed us to drop a few clusters and also move to lower resources per instance.

FWIW we don't use ELB for Lambda; requests come in via API Gateway, go to Lambda/Step Functions directly, or land in an SQS queue in the case of workloads that have rate limits. For Lambda-based workloads the cost breaks down something like: 56% Lambda, 34% Step Functions, and 9% API Gateway.


Yeah, cool, so a very unusual workload - I'm not surprised there are apps that fit Lambda in a cost-effective way; they're just not typical, I think.

Thanks for the explanation ;)


> HN is usually a pretty good gauge for how the wider engineering community feels about a particular technology.

....no, no it really isn't. HN is a bit of an echo chamber of people who care about being on the cutting edge. 80% of the rest of the industry goes home and stops thinking about software engineering.

Anyway, here's my two cents: I'm building a web app that has the following components:

* A super simple SPA frontend

* Like...5 API endpoints? that allow people to register their emails to the service

* A mailer that sends out a daily email at the time the user specified

* Like 5 more API endpoints for manipulating some per-user data (profile, preferences, etc).

For this use case, serverless is _perfect_. I'm using the "Serverless framework" which allows you to write the definitions for your functions (think routing rules, a la `routes.py` in a Django app) and CloudFormation directly into a YAML file, then it consumes that, updates a CloudFormation stack with all your bits n' bobs all connected together, and gives me:

* 10 or so lambdas

* Monitoring for all of the above, including pings if something goes catastrophically wrong

* A DynamoDB table to hold all the data

* An API gateway hooked up to all the lambdas

* An S3 bucket with my static assets in it

* A CloudFront distro set up to CDN all my static assets

* A DNS record to hook up CloudFront to my URL

* An SES (emailer from AWS) queue to ship out emails on an hourly or so basis

This whole stack is declared and managed with like 120 lines of YAML and deploys itself in a minute or two, no extra automation or management required. It costs me like 2 dollars a month right now since I haven't launched the site, but by my back of the napkin calculations, it'll probably cost under 50 bucks a month with my expected traffic. Serverless can be great for this sort of thing, and if I do eventually need to migrate this to VMs running containers or something it won't be _that_ hard (it is, after all, just a pile of TypeScript running on Node).
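For a sense of what those ~120 lines look like, here's a minimal `serverless.yml` sketch in the same spirit (all names are made up; the real file would also declare the DynamoDB table, S3 bucket, and CloudFront distro as CloudFormation resources):

```yaml
# Hypothetical serverless.yml sketch -- two HTTP lambdas plus a
# scheduled mailer, in standard Serverless framework syntax.
service: daily-mailer

provider:
  name: aws
  runtime: nodejs12.x

functions:
  register:
    handler: src/register.handler
    events:
      - http:
          path: register
          method: post
  sendMail:
    handler: src/mailer.handler
    events:
      - schedule: rate(1 hour)
```

`sls deploy` turns that into a CloudFormation stack; the API Gateway routes and monitoring come along for free.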

Now, I'll be honest, it took me a full day of tinkering to figure out how to get static assets in an S3 bucket to publish to CloudFront, but that's because I don't read docs very well, and now I know how to do it for the future. I imagine if I had less hubris and found someone else's tutorial for doing so, I could have had it solved in an hour or so. But lambda does work for a lot of simple use cases.

I imagine that most people's shitty wordpress sites from 2006 would benefit a lot from being frozen into static HTML, dumped in an S3 bucket, and put on a CDN. They'd get security patches, OS updates, a global CDN and a bunch of other goodies essentially for free. And the sad reality of our industry is that most of it isn't building fantastically intricate real time distributed systems that both cook your breakfast and brush your teeth, it's maintaining some guy's shitty wordpress site from 2006. Tech isn't sexy outside of the FAANG/HN bubble, and _that's perfectly OK_. Serverless is a great paradigm for those people that don't need anything fancy, it's like the modern LAMP stack IMO. It's a good enough default from a performance and cost perspective for most people who have no business thinking about (or who just really don't want to think too hard about) their infrastructure.

Is it a pile of bespoke garbage that sometimes feels like it's going to topple over? Sure, but so does all the software I work on at my day job. At the end of the day, people should use whatever is right for the task, and serverless is great for low- to medium-volume mostly static sites and apps that have a little bit of data manipulation and a little bit of background processing but nothing that rises to the level of _needing_ to manage your own infrastructure. And that does really cover a significant portion of the industry, even if it's not being adopted by the 80% right now.


I don't think the only drive is top-down. I work in a place where devs get a lot of say in which tech we push (other than core languages; more just for hiring/market/support purposes) and there are a lot of devs that seem all about serverless .. and then are on 3 hour support calls with me because they can't get some set of A + B + C components working with lambdas.

I haven't done direct development with lambdas, but I've had to help other devs debug issues (currently in a devops role, but I was mostly a dev for like 15+ years before) and been to a few meetups where I watched people advocate, write and show severless deploys on AWS and Google.

I can see some use cases for it, but I honestly hate losing a lot of the introspection and debugging capabilities with them. I'd personally rather deal with setting up my EC2 instance or K8s/Marathon deploy and being able to get to the underlying system if I need to. If things go well with a lambda, then you're all set, but the moment you need to start debugging anything serious, I feel like you're going to be in a world of pain.

In one specific instance, we had a developer who set up a lambda and also set up nginx to route to it because he didn't want to use API Gateway, which he got working, painfully, after a lot of work. He should probably put up a blog post on it honestly, as I don't think it's officially supported.

Of course, this is all anecdotal. It's really difficult to measure things like speed and cost objectively with this stuff since everyone's stack and needs and uses are just a little bit different.


API Gateway gets expensive when things get busy. App load balancers also let you point methods at Lambdas though, so nowadays that's likely a better idea (and certainly more serverless) than doing the routing in nginx.


Odd that you demand a citation and then counter with 3rd party anecdotes.


I use both Lambda and EC2 a lot. The decision on which route to go for a particular service/feature/app is pretty straightforward for me: need to support high loads or super low latency? EC2. Low volume, unknown/unpredictable loads? Lambda.


> HN is usually a pretty good gauge for how the wider engineering community feels about a particular technology.

HN is a pretty good gauge for how Silicon Valley feels about a particular technology.


> I'm across a pretty broad slice of industry and can only draw on anecdotes from colleagues,

Wow. Sounds like you should not be so certain.


> if serverless works for 80%, then why is that 80% not even remotely reflected in the sentiment here? HN is usually a pretty good gauge for how the wider engineering community feels about a particular technology.

Every time something sympathetic to a "hyped" technology is posted it's largely derided for being the "popular, hyped" option. MongoDB absolutely ruined the idea of NoSQL for years because everyone just meme'd about "webscale", without being willing to consider that maybe there's merit to the technology.

Are you seriously doubting that lots of people are moving towards serverless infrastructure?

Even internally at big companies they use "serverless-like" models eg:

* Lots of big companies have an internal FaaS

* Those FaaS are owned and managed by other internal teams, and are treated in much the same way as how you'd treat a 3rd party offering.

Serverless just outsources what those companies are doing already.

I leverage AWS Lambdas almost exclusively. Yes, I am paying a higher price for worse latency. That's OK. I'll pay money for that if what I get in return is managed hardware, OS, and runtime environment - problems that generate billions of dollars in revenue for the many companies working on making this easier.

Performance is a problem I wish to address aggressively, but it's 3rd on the list. A rough approximation of my priorities might be:

1. Security

2. Complexity (including operational complexity)

3. Performance

Serverless gives me the tools to manage (1) and (2) in a way that has worked extremely well for me personally, and I have not found latency to be a problem for me.

Sadly, the site is down so I cannot view the article and comment on things more concretely; however, I saw the 8x figure and that's fine. 8x latency is fine for many workloads, but operational overhead isn't fine for mine.

edit: OK, so they just don't support HTTPS. It loads now. 8x cost, 15% performance. Cool.

Cool numbers and article, I appreciate what they're getting at. I don't see how you can walk away with "Serverless is all hype and never an option" from it, but whatever. I will once more point to the "Any time this comes up people bash it because 'hype bad'". The top voted comment currently is someone bashing serverless and contains literally no substance.

> Finally, I'm not trying to bash API Gateway, Lambda or serverless in general here, just showing that for some workloads they are a lot more expensive than boring old EC2 and Elastic Beanstalk

Really makes sense to me! Pick the right technology for your constraints. If serverless offerings like lambda could solve every use case AWS wouldn't need other services.


> HN is usually a pretty good gauge for how the wider engineering community feels about a particular technology.

Source?



