I moved a 1M-monthly-visitors project from Heroku to Render in December, and other than a few little hiccups, I have a better service (no hard cap on memory, for example) for less than half the price.
I thought I was happy with Heroku until I realized how absurd the cost had become and how little the platform had evolved. I've been there for almost a decade.
I wasn't at quite this scale (yet) on Heroku, but I operate a service with quite a few moving parts, and I was deeply concerned about what an increase in scale would mean for my costs on Heroku. I moved to Render, kept my monthly opex essentially flat, and dramatically scaled up the available compute resources for my users just as my biggest customer's needs really skyrocketed.
I'm glad I switched when I did, because I think I'd probably have a $600+/month bill from Heroku right now.
Edit: a couple more thoughts about Render:
I've been using Render's managed Redis offering for the past few months in beta, and it's been rock-solid. I'm really happy with it.
Also, I am delighted that Render's two United States datacenters are in Oregon and Ohio. My understanding is that they spread their customers across GCP, Azure, and AWS, and I could not be happier to be out of Heroku with its exposure to the tire fire that is AWS us-east-1.
Well, Heroku is overpriced garbage, so that's not surprising. They rely on lock-in to keep you stuck paying egregious fees for services that are relatively unreliable.
After migrating an entire startup infrastructure from Heroku to AWS I'm even more against Heroku given how easy it is to use something like Elastic Beanstalk to do the same thing without many of the same downsides.
I've been burned by Heroku, though, so I have a strong negative opinion.
Overpriced yes, but garbage? Heroku is definitely stagnating, but they were integral to a whole new genre of PaaS. Heroku got a ton of people's first apps online. Even today their 12 factor app methodology is incredibly useful/powerful.
Now that said I totally relate to having strong negative opinions after being burned. I've got a few orgs like that too. What did they do for you?
Edit: nevermind I should have refreshed. You already answered in a sibling comment
We recently got bit (not quite burned) by the load balancers in the Common Runtime being shared across all apps.
Some piece of malware was associated with one of the IPs of Heroku's load balancers. One of our customers ended up blocking one of our servers associated with that IP because of it. We did some research and we even think we know which app caused it (an app someone else hosts on Heroku), and we don't think it should be flagged as malware, but either way... some other app on Heroku could be truly doing bad things and we would then get our server blocked again.
Heroku's advice was to put up a CDN in front of their load balancers, which has worked for now. But that's extra cost and system complexity just to get around a limitation of Heroku.
We did confirm a Heroku Private Space would eliminate this, since we'd get our own set of IPs for the Private Space's load balancers. But that comes with extra cost, and it may even limit which add-ons we can use in the Private Space.
So although we've generally been happy with Heroku, we are concerned with some technical limitations, like this important (to us) one.
For us it was mainly the fact that every month some part of Heroku is down or degraded. That and the huge price increase when jumping from the standard dynos to the large dynos.
For reference, we migrated to new t4g instances on AWS and received about a 100x performance improvement for a similar (sometimes cheaper) price than Heroku. We are also able to connect to our company VPC and use private resources partitioned elsewhere in AWS (e.g. a redis cluster that costs the same as Heroku with far fewer limitations).
The downside is obviously configuration and learning AWS. Thankfully, we are pretty well versed in AWS here.
Looking at prices is definitely something I’m going to be doing in the short-to-medium term here. We’ve grown our Heroku bill for years, but I’m not entirely sure what all they take care of that RDS or Aurora wouldn’t also handle.
We currently don’t have anyone who is a dedicated sysadmin, so that’s one thing to keep in mind. We’ve been able to rely on shared sysadmin responsibilities here and there, and have Heroku take care of monitoring whether our database has somehow “degraded.” But I do wonder whether, when Heroku identifies that a database needs to be updated, they’re really just rebranding something AWS does automatically with RDS. I’m just not sure.
Basic things like rolling deployments are still considered "labs" features.
You need double your typical number of connections to support rolling deploys; combine that with the large price jumps and the low connection counts on the plans, and it becomes very expensive just to have something that should be standard on a modern PaaS. This is but one example.
I also find it sad how good Heroku still is when compared to Render or other similar services. Sad because Heroku has built such an incredible experience that others still haven’t caught up to after all these years.
Render is close, but they’re still missing a lot of important details that make deploying Rails apps as easy as it is on Heroku. Instead they kind of throw the manual at you and expect you to read a tutorial.
Right now I'm at my third job in my career where we've migrated off of Heroku. There have got to be agencies out there that specialize in this, right? If not, it seems like a hell of an opportunity for someone.
It is great to see the 'why not just host it yourself?' comment. Cliché as it is on Hacker News, it always reminds me of the infamous Dropbox comment.
I personally think Render has a great future; 99% of devops is the golden architecture of multiple app instances behind a load balancer, with a DB and cache.
I don't want to care about creating automated backups. I don't want to care about managing a VPS. I don't want to care about security updates (though unattended-upgrades does make that easy nowadays). Let me git push, get a dev environment and a link, and start shipping features to customers. That is the value of Render/Heroku.
What's not immediately clear is that this isn't a standalone offering. It seems to be meant for use with their hosting service. So you'd already be in the ecosystem, not trying to hook it up to, say, a Netlify page or something on your own VM.
(Render founder) Hi HN, we built our managed Redis service for people who already use Render to host their apps. You can of course use it from outside Render, but ideally you'd use the Redis internal URL (e.g. redis://red-longuniqueidhere:6379) in your code that's also running on Render.
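For anyone wondering what that looks like from app code, here's a minimal sketch using redis-py; the REDIS_URL environment variable name and the hostname below are placeholders chosen for illustration, not anything Render prescribes, and you'd use whatever internal URL your dashboard shows.

    # Minimal sketch, not official Render docs: read the internal Redis URL from an
    # environment variable and connect with redis-py. REDIS_URL and the hostname
    # below are placeholders; use the internal URL shown for your Redis instance.
    import os
    import redis

    redis_url = os.environ.get("REDIS_URL", "redis://red-longuniqueidhere:6379")
    client = redis.from_url(redis_url)

    client.set("greeting", "hello from a service running on Render")
    print(client.get("greeting"))  # b'hello from a service running on Render'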
Hi, love what Render is doing. The one thing holding me back from migrating to you from Heroku is Postgres “point in time” backup restore using the write-ahead log. Having used it just once before with Heroku, it's indispensable. The minute you add it, we will be making the move.
I know it’s on your radar but just want to give a nudge.
I've been hearing fantastic things about Render. One concern I have is about the trajectory of the company: for example, there was a time when Heroku was a great bet, then it got sold and stalled. What are the most convincing points you can give for why Render won't follow a similar path?
Does anyone on HN have first person experience with Render? Their offering certainly looks great on paper. I'd specifically be interested in your experience with regards to:
- Stability / Uptime
- How well the management UI / API / tools work
- Any services you found you needed but aren't yet offered
Yes, I deployed a Django application to Render with a couple of additional services: a database, 2x background workers, and Redis. Overall the experience was positive, but I feel that it tries too hard to have a "simple" UX, which ended up annoying me. I would use it again, but I didn't "love" it. I have a feeling it will be pretty good long term. Here are some annoyances I had:
- Environment previews don't always work, hard to easily switch between services running in environment previews
- Slow deploys because it rebuilds the docker image every time, no way to connect it to a registry
- Documentation is shallow, especially when things get technical or complex, particularly around their "blueprints"
- Ran into a bug with their environment groups
- They proxy through cloudflare and had some intermittent issues
- Zero downtime deploy wasn't actually zero downtime
Aptible founder here. Since you mentioned compliance: Aptible is a PaaS focused on enabling cloud deployments that meet rigorous security and compliance benchmarks (HIPAA, HITRUST, SOC 2, ISO 27001, FedRAMP, etc.)
We're not directly competitive with Render but our solution is similar — we support turnkey app deployment, PostgreSQL, Redis and other OSS databases. Our focus is on the problem of simplifying compliance in the cloud. We're also building a self-hosted SaaS version of our product for companies who want the benefits of PaaS but direct access to their AWS/GCP/Azure infrastructure as well.
Hi, is it possible to proxy, for instance, /media/ from example.com/media/ to an S3 bucket? I run a setup using `nginx-ingress-controller`, and it's possible to use it as a reverse proxy to an external HTTP server, so I'm wondering if Render supports this as well. Thanks!
The only killer feature missing from Render is distributing the same app across regions for minimal latency. Heroku pitifully supports only 2 regions.
I want the choice for my app to be global on day one with no devops required. Just two clicks and it's available on multiple regions.
This is the first time I'm targeting the Spanish market so I don't have experience with AWS from there. Paris is geographically close so I imagine that would have the best latency (sorry I can't give you anything concrete).
Been very happy with Render after migrating everything off of Netlify (they started charging 10x more after reading our git commits). Great to see them offer a hosted Redis option!
Since they charge by team member, and a lot of their customers share passwords or only deploy via git automation without logging in, they've started to count git committers as "team members", which blows my mind: even if you can justify some sort of "cost" associated with having a team member, I can't fathom why you would charge by the git committer, except to jack up your revenue in preparation for a sale or the next VC round or some other non-customer-related reason.
Wow, dang. So a one-time committer costs you the same as a daily committer? Sounds like they're discouraging open source models (even internal open source, where people on different teams can contribute PRs). That would really suck for us.
We have a large team of devs who didn't have Netlify accounts. They couldn't log in to see build logs, change configs, or anything else. Aside from seeing the deploy preview URLs, they wouldn't even know that we used Netlify to deploy our static sites. We were paying ~$120/mo for three accounts for managers, bandwidth, and extra build minutes.
Then we get an email saying that because we have 20 devs who commit to our private repo of the site we deploy, we need to pay for 20 member accounts. Each member account is $99/mo. So, because they started reading our git commits to count authors, our monthly bill was going to go from $120 to over $2,000 (way more than what we pay for all of Google Cloud).
I mostly don't write CI pipelines; I generally use my existing templates. Those are annoying little quick tasks, not a big deal.
There's a nice poetry to seeing a system autorecover from outages or scale correctly when it gets a huge burst of traffic. From being able to help an engineer do something they think is impossible, and do it easily.
It’s loosely typed, you don’t know what parts are compatible or incompatible with one another, you can’t test without deploying to CI and seeing the results, the syntax sucks, the white space sucks, etc etc.
In Berlin, quite a 'cheap' city for tech wages in Europe, you'll probably 'only' get around 100/hour unless it's finance-related or funded, but that's still like 800 a day if you work standard days, which isn't too far off.
It is part of my job; however, so far I have found that if your pipelines are getting too complicated, it is time to break them out into Docker image builds, bash scripts, etc.
Our Redis offering is covered under the BSD-3-Clause license. We do not include Redis modules that need a Redis Labs license, but that isn't a problem in practice for the vast majority of Redis use cases (caching and queueing).
Render supports building / deploying docker containers so it supports everything, including Java.
render.com and fly.io are in a similar niche of "deploy your app easily".
To me fly.io prioritized the wrong thing: deploying apps close to users. I care more about low price, ease of use and features than optimizing latency.
render.com is the best "run a server for your app" service that I know of. I used DigitalOcean VPSes and then their Apps platform before.
If you don't care about optimizing latency, we definitely didn't build Fly.io for you. That's not because you want the wrong thing, though. It's because we have a narrow and, we think, very valuable focus.
We're actually not in the same niche as Render! At least, I'm pretty sure we're building towards different things. But we are each (a) small (b) scrapping with large public cloud providers and (c) attempting to generate a self sustaining startup reaction.
Attracting people with new apps is a way both of us prove the value of what we're doing. Attracting people who want to run their apps close to their users is the way we make a lot of money.
I think GP means adding a redis container to an existing k8s cluster. Which I suppose, especially without persistence or HA, is not much more than specifying the image.
GP is saying adding a Redis container to Docker/k8s is (nearly) zero config, and they are correct, with the caveat that this only holds if you don't need persistence features.
Hi,
I'm actually really sorry; I don't post much, and I think when I do I make my comments a bit short, and it's causing confusion. I hope this clears things up and is a bit helpful.
To explain the context: I find there seems to be a lack of clarity around K8s. Let's just say, for this post's purposes, I am a _user_ of k8s. That is, I don't actually run the cluster at all, I don't manage its storage; I am operating as a user without full admin access.
I also work at VMware, which has the whole Tanzu thing going on, so what happened was internal IT set up a K8s cluster for production hosting (maybe it was dogfooding or something; I'm not really involved with that team, user only as I said).
I got frustrated figuring out how to use it, however. A bunch of the online material is about setting up the K8s infrastructure/cluster rather than using it as a dev.
I hope it makes a bit more sense now why I said Redis was practically zero config.
To explain the setup: I code a Python app, and I use Docker Compose with a single docker-compose.yaml. This YAML gives the build instructions for my app and minimally (by this I mean with absolutely minimal config options) also sets up a Postgres DB, RabbitMQ, and Redis.
I consider this minimal, as most of it is boilerplate and I'm just configuring its resources.
So for dev, when I want a local spin-up, I `docker-compose build` and `docker-compose up`.
A couple of lines on the Redis service like

    volumes:
      - ~/.docker-conf/redis/data/:/data/

give it persistent storage across builds and deploys.
So to my mind this was pretty easy so far. Then I looked at what was needed to deploy on K8s and I nearly puked. Sorry, there is no way I was touching that mess of YAML.
So I just use `kompose convert`, which takes the single docker-compose.yaml and auto-generates all the little YAML babies needed for deploying to staging and prod (the K8s cluster).
That's it. Regarding persistent storage, which I use, I define that with the Kompose label in the main docker-compose.yaml. The goal is just a single file to configure everything.
When I kill the deployment I just don't kill the persistent disks, and then I deploy all the auto-generated YAMLs at once (including the persistent disks, which won't be overwritten if they already exist).
(Note I haven't made any edits to any of these YAMLs; they are all auto-generated.)
To me this is minimal config compared to trying to integrate with a third party outside of my app network (needing corporate firewall exceptions, etc.). But yeah, there are internally hosted Redis offerings I think, but even then I took a pass, as the above is just very easy and neat.
Also the persistent disks are all backed up in the background.
I guess "it's minimal config when you already have a nice K8s setup ready to use" would be a fairer statement :)
Hope this is useful to someone just trying to get a damn app running!
Congrats on the launch, Render team! Like so many others here, I'm a former Heroku user turned AWS customer who is longing for an option with the positioning and UX/DX that Heroku offered ten years ago.
There's this bit of copywriting I see on many startup landing pages that annoys me a bit:
"You can now set up a Redis instance in just a few clicks and let Render handle the heavy lifting to operate it reliably and securely."
To me, it always feels somewhat patronizing to refer to yourself as doing 'heavy lifting' and the customer's work as (obviously) not-quite-as-heavy-lifting. Maybe in particular because I've set up Redis in the past, and as many others have mentioned it is really the epitome of hassle-free, easy-to-setup server software. So referring to hosting Redis as heavy lifting just kind of feels wrong or over the top.
I'm excited to see a solid competitor to Heroku. While I trust Heroku due to using it for so long, they have become complacent with pricing and features. For example, affordable autoscaling looks awesome.
That's great. They make good points on securing databases as well. Everything has to be open with Heroku's basic dynos. It would be nice to secure PostgreSQL or Elasticsearch so the IP is not publicly available.
I didn't quite understand one of the things offered in the blog post.
- "Access to the Redis CLI"
Does that mean we can access the Redis CLI in the web interface? I need a product like this, so I signed up and created a Redis instance, but I couldn't find the Redis CLI in the web interface. I would like access to the redis-cli, but I don't want the instance to be accessible from outside my cluster. If I connect with my local redis-cli, I have to turn on external connections, which will make the instance accessible to everyone.
Clustering/HA is next on our list. We wanted to get the core release out quickly so customers can avoid mucking around with their own Redis container deploys.
We've been using https://upstash.com/ which charges per request, but has a nice SDK that allows easy Redis connections in Cloudflare Worker & AWS Lambda environments. It's been a very nice experience so far.
Are there any good open-source libraries that use REST connections to Redis so you can use it in "serverless" environments?
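To make the pattern concrete, here's a rough sketch of what Redis-over-REST looks like, modeled on my recollection of Upstash's path-style REST API (command and arguments as URL path segments, bearer-token auth, JSON replies); the endpoint and token are placeholders, and in practice you'd probably use their SDK rather than raw HTTP.

    # Rough sketch of Redis-over-REST, modeled on Upstash's path-style REST API.
    # The endpoint and token below are placeholders for illustration only.
    import requests

    ENDPOINT = "https://eu1-example-12345.upstash.io"       # placeholder database endpoint
    HEADERS = {"Authorization": "Bearer YOUR_REST_TOKEN"}    # placeholder REST token

    # SET greeting hello
    resp = requests.get(f"{ENDPOINT}/set/greeting/hello", headers=HEADERS, timeout=5)
    print(resp.json())  # expected shape: {"result": "OK"}

    # GET greeting
    resp = requests.get(f"{ENDPOINT}/get/greeting", headers=HEADERS, timeout=5)
    print(resp.json())  # expected shape: {"result": "hello"}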
I use CapRover (https://caprover.com/) and a DigitalOcean droplet to have my own toy PaaS for experiments that can run almost everything. They have 1-click install apps, and for apps they don't have, as long as you have a Docker image you can run it.
May I ask what the benefit of this is vs spinning up your own Redis server?
An equivalent solution would be to spin up a $5 droplet on DO, do basic due diligence wrt security (lock down SSH, set up a firewall, etc.), and you end up with more memory and more connections for less money.
I realize I'm a Luddite, in the field of DevOps.
Is it just the benefit of not needing to maintain the server?
Because we've been led to believe that hosting any kind of database yourself is an enormous challenge, and unsafe, and not-cloud-native-enough, and god bless you if even for an instant you believe you have enough knowledge to run such complicated pieces of software that are harder to get right than rocket science.
At least that's what the cloud vendors want you to believe.
As a person from both devops and SWE worlds, it all comes down to "do you already know how." If you know how already, it's the way to go for sure. If you don't, the extra money to outsource it will enable you to focus on adding value elsewhere.
Now that said there's definitely a cultural belief in startups that you should outsource everything you can. It works for some people, but I've seen it kill others because they were sending all of their revenue to their vendors. The outsource model always suffers as you scale up.
> do basic due diligence wrt security (lock down ssh, firewall, etc.)
This is exactly the benefit. You don't have to do any of this stuff. For example, my organisation has two developers, neither of whom is an expert in network security or system administration. So if we can outsource this work to a third-party provider, that's a huge benefit. The difference in cost between a $5/month DO droplet and a $10/month managed Redis instance is negligible for us.
Yes, and the convenience of not needing to know how to maintain the server. You can point a person with a different type of experience at this problem.
In what way is it different from other hosted Redis solutions? Also, the sizing suggests it's a single replica; IIRC most other hosted Redis platforms support clusters.
I'm guessing the appeal of such services is "hassle-free" deployment? Because I really don't get it.
My DigitalOcean instance is 2 GB RAM, 30 GB disk. With backups it costs $11.40/month. And yes, I'm free to run whatever on it, be it Postgres or Redis or...
To have an equivalent on Render would be 3-4 times more expensive?
This is basically hosted redis, it looks like. It's like any other hosted db. It means they take care of monitoring, backups, configuration, fixing stuff that goes wrong, etc. RDS is the same idea and people spend a fortune on it.
With Platform as a Service (PaaS) you are paying a premium for isolating the problem to the individual component that is down. You can either spend a few hours going through the logs, or have the PaaS vendor let you know that the problem is not at their end so you can spend your time isolating the issue.
(Render founder) The HN title doesn't reflect the title/content of the blog post, which is a launch announcement for fully managed Redis on Render. We didn't submit the post, so we'll need @dang to edit it.
I guess the point is that people who haven't spent much time with Redis, would just spin up a Redis instance but won't typically set up everything that's offered (backups, monitoring, idk if there's any automatic failover or clustering out-of-the-box - this kind of stuff).
I agree, and it's unclear why you're being downvoted. From the title I thought that Render must be a database company that had built a Redis-protocol-compatible database. It took a while of searching around their blog, GitHub, and careers page to understand that that isn't what they do.
Generally “me too” HN comments are frowned upon, but I also clicked because I thought this was a Redis-compatible rewrite that perhaps offered something like zero-click sharding.