Not trying to be discouraging, but my fear would be that your product is just a feature that other video editing software products will add if it is popular. Have you thought about your endgame? It is a good idea though -- I'm surprised they don't already do that.
Yeah, that's something I've considered. I'm surprised they don't yet too. Even though it might be a short term opportunity, it feels worth pursuing for now because it's a tool that can save me and others a good chunk of time.
My other line of thinking is that this could expand into an editor that's purpose-built for screencasters, with whatever other niche features that might entail.
I applaud your efforts, but it's disingenuous to title this as how you "made a geolocation service" when you are just reading a field from someone else's geolocation service.
Indeed. Using another service and being a wrapper around it is not all that impressive. He is also using Workers, which makes his solution expensive.
I run ifconfig.io, which now gets just over a billion hits a day. It is basically an echo service: it just parrots back what Cloudflare tells it. But since I run it on Linode, it costs me a whopping $40 a month for 35 billion returns a month. Using Workers would cost me several thousand dollars.
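For anyone wondering what "parrots back what Cloudflare tells it" looks like in practice: Cloudflare injects headers like CF-Connecting-IP and CF-IPCountry on the way through, and the origin just reflects them. This is not the actual ifconfig.io code (that's linked downthread), just a minimal Go sketch of the idea, with made-up routes:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Cloudflare sets these headers on every proxied request;
	// the "service" just reflects them back to the caller.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, r.Header.Get("CF-Connecting-IP"))
	})
	http.HandleFunc("/country", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, r.Header.Get("CF-IPCountry"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The Workers math roughly checks out, too: at the ~$0.50 per million requests Workers charged at the time, 35 billion requests a month would run on the order of $17,500/month.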
But if I expect bursts, Lambda could potentially cost me a few thousand dollars in a day, while my own instance could handle them fine for like $50. Even paying $50/mo over a few years would pay off in that case.
Exactly. Developer time is a lot more expensive. Most people want to look cool and do cool things, so they drink the Kool-Aid.
It sounds a lot cooler to say "I made a geolocation service using serverless/Lambda" than "I made a geolocation service using boring monolithic technology", even though the former takes 150 hours to get right and is painful to add features to and manage, while the latter is very easy to maintain and very predictably priced.
There are TONS of use cases for Lambdas where the operational costs of running something persistent (both in terms of time/energy/expertise and, to a lesser extent, money) are much higher.
In my last gig, we ran plenty of services that would never need to scale and would likely never hit the point where Lambda became too expensive: low-utilization services (e.g., order submission, because we only ever got a couple hundred orders per day), cron-type jobs (admins could monitor them via CloudWatch), and temporary fan-outs using queues. Anything that won't ever hit the millions-per-day level is a good candidate for something along these lines. Most of those services cost <$5/month.
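As a concrete example of the cron-type case: a scheduled CloudWatch rule invoking a tiny handler, with nothing running between invocations. A minimal Go sketch (the event wiring and the cleanup step are illustrative, not from any real service):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// Invoked by a CloudWatch scheduled rule; there is no server to
// keep running (or patch, or monitor) between invocations.
func handler(ctx context.Context, e events.CloudWatchEvent) error {
	log.Printf("scheduled run fired at %s", e.Time)
	// ... do the nightly cleanup / report / fan-out here ...
	return nil
}

func main() {
	lambda.Start(handler)
}
```

At a few invocations a day, something like this sits comfortably inside the free tier, which is where the <$5/month figures come from.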
I think there are situations where Lambdas make sense in spite of their pricing. I applied for a job on the Warren campaign, and their tech stack relied on AWS Lambdas. I think that makes sense for them: it's a short-lived endeavor that isn't going to be around next year. Spending a bit more to not have to deal with the headache of maintaining your own infra makes sense in scenarios like that.
If that’s your opinion I’d like to hear your rebuttal to this [1]. It’s hard to argue against actual numbers for an actual production app running at actual scale.
If you are spending $279 on a complicated Cloudflare Worker/Azure Functions setup, you could have gotten away with $100/month if you had chosen Linode/droplets.
32 million requests can easily be served by a $50/month Heroku node.
I think you massively misread the article. $279/mo is what he would have spent. What he actually spends is under $1/mo. Sure, a $50/mo Heroku box would do the trick, but why spend $49/mo MORE when serverless is cheaper?
What I want to hear is how you argue serverless is “snake oil” when evidence says it’s massively cheaper even at scale. Less than $1/mo to serve 141m requests. How is that snake oil when even you say an equivalent “serverfull” solution costs 50x more?
But see, none of that is true. Everything you just said is misleading at best and completely false at worst. Especially the part about the free tier: this isn't the 12-month free trial, this is the "free forever" tier. Sure, prices can change, but they can change for servers too. That's no different, and you can't waste time/effort trying to engineer a solution for a problem that doesn't yet exist and may never exist.
I think I’ve heard enough to say you have no idea what you’re talking about and this conversation has become pointless.
I almost never do frontend work. I'll gladly take a PR if you want to make that one page faster and prettier, though I don't want the build to require yarn or some other JS/CSS build framework for a single page.
Exactly! I was about to tell a similar story, but yours is MUCH better.
I run an internal service that just loads (a filtered version of) the MaxMind GeoLite2 db in memory using its great C library bindings for Perl. On top of that I fork (so memory is shared) a Mojolicious server, and it's able to serve millions of reqs per hour on a very small VM. Response time is not much more than a "hello world"...
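The setup above is Perl, but the pattern (load the db once, share it across workers, answer from memory) translates anywhere. A rough Go sketch of the same idea, using the oschwald/geoip2-golang reader and a hypothetical .mmdb path:

```go
package main

import (
	"log"
	"net"
	"net/http"

	geoip2 "github.com/oschwald/geoip2-golang"
)

func main() {
	// Open the database once at startup; the reader memory-maps
	// the file, so lookups are served from RAM, not disk.
	db, err := geoip2.Open("GeoLite2-Country.mmdb")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	http.HandleFunc("/country", func(w http.ResponseWriter, r *http.Request) {
		host, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			http.Error(w, "bad remote address", http.StatusInternalServerError)
			return
		}
		record, err := db.Country(net.ParseIP(host))
		if err != nil {
			http.Error(w, "lookup failed", http.StatusInternalServerError)
			return
		}
		w.Write([]byte(record.Country.IsoCode + "\n"))
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Since a lookup never touches disk or the network, response time is dominated by HTTP overhead, which is why it benchmarks close to a "hello world".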
It's just a normal Linode running Arch Linux. Only some very minimal tweaks were needed to handle the new connection load.
I should write a blog post about it. It has steadily gained more traffic over the years and hasn't needed much care and feeding.
The Go code can be found here if you're interested, though it's some of the ugliest code I've written. I made it when ifconfig.me was having load issues many years ago.
I love the simplicity. I'm sure I could find source code for similar services that is way over-engineered and doesn't perform nearly as well. Do you serve this with a reverse proxy in front, or just as is? I'm assuming as is, since you have the TLS configured right in the code, but I figured I'd ask.
Having a reverse proxy on the same machine effectively doubles the number of connections and requires copying the request around. There is no practical benefit to a reverse proxy for this use case, so the Go program is listening directly on the internet.
I do have the service behind Cloudflare, which is essentially a reverse proxy. The reason for Cloudflare is non-obvious and deserves a blog post of its own: connection pooling.
If I have all the requests go back to the origin, the bottleneck is not the Go code but Linux opening and closing all those single-use TCP sessions. Cloudflare creates around 100k persistent connections to the backend and then just keeps them open. This makes Linux much happier.
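For concreteness, here is roughly what "listening directly on the internet" plus keep-alive-friendly settings look like in Go. The cert paths and timeout values are placeholders, not the actual ifconfig.io configuration:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, r.Header.Get("CF-Connecting-IP"))
	})

	srv := &http.Server{
		Addr:    ":443",
		Handler: mux,
		// net/http keeps connections alive by default; a generous
		// idle timeout lets Cloudflare's pooled connections stay
		// open instead of churning single-use TCP handshakes.
		IdleTimeout:  10 * time.Minute,
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 10 * time.Second,
	}
	// No reverse proxy: the binary terminates TLS itself.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```

The one binary handles both jobs, so there is no second hop on the box doubling the connection count.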
How would it reduce the cost? They are only returning data from a header of the request they received, plus doing an (unnecessary) lookup to turn the country code into a country name.
What would you cache to make it cheaper?
No matter what, his Worker needs the smallest time/memory slot there is. You cannot make it cheaper while still using Workers.