Since the whole issue boiled down to "providing some functionality to users who saw the app in the App Store and downloaded it," and has completely moved on from the initial IAP discussion, I believe a good compromise for all parties would be to allow developers to submit such applications but simply unlist them, making them discoverable only via search or deep links.
This isn’t actually one of those solutions where Lambda shines, pricing wise.
I would simply trigger a Lambda function once a minute (or every X minutes) using CloudWatch to fetch the latest articles and save them to an S3 bucket which I would expose and cache using CloudFront or any other CDN service.
This would lead to:
- No Lambda costs as it would be covered by the monthly free tier of 1M requests.
- No storage costs as the size of the stored data would be extremely small.
- Really fast responses as the “response” would actually be a static file cached at the CDN.
- The only parameter defining your cost would be your CDN of choice, which can range from free to as low as $10/TB. For a project like the one in the article, that's hundreds of millions of requests for just $10.
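A minimal sketch of that scheduled Lambda, just to make the flow concrete. The function name, bucket name, and `fetch_latest_articles()` are all hypothetical placeholders; the only real moving parts are the S3 `put_object` call and a short `Cache-Control` so the CDN revalidates roughly once per interval:

```python
import json
import time

def build_snapshot(articles):
    """Render the latest articles into the static JSON file the CDN serves."""
    return json.dumps(
        {"generated_at": int(time.time()), "articles": articles},
        separators=(",", ":"),
    )

def handler(event, context):
    # fetch_latest_articles() is a placeholder for your feed/scrape logic.
    articles = fetch_latest_articles()
    body = build_snapshot(articles)

    # boto3 ships with the Lambda Python runtime; upload the rendered file
    # with a short TTL so CloudFront refetches it about once a minute.
    import boto3
    boto3.client("s3").put_object(
        Bucket="my-articles-bucket",  # assumed bucket name
        Key="latest.json",
        Body=body.encode(),
        ContentType="application/json",
        CacheControl="public, max-age=60",
    )
```

The CloudWatch Events (now EventBridge) schedule rule then just targets this function with a `rate(1 minute)` expression.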
Yep, that's exactly the architecture I use to watch over 600k GitHub repo changelogs for https://changelogs.md
Lambda generates static HTML in the background, puts it in S3, and the static HTML gets served via CloudFront.
The Lambda costs are a whopping 26 cents per month, for over 2 million Lambda invocations per month. If anyone is interested in the architecture, I've developed this website as an open source project here, for people to learn from: https://github.com/aws-samples/aws-cdk-changelogs-demo
> No Lambda costs as it would be covered by the monthly free tier of 1M requests.
That's far from the full picture. AWS Lambda is charged in units of computational resources, expressed as multiples of 64 MB of RAM per 100 ms, each rounded up to the next increment and with a minimum of 128 MB. So you only pay a flat per-request fee if all your invocations are short-lived and barely use any compute. A long-running task that needs a bit of RAM costs many times the price of a single request.
You’re right, I should have mentioned that as well.
I didn’t go into those details because I was strictly talking about the project in the article and the compute time limit would not be exceeded for this project either.
400,000 GB-seconds are free every month, and even if the Lambda function ran for 2,592,000 seconds (a full 30-day month, way more than enough) at 128 MB of RAM (again, more than enough for a task like this), it would only use 324,000 GB-seconds.
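The arithmetic, for anyone who wants to check it:

```python
# Free-tier sanity check for the once-a-minute Lambda described above.
SECONDS_PER_MONTH = 30 * 24 * 60 * 60  # 2,592,000 seconds in a 30-day month
MEMORY_GB = 128 / 1024                 # 0.125 GB, the smallest Lambda size

# Worst case: the function runs non-stop for the entire month.
gb_seconds = SECONDS_PER_MONTH * MEMORY_GB
FREE_TIER = 400_000                    # GB-seconds included free each month

print(gb_seconds)              # 324000.0
print(gb_seconds < FREE_TIER)  # True: still inside the free tier
```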
> I would simply trigger a Lambda function once a minute (or every X minutes) using CloudWatch to fetch the latest articles and save them to an S3 bucket which I would expose and cache using CloudFront or any other CDN service.
Lots of upsides to this design; it pretty much outlines a toned-down version of a very large, high-throughput, low-latency, globally distributed configuration system with strict write-ordering but near-real-time write-propagation guarantees that a sister team worked on (though, I hear, they're redesigning it for reasons not relevant in this context). There is much to like about it.
Fetching items from S3 (fronted by a CDN or not) would require managing credentials on the client side, though? Doable, but it may require additional code for an auth service (AWS Cognito or AWS STS or...)?
You can simply whitelist the CDN's IP addresses in your bucket policy (many providers publish them in their documentation or expose them via an API). It's important to schedule a Lambda to run every now and then to check whether the IP ranges have changed and update the policy accordingly.
Another way would be to set a custom header with a token on the CDN to be sent in requests to the origin, which you can, again, whitelist in your bucket policy.
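For the IP-whitelist variant, the bucket policy would look roughly like this. The bucket name and CIDR ranges below are placeholders (documentation-example ranges); substitute the ranges your CDN actually publishes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyCDNEdgeIPs",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-articles-bucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": ["192.0.2.0/24", "198.51.100.0/24"]
        }
      }
    }
  ]
}
```

If you're on CloudFront specifically, an Origin Access Identity avoids the IP-maintenance chore entirely; the custom-header token approach is the more portable option for other CDNs.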
They recently added splash screens to the Instagram and WhatsApp apps, which I presume was a step towards smoothly introducing the Facebook company logo to their app branding in upcoming updates.
Impressive how well MS-DOS games stood the test of time. I still find them much more enjoyable than modern games.
Well, since the topic is relevant, a quick question to those who know their MS-DOS games well...
The other day, I was trying to remember an MS-DOS game in which you were a Pac-Man-like character eating chips on a motherboard or something. For some reason I recall the name of the game as “Yep” but nothing turned up when I searched for it. Any ideas?
If you're looking for a great Windows remake that keeps the original style, check out Megaplex. Apparently the original website is gone, but archive has it as well:
If you really want to play those games, I recommend you install Rocks and Diamonds, which implements Supaplex, Boulder Dash, Emerald Mine and Sokoban and even has some additional content. It's open source:
I would list all of Panic’s apps if the question did not specifically exclude well-known apps. They are hands down the best at delivering just beautiful software.
---
I use Reeder a lot (both on macOS and iOS), and love its beautiful simplicity!
I recently started using Paw and am in love with its design and features! There are so many well-thought-out features I wasn't even aware I needed.
As a long time user of Folx GO, I recently discovered CloudMounter from the same developer that allows you to mount cloud services as a local drive. I enjoy using both of them, and would definitely recommend!
I've tried both Transmit Disk and Mountain Duck, and I've been disappointed. Transmit Disk works but is very slow compared to using the client directly. Mountain Duck causes all sorts of odd things to happen—files disappearing, files that appear to have been moved aren't moved on the server, etc.
I'm using CloudMounter to mount an OwnCloud instance in our company network using WebDAV. It was able to properly pause and resume syncing multiple large files, several times.
> Concerns about server going down or changing cloud provider imo is not particularly interesting or even useful advantage to mention for personal infrastructure.
I look at this from a different perspective: I have plenty of actual things to do; personal infra should be the least of my concerns, and I should be able to get it up and running in the least amount of time.
> I've never got a case when an unmaintained docker setup can run 6 months later
It really depends on the well-being of the host and the containerized application. I have plenty of containers running for more than a year without a single hiccup.
I once had a credit card with the CVC code 007. Although it looked cool and everything, it took me a few hours on the phone to get it activated.
Why?
Simply because only a few days prior to the arrival of my card, the bank switched to voice-command-only menus on their hotline without a regular menu fallback.
I remember looking at the back of the card and saying, “oh, here we go...” to myself. Unsurprisingly, telling the system “zero-zero-seven”, “double-zero, seven” or even pausing and yelling “SEVEN!” did not work.
In the end, I made the system connect me to a random human being, who then transferred me to someone else, and that person redirected me to the regular, typing-based interface.
A few weeks later, they updated their system to fall back to the old system after the 5th error ¯\_(ツ)_/¯
Thanks, I enjoyed the read the first time I came across it.
I really don't want to come across as an annoying person but please do not submit the same content marketing articles over and over again. This is the 5th time you're submitting this.
What about tracking everything you type for a month to see which letters you use the most, and how often, and then generating a keyboard layout that will actually work for you?
These keyboard layout wars seem just too subjective to me.
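The frequency-counting step above is easy to sketch, at least. A toy version, assuming you already have a keystroke log as plain text (the layout-generation step is the genuinely hard part):

```python
from collections import Counter

def letter_frequencies(text):
    """Rank letters in a typing log by relative frequency."""
    letters = [c.lower() for c in text if c.isalpha()]
    counts = Counter(letters)
    total = sum(counts.values())
    return [(ch, n / total) for ch, n in counts.most_common()]

# A layout generator would then place the top-ranked letters on the home
# row / under the strongest fingers, ideally weighting bigrams too, since
# raw letter frequency alone ignores alternation and same-finger cost.
```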