Hacker News | d_white's comments

It's a shame because Google Hire worked really well. I don't understand the motivation.


THAT was the name, ty. Google Hire. Yes, it was an excellent product. Leadership at the top needs to understand the reputational cost and change the culture. Large corps aren't good at changing culture.


Looks like these were translations provided through a third-party platform. It's pretty tricky to review quality when you've handed off the responsibility.

I wonder how Weblate actually manages quality control in practice. This seems like a pretty bad advertisement for the platform.


A few years ago, I saw @r0ml touch on this in his linux.conf.au keynote. It had some really interesting ideas around how software could be democratized.

https://www.youtube.com/watch?v=i3nJR7PNgI4


It's worth noting that there is currently a gotcha with S3 Event Notifications: delivery is not guaranteed, so you may end up silently missing events.


S3 also doesn't provide a linearizable consistency model or even a vague approximation of one. You can't rely on the events you try to schedule happening in the order you try to schedule them in, or even happening at all.

This seems overcomplicated compared to using a regular timed event to trigger a lambda and having it decide what to execute conditionally.
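That cron-plus-conditional approach can be sketched as a single handler driven by a fixed-rate EventBridge rule. Everything here is illustrative: the schedule table, task names, and the once-a-minute invocation rate are all assumptions, not anyone's actual setup.

```python
from datetime import datetime, timezone

# Hypothetical schedule: task name -> predicate on the current (UTC) time.
SCHEDULE = {
    "hourly_report": lambda now: now.minute == 0,
    "nightly_cleanup": lambda now: now.hour == 2 and now.minute == 0,
}

def tasks_due(now):
    """Return the names of tasks whose predicate matches `now`."""
    return [name for name, due in SCHEDULE.items() if due(now)]

def handler(event, context):
    # Invoked by a fixed-rate EventBridge rule (e.g. every minute); the
    # function itself decides what to run, so no per-task S3 objects or
    # event deliveries are involved.
    for name in tasks_due(datetime.now(timezone.utc)):
        print(f"running {name}")
```

The trade-off versus the S3-object approach is that the schedule lives in code rather than in data, but delivery depends only on the EventBridge rule firing.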


I wasn't really too worried about picking apart the approach. If order mattered, you'd be looking at other approaches anyway.

Mainly just calling out a gotcha where you might quietly miss out on scheduled events with no warning.

For example:

1. Object is written successfully to the bucket.

2. `s3:ObjectCreated:Put` is _never_ delivered.

The possibility of duplicate events is warned about a lot in the AWS ecosystem, and that sets up an expectation of "at-least-once" delivery.


Can you elaborate more? Is this related to S3 uptime SLA or is there a different reason?


I believe it is separate from the S3 uptime SLA.

You can measure the impact, and potentially automate recovery of missed events, by:

1) Keeping track of the events published to you.

2) Generating an S3 inventory daily.

3) Comparing the events received to the objects listed in the inventory.

You rarely end up with fewer events than objects, but it does occur.
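Step 3 of that reconciliation is essentially a set difference. A minimal sketch, assuming the inputs come from your own event log and an S3 Inventory manifest; the helper name and keys are illustrative, not the commenter's actual tooling:

```python
def missed_events(inventory_keys, event_keys):
    """Keys present in the S3 inventory but with no corresponding
    ObjectCreated event received -- candidates for replay."""
    return sorted(set(inventory_keys) - set(event_keys))

# Example: two objects in the inventory, only one event received.
inventory = ["a.csv", "b.csv"]
received = ["a.csv"]
print(missed_events(inventory, received))  # -> ['b.csv']
```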

I've personally only observed this with an SNS target, but since the problem is on the S3 side, I believe Lambda targets can fall afoul of it too.


I wouldn’t trust S3 events going straight to Lambda. Sure, Lambda supports retries and a dead-letter queue, but you can’t reprocess the data.

A much more resilient approach would be:

S3 event -> SNS Topic -> SQS Queue -> lambda.

and set up a dead letter queue for the SQS queue.
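For reference, wiring the dead-letter queue is just an SQS queue attribute. A sketch below builds the `RedrivePolicy` value; the DLQ ARN and the receive count of 5 are placeholder assumptions, and the actual `set_queue_attributes` call is shown only in a comment:

```python
import json

# Hypothetical DLQ ARN; substitute your own.
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:my-dlq"

def redrive_policy(dlq_arn, max_receives=5):
    """Build the RedrivePolicy attribute for an SQS queue: after
    `max_receives` failed receives, SQS moves the message to the DLQ."""
    return json.dumps({"deadLetterTargetArn": dlq_arn,
                       "maxReceiveCount": max_receives})

# Real wiring would be roughly:
#   sqs.set_queue_attributes(QueueUrl=queue_url,
#                            Attributes={"RedrivePolicy": redrive_policy(DLQ_ARN)})
print(redrive_policy(DLQ_ARN))
```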

It doesn’t help with the reliability of S3 events themselves (and I’ve never seen that happen), but it does help if there is an error running your lambda.

Move the S3 object after processing it. As long as you move it to a bucket in the same region, there aren’t any transfer charges.

Then if you are really paranoid, you can have a timed lambda that checks the source S3 bucket periodically and manually sends SNS messages to the same topic to force processing.
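That periodic sweep can be sketched as follows. `publish` stands in for the SNS call (shown only in a comment), and all names and keys are assumptions; in a real Lambda, `list_objects_v2` on each bucket would supply the two key lists:

```python
def sweep(source_keys, processed_keys, publish):
    """Re-drive processing for any object still sitting in the source
    bucket, i.e. one that was never moved after processing."""
    missed = sorted(set(source_keys) - set(processed_keys))
    for key in missed:
        # Real version would publish to the same SNS topic, e.g.:
        #   sns.publish(TopicArn=topic_arn, Message=json.dumps(
        #       {"Records": [{"s3": {"object": {"key": key}}}]}))
        publish(key)
    return missed

sent = []
sweep(["a.json", "b.json"], ["a.json"], sent.append)
print(sent)  # -> ['b.json']
```

Because the downstream consumer already tolerates duplicates (at-least-once delivery), re-publishing an object that was in fact processed is harmless.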


Is there an open issue or documentation for this somewhere? We rely quite heavily on lambdas triggered by S3 events and have never experienced this.


> an open issue

If only; this is AWS. You pretty much need premium support just to tell them their products are broken. Before someone says "forums", the forums are a joke.



