Our product is not built on top of Bull. Instead, as briefly explained in the post, our scheduler is coded in Go and leverages Docker, PostgreSQL, and Redis.
Locally (without the `DEFER_TOKEN` environment variable), your functions run synchronously; apart from that, you get the same API behavior (argument serialization, execution ids, etc.).
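To make that concrete, here is a minimal sketch of the local fallback, not the real `@defer/client` implementation: without a token the wrapped function runs inline, but the call still goes through argument serialization and gets an execution id (names here are illustrative).

```typescript
import { randomUUID } from "node:crypto";

type Deferred<A extends unknown[], R> = (...args: A) => Promise<{ id: string; result: R }>;

function defer<A extends unknown[], R>(fn: (...args: A) => Promise<R> | R): Deferred<A, R> {
  return async (...args: A) => {
    // Arguments are round-tripped through JSON, as they would be when enqueued.
    const serialized = JSON.parse(JSON.stringify(args)) as A;
    const id = randomUUID(); // stand-in for a real execution id
    if (!process.env.DEFER_TOKEN) {
      // Local mode: execute synchronously, same call shape as production.
      return { id, result: await fn(...serialized) };
    }
    // With a token set, the real client would enqueue on the platform instead.
    throw new Error("remote enqueue not implemented in this sketch");
  };
}

// Usage: the call site is identical in local and hosted mode.
const importContacts = defer(async (userId: string) => `imported contacts for ${userId}`);
```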
This is fantastic! I was hoping it would "just work" and it seems like it does! Unless I missed it, I think the docs should explain this. It would definitely be a selling point for me compared with having to run a local server like with Temporal.
I can see your point; these issues sometimes get little consideration or interest. We created Defer to fix that gap and make it easier to understand and identify problems with your long-running tasks, not the other way around.
> It's very clearly infrastructure. Putting "zero infrastructure" in the title and using the same API as Lambda invoke etc. doesn't make it true.
> I also get double the risk of downtime, security as it's a third party running on top of AWS vs. just running on AWS.
You are right: we provide an infrastructure service. By "zero infrastructure" we mean that you don't have to build or manage your own.
Our service could run on a platform other than AWS, though, as we don't rely on AWS-specific services (e.g., Lambda, SQS). Of course, like any other cloud or on-premises service, we can have downtime.
> The API looks nice - but no mention of typescript at all in this post or the website, so presumably type-safety isn't the thing.
Glad to hear this. Although we don't mention it, our client is written in TypeScript. If you want to know more, you can check out the code: https://github.com/defer-run/defer.client.
I do wish people had less of a tendency to be shy about TypeScript in their product documentation. It's a selling point, show it off! Examples end up being a little more verbose, but they actually end up being useful for people who are writing TypeScript (probably a closely overlapping group with those who care about their background jobs not disappearing).
Often you want to specify your own unique id based on some property of the job, like a transaction reference, or just a unique combination of parameters. This means you can later refer to it without having to store the reference somewhere. Is this possible?
Our unique IDs follow the KSUID specification. In addition, we are soon releasing a tags feature, which will let you attach your own references to your Defer executions and identify them your way.
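For readers unfamiliar with KSUIDs, here is a sketch of the layout the specification defines: a 4-byte big-endian timestamp (seconds since the KSUID epoch, 2014-05-13) followed by 16 random bytes, base62-encoded to a fixed 27 characters, so ids generated later sort lexicographically after earlier ones. This is an illustration of the format, not Defer's generator.

```typescript
import { randomBytes } from "node:crypto";

const KSUID_EPOCH = 1_400_000_000; // seconds; 2014-05-13T16:53:20Z
const ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

function ksuid(): string {
  const ts = Math.floor(Date.now() / 1000) - KSUID_EPOCH;
  // 20 bytes total: 4-byte big-endian timestamp + 16 random payload bytes.
  const buf = Buffer.concat([
    Buffer.from([(ts >>> 24) & 0xff, (ts >>> 16) & 0xff, (ts >>> 8) & 0xff, ts & 0xff]),
    randomBytes(16),
  ]);
  // Base62-encode the 20-byte value, left-padded to the fixed 27-char width.
  let n = BigInt("0x" + buf.toString("hex"));
  let out = "";
  while (n > 0n) {
    out = ALPHABET[Number(n % 62n)] + out;
    n /= 62n;
  }
  return out.padStart(27, "0");
}
```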
Just ideally the dev experience should be `result = await somethingThatMayHaveBeenCalledBefore(id=txnref, params)`
You shouldn't need to first do a check to see if the task already exists using a different api, and then choose whether or not to run a new task.
At least this is my preference, a-la Durable Objects.
If you don't specify a custom unique id when calling the task, it would then be treated as a task that can (and would make sense to) be run multiple times (i.e., not idempotent).
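The call pattern this comment asks for could be sketched like this. To be clear, this is the commenter's wished-for API with an in-memory map standing in for the scheduler's state, not an existing Defer feature; all names are illustrative.

```typescript
type Execution<R> = { id: string; result: Promise<R> };
const executions = new Map<string, Execution<unknown>>();

// A repeated call with the same caller-chosen key (e.g. a transaction
// reference) returns the earlier execution instead of starting a new one.
function callIdempotent<A extends unknown[], R>(
  key: string,
  fn: (...args: A) => Promise<R>,
  ...args: A
): Execution<R> {
  const existing = executions.get(key);
  if (existing) return existing as Execution<R>; // same key: reuse prior run
  const exec: Execution<R> = { id: key, result: fn(...args) };
  executions.set(key, exec);
  return exec;
}

// Usage: both calls with the same transaction reference share one execution.
let runs = 0;
const charge = async (amount: number) => { runs += 1; return amount; };
const first = callIdempotent("txn_123", charge, 42);
const second = callIdempotent("txn_123", charge, 42);
```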
> I only skimmed through the landing page so maybe I missed it, but the value proposition isn't clear to me.
> If you're going to `await` for the contacts import to finish anyway, what's the advantage of separating the import logic from your main API? It's blocked, so might as well be part of the same service, no?
> I could see maybe if the API returned right away with a pointer the user can later poll for task progress, but it doesn't seem like this is the case?
As you suggest, the API returns right away with a pointer that the user can later poll to get the function result. Also, `await`ing the call ensures the function has been enqueued on our system.
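The enqueue-then-poll flow described above can be sketched as follows, with an in-memory store standing in for the Defer platform (function and type names here are illustrative, not the real client API).

```typescript
import { randomUUID } from "node:crypto";

type State = "queued" | "succeeded";
const store = new Map<string, { state: State; result?: unknown }>();

// `await enqueue(...)` resolves as soon as the job is accepted,
// returning only a pointer (the execution id), not the job's result.
async function enqueue<R>(fn: () => Promise<R>): Promise<{ id: string }> {
  const id = randomUUID();
  store.set(id, { state: "queued" });
  fn().then((result) => store.set(id, { state: "succeeded", result })); // runs in background
  return { id };
}

// The caller polls with the id until the execution finishes.
async function getResult<R>(id: string): Promise<R> {
  for (;;) {
    const exec = store.get(id);
    if (exec?.state === "succeeded") return exec.result as R;
    await new Promise((r) => setTimeout(r, 10)); // poll interval
  }
}
```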
> Side note: I like this type of web design, is it an in-house job or did you hire someone external?
Happy to hear this! We are working with a friend who is a professional designer.
> Is this CronAAS suitable for sensitive workloads?

We use AWS KMS to perform data-at-rest encryption on all data we store. For sensitive data (e.g., GitHub tokens, secrets), we perform a second encryption pass with a symmetric PGP key before storing it.
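The second, application-level encryption pass for sensitive fields can be sketched like this. AES-256-GCM via Node's `crypto` module is used here as an illustrative stand-in for the symmetric PGP key mentioned above; KMS-backed at-rest encryption would still apply underneath.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

const key = randomBytes(32); // in practice, a managed secret, not generated per run

function sealSecret(plaintext: string): string {
  const iv = randomBytes(12); // fresh nonce per value
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store iv + auth tag + ciphertext together as one opaque column value.
  return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
}

function openSecret(sealed: string): string {
  const buf = Buffer.from(sealed, "base64");
  const decipher = createDecipheriv("aes-256-gcm", key, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28)); // GCM tag authenticates the ciphertext
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
}
```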