Deno Queues (deno.com)
351 points by 0xedb on Sept 27, 2023 | 165 comments



I dug into the internals of the local, SQLite version of this just now and wrote up some notes here: https://til.simonwillison.net/deno/deno-kv#user-content-deno...

The most interesting detail is probably the schema they're using for that:

    CREATE TABLE queue (
      ts integer not null,
      id text not null,
      data blob not null,
      backoff_schedule text not null,
      keys_if_undelivered blob not null,
      primary key (ts, id)
    );
    CREATE TABLE queue_running(
      deadline integer not null,
      id text not null,
      data blob not null,
      backoff_schedule text not null,
      keys_if_undelivered blob not null,
      primary key (deadline, id)
    );
    CREATE INDEX kv_expiration_ms_idx on kv (expiration_ms);


Is it not normally bad practice to have 2 tables with basically identical fields and move rows between them?

Isn't that exactly what indexes were designed for?


If one table had millions of rows and the other only a few dozen, and you are querying the few dozen often, then it could make sense to have two tables. Different databases treat indexes with sparse data differently.


Yeah, that seems like the reason for this design decision here. The queue_running table sits empty the majority of the time, very occasionally gaining a row or two just while messages are being processed.


In Postgres this effect is huge since there aren't primary indexes. All tuples for a table get mixed up together in the heap, so a large table will tend to have only 1 hot tuple per page. Additionally, every update and delete leaves a dead tuple to implement MVCC, and these are much easier to clean up with VACUUM on a small table.

I'm not as familiar with sqlite, but it might have a similar problem in this case since the primary clustered index is on rowid and not a value that correlates with whether the queue item is running.


Partial indexes solve exactly this problem. You add a WHERE condition that restricts the index to a subset of the whole table.
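
To illustrate, here is a hypothetical single-table variant (not Deno's actual schema): a `running` flag plus a partial index that only contains in-flight rows, sketched with the third-party deno.land/x/sqlite module:

    import { DB } from "https://deno.land/x/sqlite/mod.ts";

    const db = new DB("queue.db");
    // Made-up single-table design: a `running` flag instead of a second
    // table, plus a partial index covering only the rows being processed.
    db.execute(`
      CREATE TABLE IF NOT EXISTS queue (
        ts integer not null,
        id text not null,
        data blob not null,
        running integer not null default 0,
        primary key (ts, id)
      );
      CREATE INDEX IF NOT EXISTS queue_running_idx
        ON queue (ts) WHERE running = 1;
    `);
    db.close();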


Partial indexes are still slower: the database needs to scan the index and then look up the data, and the data is less likely to appear in the same pages.


The one table model is exactly what "delayed_jobs" for Ruby does (at least with Postgres). Furthermore there's no broker; each worker does its own query for the next job/thing/action to take on, so as you scale with more workers it can get really contentious on the one table.


Well, many SQL databases, although not SQLite, have a table partitioning feature which does exactly this. The main benefit is that one can configure the tablespace (in PostgreSQL terms; essentially the drives where data is stored) and storage parameters separately for each partition.

Indexes are just the tip of the iceberg (though sometimes just using partial indexes may do wonders); it's the ability to tweak stuff like fillfactor for a frequently updated small portion of the table (e.g. active sessions vs archived sessions) that makes a lot of the difference.


Materialized views handle this scenario, but sqlite doesn't have them


For a background action/job use case though, how often would you have to re-materialize the view?


> I find this particularly interesting in terms of open source business models: they're baking a core feature into their framework which their SaaS platform is uniquely positioned to offer as a global-scale upgrade.

That’s quite novel, but I also find it a bit unnerving. They gotta commercialize of course, but they didn’t have to go fully closed. They could’ve just used a different license (BSL, PolyForm..) for the scaling layer.


I think what Deno is aiming for here is actually very forward thinking.

I'm using Go for the first time this year and some aspects of the language are very C-like. Among the things that aren't C-like (e.g. the GC allowing one to return what look like stack-allocated pointers from functions) there is the obvious inclusion of things like `map[string]string`. I bring this up because it struck me that, inventing a language in 2023+, it would seem almost insane not to have built-in syntax for the language to handle map types.

And so it seems logical that a web-server focused eco-system would start to garner libraries and even language syntax (maybe one day) for the kind of primitives we frequently use on web-servers. I mean, I can't even recall the last time I worked on a distributed web server that didn't have a KV store as a cache, or a locking mechanism, or even an ad-hoc queue. A distributed system without a KV store feels vaguely isomorphic to a computer language without a map type. It is such a Swiss army knife kind of technology.

One potential issue is that Deno is going alone down this path. Currently, I don't feel confident that the new features they are adding will be available on competing platforms. Even if they open source the API (it is on a `Deno` namespace), I'm not sure it will just work on AWS if I wanted to switch out FoundationDB for e.g. Redis.

For that reason, I feel I want to avoid Deno. Even if the syntax and exposed features are really attractive, I'm worried about becoming locked in and then having to do a lot of surgery on the code to make it deployable on multiple cloud infrastructures. That is a requirement from a lot of clients. E.g. maybe I want to sell to Oracle or Salesforce one day but they mandate I have to run my systems on-prem. I'm now on the hook trying to figure out how to adapt whatever available KV store they have to the features I'm using from the `Deno` package.

It is a double edged sword. Maybe they will succeed in pushing forward this vision to a broader audience. For now I'll probably remain cautious.


In this case, they've also documented the remote connection protocol: "KV Connect" https://github.com/denoland/deno/tree/main/ext/kv#kv-connect

I kicked the tires on this with a pure TS implementation of the protocol called kv-connect-kit that gives you the KV client API in any JavaScript runtime (including Cloudflare Workers, which does not have anything Deno-namespace related)

- github: https://github.com/skymethod/kv-connect-kit

- npm: https://www.npmjs.com/package/kv-connect-kit

- deno/x: https://deno.land/x/kv_connect_kit

- demo: https://keyspace.deno.dev/

The protocol seems to work as described on the tin, and it would be pretty straightforward to write another backend


This library looks really cool, love the idea of unifying the API across various envs!


I disagree. Tying a language runtime to a specific KV interface which is tied to a specific hosted service is the opposite of forward thinking. In fact the tech industry has made a lot of progress away from vendor locked-in stacks, and this just reminds me of those.

What is really the difference between Deno.KV being shipped as part of the runtime vs adding an `import KV` statement at the top of the file (which could come from Deno or wherever else you want)?

And what are the chances that the Deno team is going to ship bindings for any language besides their own?

If Google started adding Google Cloud specific primitives natively to Go would you call that forward thinking as well?


> Tying a language runtime to a specific KV interface which is tied to a specific hosted service is the opposite of forward thinking.

This is not the case. The Deno runtime itself is not tied to the Deno Deploy hosting service. The KV feature in the Deno runtime can be used without the hosting service.

You can read the details about how Deno KV works in the Deno runtime here: https://til.simonwillison.net/deno/deno-kv (as has been posted in other comments)
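
For example, this minimal sketch runs entirely against a local SQLite file, with no Deploy account involved (the file name is arbitrary; at the time of writing KV still sits behind the --unstable flag):

    // deno run --unstable kv_local.ts
    const kv = await Deno.openKv("local.db"); // backed by SQLite on disk

    await kv.set(["users", "alice"], { admin: true });
    const entry = await kv.get(["users", "alice"]);
    console.log(entry.value); // { admin: true }

    kv.close();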


> The KV feature in the Deno runtime can be used without the hosting service.

But one writes to FoundationDB, and the other writes to a SQLite file. You wouldn't be able to self-host an app written for Deno Deploy and have it work out of the box.

Are there any plans to open source the KV backend so that people could host their own KV databases? Now that you can connect to remote Kv databases, I suppose someone could implement their own?


Self-hosting should work with a change of configuration, no?

As someone mentioned elsewhere, they have documented the protocol, so yes you could reimplement your own remote KV store.


This is what Java did: the JCP was created, and APIs were invented to be implemented by Tomcat, JBoss, or fill in your favorite here. Then you configure your actual instance; in the past this was done with beautiful UIs, a total waste of time considering we're in the era of YAML (FBFW?) configuration.


This is not true; you cannot run your own FoundationDB server and use the KV service without reimplementing it yourself on FDB


This instantly reminded me of Next.js, which is open source but has a special build format for serverless environments.

The 1st party implementation is closed source: 3rd parties start on the back foot trying to implement alternatives and have to keep up with a 1st party that can move in lockstep.

And sure enough, like every other time I see this kind of behavior: Deno was invested in by the CEO of Vercel.

"Javascript is taken over by venture capital" wasn't on my 2023 bingo.


Vercel needs to stop with this bullshit. It is straight up predatory ”open” source. Like a trapper’s cage, there’s a convenient, tasty bait and then it’s too late.


Is it too cynical to say this might be a lesson devs need to learn the hard way?

Right now the JS community has whipped themselves into a frenzy of building on VC-backed technology.

- They refuse to acknowledge that the loudest voices in the room are openly sponsored and invested in by the same VCs who own the companies behind said tech

- They see no issue with a lack of diversity in implementations, instead settling for "it's a standard". Of course, defining a standard without a healthy variety of implementations means you end up with standards that don't benefit from a wide range of voices until well after they land (see RSC)

At the end of the day, those two alone are a pretty harsh combo: A VC-backed network effect machine built across multiple brands, and high technical costs to building something that meets the collection of standards.

I don't think anyone but FAANG can really compete with that without also getting VC dollars, thus reinforcing the loop.


You can build a Next.js app and run it on a docker container or regular linux host almost anywhere. Vercel has some nice continuous deployment stuff built-in but I'm not sure how a Next.js app is locked into it at all.


This is often repeated but misguided.

Next.js and Vercel heavily push serverless deployment: v13 reworked the built-in API support to leverage Web Standards, which discarded interop with the much larger server ecosystem in order to enable better edge support.

Serverless deploys require providers to support the Next.js Build API: https://nextjs.org/docs/pages/building-your-application/depl...

There is no open implementation of this API (unlike Remix for example)

This means projects like Open Next start from 0: https://open-next.js.org/

The end result is significantly fractured support for a headline feature of the framework and a lot of unnecessary pain (https://betterprogramming.pub/beware-of-next-js-on-aws-ampli...) trying to leverage it on any non-Vercel platform.


Yeah on the same note I developed a moderately complex app on Next but I hit a roadblock when I needed background job support, which is not natively supported (or at least at the time wasn't) on Vercel/other Next platforms and so it was never a priority for Next. Pushing serverless so hard also made deployments janky and production bugs weird when you tried to use things not supported by the underlying platform, AWS (don't remember the details now, but Node version was one of those).


Yep, part of why I looked for alternatives and settled on Remix. It's never sat right with me


This sentiment has been repeated in a few comments. But, why can’t the deno deploy implementation be reimplemented, by yourself, by running a foundationDB server with mvSQLite[1]? That shouldn’t require any changes to the code.

[1] https://github.com/losfair/mvsqlite


That is not the same thing, still. Like you said it would be a reimplementation, not the same thing.


> What is really the difference between Deno.KV being shipped as a standard library vs adding an `import KVService` statement at the top of the file?

The same difference as between `#include "myStringMap.h"` in C and `map[string]string` in Go. There is some advantage to everyone in an ecosystem using the same primitives. That kind of language-level standardization goes a long way.

For example, it might increase the ecosystem of available distributed system libraries that only need KV stores. If I can compose several libraries, all of which use the same underlying KV interface and implementation - I can imagine that might result in some interesting use-cases.

As for the rest of your comment, I would humbly ask if you read the entirety of my comment? We seem to agree that as long as Deno KV is locked into their cloud that people should be very wary of lock in. And I myself will likely avoid it for the time being until the dust settles. There is some chance the rest of the community will just come along and fill in the gaps on popular cloud platforms. Or maybe they won't. Time will tell.


> difference between `#include "myStringMap.h"` in C and `map[string]string` in Go.

Except one is a language feature with custom syntax and the other could be implemented as a library just the same.

Not everyone in the ecosystem is going to use this, because it's not forced on them through syntax, and especially because it's inside Deno and not Node, so barely anyone at all will use it.


> If Google started adding Google Cloud specific primitives natively to Go would you call that forward thinking as well?

Go actually ships with a quite forward thinking SQL interface. It's an abstract interface over a DB, and you just import the "driver" that powers it. The driver conforms to a standard interface, so all of them behave roughly the same.

I think this is what everyone wants from Deno/etc - why can't there also be a KV interface that's universal, or a Queue interface that's universal?

People attempted this with Go [1]; it tries to offer the same nice experience as the SQL logic, but it never seemed to gain traction.

https://gocloud.dev
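
To make the idea concrete, here is a hypothetical sketch of what a driver-style KV interface could look like in TypeScript (none of these names exist anywhere; it's just the database/sql pattern transplanted):

    // The interface application code depends on:
    interface KvDriver {
      get(key: string): Promise<Uint8Array | null>;
      set(key: string, value: Uint8Array): Promise<void>;
      delete(key: string): Promise<void>;
    }

    // Application code never names Redis, DynamoDB, or FoundationDB directly;
    // a concrete driver is injected at startup, as with Go's sql.Open("driver").
    async function cacheSession(kv: KvDriver, id: string, data: Uint8Array) {
      await kv.set(`session:${id}`, data);
    }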


> Go actually ships with a quite forward thinking SQL interface

SQL is low hanging fruit in this regard, because you just need to standardize the lowest common denominator flavors of SQL types for deserialization and then it's just juggling SQL queries around.

JDBC for Java does the same as database/sql and it's from 1997. ODBC is from 1992.


I don't think GP is saying that this is forward thinking, revolutionary level stuff, but rather that it's generally speaking a better thing to do than ship specific implementations that wall people into your solutions.


If they want to have these primitives then I’d prefer to have a “universal” queue API that would work with SQS, Kafka, etc. with some magic. But the developer API could be just deno.Queue or whatever


Why try to standardize the interface at all? This isn't the job of a programming language, and there is no way it can anticipate all the possible use cases. Let each service publish their own bindings, and a programmer can consume whichever ones they want.


Deno is not a programming language though. It's a JavaScript (TypeScript) runtime.

Deno is basically trying to be an infrastructure framework for JavaScript.


> Tying a language runtime to a specific KV interface which is tied to a specific hosted service is the opposite of forward thinking.

Yeah, I'm hella confused.

Isn't Deno the Node.js replacement?

But now it's a database as well?

It's jumped the shark for sure.


I think people are hung up on a distinction between language and runtime that isn't that valuable and maybe doesn't even reflect normal use.

For example erlang ships a persistent KV store, queue, relational database and much more as part of its standard library. I've never heard anyone complain about this or wish it were otherwise.


OTP isn't part of a cloud solution and there is zero lock-in to any kind of commercial offering as part of the batteries that are included in it.


This. It could have been an npm package; why build it directly into Deno?


I love Deno but I'm cautious about adopting it for the same reasons. It seems like Deno can only win if they fully support their hosting competitors as well. At the same time, they will need to be as good at hosting as, or better than, the industry leaders. Alternatively Deno could decide to drop their own hosting and offer a multi-cloud solution with a slim margin over their hosting partner adapter. With this path, Deno wouldn't have to directly compete as a hosting provider.


Deno is just trying to compete with other JS/TS SaaS with this. Vercel/NextJS has a KV service now; Cloudflare has it; in other cases it's Firebase.

I contend that this is not actually forward thinking, but monetising Deno as a SaaS, which every man and his dog is doing in the JS world lately with their 'cloud' offerings (reselling AWS with a framework). That's why there is a pricing page attached to this.

If Deno's KV depends on FoundationDB then they're hardly going to build adapters over other databases - switching DB tech is always a massive ordeal because they all have different use-cases and performance characteristics.


> e.g. GC allowing one to return what look like stack allocated pointers from functions

I think this is due to having escape analysis and SSA rather than having a GC.


The Postgres connector actually supports encoding and decoding map[string]interface{} to the json and jsonb data types, quite cool.


> Leveraging public cloud infrastructure has traditionally demanded sifting through layers of boilerplate code and intricate configurations, often monopolizing a significant chunk of the developer’s time and energy.

I don't buy this line of reasoning. At the end of the day we are building infrastructure that needs to be reliable. Spending 30 minutes to set up an SQS queue (proven technology) doesn't sound as bad as putting all my trust in a toy queue that Deno built "on top of SQLite / FoundationDB". Is the 30 minute setup cost and added "developer experience" really worth the risk? How often are you setting up queues anyway.


I think this depends on who you are.

As a hobbyist programmer, I don't use the big providers like AWS and Google because they seem rather complex? Maybe it's not so bad if you're used to them.

I also like to make each project independent, as its own repo on GitHub. Ideally, a web app would be easy for anyone else to launch, as their own independent web app, using a separate domain name, because I don't want to be responsible for their data.

(This is sort of the Sandstorm use case.)

"Try it out locally and get a Deno Deploy account if you want to use it for real" seem like reasonable install instructions?


If Deno only suits hobbyist programmers, it's failed. To pick up steam it's going to have to pull people away from AWS and Google who already know how to use them.

As for making projects independent, so do I, but that's simple in other clouds too with infra-as-code or one of the various deployment frameworks (Fly lets you do this, AWS Copilot does this, Terraform, Serverless)


I think you might overestimate how much other people know about cloud infrastructure. I am very out-of-date and only vaguely know what these things do.

I looked at Fly and they have nice docs and interesting ideas, but they’re also clear that they don’t do “fully managed” databases and I don’t want to be a DBA.

I know of AWS as a huge pile of complexity that I’m not sure I want to get into? Using it directly seems too low-level for me

(I’ve used Digital Ocean and it seems more my kind of thing, but it doesn’t scale down to zero for a website that gets no traffic.)

Looking at Terraform, it’s not obviously about solving any problem I care about, and wasn’t there a huge controversy about them?

Serverless is a buzzword. Isn’t Deno sort of serverless too?

I think of Deno Deploy as a step up from Netlify, which is fine for static websites. Or maybe a reincarnation of App Engine, which I quite liked back when it launched, but it seems to have lost its way.


This is the Serverless framework I'm talking about: https://www.serverless.com/

but fair enough.

I do think the simplicity of deno will be a trap though. The benefit of a mature framework is you don't have to get lost in the weeds on obscure bugs or functionality with slightly niche use cases.


Be careful, simplicity seekers! Fly.io is famous for outages; see the Vercel/Next.js v13 and app router saga. Simple is the stuff that hasn't really changed for 10 years: C# + SQL Server, Node.js + Postgres, etc. A bit more work at first but well worth it. It will pay off after a good 7 days, when you hit the first snag of the new tech.


Many people don't have experience with AWS or SQS, and even those that do will still need to think about it. Deno is taking a batteries-included approach, which means the dev can just import something and go. If someone provided equivalent libraries for the cloud then I'm sure people would adopt them. In the case of Deno, I'm sure they've identified a common problem many devs in their ecosystem have and are tackling it by bundling in a good solution.


If Deno is only going after amateur devs by implementing simplistic features like this, then it will remain an amateur product. AWS/SQS may seem complex to some, but they are very powerful. Maybe Deno is doing what Apple did in the 80's by putting free Apple computers in classrooms - try to get new users hooked into their system and then they'll use it for life. Anyone doing anything serious with cloud computing is already using AWS, or should be. I would rather put in the time to learn AWS (which is entirely free in many small use cases), so I won't build stuff in a toy system and then later realize I need something better.


Devs who know they need an event queue but are not technical enough to set one up in SQS sounds like a very niche market to me.


AWS was a toy in 2006. No one in their right mind was switching to the cloud back then. We laughed at it. It took years to mature. I'd say things that look like toys today will be the dominant platforms of the future because generationally a 20 something person out of college is more likely to adopt it than use the overwhelming and complex AWS. Barriers to entry are something you have to consider. Meaning, many of us grew up in an era of transition from bare metal to cloud and we became early adopters of it. These new tools are the same for a younger generation.


> overwhelming and complex AWS

Any other systems a "20 something person" would cobble together for an app of any complexity from "toys" of today would be a complex web of half solutions. They'd be trying to do the same stuff that can be done within AWS, and often what they cobble together would be worse off having it made of disparate components from maybe dozens of different vendors. I can't see how it's any easier to connect all those dots than it is to do it within AWS. If you're new and doing simple stuff on simple systems that's all well and good, but don't expect it to scale easily or at all, and if you try you're in for a whole lot of dev-ops and networking and other bullshit just to get things to talk to each other. There's a lot less of that in AWS. Usually you just copy an ARN and paste it into another box, and the things are connected.

>Barriers to entry are something you have to consider.

There's no barrier to entry for AWS, unless you can't afford $0 per month. Anyone can sign up and use free services, and they are very well documented. There's tutorials galore, probably more than any other current toy platform(s). There's tools and tooling and all kinds of support out there for it. But sure, some toy platform might be more fun to use for your hello-world task tracking app if you aren't building anything serious.


> Anyone doing anything serious with cloud computing is already using AWS, or should be. I would rather put in the time to learn AWS (which is entirely free in many small use cases), so I won't build stuff in a toy system and then later realize I need something better.

So ridiculous.


Hot take: I don't think that people should even use queues any more. It's like using raw HTTP2, or managing threads.

Use a workflow engine which abstracts queues for you so you can "just write code" without managing jobs/schedules/state/retries.

https://www.inngest.com/blog/how-durable-workflow-engines-wo...

(disclaimer: I'm biased as I'm an author of a workflow engine)


In the example in your post, the queue is abstracted away in some sense, but you aren't just "writing code". There's step.run(), step.sleep(), step.sleepUntil() sprinkled throughout. I'd say in something like Celery, which is explicitly about running jobs asynchronously backed by a message broker/queue, you really do just write code, but then call the function with .s().apply_async() and so on. Now, I'm not saying you're wrong that we ought to abstract queues away, just pointing out that the abstraction leaks in your workflow example. Happy to hear where/how I'm wrong on this.


It leaks in that the entire function becomes declarative, and state is colocated to a single function. You don't have multiple jobs for each attempt, passing indexes into each job queue. You don't have to worry about enqueueing each "step" as a separate job.

The code becomes "wait until this time", vs "enqueue this function with this state to run at this time via this message broker, which may enqueue other jobs in ways you can't see".

There's no real silver bullet that will let you run code in the future without specifying "when" that code should run — but workflow engines are by far the more productive of the two.
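
As a generic sketch of that difference (a hypothetical engine API, not any specific vendor's): the whole workflow is one function, and the engine checkpoints each named step, so a crash or a long sleep resumes from the last completed step instead of you wiring up separate jobs by hand.

    interface Step {
      run<T>(name: string, fn: () => Promise<T>): Promise<T>;
      sleepUntil(name: string, until: Date): Promise<void>;
    }

    // Stubs standing in for real application code:
    declare function loadUser(id: string): Promise<{ email: string; trialEnd: Date }>;
    declare function sendReminder(email: string): Promise<void>;

    // One declarative function instead of several queued jobs passing state around:
    async function trialReminder(step: Step, userId: string) {
      const user = await step.run("load-user", () => loadUser(userId));
      await step.sleepUntil("wait-for-trial-end", user.trialEnd);
      await step.run("send-reminder", () => sendReminder(user.email));
    }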


Check out temporal.io, which fully abstracts this. Disclaimer: I'm one of the founders.


hah, I was thinking about temporal as I was writing this. I have played with temporal pretty extensively.


By that logic no one should ever launch anything new, because it is a "toy project" by default and only AWS is proven?


How do you know it's a toy queue?


Presumably everything AWS offers is way more battle-hardened and with stricter SLOs just due to the sheer volume of their customers.

But people are way too afraid of simple tech these days. If there aren’t dedicated QA and Ops teams, at least 2k GitHub stars, it’s not web-scale™ and meant to run on a fleet of 726 servers at minimum, it’s not to be trusted!


If it doesn't have 1k GitHub stars it's either trivial or it has high risk of bugs.


I don't recall SQS taking any time to set up queues, at least when using Celery (Python task queue). IIRC you just name it in your code and use it.


That's just at the codebase end of it.

But at the infra end of it, it's a different story. AWS is suited towards a very different scale of development effort than Deno currently is. The effort required from scratch to securely and reliably get to a point where a dev can just name a queue is substantial in AWS. You really need account hierarchies, guard rails, roles, permissions, delegated IaC etc etc in place to be able to do that - otherwise it eventually gets out of hand.

AWS is geared towards larger dev teams or groups of dev teams, while (to me at least without using it) Deno seems far more oriented at the smaller scale set and forget PaaS oriented teams - eg Heroku refugees etc. Those teams could very well outgrow Deno and need AWS at some stage though.


It used to be like that. These days I often see 1-3 person teams just copying around some terraform config they've had forever, and all of their AWS setup is done.


You can use libraries like SST and make it a few lines of code.


To extend this into more productive territory, there is no reason I should have to use either the Deno version or AWS; it should be an interface that can be implemented so that I can choose whatever makes sense, including my own implementation.


> How often are you setting up queues anyway.

Very infrequently. Even more infrequently as we move to IaC and use Terraform or even CloudFormation.


> at least once semantics.

> user code.

A past life has taught me that users will never properly understand at-least-once semantics. Anytime you redeliver messages, you will get a flurry of user complaints and breakage.

Either you do the impossible and invent a way to do exactly-once semantics. Or you should always redeliver 0.1% of all messages, just so that users don't come to depend on messages being delivered once.


Early SQS basically did this. I'm not sure if this was intentional or not but it quickly taught me to respect redelivery.


If messages need to be idempotent, I definitely recommend creating integration tests to ensure this from the sending side :-)
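
With Deno KV queues specifically, one sketch of consumer-side idempotency (using the documented atomic API; the message shape here is made up) is to record each message ID on first delivery and drop redeliveries:

    const kv = await Deno.openKv();

    kv.listenQueue(async (msg) => {
      const { id, payload } = msg as { id: string; payload: unknown };
      const seenKey = ["processed", id];
      const res = await kv.atomic()
        .check({ key: seenKey, versionstamp: null }) // only if never seen before
        .set(seenKey, true)
        .commit();
      if (!res.ok) return; // redelivery: already handled, drop it
      console.log("processing", payload); // the actual work goes here
    });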


I'm currently using Deno deploy and found it to be fantastically performant and dumb simple for my lone wolf project. I'm experienced in AWS development in larger teams and it is nice to see a move away from complexity for a change where you can easily set something up without having to think about setting it up at all. The DNS stuff was just dead simple as well and automatic ssl certs was super nice. I have 0 complaints for what this is trying to be and am excited for the road map.


Unless I'm missing something, it looks like each Deno.openKv() instance only gets a single queue.

For the local version you could get multiple queues by calling Deno.openKv("db-2.db") with different SQLite file paths each time, but that feels like a lot of overhead for a pretty common need.

I guess this is a Deno architectural style thing - maybe when you build complex apps on Deno it's expected that you'll have a microservice style architecture where lots of different scripts work together, each of them with their own KV store and hence their own queue?


You are right, there's only a single queue at the moment.

One of the core devs has confirmed this on their discord: https://discord.com/channels/684898665143206084/115671428253...

Quoting here:

> Correct. Currently a single queue is supported. You could multiplex multiple types of messages on the single queue though.
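
That multiplexing could look something like this (a sketch based on the documented enqueue/listenQueue API; the `channel` field is just a convention, not anything Deno defines):

    const kv = await Deno.openKv();

    // Producers tag each message with the logical queue it belongs to:
    await kv.enqueue({ channel: "emails", to: "a@example.com" });
    await kv.enqueue({ channel: "webhooks", url: "https://example.com/hook" });

    // The single consumer dispatches on that tag:
    kv.listenQueue(async (msg) => {
      switch ((msg as { channel: string }).channel) {
        case "emails": /* send the email */ break;
        case "webhooks": /* deliver the webhook */ break;
      }
    });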


Pricing?

I thought Deno was some sort of node.js replacement. What am I missing and can I use this either locally and/or self hosted without paying for it?


This is the same trick as the rest of Deno KV: the open source version uses SQLite, but when you deploy to their cloud product you get Foundation DB (proprietary) instead:

> Since Queues are built on Deno KV, it uses SQLite when running locally and FoundationDB when running on Deno Deploy for maximum availability and throughput.

I wrote a bit more about this pattern here: https://til.simonwillison.net/deno/deno-kv


Just as a side note, not relevant to anything in particular: FoundationDB itself is open source (https://www.foundationdb.org/), but the integration layer used by Deno to make it the backend for Deno KV is not.

Although, from reading the Foundation DB docs and checking the Deno KV API, I honestly suspect it is a thin layer.

Self-hosting FDB is somewhat inscrutable though, so their value add is in not having to handle infrastructure while being backed by FDB.


Simon, I love reading your TILs. What's your process? Is there a TIL on writing TILs? :)


Partly it's about building habits around them. I watch out for any time I have to spend half an hour or more figuring something out - that's generally a sign that it's worth tidying up my notes into a TIL, since the internet is clearly missing that piece of information!

I keep very detailed notes on everything I'm doing already - either as a VS Code scratch document or Apple Notes or often a GitHub Issues thread. A lot of my TILs start by me pasting those notes into a Markdown file and tidying them up.


The article says that in development (locally) it uses SQLite. On deployment to Deno Deploy it is part of their cloud offering


(on the off chance the devs read this) I can see pain in the future for the currently excellent KV ergonomics around access control --- is the solution just "implement it in userland and don't write bugs" or is there anything planned?


User-level access control (like ACLs for authenticated users) or something else? Feel free to share longer form thoughts here and I can make sure we keep this concern in mind.


systems level - so if, for example, I can have two services or serverless functions, one of which can access the world and the other of which cannot see, for example, the PII tables.

(I'm not a production customer, and I haven't thought through how this ought to work in deno-deploy-land, I've just seen a lot of painful ACL set ups. But I also like knowing that lambda function foo written by the intern can't read every DB table)


I'm excited about the recent Jupyter support and these queues... Super cool stuff.

But I won't write servers in Deno and put them to production in my own servers. (I don't like serverless)

You see, Deno's (and Bun's) business model is built around serverless (Deno Deploy). And Deno Deploy is ANOTHER runtime.

It's clear with KV and queues... Locally and if you self host, you have a barely working version backed by another tech...

This means you're self hosting a version of Deno that's different from the majority of the Deno customer base.

How many people will use Deno to run servers on their own hardware? Since that number is low, I'd rather use Node.


I find deno to be very exciting. Viable business model, great ergonomics, glorious lack of bullshit configuration busywork.


I have tried to be excited about Deno, but just don't see it. What are they doing that hasn't already been done a hundred times in the past? Deno itself is essentially Node.js + automatic TS compilation (which would otherwise have been 1 extra config file). When the project launched they made a big deal about escaping from the messy NPM ecosystem, but then ended up having to turn around and add support for it, taking that differentiator away as well. Deno Deploy is the same as dozens of other Lambda-like services out there. Their KV store and now Queues is equivalent to Redis/LevelDB/RocksDB/Dynamo/SQS or countless similar options. I have yet to come across a single feature that actually sets Deno apart from the rest.

I could maybe see the appeal if you prefer to have a single company in charge of your language runtime + compute hosting + data storage, but I'd personally want to avoid that, especially when the company doesn't have a track record in it.


I think you're underestimating just how much tooling Deno has built-in. It's not just automatic TS compilation, but formatting, linting, testing, and benchmarking at the bare minimum. I use those for almost every project. Each of those would probably be at least another configuration file.

About Deno Deploy, I completely disagree with the analysis that it's just like any other lambda service. It's not just about easy deploys, the service itself is simply better. There are no cold starts and they don't just do that by keeping a vm up all the time. It's really magical. You should give it a second look.


> There are no cold starts and they don't just do that by keeping a vm up all the time. It's really magical.

Do you know how they're doing that, technically? Is it running on Cloudflare under the hood, or if not does it have a similar set of tradeoffs and benefits vs Lambda? Or is it a third completely different way with different tradeoffs/benefits?


I use Deno because I couldn't be bothered to figure out the Node ecosystem and how to actually wire together the Typescript compiler, package json, builders and bundlers, etc. I just do not care and the mess of tooling is frankly exhausting to wade through.

Deno Just Works for me. `deno run file.ts` and you are good to go.

Not to say it's been 100% smooth sailing; I have hit a few rough patches with deployment into my own infrastructure (some within Deno and its dependency management, some within libs like sqlite3) but overall I am happy to not deal with nodejs.


Even after you figure out TypeScript with Node, then you need to figure out sourcemaps, otherwise your stack traces won't be very useful


Deno is compatible with web code to a much deeper level than Node.js is. As the Deno website puts it: "Built with web standard APIs". This might not make a difference to most. Personally though, as someone who strives really hard to share code between frontend and backend, I'm excited for it. This is pretty much the only reason I see Deno eventually winning out over Node.js (in the long run).

I don't have the time to try to port any of my meaningful projects to it right now, but I look forward to the day when I can do so.


The single piece of extra browser compatibility I have found in Deno is that it supports the fetch API, and Node 18+ has that as well. Beyond that there is really no difference between the two.


Whilst I mostly agree (and Node has arguably been forced down the browser-compatible route, for stuff like fetch() and web streams), there is a bunch of other stuff that isn't in Node, or that Node has its own APIs for: alert/confirm for getting user input, Web Crypto, URLPattern, etc. You can see them all here: https://docs.deno.com/runtime/manual/runtime/web_platform_ap...
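
For instance, URLPattern is available globally in Deno the same way it is in Chromium; a quick sketch:

    // Web-standard URLPattern, no import needed in Deno:
    const pattern = new URLPattern({ pathname: "/books/:id" });
    const match = pattern.exec("https://example.com/books/42");
    console.log(match?.pathname.groups.id); // "42"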


With all those alternatives you need to spend time installing libraries, then for each environment you need to create the db and set up environment variables. You'll also probably want to figure out a local environment.

With deno all that setup/configuration is handled by deno. There’s obvious trade offs but I think what deno is doing is pretty cool.


I'm not that familiar with these alternatives. What's a good service provider you recommend for a persistent KV store?


- Redis

- ScyllaDB

- Riak

- Couchbase

- Cloudflare KV

- Many different AWS offerings (DynamoDB, ElastiCache)

- Similar alternatives from Google & Azure if you are in those ecosystems

Of course there's nothing unique about a KV store. You can use literally any SQL or noSQL service out there, or set up your own in any way you want.

They also all have the advantage that you will get bindings for every language, not just Deno.


What makes you believe that Deno has a viable business model?


I wonder how this differs from BroadcastChannel[1] which Deno already implements and DenoDeploy seems to support.

[1]: https://docs.deno.com/deploy/api/runtime-broadcast-channel


persistence?


Yea I think that's the answer. The queues are deeply integrated into a datastore so you have a trace of what happened and you can do it all atomically.


I don't know if Ryan and the rest of the Deno team are lurking here, but given "Anchored on the robust capabilities of FoundationDB" what I think would be absolutely fricking cool is to get full access to the underlying FoundationDB engine, which would surface the "model any data model you want the way you want it" capabilities of FoundationDB directly to Deno developers.

In case anyone wonders what this would make possible out of the box: https://apple.github.io/foundationdb/design-recipes.html


I actually do quite like the overall idea of having a unified API at the language level that a (configurable) runtime can provide implementations for. Sure, in some cases an API might not be easy to abstract over due to underlying concerns, but in a lot of cases it does work.

It does show a level of understanding by the language creator that they know how the language can and probably should be used - I like how KV defaults to SQLite locally and takes on new meaning in a hosted environment. I haven't seen it, but as long as you can actually override the behavior to use your own KV/queue technology (so long as it satisfies the interface), I see absolutely no issue with this.

There is a huge benefit where the code always looks the same for common things we all need to do. I wouldn't mind seeing other languages take an overall stab at this so there is some choice other than JavaScript.


Why not instructions on how to use it with FoundationDB so that we can host it on our company infrastructure?


I think you can add your own implementation to Deno.openKv so that you can use your own company's infra.


That requires knowledge of Rust and how Deno works internally; it would probably be far from production ready…


How do you think they make money?


It's MIT licensed, if anyone wanted to do that they'd just patch deno with it anyway.


Very timely... I'm almost finished porting WakaQ (my custom background task queue to replace Celery) from Python into TypeScript to power background tasks for a new Next.js website. I'm using T3 not Deno so I wouldn't have used Deno Queues, but it's a core part of every web app so makes sense they would build this into their stack.

Some questions:

* Looks very powerful, but at the core is it just one single queue? Does Deno.openKv() create a new queue or re-use the same queue every time it's called?

* Usually I need multiple queues of different priority, so when bottlenecks happen the highest priority jobs run first.

* Maybe they guarantee infinite worker capacity so you don't need queue priorities? No bottlenecks if you can pay for the jobs you enqueue?


Under the hood, does DenoKV implement an abstraction that specializes locally to SQLite and to Deno Deploy in the cloud? Or are DenoKV local and DenoKV cloud two separate products with a common public API?


Code-wise the API is the same.

When you use it locally, the database is only on your device; however, all Deno processes can access the same DB, so you can use it to pass data around, just like you can with localStorage, which Deno also supports.

When you use it in the cloud (deno deploy) then saves are replicated across regions.

Deploy is serverless.

KV is simply a wrapper around SQLite; with it you get atomic transactions that you wouldn't get with localStorage


When deployed to Deno Deploy, KV is a wrapper around FoundationDB, selected for its use in the distributed/edge environments that Deno operates in. When used locally, it's a wrapper around SQLite.

The API is standardized for both.


What happens to the mantra that "database as a queue is harmful"[1]? Of course Amazon teams internally use DynamoDB for all kinds of queues, which implies that we get to do a lot of things easily if we get a super robust storage solution. So, I guess the question is really whether FoundationDB can be used as a backend for queues.

[1] https://www.google.com/search?q=Use+database+as+a+queue+is+h...


I like how updating the data and enqueueing a message can be part of a single transaction. That's indeed pretty powerful.
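
For reference, a sketch of that based on the documented atomic API (the keys and message shape here are made up); the write and the enqueue either both commit or neither does:

    const kv = await Deno.openKv();

    const orderId = "order-123"; // placeholder
    const res = await kv.atomic()
      .set(["orders", orderId], { status: "created" })
      .enqueue({ channel: "order-created", orderId })
      .commit();
    console.log(res.ok); // false if the transaction conflicted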

A bit of an aside, but not sure we want at-most-once delivery for email.


Didn't the article say at-LEAST-once?


Yeah it did. My bad writing "at most". But the problem is with "at least once" (email sending cannot be made idempotent AFAIK)


Isn't Deno running locally on your self-hosted instance? How are they charging?

  Enqueuing a Message: Each enqueue action translates into a KV write operation.

  Receiving a Message: Every received message entails a KV write, and a single request charge.
I would be really curious to know. Are they only talking about Deno Deploy in the cloud, with FoundationDB? Is this "open core"?


> Leveraging public cloud infrastructure has traditionally demanded sifting through layers of boilerplate code and intricate configurations, often monopolizing a significant chunk of the developer’s time and energy. Our goal is to distill these intricacies into user-friendly primitives, enabling developers to design, refine, and launch their projects with unmatched speed.

I really like this


This is amazing syntax. If you are looking for a self-hostable version of this, that can run deno but also python, go, and bash, with primitives similar to airflow and more (retries, cache, suspend, approval steps), check out windmill: https://github.com/windmill-labs/windmill


Amazing syntax? Isn’t this standard JS chaining?


Exactly! It looks and feels like normal JS and hides all the complexity behind it, which is the hallmark of great syntax.


Perhaps "API design" would be more apt than syntax here


    With grand promises rode the ploy
    Venture capital, ahoy!
    We make a lovable product to capture audience
    Vendor lock them at the earliest convenience
    Hey look at our awesome developer experience
    Add ANSI and emojis and comments saying it’s genius
    Some next level tech, plebs just don’t get it
    Next round of funding, we’ve already spent it
    I took the big dough, got puppets and framed it
    R-O-I is bad though but layoffs gon’ fix it
    Never mind the drama *cough* are ya gon’ install it
    Run it, ship it, kill yourself with working on it
    Entangle your business with it
    Tell the client to deal with it
    Bring it to the big wig
    Get ’em to sign off on it
    Crown your own sh*t
    Dogfood is tasty, ain’t it


chat gpt?


    Human brains proudly present
    Real poetry? Grouchy resent!
    My rhyme is not gradient descent
    After all, it is barely nascent


Can we deploy both Deno KV and Queues on our own FoundationDB cluster?


I feel it's odd for a compiler/language toolchain to have a cloud offering. I understand the motivations, but 20% of the article was about their Deno Deploy product and how to calculate API costs...

I understand it's only when using their cloud, and there is a local implementation, but I just couldn't imagine using a feature in LLVM, for example, where if you were to upload the binary to a specific cloud it would use a different implementation that costs money behind the scenes.

It just feels odd to me. I guess it's kind of cool. Maybe different clouds can implement the underlying KV and then this becomes cross-cloud with different underlying benefits without changing the code. But I'm not entirely sure how I feel about this.


These features pay for Deno. If they don't exist, Deno doesn't exist.


I agree projects need funding and developers need to be paid...

Deno was released 4 years before Deno Deploy and existed just fine.


There is also a B2B game in play here by Deno. I'd imagine a FaaS JS/TS-only platform powered by the Deno runtime is attractive to a lot of already existing web services who want to expose the ability to write webhooks/event-based functions in their service offerings?

https://deno.com/blog/netlify-edge-functions-on-deno-deploy


On the topic of Deno's cloud deployment and Deno KV, does it offer a reasonable user story for caching? Or is that something you're expected to handle yourself (for now)?


Deno is amazing for backend.

Is there any full stack solution other than Fresh?


I never liked Agenda (Persisted delayed task queue using MongoDB). This example[1] using Deno Queues & KV is easy enough to be a replacement for Agenda.


How is this different than using RabbitMQ? Can anyone tell me what’s the benefit of choosing this one over some other message queues or even kafka?


Well, a couple things.

For one, this is just a simple queue with no routing capabilities. It's a much simpler tool.

For two, and more importantly, it's built into your application and introduces no additional operational dependency.

You could probably achieve a pretty similar API with a custom AMQP library, though!


I don't want my programming language to "implement" task queues and charge me for it. What is this

edit: people seem to be getting hung up on the "runtime" v "programming language" distinction. I'm not sure why--it's weird to me that this is "part of the language + runtime" at all. Clearly, reasonable people can disagree about that, but hopefully on points more substantive than pedantic

If tomorrow, someone started a for-profit company making a faster Python runtime and introduced features like this, it would be weird, and it would feel as pointless to me as Deno is


You're getting charged for the replication of Deno KV across all regions when you use Deno Deploy.

KV is free to use locally as I assume this will be too.


Sure, but genuine question: How easy it is to run my own infrastructure for Deno Queue, and, crucially, how can I know that it will still be as easy and transparent in 5 years from now when the VCs want to see some $$$. That the company developing Deno and the services that monetize Deno is the same is a major conflict of interest. And if history is of any guide, in the end economic incentives always win, no matter how good-hearted people are.


When imagining future scenarios, maybe don't get too fixated on one? Another possibility is that competitive service providers will implement the same services.

(It didn't really happen with App Engine, but this seems like a cleaner API?)


If Deno doesn't get adequate funding, the company developing Deno will not exist 5 years from now.


I guess I remembered it incorrectly? This looks pretty good:

> Deno KV databases are replicated across at least 6 data centers, spanning 3 regions (US, Europe, and Asia). Once a write operation is committed, its mutations are persistently stored in a minimum of two data centers within the primary region. Asynchronous replication typically transfers these mutations to the other two regions in under 10 seconds.

https://docs.deno.com/kv/manual/on_deploy


Last time I checked deno was a runtime not a programming language


'.deno' files are not drop-in compatible with any other TypeScript-esque runtime, which I think is a reasonable enough bar to say that it's a different programming language. And not just because of the standard library; Deno supports e.g. URL-based import paths, which have very low support in other ECMAScript-ish languages like Node.

Generally: the terms "programming language" and "runtime" are synonymous. There are a few examples where this is not the case (Javascript is definitely the best one; Java/JDK; python). There are hobbyist runtime re-implementations of some languages. But generally: Only academics worry about differentiating them.


Node is not a language.


Since you're going to be pedantic, I will be too.

Language is more than syntax. Language is the combination of Syntax and Meaning; it's a system of communication. The word "Fooloofol" is syntactically correct English, but meaningless within the English language; and thus is not English. The English word "bark" is syntactically identical across more than one meaning; a dog barks next to the tree bark. Language isn't just syntax; it's the entire system of understanding.

The statement `fetch("https://example.com")` is syntactically valid in any ECMAScript-compatible language. It's also syntactically correct Python, Go, Rust, and probably a few other languages as well. But: it is not meaningfully correct code in many of the runtimes which implement that language. It was only valid in NodeJS after version 18, after all.

What you're really asserting is: Node is not a syntax. That's more accurate; but still inaccurate. Node does have its own syntax. For example; if you were to execute "await myFunc()" in NodeJS v6, you would get a syntax error. So, if Node isn't a syntax; what syntax does it use? JavaScript? By what definition? ECMAScript? Certainly a subset of the global standard, or a version of the standard released in the past. They're quite bad at keeping up. 99% syntactic compatibility with another syntax is still a different syntax; and thus it has its own syntax.

The old "you're writing HTML and CSS, that's not programming" runs in a similar vein. It is programming, by any reasonable definition of programming. You're instructing a computer to do something. The real assertion is that its not procedural programming; which seems like a pretty dumb differentiation to lose sleep over to me.


Node is a JavaScript runtime with some framework stuff on top. Not more, not less. The language is the implemented ECMAScript version. If you want to add a new (key)word to the language, there is a long commitment process. "Fooloofol" will give `Uncaught ReferenceError: fooloofool is not defined` in English and Node. :-D


See, but this brings up an interesting point: What is JavaScript? The answer may surprise you: it's the common parlance we very reasonably use to refer to the programming language where we can do things like `"\t" == 0` and get `true`. But, interestingly: Oracle owns the trademark on that name, and so in very few if any formal specifications or standards bodies will you find the name "JavaScript"; it's "ECMAScript".

But, ok: Node advertises itself as a JavaScript runtime, but the standards body against which they are tracking makes it an ECMAScript runtime; not more, not less. Probably not more, I'll grant you; but certainly less! You can pick any formally standardized version of ECMAScript, and find that Node.js has incomplete support, in some cases even for years after publication [1]. Again; it's close enough nowadays, especially outside of ESNext which really doesn't count, that we're not talking about a useful or meaningful difference; but rather a pedantic difference.

"Languages" formally specified independent of implementation are not languages. Language is the implementation; and the specification guides it. I'm not just talking about programming languages. I'm talking about spoken and written language as well.

For lack of a better specification; The Oxford English Dictionary is not English. The dictionary is both more and less than english. It has a very large center-of-the-venn-diagram. But: commonly spoken English words will always exist for which it doesn't yet have definition for (it's as of yet blissfully unaware of "bussin" and "rizz"). Simultaneously; it will have words that make no sense to modern speakers, or definitions that have fallen out of use.

It's the same situation with programming languages; the language is the implementation. The specification, if one exists, guides the implementation; but without absolutely no-more-no-less 100% coverage of the specification (which has not happened in NodeJS), and absolutely no-ambiguity-in-the-spec (which has never happened in any formal specification), they are meaningfully and inherently different. Because zero-ambiguity is impossible given fundamental constraints in politics, communication, and in a very real way physics: the specification isn't the language.

Here's an interesting fun fact: TypeScript; that programming language we all love. It has no formal specification. Yup! People have been asking Microsoft since 2016, when they last published the specification, to update it, but the team has (explicitly or not, I don't know) taken the stance that the implementation (and its test cases) are the specification. The language is the implementation.

So; what language is Deno-compatible code written in? The only accurate, formal, academic, correct answer is: It's written in Deno. The language is the implementation. Productively and usefully, it's written in a language that is approximately similar enough to TypeScript that no one will notice.

[1] https://node.green/


Modern ECMAScript implementations are a superset of the JavaScript language. Look, taking your spoken language analogy, things like the Deno KV store are an implementation of instructions doing something specific. In spoken/written language this could be a cooking recipe. Of course the recipe is not a language in itself (except if cooking is your language, haha), but it is written _in_ a language. You would never title a cookbook a language.


I agreed with it until you injected the irrelevant bit about HTML and CSS being programming languages. It is ARGUABLY true, but very unhelpful; you are not giving learners useful direction when you use a blanket term ("programming language") to refer to vastly different things that require different mindsets (JS & other languages vs HTML, CSS, YAML...) to master.


And the browser, the other main JS runtime, is certainly not a language either.


> If tomorrow, someone started a for-profit company making a faster Python runtime and introduced features like this, it would be weird, and it would feel as pointless to me as Deno is

That's not really what Deno is. It's more vertical than purely a Python runtime-equivalent.

Having said that, the CPython runtime includes SQLite, so it's not far off anyway.


> That's not really what Deno is. It's more vertical than purely a Python runtime-equivalent.

Can you explain further?

Thanks


Sure - e.g. it comes with a web server built in, ready to make web apps with a quick Deno.serve. It seems oriented in a certain direction.
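
A complete HTTP server is one line, no imports or configuration (a minimal sketch):

    Deno.serve((_req) => new Response("hello world"));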


That's great, because JS (and TS, for Deno) doesn't do that. This is just a runtime feature of the engine.


You don't have to use their hosted service.


They actually have zero documentation I can find for using DenoKV outside of the Deno Deploy environment. It may be possible, or you can always use Deno without using DenoKV, but I think the statement "you have to use their hosted service" isn't inaccurate.


The first page of their documentation [0] mentions this:

> Opening a database
>
> In your Deno program, you can get a reference to a KV database using Deno.openKv(). You may pass in an optional file system path to where you'd like to store your database, otherwise one will be created for you based on the current working directory of your script.

In fact, only the last page in the documentation speaks about KV on Deno Deploy.

[0] https://docs.deno.com/kv/manual#opening-a-database


Passing a path to the local filesystem for the SQLite database only applies to local development; it does not speak to how to connect to e.g. a remote-hosted FoundationDB cluster.

But, I think this section hints at how it could be done [1]. It's phrased in the context of connecting to the Deno KV-hosted database from outside Deno Deploy, but it seems like the reverse could also be accomplished with the same API.

[1] https://docs.deno.com/kv/manual/on_deploy#connect-to-managed...
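
Based on that page, connecting to a remote database looks like passing a URL instead of a file path (a sketch; the database ID is a placeholder, and DENO_KV_ACCESS_TOKEN must be set in the environment):

    // export DENO_KV_ACCESS_TOKEN=...   (a Deno Deploy access token)
    const kv = await Deno.openKv(
      "https://api.deno.com/databases/<database-id>/connect",
    );
    console.log(await kv.get(["some", "key"]));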


Literally on the home page (https://deno.com/kv).

> Don't want to use Deno Deploy? Deno KV works on any VPS, so you can deploy to your favorite cloud hosting service.


Marketing sites are not documentation.


It's an ad.



