Not that it's super important, but the Web Locks API is getting some circulation after a question about how you keep multiple pages from trying to use a (single-use-only) OAuth refresh token at the same time. Which is a pretty good use case for this feature! https://bsky.app/profile/ambarvm.bsky.social/post/3lakznzipt...
Makes me wonder what part of the pattern you think is "too clever"? I think it is fairly easy to reason about when the lock is restricted to the encompassing block and automatically dropped when you leave the block.
It’s kind of a weird design that some of your variables (which you can define anywhere in the scope FWIW) just randomly define a critical section. I strongly prefer languages that do a
with lock {
  // do stuff
}
design. This could be C++ too, to be honest, because lambdas exist, but RAII is just too common for people to design their locks like this.
If you really wanted it to be at the top level you could probably turn it into an explicit `release` call using a wrapper and two `Promise` constructors, though that would probably be a bad idea since it could introduce bugs.
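A minimal sketch of what that wrapper could look like; acquireLock is a made-up name, and the two `Promise` constructors are the ones mentioned above:

  // Hypothetical wrapper: resolves to a release() function once the lock
  // is granted, and holds the lock until that function is called.
  function acquireLock(name) {
    let release;
    const released = new Promise((resolve) => { release = resolve; });
    return new Promise((acquired) => {
      navigator.locks.request(name, () => {
        acquired(release); // hand the caller its release() function
        return released;   // keep the lock until release() is called
      });
    });
  }

  // Usage: nothing forces you to call release(), which is exactly the
  // bug-introducing risk mentioned above.
  // const release = await acquireLock("token-refresh");
  // try { /* critical section */ } finally { release(); }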
Subjectiveness about what’s more readable aside… you’re proposing that a new language feature would be more readable than an API design. To me, the approach documented on MDN is declarative whereas your proposal is imperative. And, subjectively, JavaScript shines in declarative programming.
'using' fixes having to do cleanup in a finally {} handler, which you can a) forget, and which b) often messes up the scoping of variables, since you need to move them outside the 'try' block.
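A rough sketch of the contrast, assuming a hypothetical grabLock() whose result has both a release() method and a [Symbol.dispose]() method (doWork is made up too); the `using` form relies on the Explicit Resource Management proposal:

  // try/finally: the variable has to be declared outside the try so the
  // finally clause can see it, and the release is easy to forget.
  let lock;
  try {
    lock = await grabLock("foo");
    await doWork();
  } finally {
    lock?.release();
  }

  // using: cleanup runs automatically when the block is exited, because
  // the runtime calls lock2[Symbol.dispose]() for us.
  {
    using lock2 = await grabLock("foo");
    await doWork();
  }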
It also creates additional finalization semantics for a using-ed value. Which has all sorts of implications. lock.do(async (lock) => {work}) is a similar construct - a scope-limited acquisition that requires nothing special.
Catch/finally not sharing a scope with try is a mistake that keeps getting repeated; that I surely agree with!
I wish the compatibility tables on MDN gave an indicator of when a feature became available.
My ideal would be a thing that says "this hit 90% of deployed browsers 4 years ago", but just seeing the date it was added to each of the significant browser families would be amazingly useful.
Using the Web Locks API page as an example, let's say we want to know when `LockManager` was added to Chrome. Here are the steps:
1. View the page: https://developer.mozilla.org/en-US/docs/Web/API/Web_Locks_A...
2. Scroll down to the Browser compatibility table
3. Find the cell you're interested in, in this example we're looking at where the `LockManager` row meets the Chrome column.
4. We see a check and a version number (in this case 69).
So at this point we know that it has existed since version 69.
Now, in the case that we don't already know how old version 69 is relative to Chrome's current version (well past 100 at this point) and we need to know when the feature gained support:
5. Click on the cell to view the timeline of that cell.
Now we know that it was released on 2018-09-04, and that it never had an earlier partial release behind browser flags/prefixes/experiment flags/etc.
I do that a lot: most of the commits in that tools repo (which doubles as my "playing around with LLM generated code" repo) include links to the relevant conversations: https://github.com/simonw/tools/commits/main/
I think this is the idea behind the Baseline <Year> label you see on a lot of MDN features now; it shows the year when the feature became available in all 3 major browser engines.
Why only in secure contexts? You can use storage APIs in insecure contexts (and even build locking on top of them by spinning), but the lock API, which seems much more innocuous, requires a secure context?
I believe most new capabilities are limited to secure contexts, in part as a way of discouraging bad habits, even when there’s no particular risk. The ideal is that everything should run in a secure context, but it’s a hard sell removing existing functionality from insecure contexts. If it were being added now, I’m pretty sure storage APIs would be secure-only.
I'd presume that a MITM could set, release and read locks, by which it might determine that you have certain sites open.
In general¹, I presume that any API that uses "same origin" as its bounding criterion must be secure, since there's no way to enforce "same origin" in insecure contexts.
--
¹ Aside from the idea of simply making all new APIs "secure only", just to discourage insecure contexts.
Why would that matter? The tabs don't share memory. Code simply doesn't run while it tries to acquire a lock that another piece of code from another tab has already acquired. The two tabs don't even need to run the same app.
Well, it might matter for functionality in the application.
After you fix a lock-related bug for example, how do you deal with an open tab running a different version of your code that is erroneously misusing a lock?
You need to account for that when you release new code, yeah? Rename the lock maybe? Some other logic?
I’ve been using this for a while now. But one thing I recently worked on required these locks to be extremely efficient. Does anyone have any benchmarks on usage of these locks? Preferably compared to the use of Rust’s tokio mutex from a Wasm context.
You should write your own benchmarks! I've been using mitata for microbenchmarks, which is what the Bun and Deno people use for their cool benchmark charts. It's fast and tries to call the system GC between runs, which helps reduce bias. GitHub: https://github.com/evanwashere/mitata
I find iterating in mitata super fun and a little addictive. It's hard to write a representative micro-benchmark, but optimizing them is still useful as long as you aren't making anything worse, which is often easy to avoid. I recently used mitata-benchmark-guided optimization to rewrite a core data structure at Notion for a 5% latency decrease on a few endpoints at p90/95/99. One of our returning interns used it to assess serialization libraries and she found one 3x faster. a+++ would recommend
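For what it's worth, here's a rough mitata sketch of the kind of microbenchmark being discussed. It assumes an environment where navigator.locks exists, and the numbers will mostly reflect scheduling overhead on that machine rather than anything universal:

  import { bench, run } from 'mitata';

  let counter = 0;

  // Uncontended lock: acquire, bump a counter, release.
  bench('navigator.locks.request (uncontended)', async () => {
    await navigator.locks.request('bench-lock', async () => {
      counter++;
    });
  });

  // Baseline: the same work behind a plain resolved promise.
  bench('awaited promise (baseline)', async () => {
    await Promise.resolve().then(() => {
      counter++;
    });
  });

  await run();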
I’d assume that if you have used them for a while, you have a rough understanding of the performance characteristics and/or the knowledge of how to write even a quick, rough benchmark for them.
Weird API to release the lock. What if you want to hold on to it? Then you need to do some silly promise wrapper. Would be better if there was a matching release() function.
The ergonomics here for 99.999% of uses seem great to me. Whatever async function you have can run for as long as it needs. That's using the language, not adding more userland craft. It's a good move.
Probably because there are no RAII semantics in JS and they don’t want to make it possible to forget to release the lock. Although the promise workaround is explicitly opting into this behavior.
You mean Atomics + SharedArrayBuffer, otherwise it won't be shared across agents. I can imagine all the postMessage calls and message handlers swirling around in all agents to approximate something like the Web Locks API for a simple lock, but tbh I'd take the Web Locks API any day.
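For comparison, a rough sketch of what a hand-rolled Atomics mutex looks like. Note that Atomics.wait only works off the main thread, and tabs don't share memory at all, so you'd still need workers plus postMessage to get the SharedArrayBuffer everywhere:

  // Classic futex-style lock over one Int32 slot in a SharedArrayBuffer.
  const UNLOCKED = 0, LOCKED = 1;

  function lock(i32, index = 0) {
    while (Atomics.compareExchange(i32, index, UNLOCKED, LOCKED) !== UNLOCKED) {
      Atomics.wait(i32, index, LOCKED); // sleep until a holder calls notify
    }
  }

  function unlock(i32, index = 0) {
    Atomics.store(i32, index, UNLOCKED);
    Atomics.notify(i32, index, 1); // wake one waiter
  }

  // const i32 = new Int32Array(new SharedArrayBuffer(4)); // distributed via postMessage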
  const p = Promise.withResolvers()
  navigator.locks.request(
    "foo",
    () => p.promise, // the lock is held until p.promise settles
  )
  // ...later, when you actually want to let go of it:
  p.resolve()
I guess there’s still room for .requestWithResolvers(); they rarely learn the first lesson. Even the $subj article seems to be unaware of it and uses the silly wrapper way.
I guess I don't actually get the point, because if I am locking a resource in one tab so that other tabs can't use that resource... how is that not going to lead to behavior that a user would think was broken?
Two tabs try to refresh the token at the same time, the user opened tab 2, tab 2 can't refresh because things are locked in tab 1 - user thinks the app is broken? Isn't that the way it would happen? I guess you can detect that it is locked in another tab, though, so you could give the user a warning about this?
I guess I am missing something about the scenario...
It's not locking the UI refresh, it is only locking the access token refresh.
Let's play it out.
- User authenticates, gets an access token A1 and a refresh token R1. R1 is one time use only.
- A1 and R1 are stored as cookies (secure and httponly to prevent XSS).
- A1 is used to access multiple APIs, but is only good for 5 minutes.
- The SPA opens up a new tab for whatever reason, so the browser is making requests to the APIs from both tabs (T1 and T2)
- 5 minutes passes, and T1 tries to call an API. The API call fails because A1 is not valid. T1 then makes a request to the OAuth server with R1, which returns a new access token (A2), a new refresh token (R2) and invalidates R1.
- 0.1 seconds later, T2 tries to call an API with A1, and also discovers it needs to get a new access token. It tries to use R1, but R1 is invalid because it has already been used.
With this locking API, T1 could lock access to R1 when it was about to use it. T2 would then see the lock request fail, and not try to use R1. After R2 has been stored, T2 could use R2 (or T2 could use A2).
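A hedged sketch of that flow. The details are assumptions: the tokens live in httpOnly cookies as described, so the tab never touches R1 directly and just POSTs to a hypothetical /oauth/refresh endpoint, and the lastRefreshAt bookkeeping is made up to show how a queued tab can notice the work already happened:

  async function refreshAccessToken() {
    await navigator.locks.request('oauth-refresh', async () => {
      const last = Number(localStorage.getItem('lastRefreshAt') ?? 0);
      if (Date.now() - last < 10_000) {
        return; // another tab refreshed while we were queued for the lock
      }
      // We hold the lock, so only this tab spends the one-time refresh token.
      const res = await fetch('/oauth/refresh', { method: 'POST' });
      if (!res.ok) throw new Error('token refresh failed');
      localStorage.setItem('lastRefreshAt', String(Date.now()));
    });
  }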
Maybe it's old-person-yelling-at-cloud vibes, but I'm already annoyed enough at the majority of SPAs for breaking so many useful default browser features, such as stable links, navigation, multi-tab functionality, open in new tab, etc. My gut feeling tells me this is only gonna make them even more of a cancer.
IndexedDB transactions give guarantees about atomicity but not about isolation. Note that the word "isolation" doesn't appear in the spec https://w3c.github.io/IndexedDB/
A lease has a time limit. A lock does not. Clearing stale locks manually is a PITA. I'd still assume that, this being a web-platform contract, the lock would be automatically cleared if the browser is restarted or something. But honestly, a lease makes users do better design from the get-go.
I might agree in other contexts, but not with the use case here and how the API is designed.
It looks like the only potential for a "stale lock" is if somehow the async function passed to the request method hangs forever. But in web contexts I think that would be extremely unlikely for everyday use cases (e.g. most of the time I could imagine the async callback making remote calls using fetch, but normally that fetch has its own timeout). In contexts where it could happen, I'd argue it's better to make the caller explicitly handle that case (e.g. by using `steal`) than potentially leave things in an indeterminate state because a lease timeout expired.
There is no real safe way to use lock hold timeouts. While a waiter can timeout and possibly handle failing to acquire the lock, there's no generic safe way to steal the lock from the holder after a timeout since the holder may still be accessing the protected resources/have left the resources in an inconsistent state. Adding a wait timeout which generates telemetry on a long wait may be useful for helping catch failures in production, but seizing the lock is almost always the wrong way to go about this.
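A sketch of that wait-timeout idea, assuming AbortSignal.timeout is available; withLockOrReport is a made-up name, and the timeout only abandons the wait, it never touches the current holder:

  async function withLockOrReport(name, work) {
    try {
      return await navigator.locks.request(
        name,
        { signal: AbortSignal.timeout(5000) }, // give up *waiting* after 5s
        work,
      );
    } catch (err) {
      if (err.name === 'TimeoutError' || err.name === 'AbortError') {
        console.warn(`gave up waiting on lock "${name}" after 5s`); // telemetry hook
      }
      throw err;
    }
  }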
I've actually used this once! I built a slightly overengineered HTTP client that used a reader-writer lock on the auth token. That way, if a token refresh request was taking place, all new requests would wait for it to finish writing before being sent.
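Something along those lines can be expressed directly with the API's shared/exclusive modes; sendRequest and refreshToken are made-up helpers:

  // Ordinary requests take the token lock as readers...
  function authedFetch(url, options) {
    return navigator.locks.request('auth-token', { mode: 'shared' }, () =>
      sendRequest(url, options) // many of these can be in flight at once
    );
  }

  // ...and a token refresh takes it as the single writer.
  function refreshAuthToken() {
    return navigator.locks.request('auth-token', { mode: 'exclusive' }, () =>
      refreshToken() // waits for in-flight readers to finish first
    );
  }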
I'm not sure that this API is a good thing. It is already a pain in the ass when web apps only work in a single tab and kind of log you out of other tabs.
Also, when you have a lot of tabs, I can easily see web pages breaking in strange ways.
Expect deadlocks in web applications now? I wouldn't necessarily trust a JS programmer with a lock, sorry. They are hard enough in C++ or other languages that generally require a lot more discipline.
I have a hard time picturing how an application can be considered anything other than completely broken once a couple threads/workers have deadlocked, so I don't know what any of that quote means. Yeah, I get that browsers isolate tabs and that the damage is contained.
You seem to be expecting these locks to block a thread, but they do not. A "deadlock" with these locks is simply a chunk of heap space holding a bunch of promises that will never resolve, occupying a few slots in the global event loop's select statement.
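An illustrative sketch of that kind of "deadlock", with two lock names acquired in opposite orders; nothing blocks and the page stays responsive, the callbacks just never finish:

  navigator.locks.request('a', async () => {
    await navigator.locks.request('b', async () => {}); // waits for whoever holds b
  });

  navigator.locks.request('b', async () => {
    await navigator.locks.request('a', async () => {}); // waits for whoever holds a
  });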