You should check out JPro. It is a JavaFX-based technology in which JavaFX (a desktop UI framework) is rendered in the browser using web-stack technologies. The entire state of the app is maintained server-side, even on a per-user basis. JFX Central is implemented this way:
The code base is actually a desktop application written in JavaFX, where the browser just happens to be a "monitor screen" showing you the desktop app.
As for server-side state and server-side rendered UI, many other Java frameworks have implemented similar techniques, including JSF, Apache Wicket, and Vaadin.
I am not sure this project is truly innovating here, with things like:
> What would have perhaps been a more fair comparison is to share the peak load that Google services running on GCP generated on Spanner, and not the sum of their cloud platform.
Not necessarily about volume of transactions, but this is similar to one of my pet peeves: statements that quote aggregated compute numbers.
"Our system has great performance, handling 5 billion requests per second" means nothing if you don't break down the RPS per compute unit (e.g. per CPU).
Performance figures are relative, and in a distributed architecture most systems can scale simply by throwing more compute at the problem.
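To make the point concrete, here is a back-of-the-envelope sketch (the class name and the numbers are mine, purely illustrative):

```java
public class PerUnitThroughput {
    // Aggregate throughput divided by the number of compute units
    static double perInstanceRps(double aggregateRps, int instances) {
        return aggregateRps / instances;
    }

    public static void main(String[] args) {
        // A headline "5 billion RPS" spread over 1 million instances
        // is only 5,000 RPS per instance, which is unremarkable for
        // a single modern server.
        System.out.println(perInstanceRps(5_000_000_000.0, 1_000_000)); // prints 5000.0
    }
}
```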
Yeah, I've seen some pretty sneaky candidates try that on their resumes. They aggregate the RPS across all the instances of their services, even though those instances share no dependencies or infrastructure; they're just independent instances/clusters running the same code. When I dug into those impressive numbers and asked how they managed coordination/consensus, the truth came out.
True, but one would hope that both sides in this case would be putting their best foot forward, and getting peak performance by right-sizing your DB is part of that discussion. I can't imagine AWS would publish "126 million QPS" if they could have provided a larger instance that delivered "200 million QPS", right? At some point we have to assume both sides are showing the best their service can do.
The 126M QPS number was certainly just the parts of Amazon.com retail that power Prime Day, not all of DDB traffic. If we were to add up all of DDB's volume it would be way higher: at least an order of magnitude, if not more.
Large parts of AWS itself use DDB, both in the control plane and the data plane. For instance, every message sent to AWS IoT internally translates into multiple DDB calls (reads and writes) as the message flows through the different parts of the system. IoT alone handles millions of RPS, and that is just one small-ish AWS service.
Put yourself in the shoes of who they're targeting with that.
They're probably dealing with thousands of requests per second, but want to say they're building something that can scale to billions of requests per second to justify their choices, so there they go.
The Java tests (Loom in particular), and possibly those for other language runtimes in this benchmark, can't be compared without looking at the runtime settings.
The JVM, for example, has a default max heap size of 25% to 50% of available memory, depending on how much memory the system has. And since the code uses an ArrayList, it needs an even bigger heap due to backing-array expansion. To eliminate that noise, the code should be changed to a plain Thread[] array with a fixed size based on numTasks.
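A minimal sketch of that change (my own code, since OP's source isn't shown here; it assumes the benchmark just spawns and joins virtual threads, and requires Java 21+):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadsSketch {
    static int runTasks(int numTasks) {
        AtomicInteger completed = new AtomicInteger();
        // Fixed-size Thread[] instead of ArrayList: no backing-array
        // expansion, so the heap requirement is predictable up front.
        Thread[] threads = new Thread[numTasks];
        for (int i = 0; i < numTasks; i++) {
            threads[i] = Thread.startVirtualThread(completed::incrementAndGet);
        }
        for (Thread t : threads) {
            try {
                t.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return completed.get();
    }

    public static void main(String[] args) {
        int numTasks = args.length > 0 ? Integer.parseInt(args[0]) : 10_000;
        System.out.println(runTasks(numTasks));
    }
}
```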
Once that is done, you can run up to 10_000 tasks (virtual threads) within a JVM that consumes no more than 32 MB of RAM (compared to the 78 MB originally reported by OP).
These were the best results I could find after playing a bit:
# Run single task
docker run --memory 26m --rm java-virtual-threads 1
# Run 10 tasks
docker run --memory 26m --rm java-virtual-threads 10
# Run 100 tasks
docker run --memory 26m --rm java-virtual-threads 100
# Run 1_000 tasks
docker run --memory 28m --rm java-virtual-threads 1000
# Run 10_000 tasks
docker run --memory 32m -e JAVA_TOOL_OPTIONS=-Xmx25m --rm java-virtual-threads 10000
# Run 100_000 tasks
docker run --memory 200m -e JAVA_TOOL_OPTIONS=-Xmx175m --rm java-virtual-threads 100000
# Run 1_000_000 tasks
docker run --memory 950m -e JAVA_TOOL_OPTIONS=-Xmx750m --rm java-virtual-threads 1000000
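To verify which heap ceiling the JVM actually settled on for a given --memory/-Xmx combination, a quick check (my own snippet, not part of the benchmark) is:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Max heap the JVM settled on (bytes -> MiB); reflects -Xmx or the
        // container-aware default percentage of available RAM.
        long maxMiB = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMiB + " MiB");
    }
}
```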
Twitter's mistake was ever allowing such a thing to be completely free in the first place.
Telco companies charge fees if an entity (private or public) wants to send alerts over SMS to their customers/users. I can't see why Twitter had to be different, other than the "we must grow fast!" mentality of a startup hungry for MAU.
The NYC Subway (MTA) has significant revenue (>$5 billion/year). Sure, they can pay Twitter if this is how MTA riders want to be notified. Worst-case scenario, they add a $0.01 "Twitter fee" to ticket prices.
I wonder if Microsoft's approach for Dev Box is the right one.