Under the hood, other serverless technologies like Lambda run lightweight VMs running Linux. That's why they can easily accept any Linux-compatible container and run it for you in a serverless way.
Cloudflare Workers run a modified Node.js runtime, so you can only run code that is compatible with that runtime. For Cloudflare to offer a product that runs arbitrary Linux-compatible containers, they would have to change their tech stack to start using lightweight VMs.
If you want to run Node.js, then Cloudflare Workers probably works fine. But if you want to run something else (that doesn't have a good WASM compatibility story), then Cloudflare Workers won't work for you.
Not to be pedantic, but it's not a modified Node.js runtime; it is a wholly custom runtime built directly on V8. They're working on some Node.js API compatibility, but it's not at all Node.js. [0]
To quote directly:
Cloudflare Workers, however, run directly on V8. There are a few reasons for this. One reason is speed of execution for functions that have not been used recently. Cold starts are an issue in serverless computing, but running functions on V8 means that the functions can be 'spun up' and executed, typically, within 5 milliseconds or less. (Node.js has more overhead and usually takes a few milliseconds longer.) Another reason is that V8 sandboxes JavaScript functions automatically, which increases security.
There is a slightly hacky way to proxy gRPC to Cloud Run, leveraging the fact that outbound gRPC works.
You run a gRPC server on GCE. Whenever it receives a gRPC request, it temporarily stores the message and associates it with a session ID, then sends an HTTP request containing that session ID to Cloud Run. Your Cloud Run instance uses the session ID to open an outbound gRPC connection back to the server on GCE, which looks up the stored request by that session ID and forwards it to the Cloud Run instance.
This is admittedly hacky, but depending on your use case, it may be good enough.
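In case it's useful, here's a rough Python sketch of just the session bookkeeping the GCE relay would need. CLOUD_RUN_URL is a made-up endpoint, and the actual gRPC servicer wiring (exposing fetch_pending to the Cloud Run callback and returning the eventual response to the original caller) is left out:

    import threading
    import urllib.request
    import uuid

    # Hypothetical Cloud Run endpoint that triggers the callback connection.
    CLOUD_RUN_URL = "https://my-service-xyz.a.run.app/wake"

    _pending = {}            # session ID -> serialized gRPC request
    _lock = threading.Lock()

    def stash_and_notify(serialized_request: bytes) -> str:
        """Store an incoming gRPC message under a fresh session ID and
        poke Cloud Run over plain HTTP so it dials back out to us."""
        session_id = uuid.uuid4().hex
        with _lock:
            _pending[session_id] = serialized_request
        req = urllib.request.Request(
            CLOUD_RUN_URL, data=session_id.encode(), method="POST")
        urllib.request.urlopen(req, timeout=10)
        return session_id

    def fetch_pending(session_id: str) -> bytes:
        """Called by the gRPC service that Cloud Run connects back to:
        hand over the request stored for this session."""
        with _lock:
            return _pending.pop(session_id)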
Sorry, my question was about outbound IPv6. At the backend, we need full IPv4 and IPv6 network connectivity to the outside world. (I haven't tried Cloud Run, but I read through the docs and there are no indications [that I could find] that IPv6 is supported, as with other GCP services.)
That's great to hear, thanks! Is there a page in the docs where this is documented? I'd also like to know if there are any restrictions (e.g., whether all outbound ports are open).
If you could comment on when/whether compute instances will get IPv6, that would be great also :)
"There is as yet no absolute challenger to the relational model. When people think database, they still think SQL. But if there is a true challenger, it is in the graph model."
This article is quite biased towards graph databases with regard to the SQL-versus-NoSQL tension. This video presents a much more balanced view of SQL versus NoSQL, in my opinion.
https://www.youtube.com/watch?v=qI_g07C_Q5I
Here's my attempt at a summary.
Wolfram speculates that the universe is a network of nodes and connections. This network changes over time according to simple substitution rules. Specific patterns in the network give rise to effects that, at a larger scale, we experience as the physical world. He has shown that if you model the universe this way, you can neatly derive special relativity and general relativity. He is also doing a brute-force search through networks to look for one that exhibits properties like those of our universe.
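To make "substitution rules" concrete, here's a toy Python sketch (the rule is made up for illustration and is not one of Wolfram's actual rules): the network is a list of edges, and each step replaces every edge (x, y) with two edges through a freshly created node.

    import itertools

    _new_node = itertools.count(start=100)  # source of fresh node IDs

    def step(edges):
        """Apply the toy substitution rule once to every edge:
        (x, y) -> (x, z), (z, y) for a new node z."""
        out = []
        for x, y in edges:
            z = next(_new_node)
            out.extend([(x, z), (z, y)])
        return out

    network = [(1, 2), (2, 3)]
    for _ in range(3):
        network = step(network)
    print(len(network))  # 2 -> 4 -> 8 -> 16 edges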
I think I have a simpler counterexample to pg's hypothesis than any other I've read in the comments. Suppose our goal is to admit the top 5 applicants, listed below by group and performance:
A - 30,000
A - 10,000
A - 9,000
B - 7,000
B - 5,000 # Cutoff point below this line
A - 4
B - 3
B - 2
Even though admitting the top 5 by score is perfectly fair, the admitted applicants from group A perform better on average.
pg's argument is that if the average performance of one admitted group is better than the other's, the admission process has a bias. This example shows that you can have an unbiased process in which the average performance of the admitted groups still differs.
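To make the arithmetic explicit, here's the same example in a few lines of Python (scores as listed above, cutoff at the top 5):

    # The applicants from the example above, as (group, performance) pairs.
    applicants = [("A", 30000), ("A", 10000), ("A", 9000), ("B", 7000),
                  ("B", 5000), ("A", 4), ("B", 3), ("B", 2)]

    # Admit the top 5 purely by performance -- the groups are treated identically.
    admitted = sorted(applicants, key=lambda a: a[1], reverse=True)[:5]

    def avg(group):
        scores = [s for g, s in admitted if g == group]
        return sum(scores) / len(scores)

    print(avg("A"))  # ~16333: (30000 + 10000 + 9000) / 3
    print(avg("B"))  # 6000:   (7000 + 5000) / 2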
It's likely that the two of you are talking past each other because you read slightly different articles. Paul added an assumption to his article, possibly after William read it, which is intended to rule out his posited distribution: "(c) the groups of applicants you're comparing have roughly equal distribution of ability".
This strikes me as a "heroic assumption", but it's true that if you make it, most of the flaws in his argument go away. Add in the unspoken assumption that the groups are both large enough that sampling variation does not matter, and I think he's probably logically correct.
On the other hand, once you make these assumptions, the rest of his argument seems unnecessary, since all you need to know is the ratio of males to females funded. If male and female founders are exchangeable, the process is biased if one group is funded more often than its share of the applicant pool would predict.
You don't even need to look at outcome, since we've already assumed the founders are of equal ability. I think that Paul is aiming at the case where we don't know the ratio of applicants. I think his argument can be useful in this case, but only if you have already accepted his assumptions.
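Under those assumptions the test is just a comparison of two rates. A sketch with entirely made-up numbers:

    # Made-up counts, purely for illustration.
    applicants = {"male": 800, "female": 200}
    funded     = {"male":  90, "female":  10}

    for group in applicants:
        applicant_share = applicants[group] / sum(applicants.values())
        funded_share    = funded[group] / sum(funded.values())
        print(group, round(applicant_share, 2), round(funded_share, 2))

    # If founders really are exchangeable, funded_share should track
    # applicant_share; here the female share drops from 0.20 to 0.10,
    # which under the stated assumptions would indicate bias.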
I have no first-hand knowledge of this, so I'm probably wrong. Maybe it's more accurate to say that the late-stage investments are driving valuations up (i.e., the a16z analysis). But a lot of these later-stage investors would not have felt comfortable without the liquidation preferences. The key distinction is that liquidation preferences are not critical to a VC's decision to invest, but are critical to the later-stage large institutional investor's decision.
https://cloud.google.com/kubernetes-engine/docs/how-to/tpus#...