GH/CF Pages are only "free" if you have minimal traffic. And even that free tier will go away if people start popularizing them as a WordPress alternative or lowering the barriers to entry. Let's keep it free, please.
The name “Envoy” conflicts with a company (envoy[.]com) that has nothing to do with proxies.
I like Envoy Proxy too. I have used it to support a couple of projects at Apple, and the gRPC Access Log Service works well -- https://www.envoyproxy.io/docs/envoy/latest/api-v3/extension... -- although I wish the configuration file were more obvious to read: I spent hours deciphering names, working out the meaning of certain settings, and tuning values that should otherwise be self-explanatory. As Leon Bambrick said, naming things is hard, and some engineers tend to name things in a stupid way.
I looked into Envoy a few years ago and found it difficult to get started. I recall the syntax involving long names that were hard to look up in the docs.
I'm assuming some of those issues may be why it isn't as commonly talked about as nginx or Caddy.
gRPC and Thrift can't express ADTs (enums with data) easily, but OpenAPI can. That's worth a lot in my book.
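For example, the kind of ADT I mean, written as a hypothetical TypeScript discriminated union (OpenAPI can model this directly with `oneOf` plus a `discriminator`):

```typescript
// Hypothetical ADT: each variant carries different data.
type PaymentMethod =
  | { kind: "card"; number: string; expiry: string }
  | { kind: "bank_transfer"; iban: string }
  | { kind: "cash" };

// Exhaustive handling falls out naturally from the discriminant.
function describe(p: PaymentMethod): string {
  switch (p.kind) {
    case "card":
      return `card ending in ${p.number.slice(-4)}`;
    case "bank_transfer":
      return `transfer from ${p.iban}`;
    case "cash":
      return "cash";
  }
}
```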
Another advantage of OpenAPI is that you can write your specifications using Rust types (as we do at Oxide with Dropshot: https://docs.rs/dropshot)
edit: Apparently protobuf 3 does have oneof: https://protobuf.dev/programming-guides/proto3/#oneof. It looks like it solves the problem, but I can't vouch for it, and it appears to have some edge cases ("only the last member seen is used in the parsed message"). Thrift still doesn't appear to have an equivalent.
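For reference, this is roughly the shape a proto3 `oneof` takes in generated TypeScript (a hand-written mirror of what protobuf-es emits; `Shape`, `Circle`, and `Square` are made-up names):

```typescript
// Mirrors what protobuf-es would generate for:
//   message Shape { oneof kind { Circle circle = 1; Square square = 2; } }
// Exactly one case is set at a time; per the edge case above, if multiple
// members appear on the wire, only the last one seen survives parsing.
interface Circle { radius: number }
interface Square { side: number }

interface Shape {
  kind:
    | { case: "circle"; value: Circle }
    | { case: "square"; value: Square }
    | { case: undefined; value?: undefined };
}

function area(s: Shape): number | undefined {
  switch (s.kind.case) {
    case "circle":
      return Math.PI * s.kind.value.radius ** 2;
    case "square":
      return s.kind.value.side ** 2;
    default:
      return undefined; // oneof not set
  }
}
```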
And I do think being able to write the spec using Rust types is really nice -- you still get an OpenAPI document as an artifact, and (for external users) you get wide client compatibility.
If you're building maintainable servers, you should write the doc first and codegen from there. Otherwise you're going to be in a world of hurt when some junior changes your datatype, or when you need to version the API and maintain both versions simultaneously from the same or similar endpoints.
You're right that this is a very difficult problem, but writing the document first doesn't give you much over generating it from types.
We actually have a plan for supporting multiple versions, and conversions between the corresponding types, using Dropshot as the place where that's coordinated.
This "stuff" allows for easy exchange of API definitions and Arazzo goes to the next level to define the semantics of the process of combining API calls.
gRPC requires brittle compilation of protobuf definitions, a problem that has dogged every marshalling/serialization protocol for remote procedure calls since XDR.
Whether you like it or not, HTTP/JSON are the lingua franca of the internet (at least the API side of things). Protobuf is good if you are in control of both sides of the API, less so if you are just the server. It also is much less self-documenting than JSON Schema/OpenAPI.
Apples and oranges?
gRPC might sometimes fit the bill for server-to-server use cases, but it's completely unsuitable for integration with SPAs; grpc-web was never well-supported, and is now dead. I don't understand the "want to vomit" perspective. That implies that adding protoc to your build toolchain and using an opaque binary format/protocol is somehow much more palatable than well-documented JSON / REST over HTTP. What am I missing?
ConnectRPC works over the web, and I've built several web apps (SPAs) with it. It works with JSON by just setting the Content-Type header to `application/json`. You can add compression easily on the backend with a single option. Generating typed stub code is one line (`buf generate`). Connect-ES is awesome - it's a Typescript-first protobuf library for the web.
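For instance, you can hit a Connect endpoint with nothing but fetch; the URL path and message fields here are made up, but the shape is the Connect unary protocol (a POST to /<package>.<Service>/<Method> with a JSON body):

```typescript
// Plain JSON call to a Connect unary endpoint -- no generated code needed.
const resp = await fetch(
  "https://example.com/game.v1.GameEventService/GetRecentAnnotatedGames",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ limit: 10 }),
  },
);
const games = await resp.json();
```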
Interacting with the API is very simple. For example, a minimal sketch using the typed Connect-ES client (service and method names are from my project; the request fields are made up):
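```typescript
import { createClient } from "@connectrpc/connect";
import { createConnectTransport } from "@connectrpc/connect-web";
// Generated by `buf generate` from the .proto files.
import { GameEventService } from "./gen/game_event_pb";

const transport = createConnectTransport({ baseUrl: "https://example.com/api" });
const gameEventClient = createClient(GameEventService, transport);

// Fully typed call; request and response messages come from the generated code.
const resp = await gameEventClient.getRecentAnnotatedGames({ limit: 10 });
```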
These function calls are typed, and a lot of the code is auto-generated from the .proto spec (in the sketch above, gameEventClient and getRecentAnnotatedGames come from generated code). On the server side the code is also auto-generated. It all works seamlessly. Even the documentation for the API can be auto-generated. See for example:
The above documentation was auto-generated from my protobuf files. This project uses ConnectRPC on the backend and in the front-end SPA. To me it seems so much simpler and better than the way I've seen people use OpenAPI, where many seem to create the code _before_ creating the spec. I actually haven't found a good Go generator of stub code from an OpenAPI spec. With ConnectRPC it just works: it's simple, easy, and fast.

It's also easy to add interceptors that, for example, parse the HTTP request for an Auth or Cookie header and then insert the user ID back into the context for the different service functions to handle whatever needs to be done with the authenticated user; a rough sketch is below.
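A minimal sketch of such an interceptor with Connect-ES in TypeScript (my backend is actually Go, so this is illustrative; `lookupUserId` and `kUserId` are hypothetical names):

```typescript
import {
  Code,
  ConnectError,
  createContextKey,
  type Interceptor,
} from "@connectrpc/connect";

// Context key for the authenticated user ID; undefined until the interceptor sets it.
export const kUserId = createContextKey<string | undefined>(undefined);

// Hypothetical helper: validate the token and return the user ID, or null.
declare function lookupUserId(token: string): Promise<string | null>;

// Reads the Authorization header and stores the user ID in the request's
// context values, where service handlers can read it.
export const auth: Interceptor = (next) => async (req) => {
  const header = req.header.get("Authorization");
  const token = header?.replace(/^Bearer\s+/i, "");
  const userId = token ? await lookupUserId(token) : null;
  if (!userId) {
    throw new ConnectError("missing or invalid credentials", Code.Unauthenticated);
  }
  req.contextValues.set(kUserId, userId);
  return next(req);
};

// In a service handler: const userId = context.values.get(kUserId);
```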
I could ask the same question - what am I missing?
Thanks for the thoughtful reply. Glad you found something you like that's solving your problems! The docs you linked do look nice. Ironically, I noticed this quote there:
> "It's much easier to use the API with JSON + a web browser, but the protobuf option is still available..."
... which sort of underscores my perspective (which is admittedly strongly biased towards / empathetic with the API consumer side).
Yep, but the awesomeness of ConnectRPC is that you can use the JSON API in the web browser, and indeed I do that in this project. We don't have to, and as a matter of fact it's a little less performant, but it's easier to debug and hack.
[meta: IMHO this is an exemplary HN interaction; instead of shouting past each other, we shared very different points of view, and one of us learned something potentially useful.]
Any idea how monstrous migrating from gRPC to ConnectRPC might be, on the backend?
Unless you're making money off it, $15k + however much you have to spend on installing a new breaker panel is too much to spend on hardware that will be outdated in 2 years. If you're making money off it, but you're still cheap, then buy a Supermicro + H100s and colo it in a datacenter. If you're not cheap, you'll just use Azure. So I'm not sure who this product is supposed to be for.
At the risk of derailing the conversation (although Guix is a lisp so maybe not): I agree 100% but also maybe Nix's pragmatism is why it's more popular? "Pragmatism" being a programming language euphemism for "untyped hacky mess".
Why is this in past tense? Guix development is active. It's just that Nix is more popular, which is understandable since it's the original idea and it's older.
Guix even has a very active, high-quality blog where maintainers detail major technical accomplishments, long-term goals, etc.: https://guix.gnu.org/en/blog/
From here it seems like they're growing and advancing well. I wish I could find ready historical data on the number of packages and services from, say, 4 years ago vs. today, though. I could have sworn Repology used to show year-over-year stats, but I can't find them now.
Thank you very much for your feedback! No, it's just fine to work through JavaScript; moving away from JavaScript is not the intent. Quite the opposite: the goal is a connection with the server, passing the RequestInit object, and other similar things!