Ok, the people being quoted on the page all look to be under 25, and so, judging by the comments, are the people commenting here, so I suspect that no one involved or interested in Bridge has ever used RPC systems before and therefore has no idea how bad they actually are. Or possibly, judging by the "No configuration files or IDLs." comment, they may have heard of SOAP back when it was being called an RPC system and are suffering from second-system delusions.
So Google (Protocol Buffers - RPC), Facebook (Thrift - RPC), Twitter (Finagle/Thrift - RPC), GitHub (BERT-RPC), DotCloud (ZeroRPC), Quora (Thrift - RPC) and all major tech companies are wrong? The list of companies using RPC messaging systems is very long.
RPC is widely used. It is mature and well understood. "RPC is bad because network calls look like local calls" is a terrible overused argument.
To some degree, it's a fair critique. RPC is a loaded term, because there were far too many implementations that tried to hide the network, which is a recipe for disaster. We were hesitant about going forward with the name ZeroRPC ourselves because of the historical baggage.
That said, most of the faults people attribute to RPC have been products of individual implementations, not the concept itself, and there really is much more momentum behind RPC engines than anything else. For my Master's thesis, I implemented a generative communication [1] engine. While I still feel it's a "beautiful" model, it turns out no one cares. Developers understand RPC calls. Not that many get tuples / tuplespaces / pattern matching and why they can be nice.
The terms are fuzzy, but tuple spaces aren't solving the same problem as RPC. That is, you can _use_ RPC to implement tuple spaces. But not the other way around!
Message queues are basically the modern tuple spaces, and I'll bet all the companies listed above use message queues too.
RPC has its place, but it's overused. It's a low-level primitive, not a distributed systems architecture. Some people's architecture is basically just RPC spaghetti with naive error handling.
The argument is confused by the fact that you can use an RPC system with messaging semantics (async, a single data payload argument), and you can use a messaging system with RPC semantics (e.g. tunneling RPC over HTTP, using HTTP as the transport).
I think we basically have to stop using the word RPC as a catch-all for so many systems, and identify which choices are good and which ones lead to design mistakes.
The example is obviously a bit simplified, but it's certainly easy to do in generative communication.
Message queues and tuplespaces are very different. A tuplespace allows pattern matching on tuples. And many tuplespace implementations do not provide monotonicity.
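For anyone who hasn't seen the model, here's a minimal toy sketch of Linda-style tuple matching. The `out`/`rd`/`inp` names follow Linda's terminology; the implementation is purely illustrative (it ignores blocking and concurrency, which real tuplespaces handle).

```python
# Toy in-memory tuplespace: a wildcard (None here) matches any value
# in that position of the tuple.

class TupleSpace:
    def __init__(self):
        self.tuples = []

    def out(self, tup):
        """Write a tuple into the space."""
        self.tuples.append(tup)

    def rd(self, pattern):
        """Read (without removing) the first tuple matching the pattern."""
        for tup in self.tuples:
            if self._matches(pattern, tup):
                return tup
        return None

    def inp(self, pattern):
        """Read and remove the first matching tuple ('in' in Linda terms)."""
        for i, tup in enumerate(self.tuples):
            if self._matches(pattern, tup):
                return self.tuples.pop(i)
        return None

    @staticmethod
    def _matches(pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == v for p, v in zip(pattern, tup)
        )

ts = TupleSpace()
ts.out(("temperature", "room1", 21.5))
ts.out(("temperature", "room2", 19.0))

# Pattern matching: any tuple whose first field is "temperature",
# regardless of the other fields.
print(ts.rd(("temperature", None, None)))  # ("temperature", "room1", 21.5)
```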
Well I should have clarified that I was basing that on my interpretation of the terms RPC and message queue. The important bit is that all these debates are obscured by imprecise terminology.
RabbitMQ supports receiving messages based on a pattern (topic exchanges).
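For context, the pattern matching RabbitMQ offers is topic-exchange routing: binding keys may contain `*` (matches exactly one dot-separated word) and `#` (matches zero or more words). A toy re-implementation of just the matching rule, to show the semantics (illustrative only, not RabbitMQ's code):

```python
# AMQP-style topic matching: '*' matches exactly one word,
# '#' matches zero or more words. Words are dot-separated.

def topic_matches(binding: str, key: str) -> bool:
    def match(b, k):
        if not b:
            return not k
        if b[0] == "#":
            # '#' may consume zero or more of the remaining words.
            return any(match(b[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        if b[0] == "*" or b[0] == k[0]:
            return match(b[1:], k[1:])
        return False
    return match(binding.split("."), key.split("."))

print(topic_matches("logs.#", "logs.error.db"))  # True
print(topic_matches("*.error", "app.error"))     # True
print(topic_matches("*.error", "app.info"))      # False
```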
That doesn't match my definition of RPC, but OK. I consider RPC server-to-server, not server -> "smart" intermediary -> server. But I guess they want to define RPC as request/reply. So again it's a terminology issue.
What deployed, modern systems use tuple spaces and are substantively different from a message queue? My impression is that "tuple spaces" were the historical name, from Gelernter's papers, but everyone just calls them message queues. Message queues have lots of different properties but the defining one, as in tuple spaces, is that there's an intermediary between 2 servers. The sender and receiver don't have to be up at the same time.
> What deployed, modern systems use tuple spaces and are substantively different from a message queue?
None that I know of, but that's because no one uses generative communication nowadays :)
> Message queues have lots of different properties but the defining one, as in tuple spaces, is that there's an intermediary between 2 servers.
That's a very loose definition that could classify a lot of things as message queues. Generative communication says nothing about how the semantics are implemented; they may use an intermediary, or they may not. The early version of C-Linda did not use intermediaries, and was intended for in-process communication. Of course, a practical, distributed implementation of generative communication uses intermediaries. But then again, depending on the implementation, it may be distributed across multiple intermediaries.
I would say a message queue would necessarily have monotonicity (the name surely implies it!), which generative communication does not have.
> The sender and receiver don't have to be up at the same time.
In academic parlance this is called time decoupling, and it is an awesome property. ZeroRPC has this as well thanks to 0mq.
I don't really know enough about RabbitMQ to comment. I think I've heard that some engineers at dotCloud tried it out for our distributed communication needs, and it just couldn't scale to our load. But that would've been a while ago, and it has probably improved substantially since.
OK fair enough. But just with regard to your original point -- people should "care" about higher level distributed systems abstractions, and they do (to an extent). I think the main example of a higher level abstraction in wide use is message queues (which I think of as a descendant of tuple spaces, but that point is immaterial).
If your only concept is RPC, that's just too low level and limited, and you're going to end up with a mess. You're right that "developers understand RPC" (in a naive way). But that's just because we are still in the early stages and knowledge hasn't had time to propagate. Some RPC systems have more of the naive properties that the OP was pointing to; some have learned from those mistakes.
Ruby has had DRb for as long as I can remember: a form of remote procedure call in the most literal sense, objects sending messages to other remote objects and getting answers back. But I've never seen it used in any serious capacity.
Perhaps it was because of how XML-RPC and related nonsense poisoned the well.
Times are different now. Developers are embracing asynchronous methods on platforms that used to be strictly synchronous and they're learning to adapt to the fact that your method calls take time to complete, or may not complete at all if you're not careful in your design.
jQuery and related methods are RPC in a very primitive sense. Backbone is a step towards a more Meteor-like paradigm that should serve as a more solid foundation for "modern" RPC.
Actually, one of the major difficulties with CORBA was with the CORBA standards themselves, which did not do a very good job defining asynchronous calls and rendered them pretty darn useless. Check out one of Vinoski's other publications, Advanced CORBA Programming with C++ for more details. (I'll admit I didn't even try to read the CORBA standards.)
I understand you want nobody on your lawn... and I'm sure NewSQL/NoSQL is just IMS again... and you have an opinion that either functional or OOP sucks. They're all tools / approaches. In general, each has several perfectly valid applications. I've used RPC in the past and it was a great approach (a fairly standard XML-RPC interface). Would it have been better as REST with JSON? Probably... at the time, REST and JSON didn't exist (late '90s). It certainly felt better than CORBA.
From the naive perspective of someone over 25, Bridge looks interesting. I would expect some (not all) others who have "been there" would feel the same. I don't think the hyperbolic, condescending attitude is really called for. I apologize if I'm inferring the tone of your post incorrectly.
RPC is one of the few things that qualifies for my Big List o' Evil. It's not that it doesn't work at all; that would be a completely different issue. The problem is that it seems to make a lot of things simpler, right up to the point where it forces a massive amount of its own complexity on you.
RPC is not a "leaky abstraction"; it is semantically invalid. It makes network messaging look like function calls of some style. But they aren't. (At some point, you will find yourself asking what happens when the remote procedure completes successfully, but the calling side gets an error. Or spend some time wondering why some simple code is so slow before you realize that it's spending a millisecond on network transmission to do a tiny fraction of that time's computation.)
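A toy simulation of that first failure mode, with all names hypothetical: the remote side succeeds, the reply is lost, and a naive "it looks like a function call, so just retry" client ends up executing the procedure twice.

```python
# Simulated partial failure: the request is delivered and executed,
# but the reply is dropped, so the caller sees an error anyway.

class Server:
    def __init__(self):
        self.balance = 100

    def withdraw(self, amount):
        self.balance -= amount
        return self.balance

class FlakyNetwork:
    """Delivers every request but drops the first reply."""
    def __init__(self, server):
        self.server = server
        self.replies_dropped = 0

    def call(self, method, *args):
        result = getattr(self.server, method)(*args)  # remote side succeeded
        if self.replies_dropped == 0:
            self.replies_dropped += 1
            raise TimeoutError("reply lost")
        return result

server = Server()
net = FlakyNetwork(server)

# Naive client: treat the timeout like a failed local call and retry.
for attempt in range(2):
    try:
        net.call("withdraw", 10)
        break
    except TimeoutError:
        continue

print(server.balance)  # 80, not 90: the withdrawal ran twice
```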
If you ever see the claim that you can write your code normally and then decide how to distribute it, run away screaming. Or picture me doing it for you. It doesn't work. When you build a distributed system, the fact that it is distributed, and how it is distributed, are an order of magnitude more important than everything else.
If you're using it to make communicating between languages easier, then yes, that is a big win. But there are other ways to do that. REST and JSON are relatively new, but textual protocols like HTTP and its spiritual ancestor, SMTP, aren't. And if that's not good enough, there are always IETF block diagrams and network byte order.
But, you're saying, I'm doing RPC right, structuring my systems around the communications and relying only on large-grained, asynchronous calls. That's great, but what exactly is RPC buying you at that point? 'Cause it's costing you at least visibility into the communications.
My first question for anyone using Bridge is this: what happens when you have three clients calling code on a server, which in turn is calling a couple of other servers, and those servers are making calls back to the original server?
Looks cool, but - is Bridge an open source project? Is it a company? All of the above? Do I have to pay for it? What are the prices?
As someone who makes architectural decisions on a regular basis, I need to know that if a company we depend on dies, we can rip it out and replace it with a) an open source version of what we were paying for or b) our own alternative implementation (which sounds like something on top of RabbitMQ in this case). It's frustrating to see companies like this positioning themselves in a way that makes them difficult to evaluate.
We're a well funded company. We've raised money from A16Z, Salesforce.com, and a bunch of others.[1]
We have open sourced the clients but Bridge Server is not open source.[2]
How do we make money? We're building this type of software for much larger organizations (think enterprise & government). We're not in the business of charging hackers and students $5.
We're likely going to charge some money for Bridge Cloud as a lot of the startups currently using Bridge prefer to use & pay for Bridge Cloud. It's the peace of mind of not having to deal with ops on a messaging server. Bridge server itself will also cost some money.
But the point is that we're only going to charge the big guys who can easily afford such systems. Their alternative today is to spend tens of millions on an engineering team to build systems in house.
There will always be a generous free tier for Bridge. You can download Bridge right now and pump 40,000 messages per minute through it for FREE. Do you need more? Just call me, I'm happy to bump you up.[3]
We have a setup for companies that pay us and need to see the source code. If it's really important to you, we're happy to show you the code.
What can I do to help you evaluate Bridge? Can you shoot me an email? I'm at darshan@getbridge.com.
40K messages per minute is pretty generous. I have hopes of surpassing 40K messages per minute. The one thing that would block me from ever getting started using Bridge is the uncertainty of how much it will cost once I need >40K/minute. I do not like the idea of getting started with a product without knowing what my costs will be down the road when I need to pay for the product.
The guys behind this are some of the smartest people I've worked with. I've been using Bridge ever since their beta launch, and they've personally helped me get set up, responded to every support request I've had, and delivered on everything they promised for Bridge.
It's been super useful for all types of projects -- I've seen it used in tons of small 24 hour hackathon projects, and have used it to scale my own app and port it over to mobile with very little friction. Definitely recommend this piece of technology.
Shoot me an email[1]. We're not open source, but I'm happy to show you the code!
As I said in comments above, if the source is really important to you, I can walk you through it and give you access so that you feel comfortable about what you're adopting. We're just not ready to open source the server yet.
You don't even need to pay for a server license until you're doing thousands of messages per second! At those scales, you should be able to afford cheap licenses. We're not focused on making small amounts of money from ramen startups or hungry students. We do have a good business in enterprise software to fund the free Bridge Cloud servers.
ZeroRPC is pretty cool but it's a different concept. It uses 0MQ sockets as the transport whereas we use RabbitMQ. This allows us to do zero-configuration service discovery and complex routing.
Bridge allows you to securely expose APIs to "untrusted" clients such as browsers and mobile clients. ZeroRPC does not do this because 0MQ doesn't allow such access control.
sthatipamala is correct. A few more points on the differences between ZeroRPC and Bridge:
* ZeroRPC has been open-sourced by dotCloud with no plans for monetization. It's just a fundamental piece of infrastructure which we think is worth sharing. Licensing fundamental infrastructure software is not our business (also: it's very hard).
* I see very interesting scenarios where ZeroRPC can be integrated with Bridge. There is already work on integrating it with systems very similar to Bridge, for example Stack.io (http://github.com/dotcloud/stack.io).
* Bridge currently uses a central broker; ZeroRPC is broker-less.
* ZeroRPC gives you streaming responses (similar to python generators). Bridge instead provides pub/sub.
* Bridge uses callbacks everywhere, whereas it varies for ZeroRPC based on language implementation. Our node.js version obviously uses callbacks, but the python version doesn't, thanks to gevent.
I can't imagine that WebSockets would be used outside of communicating with the browser -- in fact, a quick glance at the Python client library seems to indicate the use of raw TCP sockets.
> This would allow for some interesting applications such as video streaming, file transfer, image processing and more. Binary data support was one of the key reasons Github chose to roll its own serialization format (BERT) instead of using existing options.
Hi, I'm the first person quoted on the testimonials page. I've gotten a few emails related Bridge and my job, and I'd like to clarify: The email that quote came from was personal, and not at all related to my employment at Facebook (specifically, it is not an endorsement by Facebook). I use Bridge for personal projects, and that's what my email was referencing. I've asked that my quote be made clearer, and I'm sorry if you were confused or misled by the ambiguity.
This is magic as far as I can tell... How can this be so far ahead of everything else out there?! Anyone have any insights on how this works and what the performance/flexibility tradeoffs are for this magic?
It's not that magic; this kind of development is what message queues have been enabling for years (ActiveMQ, ZeroMQ, RabbitMQ, etc). It's a nice way to architect systems. Bridge looks like a nice evolutionary step forward, in terms of elegance & consistency.
Message queues aren't a silver bullet of course, they introduce a lot of potential problems which need to be solved on a case-by-case basis. Such as, what happens if a receiver dies before handling a message? Do senders need to retry sending messages or can they fire-and-forget? Is retrying built in to the queue layer? What happens if a receiver is bogged down with too many messages? Is there a limit on the incoming queue size and what is the limit? Can we start dropping messages if we're overloaded; which messages can we drop? If you make a request that expects a reply, should you timeout and what timeout duration should you use? When a new handler joins the system, should they instantly receive messages that have been waiting for them, or should they dump the existing queue? And so on..
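A sketch of just two of those decisions, using stdlib queues in place of a real broker (all names illustrative): request/reply forces you to pick a timeout and handle its expiry, while fire-and-forget over a bounded queue forces you to decide what happens when the queue fills up.

```python
# Stand-in for a broker: a bounded request queue and a reply queue.
import queue
import threading

requests = queue.Queue(maxsize=100)  # bounded: overload surfaces as queue.Full
replies = queue.Queue()

def worker():
    while True:
        msg = requests.get()
        if msg is None:  # shutdown sentinel
            break
        replies.put(("ok", msg.upper()))

threading.Thread(target=worker, daemon=True).start()

# Fire-and-forget: enqueue and move on. If the queue were full, we'd
# have to choose: block, drop the message, or push back on the producer.
requests.put("ping")

# Request/reply: now we also have to pick a timeout and handle it.
try:
    status, payload = replies.get(timeout=1.0)
except queue.Empty:
    status, payload = ("timeout", None)

print(status, payload)  # ok PING
```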
Thank you for the explanation, this is blowing my mind because I had no idea message queues existed! Given that, this is a great abstraction for RPC on top of an MQ.
Looks like a nice API (in a few languages) on top of some basic AMQP functionality backed by RabbitMQ. We're building something rather similar (for internal use, not a product). Might have to keep my eye on it, though it's hard to tell what it would cost for server licenses from the webpage. The core server is closed source with rate-limited free binary downloads.
We've been talking about this around our shop. No small amount of thought goes into considering whether to use a service or tool. And that can greatly depend on whether it has a web API or needs glue code to fit into our world. Both Facebook and Google use similar message-layer systems to keep the chimera cluster f*cks they have working. The problem will be keeping Bridge lean enough to do what it needs to, and small enough that you know what it does.
We're really focused on lightweight simplicity right now. Keeping Bridge lean is a priority.
In terms of a web API vs. glue code, Bridge requires very little, if any, glue code. Check out our roadmap; we do have a REST endpoint planned. https://www.getbridge.com/about/roadmap
I've been using Bridge to connect things that you wouldn't think would be possible/easy, ranging from connecting Android phones to RC blimps to web pages to remote Python scripts. It was completely seamless! Awesome piece of technology.
Bridge was a lifesaver for the blimp project! Without it we would have had to drop $70 on a bluetooth module and send commands over a serial connection.
Full disclosure: I did some work on Thrift in 2007-2008.
I read the blog post you linked and I don't see any real evidence that their choice of Thrift was "grudging". They did say that there are differing requirements that might lead to others to make a different choice -- but AFAICT there is no indication in the article that they regret their choice.
WRT IDLs: YMMV of course, but the post you linked provides the following reasons why an IDL might be a good idea: "A more formal IDL and native code generation may help provide better long-term client stability, so the initial Thrift overhead for developers might be justified for some services that are concerned about client stability and longevity."
Good observation. They are actually pretty similar, but with different implementations. We're implemented as an Erlang Cowboy server that interfaces with RabbitMQ for routing.
From what I've seen, Mongrel2 is designed as an HTTP/REST -> 0MQ gateway. We also have HTTP/REST endpoints in the works, but we're designed with persistent WebSockets/TCP in mind.
I will be keeping an eye on these types of frameworks because I want to see how far the framework can go in the stack. Specifically I am waiting for frameworks that help me manage my web UI in an easy manner.
First of all, you don't really need the language name in the name of each of the bindings. It's kind of obvious that I'm using Ruby or Python at the moment, isn't it? The Python version looks especially ugly; it should have been much simpler.
Awesome. It's not Bridge, it's a Rosetta Stone. This will end the religious wars between languages :) It will bring them together. Now we can use Haskell for X, Ruby for Y, and Z for T, and distribute them to different servers. Very cool.
It's a shame that it's closed source. I'm trying to install it through the installer, and I'm in "missing shared object file" hell due to things not being named as the binary expects them to be.
I built something similar but simpler not long ago: https://github.com/yankov/nourl
But it only supports Ruby on the server side and JS on the client. You guys could simplify your framework in many ways by following the convention-over-configuration principle.
Looks fantastic - not being a day-to-day programmer this is exactly what I've been looking for. Building my command and control platform will be significantly easier.
Anyone else notice that the pip install is broken, though? It complains that 'README.md' is missing from the tarball.
I'm really interested in seeing the browser JavaScript part. The JavaScript GitHub repo seems to be for Node only. How do you call RPC directly from browser JavaScript?
I'm guessing it does something similar to socket.io, but I haven't tested yet.
I see that even on the client side in JS you need to provide your API key. How is this secure? What stops someone else from just using your API key? If it's explained on the site, forgive me for not finding it.
I've already built a cool hack with bridge! The multi-language support allowed me to use node.js for my IO and python for my logic. They interfaced pretty seamlessly. This is a good tool for hackers.
That clarifies that it's not a good idea, actually. A secure exchange would involve the server issuing a challenge and the client producing a response, without sending the key at all.
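A sketch of that scheme using an HMAC over a server-issued nonce, so the shared key never crosses the wire. Illustrative only; a real deployment should lean on an established protocol (e.g. TLS with token-based auth) rather than a hand-rolled handshake.

```python
# Challenge/response sketch: the server sends a random nonce, the
# client returns HMAC(key, nonce), and only the HMAC travels the wire.
import hashlib
import hmac
import os

SHARED_KEY = b"api-key-known-to-both-sides"  # hypothetical shared secret

def server_issue_challenge():
    return os.urandom(16)  # fresh random nonce per attempt

def client_respond(key, challenge):
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def server_verify(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)  # constant-time compare

challenge = server_issue_challenge()
response = client_respond(SHARED_KEY, challenge)
print(server_verify(SHARED_KEY, challenge, response))  # True

# Replaying a captured response against a *new* challenge fails:
print(server_verify(SHARED_KEY, server_issue_challenge(), response))  # False
```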
Viewing the site from an iPad could use some attention. Both landscape and portrait have some unexpected cutoff and alignment issues that are somewhat distracting.
Why do companies keep running their "marketing" sites on application server platforms? Put it on S3 or your Web host of choice, as a good old fashioned static site, and your uptime will go through the roof. Especially for a company that wants to be other people's infrastructure provider...
Not sure about this, but I think it's what you mean. I googled "node broke", and didn't come up with meaningful results, so I assume it's "Your". Correct me if I'm wrong!
If you are interested in some of the existing discussion, check these out (they're PDFs, sorry):
* [1988] A Critique of the Remote Procedure Call Paradigm. http://www.cs.vu.nl/~ast/publications/euteco-1988.pdf
* [1994] A Note on Distributed Computing. http://research.sun.com/techrep/1994/abstract-29.html
* [2005] RPC Under Fire. Steve Vinoski. http://www.iona.com/hyplan/vinoski/pdfs/IEEE-RPC_Under_Fire....