It's Ruby+Celluloid doing subscriptions on RethinkDB, and it uses Meteor's DDP protocol to talk to the React client side.
Was pretty easy to build. All the modern technologies fit really well together.
Unfortunately, in the Ruby world, live web services like this haven't really taken off much yet, so I had to write some low-ish-level things myself: the DDP server implementation (which I think is the only non-Meteor DDP server implementation out there) and the celluloid-websocket implementation, which was easy because there's a great Ruby WebSocket library by the Faye people.
I'm not sure where your opinion comes from or which concurrency primitives you're missing in Ruby, but Node.js simply has no concurrency primitives at all, and it's doing just fine. The I/O framework I use in that RethinkDB example is Celluloid, which implements actors, a concurrency model widely regarded as superior to the callback style.
That they never will has more to do with the general attitude of Rails programmers and their commitment to the REST request/response model. Libraries that offer more flexible interactions are simply not very visible yet.
Slightly related but not relevant now, Ruby 3.0 is expected to have some concurrency construct fundamentally integrated into the language.
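For readers unfamiliar with the actor model mentioned above, here's a toy version of the core idea, sketched in JavaScript for easy contrast with the callback style (Celluloid's real implementation is thread-based Ruby; this only shows the message-serialization idea, and all names are made up):

```javascript
// Toy actor: messages go through a mailbox and are processed one at a
// time, so the actor's internal state is never touched concurrently.
class Actor {
  constructor() {
    this.mailbox = Promise.resolve(); // queue of pending messages
  }
  send(message) {
    // Chain each message behind the previous one.
    this.mailbox = this.mailbox.then(() => message());
    return this.mailbox;
  }
}

// An actor with state: callers never mutate `n` directly; they send
// messages, and the mailbox guarantees the increments don't interleave.
class Counter extends Actor {
  constructor() {
    super();
    this.n = 0;
  }
  increment() {
    return this.send(() => { this.n += 1; });
  }
}

const counter = new Counter();
counter.increment();
counter.increment();
counter.send(() => console.log(counter.n)); // prints 2 once both messages ran
```

The appeal over raw callbacks is that concurrency is confined to the mailbox: code inside the actor reads as straight-line, single-threaded logic.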
Never did any Ruby, nor Node.js, so take this with a grain of salt.
What I understood the OP to mean, and what I experienced myself, is that a language needs to actively promote (OS-level) async I/O for it to be usable, not just support it. If sync I/O is the default, too many libraries will use it (it's easy and works fine, until it doesn't).
Because Node.js has no threading, async I/O is the only choice. Result: 100% of the libraries are async I/O.
Because Go has M-N threading, all I/O is inherently async for the OS, even if it's sync for the semantic threads. Result: 100% of the libraries are async I/O.
Scala and Java have Futures and all the other goodness you would theoretically need to make async HTTP requests. But, as an example, the official AWS access library (a wrapper around the AWS REST API) is synchronous. Result: communicate with AWS in your backend during a request? Block an OS thread, or write your own AWS library.[1]
Even having async I/O as an option at all splits your language in two. Most libraries will fall in the sync category, and their mere existence lowers the incentive to write an async one. E.g.: if there were no AWS library for Java / Scala, I would be much more keen to write an async one myself. But now? Ah well, // TODO: async, and continue.
This is the beauty of Go and Node.js on the server.
[1] Or choose an unofficial and, I'm rudely assuming, less complete / audited / reliable library.
> All async I/O functions have a corresponding sync version.
They do, but they're widely frowned upon in most applications.
They're completely frowned upon in server-related applications.
I've only really seen them used in cheap build scripts, and now with async/await (via Babel's regenerator transformation), I don't even see that much anymore.
If you provide synchronous, blocking interfaces and no concurrency primitives or expectation of non-blocking functions, IO, etc., people will use them.
In languages where concurrency is a built-in, understood pattern (Go, Erlang/Elixir, Haskell*, to name a few) people don't need to care.
Ruby has the worst of many worlds (sync I/O by default, an incredibly expensive memory model, incredibly expensive lambdas, and first-class functions that aren't normally used beyond the "do block": nobody is passing them around, chaining, etc.). I just don't see the reason to keep trying to squeeze blood from this turnip.
Not to make this discussion personal or anything, but I'm curious: have you ever worked in Ruby? You say things like "why bother" and "trying to squeeze blood from this turnip" as if you don't understand why someone would love Ruby.
Anyway, I do agree with your argument. Unfortunately, Ruby is from the generation of programming languages in which I/O was just seen as an API, so the sane thing was to map directly to the system API.
JavaScript is from that generation as well but got lucky because its requirement to run in the browser forced it to have I/O fully abstracted.
You're right, Node.js has no concurrency primitives, but it has true first-class functions.
Ruby lambdas are an afterthought and far too expensive to use in the same expansive capacity as you would in JS (or literally any functional programming language).
Couple that with the fact that the community just doesn't care, and again, I ask why bother?
The actor model isn't superior to callbacks in any technical capacity, and, to be clear, callbacks are only a (perceived) problem when you don't have something like language-level async/await to transform them for you.
edit: Add in the (MRI) GIL, and you've got yourself a fairly serious impediment to writing efficient, concurrent code.
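To make that transformation concrete, here's the same two-step flow written both ways (a sketch with hypothetical helper names; the callback helpers follow Node's error-first convention):

```javascript
// Callback style: each step nests inside the previous one's callback.
function fetchUserThenPosts(getUser, getPosts, cb) {
  getUser((err, user) => {
    if (err) return cb(err);
    getPosts(user.id, (err, posts) => {
      if (err) return cb(err);
      cb(null, { user, posts });
    });
  });
}

// The same flow with async/await: the language flattens the nesting,
// and errors propagate via exceptions instead of `if (err)` checks.
async function fetchUserThenPostsAwait(getUserP, getPostsP) {
  const user = await getUserP();
  const posts = await getPostsP(user.id);
  return { user, posts };
}
```

Nothing about the underlying I/O changes; async/await is syntax over the same non-blocking machinery, which is why the "callback hell" objection mostly evaporated once it landed.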
The choice of I/O model has nothing to do with Ruby (see EventMachine for a commonly used evented non-blocking I/O setup). If you want asynchronous I/O with an event-driven model, there are good options for most languages.
With Go, Node.js, Erlang/Elixir, you don't enter the stage assuming that every operation you do is actually going to block others.
Concurrency is not a first-class concept in Ruby (nor Python).
Yes, EventMachine exists. OK, but it's neither the standard nor the norm, and I would wager most Ruby programmers have heard of it but haven't actually used it.
Even if you're using EventMachine or Twisted, those technologies are infectious (by necessity; not complaining here): everything you build or use needs to utilize those primitives to work successfully.
In Node.js, the assumption is that nothing will block a calling thread (in favor of asynchronous callbacks).
In Go, the assumption is that goroutines (via a semi-preemptive scheduler) will not block their caller.
In Erlang, the assumption is that processes will not block one another (with a fully preemptive scheduler).
As a fan of Node.js, I will state that not having a widely accepted, standard interface for building concurrent programs (vs. the soup of callbacks/Promises, the associated async/await implementations, and coroutines via generator functions) has been a severe detriment to the community.
I see that your TODO list has "Optimistic UI updates". I worked in this direction and figured out that there are multiple things blocking such an architecture in RethinkDB:
Hi Slava! Your project was immensely helpful for me to figure out what's possible. Optimistic updating is definitely the biggest and most challenging feature on my roadmap. I figured I'd save it until last so that by the time I start working on it some of those kinks would be figured out :) but I'm going to take a close look at the blocking issues.
Even when those RethinkDB issues get resolved, I still have some conceptual questions about how to make arbitrary queries update optimistically. For example, say that I'm subscribed to:
On the client side, because I used the pluck('name') operation in the subscription query, we can't possibly know which rows to optimistically delete, since we never downloaded the 'votes' field.
Do you know how this issue is addressed in Meteor? I'm currently thinking I'll have to limit optimistic updating to queries that follow a simple structure, but I'd love a more general solution.
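The difficulty can be shown without RethinkDB at all, using a toy client-side cache in plain JavaScript (document shapes and field names are invented for illustration):

```javascript
// What the client received from a subscription that plucked only 'name':
const cache = [
  { name: 'poll-a' },
  { name: 'poll-b' },
];

// Suppose the optimistic mutation is "delete all polls with votes < 5".
// Server-side that predicate is decidable; client-side it is not,
// because the 'votes' field was never downloaded.
function canEvaluateLocally(doc, requiredFields) {
  return requiredFields.every(field => field in doc);
}

const decidable = cache.every(doc => canEvaluateLocally(doc, ['votes']));
console.log(decidable); // false: the predicate cannot be applied locally
```

Any general solution seems to need either the full documents on the client or a server round-trip to learn which rows the mutation touched, which is roughly why restricting optimistic updates to simple query shapes is tempting.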
I am excited about RethinkDB, and I use React (via cljs + reagent [1]) every day. The FRP approach of reagent combined with subscribing to RethinkDB for updates is very appealing.
I would still rather connect to RethinkDB with a frontend service, push those updates to stores, and have the component watch the store.
Because while this might work for a small use case, real-world usage will quickly outgrow it, and you will wish you hadn't stuffed such functionality into a display component.
So this is probably most useful for quick sketches and debugging stuff.
Could you elaborate on this? This project seems to be moving in the same direction the React team is with Relay and GraphQL – the component describes the query it requires to render itself, and a query/caching layer executes the query automatically.
What I described as a service is essentially this query/caching layer. I would just rather not have it embedded in the component, where it makes it harder to follow what a component does. The beauty of components is that they are simple to read and understand. The more mixins you use, the more esoteric it gets, and you have to read a bunch of code to understand why the render function magically has these properties to display.
Another point is that the service or store could be reused in more than just React, and it can be tested without having to consider the UI.
I think the issue is that the client code is running actual database queries on the server, and I don't see any restrictions on what queries can be executed.
So if you log in and authenticate (through that file), it seems like you can just open the javascript console in chrome and run any type of db query you want.
I've been using React for a year and a half, and it's probably the first thing in the JavaScript ecosystem for a long time (maybe since jQuery) that hasn't made me regret building a project using it.
It's not without its issues, but it's really changed the way I build things, which after 18 years is kind of impressive.
Show me a tool out there that already does this in React. Besides Relay, which isn't released, I don't know of one. Defining data requirements in your component is a new concept to the entire JavaScript and web-development world. I view this more as innovative than as jumping on a bandwagon.
I'm not sure that grabbing yourself a slice of the hype-pie is necessarily a bad thing. Learning is good for the soul, right?
I do get your point, if you had one. It does feel like there's a must-have new shiny thing to prod every other day. However, React seems to be even shinier than others as of late -- I know this because I even read the documentation and played with the tutorial! I do admit that I can't really come to any solid conclusions about how shiny React realistically is because I'm not much of a sample size.
I do find the React docs to be excellent though which means I'm more likely to keep playing with it. Nothing turns me off faster than awful documentation.
Before I jump to conclusions, can I ask what your experience level with React is like?
I think a lot of people make snap responses to the subject of new developments in the JS world because they see a lot of change happening, do not trust any of it, but also haven't bothered to further their education on the subject.
I'm curious to hear comments from those who are very knowledgeable about React but also view it as a bandwagon.
> This is similar to solutions like Meteor, Parse, and Firebase. Rather than writing database queries in the backend and exposing API endpoints to the frontend, these solutions allow the frontend to directly access the data layer (secured by a permission system) using the same query API that backend services have access to.
I think it explicitly says that it uses the same idea as Meteor's:
> Rather than writing database queries in the backend and exposing API endpoints to the frontend, these solutions allow the frontend to directly access the data layer (secured by a permission system) using the same query API that backend services have access to.
I don't think so, because in Meteor.js you rely on iron-router and need to painstakingly define and manage who can do what, and subscribe to new documents coming over the wire.
The concept of subscriptions is completely separate from the router you use. At some point you would need to define who can subscribe to which documents in any system with private data. That's called an ACL.