Virtual DOM in Elm (elm-lang.org)
285 points by chunkstuntman on July 21, 2014 | 112 comments



Having done some benchmarks with TodoMVC before, I knew something was off about these results.

They play to the strengths of the virtual DOM approach by driving the benchmark through the DOM instead of whatever interface was implemented within each TodoMVC implementation.

So I forked the benchmark and changed both the Backbone and Mercury implementations to work through their respective APIs.

Here it is: https://github.com/smelnikov/todomvc-perf-comparison

As you can see, the giant gap between Backbone and Mercury is now gone, while both tests perform the exact same tasks. (Feel free to step through it in the debugger to see for yourself.)

Here's my commit log: https://github.com/smelnikov/todomvc-perf-comparison/commit/...

Note: I've added a new method to the Backbone implementation for toggling completed state, as the old one was horribly inefficient. This is not something inherent to Backbone but rather is specific to this TodoMVC implementation. See my comments in the commit log.

Note 2: Exoskeleton, which is basically Backbone without the jQuery dependency, is roughly 2-3x faster than vanilla Backbone; I'm going to predict that it will actually be significantly faster than Mercury.

Note 3: I think the virtual DOM is great and seemingly has many benefits, but I feel the speed benefit has been greatly exaggerated.


The problem with these small benchmarks is that it's pretty easy to manually write the optimal sequence of DOM commands to get the best performance. But when you scale your front-end to millions of lines of code with many full-time engineers who may not know front-end very well, it becomes extremely hard to do properly.

React was originally designed for developer efficiency, not performance. It is a port of XHP (in PHP), which we use at Facebook to build the entire front-end and are really happy with. It turns out that the virtual DOM and diff algorithms have good properties in terms of performance at scale. If you have ideas on how we can communicate this better, please let me know :)


In all JavaScript apps, the part that is slow is the DOM, not the JavaScript interface.

This benchmark was taken from the WebKit source code, then forked into http://vuejs.org/perf/, then forked to include Mercury, and forked again to include Elm.

Neither Elm nor Mercury came up with this benchmark; they just added themselves to it.

What this benchmark shows is that async rendering is really fast. Mercury, Vue, and Elm all use async rendering, where DOM effects are batched and applied only once.

A better, closer-to-real-world benchmark would be one using mousemove, since that's an event that can fire multiple times per frame.
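
(To illustrate the batching idea, here is a minimal sketch - not taken from any of these libraries; the element id and state shape are made up.)

    // Minimal sketch of async/batched rendering: state changes are queued
    // and the resulting DOM writes are applied at most once per frame.
    var pendingFrame = false;
    var latestState = { count: 0 };

    function setState(newState) {
      latestState = newState;
      if (!pendingFrame) {
        pendingFrame = true;
        requestAnimationFrame(function () {
          pendingFrame = false;
          render(latestState); // one DOM update, no matter how many setState calls happened
        });
      }
    }

    function render(state) {
      // a virtual DOM library would diff and patch here; this just stands in
      // for "apply all DOM writes in one go" (assumes an element with id "count")
      document.getElementById('count').textContent = String(state.count);
    }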


There is interaction occurring with the DOM in both benchmarks.

The way the Backbone TodoView is designed does not take into account the possibility of a user adding 100 items through the DOM in a tight loop, probably because such a use case is impossible outside of this type of benchmark. As a result, the Backbone implementation ends up performing a lot of unnecessary renders. So as far as Backbone performance is concerned, this benchmark is not indicative of any real-world scenario.

Just to reiterate: when you're loading a set of todos from local storage to display when the user first opens the page, you would not populate the "new todo" input box and fake an Enter event for each item you want to add. Instead you would reset the Backbone.Collection with the list of new todos (i.e. go through the interface). That's basically the change I made to the benchmark; sorry if it wasn't clear.
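
(A rough sketch of the difference; the selector, key code, and app.todos collection name are illustrative rather than the exact benchmark code - see the linked commits for the real change.)

    // Driving the app through the DOM: one fake keyboard event per item,
    // which makes the view re-render on every iteration.
    for (var i = 0; i < 100; i++) {
      $('#new-todo')
        .val('todo ' + i)
        .trigger($.Event('keypress', { which: 13 })); // fake an Enter key press
    }

    // Going through the API instead: build the list, then reset the collection,
    // which fires a single 'reset' event rather than 100 'add' events.
    var todos = [];
    for (var j = 0; j < 100; j++) {
      todos.push({ title: 'todo ' + j, completed: false });
    }
    app.todos.reset(todos);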


Running your perf test, I consistently get Backbone being the fastest, Angular the slowest, and the projects using the virtual DOM approach somewhere in the middle. Is that expected? http://evancz.github.io/todomvc-perf-comparison/

edit: I was running the original test instead of your fork.


There's another advantage to the virtual DOM: you can hot-swap code while developing and have React run its diff algorithm without reloading the page. If your component has no (or few) side effects, it means you get live reload as you edit, for free. This is impossible with a Backbone/jQuery soup of a view.

See my proof of concept video: https://vimeo.com/100010922

And actually runnable example that you can edit without refresh: https://github.com/gaearon/react-hot-loader

I plan to write a blog post soon explaining how to integrate it into a React project.


Yeah, the Om beginner tutorial demos this in Light Table with ClojureScript; it's pretty epic.

edit: https://github.com/swannodette/om/wiki/Basic-Tutorial


Where can I take a look? In my case, this is pure JS, no Light Table or browser plugins needed. No messing with V8.


Om hot reload doesn't require Light Table or browser plugins, just eval support from your editor setup. There's no way to see this easily beyond going through the Om tutorial. But as you say this is just a benefit that more or less falls out of React if you're careful w/ state - Devcards is another ClojureScript example of the possibilities - http://rigsomelight.com/2014/06/03/devcards-taking-interacti...


What editor are you using in that gif? Or is that just 10.10 that makes it look "cleaner"?


10.10, Sublime with Spacegray Light by Gadzhi Kharkharov


Off-topic, but are you using the 10.10 DP as your daily OS? If so, would you mind sharing a quick note of your experience? Any show-stopping issues? Better/worse?


So far I haven't noticed many setbacks with 10.10 (and I'm enjoying the new UI so far). Performance is good, though every now and again software will refuse to install itself because it doesn't recognize my OS version. I can't remember the last time it crashed on me. Sublime is a little buggy: it'll go totally black for most of the screen and I have to resize it to force a repaint, but that usually only happens when I have my desktop monitor plugged in (which is rare).

Overall, it's been a really enjoyable experience - a stark contrast to the horribly buggy mess that is iOS 8.


I am. It's less buggy than iOS 8 betas but it's somewhat rough if you're not used to living with beta software. Dock freezes twice a day, Safari's rendering is faster but app itself is quite slow.

Personally I have high tolerance for this, but I know some folks would get frustrated quickly.


This technique may be the most revolutionary thing in web development in the last several years, IMHO. I've been using React for a while, and I'm starting to integrate Mori for persistent data structures, and the things I can do with it are insane. The fact that it's not only far more performant, but also a way better abstraction for dealing with UIs, is crazy.


If this technique is much more performant (batching diffs of the DOM), why don't browsers perform this "back-buffering" natively?


They do perform this "back-buffering" natively. When you manipulate the DOM in a modern (post-2009 or so) browser, it's just changing a pointer and flipping a dirty bit.

The problem is that it's very easy to force a full recalculate of the whole page layout. Whenever you call .offsetHeight or .offsetWidth or .getComputedStyle, you're doing it. The full list of properties is about 2 dozen strong:

http://gent.ilcore.com/2011/03/how-not-to-trigger-layout-in-...

Most web developers don't know this, and so they're actively making their pages slow. Worse, many popular frameworks build this into the library, so if you use them there is no way to keep your pages responsive. jQuery, for example, can easily cause 4-5 layouts with a single call to .css; on a mobile phone and a moderately complex page, that's about a second of CPU time.
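
(A small illustration of the forced-layout problem described above; the .box class is made up.)

    var boxes = document.querySelectorAll('.box');

    // Interleaving writes and reads forces a synchronous layout on every read:
    for (var i = 0; i < boxes.length; i++) {
      boxes[i].style.width = '100px';        // write: invalidates layout
      var h = boxes[i].offsetHeight;         // read: forces a full layout recalculation
      boxes[i].style.height = (h + 10) + 'px';
    }

    // Batching all reads before all writes avoids the repeated recalculation:
    var heights = [];
    for (var j = 0; j < boxes.length; j++) {
      heights[j] = boxes[j].offsetHeight;    // reads first: at most one layout
    }
    for (var k = 0; k < boxes.length; k++) {
      boxes[k].style.width = '100px';        // then writes, which only dirty the layout
      boxes[k].style.height = (heights[k] + 10) + 'px';
    }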


This got me wondering how React handles this requirement. Can you use React if you need to know offsetWidth/Height to do complex layout?


Once the component is mounted (there's a componentDidMount callback), you have access to the real DOM node and can read those properties (or store them on the component if necessary). The DOM node is also available from event callbacks and such.
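
(A minimal sketch using the React API of that era - getDOMNode and the React.DOM helpers; the component name is made up.)

    var MeasuredBox = React.createClass({
      getInitialState: function () {
        return { width: null };
      },
      componentDidMount: function () {
        // The real DOM node exists now, so layout reads are safe here.
        var node = this.getDOMNode();
        this.setState({ width: node.offsetWidth });
      },
      render: function () {
        var label = this.state.width === null
          ? 'measuring...'
          : 'I am ' + this.state.width + 'px wide';
        return React.DOM.div(null, label);
      }
    });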


Not everything needs to be built-in. There are significant trade-offs for baking things into the browser, and it's actually far better to keep things as libraries. You don't have to worry as much about backwards compatibility, it's far easier to roll out updates, etc.

Also, the fact is that the web is stuck in a Web Components-driven approach to building apps which is pretty orthogonal to how this works.


They try their best, but often naive code will make changes and then read values in an order that requires recalculation to produce the right value.


What are you using Mori for that react.addons.update doesn't do?


react.addons.update is a poor man's implementation of something like mori. I'm not going to go into details right now (I'm also still early in my research), but mori is way more efficient and provides a better API for working with this kind of stuff (`sort` returns a new obj, etc.).


Can we improve react.addons.update then? Mori is not syntax-compatible with JavaScript data structures and needs to be marshalled to interop with JavaScript components. If you include the marshalling overhead, mori is in the same ballpark as react.addons.update.

http://jsperf.com/sprout-vs-mori/3


The reason mori scales better is that it implements arrays via a tree of 32-slot arrays. So unfortunately, you cannot use normal JavaScript data structures, and you have to re-implement all the array methods so they know how to deal with this data structure.

React.addons.update uses normal JavaScript arrays. So it won't scale as well, but at least you get immutability.
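
(A rough sketch of how the two APIs differ, assuming a React build with addons and the mori library are both loaded.)

    // React.addons.update: plain JS arrays/objects, shallow-copied on update.
    var update = React.addons.update;
    var list1 = [1, 2, 3];
    var list2 = update(list1, { 1: { $set: 42 } }); // [1, 42, 3]; list1 is untouched

    // mori: persistent vectors backed by a tree of small arrays, so "copies"
    // share most of their structure instead of duplicating it.
    var v1 = mori.vector(1, 2, 3);
    var v2 = mori.assoc(v1, 1, 42); // structural sharing, not a full copy
    mori.get(v2, 1);                // 42
    mori.get(v1, 1);                // 2 - the original is unchanged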


If your React state contains large lists, I think React's render() is going to be the bottleneck, not JavaScript arrays. That's why React state needs to be structured as a tree, so you can implement shouldComponentUpdate and get O(log n) renders. I wrote more in reply to jlongster.


That's the thing I love about React though; you don't have to marshal if you embrace mori wholesale. I can render components based on these data structures and never have to "reify" them into real JS data structures.

That said, I'm probably being overly optimistic and I'm just starting to research it. I don't quite like how addons.update feels like a bandaid, but maybe it is good enough. Haven't done enough research yet. I definitely don't like writing updates the way addons.update forces you to, but sweet.js macros could solve that (and I was going to write macros for mori anyway).


I've done some research too - check out this implementation of cursors, it enables drop-in O(1) shouldComponentUpdate (like Om) and is compatible with rAF batching (like Om). We explored Mori for a bit too and ended up deciding Mori wasn't worth it. (Our app was big enough that we experienced pretty brutal performance without shouldComponentUpdate. Now our bottleneck is methods like Array.prototype.map over large lists in render, which Mori can't help with - we have to restructure our state away from lists and into trees to better take advantage of shouldComponentUpdate)

https://github.com/dustingetz/react-cursor/blob/master/examp... https://github.com/dustingetz/react-cursor/blob/master/js/Cu...

(It is backed by react.addons.update, it provides a mechanism like mori.assoc_in for immutable subtree updates, and it preserves reference equality for equivalent cursors (value/onChange tuples))

(Cursors are also not vulnerable to issue#122 https://github.com/facebook/react/issues/122)


Very cool! I had researched this technique before, when I found cortex: https://github.com/mquan/cortex. Cortex does something very similar, except it doesn't actually do persistent data structures. Building cursors off of the addons.update stuff is a neat idea. (why `onChange` instead of `set` and `pendingValue` instead of just `value`?)

I agree that you can take this really far. At this point I need to sit back and think about it. :) I think mori could provide better performance for certain types of apps, but it does come at a cost in interop. Time to hit the hammock.


Hm, Cortex looks almost like my Avers (https://github.com/wereHamster/avers). With the major difference that Avers uses Object.observe instead of explicit setters/getters, and Avers mutates data in-place. I also use it with react, and haven't noticed any performance issues. But I have an idea how to make it work with immutable data (to allow === comparison in shouldComponentUpdate).


We chose cursor.value and cursor.onChange to line up directly with React's value/onChange convention. But we are considering a nomenclature change, and we will probably expose all of the react.addons.update operations - set, merge, push, etc. Knowing when to choose value vs pendingValue() is essential complexity, but cursors make it a mechanical decision: in a lifecycle method? use `value`; in an event handler? use `pendingValue()`.

Email me if you want to talk about it.


This is really great work.

Why do all of this instead of just using Om? I'm currently waffling between Om and straight react.js for a project of mine.


Project requires designers committing to your codebase -> use React with JSX

Able to use CLJS and don't care about designer-friendliness -> what are you waiting for!!!


I know you've probably answered this question a million times... but could you briefly explain your enthusiasm for clojurescript and/or om? I've been using the react js library and have been wondering what makes clojurescript a step above javascript and om a step above react.

Thank you!


cljs/om gives you functional programming as a default, with javascript/react you have to work for it



Do you have any code samples online of this combination? I'd be interested to see how these are used together, because they seem like a great fit.


Not yet. I've been dying to dig into this for months but am only now getting some time to do so. I'm rewriting my blog with lots of cool technologies like this, and I'm going to open-source it and write it up. http://jlongster.com/


Just wanted to add my support for this - I am using React+Fluxxor+ampersand-collections, and I find myself simply rebuilding the collections on every change (and disabling their add/remove/set/reset functions) to keep them immutable and speed up `shouldComponentUpdate`, but I would rather be using mori.

When you use something like mori, but you need to define data transforms, such as float precision, derived attributes, and so on, how do you make that work well in your apps without too much complexity in your components? One thing I really enjoy about ampersand-state and ampersand-collection (forks of Backbone) is that I can define very simple, standard functions per model so that my views don't have to have any idea what was passed to them. How would that be possible in Mori? Do you use something like a `__type` attribute so an external utility can suss out what the object is and transform it correctly?


I'm not familiar with ampersand-collection, so I don't exactly understand your questions, but I think you're also farther along than I am researching this. I can't give a good answer yet as I'm just starting to play around with integrating mori.

I don't know what you mean by "derived attributes", or why it would be difficult to do data transformations on the models that deal with mori data structures. It'd be great to discuss this somewhere, maybe on #reactjs IRC? I'm jlongster there.


Cool, I'll keep an eye out. Any reason to use React+Mori over Om other than more familiarity with JavaScript?


There are a huge number of JavaScript developers who can't jump to ClojureScript. I love CLJS, but I have a heart for all the JS projects out there that can't take advantage of these techniques. It's really important that we borrow from good research and make it available to JS so that existing projects can begin integrating these ideas, because most of them will not jump to a completely different language.


elm-lang.org has some really great demos. http://elm-lang.org/Examples.elm

I haven't dug deep enough to form an educated opinion on the language, but so far it's refreshing to see such a drastically different approach from today's popular languages.


My first thought when looking at the benchmarks is that I find it strange that Backbone is faster than React. Not that I imagine Backbone to be slow, particularly, just that this article is about one of React's key features - the virtual DOM - and that's something which Backbone doesn't have. I'd expect to see React up there with Om, Mercury & Elm.

I've just had a look at the code and React is using a localstorage backend, while Backbone is using its models + collections with a localstorage extension... so I'd expect there to at least be some overhead there, but apparently not.

Does anyone have any quick thoughts on what might be happening here? I can't shake the feeling that these benchmarks might not be terribly useful.


> I'd expect to see React up there with Om, Mercury & Elm.

Not by default, because it's missing laziness and immutability: since just about everything in JavaScript is mutable, React can't prune the changeset, as developers could be modifying the application state behind its back in unknown ways (or worse, could be using state which is not under React's control at all - neither props nor state, but globals and stuff).

That is, for safety/correctness reasons the default `shouldComponentUpdate` is `function () { return true; }`. The result is React has to re-render the whole virtual DOM tree (by default) and the only possible gain is when diffing the real and virtual trees.

Because Clojurescript and Elm are based on immutable structures they can take the opposite default (and if you're going behind their back and mutating stuff it's your problem).

Also, I'm not sure React defers rendering until animationFrame.

An optimized React application (or at least one which uses PureRenderMixin[0]) ought to have performance closer to the others' (Om is a bunch of abstractions over React after all, so you should be able to get at least as good performance in pure React as you get in Om).

[0] http://facebook.github.io/react/docs/pure-render-mixin.html


Without looking at the benchmark code:

* Development React is slower than production React. There are a bunch of extra checks all over the place along with a profiler [1].

[1] http://facebook.github.io/react/docs/perf.html

* Speed isn't the top priority of the framework, predictability is. There's a virtual event infrastructure and other browser normalization work going on. Om is using React under the hood and more or less represents the best case scenario.

* React isn't magically fast. The diff process still has to visit all the nodes in the virtual DOM, generate the edit list, and apply it, which is a substantial amount of overhead. I'm used to seeing React even or behind when benchmarked with a small number of nodes. The explanation I've seen is that these benchmarks aren't considered important since the goal isn't to be as fast as possible but rather to never be slower than 16ms.

The trick behind most of the "React is fast" articles is that React is O(N_dom) instead of O(N_model) so if you can shrink the size of the output DOM, React goes faster. The Om line demonstrates this and doing screen-sized render over a sliding window of data in a huge data set (most grid/scrolling list demos) is another common example. There are perf knobs that probably aren't being turned here but if the app renders fast enough why would you waste your time turning them?


I've seen some AngularJS vs React benchmarks a while back. I believe it was http://jsperf.com/angular-vs-react/5

I consistently got the result of Angular utterly and completely destroying React. Initially I blamed the virtual DOM approach, but after seeing other frameworks utilize it and outperform Angular by a huge margin, it seems to me that React is just not written for performing well on small DOM documents. (There might be a turning point, considering how bloated the Facebook pages it was designed for are.)


Our performance benchmarks suggest that application performance is most certainly better off with: (a) DOM reuse, (b) calculating expensive things only once, (c) reducing GC pressure == not discarding/recreating things, and (d) coordinating actions that may trigger reflow. This is independent of what framework you are using.

My limited understanding of React is that it fails at (a), (b), and (c), and only limited measures can be applied to improve them. Re-creating the entire DOM on each update probably does not help. I have no information on whether (d) is possible with it.

I am using Angular.dart for a while now, and it can be used to get all of them in an optimal way.

Disclaimer: I'm working at Google.


(a) You can use the key attribute in order to get DOM reuse. If you are looping over N keys then React is going to reuse the nodes and move them around.

(b) You can implement shouldComponentUpdate in order to have a quick way not to re-render a sub-tree if nothing changed.

(c) See (b) but we're also working on changing the internal representation of the virtual DOM to plain js objects that can be reused[1]. We were super worried about GC but it turns out that it hasn't been the bottleneck yet for our use cases.

(d) If you are writing pure React, all the actions are batched, and it's actually write-only; React almost never reads from the DOM. If you really need to read, you can do it in componentWillUpdate and write in componentDidUpdate. This will coordinate all the reads and writes properly.

A really important part of React is that by default this is reasonably fast, but most importantly, when you have bottlenecks, you can improve performance without having to do drastic architecture changes.

(1) You can implement shouldComponentUpdate at specific points and get a huge speedup. We've released a perf tool that tells you where the most impactful places to do so are[2]. If you are bold, you can go the route of using immutable data structures all over the place, like Om/the Elm example, and you're not going to have to worry about it.

(2) At any point in time, you can skip React and go back to raw DOM operations for performance critical components. This is what Atom is doing and the rest of their UI is pure React.

[1] https://github.com/reactjs/react-future/blob/master/01%20-%2... [2] http://facebook.github.io/react/docs/perf.html#perf.printwas...
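
(A small sketch of point (a), using the React.DOM helpers of the time; the component and its props are made up.)

    var TodoList = React.createClass({
      render: function () {
        // With a stable key per item, reordering the array moves the existing
        // <li> nodes around instead of destroying and recreating them.
        var items = this.props.todos.map(function (todo) {
          return React.DOM.li({ key: todo.id }, todo.title);
        });
        return React.DOM.ul(null, items);
      }
    });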


I am currently writing an implementation of React in Scala.js. It's inspired by React and by the documentation of React, but I have not looked at the actual source code so far.

You seem to be an implementor, so here are two questions that may spare me from looking at the source code :-)

1. How do you batch updates?

2. I am currently using an algorithm for longest increasing subsequences to avoid superfluous DOM insertions of children when diffing lists. I also make sure that the node containing the active element will not be removed from the tree during the diffing (if possible at all). Are you doing the same?


1. The boundaries are currently at the event loop. Whenever an event comes in, we dispatch it to React, and every time the user calls setState on a component, we mark it as dirty. At the end of that dispatch, we go from top to bottom and re-render elements.

It's possible to change the batching boundaries via "Batching Strategies" but we haven't exposed/documented it properly yet. If you are interested, you can look at requestAnimationFrame batching strategy. https://github.com/petehunt/react-raf-batching

2. We cannot really use normal diff algorithms for lists because elements are stateful and there is no good way for React to properly guess identity between old and new. We're pushing this to the developer via the `key` attribute.

See this article I wrote for a high level overview of how the diff algorithm is working: http://calendar.perfplanet.com/2013/diff/
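
(A very rough sketch of the event-loop batching described in point 1 - not React's actual code; the component objects with depth/isDirty/update are placeholders.)

    // Collect components whose setState was called during one event dispatch,
    // then flush once at the end, parents before children.
    var dirtyComponents = [];

    function enqueueUpdate(component) {
      component.isDirty = true;
      dirtyComponents.push(component);
    }

    function dispatch(handler, event) {
      handler(event);  // handlers may call setState -> enqueueUpdate many times
      flushUpdates();  // the tree is re-rendered once, here
    }

    function flushUpdates() {
      dirtyComponents.sort(function (a, b) { return a.depth - b.depth; }); // top-down
      dirtyComponents.forEach(function (c) {
        if (c.isDirty) c.update(); // updating a parent clears its children's dirty bits
      });
      dirtyComponents = [];
    }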


Thanks a lot, very useful info. Yes, I know that because of state, "diff" is really not diffing but synchronization of "blueprints" with actual "components". Still, after the update the order of the existing children of a node might have changed, and it is possible to devise a simple, not too costly (n log n, where n is the number of children), and optimal strategy for rearranging the nodes.


If you think you can make it better in React, pull requests are more than welcome. For example, we didn't have batching when we open-sourced React, and it was written by the community :)


1. We buffer calls to setState() and apply them all at once (they don't trigger re-renders) and mark those components as dirty. Then we sort by the depth in the hierarchy and reconcile them. Reconciling removes the dirty bit, so if we come across a node not marked as dirty we don't reconcile (since it was reconciled by one of its parents).

2. I don't think we spend a lot of time trying to make this super optimal, but git grep ReactMultiChild to see what we do.


Thanks a lot! I grepped it but I cannot really figure out the strategy from the source code. Probably you are doing something similar to what I am doing.


While each use case is different, I'd like to clarify a few things in my OP.

DOM reuse is not the same thing as moving a DOM subtree to a different place. DOM reuse is, e.g., taking an already-rendered table row, binding a new value to it, and modifying only the DOM properties in the complex DOM structure that actually did change. E.g. you modify only an Element.text deep in the first column, and a few other values in the other columns. Or maybe you need to do more, but all you do is the delta. You don't just annotate a DOM structure with a key at row level, as that is closer to a hash of the DOM, to say nothing of the data-dependent event handlers.

Calculating the DOM (virtual or not) is expensive compared to not calculating it at all. Creating a virtual DOM structure and not using it afterward creates GC pressure, compared to not creating it at all. We are talking about optimizations in the millisecond range. A large table with complex components inside will reveal the impact of these small things.

DOM coordination is not just making the DOM writes in one go. Complex components like to interact with each other, depending on their position and size on the page and on changes in the underlying structure. They read calculated style values and act upon them, sometimes causing reflows. And if such things happen at scale, forced reflows may cripple performance, and coordinating such changes may be more crucial than the framework you choose.

I am sure that people who are familiar with React have their ways of getting this stuff. I have looked at it, and I haven't seen it happen automatically, while with Angular.dart I get it without effort.


You get all of this for free with React.

DOM node reuse is perhaps the central theme of React so it's odd that you bring this up as a criticism (see https://www.youtube.com/watch?v=1OeXsL5mr4g)

Calculating the virtual DOM does come with some processing and GC overhead, yes. But any system that tracks changes for you (data binding) comes with overhead and React makes the right set of tradeoffs for real apps (since it is a function of how large your render output is, not your underlying data model). React has about a 70% edge in CPU time on Angular in the "long list" class of benchmarks (which drops to a mere 40% with the Object.observe() performance unicorn). And steady state memory usage is almost always better with a virtual DOM approach since again it only tracks what you actually render which is usually smaller than your data model (https://www.youtube.com/watch?v=h3KksH8gfcQ).

DOM coordination boils down to non-interleaving of reads and writes to the DOM. React manages the writes for you which happen in one go. Components have a well-defined lifecycle which is also batched and are only allowed to read from the DOM during specific points in the lifecycle, which are coordinated system-wide. So out of the box if you follow the guidelines you will thrash the DOM much less (see http://blog.atom.io/2014/07/02/moving-atom-to-react.html)


On the DOM reuse: could you help me out? I'm sure if I watch all the videos I may be able to figure it out, but I'd be interested in a trivial example. Let's assume I have the following structure (additional cells and rows are omitted for cleaner display, please assume we have 1000 rows and 20 cols):

    <div class="row">
      <div class="cell">
        <div class="align-left">
          Value.
        </div>
      </div>
    </div>
I want to reach the following:

    <div class="row">
      <div class="cell">
        <div class="align-center">
          Value B.
        </div>
      </div>
    </div>
What do I need to do in React so that, on updating the underlying data, only the innermost element's class attribute and innerText change, and the rest of the DOM is kept intact?


Can't respond to you on react (though my impression is that the entire point of virtual DOM diffing is to do exactly what you're after), but can you justify in some way your HTML markup using <div class="row"> and <div class="cell"> instead of <tr> and <td>?


As I am working on large tables, I may have different goals than most UI developers. Diffing a huge structure is just a waste of time compared to not diffing. Don't re-calculate things that you already know, and in the case of a table, you know pretty much everything upfront.

On the HTML markup, there are many valid reasons you may want to use non-TABLE based tables:

- it allows better rendering control for infinite scrolling (DOM reuse, re-positioning, detached view for sticky header and column)

- it allows you to have real (CSS style-able) row groups, or if your structure is hierarchical, it allows you a better control to create a treetable (reduced rendering time if you expand a node and insert a bunch of rows in the middle).

- it allows you to have multiple grid systems inside the table (e.g. a detail row may use up the entire row, and it may have its own table inside, which you'd like to synchronize across multiple detail rows). I guess this later benefit is just redressing the fact that you do need to implement an independent grid system anyway :)


It's automatic:

http://jsfiddle.net/bD68B/

I tried to make the example as minimal as possible, so I don't show off a lot of the features (e.g. state, event handling), but I did use JSX, an optional syntax extension for function calls.


Thank you, this seems to do it for the innerText. Would it be too hard to apply it to the class attribute too? (I've tried to just copy the {} binding, but it doesn't work)


Here, I gave it a try: http://jsfiddle.net/bD68B/1/
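
(Not the actual fiddle code, but a minimal sketch of the kind of component those fiddles contain; mountNode stands in for some container element and the JSX needs the usual transform. When align or value changes, the diff only touches the innermost div's class and text.)

    var Cell = React.createClass({
      render: function () {
        return (
          <div className="row">
            <div className="cell">
              <div className={'align-' + this.props.align}>
                {this.props.value}
              </div>
            </div>
          </div>
        );
      }
    });

    // Re-rendering with new props only patches what actually changed:
    React.renderComponent(<Cell align="left" value="Value." />, mountNode);
    React.renderComponent(<Cell align="center" value="Value B." />, mountNode);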



Thank you both! I now have a much better understanding on how React works. I need to update the related performance benchmarks, it would be interesting to see how they compare side-by-side on our use cases.


Don't forget [PureRenderMixin][1], it can give a big perf boost when used in the right places.

[1]: http://facebook.github.io/react/docs/pure-render-mixin.html


All of those points are precisely what React addresses.

The virtual DOM determines which mutations are necessary and performs only those, batched. It also reuses DOM nodes as appropriate. DOM nodes can even be keyed against data in the case that, between transitions, it isn't entirely clear which nodes correspond to which data.


React can handle all of your items just fine depending upon usage.

(a) Using the key property will give React a way to determine the likeness of elements.

(b) Don't calculate the expensive things at render time, do them when loading or modifying state.

(c) is related to (a), but I haven't run into large problems with this personally.

(d) React does batch changes to an extent, I believe.


None of the things you have guessed about React are true.


> Re-creating the entire DOM on each update probably does not help

Perhaps you meant virtual DOM here? (in any case, the actual DOM is not recreated on every update)


That benchmark confuses me; is it measuring the entire run of that script, i.e. is it measuring the setup (creating the class, inserting into the DOM) on every run? If so, that seems like the wrong way to go about testing performance.


It's worth noting that Om uses React internally[1]. React, like almost every tool out there, can work very well when used appropriately, or poorly if used inappropriately.

[1] http://swannodette.github.io/2013/12/17/the-future-of-javasc...


(Edit: posted this comment before I read the article, doh.)

There is certainly something wrong with the benchmark. Since Om is a layer on top of React, it is obvious that React itself cannot inherently be slower than Om. (Perhaps idiomatic React usage is slower than idiomatic Om usage for this case, though?)


See the article's discussion of immutability in Elm. Om has the same immutability property, so configures React to take advantage of that property, skipping vanilla React's property diffing.

edit: Rather, see masklinn's comment that describes what actually happens. Point being, vanilla React does extra work to account for anything a developer might do, but allows Elm and Om, which have more rigorous standards for their users, to override that behavior.


Vanilla React doesn't do property diffing by default; it re-renders the whole tree then diffs the whole virtual DOM, because it can't rely on data immutability or on component purity.

Since Om or Elm assume immutable inputs and pure components they can skip rendering components altogether when the inputs have not changed (mentioned in TFA's "Making Virtual DOM Fast").

React can do that, but it has to be opted in component by component either by using PureRenderMixin or by implementing shouldComponentUpdate.
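
(A minimal sketch of that opt-in, assuming props.data is never mutated in place:)

    var PureItem = React.createClass({
      shouldComponentUpdate: function (nextProps, nextState) {
        // Safe only if props.data is never mutated in place:
        // any change is guaranteed to arrive as a new object.
        return nextProps.data !== this.props.data;
      },
      render: function () {
        return React.DOM.span(null, this.props.data.label);
      }
    });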


Gotcha—I'd assumed property diffing was the way to go because it hadn't occurred to me that folks would be writing impure components. Thanks!


Why so many strings and so few types, especially for something like Elm?

The same in PureScript would be

    profile user = mkUI spec do
      div [ className "profile" ]
        [ img [ src user.picture ]
        , span [ text user.name ]
        ]

See, types everywhere. https://github.com/purescript-contrib/purescript-react/blob/...


Because the underlying DOM is essentially untyped. This is a low-level adaptor library. One could easily build a more strictly typed wrapper on top.


Let me rephrase the question: why haven't they already?


Why haven't you already?

They're literally announcing the untyped base library layer and the post specifically calls out the desire to build higher level abstractions. Elm is moving incredibly fast, so your question is totally unreasonable.


I don't use Elm; I'm more interested in PureScript. But the API looks silly for a Haskell-like language. There's some feedback in that for whoever cares: I wouldn't take that API out in public, because everyone will think, "why is there string-based programming in Elm?"


Virtual DOM is fast, but it's not the only way to provide free and fast DOM updates. The author removed the Vue.js implementation from the benchmark, which does not use Virtual DOM but is as fast or even faster than Mercury and Elm.

Disclaimer: I'm the author of Vue.js. The benchmark is a fork of a fork of the Vue.js perf benchmark (http://vuejs.org/perf/).


Ah! Well done on Vue. I'm finding it quite nice to play with outside work. Currently we're using Mithril for our "small-non-Angular" apps and components, but unfortunately, while I adore it, a lot of the other devs' JavaScript isn't strong enough to deal with the flexibility Mithril gives you. Does Vue have the same issue? From what I've seen it sort of does, but with a bit more structure, thus mitigating it a little. Thoughts?


As a startup that's built all of its UI on AngularJS, does it make sense for us to introduce React/Om into the stack? We have a B2B product where the users deal with a lot of CRUD forms and dashboards.

React looks interesting, but only if it gives significant advantages (time-to-market, maintainability, etc) vis-a-vis AngularJS in managing a large code-base.

Any first hand reviews?


YES - I've built two enterprise app frontends (CRUD forms and dashboards) in react since react came out. Now that I know what I'm doing and have my tools built out, I am contracting and hitting ridiculously tight schedules that I would never have been able to hit without the level of abstraction react enables. It helps that a react codebase is massively smaller than a comparable OOP-style codebase (I've used Backbone, Knockoutjs, and ExtJS in comparable apps)


Are there any helpers/libraries you use when building CRUD forms? I haven't seen one for React yet - or TBH a well functioning one in any language I use - and it's a pain point I would like to solve.



Fab thank you! I look forward to exploring these... any system that makes CRUD less painful is a system I want :)


We used Angular at Stampsy and it was a pain to learn and debug. React is awesome because it encourages very modular components, has a very small API surface (you can go far knowing 5 API methods; compare that to the Angular insanity), and has great out-of-the-box performance (which you can boost 5x if you use performance hooks like `shouldComponentUpdate`). React gets you very close to browser limits in terms of perf while staying very maintainable. Moreover, it doesn't impose any kind of structure on your projects, and you can begin using it one component at a time (even inside Angular).

(I'm not affiliated, just a very happy user.)


Why do you say AngularJS was hard to debug? Also, any insights on modularity - AngularJS directives vs. React components?

I hear you about the API surface. Currently AngularJS has too many weird/new concepts.


Why I prefer React over Angular:

1. Two-way bindings complicate things because there is no single source of truth. (See http://vimeo.com/92687646 at 30:00)

2. The template/directive separation is superficial; in fact these are a single concern and should be kept together.

3. Separation between `props` and `state`, as well as documenting `props` via `propTypes` encourages very natural modularity.
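
(A small example of point 3; the component and its props are made up.)

    var UserBadge = React.createClass({
      // props are the component's documented, externally supplied inputs...
      propTypes: {
        name: React.PropTypes.string.isRequired,
        avatarUrl: React.PropTypes.string
      },
      // ...while state is private to the component.
      getInitialState: function () {
        return { expanded: false };
      },
      handleClick: function () {
        this.setState({ expanded: !this.state.expanded });
      },
      render: function () {
        return React.DOM.div(
          { onClick: this.handleClick },
          this.props.name,
          this.state.expanded ? React.DOM.img({ src: this.props.avatarUrl }) : null
        );
      }
    });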


All of the empty brackets used to describe the virtual DOM look bad to me. I'm curious whether this is regarded as a language smell.


It's easy to use partial application to get rid of that problem: the type of the function node is:

  node : String -> [Attribute] -> [CssProperty] -> [Html] -> Html
We could define a convenient function div, for example, with

  div : [Attribute] -> [CssProperty] -> [Html] -> Html
  div = node "div"
that would let us say "div [] [] [text "Hello world"]" instead of "node "div" [] [] [text "Hello world"]". Of course, this doesn't fix your problem with the empty brackets. This can be fixed with something like:

  bareDiv : [Html] -> Html
  bareDiv = div [] []
letting us do "bareDiv [text "Hello world"]"


Elm looks extremely promising to me. However, I tried Elm's implementation of TodoMVC here: http://evancz.github.io/TodoFRP/ and got an unusable list of strings with basically none of the features seen in the other implementations. Is it the one used in these benchmarks? What results would we see in an identical test?

Update: the benchmark uses a correct implementation available here: https://github.com/evancz/todomvc-perf-comparison/tree/maste... so it was a false alarm on my part. Tried it out at https://rawgit.com/evancz/todomvc-perf-comparison/master/tod...


I think you may have gone to the wrong address. The second link in the article is the most recent implementation (using a virtual DOM)

http://evancz.github.io/elm-todomvc/


I assume it was http://evancz.github.io/elm-todomvc/, which is more recently updated and seems identical to the other TodoMVC demos.


I think this is the version they were referring to: http://evancz.github.io/elm-todomvc/


I've never heard of Mercury before! Great to see more virtual DOM development.

I hope the new framework gets animation right. I'd love to see it as flexible as D3; in my opinion this is something React currently lacks (the transitions are there, but they are too simplistic to cover complex web app cases).


Care to try out https://github.com/chenglou/react-tween-state? I've been experimenting with animation in React and would love to see the general paradigm behind this and implement it in React.


Anyone know of a Haskell or proper Haskell subset that can do something similar (hopefully batteries included)?


Absolutely not "batteries included" but I've put some work into a React interface for Haste, which is a Haskell->JS compiler.

https://github.com/takeoutweight/shade


Awesome. Thanks a lot.


You can run the tests by yourself here: http://evancz.github.io/todomvc-perf-comparison/


I've been wondering why there's no virtual DOM implementation other than React's. Glad to see there are finally some out there.


Mithril is a JS MVC framework that also has a virtual DOM implementation.

http://lhorie.github.io/mithril/


If you're using Dart, I have an experimental implementation: https://github.com/google/dart-tagtree

I'm sure others are experimenting as well, so we should see more implementations soon.


Elm seems incredibly fast. By using requestAnimationFrame as described in the article, I managed to bring my own library to satisfying performance: https://github.com/evancz/todomvc-perf-comparison/pull/1


What could possibly be done in the React.js version to make its performance closer to Mercury/Elm? I notice some 'shouldComponentUpdate' methods are already overridden.

Or is this the limit of an overly mutable language like JS?


After investigating, it turns out React has the potential to be faster than Om if it fixed its batched updates. Another huge performance issue is Function.prototype.bind, which is called extensively.

The Om example does some kind of event delegation using channels, which is much faster.


Anybody have any good reviews of Elm?


It is fun to play with and I managed to pick it up very quickly (after having already spent a significant amount of time learning Haskell).

I am a bit concerned about the lack of typeclasses and what that could mean if I try to build something bigger with it. Maybe I could use PureScript and Elm together.


Type classes are syntactic sugar for explicit dictionary (record) passing.



