Love it. I think the most discussion-worthy quote I found in the docs is this:
> It's currently fashionable to avoid two-way binding on the grounds that it creates all sorts of hard-to-debug problems and slows your application down, and that a one-way top-down data flow is 'easier to reason about'. This is in fact high grade nonsense. It's true that two-way binding done badly has all sorts of issues, and that very large apps benefit from the discipline of not permitting deeply nested components to muck about with state that might affect distant parts of the app. But when used correctly, two-way binding simplifies things greatly.
I wonder what the author considers "used correctly" and "done badly" and how Svelte approaches this.
My understanding is that there are two issues with two-way binding that are structural, and will be difficult to fix no matter how you approach it.
First is that if you look at the system as a graph of updates, automatically closing the loop between variable update and widget means that any additional two-way binding links you add become a cycle, which makes the system difficult to implement, model, and debug. If the framework initially links only the variable to the widget, or the widget to the variable, the graph is a lot less populated to start with and creating a cycle is much harder.
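To make the graph concrete, here's a minimal sketch of a single two-way link, assuming a tiny hand-rolled observable model (every name here is invented for illustration). The two edges form the cycle, and the guard is what closing it safely costs you; every additional binding on the same key adds another path around the loop:

    // Minimal observable model so the sketch is self-contained.
    function createModel() {
      const data = {};
      const subs = {};
      return {
        set(key, value) {
          data[key] = value;
          (subs[key] || []).forEach(fn => fn(value));
        },
        subscribe(key, fn) {
          (subs[key] = subs[key] || []).push(fn);
        },
      };
    }

    // One two-way binding = two edges in the update graph.
    function bindTwoWay(model, key, input) {
      let updating = false; // re-entrancy guard: breaks the cycle
      model.subscribe(key, value => {          // edge 1: model -> widget
        if (updating) return;
        updating = true;
        input.value = value;
        updating = false;
      });
      input.addEventListener('input', () => {  // edge 2: widget -> model
        if (updating) return;
        updating = true;
        model.set(key, input.value);
        updating = false;
      });
    }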
Second is that the developer ends up wanting more and more complicated transforms over time, and having to implement both directions of them at once is much more difficult than having to implement only one direction. Not only are you writing two transforms instead of one, you also really ought to make sure the transforms can be round-tripped without data loss, and that any invalid states on either side of the transform are handled sanely in some manner. Very few developers think this way; it's one of those places in programming where you really need to approach it with a mathematical state of mind, but it's generally being written by the "I don't see how programming is connected to math" types. (Which is something a two-way binding advocate needs to keep in mind when writing their library; you're not going to get your users to deeply understand the way the library works before they can benefit from it - they're going to want to just dive in and get something useful going.)
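A small sketch of that double burden (the transform itself is invented for illustration). Both directions have to be written, the round-trip property has to hold, and invalid view states still need an answer:

    // Model stores integer cents; the widget shows a dollar string.
    const centsToDollars = {
      toView:  cents   => (cents / 100).toFixed(2),              // model -> widget
      toModel: dollars => Math.round(parseFloat(dollars) * 100), // widget -> model
    };

    // The property you really ought to guarantee:
    //   toModel(toView(x)) === x   for every valid model value x
    console.assert(
      centsToDollars.toModel(centsToDollars.toView(1999)) === 1999
    );

    // And the question one-way code never has to ask:
    centsToDollars.toModel('not a number'); // NaN -- now what does the model do?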
These are not necessarily insurmountable problems, but they are fundamental problems to having two-way binding. I think these two things are why it has never really taken off despite the fact I've seen at least half-a-dozen attempts over the years. I can imagine programming language tools that could help with both problems, but as what I'm seeing getting sketched up in my head requires a type system at least as strong as Haskell's to be practically usable without so many holes as to be insignificantly different from what we already have, it's not going to take off anytime soon.
Two-way binding is being used without popular mention in some industry-specific areas. The area I'm familiar with is control software for audio processing hardware. Lots of different processing platforms allow the creation of control panels with a drag-and-drop UI, and all widgets assigned to the same hardware state variable are also linked to each other, in all directions.
I also implemented my own two-way/omnidirectional data binding system ages ago (before I knew what it would even be called) in Java Swing for another audio UI, and all the challenges you mention are real, but as you say, not insurmountable. Multiple copies of my UI can be controlling the same hardware, and they will all remain in sync without infinite feedback loops, but getting there took lots of work.
This is a really helpful way of describing the problem. Concretely describing behavioural coupling in terms of graphs is much less hand-wavy than the usual "things become tightly coupled, which becomes hard to deal with."
Dataflow constraints with solvers like DeltaBlue solve these problems and allow you to incrementally add additional constraints.
I show how this works for a simple example in my paper "Constraint as Polymorphic Connectors" [1]. I also show that constraints are useful for expressing the high-level architecture of many interactive systems, and suggest how the two might be connected.
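For readers who haven't seen the idea, here's a toy flavor of a multi-way dataflow constraint (this is only the spirit of DeltaBlue, not the actual algorithm, which adds constraint strengths and incremental planning). One constraint carries a method per output variable, and the "solver" picks whichever methods don't overwrite the variable the user just set:

    const vars = { celsius: 0, fahrenheit: 32 };

    // One constraint, one method per variable it can compute.
    const tempConstraint = {
      methods: {
        fahrenheit: () => vars.celsius * 9 / 5 + 32,
        celsius:    () => (vars.fahrenheit - 32) * 5 / 9,
      },
    };

    // Re-satisfy the constraint without undoing the user's edit.
    function edit(name, value) {
      vars[name] = value;
      for (const output of Object.keys(tempConstraint.methods)) {
        if (output !== name) vars[output] = tempConstraint.methods[output]();
      }
    }

    edit('celsius', 100);
    console.log(vars.fahrenheit); // 212

Adding another constraint (say, kelvin) just adds more methods for the solver to plan around, rather than another hand-written web of update handlers.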
"...you also really ought to make sure the transforms are able to be roundtripped without data loss, ..."
In my experience this gets caught inevitably: either you code it right the first time, or end users discover it and it comes back as an issue that gets fixed later. There is little math involved in my experience, and it's pretty hard to miss this kind of issue unless you're a complete cowboy coder or have never seen it before. Once it bites you in the ass it's not likely to recur.
I didn't say it involved math. I said it involved mathematical thinking.
You may have bodged together the relevant concepts by experience. That works perfectly fine. But there is a faster way to learn and teach it... if you can get people past the idea that there's some sort of virtue in bragging (for lack of a better word) about how math has nothing to do with programming and how little they know about the mathematics involved in programming. It's all way easier if you start with the concept of an isomorphism and learn how to compose them from the beginning, rather than having to rewrite the same concepts in your code over and over again without realizing it.
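A sketch of what "starting from isomorphisms" buys you in practice (all names invented): package each transform as a pair of inverse functions, and composition and inversion come for free, so the round-trip property is preserved by construction instead of being re-derived at every binding site.

    const iso = (to, from) => ({ to, from });

    const compose = (f, g) => iso(
      x => g.to(f.to(x)),     // forward: f then g
      y => f.from(g.from(y)), // backward: undo g, then undo f
    );

    const invert = f => iso(f.from, f.to);

    // cents <-> dollars <-> display string, built from two small isos:
    const centsToDollars  = iso(c => c / 100, d => Math.round(d * 100));
    const dollarsToString = iso(d => d.toFixed(2), s => parseFloat(s));
    const centsToString   = compose(centsToDollars, dollarsToString);
    const stringToCents   = invert(centsToString);

    console.assert(centsToString.from(centsToString.to(1999)) === 1999);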
If not "used correctly" means a buggy app, and it's easy for an inexperienced developer to make those mistakes, you're still in for a world of pain. If you need to be an expert to avoid accidentally screwing everything up, there's still work to be done.
This is the whole "pit of success" thing Facebook talks about, the path of least resistance should be towards a functional nearly bug-free app. Certainly an app where bugs are relatively well-quarantined.
One counter argument to the author's point is that there were many not-very-popular 2WB frameworks until React and Angular came along. One-way, top-down data flow clearly outcompeted 2WB.
In my own experience, I worked on an app that migrated from a 2WB (forgot the name, sorry) to React, and it was night-and-day difference.
Trying to avoid boilerplate is in my experience not about laziness - especially when you consider the extreme lengths a lot of developers will go to in order to eliminate duplication and boilerplate from their codebases. Copying a Redux reducer that implements a standard set of CRUD actions is lazier in my mind than trying to abstract common patterns away.
For me at least, the reason boilerplate bothers me so much is that it impacts the readability of my code. Ideally, I'd like my codebase to express the business requirements of my solution as succinctly as possible - every line of boilerplate code I need to include to express yet again how to make an AJAX call, or update a piece of state, is a distraction from what I'm actually trying to accomplish with the application.
If the thing that I'm abstracting is routine, and unrelated to the business problem I'm solving, I don't need to know what's happening (at least in the sense of me or someone else coming back to read the code later). I don't need to know how an HTTP server, or the particulars of an AJAX request work most of the time.
True, but when there is something unexpected about how the AJAX request or the HTTP server is working, debugging is going to be harder because of the abstraction.
No, the work in fact will be easier - because something unexpected will either happen above, below or on abstraction boundary - as opposed to "somewhere in this big blob of function invocations". Also, you'll get to fix the bug in one place instead of having to hunt down every similarly looking piece of code because it may contain the same issue.
I guess I agree with that; I was thinking specifically in framework defined abstraction / usage.
And if the framework has abstracted away something that you end up needing to understand - well bad framework or bad usage I guess, but still difficult to figure out what's going on.
Taken to its extreme, that argues for writing everything in assembly language.
React's virtual DOM diffing makes components more readable than the previous style of writing code to explicitly manipulate the DOM and totally hides what's really happening. However, we trust that it's going to do the right thing, just as we trust that the JavaScript engine will do the right thing when it JITs and interprets our code.
It's not that people said "Everything is bad, do assembly". It's just that people identified 2WB as problematic and you just shouldn't do that one thing.
What I find interesting is that while I actually agree that 2WB as it was done before was problematic, MobX, which looks like previous-generation 2WB, may actually be just fine because of features specific to its implementation of the idea (actions, and better tools for use in development).
Currently JS frontend folks also classify MVC as "problematic", yet a lot of successful software was written that way. So along with J2EE folks, this isn't really the community to look for style standards by default.
Honestly, given that I know of no two developers that have the same understanding of what MVC is and how the code should be divided, I have a feeling there's something wrong with the pattern itself, too.
I don't have a link handy, but I read a piece by one of the original MVC pioneers claiming that the pattern was designed for encapsulated components, usually fairly small, not entire applications. I suspect many of the quirks and disagreements between different MV* architectures derive from this fundamental misapplication of the pattern.
'illusion' and 'readability' are interesting choices for words, given that both refer to how something is perceived.
The argument I'd make is that the illusion is the point when you're writing well abstracted code. The abstraction makes it possible to write code with the assumption that some specific detail is taken care of automatically. While it's true that the detail is hidden, that lets the author and reader of the code focus on other details that might be more important.
Of course, this doesn't absolve anybody from needing to be at least somewhat aware of the abstractions, but hopefully most of the abstractions can fade into the background most of the time. (Anybody who's ever worked with a buggy compiler can attest to how frustrating it can be when this is not the case.)
Hard to say when his only argument is "is in fact high grade nonsense". He's missing the big picture though: performance matters, but it isn't everything. It's not two-way binding that makes two-way binding bad, it's developers using two-way binding that makes it bad.
I also think it's ignorant to say it's fashionable to avoid two-way binding. I've seen teams greatly benefit from avoiding it - teams with less experienced devs, and devs who I've previously seen write terrible code. Maybe now two-way binding would be easier for them, though I don't know; I'm unlikely to suggest they do a 180 on tech that has made them much more successful.
Redux fixes two way binding by forcing the developer to emulate it using such a convoluted mechanism that there's no way to see that it's still broken; great job, FB rockstars!
If you don't like it, you don't have to use it. And Redux is nothing like that. It eliminates a ton of bugs among not-so-smart or beginner developers.
Wow. How can someone even understand it like that? No way. I actually meant what I wrote, and I'm not really worried about what someone else might think about my comment, because it's way, way softer.
TodoMVC is a useless benchmark for the problem this claims to be addressing. The limits we're hitting with our applications now are with BIG applications, with many routes, many views, and lots of client-side logic. We're talking hundreds of files (in some cases, thousands). Of course a framework, with its fixed overhead, is going to have a bigger payload for a tiny demo app like TodoMVC than something like this, which compiles to an amount of overhead that grows seemingly linearly with application size.
That said, I'm not criticizing the framework here, and I welcome new ideas, but a better choice of benchmark is sorely needed to be persuasive here.
Just to follow up on this. I wrote a little CRUD app (managing a list of users) in both Preact and Svelte, wanting to see how the two grew as I added new features.
When adding the "Edit User" feature:
Svelte's unminified size grew 1.39x. Its min+gzip grew 1.2x.
Preact's unminified size grew 1.06x. Its min+gzip grew 1.05x.
The final tally for an almost line-for-line equivalent app in both:
Preact:
- Minified: 15.8KB
- Min+GZ: 6.2KB
Svelte:
- Minified: 23KB
- Min+GZ: 4.8KB
So for a trivially small app, when minified and gzipped, Svelte does produce very compact results. But its growth rate (1.2x) vs Preact's (1.05x) indicates to me that it would probably outgrow Preact on a normal-sized app.
It would take a long time for it to outgrow the typical React or Angular stack, though.
Yeah, the lack of a realistic benchmarking tool is a major part of this whole front-end fatigue: is it just another project, or can it realistically prove itself to be a real benefit for performance?
I find it wrong that most frameworks do not include any component library. Even if there is a vibrant community, you end up with 30 dependencies which do not play well together.
I'm the author of a commercial framework for BIG applications [1]. Once you put everything on the table you can see which features should go into the framework and you end up with things like localization, date/number formatting, form validation, keyboard navigation, layouts, tooltips, routing, modals, etc. Once you have all that, you can build a coherent widget/charting library on top of it.
The 7-GUIs suite looks better, but so far only one JS framework implements it. The problem with more realistic benchmark apps (or suites) is that they require more effort to implement, so you have a selection bias towards easy-to-implement yet too-small-to-be-useful demonstrations like TodoMVC.
About 100-200 LOC to implement; does recursive views, XHR, routing, and a dynamic state tree, and the server periodically responds with errors (on purpose) to force you to implement loading and error states.
Agree. It is useful insofar as you can quickly get a sense of what working in a given framework feels like, but the community could use some alternatives.
> You can't write serious applications in vanilla JavaScript without hitting a complexity wall.
This somewhat annoys me. You can definitely do it.
Sure, it's easier, more comfortable, safer and quicker to use something like React. But you can still do some modularization that scales reasonably well without it.
I've been working on an app and spent a lot of time looking into frameworks to handle its complexity but ended up sticking to vanilla JS to keep more control, especially for detail optimizations.
> I've been working on an app and spent a lot of time looking into frameworks to handle its complexity but ended up sticking to vanilla JS to keep more control.
On a recent hackathon, I wanted to implement instant search and ended up using mostly plain javascript for this reason. Not pretty, but worked: http://i.imgur.com/lLi3noX.gifv
I agree. Ten years ago, writing applications in vanilla JS was hard because of the browser incompatibilities. Today, browser vendors really make an effort to avoid such issues. Furthermore, today the DOM API is very well documented and, over time, I find it getting easier to learn and use.
I agree that it's become feasible, but even for relatively 'light' scripting my experience is that already very soon something like jQuery becomes worth using despite the increase in payload size.
For example, I'm currently working on a project with mostly plain js, and I've already had to spend quite a bit of time adding code to make relatively basic things work across browsers (including mobile).
Even though it's much less work than it used to be, it's still extra time and code that others will now also have to understand. And who knows what other browser issues linger or will pop up as more features are added!
The worst part is that if I'd been allowed to lazy load some of the images and make a few other small optimizations, I would've shaved off enough KBs to load multiple versions of jQuery...
"The worst part is that if I'd be allowed to lazy load some of the images and make a few small other optimizations, I would've shaved off enough Kb's to load multiple version of jQuery..."
We really need to stop comparing the size of JS and images. Images do not block the app, while JS often blocks it in many ways: download, parsing, slow execution at first (JIT), GC pauses.
Also, hard limits on download size seem silly to me. Either it's worth spending those couple kb for jQuery or it's not, but you shouldn't need to compensate for it by reducing payload in other places.
This looks nice: a JavaScript framework for expressing _concepts_ that compiles down to vanilla JS. It looks like it ships a lot less code to the user. From the project's first [blog post](https://svelte.technology/blog/frameworks-without-the-framew...):
> The Svelte implementation of TodoMVC weighs 3.6kb zipped. For comparison, React plus ReactDOM without any app code weighs about 45kb zipped. It takes about 10x as long for the browser just to evaluate React as it does for Svelte to be up and running with an interactive TodoMVC.
I didn't study the thing, but the first question that comes to my mind is: if each component is rendered as a self contained piece of vanilla js, isn't the size of an app with lots of components going to increase much faster than with the library approach?
A complete library by itself can be pretty big, but it stays the same size no matter how many components you add.
Good point. I tested the demos and found the output (compiled down to ES5 and minified) is around 2-3KB for the minimal demos, and 9KB for the complex SVG clock demo.
The equivalent JSX output for that clock demo is about 1KB. So yes, I would guess that apps with many components would end up bigger (in total JS bundle size) than equivalent React apps.
Possible counterarguments:
- Bundling and gzipping several Svelte components together might compress well – a lot of their size comes from repetitive substrings like `.parentNode.removeChild` and `.setAttribute` etc.
- Once downloaded, the Svelte approach would probably be faster than React (at both rendering and updating) and would use less memory (no virtual DOM, no diffing, just fast granular updates).
- The self-contained nature of Svelte components makes it easier to treat them as atomic downloads and use them as needed. For example, you could get to a working UI extremely fast, and then download more components for below-the-fold or other pages in the background. This could work well with HTTP/2.
Storing state in the DOM and reading from it can be dangerous. It's very easy to force layouts when you don't need to. Frameworks like React and Preact essentially abstract updating the view in a performant manner. Preact is 3kb - an order of magnitude smaller than React.
I don't think it stores state on the DOM and reads from it. Pretty sure it just stores state in an object, and it updates the DOM granularly when that state changes. (That said, I did notice a bit of DOM traversal (node.parentNode...), which doesn't count as reading state from the DOM, but does rely on the structure of the DOM not having been altered by someone else since the last render – not sure why it needs to do this.)
The biggest problem with web apps today is the initial load time, not the total size of the app. If you can't render the first page the user lands on without serving a large framework, you're stuck.
But yes, an app built entirely out of standalone components would eventually overtake the total size of an app built using a more conventional framework. (By the time you get there, your app is probably already too big anyway, and you should be code-splitting.) We're going to add a compiler mode that addresses that very soon by deduping some code within an app.
I tried peeking at the EachBlock example and it came out at 7.5kb uncompressed. Copying and pasting the loop a few times makes that grow fairly quickly. Four copies of that example code is enough to make it grow over 22kb. For comparison, the render method in Mithril.js is about 21kb uncompressed. I imagine you would only need another 20 or 30 times more code to reach the size of React (~140kb).
I wondered that as well at first, but then I realised that it probably doesn't have much shared code, as it uses the DOM directly - and the DOM is the shared dependency.
I think the community has now settled on component-based development, especially with JSX templates.
What we need are multiple react-kernel implementations. I see the React community providing component API specs. Similar to Linux distros, we should have API-compatible implementations, where application code can run on multiple kernels without any changes. We are now seeing multiple react-shim layer implementations - Preact [0], Inferno [1].
The precompilation is a nice idea, but I'm not buying the "vanilla JS" argument. The reason why React and other frameworks have a runtime is also because they optimize rendering.
The Svelte doc says: every call to `component.set()` produces a synchronous DOM update. I can already see how this leads to very poor rendering performance in applications with a large number of nested components.
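To illustrate the concern with the quoted behaviour (assuming a component instance exposing the `set()` API the doc describes; the batching wrapper below is a hypothetical userland mitigation, not anything Svelte provides):

    // If every set() flushes to the DOM synchronously, this does three
    // DOM updates where a batching framework would coalesce them into one:
    component.set({ firstName: 'Ada' });
    component.set({ lastName: 'Lovelace' });
    component.set({ born: 1815 });

    // Hypothetical mitigation: merge writes and flush once per microtask.
    function batchedSet(component) {
      let pending = null;
      return changes => {
        if (!pending) {
          pending = {};
          Promise.resolve().then(() => {
            component.set(pending); // single synchronous flush
            pending = null;
          });
        }
        Object.assign(pending, changes);
      };
    }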
React and the Virtual DOM solved this problem, and that's why web apps today use a lot of components. So until Svelte can demonstrate fitness and speed comparable to React on large apps, it just looks like Yet Another JS Framework.
We'll publish some benchmarks soon showing how much better Svelte's performance is than React's. There are a lot of misconceptions around virtual DOM diffing.
I don't know, it could work. React almost does the same thing actually, except it batches updates occurring in a single event handler; but otherwise, there is nothing optimized about it except the reconciliation. A limited string template makes reconciliation a non-problem.
Haven't studied the SvelteJS documentation but as far as I can reason there is nothing in their idea that prevents them from shielding the DOM behind a lightweight, virtual DOM, ReactJS style. In fact, I'd be slightly disappointed if they don't shield the DOM somehow. It's the #1 bottleneck, after all.
The virtual DOM isn't a replacement for the DOM, it's just a tree-diffing algorithm. It is only needed because of React's "re-render everything" approach, which is a tradeoff sacrificing efficiency for simplicity. If you don't re-render the entire page any time the model changes, you don't need a virtual DOM.
I'd say you're mostly right, with the one caveat that a virtual DOM also helps with batching changes to the DOM - but I suppose there are other ways to make that happen.
> Normally, this is the part where the instructions would tell you to add a <script> tag to your page or install something from npm. But because Svelte runs at build time, it works a little bit differently.
Something that doesn't say "normally, you'd be asked here to install something from npm, but" and then immediately go on to ask that you install something from npm.
Precompilation is cool, but please don't imply that your framework is the first to do it. Precompiled templates significantly predate HTML, they were common in the mainframe world. In the HTML world there are tons of other templaters that allow precompilation. Handlebars is a common one: http://handlebarsjs.com/precompilation.html
This is a huge part of Javascript fatigue for me. I love the fact that we're recycling old concepts and mashing them up to get constant improvements and better tooling. It's the relentless hype and pretending that everything is new that really gets to me.
Handlebars templates have a runtime dependency (even when precompiled). Svelte does not.
Also, a precompiled Handlebars template is just a function for outputting an HTML string (with a runtime dependency). By comparison, the compiled Svelte output is a dependency-free JavaScript module for a dynamic view, which knows how (and when) to granularly update the browser DOM in response to state changes. It's unprecedented.
Svelte has a runtime dependency too, it's just bundled into the output. Every time I've used Handlebars I've bundled the runtime into my final application file. But I do it myself, so it only gets bundled once rather than N times like it would be for Svelte.
See the `update` method in the output of the Hello World example:
    update () {
      text1.data = root.name
    }
This is the entire DOM manipulation code for this component.
There is no runtime library. The trick here is that the generated code is aware of exactly what DOM updates are needed. Instead of a large, general-purpose reconciler like React's, you have specialized code for changing the DOM, generated from your templates.
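To expand on that, here's a hand-written sketch of the general shape of such generated code for a "Hello {{name}}!" template (illustrative only, not Svelte's literal output):

    function createHelloComponent(target, state) {
      // The compiler knows the template's exact structure,
      // so it emits exactly these DOM calls and nothing else.
      const h1 = document.createElement('h1');
      const text0 = document.createTextNode('Hello ');
      const text1 = document.createTextNode(state.name); // the only dynamic node
      const text2 = document.createTextNode('!');
      h1.append(text0, text1, text2);
      target.appendChild(h1);

      return {
        update(newState) {
          state = newState;
          text1.data = state.name; // the same single assignment shown above
        },
        teardown() {
          target.removeChild(h1);
        },
      };
    }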
The Svelte compiler uses some ES2015 features (for...of, etc) that aren't currently supported in all browsers. It's designed primarily for use in Node.
Why not transpile the Svelte compiler to ES5 so it will run on all browsers? Otherwise devs may get the impression Svelte generated applications will not work on all browsers.
Exactly. My main gripe is that it won't even display the source code. I would be okay with the execution part of the REPL not working, but this just creates the impression that Svelte literally cannot display a block of text without ES6.
I don't want to be too harsh on the Svelte author. It's a minor inconvenience.
The guy did some incredible work - he wrote all of this code in a week. I'll look at the REPL code to see if there's a simple fix. All the levels of indirection, code generation and bundling may take a while for my non-Rich-Harris brain to sort out.
The way I see it, Svelte treats the browser's JS implementation as "machine code": while frameworks such as React or Vue are by definition a (runtime) layer on top of vanilla JS, Svelte _compiles_ your code to vanilla JS.
Vanilla JS has long been used to refer to lower-level JavaScript, i.e. not using any abstractions. Vanilla JS was often contrasted with jQuery. Of course, jQuery is JavaScript.
React fanboy chiming in to say... an interesting approach to what is probably a problem.
I wonder, though, if compilation is really better than creating a library? If we look at the REPL output [0], we can see what comprises the bones of a Svelte component, and how much generated code will be replicated.
It also shows that the more interesting part here is its method for rendering. Its use of data binding and compilation means we don't really need something like a VDOM to stay efficient (at least, not a VDOM running in the browser).
Because we did!
For instance, this is exactly the approach that we released back in 2009 with http://opalang.org
Naturally, there are things we would implement differently today, but the OCaml codebase of the compiler is still, in my humble and biased opinion, pretty valid.
Is Opa still alive and being used? I was pretty excited about it when it was released (especially when it still had the ML syntax), but it never seemed to gain any traction.
Web Components and Polymer solve this problem "completely" by full encapsulation.
With some css preprocessing + vulcanizing you can even use separate js/css files from your component templates if that is your thing - they can get inlined in the build process.
The 'Mustaches' bit killed it for me, but up until then I was enjoying the simplicity of the API. I like React (and React-like implementations) because of being able to use JS to build up a view.
In the past, I used virtual-dom. It works fine, but it's discontinued and doesn't address the problem of encapsulated component state. Now using this: https://github.com/AlexGalays/kaiju/
We should do a better job of clarifying: this isn't Mustache-the-language, it's just using {{ and }} as delimiters. The syntax is simpler than Mustache, and allows you to use any inline JavaScript expression, which will become fully reactive.
See e.g. http://bit.ly/2fRcyJq. The {{#if ...}} part is a control flow directive that allows Svelte to understand the structure of your app, but the condition of that if block is just a JS expression
Yeah, I interpreted his comment to be about syntax. OTOH, React (JSX) lacks these control structures inline in the templates; it has to be done in separate steps. Not a clear win for all situations.
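The difference in the small, using the {{#if}} directive syntax quoted from the guide above versus a JSX equivalent (the component and data names are invented):

    // Template directive (Svelte-style) -- statically analysable structure:
    //
    //   {{#if user.loggedIn}}
    //     <p>Welcome back, {{user.name}}!</p>
    //   {{/if}}

    // JSX -- no directive exists, so control flow is an ordinary expression:
    const Banner = ({ user }) => (
      user.loggedIn ? <p>Welcome back, {user.name}!</p> : null
    );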
Looks neat, but also like a non-standard implementation of Web Components, which, though slow coming, already have universal browser support with a small shim.
What's the advantage of this over something like Google's Polymer library, which is a toolkit for rapidly creating reactive web components?
Polymer offers a lot of UI widgets, which are enormous when you glob them together, but the base Polymer library is actually incredibly small due to the fact that it's leveraging the browser's native custom element functionality.
Can't reply to your comment so editing here:
Works natively on IE11+, Edge, Safari 9+, Chrome, Opera, Firefox, and mobile browsers. The polyfill for everyone else (webcomponents-lite.js) is 41kb.
Isn't the problem of including only the code the component actually needs already solved with dead code elimination? So the framework authors can include new features but as long as your component doesn't use them the code for those feature does not end up in the final bundle you serve to the user.
I sure am missing something here, because the author of Svelte is also the author of rollup.js [1], which does exactly that: it eliminates dead code via its tree-shaking mechanism.
As far as I understand, ReactJS is a tightly woven 40kB chunk of code which will be delivered to the browser no matter how ferociously you shake the tree.
Unfortunately, UI frameworks are extremely hard to modularize in a way that makes them amenable to tree-shaking. So you're left with the alternative – cobbling together your own pseudo-framework out of 'small modules', which loses all the ergonomic advantages of a good framework. As you might imagine, I've been thinking about this problem for a long time :)
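A sketch of why (the vnode shape and function names here are invented): tree-shaking removes unused exports, but a framework runtime typically dispatches on data it only sees at runtime, so static analysis can't prove any branch dead:

    // This a bundler can shake: un-imported named exports just disappear.
    //   import { map } from './utils.js'; // the `filter` export never ships

    // This it cannot: which cases run depends on runtime data.
    const patchElement = vnode => { /* ... */ };
    const patchText    = vnode => { /* ... */ };
    const patchPortal  = vnode => { /* ... */ };

    function patch(vnode) {
      switch (vnode.type) {                 // value known only at runtime
        case 'element': return patchElement(vnode);
        case 'text':    return patchText(vnode);
        case 'portal':  return patchPortal(vnode); // ships even if you never use portals
      }
    }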
Vue.js has single file components [1] and template precompilation [2] (or optionally JSX support, which is basically a precompiled template).
I don't see the point of yet another framework here.
The "no dependencies" argument seems to fall down for me, since it would seem to me that they would need to duplicate a lot of code for state management and rendering, bloating the code for anything more complex.
And if there's some clever tree shaking or whatever going on, then I don't see the point as opposed to just including another JS file...
I couldn't find any explanation of what's so different here, worthy of creating another framework for it. Would love to be enlightened!
Yep, we'll be adding more examples soon – definitely on the TODO list. In the meantime you can take a look at our TodoMVC implementation – it's 3.6kb zipped (Vue is 17.2kb without any app code. Not a criticism, just context)
There is always room for a framework smaller than other frameworks. Mature frameworks tend to get bloated over time and become hard to reason about and learn for new developers.
If it wasn't for projects like that, we'd be coding in some feature-rich IMS/COBOL framework.
So if you want to use a router in Vue what do you do? Well, you install vue-router. The point of Svelte is that it can infer all of the features you are using for you, and not compile the ones that you aren't.
I like the idea of abstractions that exist only in the source code and at compile time, and have zero runtime footprint.
How does this compare to Google's Closure compiler and library? Does Svelte's compiler do anything that the Closure compiler cannot? Or how about running the source through a hygienic macro system like sweet.js and feeding the output of that to the Closure compiler?
I ask because I'm skeptical of a new, unproven compiler. Google has been using the Closure compiler on production code for years, and its optimizations are pretty awesome.
But the Google Closure Compiler has no concept of a tree of components and their lifecycle. The library is old and Java-like, and I never heard of anyone using it :p
It IS old (well, 2009 counts as old I guess), but it's still maintained, and still doing what it says on the tin. That's quite an achievement in and of itself.
I think Rollup already took time away from Ractive. It's really too bad, I think Ractive has some great ideas and I really enjoyed working with it. It was the best way to build web UIs I've come across. But I feel its future is uncertain so it's hard to recommend others begin using it.
Actually, RactiveJS as it is right now is pretty mature, so I will still use it. The ecosystem and momentum behind it could suffer from a _shift in focus_ from /u/rich_harris (the creator of both libs), but the library has flown under the radar all this time, so I feel it might not be a huge change.
Perhaps, but at least to the team I was working with at the time, Rollup indicated we shouldn't expect much to change or improve in Ractive. I think Ractive was partially a victim of timing; as Harris himself has said, it has many of the same ideas as React and he may well not have made it if React existed when he began. And it never really seemed to gain much popularity which you kind of need for a project to grow beyond just its creator. I still haven't investigated this new thing, but if it was indeed inspired by Ractive that gives me hope.
Why are there complex MVC frameworks that run in the browser in the first place? I might get some flack for this, and I'm prepared for it, but why can't we use JS as just a view manipulator? Leave data processing and business logic to the back-end, on the server; that would take care of needing a front-end framework and a large app.
> Leave data processing and business logic to the back-end, on the server
In general, this is not good. Of course there are specific applications where it's useful but in the general space of applications it would be very limiting.
For example, go to http://square.github.io/crossfilter/ and do the filtering on the 5MB data such that you wait for the histograms to be updated from the server. Instead of the tens of milliseconds, it might be seconds or tens of seconds if the server is loaded, i.e. orders of magnitude slower, not to mention unpredictable.
You can think of the network as a data flow constraint. It constrains latency, throughput, privacy and security; the constraints can be unpredictable (network outage; DoS, MiM attack etc.). There can be many good reasons for wanting part of your domain specific logic to fall on the client side.
In particular, dynamic media e.g. interactive data visualization, games and most interactive things that use data or modeling are best partly in the browser.
We're past the point where the rule of thumb was to do business logic on the server and the client only did the presenting of the view and acted as a controller.
I'm not sure what you're advocating for. Most of what you're doing in React/Angular is exactly that -- grabbing data from the backend and displaying it in some view, then handling interactions. If you're building an application and not just a website there's not really any other option.
It's hard to build an interactive user interface when you have to wait for full page loads to complete between actions. It can be done with thoughtful planning, but sooner or later that JS you sprinkle in to do view manipulation is going to be unwieldy.
"Why waste energy doing stuff on your machines what you can get your users to do for you?"
...is the general reasoning. However, personally, I find that unless you're very well organised, you're not saving yourself much, if anything, in the long run.
GWT was backend oriented and sucked at delivering a nice UX in a decent timeframe. So it's actually extremely different.
You could compare this to Vue or Ractive plus a strong compilation step.
You only write your template once, you don't have to think about two separate paths: creation and update.
With backbone, you either did that and had performance issues or did updates separately, which is a pain to maintain.
Also, in Backbone, parent<->child communication is a complete afterthought.
A side note about that website. Thin and Extra-Thin fonts have no place in web design (there are some exceptions in huge title text). They are an unnecessary burden for people with limited eyesight and elderly people[1]. Even for me, on my 1366x768 screen (which is still the single most common resolution for cheap notebooks, dear HiDPI web designers[2]), the bullet text is hardly readable, the copyright line is completely unreadable. This effect is made even worse by choosing inadequate contrasts[3]. One can use subtle colors for increased contrast.[4]
Not sure why you were downvoted, I think it's entirely reasonable to bring up his concerns in an issue regardless of whether they are eventually implemented.
The only other thing I like is CSS scoping, though. I think that CSS scoping is a problem in React, and current ideas on how to implement that in React are absolutely horrible IMHO.
Two-way binding is a step back, I think. I don't love the name (lots of people will judge a new technology by its name), and the thought of introducing yet another framework is a nightmare.
I'd personally go with Polymer if you like scoped CSS, since it's already established and it's a good project.
I wonder if these ideas can be somehow applied to React, doing precompiling on React components, so that one can continue taking advantage of all freely available React components out there, and keep using Redux.
You don't have to use it – its effects are restricted to the subtree where you've explicitly opted in to it. I've personally found it to be a huge timesaver, and would never go back to a world where I didn't have the option of using it. But you're in no way forced into it.
> I wonder if these ideas can be somehow applied to React
A lot of people have wondered that, including me. Unfortunately, a compiler wouldn't be able to generate a good picture of the structure of a JSX component – because it's 'just JS' it resists the kind of meaningful static analysis that Svelte can take advantage of. I'd love to be proven wrong, but sadly I just don't think any JSX-based framework will ever be able to fully embrace these techniques.
Got it. I've been reading the guide, it looks good.
You did an awesome job.
I'm really scared about trying out yet another framework, since I've tried pretty much all of them before settling on React (and I guess a lot of people will be, too, since there's a new one every couple of months), but I'll try this weekend.
> Unfortunately, a compiler wouldn't be able to generate a good picture of the structure of a JSX component – because it's 'just JS' it resists the kind of meaningful static analysis that Svelte can take advantage of.
Can you elaborate more on this? It seems like passing the compiled JS through Esprima and checking the call graph would give you a fairly detailed structural representation of a JSX UI. You may need to do some graph stitching across module boundaries, haven't tried this myself.
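For what it's worth, here's the kind of JSX that defeats that analysis (all names invented). Esprima can give you the call graph, but the rendered structure still isn't statically known:

    const AdminRow = props => <li className="admin">{props.label}</li>;
    const UserRow  = props => <li>{props.label}</li>;

    function Page({ items, extra, admin }) {
      const Row = admin ? AdminRow : UserRow; // component chosen at runtime
      return (
        <ul {...extra}> {/* arbitrary props arrive via spread */}
          {items.map(item => <Row key={item.id} {...item} />)}
        </ul>
      );
    }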
Look at css-module-values [0]. It's not perfect, but it lets you share values between JS and CSS. All values in the file are visible on the imported module.
I think the name will work out just fine (witness MySQL and PostgreSQL).
From looking at the docs, the two-way binding seems entirely optional and I can't find anything that would prevent a developer from using a redux model of state management.
I've got framework fatigue, too, but on the surface this project seems to embody the best of what I love: a tiny API, little magic, and getting out of the way.
Yes, I was thinking whether it would be possible to use Redux. You would need to add some code to hook that up to the components, though.
As for the name: sure, but now there's a lot of competition between frameworks, and you want to get all the help you can marketing-wise. I might be shallow, but a lot of the time I've not looked at projects just because I didn't like the name. I just have no time to make informed decisions; there's too much stuff to look at. Besides the concept/purpose, a lot of the time I also check whether I like the logo and name.
So I've just wrapped up using React + Redux and just begun to uneasily accept JSX.
It makes me uneasy because React wraps up existing HTML with JavaScript - because it violates existing, established standards that have truly stood the test of time and weren't broken at all.
I believe most of the brokenness of current day frameworks comes from templates. Here is what you get by JSX being JavaScript:
1. Easy to write typechecking for the templates.
The compiled template is just function calls (or well, factory function calls) and the typechecker can work with that.
How many template language authors will write a typechecker for the template language?
2. Sane scope sharing - you can take advantage of all the existing encapsulation and modularization tools of JavaScript. How do I expose functions to JSX? I simply bring them in scope, e.g. by importing or defining them. How do I expose functions to the template? Well... it depends. There is this $scope object (Angular)... Or there is this components property which informs the template what's in scope (Vue: https://vuejs.org/v2/guide/components.html#Local-Registratio...). I guess there are worse options, like having to register your helper functions or components into some sort of global registry and pray that there won't be a conflict.
3. First-class components - want to write a list component that takes an item component as a parameter in props? Tough luck: it's likely that the template language of your average framework doesn't support this. JSX gets this for free because it's just JavaScript, and JavaScript is a proper language with first-class support for passing around functions and classes. (See the sketch below.)
It's not that it's impossible to do this... it's just that most template languages aren't advanced enough, and somehow we think that's a good thing.
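A sketch of point 3 (all names invented): the item renderer is just a value, so passing it through props needs no template-language feature at all:

    // A list component that takes the item component itself as a prop.
    function List({ of: Item, items }) {
      return <ul>{items.map(item => <Item key={item.id} {...item} />)}</ul>;
    }

    const TodoItem = ({ label, done }) => <li>{done ? '✓ ' : ''}{label}</li>;

    // The renderer is swappable at each call site:
    const view = <List of={TodoItem} items={[{ id: 1, label: 'ship it', done: false }]} />;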
> Angular template, and expressions, should be treated similarly to code and user-provided input should not be used to generate templates, or expressions
If you are distributing the compiler with your framework, there is this urge to expose it to the developers. If you expose it to the developers, there is the chance that it will end up compiling user input. And then you spend an ungodly amount of time building a sandbox, unsuccessfully.
-
Svelte seems to be built by people with deep interest and expertise in compilers, so I'm somewhat hopeful that its template language will not suffer from the usual template language problems. On the other hand, the task is much harder (compiling to code without a runtime), so that seems like a driving force in the opposite direction (a less powerful template language). I guess we'll see how it goes.
IMHO the next step in frameworks isn't this (compilers are hammers, and everything else is nails). The next step is, after current frameworks fight things out, we take the common subset of the "infrastructure code" that the most popular ones use (DOM diffing? events that are the same across browsers to remove the need for synthetic events? PropTypes? Maybe someone at whatwg should start thinking about this list) and include it in the browser API, then let frameworks build on top of that and be 10 times smaller on average.
Thank you for the detailed answer, I have a few questions here but I think you've nailed it for the most part, I'm just trying to gauge how well I understood what you wrote.
1) What is meant by type checking in this context? Do you mean that now that HTML is converted to JSX in a JS context, you can test HTML as if it were JavaScript objects?
2) So the ease of importing functions into JSX, plus use of existing "encapsulation and modularization" (I'm not sure what this means) tools?
3) What makes it difficult to support this from the template's point of view? Why is JSX easier?
4) Why is sandboxing harmful? How does giving developers compilers create the sandbox problem?
With the last paragraph, are you saying that the lowest common denominators, such as DOM diffing, will become a common cross-platform feature in all browsers?
By type checking I mean something like typescript which is able to statically verify whether the right types of attributes are passed to the right component, or whether expressions embedded in the template are of the right type.
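A sketch of what that looks like (the component is invented; written as plain JS, with the errors TypeScript would report noted in comments):

    // After compilation this is just a function call with an object argument,
    // so an ordinary typechecker can validate it like any other call.
    const Greeting = ({ name, excited }) =>
      <p>Hello {name}{excited ? '!' : '.'}</p>;

    const ok = <Greeting name="Ada" excited={true} />;

    // With a TS annotation like { name: string; excited: boolean }, both of
    // these fail at compile time -- no bespoke template checker needed:
    //   <Greeting name={42} excited={true} />   // number is not a string
    //   <Greeting nmae="Ada" excited={true} />  // unknown prop 'nmae'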
Re encapsulation, the way that JS provides it is scope. JavaScript's lexical scope ensures that things you define in a function are only visible in that function, or that things you define in an ES6 module are only visible in that module. Even with the tiny quirks of function-based lexical scope, it's natural and intuitive: if you can see the definition (or import) in the (function/module) parent(s) of the code you're looking at, it's available. It's a tried-and-true solution.
Template languages don't always bother to add something like "import" or the concept of lexical scope, so when you want to share something from JS (or other templates) with them (like data, or processing functions, or components), you need to somehow put it in their own custom scope. Many template-based frameworks make the mistake of working around this not by adding import mechanisms to the template language, but by registering things globally - e.g. when you register a component in Angular or Vue it becomes globally available. This has the same problems any other global variables have. (I suspect this in Angular 1 is what led to the totally parallel "module system" with dependency injection, although maybe they just didn't like the existing ones.)
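The contrast in code (paths and names invented; the global form follows Vue 2's `Vue.component` registration API):

    // Lexical scope: the import is visible only in this module, and the
    // reader can find the definition by following it.
    import UserAvatar from './UserAvatar.js';
    const profile = user => <UserAvatar user={user} />;

    // Global registration: one shared namespace for the whole app.
    //   Vue.component('user-avatar', UserAvatar);
    //   // every template can now use <user-avatar>, and a second
    //   // registration under the same name can collide.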
Regarding supporting first-class components in template languages, it's not really that difficult. It's just that we believe the fallacy that we need the template language to be underpowered. Also, once you add support for first-class components and JS scope sharing (or at least import/export mechanisms) to a template language, you've basically reinvented JSX (perhaps with a different syntax).
Sandboxing isn't harmful, it's just extremely hard. Angular spent years trying to write a proper expression sandbox, and every new version of it was broken successfully - that's why 1.6 removed the sandbox.
What I'm hoping for is that someone works out what contributes the most to popular framework's size, takes the common parts and provides them as built in API in the browser in such a way that the frameworks can take advantage of that instead of writing their own implementations.
I cannot keep up with all the new X-flavored JS frameworks, where X is [React, Angular, Vue, Mithril, Redux, MobX, ...]. This is exactly why I chose ClojureScript for my pet stuff, with hopes that I will be able to use it for making money too. The learning curve can be steep for someone not used to the functional way and/or Lisp, but at the end you get a very mature, simpler and consistent language with great tooling. The Figwheel workflow is just awesome. Well worth the time.
It's not that I can't; I don't want to do it anymore. Sorry, I didn't express myself correctly above. And there aren't that many client-side framework choices in ClojureScript, which I see as an advantage. But not everyone will see it that way, I can understand. Whatever makes you happy.