The most over-engineered blog that I know of is Casey Muratori's at http://mollyrocket.com/casey/ . "View source" and you'll see a long list of auto-generated variables and function calls, unlike anything else you've probably seen for any website.
Casey attempts to explain what he did in a podcast[1]. Apparently, he was frustrated enough with CSS's margins that he built a layout engine in C that calculates offsets for every piece of text in the page, then generates hundreds of lines of Javascript that apply fixed positioning to arrange the text.
To be fair, the most over-engineered blog I know of is anything hosted on top of this: http://arxiv.org/abs/1312.7152 ;)
"This paper proposes a new microblogging architecture based on peer-to-peer networks overlays. The proposed platform is comprised of three mostly independent overlay networks. The first provides distributed user registration and authentication and is based on the Bitcoin protocol. The second one is a Distributed Hash Table (DHT) overlay network providing key/value storage for user resources and tracker location for the third network. The last network is a collection of possibly disjoint “swarms” of followers, based on the Bittorrent protocol,..."
(and I am sure someone will come up with an example of a blogging platform that starts at the level of specialized mesh networking hardware or something like that...)
This isn't quite as crazy, but in the spirit of overengineered blogs, this guy tried to make every single element of his blog optimized by automatic A/B testing: http://www.metamorphosite.com/about
You should have seen it about a year ago, it was even crazier. The same layout engine in C, but it also just did a massive innerHTML set at the root of the page with a huge string of HTML. It was awesome.
Good point, though. He beats me by far.
(PS. you probably know this, but I forgive him because he's an awesome low-level games programmer so it's in his blood to do stuff like that)
LOL. When I saw the OP headline in the list, I immediately thought, "Someone is complaining about Casey Muratori's blog again"...it seems virtually every time something of Muratori's is posted somewhere (on HN, reddit, etc) -- which is frequently, given the quality of his articles -- someone will chime in to bash the source code of his blog.
While the effort is remarkable, the web isn't fixed and this isn't responsive. Just wondering what happens if someone doesn't have that font installed, or if the browser renders things slightly differently and everything ends up off. At least the whole website isn't an image file ...
To me it reads less like a good solution and more like an expression of ignoring the fact that websites always display slightly differently.
The browser is still figuring out the sizes (the invisible div with the innerHTML). If it doesn't have that font, it will report the size for the font it does have, and then the layout engine will use that size. It's wonderful.
Does that explain why Pocket fails on that blog? Every few months I pocket something from it and then am disappointed when it comes up blank while I'm on a plane, etc.
It might be worth turning your GitHub links in the article into permalinks (hit 'y' in the GH interface once you've targeted the file and line) so they don't fall out of date.
You're correct about it being sort of awkward at first when you are figuring out how to structure an isomorphic application. It gets even weirder when you only use JavaScript as a compilation target.
I created an example isomorphic Clojure and ClojureScript application (https://github.com/domkm/omelette) and wrote about how it works (http://domkm.com/posts/2014-06-15-isomorphic-clojure-1/). It was an interesting experience. I wouldn't recommend it for production quite yet, but I'm looking forward to (and working toward) the day when we can easily deploy isomorphic applications that are not written in JavaScript.
I've seen that! I'm happy to see other languages do this as well. I love Clojure(Script) and there are several things I've borrowed from it (if you look in my code, I use CSP channels and transducers). :)
No, I will readily admit Clojure(Script) is a better language. The only concern might be getting all your team on board with a Lisp, but if that's not a problem, go for it.
If you are not entrenched in the JS world, I would look into Clojure(Script). They've got a lot of really cool stuff going on.
I have significant mental investment in JS, and I work on the JS debugger for Firefox, so I feel like I need to be a heavy user of JS. Also I know so much about all the little corners of server/client JS, package management, how to deploy, etc. I don't think the return would be great enough to re-learn all of Clojure's tools.
If I were to start on certain types of apps (high-performance data modeling, or anything that requires special care about complex flows), I may look into Clojure. But honestly I'm interested in starting to write games, so I'm probably going to spend mental energy learning Rust.
If you come from a JS world and like the code->refresh->code->refresh workflow (I do), Clojure is a hard adjustment, because it requires a lot of boilerplate code up front and you have to remember to run a slow leiningen daemon for each project you're working on.
I'm also experimenting with isomorphic React apps. I'll be presenting some of my work at React Conf at the end of the month (so this isn't yet as well documented as it should be), but here's how I'm trying to handle async data with ReactRouter:
In that example, the editBike view depends on bikeID. actionsForRouterState makes sure that the viewBike action is called to populate the CurrentBike store before any route that includes bikeID is rendered.
That's interesting - so the named parameter for required data in routes.jsx is something you're planning to add? (I can't see it; it just has the standard name/route/handler at the moment)
I've been trying to do the same sort of thing - currently I'm parsing the routes separately in koa and setting data directly on the stores before the React components load, so they're pre-populated with data. Seems a shame to do all the routing twice though.
In my system, the data and routes aren't coupled. Here's the flow:
- ReactRouter figures out which routes are being rendered. It yields routerState, which includes a dictionary of all the active parameter names and their values.
- The actionsForRouterState dictionary declares which actions and stores are correlated with a particular route or parameter name.
- callActionsForRouterState filters actionsForRouterState to include only the actions that are relevant for the routes and parameter names dictated in routerState.
- It calls these actions, returning a promise that resolves when all their stores are filled.
- After that promise resolves, it's safe to call React.render. Any calls from the active components to Store.listen will be resolved with the prepared data during mounting (e.g. immediately before the render pass).
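A minimal sketch of that flow, assuming the names from my description above (the data shapes and implementation details here are invented for illustration):

```javascript
// Hedged sketch: maps parameter names to the action that fills the
// relevant store. The real actionsForRouterState is richer than this.
var actionsForRouterState = {
  bikeID: function (value) {
    // e.g. viewBike(value) -> a promise that resolves once the
    // CurrentBike store is populated (stubbed here).
    return Promise.resolve({ store: 'CurrentBike', id: value });
  }
};

function callActionsForRouterState(routerState) {
  // Keep only the actions relevant to the parameter names that
  // ReactRouter reported as active in routerState.
  var promises = Object.keys(routerState.params)
    .filter(function (name) { return actionsForRouterState[name]; })
    .map(function (name) {
      return actionsForRouterState[name](routerState.params[name]);
    });
  // Resolves when every relevant store is filled; only then is it
  // safe to call React.render.
  return Promise.all(promises);
}
```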
bevacqua, I don't know what you are trying to achieve by that, but if this is an actual serious project, know that this commit log is unreadable and would repel any potential contributor to the project you might find. And I don't mean to offend because, going by your profile, you're obviously a very active open source contributor - but I also found the exact same log quality issues in all your other projects.
If you truly care about them, you should care about that too.
Would this make it possible to have the whole page in <noscript> tags to make it crawlable by all bots and browsable in Links, Lynx, Dillo, w3m and offByOne?
Yes, that's a good idea. You'd just need to introduce a little wrapper around the client-side rehydrate to move the contents of the <noscript> tag out of the <noscript> before calling React.render.
Then again, you don't really need <noscript> in that case. It's rendering server-side, so the page already displays correctly, you just need to make sure all your links work.
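If you did want the <noscript> route, the unwrap step could look something like this (the selectors and function name are made up; a real version would run right before React.render):

```javascript
// Sketch of moving server-rendered markup out of a <noscript> tag.
// Browsers keep <noscript> children inert, so the markup has to be
// read back as text and re-inserted where React will mount.
function unwrapNoscript(doc) {
  var noscript = doc.querySelector('noscript');
  var mount = doc.querySelector('#mount');
  mount.innerHTML = noscript.textContent;
  noscript.parentNode.removeChild(noscript);
  return mount;
}
```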
I've been doing something similar using Backbone and Ezel[1] (I first started using Rendr[2], but couldn't get into it): As the OP says, it can be a bit confusing to get your mind around. Particularly the routing; part of my intention was to use the "isomorphic" technique -- btw, as a mathematician, I really hate that this term is used...but I digress -- to create a SPA that falls back to a traditional multipage site if/when JavaScript is disabled. It was a bit of a hurdle, but ultimately proved possible.
I might give React a closer look, having read this.
I'm also experimenting with "isomorphic" apps. I wrote a router with an expressjs/hapijs like API (middlewares included). The main point is handling the state before choosing the view. The concept is general enough to avoid being tied to React although React makes things easier. Once you have such a router, handling nested views (a la react-router) is a trivial particular case.
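For anyone unfamiliar with the express/hapi style, the middleware part boils down to something like this toy runner (my own sketch, not the actual router's code):

```javascript
// Toy express-style middleware chain: each middleware gets the request
// and a next() callback; calling next() advances the chain, and
// next(err) short-circuits to done.
function runMiddlewares(middlewares, req, done) {
  var i = 0;
  function next(err) {
    if (err) return done(err);
    var mw = middlewares[i++];
    if (!mw) return done(null, req); // chain exhausted
    mw(req, next);
  }
  next();
}
```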
I'm interested! I can't find your main point in your examples, however - I need to be able to asynchronously get data from all kinds of places before I can even think about calling React.renderToString. Does tom allow that? If so, what am I missing?
Thanks! No express experience, so the term didn't mean anything to me. My understanding of the word "middleware" is "well, it has no UI and it isn't a database or robot arm either, but we don't really know what it is".
Nice writeup. I can definitely see "isomorphic" or client/server frameworks (React and Ember especially) being the go-to choice for new sites.
I guess an exception would be richer "experiential" type of apps and toys, where some DOM might be replaced with Canvas/WebGL, and transitions/animations/etc take precedence over SEO and initial load time. I haven't seen much innovation in terms of MVCs or frameworks that target these types of sites.
Great job! Also love the write up you gave and that you shared the code. I tried doing something similar a while back but never got around to finishing it, so I'll definitely dig through this source for some inspiration. I'm very curious how you'll develop the integration between webpack and gulp (or whatever you plan on using) as I tried doing something similar with Grunt but eventually gave up. Having only one build step would be awesome.
Here at Hootsuite we just finished rolling out webpack, and we're using it with grunt. If there is interest, I could probably get the project lead to do a writeup on our code blog.
That would be appreciated for sure. Or even just a quick gist of the setup files. I can probably figure it out, just haven't had enough time to play around :(
Yes, yes, yes. At my company we've been switching everything over to React for both client/server rendering, using our own ExecJS rendering engine for Ruby on Rails. It certainly adds a bunch of complexity to the stack, but every improvement we make pays off: we keep the API stack we like while cranking out rich UI, and worry much less about the client/server distinction.
Needless to say, I am a React fan.
That said, here are some things you will run into as you build more complex applications with this architecture. Some are related to shared browser/server JavaScript and some are React in general:
* You will find that some JavaScript libraries you want to use should not be loaded into the server context.
* On occasion, you will need to add server-vs-browser conditionals. And creating components to deal with browser-focused JS libraries (think Google Maps) will require some liberal and sometimes awkward use of React component lifecycle methods.
* Until there's a nice, agreed-upon open-source layering library for React out there, or you make your own, things like click-based (not hover-based, which can work with just CSS) dropdowns and non-modal dialogs are weirder to get to dismiss (when clicking outside of them) than they should be. Just because of the way React events happen and are bound.
* Also because of the way events are bound, it's awkward to write nice reusable form components. Without some trickery, you can only bind events from the component you're writing to its descendants. You can't bind events from one descendant to another. You end up writing a new component for each form every time, which isn't necessarily terrible until you want to make a fancy form builder like Rails provides in ERB. I've tried every which way around this, and I've had some success, but the solution always feels wrong and ugly no matter how I do it.
* This is really more of a good thing, but it's so enabling that you'll never really be done adding convenience features. There are just so many "Wouldn't it be nice if ____ just worked?" scenarios that get you thinking about the next reusable tool you want. I spend a LOT of time trying to create the "perfect" browser-side data store abstraction and the "perfect" lazy loader/renderer component for not-yet-fetched models.
The tradeoffs thus far have been worth it, but we haven't yet found or built the holy grail. So many things are so much easier than the old ways, but sometimes they're harder.
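To make the server-vs-browser conditional point concrete, here's the kind of guard I mean (the helper name is made up):

```javascript
// Made-up helper illustrating a server-vs-browser guard. On the server
// there is no `window`, so browser-only libraries (Google Maps, etc.)
// should only be touched when this returns true -- typically from
// componentDidMount, which React never calls during server rendering.
function inBrowser() {
  return typeof window !== 'undefined' && typeof document !== 'undefined';
}
```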
Do you have some example use cases for your form descendants vs. event bindings issues? I'd like to see how they could possibly map to React form libraries.
It's kind of an abstract problem to describe without code examples or without diving pretty deep into React, but let me see if I can just make something up without getting too far in the weeds. Imagine you're trying to make a ModelUpdateForm component that you can just drop in, which manages all its own state:
    var ProfileUpdateForm = React.createClass({
      render: function() {
        return <ModelUpdateForm model={this.props.user}>
          // I don't actually have a reference to `form`.
          // I can't have a reference to `form`.
          // But I need to link state between the ModelUpdateForm and its inputs.
          // Anyway, this is never going to work.
          <ModelInput type="text" valueLink={form.modelValueLink("name")} />
        </ModelUpdateForm>;
      }
    });
Weird workaround that I'm not sure is an officially sanctioned way to deal with it and fear might break in future React versions or have unintended consequences:
    var ProfileUpdateForm = React.createClass({
      render: function() {
        return <ModelUpdateForm
          model={this.props.user}
          do={function(form) {
            // Instead of taking children, ModelUpdateForm takes a `do` function prop.
            // It passes itself to `do`, which gives you a reference to which you can link its inputs.
            return <div>
              <ModelInput type="text" valueLink={form.modelValueLink("name")} />
            </div>;
          }}
        />;
      }
    });
Another idea I've been kicking around and haven't tried would be to create some sort of linker object. Here's the hypothetical end result:
    var ProfileUpdateForm = React.createClass({
      render: function() {
        var valueLinker = new ModelUpdateForm.ValueLinker();
        return <ModelUpdateForm model={this.props.user} valueLinker={valueLinker}>
          <ModelInput type="text" valueLink={valueLinker.linkState("name")} />
        </ModelUpdateForm>;
      }
    });
tl;dr - The way React typically wants you to do things, ProfileUpdateForm would have to manage the state. But what I wanted to build was a reusable model-bound form component. If I have to write a lot of model-binding code every time I write a new form, then there's no point to trying to make a reusable component at all. So you start having to try weird-ish ways to get it to become feasible.
It looks like this could make use of React's context feature once #2112 [1] lands.
I have similarish form components [2] which need to do this the hacky way in the meantime (cloning [3] their children in order to pass the form prop they need all the way down):
So are you traversing the entire tree of children here? I'm familiar with cloneWithProps, but I've been wary about this technique because of how deep the components I actually want to bind could be and how much traversal it could take to find them.
This is awesome. I've played around with React's server-side rendering [1] and was really impressed once I'd hooked it all up. To see React hook up the event handlers without re-rendering anything at all was really satisfying.
I haven't had a chance to use react-router, but it's great to hear that they've put in the work to make server-side rendering a reality. I only hear positive things. Kudos.
Very cool, I haven't really looked into React. I have just started with meteor and it's been fun and a big learning experience. My previous background was all with php and traditional client/server interactions. It's quite hard overcoming that way of thinking. Not really sure if it's relevant, but my first meteor app was completely built with calls & methods rather than realising the client/server boundary isn't what it used to be...
Awesome stuff James :) I've been having a lot of those weird client vs. server realizations myself. Still getting used to thinking about stuff that renders in both places.
I'm particularly interested in hearing about your ansible + docker setup. I've been using both a lot lately, and trying to figure out the best way to use them together.
Cool stuff. Was planning to do something similar but never got around to implementing the client-side rendering bit (http://www.jbernier.com). The codebase I work on at work is also isomorphic JS with React. Sharing client/server code is definitely the way to go.
Yes, it is over-engineered. Heck, even WordPress is over-engineered. Blogs are essentially static pages, with the comments as the only moving (i.e. dynamic) part. Most blogs can happily live in a static web folder if bound to something like Disqus or whatever you decide to use.
It's true. I've over-engineered my blog as well to be isomorphic, but then I realized there was zero need for client side functionality so I've rendered it all static via a simple crawler. Much easier to deploy and serve with NGINX. Comments are powered by Disqus.
This is very nice! A question though: it does need knowledge of client state, right? How big is that data per session? Would it be viable to keep the data on the client and send it with each request via a cookie, so that you don't need more server resources?
Which client state? The only penalty is that the server needs to send a "payload" of data to the client on the initial rendering, which is all the data that the server used to render the initial HTML. Once the client picks up this payload, from then on it just uses normal REST patterns to query APIs for data.
You may need a session for authentication, but that's it.
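The payload pattern looks roughly like this (the `__PAYLOAD__` name and the function are invented for illustration):

```javascript
// Invented sketch of the initial-render payload: the server embeds the
// data it rendered with, and the client rehydrates from it instead of
// refetching. Note: a production version must also escape "</script>"
// inside the serialized JSON.
function renderPage(appHtml, payload) {
  return '<div id="app">' + appHtml + '</div>\n' +
         '<script>window.__PAYLOAD__ = ' +
         JSON.stringify(payload) + ';</script>';
}
```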
Wow, JavaScript is approaching science fiction levels of sophistication! Love the idea of moving functionality from client to server or reverse. Knew about Docker but not Ansible, thanks for that. Now we need some Sweet Macros to help too.
> Unfortunately, full client-side apps (or "single page apps") suffer from slow startup time and lack of discoverability from search engines.
This is simply not true. Google announced that they can crawl single page apps just fine nearly a year ago (http://googlewebmastercentral.blogspot.com/2014/05/understan...) and although Bing hasn't announced anything they are likely working on the same thing if it's not already rolled out.
Google does this already; it works. Rendering on the server has little actual value in 2015, but it somehow became a critical feature that all the JS frameworks are fighting for, without a proper explanation of its benefit.
EDIT: It's fine if you disagree and I don't even mind being downvoted, but please do explain.
> Rendering on the server has little actual value in 2015
There is life outside of Google, you know. There are other search engines, other crawlers that are not search engines, other programs people use on your website that are neither browsers nor crawlers.
Can you be more specific? If I'm going to spend a great deal of money on compute and tailor my library decisions based on this feature, I need to know that it is really important. Hinting that Yandex might not have this technology yet is not good enough for me.
There are thousands of pieces of software that hit a URL and attempt to do something useful with the contents. Just try linking to a page on your own site on Twitter and tail your logfiles to see all the interesting bots that show up.
I use http://hn.premii.com/ to read HN on mobile devices. A handful of sites every day aren't compatible with whatever technology they're using to scrape page content. If the comments don't indicate that the article is worth visiting or there aren't any comments yet I don't see the content.
Now, this is just one anecdote about one pair of eyeballs, and I'm an old-school full-browser-on-a-PC pair of eyeballs at that. But I suspect more and more people and machines are consuming web content in new ways, including without a JavaScript interpreter mediating the web experience for them.
It's still true that SPAs suffer from slow startup time.
Google may be able to index SPAs, but they can't do it nearly as effectively as static HTML - which is why they still recommend you use progressive enhancement rather than going JavaScript-only: http://googlewebmastercentral.blogspot.com/2014/10/updating-...
My god, so many incredible links to crazy blogs here...
I took a certain pleasure in doing the opposite recently: writing a few simple .html files on an Apache server, with headers in <h\d> and content in either <p> or <ul>.
I've been doing this for a private project, and a LOT of my time is sunk into thinking how my app should be structured. It's an absolute nightmare, but also great fun!
This is a really cool idea. However, I can see how some people might try to abuse the app by playing pranks on other people and the like. I don't have an iPhone, so I can't check whether users are verified in some way (basically a test to see if they are really blind). That seems extreme, but it might be better for helping grow a 'healthier' community.
[1] transcript: http://mollyrocket.com/jacs/jacs_0004_0010.html