The current front-end web stack is not the answer; I don't think that's controversial except to the most die-hard fanboys. Most of the content in this handbook is going to be worthless in a year.
Most front-end work is repetitive, with some minor variations. Rather than a framework, I think an expert system for building web apps, essentially an interface for metaprogramming, would be a vast improvement over the current tech.
Let's see, going by the index, which technologies are younger than a year (the logic being "would your statement have made sense a year ago?", because I obviously can't prove whether a technology will still be relevant a year from now):
Internet/web: no
Web browsers: no
DNS: no
HTTP: no
Web Hosting: no
UI Design: no
HTML: no
CSS: no
SEO: no
JavaScript: no
DOM, BOM, jQuery: no
Web Fonts: no
Accessibility: no
... and so on. But maybe you meant just JavaScript libraries, which seem to make up just a small part of the content, but OK... Let's use "Module/Package Loading Tools" as a sample, because this thing is really too big to go through everything:
Browserify: Started in 2012. So... nope
Rollup: Started in 2015. No (but close!)
SystemJS: 2013. Nope, sorry
Webpack: 2013. Nope
And let's arbitrarily add the behemoth:
React: 2013. Nope
It appears there were an awful lot of new frameworks in 2013, and that led to the impression of framework churn. To perpetuate that narrative four years later seems to be more groupthink than reality.
jQuery has had the reputation of being very antiquated for a while. The problem that it set out to solve no longer exists. The same will be true of what you mentioned above.
Now there are tools that solve problems that arise from other tools; it is getting very meta. An ecosystem like this cannot last indefinitely.
I've been getting into web2py recently. It's a backend framework for Python that autogenerates CRUD forms for you. It'll autogenerate an entire admin backend just from your table definitions in the ORM. I use Pjax to make page loads a bit quicker. I don't miss my front-end SPA framework whatsoever, especially since my sites are SEO-ready out of the box.
This will either be my most highly upvoted post ever or will get me hellbanned, but that MOC blog has got to be some of the most aggrandizing, self-obsessed nonsense I have ever read. It's like he's living in his own fantasy world where he is simultaneously the smartest person on Earth and incredibly oppressed.
The idea that someone is going through life thinking that way is more than a little depressing.
Agree with this. I think that many of the people named here may simply be more talented at marketing their ideas and themselves within certain developer communities. Their work may seem popular and exciting, but they are lost in abstract thought and have little to no connection to the real world. Rarely will you hear about the programmers who write software for the things people rely on every day; these are the unsung heroes.
Looks pretty impressive at first glance, but it seems that most of its features come from dependencies such as PouchDB, RxJS, JSON Schema, crypto-js, and more. The built version is 1.3 MB; minified it is 520 KB, or 140 KB gzipped.
I found the "key compression" feature to be an amusing micro-optimization. It truncates the names of keys, which makes DB migrations tricky. There are better ways to save bytes, namely by using an actual compression algorithm.
I don't buy into the GraphQL hype. It is actually a rehash of older technologies like RDF and SPARQL, but marketed to a newer generation who didn't know that old things exist. I'd wager that the 20-something "engineers" who designed it didn't give a shit about prior art either.
I'm sure that Facebook employees think it solves a lot of problems for them. But I'm not convinced that a single vendor technology is going to be viable on the web. Web standards come about through standardization processes, with multiple stakeholders reviewing and revising drafts. Facebook one day puts up the GraphQL spec out of nowhere and defines the "standard" by themselves, based on their own implementation.
A little history: Facebook tried to subvert HTML with FBML, a proprietary markup language designed for use within the Facebook ecosystem. Long story short, it didn't work out. The various SDKs and APIs by Facebook have been notoriously unstable.
People tend to excel at short-term thinking (kudos to Facebook for that) and, a few visionaries aside, lack the foresight for long-term thinking. The architecture of the web has lasted a few decades already; it will outlast a single-vendor specification.
The GraphQL ecosystem has grown amazingly quickly over the last year. It's definitely not a single-vendor technology at this point. Check out this list of GraphQL libraries, tools, and implementations:
https://github.com/chentsulin/awesome-graphql
The majority of people who are using GraphQL are using an implementation from someone other than Facebook (on either the client or the server, or in many cases both).
(And for what it's worth, I did see a "Why not RDF" slide in one of Lee Byron's decks, and those of us at Meteor who are working on GraphQL are definitely aware of the RDF/SPARQL roots. I think what's driving GraphQL's growth is, first, that it addresses a very timely problem - fetching all of the data for a screen in a mobile app in a single round trip without coupling your backend to your UI - and second, the focus on tooling and developer experience, which has been a weakness for SPARQL.)
I didn't say that Facebook's reference implementation is the only one (it is in fact the most popular based on downloads); everyone else is performing free labor for Facebook. The point is that they can make any change they want to the spec for themselves, imposing their will and bypassing standards processes, because there are none.
>fetching all of the data for a screen in a mobile app in a single round trip without coupling your backend to your UI - and second, the focus on tooling and developer experience
There is no reason why one can't do this with existing web technologies as an additional feature. There was no reason to ignore what already exists and works for the web at large.
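As a sketch of that claim (the resource and field names here are hypothetical, and this is just one well-known pattern, sparse fieldsets): a plain REST endpoint can return only the fields a screen asks for via something like a `fields` query parameter.

```javascript
// Return only the requested fields of a resource, e.g. GET /users/1?fields=id,name
// If no fields are requested, return the whole resource.
function pickFields(resource, fieldsParam) {
  if (!fieldsParam) return resource;
  const wanted = fieldsParam.split(',');
  return Object.fromEntries(
    Object.entries(resource).filter(([key]) => wanted.includes(key))
  );
}

const user = { id: 1, name: 'Ada', email: 'ada@example.com', bio: '...' };
console.log(pickFields(user, 'id,name')); // { id: 1, name: 'Ada' }
```

A server-side route handler would call something like this before serializing the response, which gets you "only the data you need" without a new query language.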
I think you should disclose that you are founder of Meteor and have a vested interest in GraphQL. So when is the Facebook acquisition?
While I also have a vested interest in GraphQL (competing in the same space as Scaphold), I partially agree with what you stated above, but it's not as bad as you seem to imply.
Yes, FB does have the power (now) to change the spec overnight, but so far it has been mostly stable, and whatever changes were added were only for the better. No new technology comes to life as a standard; it needs someone inventing it, maintaining it, and promoting it up to the point where people can get behind it, form a group, and make it a standard. Let's give FB a chance; so far it has done an excellent job with the GraphQL standard.
On the other hand, graphql-js and Relay are not that stable yet and their interfaces are changing very fast, which is probably a bit frustrating for the people who use them, but it's only been a year and I bet they will reach interfaces stable enough in 2017 for people to really depend on them.
>There was no reason to ignore what already exists and works for the web at large.
GraphQL came to life (and a lot of people adopted it practically overnight) precisely because the things that "already existed" did not work that well (REST for SPA/mobile).
When a lot of your business logic (whatever that means :)) moves to the frontend (browser/mobile), the backend API tends to become very complex in order to support the frontend, and REST cannot express that complexity very well.
Whenever people "sell/promote" GraphQL, they bring up two main benefits: fetching the data you need in a single request, and integrating multiple backends/microservices.
If you look at GraphQL only from this perspective, I would agree with you that it brings nothing radically new to the table.
You can have a REST API flexible enough to get only the data you need in one request (see http://postgrest.com), and you can integrate multiple microservices behind it; we've also seen "typed" schemas in the past (WSDL and things like that).
IMO what makes GraphQL so nice is the simplicity (Rich Hickey's definition): how you immediately get what a query does, how it relates to JSON, and the shape of the response you get back. Making something "simple" is very hard work, and I think FB nailed it.
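To illustrate that "single request, query mirrors response" point, here is a minimal sketch; the endpoint and field names (`/graphql`, `user`, `name`, `friends`) are hypothetical, but the query's shape already tells you the shape of the JSON you get back.

```javascript
// Build a single GraphQL request that fetches a user and their friends'
// names in one round trip, instead of one REST call per resource.
function buildUserScreenRequest(userId) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: `{ user(id: ${JSON.stringify(userId)}) { name friends { name } } }`,
    }),
  };
}

const req = buildUserScreenRequest('1');
console.log(req.body);
// A server's response would mirror the query's nesting:
// { "data": { "user": { "name": "...", "friends": [{ "name": "..." }] } } }
```

You would pass an object like `req` to `fetch('/graphql', req)`; the key design choice is that the client, not the server, decides which fields come back.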
I think the nuances of the offering are not entirely clear from the marketing website, so here's what I've interpreted.
The product itself seems to be a Node.js framework that glues together various modules, including popular ones such as Express, Mongoose, and Socket.io. They monetize it via consulting and hosting, for which they offer a fixed price with "unlimited" bandwidth and storage (it is very unclear how they may throttle this). The pricing is also exceptionally poor: 250k requests, when a fast HTTP implementation may respond to 250k requests per second rather than per month, and 5 GB of storage, for $50 a month...
They don't seem to have any sort of release strategy, and the readme tells you to clone their repository; updating it as a dependency would require pulling from their repo. There are also absolutely no tests, so you don't know whether the latest commit is working or contains some work in progress. The signup process seems needlessly difficult: one needs to manually craft an HTTP request to some endpoint with some payload.
The "AI to build exceptional apps" pitch is vague. I can think of some possible cases such as automatic indexing based on querying patterns, but this is just speculative. I wouldn't trust it unless I know what it does.
- Yes, it does bundle frameworks, but more importantly, it bundles databases like Mongo, Elastic, Redis, etc. The idea is that we completely abstract databases away from you and give you an API that does everything. We handle the rest: data sync between these databases, managing them, scaling, etc.
- We're on a "Pay as you go pricing". Let me know your thoughts on this.
- I completely agree, we don't have a release strategy and we need to work on it. You'll see GitHub releases every week within a few weeks from now. Thank you for the feedback.
- One of the visions of the product is to learn how your app uses the service and how your app queries the db - and auto optimize data between databases, so you never have to think about storing your data in different types of databases ever again.
On the last point: I think trying to do this is one of the big reasons Parse failed. Auto-optimizing DBs is an incredibly complex and delicate task.
My Parse app's write performance suffered because my biggest table (27m rows, 10 small columns, 15GB) ended up with 17 auto-created indexes, taking up an additional 15GB of space.
I personally wouldn't call Parse a "failure", and without going into details, this has absolutely nothing to do with why Parse shut down.
That said, the real challenge, one that other such platforms might not have, was handling all the very different DB workloads of all the apps we were hosting.
I worked on some of the pieces of that auto indexer. In most cases, this was a feature that was both necessary for us and extremely useful for our customers who didn't know how to manage their own DB. What was arguably missing was a way to expose the indexing operations to the developers, although this would have brought its share of other challenges, for obvious reasons.
If we created 17 indexes on the same collection (in some cases, it was way more!), that's because query families were being issued that needed those 17 indexes. I can't say this is your case, but in almost all instances I've seen, this was the result of poorly designed DB schemas and query patterns. Of course, even for developers who know what they're doing, it's hard to design properly when you're dealing with a black box.
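To sketch what "query families" means here (this is an illustration, not Parse's actual algorithm, and the field names are made up): each distinct combination of filtered and sorted fields generally wants its own compound index, so many ad-hoc query shapes translate directly into many indexes.

```javascript
// Each "family" is the set of fields a class of queries filters and sorts on.
const queryFamilies = [
  { filter: ['userId'], sort: ['createdAt'] },
  { filter: ['userId', 'status'], sort: [] },
  { filter: ['status'], sort: ['updatedAt'] },
];

// A naive auto-indexer derives one compound index spec per family:
// filtered fields first, then sort fields (MongoDB-style { field: 1 } specs).
const indexes = queryFamilies.map((q) =>
  [...q.filter, ...q.sort].reduce((idx, field) => ({ ...idx, [field]: 1 }), {})
);

console.log(indexes.length); // one index per query family
```

With 17 distinct query shapes against one collection, a scheme like this produces 17 indexes, which is how schema and query sprawl turns into index sprawl.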
Yes, amplification and DB size are an issue when over-indexing, but our auto indexer was under constant tweaking and wasn't creating indexes "just because".
Actually the site loads about 4.3 MB of compressed data over the network, or 6.5 MB uncompressed.
Edit: out of curiosity, I looked at the package.json of their open source project, there are 51 top-level dependencies. After installing, there's 140 MB of dependencies, or about 800k lines of JS.
I haven't heard of Fabric before. Fabric seems to have an ambiguous name and their marketing website is equally ambiguous. Something to do with mobile app analytics? I find this trend in developer tool marketing to be appalling.
They used to be called Crashlytics. Best crash reporting tool. Just works. I guess they are probably a lot more interested in ranking for terms like crash reporting than in people chancing upon their website and trying to figure out who they are.
I don't think it's a good idea in general to mock server responses, because the responses are subject to change while the mocks aren't; it's better to just run the server and make a real request.
The main difference between Service Mocker and pretender is that we are using the Service Worker API, so the HTTP requests and responses are real and can be inspected in devtools.
And usually, we mock an API while the server isn't ready to use. For example, when you've just started a new project with the backend dudes, it's quite likely that the server will be unusable for the first few days; that is the best time to mock your APIs with such a tool :).
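A minimal sketch of that approach (the route and payload below are made up, and this is not Service Mocker's actual API): the service worker intercepts fetches, answers the ones it has mocks for, and lets the rest hit the network, which is why the requests still show up as real traffic in devtools.

```javascript
// Table of mocked routes; each handler returns the JSON body to serve.
const mocks = {
  '/api/users': () => ({ users: [{ id: 1, name: 'Ada' }] }),
};

// Look up a mock body for a request path; null means "fall through to network".
function mockBodyFor(pathname) {
  const handler = mocks[pathname];
  return handler ? JSON.stringify(handler()) : null;
}

// Inside the service worker script, this lookup drives the fetch handler:
// self.addEventListener('fetch', (event) => {
//   const body = mockBodyFor(new URL(event.request.url).pathname);
//   if (body) {
//     event.respondWith(
//       new Response(body, { headers: { 'Content-Type': 'application/json' } })
//     );
//   }
//   // No respondWith call => the request goes to the real server.
// });
```

Because unmatched requests fall through, you can mock only the endpoints the backend hasn't finished yet and swap the mocks out one by one as the real API comes online.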
1) Doesn't always work if you want to target embedded systems or need performance, and all you know are scripting languages with huge overhead like Ruby, JS, Python, etc. Some languages really are better than others.
2) One could say: avoid distributed computing if your problem is not distributed. This one is more about blindly following the latest hype.
3 & 4) Complicated DevOps is a bad idea in general. Stuff like Docker that seems to simplify things on the surface is actually hiding tons of complexity underneath.
5) To most people, Agile = JIRA = Sprints = Scrum. It's corporate mentality codified, so it's no surprise that a lot of startups avoid it.