> In fact, I think Python is becoming the secret weapon behind every good Javascript application ;)
My recent experience says this is not true. Python is so widely used as a backend that it is no kind of secret weapon - at least not in the way Lisp was a secret weapon for Paul Graham with Viaweb. Every startup I've dealt with recently is building their backend in Python, Ruby or Node. I think the real story here is that compiled languages are on a major downswing in use for startups that build web apps. I haven't run across a startup building their backend with .NET or Java in a couple of years, except for one payment application that built the business logic with C# and MS SQL Server (they still built their web API with RoR).
Recently I tried to go back to compiled languages for a new web project. I really wanted to try something fun in Haskell that wasn't just data processing (what I normally use it for). I picked up Snap, Yesod and a few other libraries only to find that it was just too much of a pain.
Dynamic languages remove so many little annoyances with web development that it's hard to justify going back to a compiled one when you could just cheaply spin up an extra VM or two to deal with the performance differential between dynamic and compiled languages.
I have been doing most of my web app development in Clojure. For many years, I did most of it in Sinatra or Rails. I would argue, based on my own experience, that Clojure + Noir is every bit as quick and agile as Ruby + Sinatra because of the auto-reloading of Clojure, CSS, HTML, etc. in dev mode. (Yeah, BTW, Noir is deprecated, but I still use it on 4 projects.)
Interesting. I was the one who wrote the snippet about my app becoming a zeromq => WebSockets routing layer, and I've been thinking that Haskell's correctness, speed, and fluency at anything stream-related would be perfect for this layer. When you add in being able to share datatypes between the server and client via Fay, it seems ideal!
Can you go into more detail about your architecture (a quick diagram of the connections between these components)? I am very interested -- my own dabblings have been leading me in this direction and firm belief, but I would love to learn from someone else's experience with it. Please feel free to email me if you prefer.
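For what it's worth, the routing layer being discussed can be sketched in a few lines. This is only an illustration of the fan-out idea, not the commenter's actual architecture: asyncio queues stand in for the zeromq SUB socket and the per-client WebSocket connections, and every name here is invented for the example.

```python
import asyncio

async def route(zmq_in: asyncio.Queue, clients: dict) -> None:
    """Fan messages from the zmq side out to every connected websocket."""
    while True:
        msg = await zmq_in.get()
        if msg is None:          # sentinel: shut the router down
            break
        for q in clients.values():
            await q.put(msg)

async def demo() -> list:
    zmq_in: asyncio.Queue = asyncio.Queue()
    clients = {"alice": asyncio.Queue(), "bob": asyncio.Queue()}
    router = asyncio.create_task(route(zmq_in, clients))
    await zmq_in.put("tick")
    await zmq_in.put(None)
    await router
    return [clients["alice"].get_nowait(), clients["bob"].get_nowait()]

print(asyncio.run(demo()))  # both clients receive "tick"
```

Swapping the queues for a real pyzmq socket and a WebSocket library would change the transports, but the routing loop itself stays this small.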
.NET and Java aren't unpopular because of their compiled nature. They are certainly more performant, and the development lifecycle is quicker. The problem is their licensing fees. Python, Ruby, Node, and PHP have no licensing fees, and deployment costs are negligible. Most developers would agree that PHP is horrible; despite that, it is very popular because it is free.
Once performance becomes an issue, compiled languages become more advantageous. This is why Twitter is now on Java/Scala, and Stack Overflow is on .NET.
Dynamic languages are certainly less restricting and may be faster for development at the start, but they have a major maintenance and debugging cost. I shudder when I think about trying to modify a huge JavaScript or PHP codebase. With .NET, it's no big deal.
As a personal preference, I like Objective-C; it has the benefits of both dynamic and static languages and is very performant. Too bad it is mostly limited to the Apple platform.
FWIW Java doesn't have any licensing fees to use for web apps, there are plenty of OSS frameworks available also.
Compiled languages have never been super popular for web development. Before Python/Ruby, it was PHP/Perl that were the driving force outside of "enterprise". I guess enterprise adopts Java and .NET mainly because that's what most college grads come out of school knowing well; also, the type safety may be advantageous when building big systems across hundreds of developers with varying skill levels.
I think the main reason for dynamic languages being popular on the web is that the "save text file -> press F5 in browser" loop and deployment by copying text files to an FTP server feel intuitive, give a quick feedback loop, and have a lower learning curve. Adding a compile & link step just feels like it adds friction.
I worked with a developer with a design & some PHP background on a small project that involved Java, and he really struggled to understand why I was messing around with Tomcat and .jar files rather than just copying .java files to the webroot.
That which was old is new again. I find the cyclic nature interesting, and possibly a response to the ever-increasing availability of computing power.
Originally, the mainframe was where the power was. Then the PC was enough to do a lot of the work the mainframe was doing. Then the need to deliver experiences to users around the world necessitated a shift back to the server for a lot of tasks (try running Google or Facebook locally), with a shift to making the GUI thin... Performance in the browser was woeful.
Now, the performance of the browser across our increasing number of disparate devices is nearly adequate for thick GUIs once more, with the bonus of working across a multitude of OSes (Windows, Linux, Mac, iOS, Android, Windows Phone, Chrome OS, etc.).
This is more a case of "old is still old", I think. We've been hearing about how web apps or browser-based apps will replace desktop apps for many years now, yet it never happens.
I can still remember all of the Java applet, ActiveX control and DHTML hype of the late 1990s. It died out when it was proven to be nonsense. Desktop apps continued to thrive.
Then it built up again a few years later, during the middle of the last decade, when AJAX and jQuery became all the rage. These offerings were marginally better than what we saw in the DHTML era, but still lacking. The only traction they gained was at the very low end. Most serious users still preferred to use real desktop applications.
Now it's a few years later, and there's been a lot of HTML5 hype. Yet the browser-based apps are still no closer to replacing desktop apps. Gmail, for instance, has been around for almost a decade, but a great many people much prefer a real mail client. We also haven't seen much traction when it comes to modern online office suites, and other web-based alternatives to more traditional desktop apps.
Native desktop or mobile apps are going to be around for a long time. I question whether it is even possible for any browser-based app to compete. In order to do so effectively, it would be necessary to eliminate the browser itself, as it is the browser and its related technologies which provide the poor experience we get from many web apps. It just looks like a complete no-win situation for web apps, aside from perhaps at the very low end.
This is all anecdotes unless someone's done a survey, but in my experience very few people use a native app for email anymore. I do so, but can't think of a single other person I know who does. Almost everyone's on Gmail, and a lot of people only do IM (Gchat, Facebook Chat) in the browser, too. And there was never a native desktop app for Facebook, but that didn't prevent it from getting a billion users. Likewise, where's the native app for YouTube? We all used to download videos and then watch them in native video players. Not anymore.
Maybe it's a matter of age — I suspect you might be a little older than me and the crowd I hang out with (mostly mid-20s). From my vantage point web apps have obviously already replaced many categories of native apps, and are on the way to replace more. It seems as if you are denying something that already happened.
I think a lot of people who have only been exposed to web based email don't realize what they're missing in a good native client. I have several friends in that mid/late-20s age range that went to work in corporate jobs, used Outlook 2010 for the first time, and then asked me for help getting their personal Gmail/Yahoo/Hotmail accounts working with Outlook because they like doing email there so much more once they learn to use it well.
edit: BTW, I see your comment was downvoted while I was writing mine. I just wanted to say that I did not downvote yours; you make good points about Facebook and YouTube, to be sure.
Outlook is so much worse than Gmail in almost every way. The only good thing about it for work is the integration with all the other MS applications. My experience with Outlook after many years of Gmail has been nothing but frustrating, and I hear the same story everywhere.
I'm genuinely curious what you mean by "almost every way".
I use the Gmail web interface daily because its keyboard shortcuts for labeling, archiving, and deleting are unbeatable for triaging my inbox, but it pales in comparison to Outlook 2007, 2010, and 2013 for composing and scheduling (and I schedule against an Apps for Business account, not Exchange). In particular, pinning search folders like "Unread Mail" to the "favorites" area is invaluable to me because I filter a lot of my mail with "skip inbox" and "keep unread" so lower priority mail doesn't clutter my inbox or distract me with push alerts, but I want easy access to it later. I've tried using g + l "unread" in the Gmail web interface to replace that, many times, but the experience is just so much worse that I don't find it usable at all.
I use both, and I think a lot of people in my age group (30s) are the same way. Gmail is for low-importance things generally I don't need to read, like order confirmations and the random lists I get on because I gave my address to some site. Apple Mail is for stuff I care about - work, personal projects, family relationships, and other serious topics that are worth investing time in.
Gmail has top-notch spam detection, and there's no need to wait while thousands of emails download, which is great. As for negatives, I find the GMail interface to be cluttered and confusing, with its ads, the annoying black bar at the top, random hovering elements, and the occasional exhortation to Buzz / gchat / new Compose experience / whatever it is this week. The UI is unpleasant to spend extended periods in, which isn't conducive to thoughtfulness. Also, its search is bad ("1-20 of many"), so I am uncomfortable storing important data in it.
It's part of a larger pattern. The web is for large quantities of stuff, with low signal to noise; desktop apps are for focusing. I read blog posts on the web, I read books with a native app. I watch cat videos on YouTube, I watch movies with native players. I think this separation is robust.
Incidentally, YouTube does have a native iOS app, which I find to be much better than the web site on my iPad. The same is true for Hulu, Netflix, and ABC's player. Oh, and Facebook!
> The web is for large quantities of stuff, with low signal to noise; desktop apps are for focusing
Interesting point, noted!
> YouTube does have a native iOS app, which I find to be much better [..] on my iPad
Yup, hence my spelling out that there's no native desktop app. With mobile apps we have a few confounding factors:
* less-powerful processors that may be unacceptably slowed down by the HTML/JS layer
* immaturity of multitouch and notification APIs in JavaScript
* limited (if any?) support for putting pure web apps on your home screen, or in an app store
As the APIs mature and tablet and smartphone processors get faster, I expect there to be fewer and fewer reasons to do fully-native apps. But yes, of course, they do offer a better experience today.
I went from using a webapp exclusively for email back to a traditional client. The reason was that the webapp didn't expose enough functionality, and I couldn't change the top-posting default. Having joined several mailing lists where bottom-posting was the norm, it made sense to adapt to that, and I ended up adopting Thunderbird. Prior to that, I hadn't used a "traditional" email client since pine and Pegasus Mail in the late 90s.
In my experience people tend to use webmail for personal email and a native client for business. I guess that's because native mail clients can be a pain to set up unless you understand IMAP, port numbers, yada yada. Also, people don't use personal email much apart from a few cases, whereas for business it is the de facto communication method.
Using the gmail web interface for work email would drive me nuts.
Email can be pretty important to me so it's something that needs to be omnipresent on my desktop.
This is where thunderbird integrated with Ubuntu/Unity is perfect. When I have a new email I get to see the sender and subject instantly on the top right of the screen, the notification icon lights up and the launcher icon tells me how many unread emails I have.
If I were using gmail I would have to keep a tab open and would spend half of the day OCD switching tabs to check if I had anything important.
Of course you could fix this by adding notification features to gmail that integrate with the desktop and showing it in a window that is distinct from my other browser windows.
But then is it really a web app or is it a desktop app that happens to be written in JS?
I'm not certain that you appreciate how crucial and widespread native email clients are in the business and academic worlds, both on the desktop and on mobile devices.
Maybe these users do have personal Hotmail or Gmail accounts. However, for anything serious and work-related, they're very likely using Outlook, Lotus Notes, Mail.app, Thunderbird, or one of the numerous other native email clients.
Many people, especially those in management, sales, accounting, finance or other communication-heavy areas of business, often spend many hours each workday using a native email client, on both the desktop and on mobile devices.
Within technical and academic circles, and also the open source community, many people prefer native email clients because they're so much more powerful than the web-based clients.
Furthermore, I personally know a lot of people who have an @gmail.com account, but they essentially always use it from a native client via IMAP and SMTP, rather than using the web interface. They use it mainly because it's free, and because of its large storage space, rather than because of its web interface.
The Netflix example is interesting. I recently installed the super-hacky Netflix for Ubuntu PPA (works surprisingly well, but can cause your screensaver to activate). It's basically just Windows Firefox wrapped in a custom version of Wine.
The important thing though is that it launches fullscreen and attaches to the unity dock as an independent application whereas with Windows it's just a website that you browse to.
I think the way it integrates on Ubuntu actually makes much more sense in many ways. Though VLC integration would obviously be the superior option.
I think things are still moving in the direction we expected.
Java applets and ActiveX were horrible experiences, that's why they didn't survive, not because desktop apps are fundamentally better. And DHTML is just a different name for webpages that use Javascript - that was doing pretty well last time I checked.
Google Docs and Gmail replaced traditional desktop apps for me a long time ago. Sure, not everyone has moved, and it will certainly be a long time before the enterprise sector throws out its desktop apps, but in the consumer space many people just use web-based mail clients. According to year-old Litmus stats [0], 31% of email was webmail (and how much of the Outlook traffic is from office workers?).
Xero have done an amazing job of pushing into a space dominated by an old desktop app (Sage). People are moving, slowly but surely.
I know that desktop apps are not going to vanish soon but it's not fair to say that web apps haven't gained much traction.
Any issues you're seeing in the experience at the moment will be smoothed out. There are mind-boggling amounts of resource being poured into making web apps faster (V8 etc.), and the emerging frameworks for rich interfaces (Angular etc.) are going to enable a whole new breed of web app.
It's funny: this architecture he mentions, of one-page-app clients on the frontend and microframework servers on the backend with Python, is precisely the model I chose for a recent side project.
What I learned:
- Javascript requires a lot of architecture to be sustainable over even medium-sized apps (I stole N. Zakas's architecture)
- Writing functional tests for Javascript is difficult. (Writing unit tests is easy)
- Most web browsers are nearly there with everything you need for one-page-apps. IE8 is the only real problem, and I think my problems are soluble.
- Python's single threaded nature requires that you make explicit effort to add offline jobs (I added a priority queue using redis)
- Elasticsearch is actually quite complicated once your needs move out of the completely trivial
- Most of the documentation for Elasticsearch assumes a working knowledge of Lucene (which I don't have).
- Mongodb is really really easy to develop against
- Getting a single page app to be sensibly indexed by Google is not easy.
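On the offline-jobs point above, one common shape is a priority queue of callables that a background worker drains. A minimal, hypothetical sketch, with heapq standing in for the redis sorted set the comment mentions so the example runs without a server:

```python
import heapq
import itertools

class PriorityJobQueue:
    """Lower priority number runs first; a counter keeps FIFO order on ties."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def enqueue(self, priority, job):
        heapq.heappush(self._heap, (priority, next(self._counter), job))

    def run_next(self):
        priority, _, job = heapq.heappop(self._heap)
        return job()

q = PriorityJobQueue()
q.enqueue(5, lambda: "send newsletter")
q.enqueue(1, lambda: "resize upload")
print(q.run_next())  # "resize upload" - the priority-1 job runs first
```

With redis you'd get the same semantics from ZADD/ZPOPMIN on a sorted set, plus persistence and multiple workers.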
> Getting a single page app to be sensibly indexed by Google is not easy.
I think this is the most important point when people are deciding whether they should move their entire frontend layer to the browser in their new projects, and frankly, a lot of guys pushing (or developing) for single-page webapps don't get that.
While it is possible to have a mix of client-side routes/pushState/server-side generated templates, it's actually non-trivial. So I really don't see a lot of cut-and-dried separation of layers anytime soon, until Google can start indexing sites that fetch data after the HTML structure has arrived.
Big full-stack webapp frameworks are here to stay, though we might see more webapp frameworks emerge that are great at doing API-first development.
I've been using AngularJS for a project over the last few days and I've found that it really helps with the structure of the JS.
Some other points:
- haven't tried Angular testing yet but it has end-to-end testing (and you can swap out your backend services for mocking)
- you can use Celery (uses RabbitMQ/Redis/Mongo/whatever underneath for the broker) for offloading the jobs in Python. Just add a decorator to your work function, makes it really easy (you can also use PiCloud if you need something with more crunching power)
- ElasticSearch docs are still a bit of a minefield. I was already familiar with Solr and Lucene and I struggled with them (ElasticSearch is really nice though)
- Lucene itself is very complicated when you need it to do more for you. ElasticSearch can't protect you from that complexity but there is a mighty engine down there to solve all your searching needs
- Mongo makes dev easy, but I worry about the long term cost when things get complex. I did a job recently and found that my brain struggled to keep the internal Mongo doc structure intact.
- I wouldn't make a single page app if I wanted it indexed by google. I guess there are usecases but I haven't run into them yet.
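To make the Celery point concrete: the appeal is that a plain function becomes a background job just by decorating it. The sketch below only imitates that decorator pattern with a stdlib thread and queue (no broker); with real Celery you'd decorate with `@app.task` and call `fn.delay(...)` the same way.

```python
import queue
import threading

_jobs: queue.Queue = queue.Queue()

def task(fn):
    """Attach a .delay() that enqueues the call instead of running it."""
    def delay(*args, **kwargs):
        _jobs.put((fn, args, kwargs))
    fn.delay = delay
    return fn

def worker(results):
    while True:
        item = _jobs.get()
        if item is None:          # sentinel: stop the worker
            break
        fn, args, kwargs = item
        results.append(fn(*args, **kwargs))

@task
def add(a, b):
    return a + b

results = []
t = threading.Thread(target=worker, args=(results,))
t.start()
add.delay(2, 3)   # returns immediately; the work happens in the worker
_jobs.put(None)
t.join()
print(results)    # [5]
```

The broker (RabbitMQ/Redis/Mongo) in real Celery replaces the in-process queue, which is what lets the worker live on a different machine.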
It's still early days for this stuff and it still has a way to go. We're all going to learn some lessons along the way I'm sure.
Thanks for sharing your code, I'll look at it later when I'm not supposed to be getting a job finished :)
> Mongo makes dev easy, but I worry about the long term cost when things get complex. I did a job recently and found that my brain struggled to keep the internal Mongo doc structure intact.
I don't worry too much about that. You can use the repository pattern to manage the serialisation and storage of your objects. Given this is something you'd typically do in most backends, I don't think it's a big ask. I do worry about the Mongo scare stories.
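A rough illustration of the repository pattern being described: the rest of the app talks to a small repository object, and only that object knows how documents are shaped. An in-memory dict stands in for a Mongo collection here, and the `UserRepository` name and document shape are made up for the example; swapping in pymongo would change only the repository internals.

```python
class UserRepository:
    def __init__(self, collection):
        self._collection = collection  # dict here, a Mongo collection in prod

    def save(self, user_id, name, email):
        # the document shape lives in exactly one place
        self._collection[user_id] = {"_id": user_id, "name": name,
                                     "email": email}

    def find(self, user_id):
        doc = self._collection.get(user_id)
        if doc is None:
            return None
        return (doc["name"], doc["email"])  # plain values, not raw documents

repo = UserRepository({})
repo.save(1, "Ada", "ada@example.com")
print(repo.find(1))  # ('Ada', 'ada@example.com')
```

If the internal doc structure drifts, only this one class has to keep it intact, which is the worry the comment raises.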
> You can use Celery (uses RabbitMQ/Redis/Mongo/whatever underneath for the broker) for offloading the jobs in Python. Just add a decorator to your work function, makes it really easy (you can also use PiCloud if you need something with more crunching power)
I can't remember evaluating Celery, so possibly I overlooked it. Perhaps it didn't work with Python 3 at the time?
I've also successfully implemented Zakas' [1] approach after watching his presentation "Scalable Javascript Application Architecture" [2] and abandoning Backbone.Aura [3]. Aura theoretically implements this, but in practice we found that it made some assumptions that were incompatible with our needs. Consequently, we rolled our own implementation.
The presentation (which was also posted by politician) was very useful in explaining the theory. I also made some changes to the approach, but offhand I can't remember what they were. I think it was to do with the separation of starting vs. initialising modules (parts of a page).
> Getting a single page app to be sensibly indexed by Google is not easy.
If the backend is done correctly, it is not a problem at all; an app should be able to degrade gracefully when JavaScript is deactivated. If it can't, then it is useless to bring up the issue of SEO. SEO is not something to be thought about only after the app is developed.
I think you're misunderstanding what a single page app is. If a single page app degrades gracefully when JavaScript is turned off, then it is probably not a single page app.
A single page app is a web app where there is just one HTML file, but the JavaScript gives it the feeling of being a whole site (client-side template rendering, etc.). I think, in essence, a single page app is one where there is no server-side templating.
What do you think of the single page app you've described, but with multiple HTML entry points?
Say / responds with the homepage as per usual
and /feed responds with a different HTML start point, but the same application code.
Using this technique you could create a site that responds to multiple URLs with HTML and at the same time creates a SPA experience using a single JS application (say, with Backbone for example).
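A minimal sketch of that multi-entry-point idea, as a plain WSGI app (no framework assumed): each route returns server-rendered HTML that a crawler can index, and every page pulls in the same single JS application. The routes and script path are invented for the example.

```python
PAGE = """<html><body>
<div id="content">{content}</div>
<script src="/static/app.js"></script>  <!-- same SPA code on every route -->
</body></html>"""

ROUTES = {
    "/": "Homepage content",
    "/feed": "Latest feed items",
}

def app(environ, start_response):
    """WSGI entry point: crawlable HTML per URL, one JS app for all of them."""
    content = ROUTES.get(environ["PATH_INFO"])
    if content is None:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"not found"]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [PAGE.format(content=content).encode()]
```

The JS app would then take over routing on the client after the first load, so users get the SPA feel while Google still sees real content at each URL.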
Although there is definitely a shift going on, I suspect it won't be half as radical as people like the author tend to evangelize. This model may be a big shift for a simple web app, but for larger applications with lots of business logic it will barely make a difference.
The "big web app full of things" will still be there, only with a few things less, and it will still be built with tons of tools, not just a micro framework.
The future of web development won't be that different. There will just be more of it, but that's pretty much what has been happening since the 90's.
There's enough exciting new stuff happening without going overboard and announcing it as a new era or a major paradigm shift. Maybe I'm getting old, but I find it tiresome.
These people live in a bubble; they don't consider that there are other kinds of software that require much (so much) more business logic, which only an insane man would try to solve with JS.
The problems that can be solved using JS grow larger with each passing year, though. I wrote a video recording and exporting app for a client this year using web technologies (HTML + JS + Node). I couldn't have imagined that being possible 10 years ago.
I am not talking about that. What I mean is when you have to deal with more complex interactions between different systems, old technologies, ...
What you say is true; JS is going to evolve over the next few years until it turns into something totally different, you will see. Some people call that reinventing the wheel.
I am not against JS at all, I just don't think every single problem can be resolved with it, sorry.
In this case, it captures and streams from the computer it is running on via a video capture card. The application is for archiving audio and video from an external box that it monitors.
The nice thing about doing it that way, is that adding the capability to stream real-time from the external box that is being monitored is as easy as opening the ports to external access.
Looking at MVC, a common recommendation is to put the business logic into the model. "Fat models, skinny controllers"
IMHO the JS frontend should contain almost no business logic. Rather, it should be handled either by "the" backend or, if it is really that complex, by a separate web service. (It goes without saying that this service can be written in any language that makes sense.)
(On the other hand, the success of node.js proves that medium sized applications can be written in pure JS.)
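As a toy illustration of keeping business logic out of the JS frontend, in the "fat model, skinny controller" spirit mentioned above: the pricing rule below lives in the model, and the endpoint just shuttles JSON-shaped data in and out. The discount rule and all names are invented purely for the example.

```python
class Order:
    """Fat model: owns the business rules."""

    def __init__(self, items):
        self.items = items  # list of (name, unit_price, quantity)

    def total(self):
        # business logic belongs here, not in the JS frontend
        subtotal = sum(price * qty for _, price, qty in self.items)
        discount = 0.10 if subtotal > 100 else 0.0
        return round(subtotal * (1 - discount), 2)

def order_total_endpoint(payload):
    """Skinny controller: parse input, delegate, serialize output."""
    order = Order([(i["name"], i["price"], i["qty"]) for i in payload["items"]])
    return {"total": order.total()}

print(order_total_endpoint(
    {"items": [{"name": "widget", "price": 60.0, "qty": 2}]}))  # {'total': 108.0}
```

The frontend only renders `total`; if the discount rule changes, no JS ships.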
MVC isn't just a good idea for making development easier; it's also the natural resting state for most applications. In the future, we'll just be distributing the models, views, and controllers across multiple machines and networks, just as was done in the past.
Still, the server will render the main page.
The main page is "HTML + the main content that the user and search engine want to see". Search engines need to index this page (meaning the URL), so the main content must be the same, unless it is a private page. Also, browsers are designed to render HTML from the server. HTML5 brings in a lot of dynamic features, but that main feature is not going away.
Interestingly enough we still prefer to use a Twitter client instead of twitter.com, in fact those clients are what made Twitter big or at least so a lot of people claim.
Why would that be any different for other things? Why should that change in the foreseeable future?
wait... this guy thinks the future is going to be simpler? when has that ever happened?
However, I largely agree on the right side of the big vertical network line in his diagram, and I feel this has been obvious for years now. The left side, not so much. I think the reduction of web services to JSON type data will enable an explosion of client side diversity, and that is a good thing.
Also there is nothing classic about "server-side" MVC : it is ridiculous and never made any sense.
"wait... this guy thinks the future is going to be simpler? when has that ever happened?"
It happens constantly. It's such a strong urge that making things simpler itself has become simpler.
I'm touching simpler right now. This linux distro (lubuntu, doesn't matter) took me much less than an hour to download and install, fully functional with no configuration by me. In fact it was fully functional before I installed it, because it was a live distro that connected to the internet with no configuration on my part.
15 years ago that would have required me to download a bunch of separate files, copy them to a stack of floppy disks, and keep a book by Matt Welsh open on my knees while I tried to figure out again how to make my computer talk to my modem and to the wider internet.
I'll completely reinstall my OS on a whim, with confidence. I used to approach it with dread and a good night's sleep.
I also don't adjust the choke on my car when I start it, and I have no idea how to safely crank the engine from the front of the car; actually, I can't find the place to put the crank handle in, and I can't find the crank.
30 years ago you could actually "understand" the whole computer. While 64 KiB is quite a bit of RAM for a human to keep track of, it was still possible to quite easily remember how the address space was laid out, which exact addresses stored the individual variables used by the OS/environment, and which addresses to PEEK/POKE to interface with the OS services. People really knew how the machine operated at the hardware and software level. Inside out.
20 years ago it was much harder to understand the whole machine or its operating system. While there was direct access to low-level services (BIOS/DOS interrupts on PCs), some APIs started emerging, including OpenGL and soon DirectX, to make things more complex. People had a hard time understanding the exact innards of the hardware and especially the software, although specialized understanding was still very relevant for many.
10 years ago it had become quite useless to try to deal with hardware directly. Compilers had become so good that hand-written assembly started having a bad benefit/cost ratio. People stopped caring about optimizing code or really understanding the hardware details. Software had already become quite complex to truly understand; focus shifted from understanding the environment to understanding the tools and some libraries.
Today, understanding the hardware is essentially impossible. You can't really deduce what exactly the CPU is doing when you read the instructions; many programmers don't even know what an instruction or a register is, let alone understand branch predictors or memory caches. Lots of code runs on virtual machines rather than directly on real machines. Operating systems have virtual layers here and there, and programming culture (at least to some extent, my own observation) has adopted the perverse idea that you should not optimize or even consider optimizing code, because "PREMATURE OPTIMIZATION IS THE ROOT OF ALL EVIL!!". Programming work requires very little understanding of the system or platform, and many skills are very short-lived and focus on specific technologies/libraries. Most programming problems are researched on the Internet via Stack Overflow. There's very little room for truly understanding the environment; things change so fast and are so complex that it's just not important.
Things get more and more complex at exponential rate. I find this quite sad because I for one love low-level tinkering with stable interfaces and problems/domains which require extensive studying and investigating to come up with creative solutions to uncommon problems, rather than running into problem X and asking on Stackoverflow about what pattern to use or what library call to make to get around the issue.
If you work on low-level software, you can and will use tools like VTune and cachegrind to understand where the performance bottlenecks are for your code.
If you are working with a lot of people who tell you not to optimize, you're probably working on high-level software where the focus is on getting it out the door as fast as possible. Try working for a company like Intel or NVidia. You'll write plenty of assembly language there if you want. There are also things like compilers, virtual machines, and storage software.
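The workflow is the same at any level: measure first, then optimize what the profiler actually flags. A Python analogue of the VTune/cachegrind loop above, using only the stdlib profiler (the `slow_sum` function is a made-up stand-in for a real hotspot):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    """A deliberately naive loop to give the profiler something to find."""
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Dump the top entries by cumulative time into a string report
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print("slow_sum" in report)  # True: the hot function shows up in the report
```

In low-level work the tool changes (VTune, cachegrind, perf) but the loop is identical: profile, find the actual bottleneck, fix only that.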
> Also there is nothing classic about "server-side" MVC : it is ridiculous and never made any sense.
Given that the server doesn't produce any HTML, I agree with you more or less. On the other hand, I like a separation of apps/modules into data acquisition (pulling the data into memory, producing something digestible) and the actual (business) logic. Mixing both up can land you in hell's kitchen if the complexity is high enough.
After all, MVC is only a very loose recommendation anyway. In practice it is not possible to distinguish between M, V, and C 100%. Still, I haven't seen any other global pattern that has been comparatively successful.