Our web development workflow is completely broken (kenneth.io)
294 points by codelion on June 26, 2013 | 117 comments



A small nitpick: the article lists the "IE Developer Toolbar" for IE6 first, and suggests that everybody else's development tools followed that model. In fact, the earliest reference I can find to the IE Developer Toolbar suggests[1] that it was released in early 2007, while the Firebug version history[2] shows Firebug v0.2 released in January 2006, so it's about a year older than IE's developer tools.

Even if Firebug itself is no longer head-and-shoulders better than other tools, the fact that the basic model was copied and polished by Microsoft, Apple, Opera and later Mozilla themselves shows that Joe Hewitt had a pretty great idea back in early 2006.

[1]: http://blogs.msdn.com/b/ie/archive/2007/01/09/ie-developer-t...

[2]: https://addons.mozilla.org/en-US/firefox/addon/firebug/versi...


No, you're wrong. You obviously weren't around, so you probably don't know it was in beta for donkey's years. I actually remember first using it in 2006.

Here's an article about the beta 1 refresh from Nov 2005, so it was out even earlier than that.

http://blogs.msdn.com/b/ie/archive/2005/11/01/487833.aspx

Your google-fu needs a lot of work as even my first search showed a lot of articles in 2006 and 2007 talking about the IE dev toolbar, e.g.:

http://betanews.com/2007/05/10/microsoft-releases-ie-develop...

EDIT: Interestingly reading through some of the comments on the first post, it seems people claim the IE dev toolbar was inspired by http://chrispederick.com/work/web-developer/ and the initial release date for the IE Dev toolbar was Sept 2005 (see http://c82.net/posts.php?id=23)


The link for the IE Developer Toolbar in the original article carries a date of September 19, 2005 in the URL. Which makes sense, because it was demonstrated at PDC 2005, which was the previous week.

Link: http://betanews.com/2005/09/19/microsoft-issues-ie-developer...

Note also that the release notes for Firebug v0.2 state: "This is a very early release - the code is only a few days old." Whereas the IE Developer Toolbar probably began a couple of months before its first release. So even by looking at the first mention on the web, we're comparing a days-old Firebug to a fairly-complete IE Developer Toolbar.


That is my recollection — the frustratingly hard-to-google Web Developer extension was the first thing along those lines I ever heard of. I continued using Web Developer for a long time after Firebug came out because I sort of wrote it off as a ripoff.


The Mozilla DOM Inspector is even older; it was even bundled in the Firefox installer in 2003. If the IE toolbar's DOM inspector was the reason some devs used IE6, as the article suggests, I would bet that they had never even tried Firefox.

http://kb.mozillazine.org/DOM_Inspector


And once again the top comment doesn't do anything to continue the thoughtful discussion that began in the article, instead providing a pedantic correction.


And once again the top reply to the top comment has even less value, sending us off into pointless meta-discussion.


Pike: That's a technicality.

Spock: I am Vulcan, sir. We embrace technicalities.


It's not the most important comment on the subject, but it is right that the Firebug developer gets credit.


If it weren't for pedants, we wouldn't have computers.


Yes, because the introduction of the computer was a minor change to the world that only a pedant would have bothered with?


Good point, it wasn't a minor change. I meant to specify working computers.


That doesn't seem relevant. If it wasn't for hunter-gatherers, we wouldn't have much of anything; but that doesn't mean hunter-gatherers have much to contribute to particle physics.

There are times when it's important to be pedantic, like when you're writing software. (I don't think we normally use the word "pedantic" in these cases, but I'm not going to argue the matter.) And there are times when it's not important, like when an article mixes up the order in which two pieces of software were released.


A cornerstone of writing any kind of history is establishing an accurate chronology of events. In peer-reviewed articles the details of who came first in related work sections are critical. At any rate, I think many of us simply find it interesting to hear the real story. It's not like the guy was correcting a grammatical mistake, and it's not like he was impolite about it.


There's nothing wrong with the comment in and of itself, in my opinion. In fact, I appreciate it. But it's too bad that relevance to the original article seems to have so little correlation with votes.


This would be much less of a problem if we could collapse comment trees like on Reddit. It would be easy to just minimize and ignore. I think someone made an extension for that.


You might find the Hacker News Collapse extension useful: https://chrome.google.com/webstore/detail/hacker-news-collap...


I was about to point that out myself. Giving credit where credit is due is important, and that is a considerable mistake in my eyes.

Not to mention I was using Firebug before IE had any development tools, so it saddens me that Microsoft gets credit for it.


Yes, before Firebug, we were using MS Script Debugger/Script Editor in IE. It wasn't great, but it was more reliable for script debugging than Firebug for a long time.


Before Firebug, there was Venkman[0] which was definitely shite but worked if you had enough courage to understand its byzantine behavior. It provided a profiler on top of the JS debugger.

And IE development could use Visual Studio (and later Visual Studio Express web edition) for a slightly better experience than the MS Script Debugger, although the core issues, such as not being able to see, let alone debug, top-level JavaScript, remained.

[0] https://developer.mozilla.org/en-US/docs/Venkman_Introductio...


I tried Venkman but found it so terrible I would do most of my script debugging in IE.


It admittedly had a learning curve roughly on a par with Eclipse over POTS with a drunk monkey at the other end of the call. I was rather happy when Firebug came into my world.


This is an interesting discussion. I'm deep into a project now (as the sole developer) where I was surprised to learn that the frontend js is about 30% bigger than the backend (python).

In ipython I use autoreload (thanks to a tip on HN) so that the code in the objects in memory is kept in sync with my editor. My workflow is: load up a bunch of objects I'm working on, try to get the expected output, fail, change code in my editor again, check the output, repeat. Within ipython you can even '%ed obj.broken_function' to edit it directly in vim. There's no doubt that the autoreload discovery improved my efficiency. Aside from the time it saves, I'm able to think of my running code and my on-disk code as the same thing. Don't underestimate how powerful that is.

But as I said, my code is now mostly frontend code. After over 15 years in web development the reload cycle has become second nature to me. It's silly though. I have a lot of state to maintain in the js and every reload is expensive. In my specific case I could be looking at 2 minutes to even load the objects from the web server.

I'm going to have a look at the chrome options to see what's available in terms of hot code swapping. I disagree that it's important for all browsers to support it. Even just finding a good solution in the browser I do most development in (chrome, presently) would be a major win.


this autoreload/live edit is sounding more and more like a REPL...


Hmm. Evidently I didn't explain that very well. In ipython you're working in a REPL. Say you have a good many classes loaded up with data, but some function deep inside one of them isn't producing the correct output. You can debug as you normally would inside the REPL: use a debugger, deconstruct the function, etc. When you get to the end you now have a fix for your function. Make that change in your file and, thanks to autoreload, all of the objects in memory are immediately running the fixed function. You can then carry on working with the objects as though the bug was never there.

Depending on your scenario this can be a major win. On my project it's not uncommon for it to take 10 minutes to load and process that data from disk to get it into objects in memory. For me, I can do that once and then carry on working with my data and code all in unison.

Whatever works for you. I'm pleased with my current environment on the server-side. It works very well for me.


I think chii was referring to a much more sophisticated REPL experience which doesn't require blowing away application state. This is possible in truly live environments like Smalltalk (check out Squeak) or a competent Lisp environment. You can incrementally update your program from your editor and keep going, all without reloading. This is how ClojureScript developers interact via the browser REPL, for example.


Ah ok, fair enough. I'd heard that Smalltalk had a really good environment for this but I've never looked into it.

Let me rephrase then. There are amazing server side REPLs that allow me to change code on the fly, why can't I change code in my editor and have it reflected directly in my browser?


You can, if you're in control of the framework you're using.


Ipython is a REPL.


Many of the workflow elements discussed in the article apply more to designers than developers; for example, live reload, which is very useful in the context of rapidly iterating on design changes but quickly loses its appeal when working on a rich-client web application using a JavaScript framework, where you have long-lived state present in the page.

Much of the way the web is being built is evolving from static pages and presentation to full-blown applications running completely in the browser, which is why there is such a shift toward providing tools for visualizing complex pieces of browser operations like rendering, compositing [1] and painting [2].

I don't think there are any fundamental problems with the tools being developed; the reason workflows are broken is that many web developers aren't being empowered to learn web development fundamentals. I can't imagine starting out as a web developer in 2013 and trying to jump in with all of these abstractions, tools and workflow items to understand; it'd be like trying to jump in as a newcomer to Rails at the latest version without all the context of the changes that led to the design decisions that currently make up the latest iteration of "The Rails Way". [3]

I think tooling is in a pretty good place now; browser vendors need to start educating web developers about how to craft workflows and use the tools out there. My goal has been to try and educate more web developers about browser and web fundamentals [4] along with workflow fundamentals like automating tasks using Grunt. [5]

[1] - http://www.youtube.com/watch?v=x6qe_kVaBpg

[2] - http://www.youtube.com/watch?v=Ea41RdQ1oFQ

[3] - http://words.steveklabnik.com/rails-has-two-default-stacks

[4] - http://www.youtube.com/watch?v=Lsg84NtJbmI

[5] - http://www.youtube.com/watch?v=fSAgFxjFSqY


I started my career in web development in 2013, and the number of abstractions and dependencies that seemingly every toolset has can be daunting.

Thanks for the fundamentals video & grunt intro, both were helpful to me.


Glad to hear my efforts have helped you out; let me know if there are any other fundamental workflow / webdev topics that you would be interested in learning more about and I will add them to my screencast backlog :)


LightTable has some pretty interesting features in this direction: http://www.chris-granger.com/2013/04/28/light-table-040/


I actually thought that was where this article was going, and was very surprised when it wasn't mentioned.

I will say though (as a lighttable backer with the fancy-schmancy t-shirt to prove it) that the current builds of light table are... incomplete. The idea of lighttable is very exciting, but I haven't really found the current builds to be much use. They are too similar to existing editors / workflows, and as it's beta you're sacrificing stability for not a whole lot of gain.


I agree. I would add that a comprehensive solution is needed, not just one for particular types of development.

I was working in the background on my own system before Light Table appeared. I am back on it now in my spare time, since Light Table is aiming away from my requirements. Every time I see an article like this I think I should have worked harder to get it working by now.


> aiming away from my requirements

Out of curiosity, what are your requirements?

It's hard for me to imagine that LT is really aiming away from anyone :) It's a platform for building whatever workflow you could want, integrated with any system/language/service you can find. That particular aspect of it hasn't been released quite yet, but it will be the major focus of the beta.


What about document oriented editing? Forms and word processor kind of functionality?

I'm researching browser development frameworks and languages to develop an XML editor. Editing should work more like Google Docs than CodeMirror, and with rudimentary form support (think key-value metadata, not complex forms).

So far I have settled on using contenteditable, but structural editing of XML in document oriented fashion is still an open question. One user action can result in many XML tree changes, like pushing enter twice inside a paragraph should split it in two (and thus close and create any number of tags). Also, there is no schema to describe XML document's editing workflow. Your post on IDE as a value gave me insight for a possible implementation.

Also, big thank you for LT. I'm enjoying using it immensely.


Hi. Apologies if that came off as too curt - typed quickly on my phone. With that said, I still think Light Table is miles ahead of what everyone else is doing.

My background is as an architect (buildings). For me, most of the issues I'm interested in solving are to do with interaction with visual information (2d and 3d) rather than working with reams of code.

I was most excited when I saw the Light Table demo with dynamic editing of a game (inspired by the Bret Victor video?). The more recent developments seem to have largely focussed on giving it the full functionality of a traditional editor with some additions. What I am looking for is something that can take over not just from a code editor but also from other kinds of programs. Examples might be something that can replace my Photoshop actions palette, or something where I can organise my screen to compare Google Earth and OpenStreetMap data side by side or on top of each other (and click on things to query what they are). Generally these are conceptually simple tasks which current tools and workflows make very difficult to implement.

I have to run but I hope the above makes sense. I'd probably need much more text for a fuller explanation. Happy to discuss.


What is missing for you that makes it "incomplete"? We're always trying to make sure we have the necessities taken care of and I'm happy to say quite a lot of people are doing their work in LT these days - so if there's stuff we're missing that keeps you away entirely, it helps us to hear about it!


Light Table is very interesting (I've used it for some Clojure stuff), but it's competing with emacs. It's gonna be very hard to beat emacs. The legacy of thousands of lines of very useful code & unbelievable extensibility is a big one to overcome. I don't mind saying that I foresee emacs getting a lighttable-mode for dynamic languages before Light Table beats emacs. But I usually have a LightTable install on my computers; it's that promising of a project. :-)


First developments of an LT mode for Emacs:

http://www.emacswiki.org/emacs/LightTable https://github.com/Fuco1/litable

Also interesting discussion at:

http://www.reddit.com/r/emacs/comments/1c923n/how_does_the_d...

I have been an Emacs user for decades, and I am amazed that Emacs is still superior to almost every other editor. I think the incredibly easy plugin system and many powerful plugins (like org-mode) are the reason why Emacs is still alive today. But I am curious to see how LT develops.


Trying it out now, thanks for the link!


The workflow might need polishing, but tools like SublimeInspector are not the answer. They are miles behind the Chrome Inspector. Try it if you're not convinced.

The Chrome team has shown that they are very interested in moving the Inspector forward and have succeeded in integrating local file access via the editor, SASS support, Source Maps support, and a lot more. It has reached the point where it would not be crazy to consider building a site entirely within the Inspector. While the editor is not as good as others, you do gain simplicity and the ability to patch running code. The workflow is getting better.

Writing a great inspector is the hard part. Comparatively, writing an editor should not be as difficult. Chrome built the inspector first and is circling around.

The only integration I would really care to see (perhaps via ST2 Package Control) is the ability to directly pipe into the Inspector and patch running code. For large projects, especially in development mode, it can be a drag to ajax in >200 source files (even from localhost) and refresh every time you make a change.


I'm interested in understanding why you are "ajax[ing] in > 200 source files" in development mode? This provides a pretty poor developer experience IMHO; I've found a much better workflow in setting up a single concatenated unminified bundle for development using grunt-concat-sourcemaps [1] to provide source mappings that Chrome Developer Tools can browse.

This gives you the snappy page loads you'd expect with a single script element in the page, and still allows you to debug in your separate source files due to the sourcemappings. The only difference between javascript in development and production should be the minification step.

[1] https://github.com/kozy4324/grunt-concat-sourcemap
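As a rough sketch of the concatenate-with-sourcemaps setup described above (task and option names follow the plugin READMEs as I understand them; the paths and target names are invented):

```javascript
// Gruntfile.js -- one unminified dev bundle plus a source map, so
// DevTools still shows (and lets you browse) the original source files.
module.exports = function (grunt) {
  grunt.initConfig({
    concat_sourcemap: {
      dev: {
        src: ['src/**/*.js'],
        dest: 'build/app.js'          // also emits build/app.js.map
      }
    },
    watch: {
      js: {
        files: ['src/**/*.js'],
        tasks: ['concat_sourcemap:dev']
      }
    }
  });

  grunt.loadNpmTasks('grunt-concat-sourcemap');
  grunt.loadNpmTasks('grunt-contrib-watch');

  // Build the bundle once, then keep rebuilding on every save.
  grunt.registerTask('default', ['concat_sourcemap:dev', 'watch']);
};
```

Production would then add only a minification step on top of the same bundle, keeping dev and prod as close as the comment above recommends.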


With requirejs, I run into the `mismatched anonymous define() module` issue when using concat-sourcemap. The "correct" way around it is to use r.js, but it takes about 2.5s to compile that way, which happens to be more than it takes for ajax to work its magic locally.

Running SPDY locally helps a lot. Even with all those files I hit DOMReady at about 2.7s with no concatenation.


I wouldn't choose to use requirejs on a project where I was making choices about the architecture, for reasons mentioned here [1], but the current project I'm on was set up similarly and loads over 350 .js and .tpl files via AJAX in development mode, pushing DOMReady to a staggering 7.5s. (This is inside of Rails 3.2 using the requirejs gem.)

Every time I've encountered requirejs in an application, this has been representative of my experience with the tool, and the pain of having to wait that long every time I reload a page simply doesn't seem worth it to me. There's also somewhat of a mismatch between async-loading assets in development and sync-loading them in production, which I've seen responsible for bugs that show up in one environment but not the other.

A couple of questions for you: do you think the r.js optimizer taking 2.5s to compile is related to the complexity of the dependency tree in your application or just the number of files being loaded? Also, considering the previously mentioned mismatch between dev/prod and async loading, do you think it is appropriate to use something like SPDY to obviate the pain of a lengthy DOMReady event in development?

[1] - http://searls.testdouble.com/posts/2013-06-16-unrequired-lov...


Yes, I've run into issues with things running differently in an async environment (dev) and a sync environment (prod). To mitigate the issue I now throw events at key points in initialization and wait for those events before continuing. This has solved my problem so far.

Using r.js in development isn't the worst idea. It's worth seeing how long it takes in order to make that decision. Compiling tpls is much faster (grunt-contrib-jst) and adding that to your grunt watch & including it directly is a good way to save time. I think it takes a long time on my end due to the complexity of the dependencies. I only include exactly what each module needs so some dependencies may be as many as 6 levels deep, or more (haven't really checked).

SPDY makes a big difference for me (big enough to ignore the problem for now) and I don't mind using a self-signed cert in dev.

EDIT: I hadn't been compiling tpls using JST in dev until I wrote this post - a great side effect is, it actually shows me now where the errors in my tpls are! Previously any tpl's stack terminated at the code that ajaxed in the tpl. This is far better for debugging and brought my DOMReady time down to about 1.75s.


One solution would be to stop anonymously defining modules... or...

Instead of having >200 files that need to be compiled with r.js every time, what about compiling them into intermediary builds? Abstract your code into some bigger modules, wrap them up in a little bow with a nice interface, and basically consider that part of the code "solved", focusing only on what is currently changing or being built. Your build process should reflect this!

I've been doing this on larger projects with RequireJS and r.js and my build times are very short... it just builds from maybe 15-20 files, where a few of those files are the result of some other r.js build.

Now, should r.js possibly do things to better manage this sort of approach? You bet! There are many mature build environments that have a similar approach. Maybe r.js needs some sort of concept of "linking"?

Personally, I don't mind having to manage intermediary builds, because... that intermediary build is making something that I can use in OTHER projects... I haven't really come across a situation where pure business logic is dominating an application. Almost everything that a program does can be abstracted out and reused! I'm sure people out there can provide plenty of examples to the contrary, and I'd love to hear those!
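One hypothetical shape for such an intermediary build is a separate r.js build profile for the stable layer, run only when that layer changes (module names and paths here are invented; see the r.js build options for the real knobs):

```javascript
// stable-build.js -- run occasionally with: node r.js -o stable-build.js
// Bundles the "solved" modules behind one interface so the day-to-day
// build never has to revisit them.
({
  baseUrl: 'src',
  name: 'stable/main',      // entry module of the stable layer
  out: 'build/stable.js',
  optimize: 'none'          // leave minification to the production build
})
```

The main app build would then point its `stable/*` paths at `build/stable.js`, so r.js only recompiles the handful of files that are actually in flux.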


I just started using Browserify with browserify-middleware and coffeeify, and it's fantastic. No build tool needed (even in production if you setup caching)


Browserify is wonderful. I had trouble with browserify-middleware on nodejs, because it took so long to load -- several seconds, compared with under 1 second using the browserify "binary" tool. So I paired the browserify binary with livereload and grunt-contrib-watch, and it's now a very quick process every time I change the code.

However, it doesn't at all address the issue of losing JavaScript state upon reload, which is one of the main problems this post is emphasizing.


Between browserify and requirejs I've definitely had a more positive experience with the former, but in my experience both seem to be subject to scaling problems with even a trivial number of dependencies in play. I wish there were a really good sample project setup that would allow accurate comparisons between the tools at the scale they are frequently used at, but for now I choose to use simple concatenation and a simple namespacing tool [1] to avoid lengthy watch-time issues during development.

grunt-contrib-watch is a pretty vital part of my workflow, but I don't use the livereload options because I can't afford to lose that state in the browser.

[1] - https://github.com/searls/extend.js


"the issue of losing JavaScript state upon reload"

Is this really that desirable? I can imagine a lot of scenarios where this would cause unexpected behavior.

Most other platforms don't support this, do they?


Shameless plug: I've written a small VIM plugin [1] that permits a workflow similar to what the author suggests.

[1]: https://github.com/Bogdanp/browser-connect.vim


Nice one!

Would it be possible for you to add the source code of the Play application (server) to your GitHub?

Thank you for your plugin :)



Thank you very much!


Wow - this is exactly what I was looking for. I'm going to try this as soon as I can.


I can vividly imagine this conversation:

'We have a lot of stateful frontend code that's hard to debug and test.'

'Maybe you should strive for less stateful frontend code?'

'Nonsense! We should completely re-engineer our workflows and toolchains to accommodate whatever we're doing right now, because everything else is stupid and outdated.'


The article emphasizes CSS as much if not moreso than JavaScript.


I feel like this sentiment has been bubbling under the surface of my thought process every time I open up my editor. I've never taken the time to scrutinize that feeling though, too busy stressing over the details of whatever I'm making.

Perhaps only tangentially related, but our workflows are significantly shaped by the technologies we use, right? As Ian Hickson put it in an interview [0]:

"The Web technology stack is a complete mess. The problem is: what would you replace it with?"

[0]http://html5doctor.com/interview-with-ian-hickson-html-edito...


You can use the Google Cache version if the server is down http://webcache.googleusercontent.com/search?q=cache:G2FK1vS...


While these comments are indeed useful, they are an aside to the article. I wonder if there is a way to make the link redirect to the Google Cache version automatically when the server is down.


Please try weinre[1] to fix this part:

"The remote debugging protocols are incompatible with each other, and each has different features."

[1]http://people.apache.org/~pmuellr/weinre/docs/latest/

With weinre you start a node.js debug server and add one script tag in your html. Then you start the debug client in a webkit compatible browser and finally the browser with the page you are debugging (it can be anything: mobile, remote or not).

weinre strongest points are:

- "weinre supports remote interaction, so you can run the debugger user interface on one machine and can debug a web page running on another machine. For instance, debug a web page displayed on your phone from your laptop."

- "weinre does not make use of any native code in the browser, it's all plain old boring JavaScript."

- "Because weinre doesn't use native code, the debug target code will run on browsers without specialized debug support."

Also try Live Reload[2], mentioned in the article, for a nicer editor+your_tool+browser integration.

[2]http://livereload.com/

- "LiveReload monitors changes in the file system. As soon as you save a file, it is preprocessed as needed (SASS, LESS, Stylus, Coffescript and others), and the browser is refreshed." (You don't hit reload, it uses a browser extension or a script tag)

- "LiveReload can invoke a Terminal command after processing changes. Run a Rake/Cake task, a Shell script or anything else you need."


Most web development workflows emerged in an era of monolithic, server-side-driven application development: lots of server-side code (as well as navigation logic and session state) and a relatively small amount of browser-side code. In recent web app development, the client side now has a larger proportion of the code and complexity. Developers who make incremental changes to their workflow rather than "starting from scratch" seem to miss the shift that has taken place.

One of the best examples of an efficient workflow (at least in theory) that I have seen is the Play framework, along with a browser plugin that causes the browser to refresh every time source code is saved (http://www.jamesward.com/2013/05/15/auto-refresh-for-play-fr...).


Not to take anything away from this but that's not really achieving the desired effect. There's still a browser refresh happening so you lose your state every time. You're still stuck in the same cycle. In an ideal world you would be able to change code in your editor and have those changes reflected within the objects already loaded in your browser.


It looks like the "web" is slowly moving towards Smalltalk's concepts (everything, including dev tools, is dynamic and changeable live at runtime). I wonder how many years it will take us to get there.


I give it two decades.

People are trying to "fix" the web, when it should be replaced or redesigned. The problem is human, not technical. Who should decide what replaces all the TLAs that are required for the "modern" web? Too much time and money is invested in browsers, servers, languages, frameworks, training, tools, etc.

Not that it's all bad. A lot of technology that wouldn't otherwise have been developed, has been. It's changed a lot since I started building sites in '94. But I can still build nicer apps for the desktop, with better interfaces and better performance, using a single programming language, faster than I can develop a rough equivalent web application.


Our frontend workflow is quite nice.

We use a static node server with live reload that monitors the source tree for modifications. It compiles and hot-swaps Stylus files the moment you hit save (the ability to do that obviously depends on how we implemented style appliers for our view classes). Localization sources and templates are also compiled on modification, and the browser is told to refresh once it detects a modification to JavaScript. This means any change I make in whatever text editor I choose is immediately reflected in the browser. And since we built our front-end app with verbose and stateful routes, I am never taken away from the current view I am working on.

We plan on releasing our dev toolset open source soon, but we want to retool require and handlebars to be able to also be hot-swapped without reloading the browser.

I guess I am trying to say you are the master of your workflow, if it sucks, make it better.


I am getting "Application error" when trying to view this post, which proves the point, I guess.


Also, JavaScript must be enabled before the application error will even appear.


That surely qualifies as an example of a broken web development workflow. A blog post that depends on JavaScript and Heroku?

It's a bunch of static text, for goodness sake.


Indeed. My first thought was "speak for yourself". I wanted to read the article before writing my justification for my opinion, but I suppose this works.


Back again. Heroku instance has more power now :)


It seems workflow isn't the only thing broken.


The workflow cycle suggested in the article is broken, but the normal web development workflow is not. I would definitely not restart my editor and browser after every edit!

The workflow is actually just like with any other programming environment. You make changes to the code with your editor and then you reload the page or restart the application. If you really need to, you can use a debugger to see what's going on.

Fancy integration between the editor and the debugger is nice, but it hardly breaks the workflow.


Indeed, his workflow is broken. For me the problem he describes doesn't even exist, as tweaking pixels is such a minor part of my "web development" work. The vast majority is backend stuff or simple HTML that doesn't need debugging tools.

That being said, there is certainly room for improvement when problems happen.


Just because he's doing different work than you, his workflow is broken?


He wrote it himself, didn't he? You're totally reading that into what I wrote...

Some part of "web development" could use improvements.


> When using the browser as code-editor, we are entering a world of new problems. The browser is designed to abstract away the local file system, and is based upon a read-only/execute-only model. In order to “fix” this we have introduced a new type of browser extensions, that’s trying to fix this.

It isn't really a read-only environment! It can't read anything! All the browser can do is GET and POST things...

...so maybe the problem with web development is that it is all based on files?


I like my workflow with Arch/Xmonad/vim/Chrome. More broken than the workflow are JavaScript, the rendering engines, and the DOM. We shouldn't need as many tools as we do, but unfortunately we simply cannot trust our "correct" code, markup, and style sheets to produce the expected results. As long as these things are broken, workflow will be too.


A remote protocol is super-handy. Anyone who has used CL w/ SLIME and a compatible editor can attest to this. There's even a little project called slime-proxy[1] which proxies the javascript compiled from parenscript in a REPL to a connected browser via websockets. It is a much more elegant way to develop applications for the browser than the reloading hacks that are popular today.

[1] https://github.com/3b/slime-proxy


I like the idea of changing the dev workflow a lot. What I don't understand though is what makes the Webkit Developer Tools better than Firebug.

I don't want to defend Firebug, but in my experience the Developer Tools are a pain to use in comparison to Firebug. (e.g. autocompletion of css properties and property values only works with tab and not also with return, it takes too many clicks to see the metrics,....).

What are the features that make the Developer Tools better in your experience?


If you haven't had a chance to check out the Developer Tools in Chrome Canary, you're missing out!

Check this out: https://developers.google.com/chrome-developer-tools/docs/ti...


http://yeoman.io/ tries to solve it... (like it mostly for its live reload feature)

Here is another great example - http://remysharp.com/2012/12/21/my-workflow-never-having-to-...

My two cents: find solutions... People who make tools are just like us, some empathy, accept what you cannot change :)


Back in the day, when live reloading wasn't even a thought, I was just using META REFRESH every 5-10 seconds to accomplish the same job, and I was quite happy.
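For anyone who never used this trick: it's a single meta tag in the page's head that tells the browser to re-fetch the page on a fixed interval. A minimal sketch (the interval here is just an example):

```html
<!-- Reload this page from the server every 5 seconds -->
<meta http-equiv="refresh" content="5">
```

Crude compared to modern live reload (you lose scroll position and form state on every refresh), but it needs zero tooling.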


This is one of the problems that TDD tries to solve.

To make changes to a small piece of code, you don't want to have to go through a huge iteration of running up your server, going to your browser, moving around the app to get into the required state and then trying out the bit of functionality. No, you just write a test for that bit of code and run it. No browser needed.

The cycle he describes is what you do at the very end. Once.


I thought the problem he was trying to solve was to speed up simple things like text color changes or pushing elements around a few pixels. TDD doesn't try and solve that does it?


If anything it makes more of a PITA


From my experience, frontend tests break easily in unexpected ways e.g. div moved by an inch? Error - element-not-visible!


Depends on your definition of frontend tests. TDD alleviates these workflow problems for javascript (the actual code of the application). But using automated tests for the UI layout is gonna be a bad time. As @davemo mentioned, this post seems primarily directed at a designer's workflow with CSS and less about javascript development.


Ah yes true. I glossed over the CSS bit and just took in the javascript bit. I don't think TDD would fix a CSS workflow.


Yes, the workflow is broken, and that's why I've been working on LIVEditor (http://liveditor.com) for the past 2 years!

I'm so glad that Kenneth's post just proved my concept!

LIVEditor combines a Scintilla-powered code editor with a Chromium-powered browser; the integration is very deep since it's all one piece of software.


Application Error. Looks like the server is down.


Back again. More power now :)


I wonder how much of this applies not only to front-end JavaScript-style stuff but also to backend server code. There too you debug with a navigate-to-page, find the right code in the editor, change, reload cycle. Granted, you miss out on the "change values in developer tools in the browser" part, but it isn't so far removed.


Seriously what about tincr?

I debug CSS that way. No more memorization, and you don't even have to start another program.


Good article. To complement the WebStorm / SublimeText + Chrome example, I'll also mention Paul Rouget's work on integrating Firefox with SublimeText: http://paulrouget.com/e/devtoolsnext/


One of the most incredible things released in Visual Studio 2013 is a feature called Browser Link. In the keynote they used it only for live-refreshing the page, but the way it is architected, adding extensions is easy. There are going to be some awesome things built for it.



"What if you could edit a file in your editor, and have the changes reflected directly in the browser?"

Adobe's new Edge Code does this...

http://html.adobe.com/edge/code/


Why are we doing this backwards compared to everyone else? Integrating debugging tools into the runtime (browser) is great, but everyone else tends to integrate them into the IDE. Wouldn't that make more sense?


It's nice to be able to deal with stuff in the environment in which it will be used.


Sure, and that's why when you are in eclipse and you're debugging java, you are connected to a real jvm. I don't see any reason why you couldn't be connected to a real browser from inside an IDE. In an ideal world, all browsers would export (or have a plugin that did it for them) a standardised debugging interface.


This was all much, much, harder 10-15 years ago.

Of course, this may only make sense in 10-15 years when there's even better tools than today.

Regardless, there will always be a desire for it to be easier, and it will always be work.


But the JavaScript development tools today really aren't that much different, never mind better, than what we had 10 or 15 or even 20 years ago when using C, or C++, or Java, or even Turbo Pascal and Delphi.

The basic principles and functionality have remained essentially unchanged the entire time. We're still mainly setting breakpoints, stepping through code, inspecting variables, and so forth.

Those of us who have been in industry a long time have seen much greater gains from the use of strong, static typing and unit testing, for instance. The best way to use a debugger is to not use it at all, because many of the bugs have been prevented outright by the nature of the language used, or at worst caught immediately by the compiler or automated tests.


Surprised he never mentioned Coda 2. Isn't that considered the leading edge as far as web development workflow goes these days?


JetBrains WebStorm looks really cool; I wish they'd push that functionality into PyCharm.


The Live Edit functionality is provided by a plugin, so it should work in PyCharm as well. I know that it works in RubyMine.


You're right: http://plugins.jetbrains.com/plugin/?id=7007

It must have been just over a year ago I last checked, and it looks like it's been available since last July!


You should try IntelliJ IDEA with the Python and JavaScript plugins. They should share the same code as the standalone IDEs.


I tried that, it wasn't nearly as slick as pycharm.


Application error.


On one hand, the whole web stack is completely broken and insane.

On the other hand, if you use the right development methodologies such as unit testing, MVC, MVP, MVVM, and/or frameworks that translate statically typed or functional code to JS, the write-build-run-debug cycle is not an issue, because in general it only becomes an issue with a wrong approach to programming. This is totally the same as in non-web-programming.


Yeah, I also think there should be a more fundamental discussion about where the web is heading. Instead of fixing the tools perhaps one should think about a new foundation for everything. Not sure HTML etc. is it.


I'm fairly sure that's been tried before, with XHTML2, and it disappeared down some black hole never to reappear: http://www.w3.org/2004/04/webapps-cdf-ws/summary .

And HTML trundled along unwanted in WHATWG through those times, scorned and rejected as inadequate. And yet XHTML2 got closed down, and everyone switched back to the HTML path.

And efforts to change web development from an environment into an output format, like Google's GWT and Dart, don't seem to have gained much traction.

I don't know what happened to Xanadu, either.

You probably should dig into the history of Rich Internet Applications (formerly XUL, before Mozilla decided to stamp out the naming confusion with their own XML vocabulary), in the days before Ajax really stabilised and things drifted back into the browser.


I would say these approaches were still just bandages, so it makes sense they didn't catch on. GWT, for example, was a fairly nasty leaky abstraction. I don't think it removed the need to understand the rest of the stack, and it added a whole lot of its own Java inspired complexity. XHTML wasn't solving any fundamental problems either.

In an ideal world, now that the web has shifted from being a bunch of linked documents to complex applications, we'd have a development stack aimed at developing applications, with sane means of specifying UI layout and behaviour, low overhead client-server comm protocols etc.


My web development workflow is great actually. Just my browser application development is broken. Trying to conflate the two very different things just confuses the issue.



