Browserify handbook (github.com/substack)
154 points by mambodog on May 23, 2014 | 68 comments



I must say, there was a bit of a learning curve to Browserify. Now that I have it working I feel it's one of the best tools we have for front-end development.

I auditioned Component and RequireJS but both of those felt like a kludge. Browserify lets you do node style includes as well as compile your JS into separate bundles.

I wrote a very small structure and build system [1] for our front-end projects at work that uses Browserify / Gulp and can compile everything into separate modules. It uses nginx/apache for routing and since it's all HTML/CSS and compiled JS, it's lightning fast to deliver.

I think this guide is fantastic; we just need more things like this to lower the barrier to entry.

[1] https://github.com/TeachBoost/mishra


I'm so very glad that Browserify exists. After living in RequireJS/Bower/etc. hell it really is a breath of fresh air.


What problems did you run into while using RequireJS?


I write with Node in the backend and JS on the front-end, so switching between CommonJS and RequireJS styles was a source of constant irritation (and let's face it, the RequireJS style is annoying). I tried using the CommonJS style in RequireJS but I could never get it to behave correctly.
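
For readers who have only used one of the two styles, a rough sketch of the difference (module names are illustrative, not from any particular project):

    // AMD / RequireJS style -- the dependency array and callback wrapper:
    define(['./models/AppModel', './views/NavView'], function (AppModel, NavView) {
      // module body
      return { /* module exports */ };
    });

    // CommonJS / Node / Browserify style -- plain requires and module.exports:
    var AppModel = require('./models/AppModel');
    var NavView = require('./views/NavView');
    module.exports = { /* module exports */ };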

Also, though I found bower useful I disliked having yet another package manager, yet another manifest and yet another install step included in my development workflow. Using npm for both server and client side modules has been a dream.


With Browserify you have extra steps as well. Install browserify. Install a watch thingy (in grunt or gulp or write a shell script or whatever). Remember to run that watch thingy every time you develop.

That's why I prefer client-side loaders. All you have to do is copy the loader into a folder and create a .html file. I have my dev folder hosted by apache so I don't ever have to worry about starting some process, running npm install and watching it install the entire internet, etc.


When people complain about these types of "steps", it makes me wonder if they're not thinking clearly about what these steps actually mean, especially for larger projects or when working on a team where synchronization is absolutely key. These processes are here to help you help yourself.

Steps like these allow you to systematically control every aspect of the development process, and adapt. In the long run, these steps work in your favor.

They also allow you to hook into the more modern aspects of development. I mean, you don't want to type `npm install` -- that's fine; what you're essentially saying is that this incredibly useful ecosystem is irrelevant to your needs. I find that very hard to believe.


You misunderstand me. The benefit of a client-side loader is that these things can be added gradually. That doesn't mean you don't use many of the same tools. There's just not that 15 minutes of setting up your project folder, and of course not having to run a watch task every time you develop.


> Remember to run that watch thingy every time you develop.

Well, that's the beauty of it, for me. I run everything through gulp, so I don't need to remember to run the watch thingy - I type:

    gulp watch
which starts up my dev web server, complete with LiveReload shims enabled. Then when I'm ready to build I do

    gulp dist
and it packages everything up.
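
For what it's worth, here is a rough sketch of that kind of gulpfile, assuming gulp 3.x with the gulp-connect and gulp-uglify plugins (those plugin choices are my assumption, not necessarily the parent's exact setup):

    var gulp = require('gulp');
    var connect = require('gulp-connect');   // dev server with LiveReload support
    var uglify = require('gulp-uglify');

    gulp.task('serve', function () {
      connect.server({ root: 'src', livereload: true });
    });

    gulp.task('reload', function () {
      return gulp.src('src/**/*').pipe(connect.reload());
    });

    gulp.task('watch', ['serve'], function () {
      gulp.watch('src/**/*', ['reload']);
    });

    gulp.task('dist', function () {
      return gulp.src('src/**/*.js')
        .pipe(uglify())
        .pipe(gulp.dest('dist'));
    });
`gulp watch` then serves src/ with LiveReload and rebuilds on change; `gulp dist` is the (simplified) packaging step.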


I'm confused, you say you don't have to remember and then you say you typed gulp watch. Is the fact that you typed something an indication that you remembered to do so?

I understand that some people like this workflow, what I don't like about it is when I start working on a project I don't want to spend 15 minutes setting up build scripts, downloading the entirety of npm, etc. I just want to start hacking in the browser. Client-side loaders make that really easy, just drop in the .js file and go.

Again, I know some people like that workflow and I'm not saying you're wrong, just pointing out that there is a tradeoff, it's not a clear win by any stretch.


I see that you are trying to stick to your guns, but it sounds like you never even gave this a try. It's so easy to do, and if you write Sass, LESS, or any other CSS preprocessor language, then you're already doing this somehow (and if you don't, you're working on tiny projects).

Client-side loaders (e.g. the AMD-based RequireJS) are dumb. I've been there, done that. I'll minify my JS code and then "just drop the .js file" into my page, and let gulp watch for changes.


Client-side loaders support plugins too. Less, Sass, CoffeeScript, ES6, literally anything you want. I don't know why you would accuse me of not giving Browserify a try (I have several projects that use it, but I guess that's not giving it a try) when you didn't even know that client-side loaders support plugins.

Hell, you could even use a client-side loader exactly as you are doing with Browserify/Webpack. Just set up a watch task that builds unminified and there you go! It's just nice that they aren't dependent on a cli process. But unlike you, I'm not trying to convert anyone here, just pointing out that there are advantages to using client-side loaders. For example, it promotes separation of client-side code from server-side code. Using the same package.json file for both promotes bad practices like developing with the server running and writing code that's not easily testable without the server running.


> I'm confused, you say you don't have to remember and then you say you typed gulp watch.

Well you have to type something in order to start your project unless you like using file:/// URLs. "gulp watch" doesn't just run the watch task - it runs the entire dev environment with watch. How do you run multiple projects with Apache?

And while setting up these modules is a process, I only really have to do it once - I have a template directory I just copy into a new project. Then everything in the src/ directory gets processed accordingly once I type 'gulp watch'.


> Well you have to type something in order to start your project unless you like using file:/// URLs. "gulp watch" doesn't just run the watch task - it runs the entire dev environment with watch. How do you run multiple projects with Apache?

Apache serves everything under ~/dev and that's where I stick new client-side projects.


Do you have a build process? I still want to concatenate and minify for production, so I'm using a build task anyway. At that point having everything run in one place is no bad thing. Especially for whoever comes after me.


That really depends. If I'm writing a small module that others will use, there's no reason to build that. Or I might just be experimenting with some new browser API.

It's nice to be able to just start coding without the friction of setting up every project as though it were some large thing that would include production builds, automated tests, and many other developers. Of course you can work on those types of projects with client-side loaders just as easily.

I recommend trying out jspm: http://jspm.io/ It is all about removing the friction that people often have with client-side loaders (I have to maintain another config file, the horror! ;) but is also forward-compatible as it implements the upcoming ES6 module loading stuff. You can use CommonJS, AMD, or ES6, and mix and match the three.


Your process also has "extra steps", like installing and configuring apache, installing a loader, and creating an html file for it. There isn't anything that's just free, and different people just understand and prefer different workflows. For instance, I don't use browserify, but its workflow makes a lot of sense and "sounds right" to me, whereas yours sounds really strange.


Installing Apache/Nginx is a one-time deal. A browserify/webpack workflow requires that you set all of that up for each project, and start your watch task every time you develop. So if you're working on a few different projects you either have to switch between them or start new watch tasks for each. I understand that some people enjoy this workflow and am not saying they are "wrong", just pointing out the added requirements.


Hmmm... I'm sure most people are like me and have their own scaffold set up, and don't rebuild everything from scratch. For every project that I write, I do...

1. git clone https://github.com/WINTR/grunt-frontend-scaffold

2. npm install; and then

3. grunt dev

...which watches everything on localhost:3000, including my unit tests and source code, and reloads on change. Dead simple, fast, and consistent -- everything I could possibly need, set up in less than 1 minute, every time.

Most teams, I imagine, work in a similar way.

Not to mention, if I wanted to pass over the project to another developer, I would just have to tell him to clone the repo and hit `npm install`.


Maybe this isn't such a problem, but having a system-wide httpd means that all projects have to be on the same version.


Use the browserify-middleware package, like so:

    // assumes an Express app
    var browserify = require('browserify-middleware');

    app.use('/js', browserify('./app/js', {
      transform: ['reactify', 'envify'],
      extensions: ['.jsx'],
      cache: app.get('env') !== 'development',
      minify: app.get('env') !== 'development',
      gzip: app.get('env') !== 'development',
      debug: app.get('env') === 'development',
      precompile: ['./app/app.js']
    }));
The server will dynamically bundle the JS for you; all you need is to run the server. In our apps, we do this for development, but in the production environment we do the packaging at deploy time (via Google Closure to minify) to create static files that can be served by Nginx.


Not to mention that the require('path/to/file') "sugar syntax" is hardly mentioned anywhere and you end up having to track your imports 1 to 1, in order.


Browserify is awesome, and I look forward to reading this guide in detail, but I must say I'm disappointed about the "avoiding ../../../.." section. I've never seen a solution to this problem that wasn't a hack. The doc is correct that using NODE_PATH makes your app "tightly coupled to a runtime environment configuration", but what it fails to note is that the other two solutions offered are just as ugly. Checking code into a directory managed by npm is simply asinine, and putting symlinks in there is just as "coupled to the runtime environment configuration" as the NODE_PATH solution. What is really needed is a reasonable (and supported) method of programmatically managing the node search path.

Why the node community is so stubborn about this point is a mystery to me and it makes me wary of node in general, because who wants to be locked into an environment where such an obvious pain point is ignored due to stubbornness?


I think it is acceptable at the application level to use NODE_PATH, provided you have a normal node_modules directory at the root of your app for npm modules, and are strict about pulling out any generic modules to that location (and ideally, to npm), and that they never reference your private modules.

Most substantial applications are going to have at least some small amount of internal coupling around their business logic and configuration, and as long as you limit knowledge of the custom NODE_PATH to that code you should be alright.

eg. you could have

    # assuming git, find repo root
    APP_ROOT=$(git rev-parse --show-toplevel)

    # /app is for private, checked-in modules
    # `npm root` will find your root node_modules dir
    export NODE_PATH="$APP_ROOT/app:$(npm root)"
and then all of your app-coupled private modules can live in /app, and get checked into your git repo, while all the dependencies would be referenced in the package.json at the root of your app, and installed to node_modules (eg. npm root for the app package).

I should emphasise that taking this approach requires quite a bit of developer responsibility to avoid coupling between modules when it's not required, and it's really overkill for small apps.


Do you have a better solution in mind? I'm actually wondering because I've thought of workarounds but they'd all be hacks like the ones in the handbook. I'd imagine other node developers are in the same boat (not due to stubbornness) otherwise we would've seen something by now.

Basically, what would be your ideal solution?


The obvious thing is what most other environments do: there's a system default search path, you can override that using an environment var, and then once your runtime is up, you can further override it via an API at runtime. E.g. with ruby there is an array called $: which is the current search path in array-of-strings form, and manipulating that array affects the module loading behavior. This provides a lot of flexibility in how apps find their modules. Node provides basically none.

Dictating that the only way to control dependency-loading is by manipulating the filesystem is really braindead.
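
For comparison, about the nearest thing Node offers is the undocumented, per-module module.paths array. A sketch of the kind of runtime control being described, using that array (an unsupported hack, which is rather the point):

    var path = require('path');

    // Prepend an extra search directory for non-relative requires made
    // from this module. Undocumented and unsupported, but it works today.
    module.paths.unshift(path.join(__dirname, 'app'));

    // 'models/user' is an illustrative name; this would now resolve to
    // ./app/models/user.js if such a file exists.
    var user = require('models/user');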


There are good discussions of this issue at https://gist.github.com/branneman/8048520/ and http://stackoverflow.com/questions/10860244/

My preference is to add structure with file names instead of directories: app.foo.bar.js instead of app/foo/bar.js


Unfortunately in that first link, isaacs is on record as saying we should just check our code into a node_modules directory. "Problem solved." This is exactly the kind of stubbornness I'm talking about.


All the pain and exhaustion is a sign you're fighting your way uphill instead of downhill. Node works brilliantly if you publish lots of tiny modules in public, @substack style.

Unfortunately, some of us need to live uphill.

I find the tiny module style works pretty well for me, but I need private code. I haven't yet found a private registry solution I like. NPM's lack of support for any "get this module from this registry" directive in package.json forces them to proxy the main registry. They're not good at it. I'm still declaring my dependencies by tarball URL, losing semantic version specs, e.g. "most recent compatible with 4.x".

It strikes me as unlikely now to change in any way other than "here, use my paid-for private registry", because shareholders. That makes me unhappy, but not unhappy enough to ditch the public Node ecosystem and go back to what I was using before. It had its own uphill, and living there was harder.


One thing that amazes me is the arrogance with which people who like this scheme can make pronouncements about right and wrong ways to organize code.

From the most popular answer on the stackoverflow link: "If you find yourself loading common files from the root of your project (perhaps because they are common utility functions), then that is a big clue that it's time to make a package."

I'm sorry, but no, that is not a big clue that it's time to make a package. Every project I've ever worked on contained internal modules which were nicely organized and isolated, and were consumed by the rest of the project as black box "libraries" for good design's sake, but breaking them out into separately installable modules would have been sheer, unproductive bureaucracy. Oops, you made your code modular! Get ready to start managing packages!

It's as if npm no longer wants to be merely a package manager, but also have a say in how your project internals are laid out as well. Perhaps it's a self-perpetuation strategy.


I think you might be attributing stronger emotional intent than is present. They disagree about whether it's worth fundamentally changing npm's search method to avoid some ../.. paths in your argument to require. require aside, you can express yourself through your code however you like.


I like the approach found here: https://github.com/DSKrepps/requireFrom

It doesn't work with browserify but it wouldn't be too hard to write a transform that does it.


I keep wondering what kind of applications people build where they have so many dependencies that they need something like Browserify. Am I the only one doing the 1. find CDN link 2. add script tag to body?


If you've ever worked in other languages that have an "import" function, you'll understand. Having to manually manage script tags, order of operations, rapidly growing 2000-LOC script.js files, etc., can only lead you astray, introduce errors, and make debugging your code hell.

    // App Controller
    var AppModel = require('./models/AppModel');
    var NavView = require('./views/NavView');
    var HomeView = require('./views/HomeView');

    // Do stuff

You can immediately know what's going on inside of your class or module simply by looking at the imports, the LOC is greatly reduced, and refactoring, via imports, makes your code about a thousand times more maintainable.


You mean like the "import" function in ES6?


Yup ;)


It's not just dependencies, it's structuring your own code as well. My codebase at work has ~200 internal module files, wired together with Browserify.


I've been working on a front-end project in CoffeeScript using React and running Browserify via Gulp has been both a great time saver and has helped keep my project organized.

One tip I can give is that I ended up organizing each of my React components and mixins as a module in its own folder and file, and my gulpfile adds the parent source path to the NODE_PATH environment variable. By adding the coffeeify transform and .coffee extension to the browserify gulp task I can just do this in my code:

    SomeComponent = require 'react-components/some-component'
    SomeMixin = require 'react-mixins/some-mixin'
No need to worry about relative paths, or whether it is CoffeeScript or plain JavaScript.
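
A rough sketch of the gulp task side of that setup, using browserify's paths option (the documented equivalent of setting NODE_PATH) together with coffeeify; the file names and the vinyl-source-stream plumbing are my assumptions:

    var gulp = require('gulp');
    var browserify = require('browserify');
    var coffeeify = require('coffeeify');
    var source = require('vinyl-source-stream');

    gulp.task('scripts', function () {
      return browserify('./src/main.coffee', {
          extensions: ['.coffee'],
          paths: ['./src']   // lets require 'react-components/...' resolve without ../..
        })
        .transform(coffeeify)
        .bundle()
        .pipe(source('bundle.js'))
        .pipe(gulp.dest('./dist'));
    });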

The other tip is to NOT require React within your browserified code. Just load it via a script tag before your browserified code and use window.React to get a reference.


Huh, care to expand on why you shouldn't `require` React? Just doesn't play nicely with Browserify?


Yeah, sorry I wasn't clear why. The first reason is that React is pretty large, so it slows down the watch/build cycle enough that there is a lag; the second is that in production React can be loaded via a CDN. Since everything is bundled into one file with Browserify, it's better for end-user download speed if you pull 3rd-party libs out.

I also didn't give the main reason for using the NODE_PATH method. Browserify doesn't resolve duplicate relative paths so if you have a structure like this:

    /src
      main.js
      components/a.js
      components/foo/b.js
      components/foo/c.js
and a.js requires c.js via relative paths:

    require('./foo/c.js')
and b.js also requires c.js via

    require('./c.js')
then c.js is included twice in your bundled code since Browserify uses the pure string path as the key. If instead you put /src in your NODE_PATH you can do this in a.js:

    require('components/foo/c.js')
and this in b.js

    require('components/foo/c.js')
and c.js will only be bundled once since it has the same path.


I'm not sure that's accurate. I've been using Browserify for some time, and write code like that pretty much all over (require('../lib/api'), require('./api'), etc.) without having file duplication issues in the bundled output. Do you have an example that exhibits the problem?

Also, if you're requiring large files that you know won't need processing by browserify (like React), you can use the 'noParse' option to prevent the performance hit during bundling. (Of course a CDN is potentially a better option.)


I've just started with browserify and react this week. Can you point to some example or documentation that describes the "double include" issue you talk about? I wasn't aware of this possible problem!


Because writing the require statement in every file is annoying, probably. I do something similar with jQuery - I require it in my bootstrap file, then do

    var jQuery = require('jquery');
    window.$ = jQuery;
Global variables are a bad thing until they're a really convenient thing.


protip, global.$ = jQuery;


Here is a browserify/watchify-based script I use for building a browser extension's JS bundles: https://github.com/ghostwords/chameleon/blob/master/tools/bu...

The script:

- Creates multiple bundles, with Underscore template precompilation and source code minification (for some bundles).

- Converts every vendor library into shared modules (you don't want Underscore included in every bundle that needs it, but you want to be able to require() it in those bundles; see the sketch after this list).

- Monitors your files for changes and recompiles bundles as needed when run with --watch.
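
A minimal sketch of that shared-module idea -- not the linked script, just the browserify API calls involved:

    var browserify = require('browserify');
    var fs = require('fs');

    // vendor.js: bundle Underscore once and expose it to other bundles
    browserify()
      .require('underscore')
      .bundle()
      .pipe(fs.createWriteStream('build/vendor.js'));

    // app.js: still says require('underscore'), but the dependency is marked
    // external and resolved from the vendor bundle at runtime
    browserify('./src/app.js')
      .external('underscore')
      .bundle()
      .pipe(fs.createWriteStream('build/app.js'));
Both files then go on the page, vendor bundle first.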

I found working with Browserify a pleasant experience after RequireJS and r.js (its build tool).



Wow, what a great app. So simple and easy to use. Love it!


I played with Browserify for the first time two weeks ago, building this app:

https://github.com/bcoe/npm-typeahead

Both the libraries that the app depended on (typeahead.js and jquery) had been published to npm. I found it really straightforward to set up an asset pipeline using Browserify.

I'm a big fan :)


I came across and played with beefy[1] today. Has anyone here integrated it into their workflow?

[1] http://didact.us/beefy/


I use beefy for almost all my modules, demos, and prototypes. For websites, I usually go with gulp watch since I need LESS anyway.


Has anyone had luck with Browserify + TypeScript while keeping source maps pointing correctly back to the TypeScript? (Or any other JS transpiler.)


I recently also started using Browserify with TypeScript (along with tsd to get some ambient declaration files for popular libraries). I got it to work after trying countless hacky things.

What I ended up with was a separate .ts file that is just the module and the variables I export. See this:

http://jsbin.com/gidaneja/1/edit?js,output

So I do a "<reference path=" for the ambient declaration files of the variables. Then I declare require so I can call Browserify's require. Then I create a function whose return type comes from the ambient files I included. Then I create an exported variable and call the function to set its value. Then I can use that variable without a problem everywhere.

I tried using the whole `import lodash = require('lodash.d.ts')` thing and other variations and failed miserably. This is the only thing I got to work, and it's working perfectly. Email me (email should be on my profile) if it's not clear, I know the explanation was kinda vague.


Awesome, thanks! Will try it :)


Happy to help. Let me know how it goes.


What problems are you having exactly? Sourcemaps work great over here.


Here is the SO question I created for this issue with more details: http://stackoverflow.com/questions/23453160/keep-original-ty...


I use browserify and I'm happy with it, but I have one question that perhaps someone can answer:

how can I develop a library with different modules, say lib/module/func, lib/module/other, and compile the library as static, then import it in another project and be able to require lib/module/other from the other project using only the compiled lib?



Just package the library as an NPM module -- that's the reason why Browserify exists, after all -- then reference it in the other project's package.json file. If it's private (not open source), you can still reference it via Git or a private NPM repository.
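
A sketch of what that looks like in practice; the package name and git URL are illustrative:

    // In the consuming project's package.json:
    //   "dependencies": {
    //     "mylib": "git+ssh://git@example.com/you/mylib.git#v1.0.0"
    //   }

    // After `npm install`, submodule paths inside the package stay requirable,
    // so no pre-compiled bundle of the library is needed:
    var other = require('mylib/module/other');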


Why compile the module? Browserify lets you use a module's CommonJS export, so the only build step is at the application level. Utils etc. should not be built except for non-Node users (UMD).


It would allow you to distribute and package the library simply, rather than having to check out the library into the other project's dir and have weird require('../../lib/module/other') calls.


We must be doing different things. If I start getting relative paths like that, I usually pull it out into a separate module.

I can understand bundling a bunch of modules together for UMD builds, but if you need to bundle all of your NPM/browserifiable modules for other browserify apps, you are probably going about it the wrong way.


After using browserify for a while now, I recently came across webpack (http://webpack.github.io/docs/what-is-webpack.html) which I'm finding more convenient and as efficient.

Advantages:

- requiring and pre-processing files other than JS is easier

- ability to use AMD modules if you have to

- code splitting (see the sketch below)
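
A minimal webpack (1.x-era) config sketch illustrating those points; the loader chain and file names are my assumptions:

    // webpack.config.js
    var path = require('path');

    module.exports = {
      entry: './src/main.js',
      output: { path: path.join(__dirname, 'dist'), filename: 'bundle.js' },
      module: {
        loaders: [
          // require('./app.less') works once a loader chain handles it
          { test: /\.less$/, loader: 'style-loader!css-loader!less-loader' }
        ]
      }
      // AMD define() calls are understood out of the box.
    };

    // Code splitting: the chunk for ./admin is only fetched when this runs
    // require.ensure(['./admin'], function (require) {
    //   var admin = require('./admin');
    // });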


Here's a simpler alternative if you don't want all the bells and whistles of Browserify:

https://github.com/jaekwon/demodule


Does anyone know how to use Bootstrap with CommonJS/Browserify? I don't want to use Bower, or manually download and store compiled Bootstrap inside the project.



Thank you very much!


The main problem is when npm modules are not up to date. This is the reason I still have to use Bower, but debowerify helps a lot...


Installing 'name/repo#commitish' with npm works fine with Browserify; considering that Bower packages are just git repos, you can install and require() them that way.
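
For the Bootstrap question above specifically, a hedged sketch (the version tag and paths are illustrative; Bootstrap 3's plugins expect a global jQuery, and the CSS still has to be handled separately, e.g. via a link tag or a CSS transform):

    // package.json:  "bootstrap": "twbs/bootstrap#v3.1.1"   (GitHub shorthand)

    var $ = require('jquery');
    window.jQuery = $;                        // Bootstrap's plugins look for the global
    require('bootstrap/dist/js/bootstrap');   // dist/js is checked into the repo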



