Why we switched from Webpack to Vite (replit.com)
355 points by schestakov on April 28, 2021 | 222 comments



Author of Vite here. I see many people evaluating Vite as a webpack replacement, so I want to clarify the goal of the project here:

It is NOT Vite's goal to completely replace webpack. There are probably a small number of features/capabilities that some existing webpack projects rely on that don't exist in Vite, but those features are in the long tail and are only needed by a small number of power users who write bespoke webpack configuration. Some of the commenters here probably belong in this group. If you do (e.g. you tried to migrate and found roadblocks, or evaluated and concluded that Vite doesn't suit your needs), by all means use webpack! You are not Vite's target audience by design - and you should absolutely pick the right tool for the job.

However, in the context of the general web dev population, 90% of existing devs and 100% of beginners don't need or care about these long tail features. This is an estimate based on years of experience working on vue-cli, which is webpack-based. (Context: I'm also the author of Vue.js, and vue-cli is downloaded more than 3 million times per month on npm.) Vite is optimized for these common use cases, and we've heard many success stories of painlessly moving from vue-cli/CRA to Vite.

This is also why Vite is a good fit for Repl.it, where the majority of its use cases overlap with the target use cases of Vite.

That said, we are also seeing frameworks like Svelte/Solid/Marko building SSR meta-frameworks on top of Vite, and projects that were previously webpack-based offering alternative modes running on top of Vite (e.g. Nuxt, Storybook), so we believe Vite does cover quite a lot even for power users.

So - try it, and if it doesn't work for you, stick to webpack (especially if you have an existing project relying on specific webpack behavior). As many people said, webpack 5 has made decent performance gains, and if you are targeting modern browsers, consider replacing Babel/TS with esbuild. These should already get you pretty far.


Core team on Marko and author of Solid. Vite is exactly the sort of project we've been looking for.

Sure, I love coming up with the perfect Rollup setups to produce the smallest application bundles. But Vite closed the loop we were looking for in terms of offering low-config client/server applications with both ease of use and great flexibility.

You can see the focus and care put into Vite to make it easier to address the complexity of configuration. We still have plugins/starter templates for Webpack and Rollup, but for the average developer getting started with these frameworks Vite just gives so much out of the box. It really gives the ease of something like Parcel, with the ability to expand. This makes it far superior to solutions like CRA which forced monkey patching or ejection.

We've already seen through meta-frameworks like Next that there is a big desire here, and what at one point seemed like a huge undertaking for a historically single-developer project like Solid is suddenly becoming a reality. And others have the ability to build and share these setups as well.

Evan and the rest of those working on Vite have my thanks and my gratitude.


Hi Evan!

I just wanted to thank you for your contributions to frontend development. Vue has had a huge positive impact on my day-to-day work. For me it’s truly a joy to work with and I am in debt to you and all Vue contributors. Vite makes me excited about tooling in a way that hasn’t happened in quite a while. Speed is indeed a feature.

So thank you for the great work you are doing and keep it up!


First of all, thank you for writing so much useful open-source software.

Anyway, since you are here: is there any reason why Vite won't run with Vue 2.x? I tried Vue 3 with Vite and I was blown away by how much dev time it saves. No startup time, no compile time.. but we have thousands and thousands of lines of Vue 2 that aren't going to be ported to V3 anytime soon, and it kills me that I have to waste so much time looking at webpack compiling all the vue files every time I load it.

Is it technically impossible to make Vite work with Vue 2, or is it because the community has moved away from V2 and doesn't want to spend any time on V2 tooling? Thanks.



Well said – everything new does not need to be a replacement for something else. Options/choices are good, especially because each one can lean into a different solution space.


Just want to say how coincidental it is to have Evan You for vite + vue, and Evan Wallace for esbuild and Figma.

Long prosper the Evans :)


It seems you are quite the Evangelist.


There is at least one more Evan working in this space who should be on that list.

Evan Martin created the Ninja build system and works on making JavaScript builds fast at Google. He has blogged about it here:

http://neugierig.org/software/blog/2020/10/scaling-typescrip...


Just wait until the alpha release of EvanScript.


Evan, thanks for your work on Vite. We adopted Vite early on for a large Vue project. My main concern is the testing story. Any ETA on when we'd see official support for something like Jest?


Proper Jest integration is blocked by async transformers (https://github.com/facebook/jest/pull/9889) which should land as part of Jest 27, so we are mostly waiting on that.

In the meanwhile, you can also consider:

- @web/test-runner (https://modern-web.dev/docs/test-runner/overview/)

- Cypress (both e2e and unit testing via its component test runner https://www.cypress.io/blog/2021/04/06/introducing-the-cypre...)

- Check out https://github.com/sodatea/vite-test-example for an example using the above.


Do you mean testing Vite itself or incorporating jest for frontend stuff as a build step?


ViteJS is really fast compared to CRA, which I previously used. It's really a breath of fresh air. Thanks for creating it!

Now we only need something in the same vein for tooling that generates libraries. I'm currently using TSDX but still have a lot of problems with it.


If someone is already comfortable with webpack or something else and has no complaints or issues with it, how would you sell them on trying vite?


At this point I don't think I really want to "sell" it to anyone. I've got enough things to maintain so I'd rather just have users who use Vite because they actually like it rather than people switching from webpack just out of FOMO.


I’m not related to the project, but what stood out to me is the fast reloading between changes, and the optimizations that went into making that happen. Today with webpack, sometimes changes take a full 1-2 seconds to be reflected in the browser and while that’s not terrible, having something refresh in 100s of milliseconds is a game changer at least for me and my frontend workflow.


Some stats not mentioned in the post. Create React App (webpack) vs Vite (esbuild) on Replit containers:

- 1 second start up time on Vite vs 15 seconds for CRA

- React.js hello world project is 234mb on CRA and only 34mb on Vite

- 1GB RAM for Vite dev server vs 3GB+ for CRA

This is a perfect example of how fast and efficient tools can be, and I think we can do even better! Super excited about the future of JavaScript tooling ecosystem with more focus on efficiency and speed.

In addition to the UX win (and I wish we had measured this), I bet this saved us thousands of dollars in monthly cloud spend.


As the "webpack" guy on my team, these numbers look extremely compelling, but I also know Webpack does a lot for us (e.g., through Webpack v4, it includes browserfied node libs as needed). Beyond node libs, there's a long tail of niche things that need to be taken of... am I trading coverage of that long tail for speed?


Previously I've been a heavy user of Webpack + plugins, but I've now moved over to ESBuild for all new projects. This means letting go of many fancy features, but the overall complexity is so much reduced and I'm a lot happier.

Before: Chain together style-loader, css-loader, postcss-loader and the MiniCssExtractPlugin in some weird way. So complicated to understand which PostCSS plugins interact with resolving imports. I often need to look into the webpack.config.js to understand how everything works.

After: Use PostCSS and its tooling for CSS. Yes, there's now a separate process I also need to run in order to watch and build CSS. Yes, I can no longer `import "./style.css"` from the JavaScript files. But it's so much easier to reason about! The CSS tooling creates a CSS file; the JavaScript tooling creates a JavaScript file. Do I want to build SVG sprites? Well, that can very easily be a separate script which I can debug independently and _not_ couple into my builder's plugin system.

In addition, now my JavaScript files are actually just JavaScript and I can be pretty sure that it will work with any new tooling without any problems. Once the successor to ESBuild comes out I will most likely be able to point it at index.js and everything will work.
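
To illustrate the split described above, a minimal sketch of the JS side using esbuild's JS API (file names are hypothetical; CSS would be handled by a separate PostCSS process):

  // build.js - bundles only JavaScript; CSS is built by its own postcss watcher
  const esbuild = require('esbuild');

  esbuild.build({
    entryPoints: ['src/index.js'],
    bundle: true,
    outfile: 'dist/index.js',
    sourcemap: true,
    target: ['es2017'],
  }).catch(() => process.exit(1));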


It’s funny to me that we’ve sort of ended up back where I started with Browserify and a separate Less compile step, or even further back with require.js in some ways.

It’s smart though. Despite having built webpack plugins myself, my main gripe with it was how overloaded it became.


This is excellent advice. There's something I really dislike about importing "./style.css" into JS files - it just seems meaningless, especially if you've worked with other languages/ecosystems. A great many tutorials default to this style, so it'll likely live on for a while.

As yet another tiny UI framework creator (forgojs.org), the switch to esbuild-loader was our best developer experience decision. create-forgo-app (our CRA equivalent) takes 3-4 seconds, and running it is instantaneous. Faster builds actually change the way developers write code.


One great problem it solves is namespacing all of your styles. I don't have to worry about classname collisions because the bundle handles it all for me. Instead of having one massive global stylesheet, I have a bunch of small, modular stylesheets that fit entirely on my display without scrolling, and I can reason about the output much easier than before.


You're talking about css modules. It's a different feature.


Plus effortless code splitting.


> In addition, now my JavaScript files are actually just JavaScript and I can be pretty sure that it will work with any new tooling without any problems.

That's been my feeling for years, too: if your team is less than, say, several dozen people, there's a significant amount of merit to having something which doesn't require continuous care and feeding for anyone to work, especially when it's combining a number of complex libraries with independent development teams.


In my experience, the vast majority of use cases are covered by these "no config" tools. If your app isn't a typical CRUD app you might fall into that long tail, but FWIW almost every app I've built has not :)


You can still do polyfills manually via browserify packages.


You still need webpack for production build :). So all is good.


Vite uses Rollup for production builds, which is still much faster than webpack.


Why can't you use esbuild for production builds ?


Tree shaking isn't sorted, so you can't currently get a bundle optimised to the same extent as you do with Webpack/Rollup et al. It'll come, but it's not quite there yet.


Correct.


Yes you can, my bad. But some people have webpack configs, so the easiest thing for them is to keep those and use Vite for development, like if you are not doing a new project.


I use esbuild for prod...?


>Super excited about the future of JavaScript tooling ecosystem with more focus on efficiency and speed.

Well they can't get worse than now... 234mb for a friggin hello world app


It’s more of a hello import the world.


Or is it the other way around!


It's worth noting that the tooling and build systems are that big. They include type definitions, binaries, scss preprocessors, typescript compiler, linter, and a lot of other tools to enhance DX. Some of them might include other non-code resources.

The hello world app is not going to be this big.


Would you count the size of the JVM when you do Hello World in Java?


You could, since it’s a runtime dep. But a better comparison would be to the client dev tool chains in other ecosystems, like Xcode (11GB), though still not very interesting. Turns out client dev is hard and it doesn’t make much sense hand wringing over dev tool size metrics.

Better to look at how well they solve their problem space.


No because they are not counting the size of the browser or nodejs


you could and it would still be much smaller.


Java HelloWorld runs in 1mb:

  20:18:01 /tmp > echo 'public class HelloWorld {public static void main(String[] args) {System.out.println("Hello World!");}}' > HelloWorld.java
  20:18:09 /tmp > javac HelloWorld.java
  20:18:14 /tmp > java -Xmx1m HelloWorld
  Hello World!


I’d like to see non-CRA numbers: I’ve found that a simple React + Webpack 5 project is not as bad to get off the ground as it used to be.


Exactly: 99% of the time, "webpack is slow" is just code for "I have Babel in my toolchain" or forgetting to set ts-loader to "transpileOnly".

Personally I removed it years ago, and Webpack 5 even allowed me to get rid of more loaders now that there are Asset Modules that automatically detect assets and web workers using the "new URL()" syntax, and TypeScript does everything else I need.
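
For anyone wondering what that looks like in practice, a rough sketch (paths are just examples): webpack 5 statically detects "new URL(..., import.meta.url)" and emits the referenced file as an asset, or bundles it as a worker entry.

  // Asset module: the file is emitted and the URL points at the hashed output.
  const logoUrl = new URL('./logo.png', import.meta.url);
  document.querySelector('img').src = logoUrl.href;

  // Web worker: the worker entry is bundled automatically, no worker-loader needed.
  const worker = new Worker(new URL('./heavy-task.worker.js', import.meta.url));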


The new URL() thing is amazing


Can someone clarify what the new URL() thing is?




I can remove babel in my code?


It depends on your target platform: ES6 modules are basically supported in every browser I care about, so the major reason for Babel is JSX


Even if your target platform didn't have ES6 modules, Babel is probably not the tool you'd want to transpile those with. Let the bundler (which is the topic of the discussion here so I assume a bundler is used) handle them. Tree shaking works better that way anyway.

Also if JSX or similar JS extensions are the only thing you need transpiling for, you might want to look at Sucrase [1] as a fast alternative to Babel.

https://github.com/alangpierce/sucrase
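
As a reference point, Sucrase's programmatic usage is roughly this (a sketch based on its documented transform API; the input string is just an example):

  const { transform } = require('sucrase');

  // Strips TypeScript types and compiles JSX without down-leveling modern syntax.
  const { code } = transform('const App = () => <div>hi</div>;', {
    transforms: ['typescript', 'jsx'],
  });
  console.log(code);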


Yep, I needed to put together a Webpack build tool chain for React last month & thought it was going to be a bit of a nightmare and it just...wasn't. Tiny bit of fiddling, but generally easy -- that was with a fairly complex setup in terms of the project structure and rules around deployment, so quite impressed after a few years of avoiding it due to how much I hated the song and dance over setting it up.


I'm sure it's less disk space because it has fewer deps, but most of the RAM and CPU usage in CRA is Webpack, so I'd be surprised if that changed much.


webpack AND all the extra plugins and preprocessing and postprocessing CRA adds -- it's not a small webpack setup by any means, and you can get 1s build times with webpack pretty easily without CRA


Agreed, I use webpack for small and medium projects and it's basically instant. I use TS but not the compiler - I tell babel to strip types and use just a few plugins, transforms and loaders.

I'm not in love but it's a core tool in almost all of my projects, personal and professional.


I’d like to see the numbers: the webpack config CRA eject gives you is a monster.


“Only” 34mb for a hello world project? I can’t tell if you’re being ironic but I hope so!


Considering that projects nowadays have all dependencies installed into their folders, what else is both viable and much smaller? If we had to install everything required for a hello.c into c_modules, including a compiler, libc, etc, it wouldn’t take less than 34mb.

Of course a counter argument is that you don’t usually swap C compilers between projects and libc is stable af, but if a toolchain and/or stdlib was constantly changing and subject to non-compat, you’d have to.


Nearly an order of magnitude reduction. What do you think of that?


They probably mean that "hello world".length == 11 // bytes


Most of the build tooling gets installed along the way.

The compiled JS is hundreds of bytes at most, usually because React creates a bunch of boilerplate for you (a basic CSS file, an application, a web worker, etc) which you may remove.


Well, that's part of the jab: that you ostensibly don't need build tooling for an interpreted language, and further, that you don't even need an interpreted language in the first place considering we're talking about a platform that can do UI out of the box via a declarative language.

(Obviously, that's neither here or there when we're talking about baselines for SPAs)


ES6 is a compile target.

The actual languages are often Typescript and jsx, to say nothing of the Svelte compiler or Elm.

You can write vanilla JS without all that, of course.


But why would you? This is the silliest way to judge something ever: what practical web application is anyone routinely building with no CSS, web workers, etc.?


You would because you're comparing it to a "Hello world" command-line program in another language which also doesn't have CSS, web workers, etc.


Is it? I know there's no sizeof, but what would be sizeof(char) in JS? 1 byte?


The answer to this question is complicated. JavaScript's string encoding is roughly UTF-16, which is 2 bytes per code unit, but the code unit you read may be part of a surrogate pair, so depending on the first one you must read the next 2 bytes to complete your character.

And of course, this basic explanation doesn’t really do justice to answering your question, because depending on your definition of what a “character” is, you may need to take into account ligatures etc.


No need to overcomplicate though. None of this applies to the ASCII range, and we're just talking about storage on disk in the first place.


The 2-byte part applies, if not the rest, and it’s not really “on disk” but rather “memory for data type,” right?

Given the nature of the questions, I presumed they were interested in knowing “how does JavaScript load strings into memory, anyway?”

And to answer that question, your rough heuristic should be “2 bytes per character” not 1, even for ascii range. That just leads to additional questions, though, because of the oddity of it.

In order to achieve the ability to do Unicode, there’s a reserved set of values within that 2-bytes, to allow you to extend the encoding to reach Unicode.

Back to the original measurement, for the string “hello world”, I believe a JavaScript `sizeof`, if it existed, would report 24 bytes (22 for the characters, and 2 (give or take) for either the NULL character or a length header).
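
A quick sketch of the numbers in this subthread, runnable in Node or a browser console:

  const s = 'hello world';
  s.length;                            // 11 UTF-16 code units
  new TextEncoder().encode(s).length;  // 11 bytes when encoded as UTF-8 (all ASCII)
  s.length * 2;                        // 22 bytes if stored as 2-byte code units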


The thread was originally about CRA vs Vite size on disk (or implicitly, if we're applying it to real world applications, network cost in CI job startup times). And like I said, surrogate pairs don't apply to ASCII.

See this[0] for reference. Note how the first byte must fall within a certain range in order to signal being a surrogate pair. This range quite deliberately falls outside the ASCII range. This fact is taken advantage of by JS parsers to make parsing of ASCII substrings faster by special casing that range, since checking for a valid character in the entire unicode range is quite a bit more expensive[1].

IMHO nitpicking about memory consumption of the underlying data structure is a bit meaningless, since the spec doesn't actually enforce any guarantees about memory layout. An implementation can take more memory for pointer to prototype, to cache hash code/length, etc, and there are also considerations such as whether the underlying data structure is polymorphic or monomorphic due to JIT, whether the string is boxed/unboxed, whether it's implemented in terms of C strings vs slices, etc.

Regardless, it doesn't change the fact that the octet sequence "hello world" takes 11 bytes in ASCII/UTF8 encoding (disregarding implementation metadata).

[0] https://github.com/jquery/esprima/blob/0911ad869928fd218371b...

[1] https://github.com/jquery/esprima/blob/0911ad869928fd218371b...


All great points. Not trying to nitpick, just trying to satisfy curiosity.


This is informative.

How is the configuration overhead? Is it relatively easy to get Vite + a custom react app template + tsx + testing up and running?

CRA is bloated, but still one of the fastest ways to get a "full app" up and running with React.


I don't think Vite has a testing story yet. As for the rest, yes, it totally is straightforward. We only had to change one line of configuration in the base template for it to work on Replit. See https://replit.com/@templates/Reactjs


Webdev truly is one of a kind. Not only do you have your source code, you also have your build code, oddly dictating the organization of your source code. For extra fun, some libraries don't build with build system A, others require build system B; when you're lucky enough to have a working mix, changing build systems is best avoided. Of course, periodically the officially blessed build system for your libraries changes.

The webdev ecosystem is so broken, it's no wonder so many websites deliver assets that are extremely suboptimally optimised, big unused blobs of assets/scripts slowing page load; optimising it all is actually made harder than coding it all.


> you also have your build code, oddly dictating the organization of your source code

It... really doesn't? You generally have an entry point file - which can be any file, you just have to specify it to your build system - and import statements are followed from there on. If anything you could argue the JS build ecosystem is too flexible (which is one of the things esbuild is pushing back against). I've never heard someone criticize it for being too opinionated.

> For extra fun, some libraries don't build with build system a, others require build system b

I've literally never encountered this problem. Library authors virtually always ship least-common-denominator JS that will work without using a build system at all, and then build systems know how to handle lots of different variations of JS and converge them into a single representation. Compatibility is not an issue that exists in my experience.

> The webdev ecosystem is so broken, it's no wonder so many websites deliver assets that are extremely suboptimally optimised, big unused blobs of assets /scripts slowing page load, optimising it all is actually made harder than coding it all.

Now you're just airing your own personal beef which doesn't actually have anything to do with the original topic.


Have you seen the Android build system? I mean, what the hell is a Gradle wrapper. At least JS tools are configured in .json or .js files, not in a special-purpose programming language.

I don't mean to discuss which system is too complicated and which is not, just to point out that a lot of real-world build systems are on that level.


Android apps are written in Kotlin, and Gradle config is also written in Kotlin. How come it is different from the JS situation? Even though Gradle can be configured in Groovy, and Android apps can be written in Java, it is still the same ecosystem - basically, the situation of TypeScript/PureScript/WhateverScript and JavaScript.

Gradle wrapper is just a name for a script (automatically created by Gradle btw) which allows one not to have any kind of Gradle-related tooling installed on the machine (basically, to run build tasks you execute `./gradlew someTask`, and it takes care of downloading and running the appropriate Gradle version) - which I think is a clear benefit over the fact that you need to have `npm` installed system-wide or via a tool like NVM in order to build JS projects.

Gradle has its own warts, and a lot of them, but at least there is only one major build system in this area, and it is simply impossible for a library published to a Maven repo to be dependent on the build system it is built with.


For personal projects I just use plain JavaScript, ES5 even, without any compile steps; everything loads instantly and runs on every browser that supports JavaScript, and the development can be set up in any environment/OS without headaches. I do use minification for production but that is not really necessary if the browser supports loading JS script tags async (all major browsers). The bundle gets very small without any frameworks attached. Debugging is easy with the browser's built-in dev tools, with small error messages that always have the correct line (eg. no source maps). I'm currently working on a 100,000+ line JS project that uses the plugin pattern with script tags and it's very manageable; adding new features is fast and fun. There are of course trade-offs, I can't just "npm install react-x-y-z", but the browser APIs are extensive; you can do just about anything on the front-end with just the native browser components and APIs.
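
For readers unfamiliar with it, the script-tag plugin pattern mentioned above often looks something like this sketch (names are illustrative, not the commenter's actual code):

  // core.js - loaded first via a plain <script> tag
  var App = {
    plugins: [],
    register: function (plugin) { this.plugins.push(plugin); }
  };

  // plugins/clock.js - loaded via its own <script> tag, no build step involved
  App.register({
    init: function (root) {
      var el = document.createElement('time');
      el.textContent = new Date().toLocaleTimeString();
      root.appendChild(el);
    }
  });

  // main.js - loaded last, wires everything together
  App.plugins.forEach(function (plugin) { plugin.init(document.body); });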


I really love TypeScript. But I have always considered the end game of TypeScript to be that its inference engine (and third-party libraries) become so good that you can just write ECMAScript and get all the benefits of TypeScript.

I wouldn’t give up all my structural type inference for the 1:1 you’re describing, but it is tempting.


I see you have never worked on a C/C++ project.


I have done both. C++ is miles better in comparison.


Maybe you can enlighten me how to easily add a dependency (the equivalent of "yarn add lodash") in a platform-independent way.


I have used CMake.

I added the dependency by writing about three CMake lines.

And after that, it just works, for years.

I may have to add from 2 to 5 dependencies for a project, instead of the myriad of dependencies in JS. And they don't become obsolete and need an update each week.

It's a difference of some orders of magnitude, both in number and in upgrade frequency. This is a huge part of dependency management for me.

And there are C++ package managers like Conan, which solve your requirement of adding a dependency by just using the command line.

I will probably use Conan if I have to write C++ again.


CMake only finds the dependencies, how do they get to your computer? Do you link statically or dynamically? What compiler toolchains are these libraries built with, do they use the same standard library as your code or different ones? When you want to cross compile, how do you find libraries for your target architecture?

Conan makes much of this easier, but is not really suited for large software projects with the problems mentioned above - in my experience.


Well, I don't cross compile. If I need to test a new OS, then I get the VM and compile it there.

About dependencies, if they can be found in the package manager then it goes that way. Else, they are a git submodule.

The most problematic has indeed been the C++ MySQL connector, because it has changed a bit over the last five years, so I had to edit the #include lines in the source code files.


Chrome for example does not currently compile on darwin-arm64, cross compiling from darwin-x64 is the only way to build that target.

When you are using git submodules as package manager, what do you do if the dependency doesn’t come with CMake files, but with autotools for example?

I think bazel is becoming a great solution for these problems if you align with their philosophy.


Yeah this appears to be more or less the essential complexity of build+link


Looks like you may be shadow banned - I vouched for this comment and see all your recent previous comments are dead


> The webdev ecosystem is so broken

Allow me to correct this statement: The Javascript ecosystem is so broken.

Working with Clojurescript and Elm is an experience that will make most developers fall in love with web development again.


Hate to break it to you but Clojurescript, Elm and the whole paradigm of "compile to JS" languages are part of what is broken about the javascript ecosystem. So much unnecessary complexity and energy wasted making javascript pretend to be something it isn't and to make it do things it was never meant to do, pressing a small, simple and powerful scripting language into the service of the profane eldritch abomination that is Enterprise Development.


I dunno, I feel that’s like saying that Lisp and Haskell are part of what’s broken about the machine code ecosystem.

ClojureScript and Elm aren’t compile-to-JS languages for the sake of doing something weird, there’s just literally nothing else that will run in a browser (except wasm but probably best not to get into that).

Different programming languages exist for a multitude of valid reasons, the compiled output isn’t particularly important as long as it can express everything you need it to.


> I dunno, I feel that’s like saying that Lisp and Haskell are part of what’s broken about the machine code ecosystem.

It would be if machine code were another high-level text-based language completely unrelated to either Lisp or Haskell with its own semantics, execution model and type system rather than a more directly machine-readable format of those languages themselves.

Javascript is fundamentally different enough from a bytecode for any arbitrary language that, at least to me the distinction matters. I can accept that it works well enough that most people don't care, even though I suspect most of the use cases for doing so (error checking, type checking) could be better served with linters or editor tools for JS itself.

But the decision to avoid actually writing javascript at all costs has contributed a great deal of complexity in the JS ecosystem, which translates into the bloat in the web that everyone complains about, because all of that javascript is wasting time and cycles simulating other languages.


> But the decision to avoid actually writing javascript at all costs

Whose JavaScript is it anyway? ES7? ES6? Internet Explorer 11's? How do you isolate things for the sake of unit tests and then bring them together into a performant build?


>Whose JavaScript is it anyway? ES7? ES6? Internet Explorer 11's?

Cross-browser compatible Javascript was a solved problem when JQuery and shimming came along.

>How do you isolate things for the sake of unit tests and then bring them together into a performant build?

Use one of the many unit testing libraries and frameworks that already exist for Javascript.


Hate to break it to you, but what you call "Enterprise Development" is really just "large software projects", for which many find the "simple and powerful scripting language" inadequate.


But those projects use that "simple and powerful scripting language" anyway, so clearly it is adequate.


Are you using JavaScript when you are using TypeScript? I guess so, to some degree. The additional tooling complexity is just not that significant compared to the gains of a language that compiles to JavaScript vs using vanilla JavaScript.


Also Trunk [0] for Rust-wasm web dev. It’s made me not hate doing the front end for my projects. I look forward to the Rust web ecosystem evolving to the point where I can recommend it in a work context.

[0] https://trunkrs.dev/


Then also make unit, integration, and e2e work with all the transpiling. Good luck with sourcemaps!


You enable sourcemaps on Webpack with `devtool = "eval-source-map"` in your config, and I'm not sure how you expect transpiling to be a problem with testing considering your tests are also transpiled.
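
For concreteness, the relevant bit of config is tiny (a minimal sketch):

  // webpack.config.js
  module.exports = {
    mode: 'development',
    devtool: 'eval-source-map', // full source maps, rebuilt quickly on change
  };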


Do you compile and run your unit tests with webpack?


In the case of code that needs to be transpiled, like Typescript, yes.


FE web development is one of a kind and there are good reasons for that. I keep seeing the same responses pop up all the time so I decided to write my thoughts down on why FE web dev is a black swan in software development: https://erock.io/2021/03/27/my-love-letter-to-front-end-web-...


It's more the JS ecosystem than web, because of where it started and how it's evolved organically. It's rather surprising we got this far.


I knew someone would bring up this comment.

Webdev can be broken, but doesn't have to be. I am currently having fun & feeling productive on both a large team project and some small personal projects.


I literally just spent days trying out Vite and comparing it to Webpack 5, and I can comfortably say that they are in two very distinct leagues. It isn't fair to compare Vite to a CRA build with Webpack. You can greatly improve Webpack's performance by:

* Making sure `mode` is set to "development" (don't specify it at all if in doubt)

* Ditching babel-loader and using esbuild-loader instead

* Adjusting build targets and making sure polyfills are not added in development

* Making sure you don't import the entirety of libraries like Lodash and MomentJS (prefer date-fns); see the import sketch after this list

* [FUTURE] We'll soon be able to tell Webpack to output ES modules just like Vite [1]
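
As a sketch of the import point above (the exact functions don't matter, only the shape of the imports):

  // Pulls in the whole library and makes tree shaking much harder:
  import _ from 'lodash';
  import moment from 'moment';

  // Imports only what you actually use:
  import debounce from 'lodash/debounce';
  import { format } from 'date-fns';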

Vite still has many problems:

* It just isn't suitable for proper server-side rendering, we'd need access to chunk names at build time

* It has a weird mix of own and Rollup config settings that seems somewhat unpredictable

* Many open issues regarding module imports, for ex, when a CJS module imports an ES one [2]

Our current bottleneck with Webpack is actually sass-loader, taking at least 70% of the time during a fresh build, and we'd have the same problem with Vite.

Something else that is worth pointing out is the ecosystem: Webpack's community has built tons of plugins for basically any use case you can imagine, and version 5 supports module federation, persistent caching, externals (very handy when doing SSR), customizable filename generators, performance hints, etc etc. Totally different game.

Try to keep your build config simple, avoid too many loaders, plugins, and you should be fine 99% of the time. If you hit a wall, install speed-measure-webpack-plugin to get some help.

[1] https://github.com/webpack/webpack/issues/2933

[2] https://github.com/vitejs/vite/issues?q=is%3Aissue+is%3Aopen...


>* It has a weird mix of own and Rollup config settings that seems somewhat unpredictable

Oh god, this is a giant red flag. This is precisely one of my biggest gripes with Quasar (a Vue framework), which, on top of webpack/Vue, adds its own config options that are intermingled with Vue's and webpack's, and it's honestly a mess, and oftentimes just straight up makes problems harder to solve.

I am obviously not railing against config dedicated to a single tool in a toolchain, but the way you described it rings a bell, especially the "unpredictability".

>* Many open issues regarding module imports, for ex, when a CJS module imports an ES one

This also seems to confirm my and other's suspicion, that the tool isn't really "deliver-grade".

>If you hit a wall, install speed-measure-webpack-plugin to get some help.

But on the other hand, this is at least the fifth plugin I have heard someone recommend, dedicated just to profiling Webpack.

What do you think about the time difference shown in the article? I sort of feel it's a bit disingenuous since the test included a lot of other variables, but it seems hard to argue against it if it's this plain. Is this a config issue, or a "most commonly used loader" issue...?


I'd be dishonest if I told you I was able to compare them 1-to-1. I couldn't finish my Vite setup because of bugs and lack of proper SSR support.

I believe that, at this point, people are just nitpicking. When working on a client-side-only app, Vite is faster, but not by that much. In one of our apps, I saw it starting up in 1s compared to 2s with Webpack, and reloads in 100ms compared to 250ms. This is a Preact/TS/Emotion app that outputs almost 7MB of assets, 61 files to be precise, and works on IE11. And I hadn't even tried persistent caching with it yet. Webpack 5 with esbuild is a fast-enough solution.

NextJS is a well-established tool now and version 10.2 uses Webpack 5 under the hood. V11 is looking insanely fast (I suspect they replaced Babel with esbuild, and did some witchery): https://twitter.com/shuding_/status/1378086219708473344.

So yeah, slow performance with Webpack is definitely caused by a bad set of loaders/plugins. Eject a CRA app and you'll see a monster coming out.


If you figure out how to get Webpack to solve all these issues, that makes one working Webpack installation. When Vite solves their issues, all Vite installations will have the issues solved.


Vite is not a completely opinionated tool. You can still customise its behaviour and use plugins, so both installations would have very similar vulnerabilities at the end. For SSR, you need a self-written server entry point.

Webpack 5 has introduced asset modules, so now you can safely ditch raw-loader, url-loader and file-loader. [1]

You might have NextJS or CRA in mind.

[1] https://webpack.js.org/guides/asset-modules/


> It just isn't suitable for proper server-side rendering, we'd need access to chunk names at build time

You can, at later Rollup hooks (it doesn't make sense to access chunks that don't exist yet).

Actually, there are SSR frameworks being built on top of Vite such as SvelteKit [1] or vite-plugin-ssr [2] (vite-plugin-ssr is not a framework but gives you a similar DX to Nuxt/Next.js; I'm its author), and many people are implementing custom SSR solutions.

Join our Discord #ssr channel and ask us questions https://discord.gg/PkbxgzPhJv ;-).

[1]: https://github.com/sveltejs/kit

[2]: https://github.com/brillout/vite-plugin-ssr


Have you seen SvelteKit's source code? It looks like a toy project. [1]

vite-plugin-ssr can't even be integrated with Vue Router (I saw you're working on deep integration though). They're both very rigid, early stage endeavours. [2] What happens, for ex, if you use nested lazy components in those pages? Are they going to be included in the server render as well?

I mean, fair enough that there are people trying to do better, but it's extremely hard to find the right abstractions for such complex builds and Webpack is definitely on top here.

[1] https://github.com/sveltejs/kit/blob/5c2665ff2280947a2fc6001...

[2] https://github.com/brillout/vite-plugin-ssr/blob/master/src/...


Sounds like you are being biased here.

I mean, if you think vite-plugin-ssr to be rigid, then Next.js should feel like a 2sqm prison cell to you ;-).

If you want more flexibility than vite-plugin-ssr then use Vite's native SSR API.

Whereas with webpack: good luck with 1. using two webpack configs (one for Node.js and one for the browser), 2. synchronising between these 2 webpack configs, 3. implementing server-side HMR; it's incredibly painful and can cost you many weeks of dev time. Vite's SSR API does all of this for you for free.

Sure, things are early stage, but saying that webpack is "definitely" on top for SSR is wrong in virtually any possible way.
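
For reference, a rough sketch of what a dev server built on Vite's native SSR API looks like (Express and the file names are assumptions, not part of Vite itself):

  // server.js - Vite in middleware mode handles transforms and HMR for server-rendered pages
  const express = require('express');
  const { createServer } = require('vite');

  (async () => {
    const app = express();
    const vite = await createServer({ server: { middlewareMode: 'ssr' } });
    app.use(vite.middlewares);

    app.use('*', async (req, res) => {
      // ssrLoadModule transforms and loads the entry on demand, so edits apply without restarts.
      const { render } = await vite.ssrLoadModule('/src/entry-server.js');
      const html = await render(req.originalUrl);
      res.status(200).set({ 'Content-Type': 'text/html' }).end(html);
    });

    app.listen(3000);
  })();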


From the article, a good explainer:

> Vite works by treating your source code and your dependencies differently. Unlike your source code, dependencies don't change nearly as often during development. Vite takes advantage of this fact by pre-bundling your dependencies using esbuild. Esbuild is a JS bundler written in Go that bundles dependencies 10-100x faster than JavaScript based alternatives like Webpack and Parcel.


I find it useful to have my vendor dependencies compiled and watched for debugging. There are always bugs in deps or poor error messages, and sometimes the dev tools debugger doesn’t work properly, so being able to modify a dep is nice.

Of course it would prob be better if this was a toggle.

I think webpack module federation may improve this situation too. I used the webpack DLL plugin previously to speed up vendor dep compilation.

Thing about webpack though is it’s so complex how all this works that I always have to revisit it every few months to jog my memory.

We need simpler abstractions on top. Next is nice but still, if you need to dive deeper it’s painful.

I think in like 20 years time we will probably be able to get rid of all of this tool chain and the new kids will never know the pains we went through.


When I learned about Prolog in university I had a "eureka" moment where I wondered why I was spending all this time learning to implement algorithms in imperative code, when I could just express the output I want and then let some über constraint solver figure out how to produce it. How liberating declarative programming would be if only we started coding with it more.

When I try to use Webpack, as I click through obsolete StackOverflow posts, trying to figure out the right key-value pairs to get the output I want, I realize how mistaken I was. I hope build tools for the web become less "magical" and more predictable and debuggable, even if it means discarding their declarative form.


Apropos of that, Yarn allows use of Prolog to ensure constraints across dependencies


I believe that SICP handled these sentiments very early on in the book.


I tried out both snowpack and esbuild recently, and while the approach of using es modules was neat, these tools are still wildly immature compared to webpack.

Need to handle non-js assets in your bundle? Need to integrate into both node and browser environments?

You should stick with webpack. Any time you lose waiting for webpack to run you will get back 10x over by not fighting with your tooling because it doesn't handle a set of use cases you have.

I'm sure the next generation of build tools and bundlers will mature over time and we'll all get to enjoy the benefits of es modules, but right now webpack's mature plugin ecosystem, documentation, and stability makes it my default choice.


> Any time you lose waiting for webpack to run you will get back 10x over by not fighting with your tooling because it doesn't handle a set of use cases you have.

Generally I think this is good advice for tools--stick with the mature thing. But my former company's webpack builds were 30+ minutes for a relatively simple site. No doubt something was misconfigured, but you really have to be a webpack expert to get any sort of insight into where the time is going, and even then it may not be especially actionable (or at least not obviously so). In our case, we were using a monorepo and we didn't have something like Bazel, so this was painful for backend engineers, data scientists, etc--not just frontend engineers.

Maybe our case was pathological, but we would have saved a ton of time moving to esbuild and building whatever additional features we needed from scratch.


Your experience mirrors mine to a letter - I appreciate the wide variety of use cases that are covered by Webpack (a number of which are crucial to our build process), but its performance is abysmal (20m+ on CI) and totally inscrutable; making the potential switch from it rather enticing.

Especially as seeing that there is no good solution (but a lot of attempts at them), I feel it somewhat points to this overcomplexity being a problem central to Webpack.


> my former company's webpack builds were 30+ minutes for a relatively simple site

I’m genuinely curious what simple site would cause a 30+ minute build? Is it just one of those things that grow over time?


I wasn’t a frontend developer, so I’m not familiar with the details, but it seemed to be pretty slow from the start, but compounded as our app grew in size. I.e., there was some large coefficient associated with our web pack configuration.


How large was the bundle at the end? I'm not going to claim that webpack is fast for large applications, but I've never seen a build take longer than 3 minutes and this is for applications where our own source code was at the MB scale.


I don’t recall. We split it into multiple bundles in hopes of improving speed, but I don’t know the sizes in any case.


I feel like there just needs to be a good plugin that gives you proper stats as to how long things are taking and tips to improve things.

It’s usually slow css loaders, or too-inclusive patterns for loaders ending up processing node_modules, and not good enough disk caching.

There is a plugin called SMP (speed-measure-webpack-plugin) to monitor loader execution time.

The DLL plugin was the best speed up to avoid recompiling things that don’t change, but now module federation is supposed to be a better solution. Disk caching in v5 will also do great things.

Thing is, it’s so complicated to set up, and debugging it involves so much config tweaking and waiting.

I think we are a few years out from when we have nice speedy builds in webpack with good abstractions.

But then it’s a question of whether people move away from webpack because we don’t need all the plugins and transpilation and want native compile speeds.

One thing for certain though is the hype cycle will continue on...


I paired snowpack and webpack together and got something that I can use to mostly get a build pipeline going.

Snowpack's latest releases have been... troublesome, but we've stuck with 3.0 and it's been good enough. It completely doesn't support some libraries, but other than that it's been tolerable. Knowing what I know now I probably wouldn't have invested in Snowpack quite yet.

Really hoping Snowpack stabilizes a bit more, or we'll probably just fall back to webpack (or maybe Vite?) again.


My position on web development for even consulting projects is just to not use build processes as much as possible. I still end up using them for things like JSX, but I don’t bundle anymore, nor do any web projects I build require installation out of the box.

I’ve found that by doing so, I can just basically ignore, for years on end, all of the peddlers pushing how their projects make development easier or faster or some nonsense.

You know what’s easiest and fastest? Flat files in a good directory structure with some getting started template.

That’s it. The fastest build times are the ones that don’t exist. Period.

Web development is as complicated as you choose to make it. Very few fields work like this.


> Very few fields work like this.

Quite the opposite; I would argue that webdev is basically rediscovering the whole shebang, but decades later.

Modern webdev with transpilation, linking, pruning, compilation to WASM etc. starts to dreadfully look like the classical native development paradigm.


Yeah, and basically none of that is necessary. Which is my point exactly.


...which is essentially cargo cult programming, since browsers work in an entirely different way (perhaps with the exception of WASM because of how it's designed).


For people already on webpack, there's esbuild-loader (https://github.com/privatenumber/esbuild-loader)
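
A minimal sketch of what swapping it in looks like, based on esbuild-loader's documented usage at the time (targets and file patterns are just examples):

  // webpack.config.js
  const { ESBuildMinifyPlugin } = require('esbuild-loader');

  module.exports = {
    module: {
      rules: [
        {
          test: /\.[jt]sx?$/,
          loader: 'esbuild-loader',
          options: { loader: 'tsx', target: 'es2017' }, // replaces babel-loader/ts-loader
        },
      ],
    },
    optimization: {
      // Replaces Terser for minification as well.
      minimizer: [new ESBuildMinifyPlugin({ target: 'es2017' })],
    },
  };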


Ooh, Next uses webpack, right? Might be some free speed. Thanks.


Yes but there are some difficulties you may run into when replacing the default loader nextjs uses, because it includes some custom babel plugins that get blown away when you replace the loader with esbuild-loader.


Thanks for the heads up.


We recently switched from ts-loader to esbuild-loader. It was super easy and it halved our build time with zero complications. Highly recommend.


In the Ember.js ecosystem there's a really exciting new build tool project called "mho", that replaces webpack and uses a novel strategy running in a service worker to interpret native javascript modules with minimal overhead & near instant rebuilds (disclaimer I'm not familiar with how snowpack works under the hood).

https://github.com/ef4/mho

https://www.youtube.com/watch?v=09USvAy7w9g

https://sqwok.im/p/TleLmpJ9BFp1IQ


I used to not get Ember but of all the frameworks its users love it the most. I remember when Ember was the "cool" framework almost ten years ago. The community has evolved it and kept it relevant which is truly impressive. Ember entered the world at a time Backbone.js was cutting edge. Ember is still relevant and has adopted modern best practices, moved to TypeScript, and most importantly provided an easy upgrade path.

Most companies burn money on a total re-write when old tech gets too hard to support. Ember has never accepted this. I love that!


> Ember has never accepted this. I love that!

Totally! It's really impressive how they've adapted, evolved, and kept the framework going, so much that one of the most popular websites on the net has trusted it for their core site (linkedin).

I kind of fell into working w/ember because a startup I was at happened to be using it, and then it just became so familiar that I stuck with it. Really happy to see stuff like mho coming out & pushing the framework forward.


I haven't tried Vite, but I tried ESBuild last weekend on a personal project and was absolutely blown away. Even as someone who's become pretty comfortable working in the Webpack trenches, I don't see myself starting a new project with Webpack ever again. There's just no reason to when you can get the exact same output, much faster, without any of the headaches.

Honestly it's so good I wonder if it will undercut Deno a little bit by lowering the barrier for bootstrapping a TypeScript project


I wonder how this compares to a proper manual Webpack configuration. Comparing it to CRA isn't really helpful to me, as CRA is already known to be very slow.

I would consider trying it if it has significant performance benefits over a manual Webpack config, especially one making use of esbuild-loader (https://github.com/privatenumber/esbuild-loader)


> compares to a proper manual Webpack configuration

What is a "proper webpack config".

It's such a complex and arcane beast, that I doubt anyone could really figure out a "proper" way to do anything with it.


> It's such a complex and arcane beast, that I doubt anyone could really figure out a "proper" way to do anything with it.

I'm sorry but you sound ignorant here. There are a lot of projects that can get away with around ~200 lines of simple Webpack configs.

And yes, Webpack is complex but an important part of that complexity is coming from the intrinsic complexity of the problem we call module bundling. You can see this complexity in older bundling solutions as well.

Regardless of whatever you'd like to believe, a lot of people figured out proper ways to do a lot of things with it. You may not be willing to deal with this yourself, that is understandable, but saying what you are saying about the most popular bundling tool in the ecosystem is a bit much.

I myself regularly write manual Webpack configs and usually ~200 lines of simple configs are enough for me, as well as very performant. And I know a lot of people like me.

Webpack is not perfect, and I'm not really a big fan, but come on. I guess what you have a problem with is people who write terrible Webpack configurations rather than Webpack itself. Maybe think about it.


If you want to learn more about Vite, I recommend that you join Vite Land (https://chat.vitejs.dev). There is a very active and helpful community, and there are tons of opportunities to collaborate in the creation of plugins, integrations, and to improve Vite itself.


Esbuild is an amazing piece of software. At recurrency.ai we had an app that took almost 5 minutes to compile on stock create-react-app setup.

We moved it to webpack and used esbuild-loader and the esbuild minifier. Still using webpack because we needed Less and styled-components transforms. Esbuild doesn’t support hot reloading. Also I have a lot of experience tweaking webpack so I’m sticking with it.

Our build times went down to 15-30 seconds. That’s with full minification and sourcemaps.

I did try stock esbuild and that did it in 2 seconds. Webpack adds quite a bit of overhead.

From 5 minutes to 30 seconds without breaking any user facing functionality is still a big win.

Dev loop on hot reload is 100-300ms. Just feels amazing.


Ah, the classic build tool evolution cycle!

1. Become frustrated with complex build tool

2. Write a new tool that is dead simple, opinionated (your opinions), convention over configuration, etc

3. Post to HN

4. Achieve adoption

5. As more people use the tool, feature creep ensues

6. In order to satisfy diverse use cases, make everything modular and configurable!

7. Tool slows and becomes impossible to manage

8. GOTO 1.


That's quite a leap there. Vite has a different strategy to overcome webpack's slowness, mainly using ES modules and build on demand.

So, even if it were to support more features, it will still be a net gain over webpack.

Not to mention that Vite uses esbuild, which is 10-100x faster than the next best


> feature creep ensues

I believe esbuild authors (and Vite relies on esbuild in development) are adamant that they are not going to allow feature creep.


I have tried to use esbuild, and over the past 3 months I had at least a dozen times when it has compiled incorrect code that doesn't work or doesn't do what it's supposed to do. So despite the performance benefits, I was forced to go back to webpack, babel, and terser.


Any specific scenarios where it fails? We're using it across a wide codebase and several teams and I haven't seen or heard that issue.


For me personally, it's mainly CJS imports which I can't drop.

Of course the correct way would be to force the dependencies to support ESM.

But vite has now become default for me as well for new projects.


Did you try using TypeScript with esbuild? React + esbuild + TypeScript is a perfect combination for me. I had the chance to kick off a new project a few months ago and started with this stack. It needed roughly 50 lines of code to get a decent server-side rendering development experience set up. Compile time is still below 200ms.

The only issue I had in the beginning was esbuild's compilation of TypeScript enums: the import order was off from time to time. Besides that, no issues. I am also super glad I do not need to configure webpack any longer.
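
Not the setup above, but a rough sketch of the shape such a build script can take with esbuild's JS API (entry points and options are assumptions):

  // build.js - one browser bundle for the client, one Node bundle for the SSR server
  const esbuild = require('esbuild');

  const shared = { bundle: true, sourcemap: true };

  Promise.all([
    esbuild.build({ ...shared, entryPoints: ['src/client.tsx'], outfile: 'dist/client.js', platform: 'browser' }),
    esbuild.build({ ...shared, entryPoints: ['src/server.tsx'], outfile: 'dist/server.js', platform: 'node', format: 'cjs' }),
  ]).catch(() => process.exit(1));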


Do you have any repros? So far I’ve found esbuild gives the correct output.


Author of vite-plugin-ssr [1] here. Let me know if you have any question about doing SSR with Vite.

(vite-plugin-ssr provides a similar DX to Nuxt/Next.js but as a simple do-one-thing-do-it-well Vite plugin. Vite has a powerful native SSR API which not only makes vite-plugin-ssr's source code lean & small but also enables you to easily implement your custom SSR solution.)

[1]: https://github.com/brillout/vite-plugin-ssr


Webpack was the most complex part of our application - it littered the root project with config files and needed a companion documentation file to be maintained to let everyone know what was being used where. At some point tree shaking stopped working properly. I very nearly moved us to using makefiles.

As a side project to investigate upgrading to Vue3 we used Vite and have never looked back. Everyone understands how it works and what the config does (probably because of its built in support for things like SCSS without requiring a chain of configuration) and everything runs much faster.

I still worry that Vite like webpack and other web build systems is doing a lot under the hood and it will get complex to maintain when we want to do something out of the ordinary and a more 'open' build system like Make and an obvious sequence of processing steps would be better.


It's always impressive to me that the co-founder of Figma (https://twitter.com/evanwallace), who essentially built a browser in a browser (figma.com), built esbuild. Could honestly be the Woz of our generation.


How is Figma considered a browser in a browser? It's a great design tool, I just haven't heard it described this way before.


He has some insanely high quality projects. Shout out to his Kiwi schema project (https://github.com/evanw/kiwi), which saved me from protobuf swamp monster hell.


I wonder why all these new bundlers don't provide a webpack-compatible API to entice new users. I can't take the risk of investing in a new tool only to find that I can't migrate the last 20% key features that I need. I also don't have the bandwidth to constantly switch between vite, webpack, parcel, etc. every time I switch to a different project.


Snowpack (which also uses esbuild internally) has a webpack compatibility extension: https://www.npmjs.com/package/@snowpack/plugin-webpack


FWIW that plugin does not make your existing webpack-based code magically work in snowpack. It's just using webpack to bundle your snowpack-based code.


Because my main quibble with webpack is that its API is a mess and sucks.


I can only imagine it's because the authors don't care, right? If they cared, they'd build it. But I think most of these bundlers are out to show you how great they are, and how you should drop everything to do things their way.

So, ultimately I think the answer is to say, "No, solve my problem, not yours."


Frankly, I would pass.

The switchover of build tools in JS land is insane. I get that this has concrete performance improvements over the alternatives, but I wonder if the same effort put towards improving an existing toolchain wouldn't get you a lot of similar gains without breaking everything yet again.


That's the beauty of it: if the concrete performance improvements don't seem like a worthwhile tradeoff, you don't have to use it. The idea that everyone should just contribute to one package instead of creating their own solutions is silly; we are not able to dictate what people code up in their own time. Besides, there are dozens and sometimes hundreds of active contributors to every popular open source package, and just adding more developers doesn't necessarily make things better, not to mention that the quality control and code review demands of mature projects mean many contributions are misguided or not useful.


A counter point would be to look at the Python ecosystem. They have many fewer libraries but the ones they have are very well maintained and widely adopted.

JS/Node is full of unmaintained projects that people inevitably need to migrate away from.

Of all the new choices in bundlers today, it’s also inevitable that some won’t work out, yet they will drive hype and adoption, which could have been spent on already established tools such as webpack.

Yes, innovation is good and no one should tell someone what to do in their free time.

It's always going to be harder to build on top of the old than to write something new, but what is new eventually becomes old... and the cycle repeats.


> A counter point would be to look at the Python ecosystem. They have many fewer libraries but the ones they have are very well maintained and widely adopted.

So what? It is what it is - the python ecosystem faces completely different challenges than the JS ecosystem, if the browsers exclusively ran Python instead of JS it would be the exact same situation for Python.

> JS/Node is full of unmaintained projects that people inevitably need to migrate away from

This isn't true. jQuery, Angular.js, Angular, React, Vue etc are all still maintained (check their githubs if you are in doubt) and all still work just fine, so if you want to use the same tooling you did a decade ago that's totally possible.

> it’s always going to be harder to build on top of old rather than to write something new

So what? If you want to contribute to old projects instead of writing something new you're free to do that, and if you don't like new stuff you don't have to use it, complaining about this stuff is just pointless contrarianism, but of course, you're free to do that too.


It isn't contrarianism. The end result is that most JS libraries are immature, have complex edge cases that can only be handled by one package but not by another similar one, and old packages don't evolve to adapt to new paradigms.

Knowing that pretty much every Python dev knows how to use requests, Django, and NumPy saves a huge amount of effort in retraining and in fighting over choices of ultimately little value.


It is contrarianism, there are many mature JS libraries, the fact that less mature packages also exist isn't a problem, just don't use them.

> Knowing that pretty much every Python dev knows how to use requests, Django, and NumPy

This isn't true, but even if I grant it, you could just as accurately say pretty much every JS dev knows Node, axios, and React.


AFAIK Axios isn't even universally used for requests, and in the SPA framework space it seems to be a split between Angular and React in popularity. None of these things seem to be occurring in the Python space.


Nothing is universal; you're seeing what you want to see. I could say the same thing about Django, Flask, and web2py, or NumPy and Pandas, or Trio and asyncio.


Django doesn't occupy the same space as flask; not in the same way React and Vue seem to. Web2py is nonexistent. Numpy is a basic linear algebra library and Pandas is a high-level tabular data analysis library.

None of these things compete with each other in any meaningful way.


> Django doesn't occupy the same space as flask;

This is just wrong. Django absolutely occupies the same space as Flask; this is pretty obvious, but you're splitting hairs to defend your argument. The bottom line is that a Python developer building a web application has to decide whether Django or Flask is a better choice for their engineering needs; you wouldn't use both (though you could, just as you could use React and Vue together).

> Web2py is nonexistent

This is totally wrong. You can selectively discard every example that doesn't meet your arbitrary standard of popularity, but I could just as easily discard a criticism of Vite, which is far more "non-existent" relative to webpack than web2py is to Django.

> Numpy is a basic linear algebra library and Pandas is a high-level tabular data analysis library.

Once again you're splitting hairs. Yes, Pandas and NumPy have different specialties, but there is clear overlap between them, and a developer who is unfamiliar with the ecosystem wouldn't necessarily understand why they might choose, e.g., Pandas DataFrames vs NumPy arrays. This dynamic is also true in the JS world, where different packages overlap but have particular specialties that make one more attractive than the other depending on your needs.


You can switch to esbuild now or you can switch later, but you will switch. It is superior in design, configuration and performance by orders of magnitude.


What about using native ES module imports cached by a service worker? If you don't have to support old browsers, this is a much better option than a compile step. Just pull from git and load it in your browser - no complications eating up a weekend of reading through docs and shotgun debugging.


Can you share more about your setup?


I can't share code yet, but we are working on potentially open-sourcing it. We have a no-build policy, so it's just raw JS, using an in-house web component framework similar to LitElement. There is a minimal deploy step that creates a manifest hashing every file in the repo, and the service worker uses this manifest to determine whether a file is already cached, serving it from the cache if so. Service workers can intercept network requests, so they work great for this. The win is that if you change a couple of files, only those files need to be downloaded and cached again.
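
A very rough sketch of the service worker side (not our actual code; the manifest location, cache name, and key scheme are simplified):

    // sw.js — rough sketch of the approach, not the real implementation.
    // Assumes the deploy step writes /manifest.json mapping file paths to content hashes.
    const CACHE_NAME = 'raw-js-files';

    self.addEventListener('fetch', (event) => {
      const url = new URL(event.request.url);
      if (url.origin !== location.origin) return; // only handle same-origin files
      event.respondWith(serveWithManifest(event.request, url.pathname));
    });

    async function serveWithManifest(request, pathname) {
      // In practice you'd cache the manifest too; it's fetched per request here for brevity.
      const manifest = await (await fetch('/manifest.json')).json();
      const hash = manifest[pathname];
      const cache = await caches.open(CACHE_NAME);
      const cacheKey = `${pathname}?v=${hash}`;

      // Hash unchanged since the file was cached: serve the cached copy.
      const cached = await cache.match(cacheKey);
      if (cached) return cached;

      // Otherwise fetch the new version and cache it under its current hash.
      const response = await fetch(request);
      if (hash && response.ok) await cache.put(cacheKey, response.clone());
      return response;
    }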


How do people using Vite deal with CommonJS dependencies? As far as I've tried, there were issues with those when using "vite dev". Sadly, at this point, I can't get rid of them (e.g. protobuf + grpc-web generated code).


The right way would be to raise an issue with these dependencies asking them to support ES modules.

I believe Vite does support CJS dependencies; you would need to file an issue on the Vite repo explaining why you are getting the error.
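
Before filing, one thing worth checking is whether those packages are going through Vite's dependency pre-bundling, which converts CJS to ESM for the dev server; you can force that with a small config tweak. A sketch (the package names are just examples for the protobuf/grpc-web case):

    // vite.config.js — a sketch; the listed packages are examples, swap in
    // whichever CJS dependencies trip up "vite dev".
    import { defineConfig } from 'vite';

    export default defineConfig({
      optimizeDeps: {
        // Force these CJS packages through esbuild pre-bundling so the dev
        // server can serve them as ES modules.
        include: ['google-protobuf', 'grpc-web'],
      },
    });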


A similar tool in this space is Jason Miller's (@developit) WMR:

https://github.com/preactjs/wmr


We just switched from TypeScript/ts-node to esbuild for server-side code at Notion; it's been great. Would love to see the same kind of speed wins for our clients :')


Do you prebuild and run, or use some kind of node -r (module register) hook?

Btw I interviewed at Notion but didn’t make the soft skill bar. Big fan of the product.


We're using a node -r esbuild-loader.js kind of thing with a very simple mtime-based cache. It's like 50 lines; we opted to write our own because we wanted the absolute latest version of esbuild so we could use the recent speedups to the sync interface.
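
The shape of it is roughly this — a simplified sketch, not our exact code (cache location and target are placeholders); you run it as node -r ./esbuild-loader.js src/index.ts:

    // esbuild-loader.js — a simplified sketch of the idea, not the exact code.
    // Registers a require() hook for .ts files, transpiling with esbuild and
    // caching the output keyed on the file's mtime.
    const fs = require('fs');
    const os = require('os');
    const path = require('path');
    const crypto = require('crypto');
    const esbuild = require('esbuild');

    const cacheDir = path.join(os.tmpdir(), 'esbuild-require-cache');
    fs.mkdirSync(cacheDir, { recursive: true });

    require.extensions['.ts'] = (module, filename) => {
      const mtime = fs.statSync(filename).mtimeMs;
      const key = crypto.createHash('sha1').update(`${filename}:${mtime}`).digest('hex');
      const cachedPath = path.join(cacheDir, `${key}.js`);

      let code;
      if (fs.existsSync(cachedPath)) {
        // mtime unchanged since the last transpile: reuse the cached output.
        code = fs.readFileSync(cachedPath, 'utf8');
      } else {
        code = esbuild.transformSync(fs.readFileSync(filename, 'utf8'), {
          loader: 'ts',
          format: 'cjs',
          target: 'node14',
        }).code;
        fs.writeFileSync(cachedPath, code);
      }

      module._compile(code, filename);
    };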


Did you open source it? Would love to use it as replacement for ts-node.


I switched over to Vite from Parcel, as Parcel v2 seems to be a mess with no actual concrete release date, and not everything plays nicely with v1.

Vite + Typescript + Tailwind JIT = perfect imo.


Hi, Parcel maintainer here. v2 will be released very shortly. We are ~1 month away from rc. Also, we should have a very big announcement about build perf in the next couple days. Apologies for the long pre-release cycle but I think the light is finally at the end of the tunnel. :)


I've been using Vite for a while now and it has been super painless. However, I use it in a pretty straightforward manner; about the most "advanced" thing I use is a proxy to redirect web requests to the backend during development. I haven't played around with things like SSR yet, which is currently marked as experimental.
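
For reference, the proxy setup is only a few lines of config (the path and port here are examples, not my actual values):

    // vite.config.js — minimal sketch of the dev proxy; path and port are examples.
    import { defineConfig } from 'vite';

    export default defineConfig({
      server: {
        proxy: {
          // Forward API calls to the backend during development.
          '/api': {
            target: 'http://localhost:8080',
            changeOrigin: true,
          },
        },
      },
    });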


I recently ported a microfrontend-based frontend to webpack 5 and replaced Babel/Terser with esbuild, and the builds are 50%-60% faster now. I couldn't replace webpack itself because I am using Module Federation, which is not available in other bundlers. I would recommend this approach if fully replacing webpack isn't possible.
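
For anyone wanting to do the same, a rough sketch of that kind of setup using the community esbuild-loader package (written from memory, so double-check the exact option names against its docs):

    // webpack.config.js — rough sketch: esbuild-loader replaces babel-loader,
    // and its minify plugin replaces Terser. Verify names against the current release.
    const { ESBuildMinifyPlugin } = require('esbuild-loader');

    module.exports = {
      module: {
        rules: [
          {
            test: /\.[jt]sx?$/,
            loader: 'esbuild-loader',
            options: { loader: 'tsx', target: 'es2017' },
          },
        ],
      },
      optimization: {
        minimize: true,
        // Minify with esbuild instead of Terser.
        minimizer: [new ESBuildMinifyPlugin({ target: 'es2017' })],
      },
    };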


Vite is really fantastic, but I've had trouble migrating projects with vendored libraries that use JSX inside .js files. There's kind of a hard requirement that JSX lives inside a .jsx, .tsx, or .ts file.

If anyone has come up with a workaround, I'd love to hear about it.
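
One workaround I've seen suggested (an unverified sketch — I haven't checked that it covers the vendored-library case) is telling esbuild to treat .js files as JSX in the Vite config:

    // vite.config.js — unverified sketch of a suggested workaround:
    // have esbuild parse .js files with the JSX loader.
    import { defineConfig } from 'vite';

    export default defineConfig({
      // Apply the esbuild "jsx" loader to .js files during transforms.
      esbuild: {
        loader: 'jsx',
        include: /\.js$/,
      },
      optimizeDeps: {
        // Do the same during dependency pre-bundling.
        esbuildOptions: {
          loader: { '.js': 'jsx' },
        },
      },
    });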


Why do we need yet another tool? Why can't we just improve the ones we already have?


Because:

1. ES module support has reached all major evergreen browsers.

2. esbuild has come out, showing much faster results than existing tools (Babel, the TypeScript compiler).

3. Improving the current tools would mean continuing to support hacks and incorrect code patterns, which is not wanted.

These factors have resulted in multiple new build tools, Vite being one of them.


None of these are my concern as a developer. I just want a tool that works. I'd rather take some duct tape and 5 new features I __really want__ than a fancy new tool that'll be the same mess 5 years later and has fewer features.


Vite looks like an interesting option to fill part of the asset pipeline gap in ASP.NET Core and Blazor. It could be used for the inevitable TypeScript project and provide HMR for JS and CSS.


How long until "Why we switched from Vite to *"?


Probably when * is doing something markedly better than Vite.


I was a little surprised that Vite, which depends on esbuild and thus should know better, is written in JavaScript.

I presume this is a considered choice, and that what Vite does is unlikely to be on the critical path for edit/build/view cycle time…


> thus should know better

I would posit that perhaps they know better than you.

When pure performance is the concern it makes sense to go to a more performant language, like Go. But every time your JavaScript build tool moves away from JavaScript you close yourself off to a huge pool of potential code contributors and make it a lot more difficult to customise. So I think Vite has it just right: optimise the really intensive stuff, leave the rest more available and more editable.


> thus should know better

This reads as a very acrimonious, biased take. Authors choose the tools they use based on a number of criteria and personal preferences. It's one thing to disagree with them, it's another to project judgement without knowing the why.


The speedup from tools like esbuild, swc, etc. is not because of the language they were written in but because they're competing against Babel. Babel is this single-threaded, highly extensible behemoth. On top of this, you're encouraged to install dozens of plugins, and each plugin has access to global state.

The new transpilers have worked around this problem by introducing various limitations. esbuild plugins literally can't modify the AST. It's basically a way for you to run a command on a string/file. So sure, if you redefine the problem as string transforms rather than AST transforms, you can get an O(N) speedup...


Precisely. I'd advise running Babel after esbuild if you really need one specific Babel plugin, for example:

* Transform the file using esbuild first

* Then use Babel with that one specific plugin (Loadable Components, Styled Components, Apollo, React Refresh...)

This is much faster than letting Babel do all the work.
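
In code, the chain looks roughly like this (a sketch; the plugin name is just an example):

    // Sketch of running esbuild first, then Babel with a single plugin.
    const esbuild = require('esbuild');
    const babel = require('@babel/core');

    async function transformFile(source, filename) {
      // 1. Let esbuild strip TypeScript/JSX quickly.
      const { code } = await esbuild.transform(source, {
        loader: 'tsx',
        sourcefile: filename,
      });

      // 2. Run Babel with only the one plugin esbuild can't replace
      //    (the plugin name here is an example).
      const result = await babel.transformAsync(code, {
        filename,
        plugins: ['babel-plugin-styled-components'],
        babelrc: false,
        configFile: false,
      });

      return result.code;
    }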


Hmmm. I knew I was being a bit of a smartass with “and thus should know better”, but I figured I mitigated that in the second sentence, where I presumed they know what they're doing. ¯\_(ツ)_/¯


I wish Angular could benefit from this new breed of tooling


Vite is framework-agnostic, so theoretically Angular can also benefit from it.

The roadblock lies in the Angular CLI: it is webpack-based, and the Angular team would have to dedicate resources to migrating it to Vite.

Given the situation, I would say using esbuild with webpack would be more cost-effective.


Can anyone comment on Vite vs Snowpack?



Thanks for reminding me I'm getting old and corporate.

My first two thoughts were:

* Imagine if all the smart people in the webdev world improved the existing tool rather than creating a new one. [0]

* "if you do less, it will go faster". I.e. Vite is young and new, supports a subset of Webpack and of course boasts how fast it is. Yet my repos can't use it because it doesn't has the support it needs for my repo sizes with legacy stuff. Of course it is "on the roadmap" and by the time it will be supported Vite is as slow (if lucky) as Webpack. Doesn't matter though as a new tool will be there long before Vite reaches Webpack..

Yup. Getting old and also cynical in the evenings. Dammit!

[0] https://xkcd.com/927/


I cannot say anything about Vite itself, but there is just something wrong with the permanent hype about one helper tool over another, with people switching from X to Y, then to Z, then to the new version of X, then to whatever comes next.


> Esbuild is a JS bundler written in Go that bundles dependencies 10-100x faster than JavaScript based alternatives like Webpack and Parcel.

A JS library calling a golang js-build tool to get the job done. Too funny.


Many golang webapps use JS as their front-end language. Use the right tool to get the job done etc.


Being on the web, it would be difficult not to, no?


You can server-render pages.


This might shock you, but webpack runs on node/V8 which is written in C/C++.


Nope.


Why is that funny? How is it conceptually different from any of the variety of scripting languages that call out to C bindings or binaries?


It's funny because in order to compile/minify JS, it passes said JS to a Go program which spits out JS to be run as JS. I'm not allowed to be amused by the circularity? Get a grip.


Why so defensive? I didn't say anything about what you're allowed to do, I simply asked you a question about why you thought it was funny.


Python is a C core that takes Python code and spits out bytecode to interpret and run said Python. What esbuild is doing is not much different than any other interpreted language.

(p.s. JS is not compiled)


It's not much different, yet it needs a solution built in another language to "build" it. I understand the process is not much different from that of other scripting languages, but I find it funny that JS cannot reasonably do this itself.

(p.s. I'm aware)


Well, it's wrong to say that it "can't reasonably do this itself" since JS building itself is actually the status quo and has been for years, this particular solution is just a new take with an aim for improved performance.


And the fact that they didn't choose JS to do it is the cherry on top.



