Show HN: Minified.js – a fully-featured, 4K alternative to jQuery and MooTools (minifiedjs.com)
158 points by tjansen on July 13, 2013 | 142 comments



I know it's awfully tempting to obsess on something that is easy to measure, like file size. I also know it's tempting to imply things like "twice the file size means twice the page load time". But it's not true. It's NOT true.

Look at this video at 6m30s: http://channel9.msdn.com/Events/Build/2012/3-132

Productively optimizing a web page is no different than optimizing any other code. If you want to know why a web site is slow, profile it or run it through a tool like webpagetest.org. Don't just set off to rewrite all your loops to run backwards, or eliminate function calls, or reduce the amount of this or that without any understanding of the result.


There's only one way to make a tiny page that loads super fast - and that's to make every single decision with an eye towards efficiency.

People who say "hey, what's an extra 30KB for one library? It's only like 10x larger" will tend to reach a similar conclusion at every decision point. And that's why the internet is full of massive, bloated, slow web pages. Single decisions don't cause bloat. Bloat comes from a slack attitude towards efficiency.

I'm not saying this project is an important breakthrough, but I definitely applaud the effort. I've always been surprised at the large size of jquery. I'd love to see a benchmark for runtime performance, including a comparison of parsing time and a benchmark of common operations.


> There's only one way to make a tiny page that loads super fast - and that's to make every single decision with an eye towards efficiency.

That sounds dangerously like premature optimization. For example, why spend the time creating optimized image sprites if it turns out they don't contribute significantly to load time? It's better to spend development time doing something else.

> And that's why the internet is full of massive, bloated, slow web pages. Single decisions don't cause bloat. Bloat comes from a slack attitude towards efficiency.

By using the word "bloat" it sounds like you're talking about bytes. Again, byte counts aren't always the main culprit in slow page loads. Synchronous scripts and/or 302 redirects can be much worse, for example. Don't start with the premise that the bytes are the problem and do a lot of work to reduce them, only to find out they weren't the problem. Profile the page load and see what the problem really is. Then fix that.


But the reality is, everyone that visits your site already has jQuery in their cache, because the common cdn sources are so ubiquitous, thus making this 4k library actually add to your page load footprint. Using the most common, most likely-to-be-a-cache-hit library is more efficient than using the smallest.


People need to stop saying this.


Please elaborate.


It's a reasonable point. Given the vast array of jQuery versioning across just the top thousand web sites, it's very plausible a user will not have each one of those cached, and then a lot of sites vary on what cdn (if not local) they're using.

I would be willing to bet that among just the top 1,000 sites, a user would be required to have 25 to 30 copies of jQuery cached to cover all the sites.


One thing... Mobile Safari didn't use to cache anything over 15KB uncompressed. http://www.phpied.com/iphone-caching/

(No idea about more current versions of iOS or browser-caching on other mobiles OSs.)


Look at this video at 6m30s

What does that video possibly prove? That if you compare completely different sites, you can't use reductionist measures of individual attributes to compare speed? Is there anyone on HN who actually doesn't already know that? This is asking which of a collection of mystery vehicles is the fastest by engine size, and then revealing that one is a dump truck, one a train, and the other a motorcycle, with results that should surprise no one.

The only people who would possibly be interested in this project are people who are attempting to optimize the experience their app provides, presumably in a holistic fashion. It is pretty much part and parcel that such a person is going to minimize elements, CSS, images, and scripts in such a case, and isn't simply going to replace jQuery with something like this and assume that it will make everything fast.

The reality of something like jQuery, like Ruby on Rails or MongoDB, is if you start with it you'll likely be stuck with it. You can't simply make a full site and run it through the profiler and swap out components without significant rewriting.


> Is there anyone on HN who actually doesn't already know that?

I see a lot of discussion here based on the premise that this smaller script will make a big difference in overall page load time. It's why the audience in that video was leaning towards the site with the most JavaScript being the slowest. The first benefit listed on the minified.js page is "Minified is smaller!" so exactly what benefits does "smaller" imply?

> The only people who would possibly be interested in this project are people who are attempting to optimize the experience their app provides, presumably in a holistic fashion.

I agree, and creating your own homebrew framework for a large project in a holistic fashion has its own pitfalls. Especially if you've never done it before.


"Smaller" on a slow connection does make a big difference. That's almost everyone, everywhere.


My phone is fast enough that the difference between 4KB of data and 80KB is not huge.


Unless you're being targeted as a potential customer or a member of the audience, your mobile performance is irrelevant. What is relevant is that the majority of the devices being used to access sites don't have the performance of mid-high end models.


Which commonly used phone models are that far behind the iPhone 3Gs?


The common phone is a sub-$100 Nokia or nothing at all?

Even in the US there's something like 10% of people with no cell phone at all, and 50% with "smartphones" which is a very broad spectrum ranging from crap to significant household purchases before considering "data plans" are optional and also range from crap to good.

Even non-phone connection quality can be terrible - I rarely find myself on public or hotel wifi with actually good connections. The difference between 200kb of JavaScript and 10kb can often be measured in seconds.


I contest the assertion that we are talking about almost everyone everywhere at this point.


With all due respect: Not everyone else's phones are.


I don't believe you actually read the context of my comment.


I actually believe that I did. I understood the point you were making, and I'm pointing out that while your devices don't have that constraint, others may, depending on your target audience.

:)


There are endless discussions on here about whether to go with an app or the web, with many favouring the app angle because of endemic performance issues on the web when you're running on an ultra-low power mobile device.

The web is often death by a thousand cuts. Every decision by itself may seem small, but the end result is an unenjoyable, inefficient, battery-sucking result that leads people to abandon the web.


This month I decided to go on a jQuery diet, as a kind of fun challenge. I'd increase my familiarity with the standard DOM and selector APIs, and reduce my dependency on jQuery.

After a few hours of working without jQuery, it became obvious that the standard DOM & selection APIs are verbose and a little awkward. In about 20 minutes I produced a little wrapper that was compatible with a small - but common - chunk of jQuery's surface, and it made life so much more bearable. It clocked in at about 1kb minified (no GZIP).

Obviously it wasn't battle-hardened and didn't deal with cross-browser issues, as it was just for hacking with - but it felt like proof that a wrapper of some form, if not jQuery, is pretty much always necessary.
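
For illustration, here's roughly the shape such a wrapper takes (a hypothetical sketch, not my actual code; modern browsers only, no cross-browser handling):

    // Tiny hypothetical jQuery-ish wrapper around querySelectorAll.
    function $(selector, context) {
      var nodes = (context || document).querySelectorAll(selector);
      var list = Array.prototype.slice.call(nodes);
      list.each = function (fn) { list.forEach(fn); return list; };
      list.on = function (type, handler) {
        return list.each(function (el) { el.addEventListener(type, handler, false); });
      };
      list.css = function (prop, value) {
        return list.each(function (el) { el.style[prop] = value; });
      };
      list.addClass = function (name) {
        return list.each(function (el) { el.classList.add(name); });
      };
      return list;
    }

    // Usage: chaining works because every helper returns the list.
    $('nav a').on('click', function (e) { e.preventDefault(); }).css('color', '#c00');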


I think that's an accurate observation. Ironically, if you only code with a library then it's easy to lose track of what that library actually does for you.

Writing a bit of "raw" code can really make you understand why jQuery may do things in a certain way or why they have chosen certain abstractions.


I see a licensing problem.

Releasing thing into the public domain is not as easy as writing

"Minified has been released into the Public Domain. You can use, copy, distribute and modify Minified without any copyright restrictions. You may even release it under your name and call it yours, if that's your thing."

As the CC0 page explains [1]:

> a waiver may not be effective in most jurisdictions

> Dedicating works to the public domain is difficult if not impossible for those wanting to contribute their works for public use before applicable copyright or database protection terms expire. Few if any jurisdictions have a process for doing so easily and reliably.

Please use a proper PD waiver tool such as CC0.

[1] http://creativecommons.org/about/cc0


I have CC0 on the download page, but yes, I should also add this to the source code. Thanks.


The full, concatenated and uglified source of one-page JS web apps can easily run to >600kB. Whether my DOM manipulation library consumes 4kB of that or 35kB isn't especially important to me. I'm more concerned about features, cross browser support and familiarity other developers may, or may not have, with the API. I'll be happy to check this lib out though, the API and docs look nicely put together.


This is even more so if your site is served over HTTPS given the latency for the initial SSL handshake[1]. Even better not to load anything and have it properly cached on the client side.

If you do it right you don't even have the client send/receive an HTTP 304. For our app we have a lot of modular dependencies but we serve all of them from a unique prefix per build (eg. 'assets-SHA(build)/js/foo.js'). The second page load and beyond has zero requests for the same static asset. Whenever we have a new build of the app the unique prefix gets auto updated to force clients to use the latest versions.

For HTTPS make sure to mark your assets as public or they won't be cached:

    Cache-Control:max-age=31557600, public
[1]: http://www.semicomplete.com/blog/geekery/ssl-latency.html
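
For the curious, a rough sketch of that setup (hypothetical Express-style code, not our actual stack; the BUILD_SHA variable and path names are made up for illustration):

    // Serve all static assets under a per-build prefix so URLs change on every
    // deploy and can be cached essentially forever.
    var express = require('express');
    var app = express();
    var BUILD = process.env.BUILD_SHA || 'dev'; // e.g. short git SHA baked in at build time
    var prefix = '/assets-' + BUILD;

    app.use(prefix, function (req, res, next) {
      // 'public' matters for HTTPS; without it some browsers won't cache at all.
      res.setHeader('Cache-Control', 'public, max-age=31557600');
      next();
    });
    app.use(prefix, express.static(__dirname + '/public'));

    // Templates then reference prefix + '/js/foo.js', so a new build
    // automatically busts the cache for every asset.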


Whether my DOM manipulation library consumes 4kB of that or 35kB isn't especially important to me.

Now that jQuery 2.x drops IE 8- support, a credible question is whether such libraries (which are more in the 95KB range, minified) are even necessary at all.

Quite outside of the time to download, there is a measurable cost to parse all of that boilerplate code as well. This has more of an impact on mobile, obviously.


> Now that jQuery 2.x drops IE 8- support, a credible question is whether such libraries (which are more in the 95KB range, minified) are even necessary at all.

Well my answer to the credible question is that jQuery has to normalize and fix bugs for just about all the browsers it supports, it's not just an oldIE problem. Plus, jQuery provides abstractions above the basic DOM operations which are pretty low level. So if you don't want to use jQuery you'd probably want some other library to provide equal or greater abstraction.

> Quite outside of the time to download, there is a measurable cost to parse all of that boilerplate code as well. This has more of an impact on mobile, obviously.

Measurable of course. Significant as a portion of the page load time for most pages? Well that's a different thing. And of course if you want it, jQuery 1.8+ provides the ability to create custom builds if you know you don't need parts of it.


jQuery would never have become popular if not for oldIE and its often dramatic variances from competitors: if you are willing to forsake that old browser, there is little added but a layer of inefficiency between you and web standards. We have become so accustomed to jQuery that it has become the default, with many unaware of the dramatic improvements in all major browsers since.

As to the cost, assuming that everyone is pulling it from the same CDN (and there aren't the often considerable times simply to check an etag, which is the case with the overwhelming majority of jQuery hosts), an iPhone 4S takes about 80-100ms to simply parse the jQuery file (and given the plateauing of the integer core on the An chips since, it is a reasonable assumption that every Apple variant since is similar). On each and every page render. A tenth of a second is a long time (and a lot of battery) for what is often of minimal value.


Just on the surface, the potential of the Minified library with its reduced size and browser support sounds great. In just checking a few features it seems as though the performance doesn't line up with jQuery atm. In a large application space would I rather trade off JS performance for saving initial DL time (cache)? I start to wonder where diminishing returns set in, whether I should really care about IE6 anymore, or whether the clean and simple API you say won't change will in fact never change.

http://jsperf.com/test-jquery-vs-minified

And to the community at large, feel free to add more tests.


That's cool, thank you (even if the results don't look so good, but it's not that surprising).


Why use the .val() method with the jQuery test, but .text() for the minified test?


Love it!

The more experienced I become as a programmer, the more respect I have for minimal programs that work very well. It's often very hard to "work your way up to simplicity," as my father likes to call it. (He's an EE.)


RyeJS is another alternative, though it focuses on speed, modularity and having an understandable source rather than size. It clocks in at 6.2kb.

http://ryejs.com/

(disclaimer: I'm one of the authors)


Focus on speed and modularity next to the clear code sounds interesting, but supporting IE 9+ only is kind of a bummer here, because it means "no" for XP users (I use XP64 on my desktop, for instance) and there is still a substantial number of them.

(And I do curse Microsoft for making IE 9 a Vista-and-later-only browser, which is quite likely explained by the use of some shiny new undocumented API that wasn't available in XP and earlier. Microsoft and their browser tightly coupled with the OS...)


It does support it if you include es5-shim, though IE8 is effectively dead. It's at < 5% share globally, around the same point we all dropped IE6 in the past, and effectively at 0% for any web service / startup out there. It's commonly surpassed by visits from Safari/iOS/Android.

Firefox, Chrome and Opera are available for Windows XP :)


I wonder if libraries from Wine or Reactos could be used to run IE9 on XP.

I seriously doubt the reasons Microsoft gave for not supporting it are valid. Even if the sandbox mode could not be used there is no reason why the browser and rendering engine could not run on XP if the proper system DLLs were present, and these could have easily been included with the install as Microsoft does in many other cases.


Undocumented?


Probably 4kB more than loading jQuery from cache :)


Yep. But with jQuery the browser parses 91kb of JS; that's the difference...


Sort of; many modern JS engines use lazy parsing, so they don't actually parse the contents of a function until it's needed.


Are people really referencing jquery from other servers in production environments?


Yes, it's extremely common to call jQuery from, for example, Google's googleapis.com domain.

Here's a quick sampling from a random group of top sites:

stackoverflow.com (googleapis.com), kickstarter.com (googleapis.com), cnn.com (their own cdn), businessinsider.com (jquery.com), foxnews.com (their own cdn), nbcnews.com (aspnetcdn.com), espn.com (espncdn.com), tumblr.com (secure.assets.tumblr.com), lastfm.com (googleapis.com), lyricsmode.com (googleapis.com), reference.com (sfdict.com, cdn for ask etc), qz.com (googleapis.com), theverge.com (googleapis.com), ehow.com (googleapis.com), wikia.com (googleapis.com), chacha.com (googleapis.com)


Just a passing tinfoil comment... Google has so many fingers in the analytics pie, if you're not being recorded by Adsense or Analytics, then there's always CDN.

No idea if it's all linked up, but it means that using alternative search engines doesn't really matter if you still visit Google-enhanced destinations.


Of course it's all linked up, including other freebies such as Google Fonts. Why else would they be offering it for free.


If that's the case then maybe it would be better for society if we all started self-hosting these sorts of resources. Longer load times, more bandwidth usage, but better privacy for the public.



If you're not paying for it then you are the product.


Or you're walking into an exchange of value with your eyes fully open.


And also dragging every visitor of your website along.


See https://en.wikipedia.org/wiki/Content_delivery_network.

Use of these (to serve static content) reduces your server traffic/load and gives faster response times to users in different geographical regions.


I definitely understand the benefit. I also understand the very serious drawbacks of not hosting content that you own. Compared to an initial load-time benefit that borders on completely negligible, I'd rather guarantee the safety of user information and not have my downtime coincide with the shared cdn's.


That was what I was thinking. What's the selling point since jQuery is already cached?


At one time jQuery wasn't cached... and when you want to use a new version of jQuery, there goes your cache anyway.

Doesn't seem like a really big argument against, especially considering the size of the library for the first non-cache hit.


On average > 50% of visitors have an empty cache.


Let's hope this one will soon just be a cache hit as well then ;)


This is interesting, but what percentage of the time spent downloading jQuery is actually transferring the file? I would imagine that jQuery is already small enough that the majority of the time is spent just establishing the server connection.


Most browsers probably have all versions of it cached by now too.


Test this, you might be surprised. I strongly doubt it, because there are so many versions in use, and as of 2012 only 25% of jQuery sites even used Google CDN, by far the most popular CDN — most hosted it elsewhere. http://royal.pingdom.com/2012/06/20/jquery-numbers/

I have vague memories of reading a post with a stronger conclusion — that CDN caching was basically a non-issue, it would so rarely work — but to my frustration I can't find that right now.


I wouldn't link to the Google CDN anyway, personally. If your whole site is hosted on your servers and then you add this because maybe some people won't have to reload JQuery, it just means google knows everybody who visits your site. Free analytics you don't get to benefit from.


I doubt Google is getting any useful analytics from their CDN. CDNs are optimized for static content, and in particular are hosted on domains that don't have cookies set (because cookies break caching). All Google would get is

* an IP,

* a referrer from the first page on your site that included the script (no subsequent pages, because now the client has jQuery cached).

All in all, a pretty poor source of information compared to AdWords, Google Analytics, G+, hundreds of millions of Android users, and running the world's most popular search engine. At most they could crunch some browser stats or jQuery usage stats, but they already get that and more from their own services + crawling the web.

And even if this tiny amount of info were somehow a boon to Google, so what? It doesn't hurt you as a site owner.



Yes! Thank you.

tl;dr for lazy HN readers: "using Google's CDN to load jQuery isn't likely to benefit the majority of your first-time visitors." As of 2011, jQuery 1.4.2 was by far the most common version and even that was only loaded via googleapis.com on 2.7% of websites.


The most commonly used version is slightly better these days http://www.stevesouders.com/blog/2013/03/18/http-archive-jqu...

Note the difference in version Steve found vs my comment (due to query strings etc., which will of course destroy caching)


Is this the full comparison? http://minifiedjs.com/#featCmp

It seems like that would cover most things I would use, however surely there is more to jquery (that I may or may not be using..)

I like that they are including IE6-8, that was a bummer when zepto came out aimed at mobile/modern only


What exactly would you miss (other than some utilities like each(), which I would like to keep out of the library)?

Actually one of the ways I used to decide what features to squeeze in was making a list of all jQuery functions and re-implementing the functionality with Minified. To make sure that I don't miss anything important.

There are some things that are more complicated to do, because you need to work with anonymous functions, but honestly most of them were functions that I had never heard of before and would probably have re-implemented even when using jQuery :)


By the way ... one big use case I'm seeing for jQuery replacements lately is being a drop-in replacement for Backbone apps, since they need just a tiny fraction of jQuery's capabilities but carry around all its weight for no particularly good reason.

I'm not sure if Minified.js qualifies for that, but if it does you should definitely make a bigger point of that ;)


Sizzle is actually a big one; CSS1 doesn't really cut it for anything but the most trivial use cases, although if you're only targeting modern browsers then I guess you can rely on their support.

Also, if you don't support each, I assume you don't support method chaining on enumerables?

As for actual missing features, the jQuery data stuff is really useful, but if the intention is to use minified with something like backbone, I guess it's kinda moot.

EDIT: I forgot to say nice library and I like the source code too.


My replacement for more advanced CSS features are the methods filter(), sub() and collect(). For example, instead of $('li:odd') you would write $('li').filter(function(v, index) { return index%2;}). But, admittedly, if you are well-versed in those advanced CSS features, they will always be easier to use.

You can chain list methods, as in $('li').filter(function(v, index) { return index%2;}).set({$backgroundColor: '#000'}).animate({$backgroundColor: '#fff'}) to fade every second list element from black to white. The number of collection functions is limited to a minimum though. The Util module will add a lot more collection features (but at a cost of another 4kb).

Yes, that's true, data() is something I intentionally omitted because I don't think that it's worth its bytes today. For simple lower-level effects toggle() offers a very different approach to handling state, and for larger apps you should use an MVC framework, as you suggested.


That's what I was asking lol :) When I look at that list it seems like it covers most of what I do, was just curious.

Like "jquery gives you all of this, do you really use all of it?"

I suppose it's a bad sign when we are using jquery blindly and not really knowing what all that includes..


A novice question - how significant is the relative size here? Sure - 4 is smaller than 32, but is that really the biggest concern with a JS library (and modern internet speeds)?


In my opinion the absolute size is actually less important than how many TCP roundtrips it takes. It is no accident the author of this library is trying to fit in under 4KB specifically.
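
As a back-of-the-envelope sketch (assuming an old-school initial congestion window of 3 segments and roughly 1460 bytes of payload per segment; newer servers often start at 10):

    var segmentPayload = 1460;  // typical TCP payload per segment (MSS)
    var initialWindow  = 3;     // segments a server may send before the first ACK
    var firstFlight    = segmentPayload * initialWindow;

    console.log(firstFlight);   // 4380 bytes

    // A ~4KB gzipped script (minus whatever the response headers consume) can
    // plausibly fit in that first flight, i.e. one round trip; 30KB+ cannot.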


Frankly it is an arbitrary number, mostly. I started with the goal of having no more than 7 functions, because that's the number of functions the first version had. It didn't take long for me to figure out that this wasn't enough for every-day use. My next goal was to be an order of magnitude smaller than jQuery, and thus 3.3kB. That was a lot better, but still not enough to be a viable alternative. So I settled on 4kB.

While 4kB may fit well in certain packet sizes, this does not help with HTTP headers that can easily have 100 bytes or more.


It's meaningful in a few ways I can think of.

1) For poorer countries with spotty internet, or mobile only that is slow.

2) For mobile, where data use matters.

3) Ideally with a library this small, it's mostly doing exactly what you need it to and nothing else. So it should be faster for what you're using it for. It's not trying to be all things to all people, and the trade off for that should be giving up long-tail features in exchange for more speed.


> Sure - 4 is smaller than 32, but is that really the biggest concern with a JS library (and modern internet speeds)?

Yes, it is.


Size itself is actually not that important. The time to load and process it still is to some degree, especially on mobile, and that increases almost linearly with size.


Nice work! But it always starts small, then users need a feature here, a feature there, and the next thing you know you have a fat-ass library just like the pre-existing ones.


Actually my goal is that the library will always stay under 4kB. Setting yourself a limit is the only way to prevent feature creep, and 4kB seems quite ok. Over time, when support for older browsers can be dropped, there will be space for additional features. I am already working on a second <4kB module though, with support for collections, strings, templates, dates and number/date formatting and parsing. So yes, I know that 4kB is not a realistic goal for a complete app framework. But I am confident that I can do it in 12-16 kB.


The problem with optimization is that you always have to compromise: either speed, features and performance for size, or features and size for speed, etc. I think a probable solution would be to create a dynamically generated framework which allows the user to specify the needed features (jQuery-UI style).


Actually there's a builder that allows you to pick the functions you need and compiles custom versions (http://minifiedjs.com/builder/). I don't think that this is a practical approach for most people though. After a while it becomes quite annoying to create a new version every time you need a function. It would be great, but unfortunately also quite difficult, to automate this.


Thanks for putting this out there. I understand this is supposed to be a swiss-army-knife style library, but I thought we were moving toward focused (Unix philosophy) libraries. For example, why would I use your HTTP request functionality when I could use Superagent, microjax, etc.?


Personally I don't think it's worthwhile to use a large number of micro-libraries, because of the huge overhead that comes with it. Every library has slightly different naming conventions, you have to be sure that they play well with one another (e.g. compatible collections and Promises implementations), you don't have a single point to look for documentation, and so on.

Also, if you want to optimize for file-size, you won't get very good results with a collection of micro-libraries. A compiler like Closure is capable of inlining functions and removing unused code only as long as you have all of it in a single file as private functions. But when every library exports all its features, Closure is not able to optimize them properly anymore.
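
To illustrate the point (a hypothetical example, not Minified's actual source):

    // lib.js - everything lives in one file as private functions.
    function helperUsed(x)   { return x * 2; }
    function helperUnused(x) { return x * 3; }  // never called

    function publicApi(x) { return helperUsed(x) + 1; }

    // Only the entry point is exported. Closure's advanced mode can inline
    // helperUsed() into publicApi() and drop helperUnused() entirely.
    window['publicApi'] = publicApi;

    // If every helper were exported instead (as separate micro-libraries must
    // do), Closure has to assume each one may be called from outside and can
    // no longer prune or inline any of them.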


Sure, I can switch my code to use this, but how compatible is this with common 3rd party libraries that are dependent on jquery? For example, will this run the bootstrap js code ok? If it doesn't run that code cleanly, then we are still stuck importing jquery.


Not at all. While some things use a similar syntax, Minified is not compatible with jQuery (and does not try to be).


Anyone working on a project that has the remote chance of ever including a 3rd party dependency on jquery should never use this project. Otherwise you risk ending up needing to import jquery AND this project and now you're even farther down the rabbit hole.


That's why it's called an alternative, not a drop-in replacement.

But yes, your point is valid. Also, jQuery is so ubiquitous that I see no reason to steer clear of it. If you use a CDN version of it, it's most probably already in the client's cache.


The point about the CDN cache is not true. It is commonly claimed, but it is often not the case that jQuery is cached even if you use a CDN. It is "sometimes" cached, and the best performance for most websites is probably to concatenate and minify jQuery in with the site's JS files.


I can see why your statement can be true, but do you have any actual data on this? Or are we both just guessing? ;)

(No, I don't have data on mine, I don't even work in this field.)


I have some data:

http://w3techs.com/blog/entry/jquery_now_runs_on_every_secon...

Of the top 10k sites 58.8% use jQuery. That's good!

But only 26.6% use a cdn. Leaving only 15.6%. That's bad...

But 94.2% use Google's. Which is still 14.7%! That's good!

http://www.stevesouders.com/blog/2013/03/18/http-archive-jqu...

But only 1% use the newest version of jQuery. Leaving only 0.15%. That's bad...

http://www.quora.com/User-Behavior/How-many-websites-does-an...

But users visit on average 89 sites per month. Which gives us 12.5%. That's good!

http://stevesouders.com/cache.php

But default browser cache sizes are small. That's bad...

http://www.webperformancetoday.com/2013/06/05/web-page-growt...

But the average website is about 1MB so with a 50MB cache about 50 sites can be cached. Which leaves about 7.2%. That's... OK?

OK. So I don't actually know if a cdn cache hit is all that likely, but the situation is a bit worse, and a bit better than most people think.

Of course, using a cdn is still better than hosting yourself. If you bundle your jQuery then every time your site updates it needs to be redownloaded. If you serve it separately on your domain, you won't have the distribution benefit of a cdn.

Should you use Google's cdn instead of a different commercial cdn provider that's faster? In that case I'm not sure.


Discussion of the issue here (disclaimer, I started the pull request) - https://github.com/h5bp/html5-boilerplate/pull/1327

The basic gist is that there is great fragmentation in the cache eco-system, and so there is no guarantee that the user actually has the version of jQuery cached that you are requesting. Alex Sexton brought up in a talk at jQueryTO that there is also the time for the DNS request itself to consider in any discussion of speed; if you concatenate/minify your code it will eliminate a DNS request.

Resources discussed in that page: http://statichtml.com/2011/google-ajax-libraries-caching.htm... http://www.stevesouders.com/blog/2011/08/17/http-archive-nin... http://www.stevesouders.com/blog/2013/03/18/http-archive-jqu...

The end conclusion was that for H5BP it didn't make sense to remove the CDN reference, but for your own site it might - you should test it and see. It also depends on your audience (will the CDN move the files closer geographically?).

In the end, ~40k (gzipped) is not going to make or break your website's performance.


I disagree with your assessment (and so does the majority of people on that pull request, it seems).

It's always better to use a CDN because:

1. It has a chance to be already cached (especially if you use Google's CDN).

2. All browsers nowadays do 6 parallel requests per host. So using DNS prefetching with `rel=dns-prefetch` will be faster.

3. If you bundle jQuery with your site's JS files, every time you change a single JS file of your own, your users will be forced to re-download your bundled jQuery. Seems pretty inefficient to me.


If you're using HTTPS then "it's always better to use a CDN" is unlikely to be true due to the costs of negotiating the secure connection.

1. Is open to debate and we have no real numbers on this - hopefully the Resource Timing API will allow us to shed some light on the issue

2. Not sure how the number of connections is relevant as the connection to the CDN will be a new one.

3. Agree with this, people need to merge files that naturally fit together and have similar patterns of change


It wasn't MY assessment, I just posted the assessments of others to start the conversation. I don't think you should never use a CDN, there are quite valid reasons to do so. But people shouldn't go around saying that it's the only way to go either. It should depend on your site, and the testing you do on that site.


I thought that 'alternative to jQuery or MooTools' makes it very clear that it's just a library that offers similar functionality, especially since I mentioned two completely incompatible libraries. Unfortunately I don't know any better way to describe what the library does and its scope in only a few words. "JavaScript framework" is also misleading, and "client-side JavaScript library" too generic. But I would be thankful for suggestions.

And yes, if you want to use third-party libraries that use jQuery, using Minified does not make any sense. Currently it's mostly interesting if you want complete control over your JS environment and want to optimize for size.


Looks good! Added it to the official list of frameworks on JSFiddle.


In case you need mobile support and not IE, also check out Zepto. I updated the test with Zepto - http://jsperf.com/test-jquery-vs-minified/4

URL: http://zeptojs.com/


Minified.js has potential. If given the chance, it will be successful and I for one love rooting for the underdogs. The Custom Builder is a lovely addition. The library is well documented and easy to understand.

Trying it out right now! Nice work Tim.


What about event delegation? In jQuery there is an overload of .on() that takes a selector string to filter the descendants of the selected elements that trigger the event.

Is there something similar in Minified?


No, not directly. From the jQuery API docs:

$("#dataTable tbody").on("click", "tr", function(event){ alert($(this).text()); });

In Minified, the easiest way to get exactly the same event handler is this: $("#dataTable tbody").each(function(tbody) { $('tr', tbody).on("click", function(){ alert($(this).text()); }, tbody);


That's not exactly the same at all.


Where's the difference? Admittedly I don't know that function in jQuery very well, but I thought it would register a 'click' handler for every tr, just with the difference that it passes the parent tbody instead of the tr in 'this' to the handler. What else does it do?


Delegated events let you handle events for child elements that aren't even in the DOM at the time you attach the handler. The jQuery docs explain it:

api.jquery.com/on/#direct-and-delegated-events

> Delegated events have the advantage that they can process events from descendant elements that are added to the document at a later time. By picking an element that is guaranteed to be present at the time the delegated event handler is attached, you can use delegated events to avoid the need to frequently attach and remove event handlers. This element could be the container element of a view in a Model-View-Controller design, for example, or document if the event handler wants to monitor all bubbling events in the document. The document element is available in the head of the document before loading any other HTML, so it is safe to attach events there without waiting for the document to be ready. ... In addition to their ability to handle events on descendant elements not yet created, another advantage of delegated events is their potential for much lower overhead when many elements must be monitored.


Event delegation is entirely different to how you would normally do it. It relies on the events from the inner elements bubbling up through the parent elements. So you put an event listener on some common ancestor (the tbody in this case) to the elements you care about (the tr elements). A click event on some tr will bubble up to the tbody*, the event handler there can check whether the originating element matches the selector "tr", and if it does, it does something like: `yourCallback.call(event.srcElement, event)`.

The obvious benefit of this is that if there are 10,000 tr elements, it is quicker to attach your listeners (there's only a single listener on the tbody) and less memory is consumed (again, only a single listener). A more subtle set of benefits comes from manipulation of the DOM. Your listener is able to respond to events on elements that have been inserted after the listener was first registered. Also, when removing tr elements from the DOM you don't have to remember to unbind all your event handlers (forgetting to do so is a common source of memory leaks), as the removed elements can simply be GC'd.
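
A minimal sketch of that pattern in plain JS (hypothetical code; assumes Element.matches() or a vendor-prefixed equivalent on older browsers):

    // One listener on the ancestor, filtering bubbled events by a selector.
    function delegate(ancestor, type, selector, handler) {
      ancestor.addEventListener(type, function (event) {
        var node = event.target;
        // Walk up from the event target to the ancestor, looking for a match.
        while (node && node !== ancestor) {
          if (node.matches(selector)) {
            handler.call(node, event);  // 'this' is the matched element
            return;
          }
          node = node.parentNode;
        }
      }, false);
    }

    // Handles clicks on every current and future <tr> with a single listener.
    delegate(document.querySelector('#dataTable tbody'), 'click', 'tr', function (event) {
      alert(this.textContent);
    });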


A HUGE difference, performance-wise. Delegation is O(1), while using any form of `each()` is going to be O(n).

To elaborate -- jQuery does not set an event-handler for each TR -- it sets one on the parent element (the table), and basically when the table is clicked jQuery gets the event and asks: "are you a TR?", and if so, then the handler is called with the clicked element as the context. So even if your table has 1million rows, only 1 event handler is attached and the DOM is only accessed once.

Of course, accessing the DOM 1 million times within a loop is going to take forever.


That's certainly true, but for what kind of use case do you need 1 million rows? I think that 1000 is already a lot, and if you exceed this, maybe it's better to design the app in a way that you need fewer rows...


It's not just for large numbers of elements. If you have dynamic elements that get added/removed you can define a single event handler on the parent with a filter condition on its children that handles them all.

The following CoffeeScript event handler would be fired for every table cell with the foobar class that is a child of #myTable:

    $('#myTable').on 'click', 'td.foobar', () ->
        # Do something
What's great here is I can add/remove rows to the table without binding/unbinding event handlers. We use this a lot in our app (http://www.jackdb.com/). The main app content is a single page with all content dynamically loaded and added/removed on the fly. If we had to add event handlers for each data cell (or even row) there would be 1000s of them.


That's not how I would do it in Minified. I'd rather set up the event handler when creating the table row, one at a time, using the element factory (EE() or clone()) onCreate handler. Because otherwise you'd have to store data in the DOM model, which IMHO is not a very elegant design. But it's certainly faster or at least easier on memory consumption if you have many thousand rows.


I love what you are doing, but if you are trying to provide a comparable framework, a thorough understanding of the functionality you are replicating is vital. The delegation of .on() is quite handy (for reasons others have elaborated already).


A good choice would be to customize a download of jQuery to fit your project needs. Similar to what you do with Initializr based on checkboxes. And just unselect any stuff you won't use.


Did you make performance tests and comparisons with other libraries?


No, actually I wasn't really interested in that yet. I wouldn't expect it to be that bad performance-wise, but if performance were my goal, there are a number of things I would have done differently.

I also guess that for almost all sites the cost of parsing and executing the libraries is more significant than the actual runtime. I mean, there are sites that parse 200kb of ungzipped source code only to execute the 2kb of it that is required to have some pull-down menus or sliders a couple of times.


I would make this a priority if you want to try to differentiate in something besides size as compared to jQuery. Performance is quite important, and I frequently find myself spending the time testing pure JS solutions compared to a jQuery solution to make sure my code is optimized. jQuery does a really good job in this regard most of the time, but there is always room for improvements.


In the real world performance matters, my friend. Personally I care more about that than 4kb vs 32 or 69 :P


I see:

   tml xmlns:fn="http://www.w3.org/2005/xpath-functions" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:page="http://tjansen.de/minifiedPage" xmlns:i="http://tjansen.de/internal" xmlns="http://www.w3.org/1999/xhtml">
at the top of the page. Apparently the minification went too far. :)


No, that's the %^&%#! XSLT processor that keeps inserting unused namespaces.

Edit: oh, no, this time it was me unintentionally editing the compiled index.html.


Ha, I'm glad I'm not the only one who has this problem with XSLT processors.


Nice one! I am digging it so far! Gonna try using it in one of my smaller projects to see how it goes.

btw, if anybody would like to use this in their Rails project, just drop https://github.com/tmlee/minifiedjs-rails into your app for the asset pipeline. Just did a quick one for that.


It's nice to see a library use Closure Compiler's advanced compilation mode.


What's funny is that it hardly makes any difference. If you feed the source code into Closure in advanced mode, the size is 9258 bytes pre-gzip and 4109 post-gzip. In simple mode, the numbers are 9267 pre-gzip and ... also 4109 post-gzip! It was hardly worth the (relatively huge) effort to get the library working correctly in advanced mode.

BTW to get down to the final size of 4089, I compress the source code with Closure Compiler in Advanced Mode, and then compress again using UglifyJS to get rid of the last 20 bytes. Closure is not perfect...


That's mostly because it's a library, and so most of the optimizations you're getting are purely local, in-method, with no dead-code pruning.

Where Closure shines is building an app whole-world: compiling both the client code and the library code together, it can then prune away all unused methods in the library.


So where does this library compromise? Why isn't jQuery this small?


There are a lot of specialized features in jQuery that Minified does not have. For example, the only way to register an event with Minified is using on(). jQuery has on() as well, but it also has a number of specialized functions like 'click()' and the legacy 'bind()'.

Another example is jQuery's complex selector/stacked-set features, like 'andSelf()' or 'next()'. Minified only has simple lists and a couple of callback-using helper methods like 'collect()' and 'filter()'. If you are doing something complex and know jQuery's API well enough, jQuery is more powerful.


jQuery has a lot of bloat from edge cases and bugs accumulated over the years, and still ships with its own selector engine.


30KB distributed by a free CDN with a high chance that it's already cached in the browser VS 4KB that probably have to be served by myself.

I am sorry, I probably would still go with the 30KB solution.


Might want to research whether or not there is really any benefit from using the CDN other than offloading the bandwidth and getting a (relatively small) chance at a cached solution. From a pure performance standpoint, concatenating/minifying on your own server is more performant in most cases.


jQuery might be directly cached but does that really matter? You could make the browser cache Minified.js after the first loading. And I'm pretty sure it's faster to parse / run a 4KB script than a 30KB script.


> And I'm pretty sure it's faster to parse / run a 4KB script than a 30KB script.

That's always a dangerous assumption to make. I can guarantee you that size has nothing to do with parsing and running times. As others have already pointed out, this library is already less performant than jQuery for their needs.


The first caveat I saw is all the uses of $left, $top etc. Makes me wonder what other variables the library puts into the global namespace.


In JavaScript’s syntax, if $left and $top are not defined as variables, then

    {$left: '10px', $top: '10px'}
is equivalent to

    {'$left': '10px', '$top': '10px'}
The library doesn’t have to define global variables for that code to work. So it probably doesn’t define them as global variables.


Even if $left and $top are defined as variables, those two snippets are equivalent.
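
A quick console check of that (hypothetical snippet):

    var $left = 'this variable is never read';
    var styles = {$left: '10px', $top: '10px'};

    console.log(Object.keys(styles));  // ["$left", "$top"] - keys in an object
                                       // literal are literal property names,
                                       // not variable references.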


This looks good! Probably its sweet spot is HTML5-based apps, e.g. PhoneGap, BlackBerry WebWorks and even Firefox OS.


If you use a common cdn to load jQuery, chances are very high that it's already in the user's browser cache.


It's a bit misleading when you say Zepto has no AMD support, when it actually does.


In the standard version on zeptojs.com? Just did a short search on the source code and can't find any 'define' invocations.


How does this compare to Zepto.js?


It has support for old browsers.


Aaaq


not enough jquery.


Poor name, it sounds like an uglify-like lib, not a DOM manipulation lib. You should change it.

And it's not an alternative to jQuery or MooTools since your lib doesn't support a lot of the stuff these libs provide.


Not providing the same features doesn't mean it isn't an alternative. It has different goals.


The headline says fully featured


And I believe it is. It does not support every single feature that jQuery and MooTools have, but you probably wouldn't say that jQuery is not fully featured because it lacks support for Cookies and color animation.

It's true that Minified doesn't support many features in the same breadth as the other libraries. Instead the focus is on making the essentials easy (often easier) and offering you powerful functions to do the rest. Basically that's how I keep both API size and file size down.



