> the renderer hierarchy he created is yet another good example
> of why I believe coffeescript needs interfaces...
What exactly are you thinking of here? Because JS is about as flexible as a language can be in terms of duck typing -- you can call any function with any number of arguments -- I don't see what an explicit "interface" construct would gain you.
Compiler safety for the design. Implement a base class, and force implementation in subclasses. I know that CS is philosophically about keeping JS's openness, but I feel it would be a time-saving convenience if the compiler told me I was missing a method.
If you want a language that compiles to JS and has more compile-time checking, I'd encourage you to check out Dart. It's not everyone's cup of tea, but if you don't mind semicolons and curly braces, it gives you a pretty decent amount of compile time checking while still generating nice JS.
Ah yes -- this would be very against the open/dynamic spirit of CS and JS -- and for that reason, we'd never add it. Many valid uses of subtypes don't need to implement every method defined by a parent type (or interface) in order to be used correctly.
For example, a rich "collection" interface that has some helper functions for key:value hash-like collections, but that a more array- or set-like subtype doesn't have to implement.
If you forget to implement a method that you later try to use, you'll find out when you try to use it. Such is the nature of the beast.
Does the CoffeeScript compiler inline the addition calls? Does inlining actually do anything on V8 or the other JavaScript compilers?
I guess a better question for me to ask here is:
Is there any good in-depth guide to the current JavaScript compilers so that library design decisions can be made, or are we basically restricted to reading the source code, or making a guess and checking with a profiler?
CoffeeScript doesn't inline the calls. V8 is pretty good, but functions still have a discernible overhead. ClojureScript's compiler macros are a pretty neat way around this problem: inlining can happen where you like, yet you don't lose the generality of functions.
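To make the inlining point concrete, here's a rough JavaScript sketch (my own illustration, not actual CoffeeScript or ClojureScript output) of what a compile-time inline buys you in a hot loop:

```javascript
// add() stands in for any small helper a physics loop might call per particle.
function add(a, b) { return a + b; }

let sumCalls = 0, sumInlined = 0;
for (let i = 0; i < 1000; i++) {
  sumCalls += add(i, 1); // a function call on every iteration
  sumInlined += i + 1;   // what a compiler macro could expand that call to
}
console.log(sumCalls === sumInlined); // true -- same result, no call overhead
```

The results are identical; the difference is only that the inlined form pays no per-call cost, which is exactly what a macro system lets you get without giving up the function as an abstraction.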
It's clear and unambiguous, and does the right thing. It seems pretty reasonable to me to use the convention of never writing the word "this" in CoffeeScript code, if you like. Not everyone in the world needs to have precisely the same style guide.
Nice; it's lovely how position Verlet lets you simulate geometric constraints so naturally. If you're interested in learning more, I recommend Thomas Jakobsen's "Advanced Character Physics" (I could only find a PDF link):
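For anyone who hasn't seen it, the core position-Verlet step that paper describes is tiny. Here's a minimal JavaScript sketch (names and structure are my own, not from the paper or any particular library): velocity is implicit in the difference between the current and previous positions, which is why constraints can simply move points around.

```javascript
// Position Verlet: extrapolate the next position from the current and
// previous ones. No explicit velocity is stored anywhere.
function verletStep(p, acceleration, dt) {
  const nx = 2 * p.x - p.px + acceleration.x * dt * dt;
  const ny = 2 * p.y - p.py + acceleration.y * dt * dt;
  p.px = p.x; p.py = p.y; // current position becomes the previous one
  p.x = nx; p.y = ny;
}

// A particle at rest under gravity starts to fall:
const particle = { x: 0, y: 0, px: 0, py: 0 };
verletStep(particle, { x: 0, y: 9.8 }, 0.1);
console.log(particle.y.toFixed(3)); // "0.098"
```

Because velocity is inferred rather than stored, a constraint solver can clamp or project positions directly (e.g. snap a point back inside a boundary) and the integrator stays stable.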
The first demo reminded me of a collision-detection D3 demo I made last year. This one uses a quadtree to accelerate collision-detection, as well as another quadtree for the Barnes–Hut approximation of charge forces:
As someone who's had a lot of experience implementing these things for JS, I wonder what you think of @soulwire's strategy and general structure for applying, say, Verlet trajectories:
The rest of the function is calculating default forces and constraints for graph layout; these can be disabled, and thanks to the resilience of Verlet integration, you can easily implement custom forces or constraints in your "tick" event listener (as Shan Carter did in the budget piece a few weeks ago).
Given that it's a generalized physics engine, I like that @soulwire's code is organized into clean, modular units. That makes it easier to test and modify the internals. D3's force layout is specifically tailored to graph layout, so I don't consider it necessary to make the implementation so modular; the requirements of the force layout are that it is fast by default, that it can be customized with incremental additional effort, and that it is convenient for the common cases.
Also, with larger graphs numerical integration is one of the few places where JavaScript performance (not just rendering and DOM manipulation) actually matters; while unfortunate, it's often necessary to make a trade-off between generality and performance. Still, I'm considering more modular reusable forces and constraints for a future iteration of the force layout (perhaps similar to the older Protovis force layout). But since custom forces tend to be very… well… custom, I think the best option is simply to modify the nodes' positions as needed and let Verlet do the rest.
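To illustrate that last point, here's a hypothetical custom force written in the style of a d3.layout.force "tick" listener (shown here as plain JavaScript with the d3 plumbing omitted; with d3 you'd register it via force.on("tick", ...)). Because the layout uses Verlet integration, just moving x/y imparts velocity implicitly:

```javascript
// Hypothetical custom force: pull every node toward a focus point each tick.
// "alpha" mimics the cooling parameter d3's force layout exposes per tick.
const focus = { x: 100, y: 100 };
const alpha = 0.1;

function customForceTick(nodes) {
  for (const node of nodes) {
    node.x += (focus.x - node.x) * alpha; // nudge position only;
    node.y += (focus.y - node.y) * alpha; // Verlet infers the velocity
  }
}

const nodes = [{ x: 0, y: 0 }];
customForceTick(nodes);
console.log(nodes[0]); // moved 10% of the way toward (100, 100)
```

Note that nothing here touches a velocity field; that's the "let Verlet do the rest" part.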
The attraction and collision demos are both doing collision detection, which means N! calculations (for each particle, check the location of all the other particles that have not been checked relative to this one), whereas the chain, cloth, and the third demo (whose name I can't recall) are on the order of N calculations (constant interaction with only a fixed number of other particles).
He's right actually if the code is written sensibly. Well it should really be (N-1)! but close enough. Think about it in terms of 3 balls. I check ball1 against ball2 and ball3 then move to ball2. It gains me nothing to re-check ball2 against ball1 so I just check it against ball3. When I reach ball3, I have already checked everything and I'm done.
The loop should look like this:
  for (i = 0; i < n; ++i)
    for (j = i + 1; j < n; ++j)
      checkCollision(i, j); // each unordered pair checked once
I think you need to revisit your understanding of complexity. It is O(N^2). If you want the exact count, it is N(N-1)/2. N! is an exceedingly large number even for very small N.
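You can verify the count directly: with that loop structure, each unordered pair is visited exactly once, giving n(n-1)/2 checks.

```javascript
// Count the pair checks the nested loop actually performs for n particles.
function countPairChecks(n) {
  let checks = 0;
  for (let i = 0; i < n; ++i)
    for (let j = i + 1; j < n; ++j)
      ++checks;
  return checks;
}

console.log(countPairChecks(3));  // 3
console.log(countPairChecks(10)); // 45, i.e. 10 * 9 / 2
```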
Yep I'm wrong. Mixed up factorial just like mistercow said. Thanks for the corrections. And here I was worrying that someone would complain that my loop wouldn't address the issues of collision response necessitating more checks, thus making a recursive solution necessary.
so the interesting question is, would this have happened in javascript? is coffeescript so much better than javascript that projects are in reach now that weren't reasonable before? can a team who wouldn't be able to build this in javascript build it in coffeescript?
nobody says coffeescript isn't nicer than javascript, but some people say it's not so much nicer that it's worth the "abstraction tax" - compare to C vs. assembly, where nobody questions that the abstraction is worth it.
I don't know if that question is "interesting" so much as "impossible to answer" and "extremely contentious." :)
But my personal answer is that I have, say, 50% more fun writing CS than JS, so there are likely to be personal projects I write in CS that I just wouldn't have bothered with or would have lost steam on before. When you're doing something for the love of it, every moment that makes you think "dammit [Javascript], why are you making me do this?" is a potential moment to walk away and do something more fun. I (begrudge;every;semicolon) in an unnecessary for loop.
So if other people are like me, I expect CS to bring new things to the world, not because it's 10% faster but because the 10% it's taking out was the boring part. If no one's like me, then I hereby award myself one Special Snowflake from the many falling outside my window.
To put it another way: "The single most important lesson that people say they have learned from the Ruby programming language is a lesson that _Why’s work embodies in its code: Programming (or whatever you do) should be fun. There must be joy in your craft, and there is precious value in tinkering and playing around."[1]
Whether CoffeeScript or wire-wrapping individual transistors lights up your eyes is up to you of course -- but we all benefit by giving creators tools they like. Sermon for today over.
Until "web workers" are generally supported by common browsers, there is no way to run more than one bit of JavaScript at a time in any given tab. As all the code is running in a single thread, it will only use one CPU core (or, if the scheduler bounces the browser process between cores, it will use at most one core's worth of CPU time per second).
Depending on how much cross-talk there would be between the threads, web workers might not be adequate for some algorithms anyway, as the message passing (the only way web workers can communicate; there is no "shared memory" access or other such shortcuts) could add noticeable latency. Caveat: I've not used them for anything myself, so I don't know if any such latency is large enough to be an issue.
You can create multi-threaded code using web workers, but you exclude your app from browsers that don't support them. The major laggard on the desktop is IE, which is due to gain support for them in version 10. As far as I know, the only current mobile browser that supports them is Safari under iOS 5; apparently they were present in Android's browser but removed/disabled in recent releases.
There is also WebCL, an effort underway to put OpenCL in the browser, and River Trail, an experiment to execute a subset of JavaScript using OpenCL as a backend. I don't know how difficult it is to write a CPU compute-device driver, so I don't know if WebCL will become a viable general strategy for multicore execution in a browser, though.
JS doesn't really support threads so that's probably a big part of it.
Traditionally JS applications have never been very "heavy" so it's never been a problem as you could just use the other cores to run JS for other browser tabs etc.
Of course now both cores and JS usage are increasing so this will have to change.
JavaScript supports web workers, which are effectively threads except that they don't share the application's memory; they communicate over postMessage().
This makes it possible to spread the calculations across cores, but since postMessage() serializes each transferred object, the effectiveness might not be that good.
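A quick sketch of why that copying matters, using a JSON round-trip as a stand-in for the structured clone that postMessage performs (the particle data here is made up for illustration):

```javascript
// A payload you might hand to a worker: every message copies the whole thing.
const particles = Array.from({ length: 1000 }, (_, i) => ({ x: i, y: i * 2 }));

// Simulate the clone step: the worker receives a copy, not a shared reference.
const received = JSON.parse(JSON.stringify(particles));
received[0].x = 999;
console.log(particles[0].x); // 0 -- the sender's copy is untouched
```

So for a physics engine ticking 60 times a second, the serialize/deserialize cost per message can eat the gains from the extra core unless the work chunks are large relative to the data shipped.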
Clearing or filling large areas of <canvas> is cripplingly slow in every browser I've tried so far, so I'd assume it's that.
Hopefully that'll be worked on at some point in the not too distant future as it's one of the biggest problems I encounter when dealing with canvas at the moment.
Thanks for the reply. It just seems counter-intuitive that canvas area should matter so much when the drawing logic is essentially the same. We are creating the same number of objects, just on a larger canvas and with different (X,Y) coordinates. Thus, the creation speed of the canvas itself (in memory, before flipping) should be roughly equivalent, right? The slow-down is simply from having to re-render the larger canvas? I'll dig more, of course, but quick insight from those in the know around here is always appreciated.
You just have to think lower level. There's the CPU work to figure out what to draw, and then the CPU (or GPU, if using WebGL) work to actually draw it. More screen space increases rendering time because now you are potentially drawing 4x as many pixels in the same timeframe.
Similarly, an image of 30kb takes more time to draw than an image of 10kb, even if they take up the same dimensions and space on the screen.
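The area point is easy to quantify: doubling both canvas dimensions quadruples the pixels the browser must fill on every frame, even though the drawing logic is unchanged. (The sizes below are hypothetical, just for illustration.)

```javascript
// Fill work scales with pixel count, not with the number of draw calls.
function pixelCount(width, height) { return width * height; }

const small = pixelCount(400, 300); // 120,000 pixels
const large = pixelCount(800, 600); // 480,000 pixels
console.log(large / small); // 4 -- twice the width and height, 4x the fill work
```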
Vectors: https://github.com/soulwire/Coffee-Physics/blob/master/sourc...
Collision Detection: https://github.com/soulwire/Coffee-Physics/blob/master/sourc...