We have this already: <applet>. As we all know, it didn't work out so well. Having a compact format for classfiles is great. Loading java.util.* for every applet is not so great. Putting .NET into the browser would repeat this horrible experiment.
Miguel and Joe are solving the wrong problem. The problem is the lack of a standard, compact bytecode format for Javascript, not the lack of a complex built-in set of framework classes.
Throwing a whole runtime (Mono+CLI) with all of its legacy baggage at the problem won't solve it. Once you provide a bytecode format for the browser, web-native tools like GWT can generate more efficient, web-native applications.
Couple the JS bytecode format with a global, cryptographically-secure long-term JS cache and you've built something just as powerful as Java or .NET but without the platform impedance mismatches.
Your comment confuses me. CLI-compatible languages compile into CIL, which is a well-defined language almost equivalent to the bytecode produced from it. So the whole idea could be implemented as a .NET bytecode runner. You could implement the Java runtime using it. The number of actual classes needed to make it usable is minimal. (You really only need the basic types - a new profile could be created for that, plus a mapping to web objects - call it .net-web.)
What I actually think is that if you add a standard bytecode to JS and a global JS cache, you get the same effect as with a stripped CLI profile. Why not reuse something existing since we've got both the VMs and the standard ready?
I don't think anyone wants to throw "a whole runtime (Mono+CLI)" at the problem - the core of it is enough. I'm not sure what legacy baggage you're referring to either.
CLI might be well-defined, but you're going to have to implement this VM from scratch in every browser on top of the existing JS VMs. Alternatively you could ship the same VM code with every browser (unlikely - hasn't happened for JS yet) and now you've got two VMs running in every browser with twice the surface area for bugs and security exploits.
Now that you've got the CLR running in the browser, you've got to figure out a way to map it to the existing JS semantics. Microsoft's existing JScript.NET mapping isn't cleanly compatible with the prototype-based inheritance of the Javascript you're writing.
Re: legacy baggage, see:
- the parts of the basic .NET framework that shipped with 1.x and were effectively deprecated during the switch to 2.0, e.g. System.Collections vs. System.Collections.Generic.
- the DLL specification, in which .NET assemblies are wrapped.
If you're not shipping a framework, what do you win by shoehorning an existing bytecode specification into the browser, except not having to write that bytecode specification again?
The web has different requirements than a desktop/server application and needs a runtime built for the web.
I think that [a well-designed bytecode + VM modifications (current JS VMs aren't really designed to run from bytecode) + implementing a global cache] is more work than [creating a mapping / compiler from JS to CIL + creating a stripped-down profile]. The second option doesn't require big redesigns, doesn't open room for incompatibilities, etc. JScript.NET might have some compatibility problems because it started before the proper DLR - a new, clean mapping would be better. A new mapping independent from MS would also be preferable.
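To make that concrete, here's a hedged sketch of what the JS-to-CIL direction could look like for a trivial, provably numeric function. This is illustrative only, not any existing compiler's output - a general JS-to-CIL compiler would have to emit DLR-style dynamic call sites rather than raw arithmetic opcodes:

    // A trivial JS function and, in comments, CIL it could plausibly
    // compile to *if* the compiler can assume the arguments are numbers.
    // Illustrative only - general JS->CIL compilation needs dynamic
    // call sites instead of these raw opcodes.
    function addScaled(a, b, k) {
      return a + b * k;
    }
    // Possible CIL (assuming float64 arguments):
    //   ldarg.0      // push a
    //   ldarg.1      // push b
    //   ldarg.2      // push k
    //   mul          // b * k
    //   add          // a + (b * k)
    //   ret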
I don't see the problem with legacy baggage though - you don't have to ship all the classes. Stripped profiles such as the Compact Framework for handhelds have already been created - there's nothing stopping people from creating a new one for the web. The DLL point is a bit moot - it's just a wrapper, and it works. If it's too much overhead, you can start sharing the CIL itself instead (the content is standardised, AFAIR).
You win a common execution environment. You proposed a common bytecode, but there are only so many ways in which you can implement the runtime environment in that case - each opcode has to be precisely defined. Even if each browser implemented their own, it would most likely be a complete engine redesign with very limited current code reuse. They'd have to either create everything from scratch, or reuse (for example) the DLR and CLI and implement the JS -> DLR compiler on top of them. So that's exactly what you win - not doing everything from scratch.
Several current JS VMs are in fact designed to run from bytecode - SquirrelFish (Safari) and TraceMonkey (Mozilla) both are, though V8 compiles directly to machine code. Their bytecodes aren't mutually compatible, but it's not a stretch to imagine a standard JS bytecode looking much like these existing IRs.
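Purely for illustration - this is not SquirrelFish's or TraceMonkey's actual encoding, just the general flavor this kind of IR tends to have:

    // A plain JS function plus, in comments, a *hypothetical* lowering in
    // the style of a JS engine IR. The opcode names are invented.
    function nextX(o) {
      return o.x + 1;
    }
    // Hypothetical bytecode for nextX:
    //   get_arg   0      ; push o
    //   get_prop  "x"    ; dynamic property lookup, push o.x
    //   push_int  1
    //   add              ; numeric add with JS coercion rules
    //   ret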
Anyways, I still believe that shipping an existing VM, be it .NET's CLR, Java's JVM or Python's bytecode interpreter, isn't the right way to go. There's a lot of innovation left in the current web platform - both in terms of the performance work that browser vendors are pushing out and the new desktop-class APIs that the admittedly slow web standards committees are publishing. For a platform as wide as the web, it makes sense for browser vendors to step back and build something from scratch.
Then LLVM is as viable an option as the JVM and CLI. Even better, it's open source and not committee- or corporation-driven. And from the political point of view, once LLVM finds its way into Safari (very good chance), then Firefox and/or Chrome, consider the problem solved. I think this idea is brewing somewhere right now; it's too obvious.
LLVM is not an alternative to JVM/CLI in any interesting way. It's too low-level. It doesn't provide memory management, introspection, loading, etc. It's just a compilation target really. LLVM has no dynamic features either.
Yeah.. no. Another commenter already pointed out the issues with LLVM being too low-level.. regarding the CLI and "open source"... Mono is open source in every sense of the word (lib and runtime are MIT.. the C# compiler is LGPL).. the runtime spec is committee-driven, true.. but so is the JavaScript language.
As for Apple's investment in LLVM: and? Your speculation that it could be used in Safari, presumably for SquirrelFish, is about as unfounded as the assertion that ecma is plotting a coup by switching from is to ecma.. if you have links to the contrary, I'll be happy to eat my hat, not to mention learn how LLVM is useful in this space.
Also, to understand just how much fun LLVM can be, take a look at Unladen Swallow.. there are major RAM and binary-size bloat issues without the corresponding speed increases, currently. Maybe it'll get better, I don't know.
Anyhow.. if the CLI were a browser standard then MS could use .NET in IE and everyone else could use Mono.. wouldn't be perfect, I'll allow.. but it's better than a bunch of incompatible, duelling JavaScript VMs. Perhaps.
Will never happen of course, but one can always dream.
LLVM may be a bit immature at this stage, true. Then let's look at CLI/C#.
We don't know what patents Microsoft is keeping in its backyard. For one thing, Microsoft is not interested in supporting any technology or product other than its own, and we know they will do everything to promote IE and kill competition at any cost. Just imagine for a moment Microsoft caring about how smoothly C# runs in Firefox on Linux. As the French say, "impossible".
I don't trust them. And Mono is a Trojan Horse, thankfully not as popular and robust as it could (should?) be in principle.
Note that I don't mind the technology itself. C# is neat ("not my style" though), CLI is pretty fast, and they both play nicely with Windows. C# applications in general are smoother and faster than Java-based ones. Love the results. It's just that any port to other OSes will look worse than on Windows, with or without Microsoft's participation.
You're either a troll, or you don't understand what we're talking about. But just to keep things correct:
LLVM is not immature - it simply has a completely different goal than the JRE or .NET.
No one is suggesting C# in this comment thread - just the runtime environment for the IL bytecode. MS wouldn't have to support it, because IL is an open standard which anyone can implement. I wouldn't expect MS to get involved in that project at all. The GNU project has a gcc-cil implementation - if CIL is clean enough for the people at the FSF, I'm not sure what problems you see...
"not as robust as it could be in principle" has no actual information in it. Try listing specific features and reality and not general terms which can be always true and false depending on your point of view.
In "will look worse than on Windows" I assume you're talking about the GUI. In-browser virtual machine for running javascript does not require any interaction with GUI apart from that exposed by the browser. It cannot look different than any other implementation, because there is no display code there.
> LLVM is not immature - it simply has a completely different goal than the JRE or .NET.
But it's a virtual machine too, among other things.
> No one is suggesting C# in this comment thread
It is suggested in the post as one of the possibilities: "We could replace `csharp' with any of the existing open sourced compilers (C#, IronPython, IronRuby and others)." Obviously, if we are looking at the possibility of integrating the CLI into browsers, C# would be one of the main languages - just as Java is for the JVM - for many reasons.
Re: "will look worse than on Windows" -- I meant in all respects, not just the looks but also performance.
Only if what you want to do fits into a wee box. If you want to break out of that box and work with a web page directly, JS is still your only option. Abstractions are possible via LiveConnect (the very thing JavaScript was named for - ability to communicate with applets!), but abstractions 1) leak, and 2) suck.
So: no. What we do have, however, is a number of shots, scattered throughout Mozilla's history, at getting other languages into the browser that crashed and burned. Maybe Miguel has the pull to get uptake for something like this, but I'm skeptical.
There used to be a way of bridging from an applet into JavaScript. It was introduced in Netscape, and IE added support. It allowed you to have business logic in Java (including custom jars, such as for middleware) sending instructions to the DOM layer. But Firefox didn't reimplement it, and it was dropped from IE too. I worked on a business system that worked like this for a while. I think they rewrote it later to use Web Start and Swing.
It is certainly possible to communicate between Java applets and JavaScript, and in a widely portable way. One of my current projects does it all the time, for much the reason you suggest: it means I can have the model in a well-structured Java system, but branch out to JavaScript for the parts of the UI where it is helpful.
To call from JavaScript to Java, you can just use the public members of the applet object.
To call from Java to JavaScript, you can use JSObject.
Aside from working around a few minor gremlins in particular browsers, most of the basic things you'd want in this context "just work". In practice, when I'm transferring big chunks of highly structured data from the Java model to views using JavaScript, I use an intermediate layer in the applet that looks up all the relevant data Java-side and builds a JSON representation that is trivial to work with in JS. That avoids any trickiness with things like marshalling arrays, and it means rendering specialist types from the Java world into suitable strings or objects in JS is all done in one place, according to consistent rules. This all runs fine on IE, Firefox, Chrome, etc.
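In sketch form, the JS-to-Java direction looks like this (the applet id, getViewModelJson and renderOrdersTable are hypothetical names invented for illustration; the reverse direction goes through netscape.javascript.JSObject as mentioned above):

    // JS -> Java: public members of the applet object are directly
    // callable from JS. The applet id and method names are hypothetical.
    var applet = document.getElementById('modelApplet');

    // The applet looks up the model Java-side and returns a JSON string.
    var json = applet.getViewModelJson('orders');

    // The JS view layer then only ever deals with plain objects.
    var model = JSON.parse(json);
    renderOrdersTable(model); // hypothetical view function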
The biggest practical problem I've found is that because relatively few people are doing this for major projects, there is also relatively little documentation around on the web (including the official Java pages and the developer pages for the various browsers). That means if you do run into one of the awkward cases, it's not always a five-second search to find the answer. Still, so far I've been doing this for over a year, using the same techniques in a couple of different contexts, and most of the problems with "incompatibility" have quickly been resolved by updating to the latest versions of things.
I like the JS bytecode idea, but it won't be as powerful as Java or .NET in terms of performance.
Pertinent example: Flash improved its performance enormously from AS2 (ActionScript) to AS3, and one thing they did was add type info. Pertinent because ActionScript is a dialect of the same ECMAScript family as JS.
No, not just due to standards-committee politics. Many people didn't like ECMAScript 4 for good reason. I read the spec. It's monstrous. I have hardly ever seen a language with so many overlapping, redundant, completely pointless language features. I'm glad it's dead and I hope it doesn't become undead one day.
A bytecode designed specifically for Javascript would probably be suboptimal for the performance of non-Javascript languages. I'd rather have something lower-level like .NET/Java bytecode, or perhaps even LLVM.
> We have this already: <applet>. As we all know, it didn't work out so well.
I hear that a lot, but I think Java applets didn't gain widespread traction more because of the arrival of Flash as an easier format for simple animations and interactions than anything else.
Java applets remain a useful tool for browser-based UIs today; one of my current projects involves writing one. The biggest technical hurdle is that not everyone installs a modern JRE with their browser any more, which is a downer for casual use because it means there's stuff to install locally. For tools you're going to use every day it's no big hurdle, though.
The primary alternatives are Flash (which is well-established but looking more awkward and proprietary with all the current politics, and somewhat limited in its applicability) and JavaScript (which has potential, but isn't even close to being a good language for developing serious UIs of moderate or large size today, and no amount of HTML5 canvas/multimedia hype is going to change that).
Thought this was talking about "Command Line Interface" for a while. It in fact refers to Microsoft's "Common Language Infrastructure" (I thought this was CLR, or Common Language Runtime, but I've been out of the MSFT camp for a while).
Anyway - what's the difference between this and plugins/Java applets (anything that compiles to Java bytecode)/ActiveX?
I suppose the biggest is that this proposes that 1) they have access to the DOM and 2) the browser implements the CLI environment.
The Common Language Runtime (CLR) is Microsoft's implementation of the Common Language Infrastructure (CLI). It's not unlike CPython being the "official" implementation of Python.
How much of the web's inflexibility comes from Javascript and cross-platform JS issues, and how much of it comes from things like "Canvas is only reliable in 40% of browsers" and "HTML5 video only works reliably in Chrome and Safari"?
Because CLI could address the former (it seems sensible to standardize a bytecode runtime rather than an entire language and its associated libraries), but probably not the latter.
This isn't just about replacing JavaScript with a CLI VM. It is about replacing the entire web stack, from HTML, CSS, and SVG all the way down to JavaScript.
The primary benefit of this would be that high-level languages like HTML would be implemented on top of the VM, and would no longer be hard-coded into the browser. This would allow you to, for instance, link to the latest version of the HTML runtime the same way you would link to the latest jQuery. We would no longer have to wait for browser vendors and standards bodies to bring us cutting-edge technologies, and alternative UI toolkits and languages would compete on a level playing field with HTML and JavaScript.
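As a sketch of what that could mean in practice (everything about the html-runtime below is hypothetical and invented for illustration; the jQuery call is how version pinning already works for ordinary libraries):

    // Pinning a library version is routine today; under this proposal the
    // HTML/CSS renderer itself would be loadable the same way.
    function loadScript(url, onload) {
      var s = document.createElement('script');
      s.src = url;
      s.onload = onload;
      document.getElementsByTagName('head')[0].appendChild(s);
    }

    // Works today: the page controls exactly which jQuery it runs.
    loadScript('https://code.jquery.com/jquery-1.4.2.min.js', function () {
      // jQuery is now available
    });

    // Hypothetical: the page controls which HTML runtime renders it.
    loadScript('https://cdn.example.org/html-runtime-6.2.js', function () {
      // an HTML engine shipped as a downloadable library (does not exist)
    });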
Doesn't this just push the problem up the stack one layer? Now the W3C needs to define a specification for a virtual machine that can implement all of the current web specifications performantly. If your VM is missing an API to perform some new form of user input (like multitouch) or a rendering method for a new standard (like webgl) you'll need to head back to a standards body to add more hooks, leaving us in the same position.
I think the idea has merit, but the current landscape of software development highly favours the statically compiled C++ browser engines, especially on mobile devices.
That is true, but it is much easier to implement a VM spec correctly than complex high-level specs like CSS and HTML, which leave a lot of ambiguity. New hardware interfaces are also a lot easier to specify when you're talking about functions that will rarely be called directly by human programmers, since they will usually be accessed through higher-level frameworks.
Overall, the goal is to make implementing a browser a much simpler and more direct process, and to defer as much complexity as possible to libraries that can be easily downloaded and updated. This would not only result in a more heterogeneous ecosystem for web developers, but also fewer browser bugs and incompatibilities.
Complex VM? New UI metaphors? ...W3C? Sure, that spec oughta be ready in 20 years or so....
It's a vendor job, not a committee spec. They can take the arrows in the back, keep what works, toss what doesn't. I personally think that modern JS+JIT implementations are nearly fast enough: it's the DOM and its legacy behavior that's becoming the problem now.
The pain of developing web applications is largely dealing with cross-browser DOM and CSS issues. Swapping in a different scripting language may give you faster code execution, classic class-based inheritance and a packaging system, but as far as I can see it does nothing to help with layout issues, widget creation and other UI issues.
That's the pain of developing web apps *today*. Having faster execution makes a new class of web apps possible. (The other two things you mention are incidental next to having native-code speed.) The longer you've been developing on the web, the worse your blinders are when it comes to what we could really do with native-code speeds.
OK, how exactly? Web apps now, and presumably in the future, are essentially engines focused on manipulating a DOM structure, which is then rendered (with any luck, correctly) by the browser, right? So unless you're talking about effectively ignoring the DOM and rendering applications solely via Canvas or SVG, what is the magic sauce that flavors this *new* class of web apps?
Image and video manipulation; alternate widget sets (which nowadays tend to require at least one of heavy image manipulation or OpenGL to really work correctly); 3D applications that actually work and aren't just static models rotating (because now you can actually afford to manipulate vertices with some intelligence); online games that combine these techniques; apps that grab video from your webcam and do something with it (Mozilla demonstrated that: http://arstechnica.com/open-source/news/2009/02/mozilla-demo... ). The dream of doing distributed computing by just visiting a web page could come true (like for X@Home). An OS in your browser would become not-a-joke. Native VNC instead of Flash. Native encryption run by the app instead of the browser, possibly permitting ssh-in-the-browser. Actual typesetting could be implemented (it's surprisingly computationally expensive), allowing actual competition in the Office space.
You rather proved my point, unfortunately. Web apps are not about "DOM manipulation". They are about delivering no-install applications over the internet. They have historically been about DOM manipulation because that was all they could afford to do performantly (and that only barely at times). As that changes, so does the web. We're getting a ways along this path anyhow (http://code.google.com/p/quake2-gwt-port/ ), just with increasingly fast JS and other tech, but it's only going to grow more.
We're talking (or at least I thought we were) about replacing the scripting language in a browser, which visually renders the structure and styling of a DOM. That's how browsers work.
Replacing the scripting language of a browser does not magically make the applications you describe possible, any more than it's possible to draw high-quality vector graphics on a 5250 green screen. You're presuming all sorts of hardware-accelerated graphics, network connectivity, font manipulation and many other fundamental kinds of hardware access that just aren't possible within the confines of (most of) today's web browsers.
What you describe is indeed possible, e.g. Silverlight and Air, but putting a new scripting VM in today's browsers is not going to get you there. You're not talking about web applications, you're talking about a new class of web browser.
The web browser is the web application platform. Did you follow the link to the Quake 2 port? It runs in a browser you can download right now. And while the apps I mentioned do need some more support from the browser platform, that support is actually mostly there in some browsers. The only thing missing is native-code speed.
Are you keeping up with what web browsers are doing lately? They've turned a corner and are burning rubber now that we're increasingly less tied to Microsoft every day, and Microsoft's efforts to hold things back are no longer working. I'm hardly even hypothesizing, you can get demos of many of those things right now.
"Microsoft and its partners hold patents for CLI. ECMA and ISO require that all patents essential to implementation be made available under 'reasonable and non-discriminatory (RAND) terms', but interpretation of this has led to much controversy, particularly in the case of Mono."
Between the unproven nature of the patents in the case of third-party implementations, the uncertainty of how enforceable software patents are at all given recent court precedents, the RAND terms, and the Community Promise, one can hardly say that Microsoft can "pull the plug" on an implementation of ECMA CLI. If anything, it would seem more protected from patent litigation than the average bit of software.
The uncertainty is enough for them to "pull the plug" for all practical purposes simply by dragging people into court. If SCO was able to keep IBM and Novell fighting in court for 7 years on the basis of ridiculous, unfounded claims, how long do you think a well-funded company like Microsoft could keep third parties spending money defending themselves?
MSFT could drag any software company with a piece of software of significant complexity into court over patent infringement however spurious those claims may be. So nobody should ever write software, right?
The parts of the ecosystem that are in danger are things specifically _not_ included in ECMA CLI. For example, Mono implements APIs that Microsoft includes in .NET but not in any of the ECMA specs, and not covered by the Community Promise. This is because software written for the .NET Framework generally uses these unprotected APIs, and one of the goals of the Mono project is to run .NET apps with as little hassle as possible. So they implemented these APIs, for better or worse.
Using the ECMA CLI does not require implementing those Microsoft APIs that are unprotected (by RAND terms and the Community Promise), so this constant threat doesn't really apply. No, Microsoft could not pull the plug on this.
Javascript/HTML/CSS etc. were designed to be very small and to use very little bandwidth. Flash also thrived and won with this aim. Going this route sounds nice, but the battles and bloat would be immense. The web is a thin client; it can do fat-client stuff, but it is ultimately designed to be thin. That is what made it ubiquitous, not being a fat client...
What this is, really, is making an application for the desktop. They could take a branch of WebKit or Firefox and add the CLI, but they could also just make their own downloadable app to do this.
I like the idea; I just think this might wreck the one standard thing we have going in Javascript as the glue of the web. A scripting language can bend to changes, as it has done with minimal versions. Sure, it could be faster and could be bytecode itself, but still, it has worked well to tie things together. Silverlight and Flash can partially do this, but you still have sandbox issues, which this would also have to deal with. You'd still have limitations, yet it would be a jungle. Right now the jungle is the plugin: lots of innovation, but also lots of differentiation, legacy, etc.
I do think that the browser cache needs more space, Javascript engines could be faster, and Canvas support and hardware acceleration are needed quickly. Maybe the CLI would be the way, but everything we do with the web now is design-by-committee; most of the stuff that made it in and works so well was made when these markets were still small or non-existent. Everything from here on out will be slowwww to change.
HTML/Javascript/css is all just text. If this new system could have app descriptions as small and as readable/runnable by future generations then I think it could work.
It worked at the bytecode level - it wasn't just compiling C# into JS or something like that. I seem to remember that it had a full runtime in Javascript. It was terribly slow.
But this kind of approach might not be so bad going forward, now that JS engines are faster. If the browser doesn't have a .NET runtime built in, fall back to JS emulation.
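Something like this shape, say - window.clrRuntime and loadClrEmulator() are invented names standing in for a hypothetical built-in runtime and a JS-based interpreter:

    // Hypothetical fallback strategy: use a native CLI runtime if the
    // browser ships one, otherwise interpret the bytecode in JS.
    // window.clrRuntime and loadClrEmulator() are invented names.
    function runAssembly(bytes) {
      if (window.clrRuntime) {
        // Fast path: a built-in runtime executes the bytecode natively.
        return window.clrRuntime.execute(bytes);
      }
      // Slow path: fall back to a pure-JS emulator, loaded on demand.
      return loadClrEmulator().execute(bytes);
    }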
Can somebody explain what technical reason prevents me from using any scripting language to write my web app's client-side code?
The way I see it, web apps today download Javascript code that calls back to the server from time to time and outputs a DOM tree. Any scripting language can do that, right? Why can't I write my client-side code in Ruby, Python or PHP?
Because you would have to add runtimes for all those languages with ties to the DOM. It's hard enough to keep JavaScript secure. It would be a nightmare to keep different implementations of JavaScript, Ruby, Python and client-side PHP secure on different operating systems.
What was Microsoft thinking when they coined a name whose acronym was already used and very well known in the field?
I think it shows either a little contempt, or ignorance, or just bad taste, to do that without first exhausting any reasonable alternative. The title and article confused the hell out of me due to their hijacking of CLI.