To people wondering why Samsung would join this, I think Samsung is a big believer in the HSA Foundation [1] for heterogeneous computing, and they were one of the first partners in this along with AMD and ARM.
Samsung seems to believe in having the best performing devices and components, so it's not that surprising that they want to be at the forefront of the heterogeneous computing movement.
They also probably see themselves as a competitor to Intel, with both the ARM ISA that they support and the foundry that they own. In the next few years there will be a battle between HSA and Intel's Phi co-processor idea, and I think Samsung is one of the many who want HSA to succeed (using computing power from all sorts of processors vs. using only CPUs for everything).
This is a great story, but probably not the case. I would be surprised if the part of Samsung responsible for HSA is even aware of Rust, let alone that Samsung is working on Rust. The semiconductor division is pretty independent of the other divisions.
This is a great announcement and adds some much needed support to Servo/Rust. I've been pretty impressed with Samsung too. Clearly they want to have their own seat at the table and they are executing quite well on that desire.
I'm not confident that Samsung is interested in developing the Android platform, though. They seem to be working on a proprietary ecosystem to separate themselves from the rest of the Android ecosystem. In fact, this is probably one of their motivations: to have control over a leading browser that isn't Chrome so that they can integrate it into their phones as default instead of Google's solution.
Actually, if Samsung wants to be a player, I really think they can't afford to be tied to Android long term. There is too much control at Google, and Google is another player and nominally an 'opponent' at some point. Of course there is "Android" and there is the stuff that Google puts around it. Watching the Chinese companies add value to the OS install has been instructive as well.
What Android has done has shown the component manufacturers some ways to expand their markets outside "known" players. Part of the challenge of making a phone is getting the chip vendors willing to sign off on giving you software to run their chips (weird I know).
My guess is that carriers want to be able to advertise to a captive audience, and in the US and EU, they can largely choose which phones their users buy; so killfiles as a feature are verboten.
Having spent a week at Samsung Headquarters a couple of years back I can assure you that they are extremely platform-neutral. With the scale at which they operate and the pace at which the industry moves, they would be crazy not to... missing a single 6-month window for global product delivery on a new OS platform against their major competitors (nearby Taiwanese HTC, arch-rival fellow-Korean LG, up and coming mainland Chinese companies, etc.) could be equivalent to many hundreds of millions of dollars of lost revenue.
At the time I visited, circa 2.5 years back, they had (usually physically disparate) skyscrapers within their HQ complex for at least three or four different mobile platforms (Bada, Android, Windows Mobile, and something else I can't recall). Right now I'd assume they have some resources on FirefoxOS and Ubuntu, plus Android, maybe some other systems.
Servo is a "high-risk research project"[1], so I've been wondering what would happen if it doesn't pan out. Rust would still be interesting even without Servo, but would Mozilla still invest in Rust if they stopped working on Servo?
So it's nice to see some non-Mozilla involvement and momentum in Rust/Servo. Maybe that means that an investment in learning/using Rust is that much less risky :)
"I think it's premature to start making a roadmap that makes Servo a competitive product in N years; apart from anything else, such a roadmap is going to be pure fantasy" -- also from that post, from just 6 weeks ago. Curious.
That said, having an ARM backend to Rust on Android phones means big things for the potential of mobile web performance. I'm hopeful!
1) There is no uniform agreement about the state of Servo. Some are more optimistic than others.
2) There are a number of unknown unknowns here; that's what makes it a research project and makes it hard to roadmap. For example, doing parallel layout of CSS involving floats is an actual research topic that people are currently publishing papers on...
Does it even make sense to parallelize layout? When does it ever take longer than a few milliseconds?
I'd rather see DOM updating (e.g. via JavaScript) run in parallel with rendering. Although that already leads to problems, since rendering changes the DOM as well.
I meant that it can be called a learning experience to justify further development. Besides, Rust is genuinely useful. I can't be the only one who is interested in (eventually) using it as my go-to language for my projects.
Personally, once Rust stabilizes as a language, I plan to write the bulk of my "low level" at-home code in Rust. After tooling support gets up there (providing things are still awesome), I plan to use it professionally.
This is really exciting. I'm glad to see Rust being used for serious apps. This is actually the first new browser engine I know of that's been written in the past 10 years.
Good for Mozilla for attempting hard problems in innovative ways.
The goal is extremely pervasive concurrency in aspects that no modern engine has yet begun to approach (and likely could not approach without enormous effort and/or a full-on rewrite).
You can drop the "likely". Retrofitting pervasive concurrency onto a sizable existing codebase is to a first approximation impossible. It makes merely trying to retrofit pervasive unit testing (hard, but "merely" a long mechanical slog) or correctness with string encoding a cake walk. "Impossible" here can be taken to read as "would require as much effort to do the retrofit as the rewrite would take".
> Copy-on-write DOM. In Servo, the DOM is a versioned data structure that is shared between the content (JavaScript) task and the layout task, allowing layout to run even while scripts are reading and writing DOM nodes.
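That's the announcement's description of Servo's design. As a purely illustrative sketch of the general copy-on-write idea (not Servo's actual data structures, and with made-up names), it might look something like this in Rust: readers keep using an old, immutable snapshot while a writer derives a new version that shares the unchanged subtrees.

    use std::sync::Arc;

    // Minimal copy-on-write tree: snapshots are immutable and shareable, so a
    // reader (say, layout) can keep using version v1 while a writer (say,
    // script) derives v2. Unchanged subtrees are shared between versions.
    #[derive(Clone)]
    struct Node {
        tag: String,
        children: Vec<Arc<Node>>,
    }

    impl Node {
        fn new(tag: &str, children: Vec<Arc<Node>>) -> Arc<Node> {
            Arc::new(Node { tag: tag.to_string(), children })
        }

        // "Mutation" produces a fresh node; the old version stays valid.
        fn with_child_appended(&self, child: Arc<Node>) -> Arc<Node> {
            let mut copy = self.clone();
            copy.children.push(child);
            Arc::new(copy)
        }
    }

    fn main() {
        let v1 = Node::new("body", vec![Node::new("p", vec![])]);
        // A layout task could keep reading `v1` while script derives `v2`.
        let v2 = v1.with_child_appended(Node::new("div", vec![]));
        println!("v1: {} children, v2: {} children",
                 v1.children.len(), v2.children.len());
    }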
I think you're being downvoted because if anyone knows about the pain of using C++ to develop a browser engine, it's Mozilla. They have some pretty strong empirical evidence to back up their statement.
No, you don't bash a language for failures of the people using it. It is hyperbole unsupported by any evidence and it is precisely this mentality that keeps our profession on the Greatest New Thing(TM) every x years treadmill for better or worse.
Language bashing/trolling serves no purpose.
I am rather most likely being downvoted because HN in the past few months has taken an extreme downturn towards a slashdot/herd mentality, but that is just another pendulum swinging.
> I am rather most likely being downvoted because HN in the past few months has taken an extreme downturn towards a slashdot/herd mentality
You're being downvoted because you clearly have no idea what you're talking about. It's entirely appropriate to criticize a language for being too hard to use correctly - your argument would apply equally well to saying that C++ should never have been created because it was simply the fault of people using earlier languages not using them sufficiently well.
This approaches dark comedy because you're also criticizing a company which maintains one of the largest and most important codebases in existence and has a huge list of bugs and security issues demonstrating that even in the hands of very experienced developers it's too easy to use C++ incorrectly.
Oh, so you think the statement that was quoted was that C++ was too hard to use? Because when you read what was actually quoted, it says C++ is poorly suited to solve issues related to data races and parallelism, which is a pretty false statement.
Are you able to see how that is different now, with a little help? Or is it so insufficient to make up straw arguments and put words into my mouth that now you want to do the same for the Mozilla Foundation?
C++ is the only realistic option for some things, but it's not without serious problems. Intelligent criticism of a formal system, such as a programming language, is what allows us to make better ones in the future.
If you honestly think there's no point in trying to make better programming languages, you're welcome to continue programming in FORTRAN IV. But don't expect other people to take your opinion seriously about what "serves no purpose".
"Language bashing" is merely a derogatory term for "language criticism". Language criticism is, as I said, a necessary foundation for designing better languages; if existing languages are flawless, then designing new ones would simply be part of "the Greatest New Thing(TM) every x years treadmill" that you referred to in your initial comment. On the other hand, if you can identify real problems in existing languages, then you have some hope of designing new languages that are actually better, not just newer. But you can't do that without "language bashing". Without "language bashing", we'd still be back at FORTRAN IV. And if that's what you want, you can probably live in FORTRAN-world.
"Trolling" is bullshitting to provoke a response; your accusation there, if we take you at your word, would have to be that the Rust developers are developing a new programming language as a sort of hoax in order to get a rise out of C++ programmers.
I don't think it really serves your point well to suggest that Graydon Hoare, Brian Anderson, Sebastian Sylvan, Samsung, and so on, are "childish" and "hard to take seriously" and dishonest, because that requires us to choose between taking Graydon and Samsung et al. seriously and taking you, Shawn Butler, seriously. This is a competition that will be hard for you to win. Perhaps instead you could find a way to couch your criticism (whatever it is) in a way that makes it easier to accept. As it is, only people who have a pre-existing hate for Graydon or Samsung will be inclined to accept your argument.
You have failed to understand a one-line sentence and inflated it into some agenda entirely of your own creation and are attributing it falsely to me.
The contention was simple and utterly straightforward. Having a conjectural and value laden statement that is false and using it in an introduction gives me pause. Doing so persuades people holding surface knowledge (as evidenced by this comment thread) but gives people with experience a different response.
Also while I hate to be the harbinger of bad news, parallel programming in any language is Really Hard(tm) for most people. Personally, I think there's a fundamental disconnect in how human cognition perceives the world on the one hand and massive parallelism on the other at which very few people I have met really excel relative to the larger population.
Are you claiming that all languages are equally good for all purposes? Or are you claiming that C++ is in fact particularly well suited for parallel development and inherent avoidance of data races?
Coming from a C++ developer, I don't think you're well aware of its limitations. Do yourself a favor and learn a little bit of Rust. It'll open up your mind a little, and just might make you a better C++ programmer.
The C++ memory model (which, by the way, is what underlies any threading facility) was always sufficient for handling the complexity, but it never offered any opinions or constraints on implementations until C++11; the standards committee wisely preferred to defer instead to the writers of libraries to provide appropriate designs and services tailored to the specific needs of user communities and particular operating system facilities.
I'm pretty sure based on your writing that you have little to offer me that I don't already understand about the language, but thanks for your hollow advice though. Here, I'll make an empty prediction in return: in 3 years you will be complaining about how difficult it is to do distributed parallelism in Rust and what absolute garbage it is compared to <insert name here>.
The point was simple but fanbois want straw men and windmills at which to tilt: if you have something you think is better, extol its virtues and provide comparative analysis instead of bashing/trolling what currently exists completely out of any useful context.
I really have little interest in discussing anything technical on HN anymore. Here's a token wikipedia link, that's what passes for knowledge I guess [0].
Also I would keep your "advice" to yourself. I certainly hope you wouldn't speak to people like that in person, and you definitely wouldn't be allowed to speak in such a fashion to me in particular. I am pretty familiar with Rust's evolving semantics and syntax, thanks.
I'm not sure exactly what you mean by "the C++ memory model", but I'm pretty sure that it does not underlie Rust's model of threading, unless you just mean that Rust assumes a von Neumann architecture. First of all, the stacks of Rust threads are segmented, unlike those of C++ threads, so there's one feature that cannot be duplicated by a C++ library.
But more importantly, Rust doesn't allow any shared mutable state, meaning that all the synchronization primitives C++11 provides can be used automatically by the language on your behalf. Of course, nothing prevents someone from creating a library in C++ that does the same thing, but there's no way for the compiler to enforce the clean thread-wise separation of state, and if you've forgotten to get rid of some pointer to an object you're passing to another thread, you potentially won't know about it until you're debugging it in production. Also, there are a number of threading optimizations that the Rust compiler can make with reordering or eliding memory accesses that the C++ compiler can't.
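As a minimal sketch (written in present-day Rust terms, not necessarily the exact semantics of the language as it stood back then), the "clean thread-wise separation of state" looks roughly like this: data handed to another thread is moved, and the leftover-pointer mistake becomes a compile error instead of a production bug.

    use std::thread;

    fn main() {
        let dom_nodes = vec!["html".to_string(), "body".to_string()];

        let layout = thread::spawn(move || {
            // This thread now owns `dom_nodes` and can work on it without locks.
            println!("layout sees {} nodes", dom_nodes.len());
        });

        // Uncommenting the next line is a compile-time error ("borrow of moved
        // value"), not a latent data race discovered in production:
        // println!("{}", dom_nodes.len());

        layout.join().unwrap();
    }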
"Or are you claiming that C++ is in fact particularly well suited for parallel development and inherent avoidance of data races?"
I answered that the C++ memory model was certainly sufficient and that the implementation was left to libraries until C++11, when it was standardized. I did not mean to imply that the C++ memory model underlies that of Rust.
And I agree these are great points to illustrate. They would make some great text to use on their introduction page in place of the trolling. Still, I believe the same deficiency exists regarding compiler enforcement for Rust, since the "unsafe" keyword allows for manual management, correct? I can't intelligently comment on the compiler optimizations to which you refer; could you provide some reference to further analysis? Regardless, the same two people are downvoting my comments, so I won't be commenting further.
Although you clearly understand, I'll throw in the token links for people who may not understand what a memory model is or how the C++ memory model has been formally standardized. To be honest, I don't know why I bother.
You come off as whiny and immature, which is why your comments keep getting voted down. I have no doubt you're under the impression they are getting voted down because most people disagree with you, but you're wrong.
Out of curiosity, who wouldn't allow me to speak to you in such a fashion? Who's this imaginary authority who controls how I speak to children?
Why is it so massive? Why does it need to specify in grave detail the difference between <span>, <label>, <code>, etc.? How about a spec like this:
- There are two types of elements: block and inline. You can declare a name of an element with a particular type using perhaps XML namespaces, or just a JSON object.
- Inline elements can be floated left/right along other inline elements.
- Block elements may be positioned absolutely inside their parent elements or relative to their current position. You cannot float them.
- Width/height may be specified as a pixel width, percentage width of parent, percentage of available space, or percentage of screen size. Additionally, you may specify the box model: whether to include borders, margins, padding, etc.
- CSS rules about typography, margins, borders, padding, etc. shall apply. This way, you can include your own basic rules and build on top of them (a rough sketch of this model follows below).
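To give a feel for how small such a spec could be, here is a rough, entirely hypothetical data-model sketch in Rust; the names are made up and correspond to no real standard.

    // Hypothetical data model for the minimal layout spec sketched above
    // (illustration only; not part of any existing standard).
    enum Display {
        Inline { float: Option<Float> },      // inline elements may float left/right
        Block { position: Position },         // block elements: absolute or relative only
    }

    enum Float { Left, Right }

    enum Position { Relative, Absolute }

    enum Length {
        Px(f32),
        PercentOfParent(f32),
        PercentOfAvailable(f32),
        PercentOfScreen(f32),
    }

    enum BoxSizing { ContentBox, BorderBox }  // which edges count toward the size

    struct Element {
        name: String,                         // declared by the page author, e.g. "p" or "code"
        display: Display,
        width: Option<Length>,
        height: Option<Length>,
        box_sizing: BoxSizing,
    }

    fn main() {
        let _para = Element {
            name: "p".into(),
            display: Display::Block { position: Position::Relative },
            width: Some(Length::PercentOfParent(100.0)),
            height: None,
            box_sizing: BoxSizing::BorderBox,
        };
    }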
I had the misfortune to do a bit of hacking with GTK+ and at first thought "what an archaic way to lay out elements?!" Then it came to me that HTML + CSS is not advanced, it is cluttered. There are many ways to position an element on the page, and they will conflict. Additionally, things like opacity affecting z-index, having a parent element have a size to give the child element a percentage size, etc. lead to a ton of hacks. It's time we have a better, cleaner tool than the browser if we are going to build serious apps on this platform.
Yet it cannot do simple things like tell a block element to take up all available height.
The spec focuses on various types of data that could be represented. For example, we have a <code> tag. This is done in an attempt to be semantic. However, it fails at being comprehensive, and ends up falling back on things like <code class="python"> instead of <python>. The distinction between <code>, <var>, <span>, <label>, and other inline elements is completely arbitrary, and which elements get to be first-class citizens is also arbitrary. Giving up and saying that there are only <inline> and <block> elements would simplify things a whole lot. If you can then "subclass" a <block> to create a <p> element or subclass an <inline> element to make a <label>, go for it!
You have basically described XSL FO. In retrospect that's obviously what we should have used for web page layout instead of HTML, but now it's too late.
http://www.w3.org/TR/xsl/
> Unlike the case of HTML, element names in XML have no intrinsic presentation semantics. Absent a stylesheet, a processor could not possibly know how to render the content of an XML document other than as an undifferentiated string of characters. XSL provides a comprehensive model and a vocabulary for writing such stylesheets using XML syntax.
So the big issue with XSL is that it's verbose as hell. I remember using XSL Transforms to do some really simple things, and getting it right was horrible. Debugging it was worse. Given a piece of code that uses HTML + CSS vs XSL, I'd pick HTML + CSS any day simply because it's more readable.
However, yes the core of it seems much better thought out than CSS.
> but now it's too late.
Is it? Is it possible to have some XSL FO to HTML5 + CSS compiler?
Not "a cleaner tool than the browser" a cleaner layout spec. Plus you will have to build a CSS compatibility layer on top. Hard but worthwhile. Sort of thing Adobe might work on.
> Not "a cleaner tool than the browser" a cleaner layout spec.
Yes, agreed. Though the next thing that we might want to tackle is the whole concept of a web page. Seems like storing application state in a URL is a terrible thing, yet it is so convenient for some use cases. This might mess with the idea of a "browser" more since you wouldn't be "browsing" applications, you'd be running them.
> Plus you will have to build a CSS compatibility layer on top. Hard but worthwhile.
Yes, definitely. I can't imagine anything like this taking off without a compatibility layer. However, I think, the compatibility layer could be just HTML, this new CSS replacement, some JavaScript, and a server-side compiler.
Actually I wasn't, entirely; they have a lot of experience in print rendering (PostScript, InDesign), and seem to have an interest in HTML now, but are not really attached to CSS per se. Suspect however they are not...
Not only do you have to implement the massive HTML spec, but you must implement incredibly forgiving error handling. Just about any crappy, illegal HTML must be processed without throwing an exception--and the browser has to render something from it.
Creating a new language that handles concurrency and pointer bugs in a systematic way that is also fast is amazing. Writing a new browser engine in said language is very ambitious and important.
The point probably is that these rendering engines share a number of the basic assumptions that the new project could challenge (like concurrency mentioned elsewhere in this thread).
Try to find KHTML code in modern Webkit. I'd be surprised if it's more than 5%. By that standard, Firefox uses the same engine as Netscape Navigator did.
Original architectural decisions can constrain and push development in a certain direction. An analogy might be made to evolution: one's phylogeny constrains the space of available organisms available in a given timespan.
5% might not sound like much, but I'd want something comparative. How much code in the 3.8 kernel is shared with what Linus released in 1991? I suspect it's similarly low, but we still recognize them as the same project.
The reference prompted me to curse Google for not giving me any good results to "how long does it take for 50% of the atoms in your body to be replaced."
More important than lines of code, how much foundational architecture do they share? Code is just an implementation detail, architectural changes are more feasible in a new codebase.
A similar thing has happened in Gecko. Probably modern Gecko browsers share similar amounts of old code as modern WebKit browsers share of old KTHML code (that is, very little in both cases).
Right, that was my point. It doesn't really matter where it started, whether it's entirely original or not. Otherwise, IE10 should be considered a bad browser, which it isn't.
I am actually surprised this piece was written by Brendan, given the way Mozilla has been moving lately with asm.js, PDF.js and Shumway, which is like Flash.js; it wouldn't be surprising if their next goal were to build the entire (or most of the) browser in JavaScript.
I am surprised that Samsung decided to help, which basically reads to me as: the relationship with Google is going pretty badly, or they are simply hedging their bets.
It was only earlier today I posted that Yahoo should also have a few engineers helping Mozilla to develop Servo.
And I really, really hope Servo is licensed like Rust: dual MIT + Apache 2.0.
Google pays Mozilla to push Google Search, not to develop a browser (which Google can do just fine on their own). It doesn't really mean much that they're teaming with Samsung for Servo, other than Samsung's taken an interest in using Servo for their own products.
Note it's not mentioned exactly which division of Samsung they're working with; they may want Servo for Tizen, their Smart TVs, to ensure it plays nice with Exynos, or none of the above.
[EDIT: the post explicitly mentions porting Rust and Servo to Android and ARM; which rather strongly suggests the mobile group. Interesting, as FirefoxOS and Tizen are more or less aimed at the same markets]
I think it is "none of the above". Servo, right now, is cloud-cuckoo-land stuff; this suggests it is long-bet research, not tied to any particular product division. Samsung probably has Tizen, Smart TV, and Exynos in mind, so you could equally say "all of the above", but I think "none of the above" is a more accurate description.
> It wouldn't be surprising if their next goal were to build the entire (or most of the) browser in JavaScript.
Firefox is that already, the UI is XUL (that's one of the reasons why things like Firebug could exist as a plugin, when Chrome and Safari plugins are essentially glorified greasemonkey scripts). But you still need the core engine to run that javascript (and a bunch of other stuff) somehow, and one of the ideas behind Servo is that you could do so with much more concurrency and reactivity than is currently done.
While this mentions Android, I think the biggest winners in this could be Mozilla's and Samsung's own mobile OSes: Firefox OS and Tizen respectively. I'm hoping that if they're collaborating on Servo then the two platforms will become fairly interoperable (in terms of app portability).
A browser engine is an awfully portable thing. For a consumer, switching between them (e.g. from Firefox to Safari, whatever) is just a question of minor UI skinning. I don't see that it "benefits" any OS in particular. Assuming it's universally better, Android and Tizen would need to port from WebKit and Firefox from Gecko. If it confers a true competitive advantage, they'll all do it. If not, some probably won't.
For myself, I'm skeptical. There's really nothing "wrong enough" with Gecko or WebKit that I can see another option really being that much better. Developers love to rewrite stuff (obviously both Gecko and WebKit are already effectively rewrites of pre-existing technologies), but the market has a long history of not being nearly as enthused. But I'm willing to be proven wrong.
>Developers love to rewrite stuff (obviously both Gecko and WebKit are already effectively rewrites of pre-existing technologies), but the market has a long history of not being nearly as enthused.
Actually your examples nullify your argument.
Mozilla would be dead today without Gecko, i.e. with the old Netscape code pile.
KHTML would not have gone far if it weren't for the WebKit rewrite.
So in both cases, it was the rewrites that made those engines break out.
Surely the impact of Webkit has been greater than that of Gecko? Webkit made decent mobile browsing possible, and until recently Gecko wasn't available in mobile browsers.
Gecko did put a dent in desktop browser usage share early on (when Safari was Mac-only, and Safari for Windows has never caught on – for good reasons), but the desktop is quickly becoming the second screen. Nowadays, most sites are built for Webkit first.
Safari was released in January of 2003. Soon after, Microsoft ceased developing Internet Explorer for Mac. Almost two years later, Firefox 1.0 was released, and it took quite some time after that for it to become a good browser. In the meanwhile, plenty of us had been using a Gecko based browser for years: Netscape.
I'd make the argument that without Gecko the web would have become even more entangled with and dependent on IE. That would have smothered WebKit in the cradle, making it impossible for anyone outside of Microsoft to make a decent mobile web browser.
Until recently, Servo has been labeled as an experiment, and they've claimed that it isn't intended to replace Gecko. I'm skeptical about this claim, but it does suggest that replacing Gecko isn't its primary objective. If nothing else, the insights gained from developing Servo will inform future development of Gecko.
That's not true, both Firefox OS and Tizen use many not-yet-standard APIs. Luckily they are collaborating on many of these APIs and they will be interoperable.
That doesn't help with portability though. Tizen apps and Firefox OS apps have to be constructed in such a way (by abstracting away the not-standard APIs) to make them portable. You don't get that for free.
Throughout its history, HTML has always been a disaster when it comes to cross-browser interoperability. Apps have always been designed for the browser in vogue, which has changed from Netscape to IE to WebKit now.
The more you abstract, the more functionality you lose and the more generic your mobile apps will look. Facebook learned this lesson the hard way when they tried to make their mobile App in HTML5.
CSS, JavaScript, and the DOM are just kludge piled upon kludge.
HTML5 is trying to shoehorn poorly thought-out technologies, created for hyperlinked static text documents, into dynamic applications.
Until some disruptive technology completely replaces that massive HTML5 spec you speak of, native apps will always rule the mobile and performance-critical spaces.
The point is that the goal is to make all of these APIs standardised. Of course you will always have portability issues if you use the newest possible APIs, as someone will always be first out with an implementation. By the time FirefoxOS has any traction, it is likely a lot of these APIs will have additional implementations and/or suitable "polyfills". That'd already put it in a better position out of the gate than Android or iOS in terms of portability.
I suspect you get portability to Firefox for Android for (nearly) free. That's a decent start at the very least (and an important proof-of-concept for other Android browsers).
It's worth pointing out that, in addition to any actual software Samsung might get from this, compared to a global advertising campaign this is a bargain in the form of dev goodwill -> dev mindshare -> general mindshare.
I'm frequently reminded of the spokesperson for the old US Army game pointing out that the 40 million development cost was roughly one 30-second spot during the Super Bowl.
"X is an attempt to rebuild the Web browser from the ground up on modern hardware, rethinking old assumptions along the way. This means addressing the causes of security vulnerabilities while designing a platform that can fully utilize the performance of tomorrow’s massively parallel hardware to enable new and richer experiences on the Web."
Probably says more about how long I've been around but I think I've read a mostly similar paragraph 5 to 6 times over the last decade.
None of those were from the ground up. Mozilla was based on the Netscape codebase, Firefox was based on the Mozilla codebase, Safari was based on the KHTML codebase, and Chrome was based on the WebKit codebase. IE I'm not sure about, but it seems unlikely that they started from the ground up; surely newer IE uses code from older releases.
Unless there are actually multiple layout engines developed by Microsoft. At least three of them are known (Trident, Tasman, and another used by Expression Web), and it is entirely possible that they don't share a common codebase.
This will be a big win for faster web apps. DOM performance is the biggest blocker to having a smooth experience in the browser nowadays, not JavaScript speed.
The DOM should have been a cursor-based or perhaps a pull-based API. Everything being an in-memory tree was OK a decade ago, but not today, with node counts in the millions across browser tabs.
DOM functions could have been generic, à la the C++ STL, with restrictions on iteration.
While it might be beneficial in some cases to save the DOM to disk (mostly because of reboots but also, yes, because of people opening hundreds of tabs), I doubt the API would make much difference - sane browsers would still keep any single page either on disk or in memory; they aren't that big.
Your comment surprises me. Maybe I don't understand the DOM as well as you?
I don't think a DOM tree is necessarily all in memory. For performance reasons you might cache the tree in memory, but that's not a requirement.
You have to query it, right? Of course you can query for the same element multiple times, so it's not cursor-based. But the engine can generate the tree in response to user queries instead of storing the entire tree in memory.
All the DOM implementations I know of store the DOM tree in memory. Because the DOM allows unrestricted write access to any portion of the tree at any time, a disk cache would work well for some access patterns and fail for others. Yes, you can point out that browser implementations don't hold heavyweight DOM node _content_ in memory (usually just a pointer), but that doesn't remove the fundamental problem that the API has mostly restricted the choice of data structures and limited the set of performance optimizations.
I probably should not have used the term "pull-based API", since that is just a fancy word for a cursor-based API.
An example could be SQL: we get a result set after a query execution and can iterate over the elements of the result set. SQL databases can be implemented in a wide variety of ways (though best practice has condensed this to a few).
A browser API that had a query language to get different types of node cursors (read-only, appendable, live, etc.) and some well-defined, optimizable operations on such cursors would have allowed implementors to choose from a wide variety of data structures and algorithms for optimization.
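As a very rough sketch of the shape such an API could take (the names are entirely made up, a Vec stands in for whatever storage an engine would really choose, and a plain tag match stands in for a real query language):

    // Hypothetical cursor-style DOM access: the engine hands out restricted
    // cursors instead of one big mutable in-memory tree, so it stays free to
    // pick its own storage and optimizations.
    struct Node { tag: String, text: String }

    struct Document { nodes: Vec<Node> }   // stand-in for the engine's storage

    struct ReadCursor<'a> { nodes: &'a [Node], query: String, pos: usize }

    impl<'a> ReadCursor<'a> {
        // Move to the next node matching the query; None when exhausted.
        fn advance(&mut self) -> Option<&'a Node> {
            let nodes = self.nodes;
            while self.pos < nodes.len() {
                let node = &nodes[self.pos];
                self.pos += 1;
                if node.tag == self.query { return Some(node); }
            }
            None
        }
    }

    impl Document {
        // Read-only cursor over nodes matching a (toy) query.
        fn select(&self, query: &str) -> ReadCursor<'_> {
            ReadCursor { nodes: self.nodes.as_slice(), query: query.to_string(), pos: 0 }
        }
        // Append-only access: callers never get a handle to rewrite existing nodes.
        fn append(&mut self, tag: &str, text: &str) {
            self.nodes.push(Node { tag: tag.to_string(), text: text.to_string() });
        }
    }

    fn main() {
        let mut doc = Document { nodes: Vec::new() };
        doc.append("p", "hello");
        doc.append("div", "x");
        doc.append("p", "world");

        // Sum the text length of every <p> without ever holding a pointer
        // into whatever the engine's internal representation might be.
        let mut total = 0;
        let mut cursor = doc.select("p");
        while let Some(node) = cursor.advance() {
            total += node.text.len();
        }
        println!("total <p> text length: {}", total); // 10
    }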
> I don't think a DOM tree is necessarily all in memory. For performance reasons you might cache the tree in memory, but that's not a requirement.
Yes, it is not a requirement. But what browser have you heard of storing the tree in something other than memory? Imagine if the browser used the disk to store parts of the tree.
If Google is paying Apple $1B/yr to be the default search engine on iOS, it stands to reason that Samsung could negotiate similar per device rates. That seems like enough potential upside for this to be a no-brainer to me.
I'm no lover of C++, but I'm skeptical that it's really that much more difficult to achieve this design in C++. The parallelism shown is still fairly coarse grained and amenable to traditional techniques. It may be more convenient or less bug-prone with Rust, but I read the redesign and rewrite from the ground up as the more substantial change. If the redesign was using C/C++, they'd likely get the same benefits, just with uglier code.
Isn't it more likely that lessons learned in Servo will simply be back-ported to C++ in Firefox?
You can write equivalent programs in C++, but if you learned Rust well I am not sure you'd want to.
The parallelism shown is still fairly coarse grained and amenable to traditional techniques. [...] If the redesign was using C/C++, they'd likely get the same benefits, just with uglier code.
I should hope so, Rust's parallelism is nothing new after all! (Okay, I know, you probably mean by writing programs using explicit mutexes and all that.) I really want to talk about the second part of the quote: what exactly are those benefits? I think you've excluded safety for some reason.
If Servo offers competitive performance I think it's most likely the whole engine will simply replace Gecko. I mean why would you spend the time and money to go from the safer to the less-safe code when you can use what you've got?
There are obviously benefits in using better languages for safety and error checking; it's one of the reasons why I like Closure Annotations/GWT/Dart over raw JavaScript, because of my experience on large projects and relatively simplistic bugs silently propagating through.
The traditional response to these kinds of language imperfections is "better tests, better practices". Put otherwise, Unit Tests solve all problems. For C++, someone would say to use some set of abstractions or static analysis tools that confers additional safety that isn't available in the out-of-the-box language.
Language level support tends to encourage more consistent usage of something than just depending on vigilance.
My metapoint, though, is that large code bases are hard to displace, especially if the end-user benefits can be achieved through iteration. Rewriting Gecko from scratch in Rust is a tall order, considering how long it took Gecko and WebKit to get to where they are now. It seems much more likely that WebKit/Gecko can be refactored to get most of the benefits in a much shorter period of time than a rewrite.
"The traditional response to these kinds of language imperfections is "better tests, better practices". Put otherwise, Unit Tests solve all problems."
The security vulnerabilities found in all browser engines stemming from things like use-after-free suggest otherwise.
"For C++, someone would say to use some set of abstractions or static analysis tools that confers additional safety that isn't available in the out-of-the-box language."
It's really, really hard. The language works against you at all levels. Type systems are much easier ways to achieve the same result.
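To make the use-after-free point concrete with a toy example (nothing from any real engine): holding a reference into a growing container is a classic use-after-free in C++, where push_back can reallocate and silently invalidate a held pointer, but the Rust compiler forces the borrow to end before the container may be mutated.

    fn main() {
        let mut children = vec![String::from("p")];
        let first = &children[0];
        // Swapping the next two lines is a compile error ("cannot borrow
        // `children` as mutable because it is also borrowed as immutable"),
        // whereas the equivalent C++ compiles silently and fails at run time,
        // if you're lucky.
        println!("first child: {}", first);
        children.push(String::from("div")); // fine once the borrow has ended
        println!("now {} children", children.len());
    }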
I don't find testing as a solution very convincing; it's just the same problem all over again: more code which may run yet fail to live up to expectations. It's a tool, but static checks are better for things that can be statically checked. But I think you hinted at being in line with that, and yes, you should still write tests.
But it's not right to think of Rust as a safe C++ in the small, particularly because of the ban on shared mutable state. IIRC, Rust is partly a product of the difficulty C++ had in this area.
I don't think they're rewriting Gecko. If the transitive property holds, the code in Servo is at least as different from the code in Gecko as Rust is from C++, and I think that's only amplified in the design. Servo aims to do things that are not realistic for Gecko, like parallelizing more of its operations. I don't think they'd be going this far with Servo at this point if it wasn't for that, I think Gecko would just see the necessary changes and that would be it.
Either way, Servo exists. We're told it started as research, but it is obviously at the point where its promise is more concrete. When the lessons have been learned from Servo, it will be a complete, embeddable (this is one goal Gecko gave up on) engine. There's no sense in backporting its innards to the C++ engine when you can just drop it in place.
> Isn't it more likely that lessons learned in Servo will simply be back-ported to C++ in Firefox?
Usually, with high-risk long-term research projects, it's fairly likely that the main benefit will be providing knowledge that can be applied to more conservative projects or to later high-risk, long-term projects.
OTOH, those insights often come from getting out of conservative boxes and trying something radical and new, and C++ is as much a part of the "box" Servo is trying to get out of as is the existing code base of particular browser engines.
My guess is it may reduce their dependence on WebKit, which is mostly "controlled" (for want of a better word) by Apple and Google, their main competitors.
'Controlled' is honestly pretty accurate. In principle anyone can fork WebKit; in practice it would take major investments on the three fronts of engineering effort, standards politics, and mass user adoption to make a WebKit fork that "matters": one that web designers would worry, and have to worry, about being compatible with. If you don't have both the resources and the desire to do that, then you have to play nice with Apple and Google, or with one of their two remaining competitors.
Reduced dependence on Google browsers for their phones.
They have been providing their own proprietary redundant solutions that overlap Google's software territory on mobile, for their own devices. This is just another step. And it gives them the opportunity to integrate with Mozilla to perhaps have a phone using their OS instead of Android in the far future.
A rough guesstimate would be five to ten years. Don't hold your breath either way. Right now Rust is not very optimized and still in flux, while Servo is pretty much a black box for me.
[1] http://hsafoundation.com