What Spectre and Meltdown Mean for WebKit (webkit.org)
397 points by pizlonator on Jan 9, 2018 | 285 comments



I wonder if this shouldn't make us question whether we should still allow all websites to run JavaScript by default.

There are websites that genuinely need to run some code, like webmails, online trading platforms, online games, etc. But 99% of websites have no good reason to do so. JavaScript is used to make up for the shortcomings of HTML/CSS (different rendering for different screen sizes, lack of local validation of forms, etc.), for the mere convenience of developers, and for user-hostile activities (tracking, messing with the default behavior of the mouse, keyboard, clipboard, scrolling, etc.).

This might be an opportunity for revisiting HTML and making it better. An e-commerce site, a newspaper or a blog should have no reason to execute client-side code to render. Once we have a good enough markup alternative, executing JavaScript can become an optional feature requiring user authorisation, like accessing your camera or location.

[edit] also at the very least it should lead us to question the chain of trust of javascript. When I visit abc.com I should only execute javascript from abc.com or a subdomain. No script either hosted on a non abc.com domain or appearing in an iframe should be executed.


> I wonder if this shouldn't make us question whether we should still allow all websites to run JavaScript by default.

Plenty of harmful things are done with C and C++ but no one is saying we should deactivate native apps written in unsafe languages, or not allow anyone to program a GUI unless they can justify the use of canvas space. Yet the web, arguably the most successful and free (as in both beer and freedom) and accessible software platform yet devised, is the only one on which people - erstwhile hackers - say that code should be considered a privilege and not a right, or that it doesn't really belong on the web at all, despite javascript being on the web for ~20 years now.

What you're describing seems a lot like adding Windows UAC prompts to the web - no one would find that better, particularly given that anyone can simply turn javascript off in their browser already. This isn't even a javascript problem per se - you could erase javascript from the universe and make the web as plain as you like and Spectre and Meltdown would still exist.

> also at the very least it should lead us to question the chain of trust of javascript. When I visit abc.com I should only execute javascript from abc.com or a subdomain. No script either hosted on a non abc.com domain or appearing in an iframe should be executed.

That seems reasonable, and I would be willing to bet, possible to configure with modern tools. But pretending javascript is a second-class citizen on the web isn't the answer.
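
For what it's worth, the site-author side of that already exists as the Content-Security-Policy header. A rough sketch of how a site could declare "scripts only from my own origin and subdomains", assuming a Node/Express server (my example, not anything from the article; abc.com is just the hypothetical domain from the comment above):

  const express = require('express');
  const app = express();

  app.use((req, res, next) => {
    // Tell the browser to refuse scripts that aren't served from this
    // origin or a *.abc.com subdomain (inline scripts are blocked too).
    res.set('Content-Security-Policy', "script-src 'self' *.abc.com");
    next();
  });

  app.get('/', (req, res) => res.send('<script src="/app.js"></script>'));
  app.listen(3000);

The catch is that this is opt-in by the site, not something the visitor controls; the user-side equivalent is still an extension like NoScript.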


Random, unexpected, untrusted c/c++ written code doesn't execute on my machine every time I browse a website. The only c/c++ code is code I downloaded and installed or that my OS vendor trusted to include in the OS. It's quite different from javascript where the code is literally executing uninvited, often by 3rd or 4th or 5th parties of the website I visit.


No. Unless the application downloads and executes new code from third parties (code that neither you nor the application author can control) every time you launch it, it's absolutely not comparable to most websites.


>Random, unexpected, untrusted c/c++ written code doesn't execute on my machine every time I browse a website.

It does every time you run a native application, for the same values of "random, unexpected and untrusted."

>It's quite different from javascript where the code is literally executing uninvited, often by 3rd or 4th or 5th parties of the website I visit.

Fine. Turn it off then. That's an option you don't really have in any other runtime, so take advantage of it. But just because you don't want javascript on the web doesn't mean it doesn't belong there when authors choose to include it as part of the content they serve.

Also, how exactly does one run "3rd, 4th or 5th party" javascript?


> Also, how exactly does one run "3rd, 4th or 5th party" javascript?

You load a page that includes 3rd party scripts/iframes that themselves load 4th party scripts that themselves load 5th party scripts, etc. You often notice that when you start blocking an ad in your browser and suddenly 10 others disappear at the same time.
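
To make it concrete, this is roughly the kind of script the "3rd party" serves (made-up domains, purely illustrative):

  // served from https://ad-network.example/ad.js, included by the page you visited
  (function () {
    var next = document.createElement('script');
    next.src = 'https://fourth-party.example/tracker.js'; // which can load a 5th party...
    document.head.appendChild(next);                      // ...and so on, recursively
  })();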


> The only c/c++ code is code I downloaded and installed or that my OS vendor trusted to include in the OS.

You never brew install anything? Or brew cask install, or git clone followed by make, or...

Even npm install sometimes compiles C.

It's hard to get away from c/c++ code.


Maybe he's running Linux? There, software installed through the package manager from the distro's repos counts as code which the OS vendor trusted.


>Plenty of harmful things are done with C and C++ but no one is saying we should deactivate native apps written in unsafe languages,

That's not remotely a valid comparison. If I run a native app, I had to get that app somehow and install it. I know it can modify my computer (and probably have to give it permission to do so). A web page can automatically load other pages, even in the background, and do who knows what. For most of the existence of the web we've all had the mental model of a "sandbox". We never had that for native apps.


Same thing could happen to apps. There was a Baidu app in China that secretly downloaded random videos just to eat up the users' data plan, so they will buy more data.


Again: we've always known that desktop apps were risky and could do anything to our computer (and for years, a lot of "free" windows software installed malware and viruses). We were always told that JavaScript is "sandboxed".


>> Plenty of harmful things are done with C and C++ but no one is saying we should deactivate native apps written in unsafe languages

This is not in any way equivalent to browsing an informational site and having arbitrary code execute.

>> Yet the web ... is the only one on which people - erstwhile hackers - say that code should be considered a privilege and not a right,

Hell no, code execution is a privilege in all situations! I don't just download scores of random executables and run them. Do you?

>> But pretending javascript is a second-class citizen on the web isn't the answer.

Making it one might be an improvement though. There's no way in hell I should be (to give a recent example) browsing a keyboard review and notice that my laptop now sounds like an aircraft about to take off because the site is running something unidentified as fast as it can (likely a coin miner).


NoScript is really easy to install and use.


I've always meant to try it. The various blacklist options have been good but are always leaky.


>Plenty of harmful things are done with C and C++ but no one is saying we should deactivate native apps written in unsafe languages

Isn't that exactly what a lot of people have been saying ever since Rust started taking off? Every time there's a security vulnerability, someone suggests rewriting the program in Rust, if not some other safe language.


> ...say that code should be considered a privilege and not a right, or that it doesn't really belong on the web at all, despite javascript being on the web for ~20 years now.

Same could be said about advertising and I'm infringing on all these innocent corporations' natural rights by running an adblocker.

For random code to run on my machine a few (IMHO reasonable) steps need to happen: 1) somehow entice me to click on a link 2) get past AdBlock 3) get past Privacy Badger

Seems like a privilege to me --and I'm not someone who cares all that much about the shenanigans the big webcorps get up to since I have no money to spend on their ad campaigns anyway.


Be aware that Adblock doesn’t block all JavaScript code. It blocks code after people have identified that code X run on website Y does something bad (tracking, ads, etc.) and decided to block it. This works well for visible ads, because people see them. It mostly works for tracking scripts too, because most of them have obvious giveaways (they come from a different domain, etc.), but chances are you still run random code that is better hidden (it comes from the same domain and does nothing obviously bad). Even with Adblock, malicious code will still be able to run; you are just less likely to notice it, because it does nothing obvious, so it doesn’t get spotted and blocked as quickly. If you don’t want to run code at all from websites you don’t trust, you should probably consider running NoScript.


> Be aware that Adblock doesn’t block all JavaScript code.

That's fine as long as it keeps the annoying stuff to a minimum and Privacy Badger seems to do a good job on the tracking stuff.


I get that (and that’s why I use it too) but you made it sound like Adblock is blocking malicious code and thus you do not need to worry about Spectre and Meltdown, which is definitely not true (though it does prevent some malicious code).


> no one is saying we should deactivate native apps written in unsafe languages

No we should definitely not, but we should encourage using something like rust or whatever comes after rust.


The language is a red herring here.

Javascript is a safe language, but it's powerful enough to exploit Spectre and Meltdown. Rust would be the same. It's not badly-written programs we need to worry about in this context, it's well-written malicious programs exploiting badly-designed hardware.

We don't need apps written in safe programming languages to fix this problem, we need secure browsers, OSes and hardware. (And yes, those bits should be written in safe languages.)


Running untrusted code is a fundamental part of our daily experience.

If you kill the web's ability to do that, people will build something else.

Then we have to go through the rigmarole of securing this whole new platform, with the same bugs but in different ways.

Instead of neutering the web, let's build secure cpus.


No, drop down menus, pre-validation of forms, adaptive rendering are the daily experience. Javascript is just the way we currently achieve that. I argue that a better html would avoid having to do this in javascript, as these are standard features that are needed everywhere.

And even if Intel comes up with a new design available for sale next month, we will still be stuck for many years with this flaw on all the devices out there. We also need to fix CPUs, but that’s a very long term solution.


> No, drop down menus, pre-validation of forms, adaptive rendering are the daily experience. Javascript is just the way we currently achieve that. I argue that a better html would avoid having to do this in javascript, as these are standard features that are needed everywhere.

You can certainly standardize a certain set of features and put them in a non-Turing-complete language like HTML, but standardizing all the legitimate use-cases of JavaScript is almost impossible. You talked about «drop-down menus, form validation or adaptive rendering», but what about interactive charting, modal view of full-size pictures, auto-completion of search tags? These are just features I use on my personal blog! And what about infinite scrolling? What about partial content updates? There are thousands of legitimate use-cases for JavaScript: it's not achievable to standardize every single one of them and put it in HTML, and even if it were possible, do you imagine the nightmare it would be to learn HTML then?


> but standardizing all the legitimate use-cases of JavaScript is almost impossible

I see the problem of not always having the newest technologies in browsers, but the pain of that problem vanishes the more we already have. One might say that securing JavaScript sandboxes is almost impossible, but still we are working on it. Let's work in the same spirit on advancing the open web :-)


> non-Turing-complete language like HTML

I would be careful about that: it wouldn't surprise me if HTML with CSS (at least with all the new things like animations) is Turing-complete (it probably is).


‘You can encode Rule 110 in CSS3, so it's Turing-complete so long as you consider an appropriate accompanying HTML file and user interactions to be part of the “execution” of CSS.’

https://stackoverflow.com/a/5239256/453783


If history is any indication, getting browsers to design, agree upon, and implement "better html" would take about as long as fixing cpus.


The notion that there are so many users of digital devices and services because we are running untrusted, unaccounted code that is automatically executed is plain wrong.

Hardware advanced in ways we didn't imagine in the past, and interfaces got better. Society learned about computers as they got cheaper and knowledge spread, while better software tools formed, accelerating the growth of the ecosystem. Nothing of that requires running delivered code upon visiting a random website.

HTML (or whatever standard we come up with) can be expanded to add the needed features, and enrich the web that way. JavaScript is used to lock us in, exactly what the web shouldn't do.


> HTML (or whatever standard we come up with) can be expanded to add the needed features

I think it is hubris to think we can ever build a spec with all the needed features.

> JavaScript is used to lock us in, exactly what the web shouldn't do.

While I agree the web shouldn't lock us in I would like to understand how you think JavaScript does that?


I don't think it is possible to have a spec that always contains all the latest functionality. But that is also not necessary. The better the spec, the more problems we can solve completely.

I don't think we can always support the newest advancement to solve the latest obscure problem, but I think it's no problem to wait a period, even if it were months or years, for it to be standardized and supported in browsers. The delay is not a nuisance, nor a cost without any benefit. We maintain the open, trustable web by doing that.

We already have a pretty good idea of what functionality we need. For example, we can look at what common JavaScript frameworks solve, and think about what we should standardize from that. Nobody has done that so far, but we totally could, and should.


Waiting months and years for features that your site or app depends on is unacceptable.

> We already have a pretty good idea of what functionality we need. For example, we can look at what common JavaScript frameworks solve, and think about what we should standardize from that. Nobody has done that so far, but we totally could, and should.

And once we have captured those concepts, what then? Where do we mine the new concepts, designs and behaviours that will need to be introduced to the spec and implemented by each browser vendor individually?


> Waiting months and years for features that your site or app depends on is unacceptable.

If we do a thorough job now on extending the code-less web, I find it hard to think of sites that couldn't be realized then _and_ shouldn't be a native program in the first place.

I simply don't want a web that has a place for "apps". For multiple reasons, random websites shouldn't have the possibility to make the visitor's system execute supplied code. Yes, some people don't agree with that, but they usually have motives for that which I cannot agree with, like withholding code or control from users, or tracking them. If you have a good reason why there should be apps in the browser, then I'd love to hear it, though.

The _example_ suggestion to look at existing frameworks is just a shortcut to not redo work, and to use already existing knowledge. If someone has an idea of what functionality should be supported we can discuss and standardize it. We had lots of technology completely independent of JavaScript. There is a lot of movement in the JavaScript ecosystem because a lot of ecosystem needs to be developed and there are big companies pushing for it, because they need it to monetize their users, but that's it. JavaScript isn't that important.


> Instead of neutering the web

If you took JS off of most websites, it wouldn't be neutering the web, it'd be making it better.


A taxidermist could argue that dead animals are better than living ones: they are indeed more convenient, less dangerous and cheaper to feed. But they aren't animals anymore, are they?


Absolutely true. However, I'm also not likely to find my stuffed cat slowed to a crawl and using all my electricity by mining Bitcoin because some 3rd party advertising network got compromised.

Any other daft analogies you want to use?


Your stuffed cat could not, but a living cat could bring you rabies if some third party in your neighborhood got infected, that's my point ;).


So your point is that we _should_ JavaScript?


Who knows! This comment thread is so wrapped up in its own analogy I just stuffed my cat full of JavaScript.


I think it's obvious that the point was no javascript... but a live cat is way more fun, so I'm not on board with that.


> However, I'm also not likely to find my stuffed cat slowed to a crawl and using all my electricity by mining Bitcoin because some 3rd party advertising network got compromised.

You obviously need to upgrade your stuffed cat...


> If you took JS off of most websites, it wouldn't be neutering the web, it'd be making it better.

Neutering animals is seen as making them better pets, it's still neutering.


MathJax though.


I disable JS by default and only enable it if the website requires it (and it seems good enough). Since I started doing it, the quality of my internet experience has improved significantly. Almost all websites load much faster, are more responsive and less intrusive.

Highly recommend it.


Unless you're considering client-side encryption, I don't think webmails qualify either (squirrelmail, for example, works pretty well without JS).


Indeed, because auto-complete of recipient addresses from my contacts is useless, right? Same could be said for retrieving new emails without reloading the page, I guess…

Had JavaScript not existed, the web would be dead a long time ago already.


Both good points (although the second one could be handled through a meta refresh I guess). But I'd like to point out we were discussing where javascript is absolutely needed, not where it's convenient.


It's absolutely needed in the sense that «without it the web wouldn't exist», something else would be there, with the same security issues or worse.

Remember when the web allowed no video playback? Flash was everywhere! And Flash was non-standard and closed-source, maintained by a single company without competition. The current state of the web is a massive improvement compared to what it was 10 years ago, especially on the security side.

If you want to create a webmail (or any other web platform) that needs no JavaScript, go ahead. I doubt you'll be successful competing against people using JavaScript, but if the demand for a JavaScript-free web experience is big enough, you could live happily in that niche.


That will only happen if users stop visiting websites with Javascript enabled. Which will only happen if an actual exploit becomes so widespread that it affects most of them financially.


This whole thread became a big "throw the baby out with the bathwater". Of course there is a long way to a secure web (if that is ever achievable), but disabling javascript is just a lazy alternative, creating a lot more nuisance than whatever damage is being done via javascript lately.

OTOH the "chain of trust" idea looks promising. And increases the responsibility on the visited website, because if it wants third party content to run javascript it'll have to host it itself.


I can't help thinking that some large percentage of what Javascript is used for could be replaced with a much less powerful language.


> An e-commerce site, a newspaper or a blog should have no reason to execute client-side code to render.

The main reason is advertising, which is ultimately how this web content is paid for.

But there could be alternatives. We could create static img elements that could securely and discreetly record whether they were viewed by a human (to prevent ad fraud). We could support simple animations and interactivity in a secure, resource-friendly way. And we could allow these to be disabled by user agents (e.g. for users with epilepsy).

Could newspapers exist without online advertising? Possibly, but only in specialised niches. Consumers have shown that they will always seek the cheapest, most accessible forms of information - even when provided by highly biased, unreliable sources.

The danger of killing online advertising is that it creates a world where the only people who can afford to publish are those with other interests - plutocrats, politicians, churches - and who will inevitably bankroll news only to further political ends.


Static img elements with built-in surveillance and tracking, unblockable and opaque, on a browser level? You have to be kidding.

The ad-based web economy has produced the current publishing dystopia in the first place and prevents the development of a sane business model.


> The ad-based web economy has produced the current publishing dystopia in the first place and prevents the development of a sane business model.

Ah, the 'better business model' argument! A curious specimen, men often speak its name, but rarely agree how it looks or its habitat. Is it a donation model? Wait, no, that doesn't work. A paywall? Ah, but you want content for free. Philanthropist support? Oh, you don't like the conflict of interest. Micropayments? I suppose you have some cunning way of mitigating the transaction costs, without computationally expensive solutions like bitcoin.

Here's my proposition: post a model that is known to work, known to scale, for general purpose content, and you will have a startup on your hands. Otherwise - I ask you to withdraw your comment.


See Netflix, Spotify. Those or others will come for content when ads are gone.


But then why haven't "Spotify for Journalism" projects ever worked? How does a business go from $2 per paper to 0.2c per article? Why are so many artists so unhappy with their meagre Spotify royalties? I think this falls under my "proven to scale" condition.

[In reply to the below - as we have reached the thread limit: there will _always_ be free competition. The problem is: without advertising, the only players left will be those with deep pockets, and deep interests to protect - the state, churches, oligarchs and political groups]


Because there is infinite free / ad-based competition crowding out paid services.


Regarding royalties: my understanding from school is that radio royalties are paid out by sampling from radio playlists. If your song was in the sample then you got paid; otherwise, nope!

Doesn't current streaming tech pay artists per literal play? If so, wouldn't that mean more money for the artists, since all plays are counted?

My understanding is a bit outdated now perhaps, but would like to understand better if someone knows better :-)


>The danger of killing online advertising is that it creates a world where the only people who can afford to publish are those with other interests - plutocrats, politicians, churches - and who will inevitably bankroll news only to further political ends.

You say that as if server space is super expensive, but it isn't, certainly not compared to other forms of media. It's entirely possible to publish to the web without being an elite, people and businesses did it for years before advertising on the web became a thing.


> people and businesses did it for years

The people were hobbyists and the businesses had other projects.

Relying on "interested members of the public" for news is how people like Alex Jones get started.


>The people were hobbyists and the businesses had other projects.

Yes, but my point is that publishing to the web doesn't require advertising on the web. Nor does advertising on the web require javascript.

>Relying on "interested members of the public" for news is how people like Alex Jones get started.

It's how a lot of people got started. You seem to be implying some relationship between the presence of political extremism and the absence of advertising, but I don't see any evidence the two are correlated.


The point is: large businesses have reputations. They are knowable and generally accountable. They can also protect themselves legally.

Semi-anonymous writers posting blogs are none of those.


For me, making a better HTML/CSS and forcing it on everyone contradicts the openness and accessibility of the web. It's 2018 and still there is no reasonable support for not-so-popular languages and locales in any browser or OS.

Consider a simple blog whose users are used to the Jalali calendar. Blocking JS will only make the web harder to use for them. It took many years for simple date utilities to land in the HTML5 standard, and I bet things like decent global localization support will not be part of any standard in the next ten years.

User hostile activities should be prevented, but it should be done by developing more PrivacyBadger like tools, not limiting everyone to a small subset of available technology.


It's too late.

For many years (decades?) I got downvoted (here and on Reddit) for mentioning that I used NoScript. I even disabled JavaScript completely on Netscape Navigator.

It was in the context of sites that were unusable without JavaScript or security problems that only affected JavaScript in browsers.


It's too late for what?

Sure, Google and other advertising companies will never stop pushing technologies that are favourable for them, but we don't have to use them. We can always build open and free alternatives.


It doesn't matter. The playing field itself allows for shitty practices, and people/companies engaging in those shitty practices get more profit than those who refrain from it.

As long as shitty practices are supported by the browsers general population uses, nothing will really change.


Money doesn't win every fight. And not every fight is about money. For example, Mozilla is a non-profit organization. It's not about making everybody stop doing the wrong thing, it's about doing the right thing. Having one modern browser ditch JavaScript would already be a huge win, and people who care could use it. If Mozilla pushed the open web, it would already be much better, because all those open projects could use that. Not everybody uses Google's or Facebook's websites.


This fight is about money, because you're going against the entire industry, asking some players in it to self-sacrifice.

> Having one modern browser ditch JavaScript would already be a huge win, and people who care could use it.

For a short while, maybe, but as people making websites don't care about minority browsers, the number of important websites you wouldn't be able to use through that browser would only grow, until that browser becomes useless.

People making money off user-hostile activities won't voluntarily stop making money off user-hostile activities.


It is not about making everybody stop doing user-hostile activities. That is my main point here, it is not about that.

There are people who have a website and don't want to monetize the visitors. I want a standardized web for them. Not everybody has to belong to that group for it to have merit.

I don't expect Microsoft or Google to stop making their online office stuff. Honestly, I couldn't care less what they are doing. I want solutions for those that serve the users, and nobody else.


Not everyone who uses javascript does so to monetize visitors. It's just a scripting language, it can be used for anything - and any user or publisher who doesn't want to use it already has the freedom not to.


I agree, and I think it is a pity that JavaScript is necessary for a lot of useful functionality which could be standardized to be made available without JavaScript, so people who care can browse the web and their sites in a browser that doesn't support JavaScript and still have a rich experience.


What benefit would there be to doing that? If the functionality remains the same, you've just transferred Turing completeness somewhere else.

I can see the benefit to having languages other than javascript run in the browser, though, but the only way there seems to be WebAssembly.


The advantage of not supporting programming languages in the browser from content that comes from servers is that one doesn't accidentally fall into that trap.

The functionality in my browser is important. The more the browser is scriptable, the better, but not from the website.

The code on my machine is reviewed, vetted, accounted, maintained and free. That is the big difference.


So you're advocating site developers add elements to their HTML which browser plugins then execute like a scripting language? I may not entirely understand what you're proposing.

If the end result is the same functionality as was being provided by javascript, then if you didn't trust the javascript, you can't trust the (now Turing complete) HTML. The same trust and verification issues exist, just moved to another level of abstraction.

>The code on my machine is reviewed, vetted, accounted, maintained and free. That is the big difference.

Is it? By whom? Not by you personally, not all of it. Blind trust is unavoidable at some point.


No, I'm not advocating for any Plugin, but for extending the standard (or creating a new one) with common acceptable functionality that we now use JavaScript for.

For example, we can do tables in HTML. We need no scripting nor plugin for that. For a contrary example, without JavaScript or plugin we can't serve our website as a torrent that is then loaded by the visitors from other visitors. But we can standardize that functionality and add it to the browser. The code for that functionality would then be developed by the browser developers as a part of the browser, and delivered with the browser. The website would just deliver the torrent file and the necessary meta information for it. That way no code or script from the website would need to be executed to have the desired functionality.

I don't blindly trust developers. I nearly exclusively use packages from my distribution. These programs are maintained and signed by people with a recorded track record of their behaviour as maintainers, and they are vetted for by others. They don't have an interest in tracking or monetizing me, and it is a limited set of people I need to trust. Also, I can hold back on updates, and refuse them, if I choose to do so. Programs that come from other sources are carefully looked at by me. There is a big difference compared to automatically downloaded and executed code from some website.


If that - just that - is your scope, then count me in.

I'd love to have a "standard web" that's entirely focused on providing visitors information in an efficient manner. And by efficient I mean not fighting attempts at providing better UIs or aggregating information through machine methods.


It’s only "too late" until we are facing an unfixable hardware flaw that we will be stuck with for the next 10 years (until users fully retire the current CPU architecture).


This is false. Many websites use JavaScript to render the site in full to give users a better experience, such as rendering a SPA (Single-Page Application) to avoid loading unneeded amounts of data on each page request and only load exactly what you need, giving a faster, smoother and higher quality experience for the user.

Sure, the mom down the street who wants to blog about her kitchen recipes won't be needing it, but a lot of websites do.


Websites are only bloated because of the MBs of JavaScript needed to make them applications (images do not count, as they need to be rendered either way and can be cached). Most of the websites I visit every day don’t display that much content. Once gzipped it's a tiny file.


Yeah, and if you visit a site every day, the JS is cached, too. After the initial download, it shouldn't be an issue. This is besides the fact that it should not take MBs of JS to make complex applications.

The issue is what people are choosing to do with the tools, not the tools themselves.


HTML can be cached, too. Even if JavaScript permitted some clever way to save network traffic, like for example delivering the site as a torrent that is then downloaded from other visitors [0], we could standardize that and implement it as part of HTML or so.

[0] For example something like https://github.com/xuset/planktos but without the JavaScript.


> the JS is cached

Except that the current trend in webdev is to deploy small incremental changes multiple times per day and cache-bust the assets that change on each of those deploys. So the user is probably actually downloading the JS asset at least once per day Monday through Thursday (because no deploys on Fridays).


Well, right now the problem is that the tool is broken. We have a long-term, non-fixable issue (outside of replacing the hardware) with untrusted code running locally.


I would be absolutely astounded if there was any SPA which moved fewer bytes over the network than its plain HTML equivalent. HTML can be written compactly, and the repeated structures compress well; any increase in the size of the data over JSON will be more than made up for by not needing to load megabytes of javascript.


The statement "a better experience" is debatable.

I cannot remember how many times I have visited SPA websites that break the browser URL history.

Also, if a SPA fails to load a request for any reason, try reloading it. Oops, you start over.

And I am not talking about some people that don't know what they are doing. At times I've had issues with Google's new developer console, gsuite admin, analytics, product hunt and others.


These are all examples of bugs or badly designed/coded applications. Don't blame JS for bad coding...


Nonsense. Bad tools are bad tools, as evidenced by the fact that literally no one can use them properly at scale.

Evangelists of bad technologies always like to trot out the nonsense about "a good carpenter never blames their tools"... well of course they don't: they buy the tools and always buy the appropriate ones. Further, if they did complain, the customer would wonder why they bought bad tools in the first place.

A carpenter would never try to build a building with a "JavaScript" equivalent in the first place. They recognize the tool as garbage (perhaps after trying, and failing, to use it) and switch to something more appropriate.


But it's really really really hard to not do bad code in JS - a whole ecosystem seemingly built to encourage terribleness.


Well, if google can't do it properly then who can? Also, I was talking about SPAs specifically not JS in general.


If you took away JS from Facebook and Twitter, they'd literally be a better experience. They'd be forced to give up much of what makes them painful to use. At least Twitter had a character counter, that was a useful JS feature, but they broke even that.

And if you need a diagraming tool, maybe a website isn't the best solution.


It's that simple, huh? Why not save everyone the trouble and replace GUIs with terminals while we are at it.


You mean those eternally scrolling pages whose main purpose is to hide content that belongs to the user in the first place? These sites are a pest.


if you aren't ready to wipe and trash your box at a moments notice and start fresh then why are you even on the internet?


> There are websites that genuinely need to run some code, like webmails,

Not really, not if you think about it. Webmail doesn't need anything more than html(<5)+css.

> online trading platforms

Ditto.

> online games

Honestly, I think running webgames in a super-sandboxed flash (or similar) runtime is the best thing to do. Again, no use for js on the web.


>Honestly, I think running webgames in a super-sandboxed flash (or similar) runtime is the best thing to do.

Why is one ECMAScript sandbox (flash) so much better than another?


Because it came with an awesome IDE/editor.


Yeah, let's all just use bash commands to browse the web, way more secure and sure you can get the information you need... oh wait, wasn't that 20 years ago?


And it would still be possible if web standards didn't accrue so many capabilities that allowed people to treat them as a new medium of publishing color magazines (with invasive tracking as a cherry on the cake).


Built-in JavaScript has a better security track record than Flash and its ActionScript engine.


That's true, but probably only because writing games and animations in js is sooooooooooo much harder than doing the same in flash.


>Not really, not if you think about it. Webmail doesn't need anything more than html(<5)+css.

Because redownloading the entire page every time you click a button is such a great experience...


The page without the javascript? It likely would be, since that would be much smaller.


That Javascript would already be cached and have no impact on further page loads.

Clicking "star" and losing your half-typed message would be very impactful, though.


This is a great article on the subject and I like the clear explanation, with code examples on their fixes.

I don’t think I will ever forget the line “we cannot trust branches”, it’s just amazing that such a fundamental part of the CPU was broken here.


I've been explaining to friends that Spectre is like discovering that ESP exists.

Not that Spectre attacks are going to be easy to pull off, but it really makes you reconsider everything that you thought you could take for granted.


The scary thing is that without mitigations, it is shockingly easy to pull off. WebKit engineers made multiple fully working exploits internally and we would be hard pressed to do that for other kinds of vulnerabilities. Building full exploits out of your run of the mill use-after-free bug is much harder and requires specific expertise.

Fortunately these mitigations make it way harder.


I really hate to see this rush to "make sure our code doesn't use branch prediction". It may be the best fix we have available now, but it's going to create a legacy of bizarre inefficient machine code that will last forever.


I feel the opposite. I'd like nothing better than for this to be the end of branch prediction. I can think of no other technology that's been so detrimental to high-level languages as branch prediction. How many elegant, beautiful algorithms are replaced by bizarre data structures, etc., because avoiding pipeline stalls will trump any other optimization you can make.

Most things the CPU is doing we can ignore in high-level languages, because the compiler/optimizer can turn our high-level code into the appropriate byte code to take advantage of deep hardware magic. Not branch prediction, though: no matter how high-level my language, I still must consider pipeline size, cache alignment and so on when designing a novel new container or algorithm, or risk it being inexplicably slow.

Branch prediction is, and always was, a hack. Let it die.
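
A toy example of the kind of contortion I mean (mine, purely illustrative): the same reduction written with a data-dependent branch and without one. On current hardware the first version's speed depends on how predictable the data is, which is exactly the low-level detail I'd rather not have to think about.

  // Branchy: the conditional typically compiles to a branch that
  // mispredicts frequently on random data.
  function sumAboveBranchy(values, threshold) {
    let sum = 0;
    for (const v of values) {
      if (v > threshold) sum += v;
    }
    return sum;
  }

  // Branchless: same result, no conditional branch in the loop body.
  function sumAboveBranchless(values, threshold) {
    let sum = 0;
    for (const v of values) {
      sum += v * +(v > threshold); // +true === 1, +false === 0
    }
    return sum;
  }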


But removing branch prediction would be to restore those stalls and inefficiencies as the way things are when you don't use those strange data structures, right?

The effect of this seems to be "I wish everything was slower and then the naive implementation would be the best implementation".


>restore those stalls and inefficiencies as the way things are when you don't use those strange data structures, right

No, a stall is only a thing because branch prediction exists. Get rid of it and I can go back to not caring about the deep underlying architecture of the hardware I may potentially be running on. And not doing branch prediction isn't inefficient... that's a bizarre statement. Branch prediction and speculative execution are themselves inefficiencies (i.e. spending power to do things that will turn out to not be needed fairly often).

>the naive implementation would be the best implementation

Not the naive implementation. A well thought out, developed implementation that is defeated by deep hardware details. The promise of high level languages was always to avoid having to know these sorts of details. In designing an algorithm I should be worried about things like how often I access data (e.g. can I cut down on how many accesses with some clever structure?) and the like. Now that's all trumped by deeply knowing the low level implementation.


Nothing changes for high level languages. If they are high level enough they can still fix Spectre and abstract away deep hardware details.


That's exactly my point: you cannot abstract away details of pipeline size. You must know this to get the best performance out of your structures and algorithms. There is no way for a compiler/optimizer to look at your container implementation and say "oh, this will potentially overflow the pipeline, let's split the data structure into multiple parts and change the code to handle this new access strategy"... that would be creating entirely new code (what would stepping through a debugger look like on such code?).


Our mitigations do not prevent branch prediction.


You are removing branches, and adding slightly less clear code to do it. If this resulted in faster or cleaner code you would already be doing it. So the OP’s point holds: if these security holes are fixed at the CPU level, there will still be a generation of software compiled with these mitigations in, and no way for a user on a newer CPU to disable them.

EDIT: I just wanted to clarify here that I did not mean “all branches”, just some of them, hence the name of the technique, which is “branchless security” (which is a really good piece of engineering). The frustration is basically that we have to bake CPU-model-specific mitigations into binaries. You can of course patch them out, but the affected CPUs will in theory be in the wild for a long time to come even once corrected ones are released.


The mitigations are largely in code produced by the JIT. It's not hard to patch the JIT's codegen to back this out (perhaps even based on current CPU) if it becomes unnecessary. It would be nice if CPU architectures never had this fundamental vulnerability, but they do. So this seems like the best response for the reality we actually live in.


That is super cool. I had no idea a lot of this was JIT side, and I do agree this is one of the best fixes I have seen for this yet.


We are not removing branches.

To fix this in hardware without software changes you would have to disable branch prediction. That would perform worse than what we are doing.


Sorry, maybe I missed something, but you had some really cool techniques to avoid using branches to do things like array length checks and type checks? So you replaced an if branch with a new, potentially slower, and harder to understand concept. I was not trying to imply you removed all branches.

It is a really cool piece of engineering but we can still be sad we have to implement it.

(On the hardware side I was assuming someone clever at Intel will come up with some secure, non-perf-killing change to make branches trustable again.)


Now we are doing both bounds check branches and index masking. The index masking is purely additive. We didn't replace branches.

Similarly, where we use pointer poisoning, it is in addition to some other security check, which is almost always a branch.
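
If you write the index masking idea out as plain JS just to illustrate the concept (the real thing is emitted by the JIT as machine code and the details differ), it is roughly:

  function maskedLoad(array, index) {
    // Smallest all-ones mask that covers every valid index of the array
    // (illustrative computation only).
    let mask = 1;
    while (mask < array.length) mask = (mask << 1) | 1;

    if (index < array.length) {      // the bounds check speculation can skip past
      // Even if the branch is mispredicted, the AND bounds how far out of
      // range a speculative access could reach.
      return array[index & mask];
    }
    return undefined;
  }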


Also, security aside, removing (mispredicted) branches is a legitimate way to try to speed up code!


How will this last forever? Browsers seem like a relatively competitive and performance-oriented space.


These mitigations feel like a half measure. To quote the Spectre paper:

"Even code that contains no conditional branches can potentially be at risk."

"long-term solutions will require that instruction set architectures be updated to include clear guidance about the security properties of the processor, and CPU implementations will need to be updated to conform."

It seems too early to declare Spectre class attacks mitigated by the mechanisms presented in the OP.


I don't think anyone is claiming this is solved. Mitigations are by definition a half measure. "Mitigate (verb): make less severe, serious, or painful."


Maybe more than half. ;-)

I think that the branch aspect of Spectre is the thing that WebKit is most affected by.


Translated, this basically means: for real security to exist, the chip has to be open source down to the layout.

This will not happen. So basically, the interest of the one outweighing the interests of the many, results in the many suffering for what exactly?


> open source down to the layout.

I don't see what open source has got to do with any of this.


More security experts would be encouraged to have a look at the design and to find flaws early on.

Of course, we all know that this doesn't always happen, see OpenSSL. However, once a major incident (Heartbleed) happened, they did: Many more OpenSSL issues were found and fixed, forks with different trade-offs came into place. For example, LibreSSL traded backwards compatibility with ancient systems for a smaller code base and increased security.

Since CPU designs are not Open Source, and on top of that flooded with patents, nothing like that will happen in this space. Intel and AMD are on their own, rather than having their design checked by a motivated international research community.


But these attacks (Meltdown/Spectre) are on a fundamental design approach, which was conceived and developed and researched in the open. People in colleges all over the world study them. Do you really think this would have been caught much sooner if Intel had released all schematics and layouts to the public?


I'm just saying that in general, the incentive for a scientist to put work into an open system is orders of magnitude higher than to put work into a closed system.

To provide a similar example:

The crypto experts around Daniel J. Bernstein and Tanja Lange stated publicly at 34C3 that they refused to perform cryptanalysis on a certain algorithm that was patented. But they (and others) published good cryptanalysis results (working attacks!) just a few months after the patent expired.


> I'm just saying that in general, the incentive for a scientist to put work into an open system is orders of magnitude higher than to put work into a closed system.

They already do that, I'm sure you can find a multitude of papers on branch prediction and speculative execution if you simply took the time to look. Probably even some by the very same people who designed the Intel chips causing all the fuss.


Please refrain from strawman arguments.

Nobody said there is no research, just that openness would lead to more research. Even "a multitude of papers" was obviously not enough to catch this earlier.

On top of that, please refrain from personal attacks.


> This will not happen

Note that we do have good Open Source CPU designs, though, such as RISC V.


Mitigations for Spectre and Meltdown are also being added to the JavaScript VMs in Chrome [1], Firefox [2] and IE/Edge [3].

Are similar mitigations also needed in the VMs for other dynamic languages, such as CPython/PyPy, Ruby MRI and Lua/LuaJIT? What about the JVM and Microsoft's CLR?

Or are these other VMs not susceptible to this form of attack?

[1] https://www.chromium.org/Home/chromium-security/ssca

[2] https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

[3] https://blogs.windows.com/msedgedev/2018/01/03/speculative-e...


> Or are these other VMs not susceptible to this form of attack?

I think the main difference is that all the other dynamic languages don't let someone do a drive-by attack; you have to download the code and run it, as opposed to clicking a link and having who knows what appear.


Actually it's not entirely exotic for games to use Lua in a manner that's comparable to JavaScript in browsers.

Connecting to a server could instruct the client to download a custom map that embeds code, or to download and execute sandboxed code on its own, by design.


> Are similar mitigations also needed in the VMs for other dynamic languages, such as CPython/PyPy, Ruby MRI and Lua/LuaJIT?

Yes.

> What about the JVM and Microsoft's CLR?

Yes.


Thanks. I suspected as much for LuaJIT because, like a JavaScript engine, it supports JIT compilation of untrusted code. It's interesting to hear that PUC Lua also needs these mitigations even though it's only an interpreter.

As for implementations of Python and Ruby, they might not worry about these attacks - because AFAIK they do not try to support secure execution of untrusted code.


I see how this addresses some examples of Spectre, and it's clever (hi Filip!) and probably worth doing. However, not all possible Spectre-related vulnerabilities involve type confusion.

E.g. imagine if we had JS code that does

  let x = bigArray[iframeElem.contentWindow.someProperty];
Conceivably that could get compiled to some mix of JIT code and C++ that does

  if (iframeElemOrigin == selfDocumentOrigin) {
    index = ... get someProperty ...
    x = bigArray[index];
  } else {
    ... error ...
  }
and speculative execution could let the property value leak into the cache.

So I'm not optimistic about the prospects for fully automating defenses against Spectre without hardware fixes :-(.


We assume that no if statement that implements a security check is safe without some mitigation. It's not just bounds checks and type checks.

Origin checks are a special case worth considering. I am not sure what you describe would be exploitable in practice (reasons too long to fit in this margin) but worth looking into.


Hi Robert! It’s been a while! :-)

Oooh that’s a good catch. I agree that pointer poisoning and index masking don’t handle this case. Unless the pointer to the data structure being accessed (bigArray or whatever) is poisoned with a function of origin. Maybe that’s the way to do it.

I am still optimistic about this. It’s going to be a lot of work and there will be surprises but then that’s always true when writing software.


Restricting performance.now()'s resolution to 1ms is really bad news :( Once the other mitigations are in place, will the webkit team consider increasing the resolution again? Can we get a Developer menu or Web Inspector option to temporarily enable the old resolution again?


It's surprising to me that there are such various levels of coarseness each browser vendor chose:

Gecko [0]: accurate to 20us

Edge [1]: accurate to 20us, with an additional 20us of noise

Chrome [2]: accurate to 100us (thanks mkeblx for finding this)

Safari [OP]: accurate to 1000us

[0] https://blog.mozilla.org/security/2018/01/03/mitigations-lan...

[1] https://blogs.windows.com/msedgedev/2018/01/03/speculative-e...

[2] https://chromium-review.googlesource.com/c/chromium/src/+/85...



In the same article, they describe their intent to, yes, restore timing resolution, once alternative mitigations mature.


1ms is absolutely overkill IMHO and will require rewriting a lot of code which measures frametime this way. This is not an issue for the precision reductions in the other browsers (Chrome's 100us is just about what's still tolerable, Firefox's 20us is fine). It's probably better to round the measured frametime to the next 'vsync frametime' like 16.667ms or 33.333ms anyway, but I expect a lot of WebGL demos and games to break :/
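
Something like this is what I mean by rounding to the vsync interval (assuming a 60 Hz display; adjust for other refresh rates):

  const VSYNC_MS = 1000 / 60; // ~16.667 ms per refresh at 60 Hz
  function snapToVsync(frameTimeMs) {
    // Snap a measured frame time to the nearest whole number of refresh
    // intervals, never less than one frame.
    return Math.max(1, Math.round(frameTimeMs / VSYNC_MS)) * VSYNC_MS;
  }
  // snapToVsync(17.2) -> ~16.667
  // snapToVsync(31.9) -> ~33.333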


What's an example scenario where you need an ultra-high-resolution timer?

For WebGL, don't you just rely on requestAnimationFrame to call you at the right times, and draw your frame as quickly as possible? What's the benefit of being able to measure the frame time to the nearest microsecond, rather than the nearest millisecond?


Timing behavior in literally any game.


Does the timer need to be much higher resolution than the frame rate? If so, why?

Maybe you want to measure the frame rate precisely? That is certainly something you can do, but not usually something you need to do.


We are looking at ways to make it more precise but with random jitter. 1ms is a stopgap.


Can't the timing (in)accuracies be worked around by javascript? There must be countless numbers of timer-triggered events in JS, not just the high-res timers but simple second-by-second counters, frame sync events, or even external I/O. Every possible time-based event needs to be made imprecise, otherwise programmers can derive a higher precision timer from them.

e.g. if you have a tight loop that just increments a counter, couldn't you just run that for a second (interrupted by a lower frequency timer source), then find out how many counts/second the JIT can perform, presumably millions or billions. To time operation 'X', you then just perform X and run the tight loop again, and compare how many fewer tight loops you completed.
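
Roughly like this, I mean (just a sketch of the idea, untested, and assuming the coarse clock returns the same value throughout one of its intervals):

  function spinUntilNextTick() {
    const t = performance.now();
    while (performance.now() === t) {} // align to an edge of the coarse clock
  }

  function iterationsInOneTick(op) {
    spinUntilNextTick();
    const t0 = performance.now();
    if (op) op();                       // the operation being timed (optional)
    let count = 0;
    while (performance.now() === t0) count++;
    return count;                       // fewer iterations left => op took longer
  }

  // The gap between a baseline run and a run that includes op is
  // proportional to how long op took, at sub-tick resolution:
  //   const baseline = iterationsInOneTick(null);
  //   const withOp   = iterationsInOneTick(() => operationToTime()); // hypothetical op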


> WebKit is affected because in order to render modern web sites, any web JavaScript engine must allow untrusted JavaScript code to run on the user’s processor... WebKit is affected by both issues because WebKit allows untrusted code to run on users’ processors.

There's the elephant in the room, though this article doesn't go far enough. Trust isn't binary. It's not that the code isn't formally verified, or doesn't type check. It's that the people running it don't consciously install it, the people distributing it don't know or care what it is, and the people writing it are often adversaries far more sophisticated than the people running it.


Unfortunately the genie is out of the bottle already. The web by and large requires javascript, and it's not likely to change anytime soon. So we are stuck with the situation where the defenders (the hardware and software designers) have to be correct 100% of the time on a platform that is constantly changing. The attackers only need to be right once. Maybe someday everything will just be streaming video or something, but that is a long way off.

As for relying on users' ability to determine what code is safe to download... It's not like users can be relied on to do the right thing all the time even without javascript. People respond to phishing attacks, download executables sent to their email, reuse passwords... I think a sandbox is a better solution than relying on users to understand what code is trustworthy and what isn't.


Not sure it’s a genie situation. Look at Java applets, Flash player.


The situation is apples and oranges.

Dumping Flash and Java took years, and was driven by rapid adoption of mobile devices that either didn't support them at all (Apple) or didn't support them very well (everyone else). There was an already deployed alternative (Javascript).

Javascript is buried much deeper in modern websites and would be much more difficult to replace than either Flash or Java Applets were for most sites. For most sites Flash was just to play videos or display ads. Many sites only had to replace an object tag with a video tag. Java Applets were fairly niche in the first place, so aren't really comparable.

In addition there isn't currently a viable alternative for Javascript. Maybe WebAssembly someday, but currently it's designed to supplement Javascript not replace it.


Big changes do happen and look small only in retrospect.


But they don't happen overnight. I didn't say change wouldn't happen, I said it wasn't likely to happen anytime soon.


Let the record note my gripe was only with the genie/bottle wording, which indicates an irreversible change :)


Depends, I whitelist JavaScript on the websites I trust, and use native apps when given the option.

On the OSes I use, I make sure all sandbox options are turned on.


Would you consider yourself a typical example of a naive user? The security default needs to work for everyone. I think the sandbox approach is better than expecting everyone to correctly decide when it is or isn't okay to allow a site to run code.


Yes in the context of iOS, Android and UWP native apps.


You must run in much better circles than I do. Nevermind a naive user, the average users I know wouldn't know how to disable javascript, or whitelist sites, or that there even were sandbox settings in the OS.

Some of the older people I know can't even correctly choose between writing a text message and a facebook post.

If you think you are a typical example of a naive user, I would hate to see what you expect of the average user.


You completely misunderstood me.

iOS, Android and UWP sandboxes for native apps are always enabled, there isn't any configuration for naive user available.

You just allow or deny access to specific actions, and that's it.

On UWP you cannot even access files directly, the user has to select them for your application.

You can only change the way sandboxes behave via developer settings, but that is only in the context of debugging, which naive users will never do anyway.


> The web by and large requires javascript, and it's not likely to change.

While I am mostly a pessimist, this is one place where I harbor a tiny sliver of optimism. Once users recognize that something is awful, and have the means to get rid of it, they eventually do so. Client side Java and Flash are dead, because everyday users figured out that they sucked, and were given the option not to use them. We're getting close to a situation where average people recognize that millisecond auctions to run arbitrary code on their machines are a bad idea, and those people also have the tools to say "no." Hope springs eternal...


The users didn't make the decision, the platforms did, and the decision was made after an alternative was available in Javascript.

If anyone killed Flash and Java, it was Apple by refusing to support either on the iPhone. All the users did was make popular a platform that didn't support them. If Apple had supported Flash and Java we probably would still have them.

Signed code that users had to install was tried before with ActiveX. It didn't work either because you can't rely on random developers to write secure code either. It's also how IE ended up with Flash. Click here to download Flash (signed by Macromedia).

So basically everything sucks because we're human and make mistakes. Maybe one day we'll have machines that reason more thoroughly than us and they can come up with a secure platform. Until then... Javascript and sandboxes.


> If anyone killed Flash and Java, it was Apple by refusing to support either on the iPhone.

Apple may have put the nail in the coffin, but long before that quite a few developers and users were avoiding Java programs when there were alternatives, and ClickToFlash was a popular browser plug-in. As they often do, Apple saw which way the wind was blowing.


> Client side Java and Flash are dead

Not paying attention to the news? Just wait until WebAssembly gets more mature.


I don't think WebAssembly is going to bring back Flash or Java Applets in any meaningful way. Maybe someone will hack something together and use it for niche old Flash game sites, but it's hard to see a reason for widespread adoption of anything new. People have moved on.

Flash and Java Applets may not be "dead" forever, but they also are not likely to ever be more than undead zombies.



I think we are talking about different things. Half your links have nothing to do with Flash or Java Applets (.net, XAML?).

You seem to be talking about new platforms that may derive some part from the old. I'm not saying WebAssembly won't be used for new platforms as that's sort of the whole point of it.

What I'm saying is that WebAssembly won't bring back people making Flash .swfs or writing classes derived from java.applet.Applet in any mainstream way. In that sense Flash and Java Applets are dead. Maybe someone will hack something together that allows you to run them, but that won't bring the developers back to the old platforms.

If Oracle or Adobe announce something, maybe the industry will jump on board, but currently I don't see anything in your list that makes me change my mind about Flash and Java Applets being dead and not coming back.


TeaVM allows any Java application to be ported into WebAssembly, be it an applet or not.

As3-WebAssembly is a compiler for porting ActionScript 3, Flash's programming language, into WebAssembly. It is already integrated into FlashDevelop.

Microsoft's and Xamarin's efforts to port Mono to WebAssembly will allow making Silverlight apps again.

You are free to believe this won't turn into anything; I rather think we will end up with WebAssembly + Canvas/WebGL in a couple of years.


As I said, we are talking about different things. I am talking about running binary executables from old platforms and people making new binaries in the old way again; you seem to be talking about porting old source code to new platforms.

The new platforms may use the old languages, but the runtime libraries are not the same. The new platforms have different APIs and functionalities (no threads in TeaVM for instance).

To simplify my point: ActionScript is just a language. Adobe Flash was much more than that. Maybe companies really will find some compelling reason to start using ActionScript again... but Adobe Flash and its SWFs will still be dead.


> Not paying attention to the news? Just wait until WebAssembly gets more mature.

Not yet. If "WebAssembly" is anything like actual machine code, then it will be easy to compile a bunch of languages to "WebAssembly," and a huge pain to make them talk to each other. So nothing really changes.


> Unfortunately the genie is out of the bottle already. The web by and large requires javascript, and it's not likely to change.

This fatalist stance is childish, and dangerous. As long as there are people working to make things better, things do improve.

Sure, if you make sure that one small website works without JavaScript then that doesn't change the world. I hear a similar argument when I explain why I don't eat meat; I'm told that animals will be killed anyway. But if I don't eat meat, then fewer will be killed. How many fewer? I can't specify the number, but I don't demand that my own decision magically change the world completely overnight, and then give up when I realize it doesn't.

Still, each and every one of us has an influence. It's about doing what's right. And even if you are the only one doing the right thing, things will be a bit better for your actions. But in fact the situation isn't that bad, and one isn't alone.

> It's not like users can be relied on to do the right thing all the time anyway.

So, if you aren't sure you locked the front door of your house, then it also doesn't matter if you left the door to the garden open? Clearly it does matter!

This fatalism dooms us to inaction and to accepting the status quo even when we know it's bad.


>> Unfortunately the genie is out of the bottle already. The web by and large requires javascript, and it's not likely to change.

>This fatalist stance is childish, and dangerous. As long as there are people working to make things better, things do improve.

How is it any more of a fatalist stance than the idea that javascript or running untrusted code in a sandbox is fundamentally broken?

Personally I think it's easier and more likely that we attempt to fix a few web browsers than expect millions of websites to change. You might call it fatalism, but I think of it as realism.

>> It's not like users can be relied on to do the right thing all the time anyway.

> So, if you aren't sure you locked the front door of your house, then it also doesn't matter if you left the door to the garden open? Clearly it does matter!

My point was not that security is useless because users are unreliable. My point was that it's better to keep the kids in the sandbox, than to have them play in a busy street. Security should be preserved in spite of users actions, not rely on them.


> Personally I think it's easier and more likely that we attempt to fix a few web browsers than expect millions of websites to change.

This is the same argument I criticized before. The world doesn't change completely overnight just because one changes their own actions, and that is fine. These millions of websites may never change, and that's what we'd have to live with.


And yet presumably you didn't get pwned in the process of writing that comment. It's true that securing the execution of untrusted code is hard. But it's not impossible.


Commenting here works just fine without JavaScript.


Yes, but at the cost of:

1. Increased complexity on the server side to handle both cases.

2. Degraded experience on the client side (page reloading for simple actions like upvoting).


Then let's standardize generic ways that permit things like upvotes without reloading the page and without executing externally supplied code. :-)


You can do this already with iframes, link targets and :visited for css classes. Using javascript is unnecessary. This technique used to be popular before XMLHttpRequest.


Thanks for providing a solution.

If using JavaScript to achieve this is easier/more efficient/better in some other way that is convincing, then we can think about standardizing better ways to do it. And if it's not better using JavaScript, then it's just an issue of raising awareness about not using JavaScript for it.


Embedding hundreds of iframes into existing pages isn't a viable solution.

Consider that a single reddit comment requires at least: upvote, downvote, save, report, hide. Not to mention actually commenting, expanding/collapsing threads, or gilding comments.

A thread shows 200 comments by default (500 with Gold). Even 200 comments x 5 features requires a thousand iframes, each of which is embedding a complete document.

If you think web performance is bad now...


> Embedding hundreds of iframes into existing pages isn't a viable solution.

I know I'm repeating myself, but then let's standardize a better way to solve the problem without executing supplied code. JavaScript is not a magic bullet that solves problems that aren't solvable otherwise.


You'll have to come up with proof of a better alternative before you'll get agreement from people like me. Javascript and sandboxed executable code may not be a magic bullet, but it's the bullet we have.

"Lets standardize a better way" doesn't give much to talk about if I don't buy into the issues you have with Javascript and executable code.

If you could give a compelling example of a better way that gives the same freedom I would love to hear it. The alternatives I know of all sacrifice something (speed, interactivity, flexibility, development cost) in the name of getting rid of something it seems most people don't have a fundamental problem with (namely sandboxed executable code).


Oh, I don't want to give all the freedom that JavaScript in the browser gives, and that's the whole point. If we came up with something that allowed the same things, then that would be a pointless exercise. There will be cuts in aspects, some necessary, some intentional.

I don't advocate for a better way to have apps on the web, but to have a better app-less web.

I'm a bit tired of talking about this topic right now. I'm also repeating myself. I talked a lot about JavaScript on HN recently, so you can read more there, and the surrounding comments have lots of other perspectives. Not everything has been said. Maybe I'll write up a structured and hopefully complete blog post about it some day soon.


I will certainly agree that requiring JavaScript unnecessarily should be frowned on and progressive enhancement should be the norm. At least give users the option of taking on the additional security risks associated with Javascript (and WebAssembly) or not (edit: where possible).


Agreed to a point. For some apps user interaction is required, and progressive enhancement isn't possible.

For non-apps though, it's still very important. Articles should not require a JS bundle to load. When JS is used, it should be optimized.


Unfortunately the web seems to be moving away from a progressive enhancement world.

Momentum seems to be with whatever is easiest. JavaScript frameworks, analytics companies, and ad companies would probably all have to make progressive enhancement the default way for that to change.


Let's stick with the reddit example. The up/down vote links could be standard anchors with additional tags <a href="upvote" reload="comment#564534">^</a>. Inline posting could be similar.

It doesn't take much to do 99% of what we currently do with javascript using server side rendering; we used hacks to do it before javascript, so I'm sure we could design something even better. It doesn't sacrifice anything.


Online forums were already possible without JavaScript.

I would even rather read HN via NNTP, if given the option.


> I would even rather read HN via NNTP, if given the option.

https://github.com/gromnitsky/hackernews2nntp


Interesting, thanks for the heads up.


That comment could've been written without executing any Javascript.

// Edit: heh, someone already said the same.


I've whitelisted JS from this site, because I "trust" it enough. If HN sold their comment code through some opaque web of third parties, I would be more cautious.

You're clearly an insider, and I'm clearly an outsider, so you understand the nuances here better than I do. What's your concept of securely executing untrusted code? Should users give up control over what runs on their machines? Should they know whose code may run before and/or after it does? Or what data it can or has already exfiltrated?


One of the most brilliant features in the latest versions of Safari is per-website settings for ad blockers, notifications, location, etc. Between those two vulnerabilities and the generally obnoxious usage of JS on websites, I’d love to see the addition of a per-website setting for JS. I would personally turn it off by default and only whitelist a handful of websites.


similar to noscript, there’s “JS Blocker” for Safari. it’s truly an excellent extension; significantly more powerful than noscript because it allows hugely customisable rules for loading only some scripts on some pages (eg https and *.cloudfront.com when loading console.aws.amazon.com), rules about canvas usage, XHR requests and a whole lot of other things. it’s honestly the only reason i haven’t moved back to firefox, because by comparison noscript is just SO awful!

http://jsblocker.toggleable.com/home


Don’t you need to audit the extension source, and verify it matches the build (or build it yourself), to make sure it’s not exfiltrating data from visited web sites?

I’ve been reluctant to install extensions, because I’d have to extend my trust set from the OS and Browser makers, to the extension author and the integrity of whatever system they use for their build.


uMatrix, for everyone else.

It's a bit fiddly, but you can set moderately sane defaults. Well, sort of. If websites themselves were far more sane, this would be far less of a problem.


I'm trying out the Brave browser at the moment, and it does that pretty well. Though for any site that doesn't just work in Brave after turning on scripts for the page, I still tend to use Firefox or Opera rather than enabling more things in Brave.


This is what I already get with FF + NoScript.

And yes, I am bored by sites that I visit for the first time that will not show me any meaningful content until I enable layer upon layer of JS. Just not necessary, nor is it safe.


Sometimes you can open the developer tools and hack the CSS, because some of those sites hide content with a display: none unless some JavaScript removes it. If it's a site you often navigate to, there are addons like Stylus that can make those changes permanent.


Same here (with noscript as well). Although I'll say it's hard to blame web developers for not considering the case where JS is disabled beyond a simple "you need Javascript to see this page" message (though even that tends to be the exception, not the rule), because noscript/umatrix users are probably in the low, low minority.


I'm really curious how SharedArrayBuffer can be used to create high-precision timers. I'm also wondering what's now broken due to SharedArrayBuffer being removed.


From page 10-11 of [1]:

> A shared resource provides a way to build a real counting thread with a negligible overhead compared to a message passing approach. This already raised concerns with respect to the creation of a high-resolution clock [19]. In this method, one worker continuously increments the value of the buffer without checking for any events on the event queue. The main thread simply reads the current value from the shared buffer and uses it as a high-resolution timestamp.

[1] https://gruss.cc/files/fantastictimers.pdf


My understanding is: Have a web workers continuously increase a number, and save it to the SharedArrayBuffer. As long as the web worker is running, you have a steadily (enough) increasing timer that can be accessed by any other thread. It's not as precise, so relying on it makes things harder, but still usable.
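
Roughly like this (an illustrative, untested sketch; the worker file name and `doSomething` are made up):

    // main thread: share a 4-byte counter with a worker that spins on it
    const sab = new SharedArrayBuffer(4);
    const ticks = new Uint32Array(sab);
    const worker = new Worker('counter-worker.js');
    worker.postMessage(sab);

    // counter-worker.js would just be:
    //   onmessage = (e) => {
    //     const t = new Uint32Array(e.data);
    //     while (true) Atomics.add(t, 0, 1);   // never yields to the event loop
    //   };

    // back on the main thread, the counter value acts as the timestamp
    const t0 = Atomics.load(ticks, 0);
    doSomething();
    const elapsedTicks = Atomics.load(ticks, 0) - t0;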


By running a worker thread in a counting loop.


Good work by the WebKit team in putting this together; it's just as good as Raspberry Pi's note on why it isn't affected by Spectre & Meltdown - https://www.raspberrypi.org/blog/why-raspberry-pi-isnt-vulne...

Of course, here the mitigations are being detailed as well.


I'm starting to think naming multiple similar vulnerabilities by different names was a mistake. Every article I read, I have to go back and remind myself which one was which.


Giving them similar names certainly wouldn't make it any easier to remember which one is which.


With current CPU design there is only so much that can be done, but the architectural fix seems obvious to me, one that doesn't give up the performance benefits of speculative execution and branch prediction.

The data in caches, the TLB, and the state tracked by branch prediction must be segregated by the protected mode bit. That isn't sufficient to solve all the problems, but it seems like it solves a number of them.


The problem that JSC is dealing with is that Spectre allows untrusted JS code to read things from the memory of the process that runs it. We don't want to allow that, but I don't think your proposed solution would fix it.


If browsers are careful not to put any sensitive information in the same process that is executing JS code, sandboxing could work.

This seems a sensible thing anyway as it would mitigate other attacks as well. Allocating one process per JS-using webpage is expensive though.


Chrome is working towards something like this with Site Isolation, and it’s a good idea. Unfortunately it’s not a complete defense.

First, web pages can load cross origin resources, and that may be enough to get data or a cookie into the attacker’s web process. Second, some risks of this attack (e.g. ASLR bypass) don’t require any data from another origin to be in process to be dangerous.


> web pages can load cross origin resources

I know nothing about web technologies, but maybe this is something we should stop doing, at least for any executable resource? This would prevent JS ads I guess, so win/win?

> Second, some risks of this attack (e.g. ASLR bypass) don’t require any data from another origin to be in process to be dangerous.

yes, ASLR seems to be busted.


> > web pages can load cross origin resources

> I know nothing about web technologies, but maybe this is something we should stop doing, at least for any executable resource? This would prevent JS ads I guess, so win/win?

It's arguably a flaw in the design of the web that loading cross-origin resources is allowed by default. Unfortunately, there isn't a great path to changing this. We may be able to allow websites to opt out of having their resources loaded cross-origin, maybe (similar to X-Frame-Options but for resource types other than frames).


What JSC is doing is addressing a problem that exists in today's CPUs. I'm saying what future CPUs need to do to avoid certain classes of sidechannel attacks. That list isn't sufficient, but if it isn't done it seems like inviting other future attacks.


Aren't Intel and AMD shipping new fence instructions that prevent speculative execution from progressing beyond a certain point? Why doesn't WebKit use those?


The fence instruction that Intel recommends (lfence) is way slower than the techniques described here. We measured a 5x slowdown on Web Assembly trying to use it.

Also we have been working on these mitigations since well before Intel made their suggestion.


Have you considered arr[min(i, arr.len - 1)] instead of AND?


Min() is not a CPU instruction. It's an abstraction, a function that could be implemented either using a branch (which defeats the whole purpose) or, on some architectures, with something like cmovXX.

Perhaps they wanted a solution that works on Arm, which I think doesn't have cmovXX. Or maybe Intel does speculation with cmovXX used on array index.


Cmov can speculate as well (it makes sense from a performance standpoint but that isn’t the same as secure ;) )


On some CPUs, cmov is known not to speculate, but in those cases it is apparently super slow.


These mitigations are way cheaper than fences.

For example, for WebAssembly, we measured that the fence instruction was a 5x regression. That’s nuts. Index masking is a rounding error by comparison.


I suspect fencing instructions are quite a bit slower than this, involve more work and don't even cover every CPU.

Branchless bounds checking and type checking with masking and pointer poisoning is probably what we are going to see in other languages too.


I don't understand why they say Spectre can control branching in WebKit. Spectre is an information leak attack; it doesn't allow modifying memory. It could allow finding x in `if x == valueToCheck`. But if this is possible, it was a security issue even before Spectre, just harder to exploit, and Javascript code should not be allowed to control `x`.


I think this part is misleading =>

"Spectre means that an attacker can control branches, so branches alone are no longer adequate for enforcing security properties."

I think they meant "Spectre means that an attacker can ABUSE branches", and in that they are right.


This is clarified later: “Spectre means that branches are no longer sufficient for enforcing the security properties of read operations in WebKit.“

It’s totally true that Spectre allows attackers to control reads, but when they do this, they enter a non-destructive execution mode. They can read but anything they write is thrown away. (To our knowledge, lol.)


Thank you for clarifying, that's what I thought. Still, I find that the article is not clear enough on this point; I feel that some people will read this as "OMG they can control execution remotely, this is the apocalypse". I mean, to an extent yes, but the results are dropped like you said and the main execution path shouldn't be affected. It only facilitates information leaks AFAIK.


What is the reasoning behind reusing version numbers?


Apple does this sometimes for macOS. It's probably because it's an application update, not an OS update (so OS patch version does not change), and doesn't require a reboot.


This doesn’t explain why they kept Safari at the same version number.


Oh you're right. Specifically, they kept the CFBundleShortVersionString the same, but updated the CFBundleVersion. My guess is that updating CFBundleShortVersionString is a marketing decision. Maybe Apple is waiting till all the WebKit fixes related to Spectre/Meltdown are available before updating it?


Also various Apple systems define CFBundleShortVersionString to be of the format A.B.C, no fourth element allowed. macOS versioning is still stuck with 10 as the A, B identifies the major release and C is the point release that is effectively also marketing driven. With iOS the first number is used for the major releases so C is available for bug fixes.


From the patches it seems that the write operation to the array is also protected with masking. What is the reason for that? If, due to a bug unrelated to Spectre, JS code could trigger a write beyond the allocated memory, restricting writes to the next power of two only slightly complicates the attacks. This is very different from the reading situation.


The masking is done in addition to bounds checks, not instead of, so no it isn’t adding a mechanism to write out of bounds - the masking is purely to limit the upper bounds of speculative load distance.

As for the attack: writing to memory pulls the affected page into the cache just as a read does.


It is not clear from the article how WebKit avoids changing semantics with array index masking. In JS out-of-bounds access should return undefined, not a random element of the array. To preserve that a branch still has to be made.


If you combine index masking with a branch, that should still be OK. For example, if you do `if (idx >= arrayLength) return undefined; else return array[idx & mask]` then the CPU can only predict "return undefined" or "array[idx & mask]", neither of which can cause any harm.
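
Spelled out a little (an illustrative sketch only; the real masking happens inside the engine, not in JS source, and the helper names are made up):

    // Mask derived from the length rounded up to the next power of two.
    function maskFor(length) {
      let m = 1;
      while (m < length) m <<= 1;
      return m - 1;
    }

    function getElement(array, index) {
      if (index >= array.length) return undefined;   // the semantics JS requires
      return array[index & maskFor(array.length)];   // what speculation is limited to
    }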


Does that imply that WebKit always allocates by powers of two, and that a script cannot read the memory of unrelated allocations between the array length and the nearest 2^n?


No, it means attacks are still possible in the memory region just after the buffer, when it's smaller than the next power of two. Not ideal but it's better than nothing.


> All type checks are also vulnerable. For example, if some type contains an integer at offset 8 while another type contains a pointer at offset 8, then an attacker could use Spectre to bypass the type check that is supposed to ensure that you can’t use the integer to craft an arbitrary pointer.

So, even if WebKit were entirely written in Rust, we'd still be fucked. I'm really starting to think it's about time for me to get out of this business.


I'm actually not sure what that passage is trying to imply at all, I've not seen any prior remarks that Spectre has that sort of implication. AIUI, Spectre lets you read arbitrary memory, not write it, so I'm not sure what they mean by "bypass the type check" and "craft arbitrary pointer". Here's another quote later on:

> But Spectre could theoretically involve any branch that enforces security properties, like the branches used for type checks in JavaScriptCore.

I get the impression this is all referring to JIT'd JS, not to the C++ (or Rust etc.) code powering the underlying engine (where all typechecking is done at compile-time anyway). Of course having the JS engine written in Rust would still allow the same, but it's been known for a long time that Rust's typesystem can't do squat to verify the correctness of dynamically-generated code; it's part of the reason why Mozilla only bothered to kick off a rewrite of Gecko and not SpiderMonkey.


It's common for C++ code to do type checking dynamically. In WebKit we do this by rolling the type checks ourselves. With RTTI, you can do dynamic type checks using the built-in dynamic_cast primitive. Our changes, particularly the ones having to do with pointer poisoning, are meant to protect C++ code that does dynamic type checks in addition to JIT'd JS code that does dynamic type checks.

Rust's `match` statement is a dynamic type check just like C++'s `dynamic_cast`. I bet you it's implemented using branches.


> Rust's `match` statement is a dynamic type check just like C++'s `dynamic_cast`.

I wonder if this is a case of a difference in terminology, because enum variants (the branches in `match` expressions) aren't considered types in Rust, and have no interaction with Rust's typechecker (Rust always has to assume that every instance of an enum can be any variant, even in cases where we as the programmer know that only one variant is possible).

At the same time I'm still unclear on what the OP is trying to imply. To reiterate, AIUI Spectre can only read, not write, privileged memory, so it's impossible for a Spectre attack to trick the engine into believing that (to use Rust terminology) an instance of an enum is a variant that it isn't. Am I incorrect?


The idea is to read random things as follows:

1) Write a function that reads the "nth" value out of an array in JS.

2) Call the function a bunch of times with JS arrays.

3) Pass an integer to that function for the "array" value. Normally the VM's internal type check would catch this and bail to a slow path. But before it does, the CPU might speculate the check as "it's going to be an Array, like the previous 1000 times" and end up doing an "nth value" read out of a memory address that you fully control (by changing which integer you pass).

That is, you can use speculative type confusion in the VM to allow precise control over what memory addresses get accessed and how for your timing attacks on the cache.
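
In code, the training pattern looks something like this (illustrative only; by itself it just sets up the misprediction, and the actual leak still needs a cache-timing side channel):

    function nth(arr, n) { return arr[n]; }   // JIT adds an internal "is arr an Array?" check

    const realArray = new Array(1024).fill(0);
    for (let i = 0; i < 100000; i++) {
      nth(realArray, 1);                      // train the check: "arr is always an Array"
    }

    // Architecturally this falls off the fast path harmlessly, but during
    // speculation the mispredicted type check may treat the integer bits as
    // an Array pointer, touching an attacker-chosen address and leaving a
    // trace in the cache.
    nth(0x12345678, 1);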


Hypothetical example pseudocode:

    if is_pointer(pt):
        // do pointer-based stuff
    else:
        raise error
If you train the branch predictor to expect a pointer, it will speculatively treat arbitrary values as pointers until it can determine that they are not. So you can pass in any value and get it treated like a pointer for the duration of the window of speculative execution.

Any conditional branch is potentially vulnerable, an attacker just needs some sort of side effect from speculative execution that persists after rollback.


Thanks for the reply, I think I've got it now: the ultimate attack is still to read arbitrary memory, and the part about defeating typechecking and crafting a pointer is intended to expand the range of memory that can be read.


Serious question, is it even possible to have something like ‘match’ or ‘dynamic_cast’ in a programming language without using branches under the hood?

I’m not talking about research/example microprocessors that don’t have equivalent instructions. I mean isn’t branching a fundamental logical construct and we either do it in hardware with branch assembly operations or emulate functional equivalents of them in the software that runs on the microprocessor some other way? Or is there some clever/old logical/mathematical “alternative” to branches?


Consider something like this:

    struct __internal_variable {
        uint64_t type;
        void *data;
    };
    
    uint64_t __last_type = [number of builtin types];
    
    // whenever you create a new type: increment __last_type and associate that type with the number

    uint64_t typeof(__internal_variable var) {
        return var.type;
    }
    
    function functions[__last_type];                            // one slot per type id
    functions[] = void function(uint64_t type) { failure; }     // every slot defaults to failure
    functions[int] = void function(uint64_t type) { success; }  // the one type we accept

    functions[typeof(myvar)]()
If typeof(myvar) is int, then the function that returns success will be called, otherwise failure, no branches involved! Yes, it's a kludge, but it kind of works.

---

Of course, you don't actually need this trick. If you put enough effort into it, you can essentially compile anything to C, after which you can just compile it with the movfuscator [1].

1: https://github.com/xoreaxeaxeax/movfuscator


That's basically a jump table. Still vulnerable to spectre (variant 2) without mitigations, btw.


It is even worse than switch/match statements. The jump tables can be used to train the CPU to jump to a wider variety of addresses than switch/match statements can, so mounting a speculative execution attack is simpler.


True. Then again, switch statements (and I assume match statements in languages that have them) are often compiled down to a jump table...


The problem isn't the branches, it's the speculative execution.


Yes, it's already implemented as a PoC; just look at the movfuscator.


AIUI you create a pointer by issuing an out-of-bounds (OoB) array access. The check against this is a branch statement followed by the array access. However, the CPU starts executing the OoB access while the check is happening, and the memory at the pointer gets loaded into the cache.

This is an architectural bug, so any security check that relies on simple branching is potentially vulnerable.


Something I keep coming back to in all this is that maybe we should be surprised that it’s even possible to share a CPU between mutually untrusted programs, let alone do it in so many contexts.

How do we stop this entire class of bugs? What is the Rust for CPU design?


CPUs tend to have multiple cores these days; would it be possible to assign (in software) all kernel-space work to one or more dedicated cores to mitigate some of the risks?


Yes, well, close.

> Since the 2010 Westmere microarchitecture Intel 64 processors also support 12-bit "process-context identifiers" (PCIDs), which allow retaining TLB entries for multiple linear-address spaces, with only those that match the current PCID being used for address translation.[19][20]

https://en.wikipedia.org/wiki/Translation_lookaside_buffer#P...

HN discussion: https://news.ycombinator.com/item?id=16094349


That's just a Meltdown workaround.

Spectre is more relevant in this context.


With a shared cache.


Some of the newer Intel server CPUs allow statically partitioning the shared L3 cache between cores. It might (or not) work as a way to reduce side channel communications between cores.


I always liked the https://en.wikipedia.org/wiki/J%E2%80%93Machine idea: gobs of processors each with a small private memory (nothing shared), in a fast mesh network. It fits with physics: accessing local memory can be fast, while distant memory physically just can't. But even if it's really all that, it's just so different from the architectures we've put so much work into or on top of.


That sounds like a more extreme version of what the Cell[1] was supposed to be. Developers seemed to think that developing for that architecture was really painful, especially compared to other contemporary console platforms. I'd love it if the progress being made in the industry got us to actually being able to exploit the full potential of such architectures.

[1]: https://en.wikipedia.org/wiki/Cell_(microprocessor)


It was painful - mostly because we don't have good tools for that sort of thing and most of our programming sort of hides the idea of multiple CPUs and pretends it's all working at once (in terms of us writing the code anyhow).

It's all abstracted in a very linear way.

Changing all that will take a lot of time if we do.


Actually, multiple CPUs are not normally hidden in mainstream programming languages [1]: i.e. threads are visible. The fiction that is maintained by the programming languages and hardware is that of a single, coherent address space, which the Cell very much did not have.

[1] outside of parallel iteration/folding constructs.


Yeah. Erlang seems to show that there is a path from here to there, but the Erlang-ish world is tiny relative to the whole industry.


> I'd love it if the progress being made in the industry got us to actually being able to exploit the full potential of such architectures.

If they had ever released it as a general purpose computer instead of a locked down game console people could've experimented with it enough to unlock its full potential.

If they were smart they'd dust off the old chips, throw them on a Raspberry Pi-type system, and laugh all the way to the bank.


There were PCI-Express cards with Cells on them, for numbercrunching or something. These even used the "good" Cells with full 64-bit floating point numbers needed for scientific workloads.

I vaguely remember that the price tag I saw was so outrageous I can understand why that didn't go anywhere.

Would have been interesting, though, I agree.


> What is the Rust for CPU design?

Intel iAPX 432 could have been it, but they did a lousy job implementing it.

https://en.wikipedia.org/wiki/Intel_iAPX_432


Yes, we need a many-core CPU. Then every program can run on its own core, and we can use MPI, Cilk, etc. to run parallel programs. Each CPU could run its own small operating system, or use a microkernel OS.

Writing algorithms in a parallel manner isn't as complicated as people make you believe; a lot can be rewritten based on cookbooks/patterns.

Intel developed a many-core CPU (72 CPU cores) called Xeon Phi: https://en.wikipedia.org/wiki/Xeon_Phi

Unfortunately they drifted off target (instead of making it the successor to x86-64).

For Javascript engines, it would help if they offered an optional fallback "Javascript interpreter" instead of a JIT. A JIT is basically insecure at the moment, especially when running untrusted code. The same goes for WebAssembly: let the user deactivate execution/support of it. Please allow end users to enable an alternative JS interpreter for high-security things like online banking.


Well, the Meltdown mitigation involves unmapping as much of the kernel as possible when switching modes. It's so much overhead that I wonder if microkernel messaging and process context switching is cheaper and faster.


Interpreters generally do a lot of branches, so they are also vulnerable to Spectre.


Right, programming-language-based security mechanisms that rely on emitting runtime checks are broken and you need something like index masking or pointer poisoning to fix them. Rust could use those fixes as well.


Why does Rust need those? Those are things you need when running untrusted code in a sandbox. Is there a scenario where Rust is used that way?

There's WASM, but in that case the extra checks need to be applied at the WASM level, not the Rust level.


Interestingly, you could apply these mitigations in Rust itself: array index masking where Rust has array bounds checks, and pointer obfuscation where you check the variant tags of algebraic types. Some extra work would be required for unsafe Rust code. That would automatically protect almost all Rust code, whereas most C and C++ code would have to be handled manually.


Note that the article talks about type checks for the JavaScript code that is being executed by WebKit, not type checks in the implementation of WebKit itself. So there are two different kinds of type checks involved here.


We are using pointer poisoning to protect type checks in WebKit itself as well as type checks in the JavaScript code we execute.

We are using index masking in WTF::Vector and WTF::StringImpl, which are used both for JavaScript execution and for lots of other things in WebKit that want a vector or a string.


Doesn't Rust do all its type checking at compile time, not run-time?

(Not to say that a correctly-written Rust program wouldn't be vulnerable to Spectre for other reasons, of course.)


If I'm reading this correctly, the issue isn't with the types of the language you wrote the Javascript engine in, but with Javascript's types. All major Javascript engines are written in C++, which also does all its type checking at compile-time.


Not quite. Our C++ code often does static_cast<> as a downcast based on testing some condition. That's a run-time type check. This isn't solely about those type checks that were implemented using some built-into-the-language type checking mechanism. Also, C++ does have a dynamic type check primitive (dynamic_cast).

We are mitigating these branch-based type checks with pointer poisoning, and I don't think that those changes are biased in favor of C++ or JS, since both are vulnerable.


Ah yes, that makes sense.


Rust has algebraic types, so Spectre could be used to train the branch predictor to take one type case of your match statement, and then have that case taken when the enum is actually in a different shape. That's a violation of the type system.


> I'm really starting to think it's about time for me to get out of this business.

Maybe it is time to move to the IT security business?


A lot of game-related media has reported that a lot of games are largely unaffected by Spectre and Meltdown. My impression is that the most common result is "about 3%." (With some notable exceptions, however.)

Does this bode well for Apple code, which seems to tend to pull in innovations from games into the general UI?


The performance issues come from security mitigations when entering/exiting the kernel during syscalls. So I/O heavy workloads such as databases will suffer, whereas heavy compute-oriented tasks such as games should be mostly OK. This is expected. There is nothing any company can do to "take advantage" of the fact that games are cpu/gpu bound in order to mitigate new performance issues in e.g. databases.


Exactly. Though it may encourage different performance best practices now that syscalls are considered more expensive.



