So you're afraid of coming home to find someone hiding in the house? How often does that actually happen, compared to homes secured differently? It's not like most houses have impossible-to-pick locks...
Realistically, tens of millions at most. Assuming the Bloomberg campaign hires 200 people for this strategy (the article says "hundreds"), it would take 167 years for him to spend $1 billion on it.
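Rough back-of-the-envelope to show where a figure like that comes from (the ~$30k per hire per year is my assumption, not a number from the article):

```ts
// Back-of-the-envelope check. The per-hire cost is an assumed figure,
// not one taken from the article.
const hires = 200;
const costPerHirePerYear = 30_000; // USD, assumed
const annualSpend = hires * costPerHirePerYear; // $6,000,000 per year
const yearsToSpendABillion = 1_000_000_000 / annualSpend; // ~167 years
console.log({ annualSpend, yearsToSpendABillion });
```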
Are you suggesting that when a new drug hits the market, doctors should just try it out on a few patients to see if it actually does what it says it does? If only there were some way this sort of trial and error could be performed before a drug was widely available, in a controlled environment...
I'm familiar with the off-label discussion, but how many of these off-label uses were discovered by treating coinciding conditions in patients? How many were last-ditch attempts when other treatment options either didn't exist or had failed?
I'm not suggesting that current regulation by the FDA isn't flawed, or that it doesn't need an overhaul, but I think it's clear that there should be some proof that efficacy studies were done before a drug becomes widely available, especially when the potential damage done by a failed treatment is so high. The current opioid crisis has completely eroded any faith that the pharmaceutical industry is able or willing to regulate itself.
This has to be some of the thinnest gruel I've read in a while. The entire premise of the article is that the "dark side" of WebAssembly is that "security" products can't do string matching against compiled code.
Case 1: People can write scams that "security" products can't block because WebAssembly somewhat obfuscates the code. Scanning WASM in a "security" product is like opening an executable in a text editor; the comparison is laughable.
Case 2: People can write website keyloggers in WASM and they will be obfuscated against "security" products. Alternatively, the bad guys could just obfuscate plain old JavaScript, or use any number of other techniques to exfiltrate data. If people are executing malicious WASM on your website, you're already owned.
The only one of their points that has any merit is that WASM implementations increase the attack surface of the browser. This is admittedly true, but the same can be said of every new feature. Fortunately the major browser vendors have competent engineers dedicated to testing their software for vulnerabilities.
I would also argue that security should not depend on your ability to introspect into code, because whether code is treating data according to the user's expectations isn't something that's going to be statically analyzable.
The interface which consumes the code should be safe.
Furthermore, the behaviour of code in almost any programming language is going to be pretty much impossible to analyse. There are some fancy theorems about undecidability, but really it comes down to the fact that you can always ship an interpreter that reads obfuscated bytecode alongside the code being analysed.
Forget not being able to use string matching: there is no program capable of predicting the behaviour of another program without effectively running it in a sandbox.
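As a toy illustration of that point (entirely my own sketch, nothing from the article): a string scanner looking at this source only ever sees a harmless loop; what the code actually does is determined by a blob of data that could arrive from anywhere at runtime.

```ts
// Toy "VM": the real program is just data. A static scanner sees only this
// innocuous loop; the behaviour depends entirely on the bytes fed to it.
type Op = number;

function run(program: Op[]): number {
  const stack: number[] = [];
  for (let pc = 0; pc < program.length; pc++) {
    const op = program[pc];
    if (op === 0) {
      stack.push(program[++pc]); // PUSH <immediate value>
    } else if (op === 1) {
      stack.push(stack.pop()! + stack.pop()!); // ADD
    }
    // A real obfuscator would define many more opcodes and would likely
    // fetch the "program" from a server at runtime.
  }
  return stack.pop() ?? 0;
}

console.log(run([0, 2, 0, 3, 1])); // 5
```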
> The only one of their points that has any merit is that WASM implementations increase the attack surface of the browser. This is admittedly true, but the same can be said of every new feature.
If things go according to what seems to be the plan, and the JS interpreter is replaced by one that compiles to WASM, it will significantly reduce the attack surface. The WASM VM is much simpler than a JS interpreter.
But of course, that's a long way in the future. Until then, you are correct.
You're definitely right that string inspection of JavaScript is not really a thing; I'm not aware of it being done, and it wouldn't be reliable unless you had something more powerful than a regex.
I will say, though, that I have seen products out there today that work by rendering text as pixels on a <canvas>, so accessibility (for anyone who can't read it with their eyes), searchability, discoverability, etc. are nonexistent.
This is something WASM could inadvertently encourage a higher percentage of products to do, for whatever purpose. It's the same thing people did with Flash.
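For anyone who hasn't run into it, the pattern looks roughly like this (a minimal sketch, no particular product implied). The text ends up as pixels only; there's nothing in the DOM for a screen reader, find-in-page, or a crawler to latch onto.

```ts
// Minimal sketch of the canvas-as-text pattern: the string below exists only
// as pixels, with no DOM text node behind it.
const canvas = document.createElement("canvas");
canvas.width = 400;
canvas.height = 50;
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d")!;
ctx.font = "16px sans-serif";
ctx.fillText("This sentence is invisible to assistive tech and Ctrl-F.", 10, 30);
```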
Yeah, I expected it to say WASM somehow makes browsers intrinsically more vulnerable to exploits despite the sandboxing. But this is definitely reaching.
I disagree, because obfuscated JavaScript would already be suspicious; it is not that widespread as an industry practice. The standard industry practice for JS is minification, not obfuscation, so you can still see what functions are called, and any developer can quickly identify a suspicious-looking AJAX request to a strange URL.
With WASM, you can't see anything. It's a complete black box. It's a lot easier for a hacker to hide stuff from users. It's easier to sneak in malicious code and it's harder to identify and remove it.
Minification is a form of obfuscation, and a lot of libraries and commonly used scripts are purposely completely unreadable. show_ads.js, for example, is nonsense, so it's not unreasonable to imagine a hacker sneaking into ad code and inserting some dubious lines that do more or less anything they'd want. There are easy ways to mitigate this, for sure, but it doesn't seem that WASM makes this point any worse. Surely you could just as easily spot an AJAX request to a strange URL while utilising WASM too?
> With WASM, you can't see anything. It's a complete black box.
That isn't a problem in itself, as long as the only mechanisms for the black box to interact with anything else are well-defined and properly secured.
Put another way, I shouldn't need to decompile arbitrary WASM code downloaded by some site I visit, as long as it's only allowed to do anything through APIs that I'm happy with.
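To make that concrete, here's a minimal sketch (the module path and import names are made up): a WASM module can only reach the outside world through whatever you hand it in the import object, plus the JavaScript that calls its exports.

```ts
// Minimal sketch; "/untrusted-module.wasm", "env.log" and "main" are
// hypothetical. The module's only capabilities are what we pass in here.
const imports = {
  env: {
    // Expose logging and nothing else -- no fetch, no DOM, no cookies.
    log: (value: number) => console.log("module says:", value),
  },
};

const { instance } = await WebAssembly.instantiateStreaming(
  fetch("/untrusted-module.wasm"),
  imports
);

// Whatever the module does internally, any network request or DOM access
// still has to cross a JavaScript/Web API boundary that we control.
(instance.exports.main as () => void)();
```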
Like perhaps a gauge that could indicate how much fuel is remaining in your tank? They could even include a little light next to the gauge that comes on when you're really low.
> A loss leader doesn't need to be sold below cost, only below its minimum profit margin.
I think that's an important point, in that you're saying that it's incorrect to consider only the cost of ingredients, rather than the entire cost of delivery (including cooking, labor, etc.).
On the other hand, as the parent pointed out, they're not a restaurant, so their "minimum profit margin" for this particular activity is zero.
As such, they can afford to do it, at whatever volume, indefinitely.
> Leaving money on the table is effectively losing money.
This is the part that's different, since it has to do with what the market will bear and not with what it costs them.
The distinction is important because a business can, by this definition, "effectively lose money" while still making a profit. That's good enough for some people.
It's not "effectively" losing money. It's losing imaginary money.
If you want to see an actual loss leader, go to a grocery store in mid-November. They'll sell you a frozen turkey for (depending on competition) 20 to 50 cents a pound. Wholesale cost will be 80 cents to $1.20 a pound. They take an actual loss on the turkey in order to get you in the store, where you will buy three times your usual weekly grocery budget in special holiday foods -- all of which are marked up more than they will be right after Thanksgiving.
If selling at less than the market is willing to pay but more than what covers your total costs is a "loss leader", then every sale to attract customers is a "loss leader" and the term has no useful meaning.
Except, as others have pointed out, there are documented cases of ISPs hijacking DNS traffic, even for people who have configured their client to use resolvers other than their ISP's, which is possible because of DNS's lack of authentication or encryption.
Besides, I don't see how adding an option for DoH to Firefox is centralizing anything: you're free to set the DoH URL to whatever you like, and you're free to run your own DoH resolver, just like you're free to run your own vanilla DNS resolver.
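As a sketch of how little is centralized in practice, here's a DoH lookup against Cloudflare's public endpoint using its JSON response format; the resolver URL is just a string you can point at any DoH server, including one you host yourself. (In Firefox the provider URL is likewise just a setting you can change.)

```ts
// Rough sketch of a DoH query using the JSON format. The resolver URL is a
// parameter -- swap in Google, Quad9, or your own DoH server.
const resolver = "https://cloudflare-dns.com/dns-query";

async function resolveA(name: string): Promise<string[]> {
  const url = `${resolver}?name=${encodeURIComponent(name)}&type=A`;
  const res = await fetch(url, { headers: { accept: "application/dns-json" } });
  const body = await res.json();
  // "Answer" holds the resource records in the DNS JSON response format.
  return (body.Answer ?? []).map((a: { data: string }) => a.data);
}

resolveA("example.com").then((ips) => console.log(ips));
```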