My theory on why they didn't elaborate on U1 / announce the (rumored) object tracking tags: they need better U1 chip penetration geographically before making any promises. Once a bunch of iPhone 11s are out there, they can further test the new product in a more realistic setting and make adjustments if necessary. There could be some critical flaw that needs reengineering, and they won't want a repeat of AirPower.
For instance, if they released the new object tracking tag alongside the iPhone 11, it would be a bad user experience because U1 isn't widespread yet. Better to wait until U1 saturates geographically to their standards before releasing it.
They can still talk about U1 in regards to e.g. new AirDrop capabilities, and in other marketing-level generalities.
The Xcode GM points towards an Apple AR device. I think there could have been a "One more thing" in the keynote, but it was pulled at the last minute along with all the U1 announcements. They simply don't want another AirPower to happen.
And it might have something to do with whether the software is ready for it as well: iOS 13.1 is coming 10-14 days after the release of iOS 13.0, with quite a lot of features promised at WWDC still missing. When was the last time we had something like this?
And maybe they decided to save it for the October event. After all, we are still missing the new MacBook Pro, new iPad Pro, the Mac Pro update, and maybe a new iMac Pro?
That is actually a really good thought. I could see this being an additional feature in the new hardware; I'm guessing it's in the new Series 5 watch. Maybe this is a push for people to upgrade their computers as well as their phones.
There's an even more compelling reason why Apple's waiting a while before announcing the tracking tags: crowdsourcing.
At WWDC this year, Apple announced that all iOS devices would begin reporting nearby devices' locations back to Apple. U1 probably only works at relatively close range, so they want to wait until their "crowdsourcing network" includes enough devices for long-range tracking to work. (IIRC they aren't going to let users prevent their devices from contributing to the network, so it's just a matter of waiting until everyone has the latest iOS update.)
> all iOS devices would begin reporting nearby devices' locations back to Apple
Just to clarify for others - it's not that all iOS devices will report the location of all other devices, but rather that "lost" devices will emit a (BLE?) signal, and nearby Apple devices will report that signal's location back to Apple.
> Just to clarify for others - it's not that all iOS devices will report the location of all other devices, but rather that "lost" devices will emit a (BLE?) signal, and nearby Apple devices will report that signal's location back to Apple.
How would a lost device know it was lost? The Wired piece on this suggests that all devices are always broadcasting their location:
I'm trying to find sources, but it's eluding me at the moment. My understanding was that when a device is offline for long enough, it freaks out and starts emitting its key.
A rotating scheme that makes it impossible for the wrong actors to track you.
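A rotating identifier can be as simple as a keyed hash over a time slot. Below is a minimal illustrative sketch, not Apple's actual scheme (which reportedly rotates public keys); the function name, secret, and 15-minute slot length are all made up:

```python
import hmac
import hashlib

def beacon_id(secret: bytes, unix_time: int, slot_seconds: int = 900) -> str:
    """Derive a short, rotating beacon identifier from a device secret.

    Only a party holding `secret` (e.g. the owner's other devices) can
    link successive identifiers; an eavesdropper sees an unrelated-looking
    value every `slot_seconds`.
    """
    slot = unix_time // slot_seconds
    digest = hmac.new(secret, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return digest[:8].hex()

secret = b"owner-provisioned-secret"
# Same slot -> same ID; next slot -> a new, unlinkable ID.
print(beacon_id(secret, 1_000_000))
print(beacon_id(secret, 1_000_900))
```

Without the secret, linking two consecutive identifiers would require breaking the keyed hash, which is what frustrates tracking by third parties.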
It would be better if I could verify the code running on the device. You have to trust that Apple won’t track you. And hey, I trust Apple more than some unfriendly government, but I still have to trust Apple. If a big profitable unfriendly government asked them to track someone, what would Apple do?
> If a big profitable unfriendly government asked them to track someone, what would Apple do?
Presumably the exact same thing they'd have done if they hadn't rolled out this Find My feature. They designed it so Apple can't tell where your device is, which means if anyone wants to demand this info from Apple, then Apple has to implement that tracking separately, which they could do regardless of the Find My feature's existence.
If Apple had access to the device data themselves, then that's a huge problem because governments can reasonably start issuing warrants for that info. The fact that Apple doesn't have it means nothing has changed on the governmental front.
> The fact that Apple doesn't have it means nothing has changed on the governmental front.
You’re presuming that Apple does not “have” our location data. I am making the claim that Apple could gain our location data if Apple wanted to. I think we are making different arguments. I don’t know if Apple has anyone’s location data. But if Apple decided to selectively enable the collection of some people’s location data, the iPhone 11 would offer them increased precision which other iPhone 11 users would unknowingly assist in collecting.
Why would an actor use Apple as a target vector?
It is much easier to get location tracking through the mobile service providers. They have the location data in real time and can't deny that. It's done thousands of times a day all over the world, with and without a warrant. No vendor-specific phone and no activation of special functions needed.
Or any third parties that you agreed to broadcast location data. We know the geo-location + time dragnet warrants are already used very widely with Google.
> Why would an actor use Apple as a target vector?
I can only speculate. But if I wanted someone’s location without making some noise, this increased precision would be nice to have over carrier’s coarse location data. Your GPS position could localize you to a building, but your UWB position could perhaps localize you to a room. And actually if the law enforcement agency had a device capable of detecting yours based on its UWB signal, they could find you very quietly.
Using current generation technologies like OTDOA and similar, cell carriers can already localize you to a room in a building in many cases.
Verizon can get down to around 10 centimeter accuracy with their LTE-M network in the best case (I don't have a link to support that handy, otherwise I'd share).
I'm not really sure how the U1 data is particularly relevant to governments. They don't generally need to know the precise movements of your phone about a single room. GPS will already locate you to the building and frequently to the room itself.
Most people underestimate the usefulness of location data. You can use location data to compute accurate credit scores (better than what banks achieve), or to predict health status.
The article talks about how a device owner can opt in, but there's this:
> A nearby stranger's iPhone, with no interaction from its owner, will pick up the signal, check its own location, and encrypt that location data using the public key it picked up from the [stolen] laptop
It feels like this is the key to unlocking useful AR [1] for low powered, lightweight headsets. If the heavy lifting for locating and interacting with objects in 4D space is done with UWB, you don't need to do too much computational heavy-lifting - reducing bulk, increasing available space for batteries and reducing battery draw.
[1] Right now I can't really see a killer app for AR, something that would make you super-human enough to want a layer of abstraction between you and the real world. However, if I could remotely interact with objects through walls, like turning off a lamp in a bedroom or seeing where my wife is and opening up an audio chat with her so that we're not yelling to hear each other, all of a sudden the trade-off of having to charge and wear another device seems less annoying. Suddenly you have powers not unlike a sorcerer.
> Right now I can't really see a killer app for AR, something that would make you super-human enough to want a layer of abstraction between you and the real world.
For me the key uses of AR technology would mostly have to do with abstracting input and output (mostly) away from the actual physical devices involved, which would benefit from UWB for certain cases but really wouldn't hinge on it.
For a simple example: Sitting in an airline seat, and having a virtual 'big screen' to watch a movie - but having it anchored to a certain relative position, so I can still look left and talk to a cabin steward without removing AR glasses (and still have the film playing in my peripheral vision).
Another simple example would be providing a GPS 'HUD' when driving, similar in result to physical devices you can currently get that project onto windshields. UWB stuff might benefit here by providing a reference point to anchor virtual objects to (e.g. saving a position and orientation relative to the physical chip in the car).
Yes. And even what I'm doing right now -- sitting in front of a display, and typing on a keyboard. The display, keyboard and touchpad are great.
But sometimes it'd be pleasant to sit outside. Or useful to work while in transit. I can do all those, but they involve smaller screens, more cramped keyboards, and the hassle of dealing with gear.
If I don’t need the bulk of a laptop screen on a train/plane, a physical keyboard, possibly with processing integrated like current laptops, would be fine. In my experience there’s always space on the tray table for a keyboard, but once I angle the screen so I can see it, everything’s hanging off the edge and squashing me in.
Decades ago, I had a rather nice folding keyboard for an early Qualcomm "smartphone". It was mostly aluminum, and folded into a ~10 cm square, maybe 2 cm thick. Unfolded, it was about the size of a netbook keyboard. I would have kept it, but it was cabled for a docking stand, and I didn't want to attempt rewiring it.
The problem isn't so much the feedback, it's that there's a lack of a barrier to stop you from pushing further (making the maximum finger movement minimal) and to a lesser extent that there's no spring mechanism to push the key and your finger back up.
Killer app: ambient information, especially face recognition. Think "thought bubble" information appearing over anything you might want to know about. I'm lousy with names, would be nice to see names over everyone I should know.
The creepiness factor this could enable would be out of this world ... you could set this up to point out people who you’d swiped on Tinder, cam girls you were planning to harass, people who’d been doxxed and you wanted to confront, etc etc etc. Brave new world.
Face recognition is something I absolutely need. I can't remember people's names for the life of me. If I could train a set of AR glasses to recognize people I've met before and pull up their contact info on the fly, that would be immensely helpful.
One line of thought I've pondered here for scifi settings is people being able to share their currently playing music that way, like a tiny virtual radio station.
For my commute I set the radio to between 91.1 and 93.3. This range tends to be where the default for most Bluetooth and AUX radio broadcasters ends up being set.
Arguably an invasion of privacy, but it turns fellow commuters into DJs. I’ve discovered some good music this way.
Are you saying that if I turn on my phone's radio and set it between 91.1 and 93.3, I'd be able to hear what other people are listening to? Could you elaborate?
Nowadays, most new cars offer at least one way of connecting to an external playback device, via Bluetooth or wired auxiliary input or both. A decade ago, though, this was more of a premium feature. Back then, if you wanted to connect your music player or smart phone to your basic stereo, the simplest and least invasive way was a device that plugged into the headphone jack of your device (or alternatively connected to it via Bluetooth) and which broadcast a very low power FM signal. You would then tune your car stereo to the matching frequency, and your device's audio would be piped through the car's speakers. Usually the transmitter provided a way to choose from a list of frequencies so you could pick one not used by any local radio stations. In good conditions, if another car was driving down the road next to you, they could also tune to the same frequency and hear your playback.
Just the other day I was walking down a relatively quiet part of Amsterdam and some guy with a boombox taped to his bike 'shared his currently playing music with me'. It seems that is working just fine already.
Then to preserve some semblance of individual privacy, that would require each individual you want to display to freely opt in and out, whenever it suits them. What's wrong with just asking? Start a conversation, make a friendship...
Can you opt out of my brain recognizing your face?
Centralized face recognition where a single entity knows everybody a la Facebook would be a dystopian nightmare. Personalized face recognition where everyone has their own instance trained purely on the data they have access to (e.g. Apple's existing Photos face recognition) and only linking that face to the user's Contacts entry for the recognized person, that's not a privacy violation because that's just offloading information from my brain into my personal digital assistant.
Personalised but running on a cloud server, controlled by any third party is a step too close.
Who's to say they won't retain the biometrics long after you requested they remove the data? The standard HN trope is if it's out there it's never deleted. Trusting any corporate with data, given the last ten years, is never wise. Even Apple, busy marketing the privacy high ground.
No doubt someone would make a GDPR case against it though...
For AR, instead of personal use think of what it can do in industry.
Autoworkers can have parts deep in the engine glow for them to see, plumbers can see through walls, doctors can see where major arteries normally run before they make an incision.
It's stuff like that, not turning on a lamp, that will push AR initially.
I love HomeCourt which is an app that uses your phone camera to track your shots and movement on any basketball court. In addition it has dribbling exercises where you have to practice dribbling with one hand while reaching for invisible objects and accumulating points. Wish I had this capability a long time ago
...Amazon shelf worker. Data about their speed of work and how much time before they deplete their error grants.
...Security agents, recognizing faces at the entrance of a night club. Using the human only for his muscle.
But really, enough people have imagined use cases. If an obvious one hasn't surfaced by now, it's because the idea triggers the imagination without actual use cases. I live in an almost entirely virtual world, have fewer and fewer friends, and am online 70% of the time whether at home, at work, on holiday or at conferences, yet I feel no need for AR glasses. I do feel the lack of long-term friends, though ;)
> Right now I can't really see a killer app for AR
Something, something faster horses.
The fact that so many people don't see any use for AR, and that the people who think they do are so spectacularly wrong, is what makes me excited for AR.
I want to see what nobody has thought of yet and Apple is the only company that I think is capable of that.
So far Apple has shown they can make low-grade camera filters with AR. What do you think gives them an advantage in the space over players like Microsoft with HoloLens + Windows Holographic?
Yeah, but BlackBerry was there, and we knew exactly some of the things people wanted smartphones for. Apple was able to refine that into the iPhone. Today, what's even the BlackBerry of AR? We're not really even at the Newton of AR yet.
Yes, I think that's a key innovation that Apple made. But my point was that BlackBerry was proof of the problems people wanted solved. The iPhone's innovation was an interface one.
It feels like now, everyone is trying to solve the interface problem (I'll concede, perhaps not as well as Apple) but we still don't really know what the problems and use-cases really look like yet.
Yeah, maybe the Newton of AR. I guess it was the internet and communication that made smartphones important, not the rest of what was in the Newton. In that sense, maybe HoloLens is exactly the Newton of AR.
Living in the UK, I think AR for walking apps would be neat. Label all the routes, the interesting ruins, old settlements etc. Looking out to sea for wrecks, sea life, label ships and aircraft.
Then put pokemon GO into the mix and it'd be really fun.
UWB is essential for secure vehicle unlocking. Bluetooth isn't, because it can be relayed. It'll give Apple a nice integration point with your new car.
For scooter rental, it'll be way smoother than opening an app and scanning a QR code. You can just step on the scooter and go.
(My wife hates the QR-code scooter app experience so much that I generally unlock both scooters under my own ID when we're going around together. Fun fact: there are geocoded low-speed zones in Paris, where it limits the scooter to 10 kph in some crowded pedestrian areas. Fine. But when you unlock two scooters with the same phone, it uses the GPS location of the phone to decide when to limit or un-limit the speed. And a lost Bluetooth connection can leave it stuck at low speed. But I digress.)
UWB will also improve the ergonomics of contactless payment. With the current system, you have to hold your phone really close to the reader for a few seconds. UWB could allow an instant gesture.
> UWB is essential for secure vehicle unlocking. Bluetooth isn't, because it can be relayed. It'll give Apple a nice integration point with your new car.
The radio technology involved should have almost nothing to do with the ability to perform a relay attack. There are two straightforward mitigations:
1. Timing. The two parties confirm that a message can round trip in a specified, very short time T. This proves that the distance between the parties is cT/2 or less.
To be useful, T should be, say, 20ns or less, which requires a bit of clever crypto to make the actual exchange fast enough.
2. The transmitter attempts to localize itself, using GPS or other technologies, and refuses to authenticate unless it’s near the receiver.
There is reason to believe that UWB will help with localization. I see no reason it would be any better than any other technology for time-of-flight measurements unless sub-nanosecond resolution is needed.
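The timing mitigation above is just arithmetic on the speed of light; a quick illustrative sketch of the bound (function name made up):

```python
C = 299_792_458.0  # speed of light, m/s

def max_distance_m(round_trip_seconds: float) -> float:
    """Upper bound on separation implied by a measured round-trip time.

    The signal covers the distance twice, so d <= c * T / 2.
    """
    return C * round_trip_seconds / 2.0

# A 20 ns round-trip budget pins the other party to within ~3 m.
print(max_distance_m(20e-9))
```

This is why the budget has to be so tight: every extra nanosecond of allowed round-trip time gives an attacker another ~15 cm of slack.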
Bluetooth uses frequency hopping, which is unsuitable for measuring TOF. This group tried it: https://hal.inria.fr/hal-01995171/document and even with a lot of cleverness they were seeing 100 foot RMS errors from Bluetooth. Not enough to prevent stealing someone's car parked in their driveway while their phone is in the house.
They’re using unmodified hardware and minimally modified software. It would be interesting to see how much precision could be achieved with hardware modifications.
Instead of Bluetooth (which Apple could totally fudge on their own devices if they wanted to), the TOF for the Apple Watch<>Mac auto-unlock uses Wi-Fi 802.11v. It seems really quick too (presumably the Mac is establishing a short-lived AP), would work fine for unlocking a car.
> To be useful, T should be, say, 20ns or less, which requires a bit of clever crypto to make the actual exchange fast enough
I don't think you even need crypto on the actual exchange.
The car and the key fob could negotiate a pair of random values, R1 and R2, using an encrypted protocol so that eavesdroppers cannot figure out R1 and R2. This negotiation does not have to be particularly fast, so no need for anything clever. This is also where you would do authentication to prove to the car that it is talking to an authorized key fob.
For the actual distance measurement, the car sends a message containing R1 and the key fob responds with a message containing R2. Each R1/R2 pair would only be good for one try, so it doesn't matter if an eavesdropper sees the distance measurement attempt.
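A toy simulation of this two-phase protocol, with the negotiation and the radio link faked locally (all names are made up, and a real implementation would enforce the timing check in hardware rather than pass a number around):

```python
import secrets

def negotiate():
    """Stand-in for the slow phase: an authenticated, encrypted exchange
    that agrees on a fresh one-time pair (R1, R2). Simulated locally here."""
    return secrets.token_bytes(8), secrets.token_bytes(8)

def fob_respond(r1_expected, r2, r1_received):
    """Fast phase on the fob: release R2 only for the correct R1."""
    return r2 if secrets.compare_digest(r1_expected, r1_received) else None

def car_accepts(r2_expected, r2_received, round_trip_ns, budget_ns=20.0):
    """Car unlocks only for a correct R2 that arrived within the budget."""
    return (r2_received is not None
            and secrets.compare_digest(r2_expected, r2_received)
            and round_trip_ns <= budget_ns)

r1, r2 = negotiate()
print(car_accepts(r2, fob_respond(r1, r2, r1), round_trip_ns=12.0))   # nearby fob
print(car_accepts(r2, fob_respond(r1, r2, r1), round_trip_ns=350.0))  # relayed
```

Because each (R1, R2) pair is single-use, an eavesdropper who records one measurement attempt learns nothing useful for the next one.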
This is not necessarily good enough. Suppose one party sends R1 at a rate of 1 bit per 10 ns, i.e. 100 Mbps. The other party checks R1, aborts if it’s wrong, and otherwise sends R2. The first party checks that R2 was received on time and correctly.
An attacker can relay all but the last n of R1 and guess the last n bits. With probability 2^-n, the guess is correct, and the attacker learns R2 n*10 ns early. If the attacker can attempt the attack a few times, then getting a 60ns advantage is entirely reasonable and is enough to break some use cases.
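The trade-off described above is easy to quantify; a small illustrative calculation, using the 10 ns/bit rate from the example (function name made up):

```python
def cheat_odds(n_bits, ns_per_bit=10.0, attempts=1):
    """Chance that guessing the last n bits of R1 succeeds at least once
    across `attempts` tries, and the head start (ns) gained on success."""
    p_single = 2.0 ** -n_bits
    p_any = 1.0 - (1.0 - p_single) ** attempts
    return p_any, n_bits * ns_per_bit

# Guessing 6 bits: ~1.6% per attempt, for a 60 ns head start when it lands.
p, head_start_ns = cheat_odds(6)
print(p, head_start_ns)
```

The point is that the success probability falls off only exponentially in n while the distance advantage grows linearly, so a patient attacker who can retry freely eventually wins a useful margin.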
I'm not seeing how this works. Let's say R1 and R2 are 8 bits each, and the attacker chooses n=2. Let the parties be C (car), F (fob), and M (attacker).
So C starts sending R1, M starts receiving R1, and after relaying 6 bits of R1 M guesses the last 2. Let's say M gets lucky and guesses correctly. So now M has the 6 bits of R1 it received from C, plus the 2 it correctly guessed, and so it has R1, 20 ns earlier than it should.
But to get F to send R2, M still had to send those 2 guessed bits on to F. That's 20 ns, assuming a fixed bit rate on the physical layer, which is how I assume these kind of systems would be designed. That should prevent M from learning R2 early.
The hardware I was envisioning for this would have a shift register on F that gets loaded with R1 xor R2. As the range finding bitstream comes in, it would shift out the bits from that shift register, xor them with the incoming bits, and transmit the result.
Similar on C. Load a shift register with R2. As echo bits come in, shift bits out of the shift register, xor with the incoming bit, and shift the result into another shift register (or get fancy and do this all with one circular shift register). At the end of the echo message, if the output shift register is all zero, it got the right R2.
Note that with this implementation there is no aborting on F if it receives the wrong R1. The wrong R1 simply makes it send back the wrong R2.
(Note: this design assumes that C and F both can simultaneously transmit and receive).
Hmm, I may indeed be wrong. I guess that the fob won’t start transmitting R2 until after it receives all of R1, so the early guess doesn’t help. This does assume that the fob doesn’t accept higher-than-expected data rates. That last bit could be a real issue if the physical layer can operate at multiple data rates and the attacker can convince the car and fob to disagree on the rate.
But I think your xor register is no good. The attacker can just send all zeros to learn R1 xor R2 and then can emulate the fob at whatever range it likes when the car sends the real R1. Also, the radios are likely to be half-duplex, which may make it impossible. Instead, I think F just verifies each bit and stores a single bit of state indicating whether the bits were all correct. Then, after the last bit is received, it transmits R2 if all bits of R1 were correct.
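The "send all zeros" break of the xor-register design can be shown in a few lines; an illustrative simulation with made-up names and 64-bit challenges (challenges modeled as integers rather than a bit-serial shift register):

```python
import secrets

N = 64  # challenge width in bits

def make_xor_fob(r1, r2):
    """Fob per the xor-shift-register design: it blindly returns
    incoming ^ (R1 ^ R2), bit for bit, with no verification of R1."""
    pad = r1 ^ r2
    return lambda incoming: incoming ^ pad

r1, r2 = secrets.randbits(N), secrets.randbits(N)
fob = make_xor_fob(r1, r2)

# Attack: probe the fob once with an all-zeros challenge to extract R1 ^ R2...
pad = fob(0)
# ...then later, with no fob in range, answer the car's real challenge.
forged_response = r1 ^ pad
print(forged_response == r2)  # the forgery passes the car's R2 check
```

Since the fob never checks R1, one bogus query leaks enough to impersonate it at any range, which is exactly the objection raised above.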
Why does the attacker gain a time advantage here? Is the attacker capable of transmitting at a much higher rate than the original sender? Even in that scenario, they just have to negotiate a send/receive rate up front, and that way the attacker can't jump the line because that would violate the negotiated rate (nor can the attacker intercept and increase the rate because then the original sender won't be transmitting fast enough).
My attack doesn’t need reuse. It’s based on the fact that cheating the protocol has a success probability that falls off too slowly with the amount of cheating.
This can be improved with multiple rounds. I’m not sure it’s possible, even in theory, to do better in a single round.
Maybe someone cares to speculate about replacing Bluetooth for AirPods.
It fits easily in the bandwidth of UWB. AirPods are ideal candidates for better locating ability. I can’t say for certain but a rough googling suggests that the radio accounts for the bulk of the energy used in an AirPod, and UWB appears to have better efficiency per bit.
Scooters could be using NFC/RFID payments today (e.g., Apple pay) instead of QR codes. So you could just walk up, tap your phone, and go. UWB offers no real usability advantage here. Scooter companies don't use these methods because they want you to install their app for user retention reasons.
So what's missing in your scooter use case is a loyalty mechanism rather than a lack of payment technology.
Also, you want your scooters to work for the large portion of people who haven't upgraded to an NFC-enabled phone yet.
QR codes are highly backwards compatible
Doesn’t preclude an NFC improvement, but my experience with scan-QR-to-unlock has been very good: point the phone at the bike, the bike unlocks. Not much to improve.
>UWB will also improve the ergonomics of contactless payment. With the current system, you have to hold your phone really close to the reader for a few seconds. UWB could allow an instant gesture.
I’ve made guesses at specific product applications based on forthcoming tech from Apple before and the focus is too narrow. It may _also_ be good for secure vehicle unlocking but it wouldn’t make it in unless it has a dozen more useful and ecosystem expanding product applications.
I dunno, Apple's put some pretty single-purpose hardware into their devices in the past. Like the sensor array that's pretty much only useful for Face ID, and the Force Touch stuff. (And Macs used to have an IR receiver, which IIRC was only used for compatibility with the old Apple TV remote.)
I could totally see Apple only ever using this for the tracking tags (with a hastily thrown-together AirDrop feature so they have something to show off when the phone launches) and never letting anyone use it for anything else, regardless of the potential utility.
Definitely not dead. iOS 13 is adding new Animoji characters along with more customizable Memoji features. Plus the ability to use Memoji stickers across a variety of services, rather than just through iMessage.
Can't the relay problem be solved with time-of-flight measurements, since any relay attempt would inherently take more time than a direct communication would?
I'm pretty sure that's what they're doing with the macOS Apple Watch unlock.
Yes, time of flight is what UWB measures. But Bluetooth isn't suitable for measuring TOF to a useful accuracy. (I'm not sure what the Watch unlock feature is using.)
In practice this sucks more than one would imagine. In a cluttered environment, all you get reliably is close, near, far, or somewhere else, because you have 10-30 dB multipath fades and attenuation exponents on the order of 2.5 to 3.5.
I don't know which companies operate in your area, but you should be able to simply open the phone's camera and read the QR code, which will then pop open the app. Five seconds tops.
> Imagine a whole-home audio system moving music playback through multiple rooms based on the location of an individual listener.
Hell, why stop there? Imagine that same audio system not only activating different rooms as you move through them, but also adding the appropriate delays to each audio channel to make sure they arrive at your ears in sync. That'd be pretty neat.
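The delay compensation is straightforward once per-speaker distances to the listener are known (which UWB ranging could supply); a toy sketch, assuming room-temperature air and made-up function names:

```python
SPEED_OF_SOUND = 343.0  # m/s in room-temperature air

def channel_delays_ms(distances_m):
    """Delay each speaker so all channels reach the listener together:
    hold the nearer speakers back to match the farthest one."""
    farthest = max(distances_m)
    return [round((farthest - d) / SPEED_OF_SOUND * 1000.0, 2)
            for d in distances_m]

# Listener is 2 m from one speaker and 5 m from another.
print(channel_delays_ms([2.0, 5.0]))
```

At ~3 ms of skew per meter of path difference, even modest room-scale distances produce delays well above the audible echo threshold, which is why per-listener compensation would be noticeable.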
Could someone who understands the radio technology please explain why using time-of-arrival and phase tricks with existing WiFi and Bluetooth signals is "wide-banded" in any way?
The wider the bandwidth, the faster the rise time of each 'pulse'. The UWB standard provides for a known preamble sequence in which a particular symbol is designated the ranging pulse. Because each pulse has a rise time in the picoseconds, an exact time of arrival can be measured. Exact timestamps can be recorded on each side of the transmission, giving the final bit of information needed to calculate TOF.
edit: clarity of last sentence
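The timestamp exchange described above is often called two-way ranging; a simplified numeric sketch (idealized clocks with no drift correction, which real UWB chips have to handle; function name made up):

```python
C = 299_792_458.0  # speed of light, m/s

def two_way_range_m(t_send_a, t_recv_b, t_reply_b, t_recv_a):
    """Single-sided two-way ranging from four timestamps (in seconds):
    A transmits at t_send_a, B timestamps reception (t_recv_b) and its
    reply (t_reply_b), and A timestamps the echo (t_recv_a).
    B's processing delay cancels out of the time of flight."""
    round_trip = t_recv_a - t_send_a
    reply_delay = t_reply_b - t_recv_b
    tof = (round_trip - reply_delay) / 2.0
    return tof * C

# ~10 ns one-way flight plus 1 us of processing on B -> about 3 m.
print(two_way_range_m(0.0, 10e-9, 10e-9 + 1e-6, 2 * 10e-9 + 1e-6))
```

The picosecond rise times matter because at the speed of light a 1 ns timestamp error is already ~30 cm of range error.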
WiFi channels can also be 80MHz (which my laptop is currently connected at) and 160MHz (which is still a bit rarely supported), but your point still stands.
Can someone explain the privacy implications? Is this part of the new always-on Find My network? Does Apple respond to requests from LE to 'find person x'? Are there similar random-UUID protections built in (e.g. changing the MAC address frequently)?
I get that Apple has a better rep than most other companies when it comes to user privacy, but that still doesn't mean that we should take everything that they say at face value. As users, it is up to us to stay vigilant and ensure that Apple is doing as promised.
I think the interesting question to follow up is: "why not camera"? Is it intuitively bad? How bad? When weighing against the benefits, is camera still worth it? I'm not 100% sure, just a couple random points:
1/ The first concern that pops into my head is having a recording device that's nearly always on. In reality, there is no way you can prevent people from taking photos and/or videos with a smartphone when something interesting is happening in public. It takes less than 5 seconds for people to take the phone out of their pockets and start recording. So what's the real difference between a camera on the face and a camera in the pocket?
2/ The Glasshole issue. People always blame the camera as the ultimate evil. I'm not so sure. Remember, the original Google Glass was basically a not-that-useful (to put it nicely) gadget that cost $1500. Honestly, back then, I thought whoever was buying that stuff was just an overpaid nerdy douchebag (I thought that way mainly because I couldn't afford it). I'm not sure having a camera is the biggest issue; but having a camera is probably the biggest issue that you can publicly talk about.
3/ The trust issue, i.e. whether Google/Apple/FB are secretly collecting those video feeds. Or even worse: what if the business model of future AR products depends on collecting those video feeds?
When someone is holding up their phone or actual camera to take a photo, that’s very visible and obvious to everyone in the area what they’re doing, and social feedback can be delivered if the usage is inappropriate.
That’s very different from an accessory like glasses, which are typically on somebody’s face all the time, whether or not they’re being used for photo functionality, with a hidden camera constantly pointed at whoever they’re talking to or whatever they’re looking at. You may never know that a photo or video or other recordings of you, of your property, of your company secrets, of national security, etc are imminent or already taking place.
“What’s the real difference” is kind of absurdly obvious, yes?
Apologies if I didn't make my idea clear enough. It seems there are two topics here: 1. being able to use a camera, and 2. being able to conceal a camera.
My original argument focuses on point 1: people can already _use_ smartphones to take pictures and record videos as easily/conveniently as a wearable camera.
On point 2 I totally agree with you: wearables will make it easier to conceal the camera, and that is a bad thing. I guess the takeaway here is that the camera isn't the problem; the problem is being able to conceal the camera.
No one has ever said anything to me about carrying my phone in my shirt pocket. The lens is exposed and facing outward.
My primary recording device is 100% socially acceptable to have deployed at all times. Recording would be unacceptable, but no one can tell if I am recording. (To be clear, I am not recording.)
There is no technical difference between this and camera glasses. It is just a weird public perception problem for the glasses.
> .. phone in my shirt pocket.. Recording would be unacceptable, but no one can tell if I am recording.
People know how phones work: you press a button to start recording. AR glasses are expected to capture at all times, recording or sending to the cloud for analysis. And that is what no one would like.
I don't think being able to conceal the camera is a concern. People can do that already, the ones that do are scum and people probably already avoid them, even if they don't know they're concealing a camera, just because they're creepy in general. I think it's more about it becoming acceptable that a camera is always pointed at you in typical day-to-day interactions with people that wouldn't normally creep you out.
I'm talking about when not doing something strange. If someone approached me with an active camera (and a microphone), until it's turned off, the only conversation we'll be having is about turning it off.
I've seen this pattern several times: ground-breaking new technology appears, the public freaks out over the privacy implications, a killer app appears, convenience overwhelms pragmatism, nobody cares about the privacy implications anymore.
C'mon, y'all are using credit cards, cell phones, Google, smart speakers, etc. You'll be wearing always-on cameras soon enough.
There’s a big difference between a camera used in real-time for AR vs a camera recording everything to disk and sending it to the cloud for analysis and data mining.
My only guess is that the glasses' screen wouldn't rely on a camera, and that something like Soli would be used instead of a camera for detecting air interactions. https://atap.google.com/soli/
Everybody has transparent screens on their AR goggles, but how do you overlay information on real objects if the device has no clue what real objects exist and where they are? Without cameras you are restricted to being essentially another screen for your smartphone.
Eh, no. UWB solves the problem of locating yourself in relation to other UWB devices. It doesn't help you identify non-UWB devices (like a wall) that AR glasses might want to know about.
I don't know, the Decawave DW1000[1] has been out for a couple of years now and the revolution hasn't happened. Perhaps there is another missing piece to the puzzle.
What phones is that chip in? When Apple can release a new phone that hits 5% market share within a year (with appropriate software improvements), it means your install base is huge and you can start doing things with it.
Not sure if UWB-enabled devices are also capable of sensing the spatial position of other devices (i.e. phone A is to the left of phone B), but I can already envision how this could be used to create a screen-to-screen file-moving feature – some sort of Avatar-like UX: https://imgur.com/8O89CXK
According to their ad copy it should be able to do it.
To quote from the iPhone 11 page:
> The new Apple‑designed U1 chip uses Ultra Wideband technology for spatial awareness — allowing iPhone 11 to understand its precise location relative to other nearby U1‑equipped Apple devices.
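To make "precise location relative to other nearby devices" concrete: given a range and a bearing to a peer, you get a relative position by basic trigonometry. A minimal Python sketch with made-up numbers — Apple doesn't expose raw U1 readings to third-party code as of iOS 13, so the measurement values here are hypothetical:

```python
import math

def relative_position(distance_m, bearing_deg):
    """Convert a range + bearing measurement into x/y offsets.

    distance_m:  measured range to the peer device, in meters
    bearing_deg: angle to the peer; 0 = straight ahead, positive = to the right
    """
    rad = math.radians(bearing_deg)
    x = distance_m * math.sin(rad)  # lateral offset (right of me)
    y = distance_m * math.cos(rad)  # forward offset (in front of me)
    return x, y

# A peer 2 m away at 90 degrees to my right sits at roughly (2, 0):
x, y = relative_position(2.0, 90.0)
```

That relative offset is all a "point your phone at the other phone to AirDrop" UI would need.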
I think it's been about 10 years since the last attempt at UWB failed to gain traction. But I think you can use a rake receiver with UWB to combine the multipath components.
Apple leaks and patents suggest that they're even aiming to have distributed location recovery, e.g., you lose your keys in a store and turn on 'find my keys', some random person's iPhone detects them and pings Apple, and Apple pings you with the location.
I guess it depends on whether UWB devices can detect the location of beacon devices regardless of whether or not they specifically integrate with the "finder device"'s manufacturer. I.e., would Toyota and others have to add some code to work with Apple's chip, or can every device find other devices in beacon mode? (Another hurdle is whether Apple limits what you can do with the chip as a developer, since they've done this with their NFC tech.)
Tile does it, but it's essentially an active beacon, which isn't really privacy friendly as it allows your movements to be tracked. Apple's innovation here seems to be doing it in a privacy friendly way https://news.ycombinator.com/item?id=20129942
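A rough sketch of how a rotating identifier makes a beacon privacy-friendly: the tag broadcasts an ID derived from a secret only the owner holds, and changes it every epoch, so passers-by can't link sightings — but the owner can still recognize a reported sighting. HMAC-SHA256 here is just a stand-in for whatever derivation Apple actually uses:

```python
import hashlib
import hmac

def beacon_id(owner_secret: bytes, epoch: int) -> bytes:
    """Identifier the tag broadcasts during a given time epoch.

    Only someone holding owner_secret can recompute it, so the
    broadcast ID looks random and unlinkable to everyone else.
    """
    return hmac.new(owner_secret, epoch.to_bytes(8, "big"),
                    hashlib.sha256).digest()[:16]

secret = b"owner-device-secret"

# The broadcast ID changes every epoch (say, every 15 minutes)
# and never repeats across epochs...
ids = {beacon_id(secret, e) for e in range(4)}

# ...but the owner can still recognize a sighting reported for epoch 2:
owner_check = beacon_id(secret, 2) in ids
```

Leaked descriptions of Apple's scheme involve public-key crypto so even Apple can't track the tag; the HMAC version above only illustrates the rotating-ID idea.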
UWB scales poorly. There's a limited number of channels. Ad hoc range finding requires a call and response.
However.
The range finding is accurate.
So long as one is dealing with a few devices, it is incredibly effective. It's not going to be GPS for the house, but it does make proximity triggers reliable.
There are 6 channels, and I think a high/low setting, for a total of 12.
With indoor positioning, the anchors aren't positioned as precisely as GPS satellites, nor do they have the same quality clocks. Thus time of flight is done with a call and response per device.
It was hard to set up positioning for more than 1x device... remember that 1x device is range finding to 4x anchors.
I'm sure these problems can be solved, but in the near term I think this is more about precise distance than position.
I haven't really been this excited about a new technology on my phone in a while. Am I just drinking the UWB hype kool-aid, or is this actually revolutionary? So far it is difficult to tell, but I guess we'll see.
I have no idea if this could replace Bluetooth but even if it did, Bluetooth would not be "dead". Bluetooth has enough inertia that it will be supported for the next decade.