MR. BILL WHITAKER: And yet Prime Minister Netanyahu seems to be charting his own course. The Biden-Harris Administration has pressed him to agree to a ceasefire, he's resisted. You urged him not to go into Lebanon, he went in anyway. Does the U.S. have no sway over Prime Minister Netanyahu?
VICE PRESIDENT KAMALA HARRIS: The aid that we have given Israel allowed Israel to defend itself against 200 ballistic missiles that were just meant to attack the Israelis, and the people of Israel. And when we think about the threat that Hamas, Hezbollah presents Iran, I think that it is without any question our imperative to do what we can to allow Israel to defend itself against those kinds of attacks. Now, the work that we do diplomatically with the leadership of Israel is an ongoing pursuit around making clear our principles, which include the need for humanitarian aid, the need for this war to end, the need for a deal to be done which would release the hostages, and create a ceasefire. And we're not going to stop in terms of putting that pressure on Israel, and in the region, including Arab leaders.
MR. BILL WHITAKER: And yet Prime Minister Netanyahu seems to be charting his own course. The Biden-Harris Administration has pressed him to agree to a ceasefire, he's resisted. You urged him not to go into Lebanon, he went in anyway. Does the U.S. have no sway over Prime Minister Netanyahu?
VICE PRESIDENT KAMALA HARRIS: The work that we do diplomatically with the leadership of Israel is an ongoing pursuit around making clear our principles.
I don't understand how you think this is a smoking gun?
He mentioned military aid in the part you cut out of your quote of the transcript. She responded to that by characterizing the military aid and why it was necessary, but they cut out her response to the mention of military aid because they wanted a response to only the second part of Whitaker's statement (i.e., the "no sway" part). Seems reasonable for a TV interview that needs to fit a tight schedule, no?
RE: How do either of those help or hurt Trump?
I am not going to speculate on that. But looking at the diff of the edit, anything of "substance" is removed from the response, and what remains is a non-answer.
It is both candidates competing to outdo each other on who will drop more bombs. It is a smoking gun, but not in the way you think it is.
That CBS should owe Trump money or face any sort of repercussions over the perceived slight is absolutely ridiculous. This is the behavior of a narcissistic snowflake, not a grown man, and least of all a president.
I assume you're at least as upset by the cleanup that NYT and WaPo have done to Trump's utterances. During his helicopter news conferences, we'd see and hear what he said, and it was nothing like what was reported.
There is a virus lab in Wuhan because a lot of coronaviruses originate in that region. Its existence/location is not evidence of a lab leak.
If anything, the lab leak “theory” has received too much media attention when the primary evidence (location of a lab) is easily explained by other factors.
Imagine a virus was spread from penguins to humans. It would not be surprising if research on the virus were conducted in Antarctica!
The idea that the lab was in Wuhan due to the prevalence of bat coronaviruses in the region was one of the most frequent, yet almost universally unreferenced, claims made to explain away why the virus coincidentally showed up first in the same city as the lab. Hubei, where Wuhan is located, is not a central hot spot of bat coronaviruses in China. The available information points toward bat coronaviruses being much more common in the southern provinces of Yunnan, Guizhou and in particular Guangdong. This can be seen in Figure 1 ("Geographical distribution of bat coronaviruses") of the Chinese study on bat coronaviruses referenced below, published in 2019 by members of the Wuhan Institute of Virology less than a year before the SARS-CoV-2 outbreak.
Do you know where you got this idea? It's completely wrong and incredibly prevalent; so I'm wondering if particular sources are misleading people, or if it just "feels right" and people come to it independently unprompted.
Beyond the general background already linked, Dr. Shi specifically did not expect that natural spillover of SARS-CoV-2 occurred near Wuhan:
> We have done bat virus surveillance in Hubei Province for many years, but have not found that bats in Wuhan or even the wider Hubei Province carry any coronaviruses that are closely related to SARS-CoV-2. I don't think the spillover from bats to humans occurred in Wuhan or in Hubei Province.
The particular viruses they were working with were only distantly related to covid. Related in the same way that house cats are related to tigers.
In addition, they were not doing "gain of function research", unless you want to say that they were also doing "loss of function research". What they were doing was seeing how point mutations affected infectivity, both positively and negatively.
We know what they were working with, and it wasn't the virus that gave rise to covid. Much closer matches have been found in other species.
Work from home is good for people who have to work in-person, because it lessens the spread.
Also, just because I have a job that has some level of risk does not mean I would expect others to have a similar level of risk. Especially if it can be easily mitigated.
AR/VR is in a hype superposition: large companies have made large investments, yet the tech is arguably in the "trough of disillusionment". Few people think it will break into mainstream use anytime soon.
So far, 99% of sales are in the "expensive gaming accessory" category. There is fun and interesting UI and gameplay innovation happening here.
The one non-gaming thing everyone seems to want is just a large virtual desktop workspace. Apple has probably come closest here. It sounds like an easy problem but without a high resolution and a wide field of view, it's not a good experience.
Is there a lot of innovation happening in this space for military applications? I ask this as someone totally unfamiliar with the technology in general. It seems like it could greatly benefit pilots, people on the battlefield, and anyone else who needs access to some kind of visual information while still having both of their hands.
Because their prophet was murdered by the state. It seems weird that a religion would be pro-execution when their founding was, in part, "innocent man was executed".
I'm sure believers have jumped through the hoops required to justify it, but from the outside, one would expect a country that is majority Christian to oppose executions.
From using various VR systems, a HoloLens, and reading reviews of the Vision Pro, I really feel like hand gestures are a bad way to interact with AR systems. They might work in a pinch (heh), but some sort of small controller that can act as a pointer and has a button or two is superior in every way.
It's interesting that Meta went through the effort of bundling an accessory but stuck with hand gestures anyway.
6dof input from hand gestures is a killer feature but it has to be rock solid. So far only controllers can do it but it's getting much better every year.
Haptic feedback, discrete buttons and precise analog input from controllers are also very important. The downside of controllers is that your hands are full and it's just not feasible for an all day wearable.
Hopefully someone figures out a good compromise be it rings or gloves or whatever.
My killer feature would be a keyboard (and mouse) in a wristband, which comes along with, or near to, 6dof input (also worthy, but not a personal grail).
Electromyography is an awesome technology, among other reasons because it can (or will) detect neural signals below the activation (movement) threshold, meaning you should be able to train yourself to type without moving your fingers. A viable path to thought control without the invasive aspects of other approaches.
Back in the nineties, I said the computer user fifty years from then would look like a hippy: headband (neural interface), sunglasses (I thought monitor, but AR is cooler), and a crystal around their neck (optical computer; maybe a miss, we'll see what the next decade brings, a slab in a pocket will do for now). Given my zero trust of end-stage capitalism near my noggin, wristbands are an excellent transitional, as long as they're local (or can be made so, happy jailbreaking).
Unless EMG signal processing has had some breakthrough in the past 10 years, it is not a very precise interaction mode. I worked in a lab developing it for quadriplegics to use with the muscle on their temple (we tested above the thumb as well). You can get rough 2-axis control with some practice, but that's with an adhesive EMG pad. Can a wrist band get a clean signal?
For typing, I'd expect you need to combine with eye tracking. So you're back to the Vision Pro UI.
On its own, EMG makes a good button, I'd expect. Maybe 1-axis control.
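For anyone curious what "EMG as a button" looks like in practice, here's a minimal sketch of the usual approach: rectify the raw signal, take a short moving-RMS envelope, and threshold it. All thresholds, window sizes, and the data here are illustrative, not from any real device.

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Full-wave rectify the EMG signal, then take a moving-RMS envelope."""
    rectified = np.abs(raw - np.mean(raw))  # remove DC offset, rectify
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(rectified ** 2, kernel, mode="same"))

def detect_activation(envelope, threshold):
    """Treat the muscle like a button: 'pressed' while envelope > threshold."""
    return envelope > threshold

# Synthetic example: quiet baseline noise with one burst of "muscle activity"
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 1000)          # resting noise
signal[400:600] += rng.normal(0, 1.0, 200)  # contraction burst

env = emg_envelope(signal)
pressed = detect_activation(env, threshold=0.3)
```

The same envelope, scaled instead of thresholded, is roughly how you'd get the "1-axis control" mentioned above; the hard part on a real wrist band is getting a clean enough signal that the envelope is stable.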
Thanks for the reality check. Wait some more, use voice for now, is what I hear... Although a decade is a long time in signal processing and Meta has dumped a boatload of cash into this.
No 6dof either ?
Sorry for using you as the 'say something wrong, get corrected' research method, but kudos for jumping in. ;}
The haptic feedback you get from touching your thumb and forefinger to simulate a click is actually better than a button because it feels more organic and natural.
Where it falls apart is not being able to feel yourself touching objects which nothing other than a full glove is going to be able to simulate. Controllers and rings provide no benefit over Apple's approach.
> Controllers and rings provide no benefit over Apple's approach.
When I touch a good quality button, I can feel the actuation point, and it's the same every time - I can learn to tell reliably whether I've pressed it or not.
When I touch my thumb and forefinger for a camera, I can't reliably tell what point it'll get detected as touching, because it isn't the same point each time.
As a result, I have to hold them together until I'm sure it's registered.
As a user, knowing unambiguously whether you've activated a control or not is a huge advantage for controllers & buttons.
It sounds like the wrist strap thing will have haptic feedback for when the gestures get registered, so you'll at least know when that happens. That might actually make it better than the annoying capacitive buttons that are popular these days with no feedback…
> The haptic feedback you get from touching your thumb and forefinger to simulate a click is actually better than a button because it feels more organic and natural.
Where did you experience this? My only experience is with the Apple Vision Pro and it failed like half the time
Isn't haptic feedback supposed to mean that you feel something as feedback that an action happened? If so, then this would be more like haptic feedforward. Apple vision reacts because you feel something, and that sounds as reliable as it probably is.
Apple doesn't react because you feel something. Apple estimates, based on the kinematics it recreates from its camera feed, when something happens. It is NOT looking for a visual gap between fingers to disappear, as this would require an exactly correct camera angle.
I guess you get the natural haptic, but the feedback is visual/audio (happens in software). In any case the link between haptic and visual/audio action is kinda broken on the vision pro
The reason is that it is camera based, unlike Orion. And this is why people describe Orion as magical, whereas nobody talks about the hand gestures of the Apple Vision Pro (but people do talk about the eye tracking of the AVP as magical).
> Controllers and rings provide no benefit over Apple's approach.
This is the problem with fanboi-ism... the hyperbole is so clearly false. Let me list the ways that controllers are better:
- Typing/Chording
- Cheaper
- More efficient, no cameras pointing at things the user can't see. No continuous video processing.
- No dead zones where the camera can't see.
- Accuracy. For all but a few camera angles, Apple has to guess when your fingers make contact. It works best with bigger movements, but the bigger the movement, the longer that movement takes. There's a reason no big-name competitive games have been ported over.
- Actual haptic feedback. Play Horizon: Forbidden West on PS5 to understand just what haptics can communicate to you... it's so much more than tapping your fingers together.
Apple's approach is amazing, and it's good for the use cases that Apple tells you are important... but there's so much more than that. You're doing a disservice to Apple by going full fanboi.
Users can just connect a ps5 controller to the Vision Pro, of course it's an extra expense but for Vision Pro buyers that's pretty much insignificant.
And all the sensors, cameras and superfast processing will still be needed for that immersive quasi-AR simulated experience, so there are no cost savings there.
Where do you figure? Last I checked mushy, organic keyboards are not preferred.
I mean, it's just laughable to suggest that input works just as well on the AVP. People are not using the virtual keyboard if a real one is available. Gamers want clicky buttons too.
There are clear benefits and disadvantages of each setup.
I agree, and I also think that walking around to items positioned statically in space is a really dumb way to do embodied computing. I mean if an app is associated with your kitchen fridge or whatever fine, pin it to your kitchen fridge, but if I'm going to be enveloped in an omnidirectional high def display, I want a way to bring the windows to me, not have to move my body to different windows.
Anyway, Logitech made an awesome little handheld keyboard for home theater PCs, called DiNovo Mini HTPC, I was able to pair it with Vision Pro.
1. Tying AR to a fixed point in physical space
Uses: Being able to physically walk around a life-sized 3d model of an engine, human body, etc.
2. Tying AR to a point relative to the user
Uses: Heads-up Display notifications, virtual screens, etc.
These things are not mutually exclusive.
Even once you've placed an "AR object" at some static absolute location, I'm sure you can scroll through the list of active processes at any time and snap one back to your body.
As somebody who hates the sedentary aspect of software engineering, I messed around with a friend's Apple Vision Pro and fell in love with the spatial computing aspect. I do a great deal of pacing when working through problems, and the ability to physically move around multiple virtualized workspaces was really engaging.
I have Xreal glasses, and it's handy to be able to pin a window to a physical spot (well, direction in this case). I used that feature to have basically a virtual TV on my laundry room wall while I watched a video on how to fix my dryer as I was fixing it. But if I'm playing a game or watching other content, I don't want to have to focus on a single spot, so I have the window always in front of my face.
I disagree. Physical activity enhances mental well-being and the quality of cognition. A system that allows one to design their workspace with regular movement in mind is going to be a boon for people who are usually chained to a desk for purely virtual work, forced to navigate cramped UI with their finger. Say I'm editing a video; you're telling me my media bin could be a literal bin a few steps away, allowing me to physically separate my "media selection" task from my "cutting and moving clips" task? Knowing that this kind of encapsulation helps to improve focus, and the movement would help with circulation and stress? Sign me up.
That’s a great little keyboard, it reminds me of early smartphone keyboards in a good way.
I wonder if Apple decided against a controller in order to allow third party solutions to flourish. They can take their time and see what people gravitate towards.
Most of these products aren't aimed at consumers, but at industrial tech. While the critique is great for enthusiasts, it makes little sense for the kind of industrial use these devices will actually see.
Hololens and Glass got relegated to industrial use. Pretty certain Meta would rather cross the chasm and get to everyday consumers, but the tech and use case doesn't cut it for that yet, which is why they aren't going to sell these until it's more of a home run. Seemed like they even had a subtle dig at Apple for pushing to get high end AR into the hands of consumers prematurely.
Imho hand gestures are the best way to interact with XR.
If your only experience is the HoloLens, you’re roughly a decade out of date with how well it can work today.
There’s also not been much until the Vision Pro that combines eye tracking with hand tracking which is what’s really needed.
You should really try the Vision Pro, because it really does move hand tracking to the point where it's the best primary interaction method. Controllers might be good for some stuff, in the way an Apple Pencil is, but most interactions do not need it.
Why do you need a controller for any of that? Those are all point and click, with low precision and large hitboxes. Which is perfect for eye+hand. Almost every review says hand+eye is greatly intuitive, almost every user in the Vision Pro communities as well. Even Meta are using hands+eyes as the way forward.
What exactly are you missing that a controller gives you for those tasks?
I did buy a Vision Pro, but it's a nearly unusable device, and outside of fora, I've never met anyone who's had a positive experience, so I suspect even among Vision Pro users, it's a minority opinion.
Hand tracking is not a feasible input method for routine computing.
I'll have to dig mine out of the box and try it again, version 1 of the os basically had one gesture which was click. I hate the gaze and click interface, it works OK for Netflix but invariably I would fall back to using the trackpad on my MacBook to actually do any work.
Hand tracking has also been pretty great on the Quest for some time now. I've got the first-gen one, and you can very comfortably type/navigate the UI with no controllers.
Traditional personal computers have both a keyboard and a mouse / trackpad. Game controllers have both joysticks and buttons.
While analog manipulation devices (mice, trackpads, joysticks, the 3D controllers) are good at physically precise manipulation and navigation, keys and buttons are good at symbolic / textual entry and logical / symbolic navigation with comparatively very low effort and high speed.
When VR / AR acquires a fast and low-effort symbolic input mode, comparable in efficiency to a keyboard, and it becomes possible to build highly productive interfaces driven by it, like Vim and MS Excel are driven by the keyboard, many interesting developments will happen.
Kind of a pointless comment to make if you haven't actually tried the Vision Pro.
Its interface is unlike anything else and really can only be experienced in person. The ability to simply glance at UI controls and slightly move your hand whilst it rests on your leg really does feel like magic.
And the UI is built around it so if you are looking for example at a sidebar it will lock you into choosing the options unless you substantially look away. This makes using it much easier and faster than a controller.
yeah. Call me oldskool but I still think a small controller with laser pointer is optimal.
You can move the point at least as fast as your eyes and a button press will be faster and more accurate than pinch, and no occlusion. Plus more buttons gives you additional actions for the same onscreen target.
A controller with a laser pointer might be good, but not optimal.
For a pointing device, moving a stylus on a small graphic tablet (in relative mode, i.e. like a mouse, not in absolute mode), in order to position a visible cursor on a target, is both faster and more precise and also much more comfortable than pointing forward with a laser pointer or an equivalent device.
When you are not at a desk, perhaps one can make a pointing device with a stylus that you hold like for writing and you point downwards, but the relative movements of the stylus are used to control a visible cursor that is in front of you, or in any other direction. That would still be much better than a pointing device that forces you to point in the real direction of the target.
Especially when only the relative movements are of interest, it is easy to measure the 3D orientation of a stylus with a combined accelerometer/magnetometer sensor.
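A minimal sketch of that accelerometer/magnetometer orientation estimate, for the curious: it's the standard tilt-compensated-compass math — roll and pitch from the gravity vector, then rotate the magnetic field reading back into the horizontal plane to get heading. Axis and sign conventions vary between sensor datasheets; the numbers in the example are illustrative.

```python
import math

def orientation_from_accel_mag(ax, ay, az, mx, my, mz):
    """Estimate roll/pitch from the accelerometer's gravity vector, then
    tilt-compensate the magnetometer reading to get heading (yaw).
    All angles are in radians; assumes the sensor is not accelerating."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic field vector into the horizontal plane
    mx_h = mx * math.cos(pitch) + mz * math.sin(pitch)
    my_h = (mx * math.sin(roll) * math.sin(pitch)
            + my * math.cos(roll)
            - mz * math.sin(roll) * math.cos(pitch))
    heading = math.atan2(-my_h, mx_h)
    return roll, pitch, heading

# Stylus held level: gravity straight down, field components made up
roll, pitch, heading = orientation_from_accel_mag(0.0, 0.0, 9.81, 0.3, 0.0, -0.5)
```

For the relative-movement cursor described above, you'd difference successive heading/pitch readings between frames and use the deltas to drive the cursor, like a mouse.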
I think the Quest/Index/etc. controllers are a far better form factor than a cylindrical tube, for this use case. But then I also think we should be adding Xbox controller input to CAD applications and such, so maybe I am the weird one. We should get over the idea that gamepad = unprofessional, because these are seriously engineered, ergonomic, highly sensitive input devices.
If all you wanted was a pointing device to make selections on a 2D gui interface, then the laser pointer form factor would be better. But I’m going to be an old fart and ask why are you doing this in AR then? Just use a screen. I’m more interested in the different human interface possibilities that are opened up by tactile input and 3D visual controls.
We should be using index controllers to build “grabby” UIs. I’m curious what this could turn into. It opens up a whole new human computer interface medium for intrinsically spatial applications, just like a good stylus on a tablet opened up creative use cases.
While I agree, you also just don't have a choice, as otherwise you force the user to carry a controller with them all the time (and even then the experience sucks, because you have to take out/reach for your controller every time you want to use it).
It depends on why you think hand gestures are bad. The wrist band detects nerve signals via electromyography and allows for hand gestures while the hands are out of the field of view of the cameras.
Are they? I don't know any factory workers but I have friends outside of tech and they can all relate to "arbitrary rules from management that make my job harder are frustrating".
The people who spend the most time sneering at 'the laptop class' that I'm aware of are independent, ie, non-unionized, tradespeople who operate as sole proprietorships - emphatically not factory workers. The arbitrary, unproductive distinction between 'real' physical labour of the kind carpenters, roofers, drywall technicians, etc, do and 'laptop labour' is increasingly being seen as exactly that.