> text is barely readable.

I understand what you're saying, and I agree, but it's like saying a party is boring because there are no books.

The whole "meta" concept is that you're socializing in VR and it feels as if you're talking to your friend directly. I'm sure the resolution has improved in the new model, but even still, super sharp text isn't as important as you'd think to make the fundamental social use cases work




Socializing is one thing you do in the Metaverse. Working is probably the bigger one though.

Lots of people don’t grok this, but the endgame for VR/AR is replacing every monitor in every office and home. Probably ditto for TVs once every person owns a headset.

It’s going to be a massive game-changer (and a rapid transition) when VR gets good enough to work in.

The question is just whether this is a 5-year timeline or 50. Mark is obviously betting he can get traction in the 5-10 year timeframe.


Why would anyone want to work with a helmet on their head at all times like this? It's like saying the endgame for computers is that all communication will happen over video phones, like on The Jetsons. It sounds like a cool sci-fi concept until you realize that even if the technology is there, people would often rather not be seen on camera when they talk.


First, note I said VR/AR, so not everyone needs a headset. Many workloads will work in AR, though the black-pixel problem (AR optics can add light but can’t block it, so you can’t render true black) means these will be much harder to crack and so will probably arrive later.

But I think once the VR headsets get miniaturized a bit more (give it a generation or three) people will laugh when they think about the current models, just like we do about cell phones vs. the first satellite phones that were bigger than your head. At some point these will be as light as a pair of plastic sunglasses or goggles.

There is nothing forcing you to use VR for something like a call, where you don’t currently need a monitor. But I think we’ll see a tipping point where the face tracking gets across the uncanny valley and people stop saying “you need to meet someone in person to really connect”. At that point VR calls substitute for in-person meetings, not for video calls on a screen.

Consider the move to remote work: if we can get a virtual meeting room to feel like whiteboarding in person, including gaze and expression detection, then you could bounce between a meeting room with your distributed team and a perfect immersive dev setup without leaving your seat.

For the median worker using a monitor, I think the requirements to beat monitors are just: good-enough resolution for spreadsheets/email (we may be there next gen?), comfort (currently the crux), and a decent story on input passthrough (your physical keyboard rendered in VR? Something else? Seems tractable; we just haven’t standardized on any options).


I see three assumptions in your paragraph that need to be true in order for the technology to work. I'm ordering them by how likely I think they are to come true in the next decade.

* Headsets will have good enough resolution and be generally comfortable enough to replace monitors for office work.

* There is a solution to the input-passthrough problem that is acceptable for the average office worker. I don't think there is a solution to the passthrough problem that beats a keyboard/mouse, or even a laptop in a cafe. I think it's more likely the average office worker will accept a worse form of text input, given the right conditions.

* It's possible to project a 3D image of myself, while I'm wearing a headset, that doesn't include the headset and crosses the uncanny valley. The uncanny valley is wide, and even AAA video games haven't cleared it yet.


That seems like a good list. For 1, I expect to update substantially (either for or against) after seeing how much of a jump Apple’s headset is. I view this one as inevitable unless something crazy happens, like a complete end to progress on SoC density.

For 2, there are demos already: Immersed (and maybe Meta natively?) has a mode where it recognizes your keyboard (only a couple of specific models so far, prototype-quality) and positions it in VR. Not good enough for hunt-and-peck, but if you touch-type this works. The Quest 3 seems to have better passthrough, so again, this generation will provide a good steer. This one seems pretty easy, though.
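
(To make that concrete, here is a rough sketch of the kind of placement math involved, assuming a tracker that gives you the four corners of the physical keyboard in world coordinates. This is a hypothetical illustration, not Immersed's or Meta's actual API.)

    # Hypothetical sketch: given the four tracked corners of a physical
    # keyboard in world space, compute a pose so a virtual keyboard model
    # can be overlaid at the same spot in VR.
    import numpy as np

    def keyboard_pose(corners):
        """corners: 4x3 array ordered back-left, back-right, front-right, front-left."""
        center = corners.mean(axis=0)                    # anchor position
        x_axis = corners[1] - corners[0]                 # along the back edge
        x_axis /= np.linalg.norm(x_axis)
        y_axis = corners[0] - corners[3]                 # from the front edge toward the back
        y_axis -= x_axis * np.dot(y_axis, x_axis)        # re-orthogonalize against x
        y_axis /= np.linalg.norm(y_axis)
        z_axis = np.cross(x_axis, y_axis)                # surface normal ("up" off the desk)
        rotation = np.column_stack([x_axis, y_axis, z_axis])
        return center, rotation                          # feed these to the renderer's anchor

    # Toy example: a 36 cm x 13 cm keyboard lying flat on a desk at 74 cm height.
    corners = np.array([[0.00, 0.74, 0.00],
                        [0.36, 0.74, 0.00],
                        [0.36, 0.74, 0.13],
                        [0.00, 0.74, 0.13]])
    position, rotation = keyboard_pose(corners)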

For 3, if we can do deepfakes, we can do 3D photorealistic avatars. Rendering a face in real time can’t be more than 5-10 years away. Unreal already has some crazy tech with MetaHuman that I suspect would work here, given enough compute.
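
(For a sense of why the rendering side is tractable once tracking is good: most real-time avatar rigs are blendshape-driven, i.e. a weighted sum of per-vertex offsets over a neutral mesh. A toy sketch of that general technique, not MetaHuman’s or Meta’s actual pipeline:)

    # Toy sketch of blendshape-driven facial animation (a standard technique,
    # not any specific vendor's pipeline): tracked expression coefficients
    # drive a neutral mesh by adding weighted per-vertex offsets.
    import numpy as np

    def animate_face(neutral, deltas, weights):
        """
        neutral: (V, 3) vertex positions of the neutral face
        deltas:  (B, V, 3) per-blendshape vertex offsets (smile, blink, jaw open, ...)
        weights: (B,) expression coefficients in [0, 1] from the headset's face tracker
        """
        return neutral + np.tensordot(weights, deltas, axes=1)

    # Tiny example: 2 blendshapes over a 4-vertex "mesh", evaluated per frame.
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(4, 3))
    deltas = rng.normal(size=(2, 4, 3)) * 0.01
    frame = animate_face(neutral, deltas, np.array([0.8, 0.1]))  # mostly shape 0, a bit of shape 1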


VR headsets are simply more compact than laptops, let alone desktops. If you could replace a workstation with just a headset, that would be nice, even though it’s not likely to happen in the near term.


VR hardware is chock full of compromises right now. VR’s final form factor will probably look more like goggles or glasses.


> Lots of people don’t grok this but the endgame for VR/AR is replacing every monitor in every office and home

It's very, very hard to see how VR/AR would be a good replacement for monitors in general, even if it were perfect.


I find it helpful to invert the perspective: let’s assume we have perfect VR/AR. What would monitors be good for when you can call up an arbitrary number of windows anywhere in your visual field?

There are maybe some information-radiator type use cases. Probably things like big screens at live events / shows.

But for individual users, I think anything a monitor can do, an endgame AR display can do, and more flexibly.

Why would you (personally, I’m interested in your viewpoint) want a monitor if you had a perfect AR display that you could dismiss into thin air when you didn’t need it, or conjure up a 6-screen dev environment when you did need it? You could put your keyboard and mouse down at any comfortable chair and be as productive as you are at your current dev setup with your optimal monitor count.

(My claim was even stronger than yours: I don’t think they need to be perfect to be better. But since you volunteered it, let’s explore that extreme.)

I’m assuming “perfect” is something completely unnoticeable like lightweight glasses or contacts BTW, and I think there is a bunch of good stuff before perfect.


> I’m assuming “perfect” is something completely unnoticeable like lightweight glasses or contacts BTW, and I think there is a bunch of good stuff before perfect

If that's the case, and if your perception of your real environment is in no way hindered, then I agree -- monitors wouldn't have an advantage.

Short of that, though, I would strongly prefer monitors over VR.

> you can call up arbitrary number of windows anywhere in your visual field?

This is not something I personally would want to do. I want my computer/user interface to be constrained to a specific part of my visual field. Even if it's all in VR, that's how I would use it anyway.


I'll be interested to see if views on this have shifted this time next week after Apple's announcement.

For my part, I think the notion of putting down giant, expensive physical rectangles that are geolocked to one physical position just to use computers is going to seem like a quaint artefact of history in a few years.


I think the socializing part is a hard sell at this point. You'd have Rec Room and VRChat, but both have image issues and are still grappling with content moderation and user-interaction issues.

It's way too early for this aspect to be touted as something worth $500 to a regular user.



