Simula One: an office-focused, standalone VR headset built on top of Linux (simulavr.com)
570 points by sandebert on Sept 29, 2021 | 222 comments



One of the founders of Simula here.

We're flattered that someone posted us to HN, but we were honestly not ready for this much publicity at this precise stage of our project. It would have been better had this happened a few weeks from now, when we'd have more accurate footage of our actual prototype to show. Let me explain:

1. *Video footage.* The video footage on the front page of www.simulavr.com was taken with an HTC Vive and an older prototype of our window manager. It doesn't showcase the higher resolution of the Simula One (more than 4x that of the Valve Index), or any of the new features we intend to release with it (hand tracking, AR mode, environments, etc).

2. *Prototype pictures.* The website doesn't have any actual photos of our headset yet! That's because we are in the process of finalizing the design. We have printed parts and plenty of renderings, but they are still changing every week.

3. *Specs.* The specs are close to the final specs, but still placeholders. Between supply chain issues, stuff still under development, and issues getting support from manufacturers at our volumes, we might have to change things for the final prototype.

One of the reasons we threw up this website in its current form was to get the ball rolling for manufacturers. They won't supply us with parts unless we have some sort of product interest, but we can't generate any sort of product interest unless we have some sort of website. It's very much a chicken and egg sort of problem.

We appreciate everyone's kind words, yet also understand the skepticism. For people on the waitlist: expect updates from us starting in a few weeks, when we'll show previews of some of the actual goodies which make our headset special.


Another founder here, with some more comments on the tech side of things:

1. The software is relatively usable, and you can try it out right now on https://github.com/SimulaVR/Simula

2. The hardware is still being worked on, and the website is essentially a list of expected specs/placeholders in that regard:

2.a. The compute unit is tested and works, but requires a custom carrier board to fit the form factor. This is a blocker for the final product, but relatively low priority for the prototype.

2.b. Lens system design is scheduled to be complete in early November, with first prototypes available in early December. We're planning to use Valve Index lenses as a stopgap right now for prototyping etc.

2.c. We're currently solving a few challenges in driving the displays, as we're pushing the boundaries of the available technology, and at our volumes support from manufacturers is like pulling teeth. BOE supplies the 2880x2880 panels and there aren't even enough docs to figure out how to drive the (non-trivial, local dimming based) backlight.

2.d. We're also experimenting with different approaches to tracking as our original plan (RealSense) became end-of-life recently. I'm interested in an mmwave based solution, but we might just use RGB cameras instead.

2.e. The mechanical design for the front part is reasonably advanced, but we're still working on the back part.

There's a lot more going on right now that's probably not coming to mind immediately, but that should provide a good overview.


What's the best off the shelf inside-out tracking system you can get now? Does anything compete with Quest yet?


Nothing that's satisfactory in one way or another. Probably Luxonis DepthAI?

The main problem with off-the-shelf solutions is that they add another set of cameras, and afaik nothing exists that allows custom cameras.

We're gonna need an FPGA anyway due to the large amount of IO (2 cameras for AR, 2 for eye tracking, IMU, whatever other sensors we need, plus potentially mmwave radar if we decide to go that way) so it's tempting to put the processing on the FPGA as well.


Interesting - I guess I assumed the hurdle is both hardware and software. Oculus's hand tracking was a huge lift. Is there any commercially available software stack being worked on that is at least hardware generic? Or is everyone forced to build from scratch?


There are a lot of research papers that I found, but nothing hardware-generic, unfortunately.

Hand tracking especially is a difficult beast. We would like to just use the new Ultraleap module for that, but they don't support Linux yet.

Eye tracking is relatively simple because it's a closed/controlled environment. Just some IR LEDs, an IR camera, and some edge detection and math.
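To make the "IR LEDs, an IR camera, and some edge detection and math" concrete, here is a minimal, hypothetical sketch of the core step: the pupil shows up as the darkest blob in an IR frame, so a crude gaze estimate is just the centroid of below-threshold pixels. Real pipelines also fit an ellipse and use the LED glints as reference points; this is illustrative only, not Simula's code:

```python
# Crude pupil localization: centroid of dark pixels in an IR frame.
# (Illustrative sketch -- production eye trackers refine this with
# ellipse fitting and corneal glint positions.)

def pupil_centroid(frame, threshold=40):
    """frame: 2D list of grayscale values (0-255). Returns (row, col)."""
    rows = cols = count = 0
    for r, line in enumerate(frame):
        for c, value in enumerate(line):
            if value < threshold:      # dark pixel: candidate pupil
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None                    # no pupil found (e.g. a blink)
    return (rows / count, cols / count)

# Synthetic 6x6 "IR frame" with a dark 2x2 pupil at rows 2-3, cols 3-4
frame = [[200] * 6 for _ in range(6)]
for r in (2, 3):
    for c in (3, 4):
        frame[r][c] = 10

print(pupil_centroid(frame))  # → (2.5, 3.5)
```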

SLAM (positional tracking) has a lot of different approaches. There's open source software, but it's generally written to run on a normal computer, and that's not particularly efficient (especially with our GPU already loaded). Some research papers use an FPGA, but the code is rarely available, so you only have a starting point.

You could probably crib the software from DepthAI or similar? We could implement the AI coprocessor they're using and adapt the code. I haven't looked closely enough yet to see whether that's a good use of resources.


Cool, that's helpful, thanks!


I recommend QP if you are going to do FPGA processing using a softcore or hardcore processor. It's an event-based state machine framework that handles IO really well. A hardcore processor would be more performant and use fewer LUTs, but a softcore will give you more flexibility as far as sourcing FPGAs.
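For readers unfamiliar with the pattern: the core idea frameworks like QP are built around is dispatching queued events to the current state's handler, instead of blocking on IO. This is a minimal, framework-free sketch of that pattern in Python — it is NOT QP's actual API, just the shape of the idea:

```python
# Event-driven state machine sketch (NOT QP's API): each state is a
# handler that consumes one event and returns the next state, so all
# IO arrives as queued events instead of blocking calls.

from collections import deque

def idle(event):
    return streaming if event == "START" else idle

def streaming(event):
    if event == "FRAME":
        return streaming        # stay: process a sensor frame
    if event == "STOP":
        return idle
    return streaming

def run(events, state=idle):
    queue = deque(events)       # events from IRQs/drivers would land here
    trace = []
    while queue:
        state = state(queue.popleft())
        trace.append(state.__name__)
    return trace

print(run(["START", "FRAME", "STOP"]))  # → ['streaming', 'streaming', 'idle']
```

On an FPGA softcore the queue would typically be fed from interrupt handlers, which is exactly the IO-heavy scenario the parent comment describes.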


Appreciated. FPGAs are something I've been aware of for a long while now but haven't used before, so recs are always good.


What are the potential advantages of an mmwave tracking system? The only previous commercial application I can think of was the pixel 4, which was very range and accuracy limited and power hungry.


You get position/velocity/angle data directly, and it's less power hungry than running high-res cameras. Some research papers also show better tracking accuracy with mmwave+IMU than with RGB+IMU.

So less processing + potentially less power + better performance, in theory.
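To illustrate "data directly": in an FMCW radar, the beat frequency of the reflected chirp encodes range, and the chirp-to-chirp phase shift encodes radial velocity, with no image-processing pipeline in between. A toy sketch with made-up 77 GHz parameters (illustrative only):

```python
# FMCW radar basics: range from beat frequency, velocity from the
# phase shift between consecutive chirps. Toy numbers, not a real
# sensor's parameters.

import math

C = 3e8                      # speed of light, m/s

def fmcw_range(f_beat, slope):
    """Range (m) from beat frequency; slope = bandwidth/duration (Hz/s)."""
    return C * f_beat / (2 * slope)

def fmcw_velocity(delta_phi, wavelength, chirp_period):
    """Radial velocity (m/s) from phase shift between chirps (radians)."""
    return wavelength * delta_phi / (4 * math.pi * chirp_period)

# Hypothetical 77 GHz radar: 4 GHz sweep over 40 us -> slope = 1e14 Hz/s
slope = 4e9 / 40e-6
print(fmcw_range(1e6, slope))                        # 1 MHz beat → 1.5 (m)
print(fmcw_velocity(math.pi / 2, C / 77e9, 40e-6))   # ≈ 12.17 m/s
```

Each reading is a couple of multiplications, which is where the "less processing, less power" argument comes from compared to a camera SLAM pipeline.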


Does the device have a CPU, or does it need to connect to a PC?

What is the predicted price point?

How do you fit prescription lenses?


A compute module based on a NUC will be included; it plugs into the back of the headset.

The predicted price point is about $2-2.5k.

For prescription lenses, we'll figure something out. We're trying to keep enough eye relief to support glasses, and we'll at least have provisions for mounting prescription lenses.

If we can, we'll be able to supply prescription lenses with the headset (for a surcharge) or collaborate with an existing vendor to provide lenses.


2.5k price point, ouch.

On a side note... are you on Kickstarter?


We will be, once we've sorted out all the blocker issues and our prototype is complete.


Thank you for the clarification.

Let me ask you a quick question that is surely on the minds of many other HNers:

Are you guys using the prototypes for day-to-day work (i.e., are you dogfooding Simula hardware and software)?


Yes. We're building the Simula One because we ourselves wanted to work all day in VR, using the best OS (Linux). Here's a fun video I made working on Simula, in Simula: https://youtu.be/FWLuwG91HnI

And I love seeing the progress unfold on our own headset, because I can't wait to start working in it.


I should also say that when you work a lot in VR, you get intimately familiar with its improvement bottlenecks:

1. *Text quality.* Text quality is really important, especially to sustain long work sessions. This is why we're pushing as hard as we can on resolution. It's more important for work than it even is for gaming, because gaming doesn't require you to sustain focus on detailed text for long periods of time.

2. *Headset bulkiness/portability.* Headsets are too bulky, and tethered ones are annoying to work with. While the Simula One won't be as lightweight as headsets will become 10 years from now, it will at least be truly portable (not requiring you to tether to a PC with cords or over WiFi). We are also planning on using something like a halo strap to make flipping the headset up and down easier (instead of requiring you to take the headset fully off or on).

3. *Real world stuff.* VR forces you to be very touch-type proficient. But sometimes you want to be able to see your keyboard, or see your surroundings, etc. We are planning an "AR mode" for our headset to accommodate this.


Points 1-3 make a lot of sense to me.

What about 4, *Impact on neck*?

Is there any risk of repetitive-stress neck injury from all the looking up and down?


Good question. That's an open research area for this particular use case. Some research on flight helmets/night vision goggles indicates that a counterweight alleviates neck strain. But that research doesn't cover the specific up/down motion that'd be more common here.


"because gaming doesn't require you to sustain focus on detailed text for long periods of time"

Have you never played VR Zork? You haven't lived yet.


Awesome. That means more than any tech specs to me.

Wait a sec, are you on HN via Simula right now?


How good is the battery life for the prototype headsets? If the headset is small enough and lasts long enough, I could totally see myself tossing it in my backpack instead of a laptop.


Thanks for posting here. I was getting ready to post something about the resolution, but... dang, now I'm excited. Good luck and godspeed!


Your humility is endearing. Seriously.


> They won't supply us with parts unless we have some sort of product interest, but we can't generate any sort of product interest unless we have some sort of website. It's very much a chicken and egg sort of problem.

Well I reckon that problem is now solved.


I'm very excited about SimulaVR and watching you guys closely now. Great work, and I can't wait to get my hands on the headset!


Shut up and take my money!


Any price indication? $500, $1000, $2000?


Tentatively 2k.


It's important to remember that this is on the way to being an MVP. It's not a finished, polished product. This is HN. Don't we like people hacking on startups any more?

I think it looks like a really positive first step. Yeah, the resolution is too low but that's solvable with a better headset. It's a bit laggy, but that's probably due to recording video of VR being hard. The windows are weird sizes, but that might be user choice. Heck, I personally don't want to code things floating in space, but that's just a matter of choosing a different background. All those problems are solvable.

There are loads more problems that are likely solvable too. And maybe some that aren't. We don't know yet. I'm glad someone is working on the problem to find out. That's far better than a VR option for work remaining a distant maybe.


> It's important to remember that this is on the way to being an MVP. It's not a finished, polished product. This is HN. Don't we like people hacking on startups any more?

It doesn't need to be a finished product. Just give FOSS developers something to work with, and the finished product will automatically emerge ;)


> the finished product will automatically emerge

"Finished"? No. Somewhat usable and constantly being updated or rewritten? Probably.


At least the software never deliberately works against the user, and never is defective by design.


> At least the software never deliberately works against the user

I look at some of the Gnome 3 design choices and I start to question that.


What do you expect from Red Hat/IBM? We've all seen how poorly they handled OpenOffice.


And to add to this: hardware without software is a feature, not a flaw.


Sometimes true, but sometimes not; it depends on how the hardware is designed to integrate with software. The Leap Motion failed miserably (my opinion) at this by expecting software to figure out where it fit as a human interface device, rather than having a strong opinion based on software/reference drivers.


Well, you know what they say: fail fast, fail often.


Agreed, I'm happy to see people working on this, but with VR there are so many limiting factors/dealbreakers that I don't envy anyone playing in this space without massive resources to tackle them all.

For me, the MVP for working in VR is replacing my fairly standard monitor setup (3x 24-27", 12-14px text size, 60Hz). I don't really care about virtual meetings or whiteboards or environments; I just want to make the transition to having what would be unreasonable/impossible in hardware (dozens of resizable windows I can easily rearrange and fill my whole FoV with). Nail that and I will gladly drop a couple of grand on it.


I think the conclusion is unfair. It IS an existing VR option for work right now if you have a compatible headset. No it's not perfect and many people will not want to stay in there for a long time with existing headsets. But that doesn't mean it's not an option.


I think that viability of an "option" is assumed. It's not viable in general today. I'd be surprised if there is even a single person who has actually fully replaced their conventional workspace with a VR desktop like this.


I think you missed https://news.ycombinator.com/item?id=28678041 from a few days ago...

To each their own. You might not be on the left of the S curve for that technology, that's all :)

https://en.wikipedia.org/wiki/Technology_adoption_life_cycle


Color me surprised then! I'm still not sure I'd call it viable in general. But I've moved from "convinced" to "uncertain".

I'm at the tail of that S-curve for sure, so maybe I'm just get-off-my-lawning. Thanks for sharing.


Makes sense to me. This is going to be sick once it's ready!


Fun note: almost all of it is written in Haskell. If I remember correctly, they've also done a bunch of stuff that helps Godot, or at least something to do with the Haskell bindings. Very accessible guys: I once asked a question on the Gitter and got helpful replies almost immediately.


If it's written in Haskell, it's highly likely that it'll be far harder to modify than something written in, well, not-Haskell.

Haskell might be easy when you get the hang of it, but the vast majority of programmers haven't, and the language ideas are alien compared to the mainstream ones (Python, JS, C++, Lua).

The design of Haskell also encourages users to be very clever, which only makes code harder to read.


I don't think writing Haskell ever gets easy, at least it hasn't for me over about ten years. Writing good code in any language is hard, but Haskell makes it harder to write bad code.

I have to write object-oriented bullshit all day for my job, if I started a fun new project like this I'd happily choose Haskell. If people who only know JS and won't learn anything new can't contribute then that's a cost worth bearing.


> If people who only know JS and won't learn anything new can't contribute then that's a cost worth bearing.

I think this is an unfair and unnecessarily-snarky take.

My list of languages is fairly long these days. I've written PHP, Ruby, and Go in reasonable volume for mostly personal projects. I used to teach embedded C at uni. I write Python professionally every day. Recently I've even started playing with Rust (and had some success thanks to the awesome book). The list is far longer; these are just the ones used the most. I've been writing code in some capacity for the last 20 or so years (first self-taught, then at university, and more recently professionally full-time).

Despite all that, and not for lack of trying, for whatever reason purely functional languages are the only ones that elude me. Every couple of years I try Haskell or Erlang again and I just get nowhere fast.

Maybe it’s because I was never very good at maths, maybe it’s because I haven’t had sufficient motivation, or maybe I just haven’t found the right monad blog post to convert me. All I know is Haskell remains chronically out of reach to many experienced and inexperienced devs alike.


> for whatever reason purely functional languages are the only ones that elude me

That's because all of the languages you listed are really similar conceptually. PHP, Ruby, Go, C, and Python (and others you didn't list: Java, C++, C#, Perl, Lua, JavaScript) are far more similar to each other than they are to Haskell - they have different syntax and slightly different semantics, but for the most part they're all in the same language family. It's much easier to pick up Java after learning PHP than it is to learn programming for the first time - or, alternatively, to learn a structured language after having written unstructured BASIC for your whole life.

Lisp users (of which I am one) frequently talk about how all of those languages are "basically the same" - but for all that, Common Lisp is still way more similar to Python than it is to Haskell (which I've also tried and failed to learn).

Also, Haskell isn't just a functional language - it also is a lazy language, which means you have to now learn two new ways of thinking at once.

Erlang is similarly in a different world from the mainstream imperative-OO languages, but I don't think that being functional is the only thing that makes it hard to learn - it also has a fundamentally different model of concurrency (as well as being designed around concurrency) that is difficult to learn if you're just used to pthreads.

Similarly, I think that Prolog (logic programming), Agda (is it a programming language, or a theorem prover?), Rust (maybe; lifetimes), and Forth (stack programming) all present unique challenges to people like us who have only written Python/Ruby/Perl/Go/Lisp...


Yeah, I agree with all this. Amusingly, from your extended list I have also written a non-trivial amount of Java, JS, and Perl too which perhaps demonstrates your point further.

I hadn’t before considered the laziness part of Haskell, perhaps I simply hadn’t made it far enough to be aware of it before. Maybe that perspective will help me next time I feel the urge to try out fp again.

fwiw, to your point about other languages, Rust has certainly presented a challenge due to new paradigms, but it's never felt insurmountable. The Rust book is truly an incredible piece of writing that imo makes the language so much more accessible than it would otherwise be.


Yeah, I don't really see functional programming being that great of a fit for VR. I mean, the 3D environment is inherently object-oriented and stateful. And you're working on an extremely performance-intensive task, such that pure immutability really starts to get in the way. Also, Haskell's lazy evaluation has a bad reputation for poor and/or unpredictable performance. It's better to have a higher mean execution time with an extremely narrow standard deviation, so you can plan your frame budget and not occasionally blow it, dropping frames all over the floor.

What you want is something that puts the movement of memory between processes and threads front-and-center. Half the difficulty of writing a 3D rendering engine is coming up with a good memory model for loading 3D models and textures to push them onto the GPU. That really sounds more like Rust's wheelhouse than Haskell.


I disagree on the 3D environment being inherently OOP. ECS, a popular paradigm in 3D and 2D game making, can map pretty well to functional programming.
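A minimal sketch of the functional-ECS point: entities are just IDs, components are plain data keyed by entity, and a "system" is a pure function from old component data to new. This is illustrative only, not any particular engine's API:

```python
# ECS in functional style: no objects mutate; a system returns a new
# component table. (Hypothetical sketch, not a real engine's API.)

def movement_system(positions, velocities, dt):
    """Pure system: returns updated positions without mutating inputs."""
    return {
        eid: (x + vx * dt, y + vy * dt)
        for eid, (x, y) in positions.items()
        for vx, vy in [velocities.get(eid, (0.0, 0.0))]
    }

positions  = {1: (0.0, 0.0), 2: (5.0, 5.0)}
velocities = {1: (1.0, 2.0)}             # entity 2 has no velocity component

print(movement_system(positions, velocities, dt=0.5))
# → {1: (0.5, 1.0), 2: (5.0, 5.0)}
```

Because the system is a pure data transform, it's trivially testable and composes with other systems by plain function chaining, which is the shape Haskell (or Rust iterators) handle comfortably.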

I would pick Rust as well here, but only because Haskell has exceptions, less emphasis on safety, and less pleasant defaults than Rust. To get the Haskell I want, I need several pragmas and a different prelude...

I would say Rust is pretty functional itself, though.


What? ECS is soooo object oriented. It's basically the modern OOP attitude of pooh-poohing inheritance in favor of composition, writ large. Yet, somehow, all the components have deep inheritance hierarchies themselves, so [shrug].


Today I'd definitely use Rust. Most of the rendering work is done in Godot, which is C++.

GC hasn't been a noticeable issue as a result.


> the 3D environment is inherently object-oriented

What does it mean for the 3D environment to be inherently object-oriented?


Haskell reduces the size of the developer pool, but increases their productivity (the Haskell ecosystem is enormous and astoundingly solid given the number of devs in that space), and also makes the docs a joy to read.


This. I'm wondering how this is possible: high Haskell reusability, and codebases that are easy to get familiar with and extend once you know Haskell? Or being a genius? I guess it's the first.


Strict static types reduce a giant class of errors (including most of the footguns I'm always triggering in my day job with Python). Purity reduces another giant class: it makes it easy (indeed, kind of forces you) to restrict your IO to a thin, top-level layer, hence keeping the vast majority of the code purely functional and thus easy to test.
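The "thin IO layer" pattern isn't Haskell-specific, so here's its shape in a hypothetical Python example (not Simula's code): keep the logic pure and confine IO to one small top-level function.

```python
# Pure core, thin IO shell: the logic takes plain data and returns
# plain data, so only `main` touches the filesystem or terminal.

def summarize(lines):
    """Pure: no IO, fully deterministic, easy to unit-test."""
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

def main(path):
    """Impure shell: all file/terminal IO lives here and only here."""
    with open(path) as f:
        print(summarize(f.read().splitlines()))

print(summarize(["hello world", "pure core"]))  # → 2 lines, 4 words
```

Haskell's type system enforces this split (IO shows up in the types); in other languages it's only a discipline, which is part of the parent's point.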

Exactly how Haddock produces such wonderful automatic documentation, I don't know, but good God it does. The strict static types clearly help -- you can see exactly what every function inputs and outputs, what every data type needs, etc. And then you can jump to the definitions of any of those things, and if need be (almost never), the source code that defines them.


And what if it was the other way round? A steeper learning curve means that only the brightest, most motivated individuals can enter the realm, and those individuals are capable of doing more than the average developer.


You're right, that's clearly happening too.


> The design of Haskell also encourages users to be very clever, which only makes code harder to read.

This is BS - find me some "clever" code in one of these Haskell projects and I bet it's not clever but simply using a set of abstractions the maintainers like and grok and the reader just doesn't understand.


Maybe I'm being overly pedantic, but I think that's a textbook description of overly "clever" code in any language: it makes perfect sense to the authors but is nontrivial for the reader to understand.


Except in Haskell, it's not just some pet bit of code by the authors. It's a shared abstraction with well understood laws and theoretical underpinning.

What people call "clever" code in mainstream programming is not in any way similar to the Haskell being referred to.

Luckily in the Haskell world, we don't ascribe negative attributes like "cleverness" to code that isn't outsider-friendly. We gain a lot by not requiring that all our code is understandable by a mainstream programmer.


I find Haskell to be absolutely impenetrable, but I run XMonad and work with my config file just fine.

So users will probably be able to hack together what they want, but they may struggle to grow a large developer community. But that can still work out okay.


I often find myself wondering what the 'killer app' of purely functional programming languages is. For the longest time I assumed they would become much more popular as multi-core cpus proliferated, but pretty much every purely functional language out there is relegated to a dusty corner.


The absolute killer app is compilers, static analyzers, and similar. Other languages don't even come close for anything in that space.

However, I think that purely functional languages are useful for basically everything. Haskell is my general purpose language of choice, unless libraries or system constraints force me to another language.


> The absolute killer app is compilers, static analyzers, and similar.

In other words, those problems that don't involve users and the outside world.

Pure languages for pure problems. Impure languages for impure problems. It computes.


It's the year of Linux on your face.

This looks like a really neat project. I think not having pictures of the headset front and centre makes it feel very vapourware-ish, though; even if it's a hacked-together development unit, we still want to see it. Arguably, we might even want to see that more.

I struggle to envision this in my workflow, though; my experience with VR headsets suggests that for a lot of text they might not be the best choice just yet. Excited to see where this goes though!


With the Steam Deck, I really think 2022 will be the year of Linux.


I'm happy the hardware isn't locked down.

I love my iPad, but it's ultimately an expensive content consumption device.

The Steam Deck can run whatever you want on it; this opens it up to being useful even a decade from now.


I'm excited for the device and was able to reserve one. But honestly, I'll likely end up putting Windows 11 on mine.


It's a page about a VR headset, but neither this page nor the 'shop' page that lets you waitlist it has any pictures of the thing even by itself, let alone someone wearing it. What an absolutely bizarre choice.


The video at the top eventually shows somebody in the bottom right corner wearing it.


That person is wearing the HTC Vive.


Yes, I want information about the hardware, not just the software (and I hope I can do more with it than just run a virtual window manager).


Not sure if I should feel more safe about this than the usual Kickstarter scam or not. Those have at least some product images.


EDIT: nvm


Think you meant *wary, though yeah sometimes weary fits.


If you drop the assumption that it is bizarre, you can think about the actual reasons for the omission of said pictures.


Was trying to give them the benefit of the doubt, here.


fwiw it's supposed to be around $2k


I'm very keen for this market to open up, but we need to talk about the quality of the desktop before moving on.

On the website demo, the person is drinking coffee, the YouTube window is the wrong ratio, and the text is too tall. For me, I really want picture-perfect rendering before jumping in. I don't want to look at badly rendered stuff; it's part of the reason Linux was so hard back in the day: getting your text to render and your screen resolution to work right.

Now, for me, it looks like I can fit about as many terminals on "screen" as I can on two 1280x1024 screens. In all the demos, there are at most 5 windows open. On my current screen I have one browser, and 8 terminals, and there is still loads more space.

The thing that makes me a little sad is that we still all appear to be stuck rendering everything on the inside of a sphere. If we are in VR, then we don't have to limit ourselves to laying out windows on a single primitive. Where are the virtual shelves? Where is the quick change? What about hotspots to bring groups of windows back into the near field?

We have unlimited z depth, so surely it's time to start using it? Unlike a normal screen, we have parallax and gaze sensors; we really need to start using them.

(It looks like they don't have room tracking, so it feels like they have limited 6DoF. Source: the coffee video; I'd expect much more sideways translation if they had proper headset tracking.)


I think they'll get there. This is still early, they're probably focusing on the core rendering right now.

> If we are in VR, then we don't have to limit ourselves to laying out windows on a single primitive. Where are the virtual shelves? Where is the quick change? What about hotspots to bring groups of windows back into the near field?

From playing VR games, actually the best implementation of this I've seen is to put them on a "belt" of sorts. Like an oversized toolbelt (so they're away from your body) so you can simply look down and grab stuff.

Spheres do work well, though. It keeps all the text at the same distance from you, so the edges aren't out of focus. I suspect it also makes zooming easier since you can just move the camera closer. If you have an actual 3D space, moving the camera can get weird, and the camera being weird in VR is really disorienting.
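The sphere point above can be made concrete: placing a window at a given yaw/pitch is just a spherical-to-Cartesian conversion at a fixed radius, so every window sits the same distance from the viewer (uniform focus), and "zooming" is shrinking the radius without touching the angles. A small illustrative sketch, not Simula's actual layout code:

```python
# Windows on a view sphere: same radius for every window, so all text
# is equally in focus. (Hypothetical geometry sketch.)

import math

def window_position(yaw, pitch, radius=2.0):
    """Place a window on the view sphere. Angles in radians; +z is forward."""
    x = radius * math.cos(pitch) * math.sin(yaw)
    y = radius * math.sin(pitch)
    z = radius * math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

center = window_position(0.0, 0.0)           # dead ahead: (0.0, 0.0, 2.0)
left   = window_position(-math.pi / 4, 0.1)  # 45° left, slightly raised

# Every window is the same distance from the viewer at the origin:
for w in (center, left):
    print(round(math.dist((0.0, 0.0, 0.0), w), 6))  # → 2.0 both times
```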

> (It looks like they don't have room tracking, so it feels like they have limited 6DoF. Source: the coffee video; I'd expect much more sideways translation if they had proper headset tracking.)

Not having room tracking doesn't prevent movement. You'd just move with wasd like before. Not having room tracking means you can move your chair 6 inches without all your windows getting shifted. Likewise, you can stand up in your standing desk without re-adjusting all the windows.

Plus I'm sure it'll help keep the cost down.


What do you do with 8 terminals open at the same time? I can never focus on more than one or two at a time.


good question!

Depending on the project, it'll normally be a browser for reference (or, if it's web-based, testing as well, but that's rare).

Then I have between 1/2 and 2/3 of the screen space left to have terminals in.

I'll have one or two long terminals open with vim for the main file(s) I'm working on, then 1-4 smaller terminals for running the program/tests. If it's a big project, then more terminals for reference (i.e. library one, library two, etc.).

Think of it more as having a really big desk with loads of copies of the same reference book open to different pages. It's quicker to glance than it is to alt-tab. For me (and I'm not claiming this is a universal trait) that flash of screens when we switch context from one full screen to the other makes me lose context on what I'm working on.

I have virtual monitors as well, all split into contexts: one will have email/slack/$messenger, one will have "personal" internets (i.e. timewasting sites) and a professional browser that's logged into company services. If I'm doing graphics, then drawing/editing screens as well.

I used to be a proponent of many monitors. I bought a Matrox Parhelia new to support triple monitors when they came out (yes, I am that old). However, due to the way my mind works, I found that with three screens I would end up in a spinlock with two browsers open, one in each monitor, and not do any work.


My eyesight is bad, and I use a 15" laptop with about 300 views open at a time. I feel no need for a bigger screen.

I keep one window on screen at any time, full screen. I use 2 sets of 16 virtual desktops, tmux, emacs, and brave. Maybe 10 instances of each app, and within each maybe ten tabs. So that's a total of about 300 windows. It gives me zero visibility problems. I much prefer using my hands to change what I'm looking at instead of my neck.


Surely vim can manage its own display and you don't need multiple terminals each running their own instance?


Vim can, but why in the world would I have vim do that itself when that means I have to learn/use both vim's windowing system and my window manager's windowing system? What advantage does it give me...


I spend more time interacting with Emacs than I do my window system, so I am quite happy arranging terminals, database clients, REPLs and code inside that, and there are huge advantages to all those being able to communicate frictionlessly. It'd be very annoying having to constantly multitask.


I feel like emacs makes a much better case for this than vim, because it actually takes advantage of everything being one program (at the expense of rewriting the world, and basically being its own operating system).


It can, and sometimes it's quite useful. If I have long terminals, I can :split, which is a nice option to have.

However, a lot of the time the vims are there for reference.


Same. I also stopped using two monitors because it was distracting.


I'm debugging an embedded system, and have three different minicom sessions going, looking at the simultaneous output of the various consoles.

In general, I often have up to 10 tabs open on Gnome terminal, and a GNU screen session in each with up to 10 screens. Maintaining lots of context.

Within a single GNU screen session, I'll often have a build window, some editing windows for code, editing windows for config files, etc.


Maintain context. I might have several terminals up, each ssh'd to a different server. At least one terminal per software project. It helps that I use a tiling manager, so I can keep them all visible at the same time.


In Emacs I have tens of eshells open, not to mention SSH sessions, etc! I use one shell per task and then context switch using Emacs buffer switching machinery.


"Microservices, son!"


I would love to see VR/AR work cross paths with the concept of the "Memory Palace". Maybe groups of folders, websites, screens etc could be set up to be in different rooms of your house. Home automations could be configured to be in sync with your movement around the house.

Every time I stand up and walk to a different room the screens move to the periphery, and return when I sit. When a loud sound is heard, a notification identifies the sound and asks if you want to replay it. Etc.


The killer "corporate" app for VR in my opinion would be some kind of teleconference app. I absolutely HATE trying to have interactive brainstorming/collaborative design meetings remotely. There are various pancake tools, but none really compare to the experience of 2 people standing in front of a whiteboard, making wild motions with their arms that seemingly both people understand while drawing on the board.

Some of the experiences I've had in VR are unrivaled by any pancake game I've ever played for that very reason.

I'm not 100% sure I want to be in VR all day, all the time. But VR should have a place, at the very least, in a remote workforce.


I absolutely agree. I'm very much a whiteboard hand-waver guy. I explain with gestures as much as I do with the actual drawing (might be an Italian thing too).

That said, this "pancake" problem reminded me of the old Wii head-tracking project.[1] There might be an interesting hybrid opportunity with that idea and 3d avatars. I feel like the head-tracking hack never really took off because it only worked as a solo experience. But since remote work is mostly us all individually sitting at a computer I could see it working better.

Combine this idea with a large dedicated monitor/tv and now you have something that would literally just feel like a window/portal to the person you're talking to.

[1] https://www.youtube.com/watch?v=Jd3-eiid-Uw


I'd expect conventional live footage of whiteboard hand-waving to wildly outdo VR for the entire foreseeable future. If you need two guys hand-waving in different rooms, focus the extra tech on the whiteboards, to somehow merge their content.


Plug: this is along the lines of our thinking at ShareTheBoard. Rather than kill the whiteboard with VR, we use vision to give it new tricks.

Step 1 is real-time content digitization and obstacle avoidance (available now). Step n involves synchronizing cameras and projectors for a deviceless AR experience: true "remote whiteboarding." We've completed this in laboratory conditions and hope to release it next year.


Is it possible to use this with Teams or Zoom or does everyone need a new app? This seems like an ideal plug-in or app to use with those since it's essentially just a "filter" app from the presenter's side. Using your app as the video source seems like it'd be straightforward to do.


Indeed, most of our users simply open our app, then use Teams/Zoom/Meet to share that tab using the built-in screen-share. That gives you content legibility but none of the other benefits: by hitting our meeting link (in browser: no sign-up/download needed), all viewers can also save board contents at any time and even add digital content of their own.

Can't do that second part "through" videoconferencing apps - not without a deeper integration (read: participation with the companies in question). It's on our list of possible developments but will require dancing partners on the other side.


What is "pancake" in this context?


Pancake is 'not VR' i.e. a traditional flat monitor.


Oculus Horizon Workrooms? It seems they've already achieved this.


For me personally, my main concern would be text readability. It was probably an older model, but on a Vive I played games on for a good while, I struggled to read text on the game's HUD (Elite Dangerous). Pixelation in the center was also an issue, but manageable.

I get through my workday nowadays with minimal head movement, enough to avoid my neck getting stuck, but I don't have to crane around too much. That's probably not an issue in a VR environment either though, since your primary thing will be in front of you as well.

Comfort is another one, I think I'd like to be more reclined if I didn't have to look at a screen. But then I'm worried about muscle atrophy, I don't do enough sports and exercise as it is.


The Valve Index significantly improved on text readability. For example, I was able to easily read the blurb on the back of a random paperback in Half Life: Alyx.



Really interesting. And when I have a headset that is comfortable and has high enough resolution I may switch to working in VR. Right now the Quest 2 that I have is too low resolution and too uncomfortable.

There are a few really high resolution VR headsets that I can't afford. But I just heard about a (supposedly) 200 gram one coming out of China. If it has good resolution that might be a viable choice for this (as an alternative to whatever their hardware is, which they didn't say).

But what makes this stuff REALLY interesting to me are the possibilities for 3D widgets and interface elements and metaphors. That has been explored a little and generally discarded in flatland but I think in an environment that is always 3d with good hand tracking, it's a different ballgame.

One thing to imagine would be, what could 3d "web browsing" be like if you knew that everyone looking at your site was in VR? Maybe something along the lines of JanusXR. Although I think they barely began to scratch the surface.

To me the idea of working in VR could be a gateway to the 'metaverse'. It could start with people trying to make more interesting environments for their floating 2d windows. Then they add some physics and locomotion. Now they want to collaborate over the network.

In three years, the most popular Linux distribution could be the one you run on your headset, and could come with multiplayer VR baked in.

But anyway I would want to escape from the 2d windows. There might even be some interesting ways to represent code or codebases in 3d. Or even new ways to manipulate text with your hands.


I see many comments about resolution, which I guess is fair. But there are high-resolution headsets available on the market right now: Varjo VR-3 afaik is the top dog currently https://varjo.com/products/vr-3/ and Pimax also has some high res models. So don't dismiss the idea too hastily based on experiences of old and/or cheap HW.


I've never seen a Varjo headset in person, so I can't say anything about it other than it's extremely expensive, both to buy and to operate (you have to agree to a yearly service contract).

But everyone should avoid PiMax like the plague. Their hardware is super buggy, with lots of weird lens distortions and colored static in the displays. Their drivers are weirdly front and center, like HP does with its printer drivers, making me wonder just what the hell they think they are doing (and the drivers are very buggy as well). Whatever resolution advantage they claim is wasted on bad image compression and extremely bad optics.

But the worst part of all is that the business side is really scammy. They'll gladly take your money and send you a shipping notification long before they ever have an actual headset they can send you. I get that manufacturing delays are a thing, but for PiMax they've always been a thing, long before the global pandemic and chip shortage. Don't tell me I'm going to get a headset "any day now, it's in the mail" for 3 months straight, only to finally get your act in gear when I threaten to reverse the credit card charge.

Of all the people I've talked to who have eventually gotten their PiMax headset, only one says he likes it, but he's also super PRC-nationalistic and has accused people of racism anytime they talk bad about PiMax.


The Varjo headset must be awesome in person, so far every reviewer is blown away by the resolution:

https://www.youtube.com/watch?v=NOk_M1Ib5F0

https://youtu.be/iDb0OjNG2is?t=736

https://youtu.be/e6djDSf0kxg?t=699

Through-the-lens recording:

https://www.youtube.com/watch?v=XOEcv0mFdXg


I've been following SimulaVR/Simula for a while now, but really did not want to buy a (legacy) HTC Vive headset. This idea of a standalone Linux machine in a headset is something I'd definitely put some money towards given a bit more detail and timelines.

Also pretty excited by John Carmack's recent tweets on sideloading stuff to the Quest. It feels like that would provide a much bigger opportunity for this project (cameras and all), than building/shipping new hardware.


They're pretty cheap second hand. I recently got one for €200, including base stations (forward-compatible to a point) and "pro" audio strap. It's my first VR headset though, I understand not wanting to spend too much on something you already have a better version of.

Also, the Valve Index is pretty much last (tethered) gen.


Sideloading to the Go, not Quest right?


You're right, Carmack has been working on the Go.

Sideloading does work on the Quest. The Go work has been about a more open OS.


10x more productivity? Oh boy, can I work just an hour a day with this thing?


I'm glad someone pointed this out. What is it with this obsession to be 1000x productive all the time? I will not get paid more for it, will I? And how is this going to make me more productive anyway? By removing "distractions"? I doubt anything can make me solve problems twice as fast as I already do, let alone 10x. This is ridiculous.


No, you will still work 8 hours, maybe even 10; it's the great paradox of productivity increases. E.g., teachers had to write far fewer reports before computers became ubiquitous, but once they did, the number of reports grew out of proportion, so that teachers now spend more time on reports than they used to. Progress!


Hopefully it can autofill all these TPS reports that I have to fill out to show my productivity!


Simula One was the first version of the Simula programming language:

https://portablesimula.github.io/github.io/doc/HiNC1-webvers...


My thought exactly.


The main benefit of vr seems to have many "screens" for free, but the more years I've worked the more I've reduced the number/size of screens I use. I find it more ergonomic to not be looking around so much.


Pretty cool. Personally, I would love AR for office work. I actually investigated this for my company and got to meet with Microsoft to try the Hololens. Cool stuff. It was too expensive for us though.

Here's a video from a different company.

https://m.youtube.com/watch?v=0NogltmewmQ


I have a question: why? How will it improve my multi-monitor setup? I see only potential cons here. Wearing a headset all day doesn't sound good for your neck and eyes, at all.


Yeah, I don't think it's going to improve a multi-monitor setup. Having a real office illuminated by the trillions of rays from the free sun in the sky is always going to be better than VR. That's why I really think the future is going to be an AR display that lets you leverage 90% of the real world with 10% of a simulated overlay. But! The road to the AR future is paved with VR work like this, so I dare not write it off as useless.


My office is an interior room, no windows. VR is definitely better than my office.


This reminds me a lot of SphereXP [1] A WindowsXP application that converted your desktop into a "spherical" desktop.

I agree on the cons: wearing a headset all day? I also wonder what looking at a light bulb 5cm from your eyes all day long will feel like.

OTOH, for me one _advantage_ these types of VR/AR technologies can bring is "endless desktop space". Right now we have something like that with "multiple desktops" in Linux and macOS. But it would be cool to have just an endless expanse of screen real estate to tile all open windows.

[1] https://www.youtube.com/watch?v=oeHe-li-cZE


You might have it backwards about neck movement.

Sitting at a desk looking at a monitor in one position sounds terrible for your neck.

With VR you don't have physical (and cost) limitations of having screens above and below you, they could be all around you. You could even program the screen(s) to slowly rotate around you to induce motion so your neck/body isn't in the same place all day.


If you don't have the space for a multi monitor setup then virtual monitors could be an alternative.


I'm super excited with posts like these and the recent [1]. I've never been a digital nomad, but with this stuff, there's now one hurdle less :)

[1] https://news.ycombinator.com/item?id=28678041


Isn't this just a Desktop implementation for the SteamVR? It is even open source right here -- https://github.com/SimulaVR/Simula

No headset in sight, other than just "any headset".


"Simula One Headset: We are also in the process of developing a (limited number of) portable VR headsets for sale which come with SimulaVR mounted on them by default. If you are interested in purchasing one, visit our website and join our waitlist to receive a place in line and/or periodic updates on its development."


The number one thing stopping this is resolution. Monitors need to be sharp and crisp. It's none of those things in VR.


Right, and it's most important that text be crisp. Maybe they could re-render all of the visible text every frame with the VR perspective matrix, so you could have helmet-pixel-sharp antialiasing and hinting.

Maaayyyybe.


Comfort is a bigger blocker. Headsets with a high enough resolution already exist, but they are too large and uncomfortable to wear for more than a short session.


Resolution is the resolution.


I sincerely hope VR will change the way we compute one day, but the wait will be a decade or more, as the progress is small and incremental. Unless everything turns out to be proprietary, in which case it'll be never.


There's a bunch of old guys from Norway here for you, complaining that your name was already taken.


inorite?! This is sacrilege! Use any other name!


I actually tried to get us to rename but we never managed before we had to incorporate and, yeah, we're a bit stuck now.

When I was interviewing for a programming language theory PhD program in... Bristol I think, back when this was purely software, I had probably a 20 min discussion on this because the prof thought it was related to the language.


The name you put on the front door doesn't have to have anything to do with the name you incorporated as. There are lots of companies "doing business as" some completely different name.


One thing that worries me about wearing VR headsets for extended periods of time (whether for game or for work) is that your eyes are focusing at a single distance for so long. Not sure if there is any research to suggest extended usage is a problem, but it 'feels' like it could be. At least with a desktop or laptop, when you look around the room, you are actually changing your focal length.


From experience, I can tell you that you aren't focused on a single point; in fact your eyes will change focus to look at objects in the distance just like in real life. In my virtual office setup, when I want to relax my eyes, I can look over my right shoulder at the office wall, or out the window, and my focus shifts. You can also make your screens as big as you want and as far away as you want; there are a lot of options to keep your eyes from staying at a single focal length for extended periods of time. My YouTube screen is also pushed back a bit from my normal monitor so I can shift focus for a few minutes every hour or so.


Your eyes do move, and they point at other stuff, but the focal distance afforded by the lenses is fixed (it's as if everything were a few meters away, IIRC).

I also wonder what could happen if the ciliary muscle isn't used enough. Does it get weak and you rapidly get short-sightedness, or is occasional exercise enough?

Edit: maybe an answer could come from looking at people who have recovered from extended comas. Does their visual acuity degrade? If not, that's promising. If yes, it could also come from other causes like lack of light (lack of UV light, especially during teenage years, has also been linked to short-sightedness).


> but the focal length afforded by the lenses is fixed

That seems weird to me because when I look at a screen at a different distance to the one I'm using I can see the other screens getting blurry just like in real life.


My guess is that this has to do with stereoscopic vision, not focal distance. Does the other screen become simply washed-out blurry or is it double-vision-type of blurry?


OK, that makes sense. I actually just tried this one-eyed, and you are absolutely correct: everything is in focus. So it must just be stereoscopy that made me feel like I was focusing differently.

That said, it isn't any more or less strenuous on my eyes than IRL, except when the text is too small. Font smoothing still isn't great, but if the screen is big enough you don't notice it.


This statement does not make much sense to me. In VR everything is at the same focal distance, you just fake perspective/parallax/size/stereovision to make it feel 3D. You can not change the focal distance if you just have a screen and fixed lenses. Even actuated lenses would not be enough as they will change the focal distance for the entire scene.

"Perceived distance/depth" has little to do with "focal distance" in this situation.
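To put a rough number on it (the 2 m figure below is an illustrative assumption, not a spec of any particular headset): a fixed-focus HMD lens places the panel at one virtual image distance $d$, so the eye's accommodation demand in diopters is identical for every rendered object, no matter its apparent depth:

```latex
% Accommodation demand of a fixed-focus HMD (illustrative numbers)
A = \frac{1}{d}
\qquad \text{e.g. } d = 2\,\mathrm{m} \;\Rightarrow\; A = 0.5\,\mathrm{D}
\text{ for every object in the scene}
```

Vergence, by contrast, follows the stereo-rendered depth of each object, and this mismatch is the well-known vergence-accommodation conflict.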


I would definitely try AR glasses as a monitor replacement but I find most VR headsets to make me feel pretty ill after less than one hour.


I feel the same. I'd be ready to pay good money for an AR headset focused on replacing external monitors (no gaming but very good text readability). AR and not VR because I don't want to be completely isolated from my surroundings.

And, if luminosity would allow, working outside with it would be a dream come true to me!


So much this! I'm writing this while sitting at my desk; in front of it is my 27in monitor, and to the left is my Mac's monitor on a stand.

Behind all of this is my room's white wall with an area of around 4x2 meters. Imagine if I could change my monitor with an AR version of it that mapped/projected the "desktop" to my wall.

And then, let's say I want to get into a meeting with a colleague to do some diagrams, so we "map" some Whiteboard software into another wall in my room and a wall in my colleague's room. We will be looking at the same and could even "draw" with hand gestures or something.

I have more hope for AR than for VR as well.


I don't like to see where it's going. I mean, there's a good chance this is the future of desktop computing. And it's no good, in my opinion. Computers already cut us off from our bodies and trap us in our heads as it is. Replacing the screen with VR will push this effect even further. Expect more anxiety, depression, social awkwardness.

I feel lucky to be born in times where we have technology and abundance but it didn't yet turn to a sci-fi dystopia created by tech nerds.

And even from a productivity perspective, a regular screen is enough if it's powered by a good window manager (i.e. i3wm). There's no reason to block the reality with a VR, unless you are living and working in a room with no windows, then maybe you can get yourself a nicer view and that's it.


The ease and portability are what make me moderately interested. I had kids over the last few years and no longer have a home office. I might get a desk area at some point but the only option is right next to my wife and my time to code would be when she's asleep.

So I plop on the couch with my laptop but the constraints of that setup quickly drain my energy. If I had a quality "multi monitor" setup that is ready any time, anywhere, my productivity would increase, even if the device/process itself isn't better than monitors + i3.

And if that happens, I'll have a happier time coding, probably produce more code, and live a happier life (because currently I am unhappy with my lack of output and that feeds back into my family life). That isn't some magical hope either, the same would happen if I had a multi monitor office again, but buying a single device is easier.

So don't despair, there will be plenty of good outcomes from stuff like this project.


I'm not sure the fullscreen video in the page background helps to promote working with a VR headset. It's lagging, tearing and blurry. If I get dizzy by just looking at the page, how on earth would I last more than 5 minutes in a VR office without throwing up?


This is nice and all, but since the dropping in price of larger monitors / TVs, it doesn't seem that practical.

Right now I'm using a 42in LG 4K TV as my monitor, and I've mostly gotten used to it. One thing is that since I'm older, it is nice to push the monitor further away from me than I had in the past, so the monitor itself is on a file cabinet (with an adjustable stand) not sitting on my desk itself.

TVs are not monitors, and it took a bit of fiddling with the TV settings (game console mode, turn off overscan) and drivers on the desktop side, but I'm fairly pleased with the setup these days. I really like having additional vertical space for editing windows.


But you cannot take your 42in TV with you in a backpack. Mobile setups are more practical than fixed ones for most people.


That's true.

Though with the power consumption required for a VR GPU display, you'll likely need a decent-sized battery pack (maybe on your belt), for this not to be tethered to a power outlet.

For this product space, I think I personally would be happier with a AR style solution, though that comes with its own challenges.

I've got to wonder if a CastAR / Tilt5 AR projection system would work better for working. You could certainly roll up the retro-reflective "screen" into your backpack.


For a work VR headset to work I think you need a couple of things.

One is that it needs to be comfortable enough to wear for long periods of time. I usually get quite warm when wearing one. You also often end up with impressions on your face. This is fine for gaming or shorter sessions but would be a distraction if trying to focus for longer periods of time.

It would also need to have a higher resolution than most (all?) current VR headsets. Text needs to be huge so even though you can have lots of virtual screens you can't fit much on each.

Finally, I imagine that there could be a more innovative interface than just screens in a virtual space. Something that embraces the close-to-physical-reality illusion of VR.


The issues you're presenting seem real for some, but enthusiasts seem to be making it work[1: post from yesterday].

I like the call in your last point, but personally I think an innovation like 360 degree resizable and movable windows is a reasonable step up from where we're currently at. It would be nice to integrate the work we do into physical space a bit more though. I've wondered about doing practical programming work in an infinifactory-type[2] interface. I don't think it's necessarily a good idea, but it'd be fun to see attempts at no-code tooling embedded in the space.

[1] https://news.ycombinator.com/item?id=28678041 [2] https://www.zachtronics.com/infinifactory/


Resolution is the killer. The distance at which I'm currently typing this on my 32" monitor is much larger than the virtual draw distance I can "see" in VR at current resolutions.

Regarding comfort: I have been curious whether you could take the weight off the head by suspending the headset off the back of a chair so it "floats" at face level via a tether or something.


Looks a lot like Project Looking Glass :-) https://en.wikipedia.org/wiki/Project_Looking_Glass


Such a forgotten classic. I remember it took me hours to get the dependencies right on my SUSE, and once it worked I was overwhelmed by what was possible. It was so weird and beautiful at once.

Compiz desktop cubes [1] and native zooming were all the rage back then as well.

[1] https://heise.cloudimg.io/width/993/q75.png-lossy-75.webp-lo...


Also like SphereXP ( https://web.archive.org/web/20040130224608/http://www.hamar.... ) . A pretty undervalued project from around the same time


Excited for this! I know it's early, but everything was early once - VR will definitely be one of the next big computing interfaces, and we need alternatives from the current options being put out by Big Tech.


The video on the website is really bad quality. This one is better: https://www.youtube.com/watch?v=FWLuwG91HnI


They should probably talk to someone who knows something about ergonomics–the way his neck was craning to look at those vertical "screens" looked painful


This is what I started thinking about: what are the proper ergonomics for a VR office? Constantly rotating one's neck may not actually be good. It seems like this would become immersion in a curved display with a huge horizon for every subscreen you want, with a "window" into the real world when you turn completely around (180°).


Yo. I want! How about Oculus Go compatibility, especially since Facebook decided to be a step closer to "reasonable" when it comes to sideloading et al?


I've tried VR with my phone and it looked promising to me WRT resolution and clarity. I like the idea of a camera to let me see my workspace and I suspect with a few iterations it'll be amazing.

Given what I do to my posture as a desk worker, I'm thinking a little extra weight on my head might do good things for my back tbh.

If the price is right, I'm down to try it anyway.


The website states "10x more screens ... 10x more focus ... 10x more productivity"

That's a stretch! Or at least somewhere to improve your pitch. Personally, the only 10x productivity boost I get is by disconnecting from wifi.

I'd love to see the windows non-square, more like a POV or fisheye if that's possible. Cool product, I hope to use it soon.


After about 2 or 3 visible windows at a given moment, I start becoming more distracted than productive, but this development still seems like an inevitability that I'm excited to try out.


Compiling Haskell in the background? Sold.

Jokes aside I've tried a few virtual desktops in VR and it _seems_ promising. The text clarity is a hard one, the weight of the headset for prolonged sessions would also be difficult.

A whiteboard app that works with non-VR users w/o 3D would be nice. Something like Miro but with nice feel in VR.


The Simula VR window manager is actually written in Haskell ;)


So is this the future? Hundreds of office workers wearing a VR headset for 8 hours in an open plan office?


No, they'll all be working remotely in their own matrix-style cocoons. The nutritional paste will be delivered by Doordash.


We won't have offices in the future


So Ready Player One basically? :)


I’ll echo both the praise and the concerns here - awesome that somebody has made this, definitely has a long way to go before I would consider using it, but honestly my main question is... is all-day VR not terrible for your eyes?


See also a project member (allegedly) talking about it yesterday here: https://news.ycombinator.com/item?id=28678541


I guess I must be the odd one out, I don't find adding more screens does a ton to help my productivity, it just means more context switching.

Maybe if I needed to do more stuff where I compared things side-by-side...


From the wiki on their Github [1] and looking at Github contributors, this is a 3 person project (and no Github commits since June 2021). They're clearly not making their own VR headset, they're using something off the shelf (from other comments seems like an HTC Vive). Making the website ambiguous about this is just confusing.

I saw a couple YC apps in there, so I'm sure they had to be ambitious, but thinking realistically there's a big difference between being:

1) A VR headset for work company

2) A VR operating system for work company

3) A window manager for VR for work company

[1] https://github.com/SimulaVR/Simula/wiki/Simula-Master-Plan


I suppose it’s a difficult field to enter? Idk if it’s needed but my naive assumption is that I won’t be able to contribute without having a VR headset.

Speaking of, what’s a good VR headset that I can purchase in ‘21 that does not have anything to do with FB? I enjoyed playing Alyx on the Index so that’s my prime candidate for now, for both gaming and productivity.


Have you considered HP Reverb G2?


We're making our own headset. Nothing off-the-shelf was available or satisfactory.

Most of the recent Github commits are in the dev branch, but right now it's 90% George working on the code. I'm busy on the hardware and wrangling vendors side, our third employee is the ME so no coding there.


Do you have to air-guitar the keyboard?


Has anybody hacked regular consumer VR headsets for working with regular apps?


For Oculus Quest 2 there is already a focus on office work. From the start we've had Virtual Desktop, and since the v28 software update Oculus itself has made several office-focused moves.


I have played around with it, and the resolution just isn't there. The desktop space in VR is ironically less than on a laptop, because every window has to be massive to be usable at all. In a future with 2x4K res or something in that ballpark, this might be interesting.


If you're anything like me, by your mid-forties you aren't going to need 4k per eye.

The Quest 2 is already pretty much as sharp as my eyes can resolve - and that's with prescription glasses.


I suppose that's an alternative use case for the tech - to simulate and get an understanding of how to build UX that is friendly towards those with poor eyesight. Haha.


Well, the non-standalone ones can already do whatever you want them to since they're just a display


Finally, RSI for your neck!


I would LOVE an ultra high res VR standalone headset for reading


My neck hurts already just watching that teaser video...

Looks cool though!


Reminds me of ~2008-era Linux. Compiz, on steroids.


+1 Compiz is one of the things which originally inspired me to try Linux back in the mid 2000s.


Are there any estimates on what would a unit cost?


Well, I can tell you from a lot of research into purchasing headsets for use in an adult education environment: it'll have to be right around $1000 all-in to be competitive. That's what Oculus Quest 2s with Elite Straps through Oculus for Business, or Windows MR or HTC Vive headsets with min-spec VR laptops, will cost you. Pico Neo 3 is a little cheaper, but it would have also come at significant extra development cost for us.

Basically, any way I tried to slice the problem of "full VR system I can box up and send to people without having Facebook spy on them" came out to $1000/ea. We ultimately went with Oculus for Business to keep everything small and easy to setup.


We're planning for 2k. Due to our low volumes etc going hard on optimizing the unit cost is difficult.


What's the resolution per eye?


We're evaluating 2880x2880 displays right now. With our upcoming optics, we can get up to 45PPD in the foveal area if everything works out as planned (they're basically a variable magnification optic).
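For rough context on why a variable magnification optic matters here (the ~100° horizontal FOV below is my own assumption, not a stated spec): spreading a 2880-pixel-wide panel uniformly across the field of view only gives

```latex
% Average angular resolution under a uniform-magnification assumption
\mathrm{PPD}_{\mathrm{avg}} \approx \frac{2880\ \mathrm{px}}{100^{\circ}} \approx 29\ \mathrm{px}/^{\circ}
```

so reaching 45 PPD in the foveal region requires the optic to concentrate pixel density in the center at the expense of the periphery, rather than a higher-resolution panel.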

Depending on vendor support we might have to go to 2.5k displays from another vendor, but hopefully we can get the support we need for the 3k displays.


OK nice, that's better than the G2. What about IPD? All HMDs have too narrow an IPD range for me; what is the challenge with offering wide IPD? The human IPD range is 51 to 77mm. Quest 2 offers only up to 66mm, the Index up to 70mm.

How's the lens glare?

Can you use Simula currently with Quest 2 air link or do you need to be tethered?

EDIT: So yeah, this is Linux only? There will not be a Windows version?


We're trying to hit as wide an IPD range as feasible. Right now with the BOE displays we can hit an IPD of 55–77 mm, but the design isn't finalized yet.

Lens glare, too early in the design to tell yet. I'll have more info in late Oct.

It's Linux only as Windows doesn't have the APIs for what we're doing (being a window manager, basically). No idea if WSL will work nowadays; definitely not without a lot of finagling.


Depends on the device you use. They seem to support a few different ones.

From here: https://github.com/SimulaVR/Simula it seems like anything that will run steamvr should do it.


I was obviously referring to their HMD.


Seems like a tool for 3D designers


Check the previous discussion link, it also allows you to create work environments and such.


Their approach of having a desktop metaphor within the VR environment is one that I have argued for a long time is the "right" approach to eventually grow VR as a general purpose compute platform.

The only two other examples of this going right now are Microsoft's Windows Mixed Reality platform with Windows Holographic and the Magic Leap One's interface. Windows MR works really well; Magic Leap is extremely half-baked and has had very little effort put into it (well, at least that was the case a year ago; I ended up selling my headset for lack of use).

Having been a Windows developer for 20 years, it's extremely frustrating to me that Microsoft won't make a standalone VR headset. They have all the technical capability to do it. The HoloLens 2 is a sufficiently powerful compute platform and runs the full Windows MR experience. The PC-tethered Windows MR headsets are very high resolution and some of them are even quite comfortable (I regularly use a Samsung Odyssey+ for work). But they seem hell-bent on pushing diffractive waveguide displays for any mobile devices, which I think by now have pretty handily proven to be straight-up garbage.

How a Linux headset could succeed here where others are really stagnating is to develop:

A) Hotswappable batteries. You're not going to get all-day usage out of a single battery pack anytime soon (my Quest 2 with the Elite Strap doubles the battery life to a whole TWO HOURS OMG!), and having to connect a wire to a device that is supposed to be standalone just for power is kind of like playing with one of those line-tethered model airplanes: what you really want is an RC model airplane, so you're just stuck being disappointed all the time.

B) A really good spatialized audio system. Spatialized audio is an often overlooked component of making a believable VR scene. All of the ambisonic audio drivers are platform-specific right now, and they each have their pros and cons. I really like Microsoft's HRTF (head-related transfer function) implementation, as it seems more realistic than Oculus's. It'd be especially nice to have work put into Chromium and Firefox to upgrade the WebAudio system to use the system's spatializer rather than implement their own (Google's Resonance is particularly bad).

C) Really big emphasis on accelerated 2D rendering. 3D rendering is great for games, but work is all text and text is all 2D rendering. Some of the most costly components of my VR projects are the text rendering. You can go a long way with low-fi 3D as an aesthetic, but 2D absolutely needs to be crisp and tight. And it's not just about the resolution of the display. Your 2D buffers aren't going to ever map one-to-one to the display's pixels, because your head is moving around, you'll be looking at things at slight angles and in motion, etc., and most mipmapping algorithms are designed for gradients, not sharp edges. So making clear text is really hard. Oculus has a hack where 2D quads can be rendered in a separate pass from the rest of the 3D environment, but there are issues with it regarding scene compositing and hit testing.

D) A native scene graph, something akin to extending the desktop metaphor into three dimensions, not just for compositing 2D windows but for allowing 3D applications to mix and match objects. Windows MR and SteamVR are the only systems that really even attempt to do this, with neither really seeing enough emphasis. Every VR app right now runs in a completely exclusive, retained mode. That's fine for games, but it's completely unnecessary for things like teleconferencing apps. Why shouldn't you be able to "spaceshare" like we do screen-sharing? Then you'd be able to have your teleconferencing app separate from your whiteboard app.
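To illustrate the point in (C) about mipmapping and sharp edges, here's a toy sketch (my own illustration, not how any real headset compositor works): basic mip generation is a box filter, and a hard black/white edge like text picks up intermediate gray values as soon as the edge isn't aligned to the downsampling grid.

```python
# Toy illustration of why naive mipmapping softens text: each mip
# level halves resolution by averaging neighboring texels (a box
# filter). A crisp 1D black/white edge acquires intermediate grays
# at every level unless it happens to align with the texel pairs.

def next_mip_level(texels):
    """Average adjacent pairs to produce the next (half-size) mip level."""
    return [(texels[i] + texels[i + 1]) / 2 for i in range(0, len(texels), 2)]

edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0]  # crisp edge, misaligned
mip1 = next_mip_level(edge)   # [0.0, 0.5, 1.0, 1.0] -- a gray texel appears
mip2 = next_mip_level(mip1)   # [0.25, 1.0] -- the edge keeps smearing
```

Gradients survive this averaging almost unchanged, which is why the same filtering that looks fine on photographic textures turns sub-pixel-hinted text to mush; hence tricks like Oculus's separate quad-layer pass, where 2D content is sampled at the display's native rate instead.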


Great points.

- (A) is tractable, and will be placed on our queue.

- RE (C): Simula has developed a special text filter to help improve its text quality. See https://github.com/SimulaVR/Simula#text-quality for a demonstration. As you point out though, there are even deeper things that can be done. We have considered making a vector-based terminal before (in which text is rendered on a vector basis).

- (D) is a point brought up by Forrest Reiling in his master's thesis on window managers (which influenced Simula's early founding). See [Toward General Purpose 3D User Interfaces: Extending Windowing Systems to Three Dimensions](https://github.com/evil0sheep/MastersThesis/blob/master/thes...).


Jaron Lanier did a lot of writing on what a VR system interface might look like. He was (well, probably still is, though I take it he doesn't really program anymore) interested in cutting away from traditional process and window models entirely. Indeed, he argued that even something like interprocess communication should take place solely through the VR environment, as bot avatars interfacing with the same virtual control interfaces that a human avatar would have to do.

I think his main concern was mostly dog-fooding the VR environment, not building back-door interfaces that the bots could use that the humans could not. I don't know if it really has to go that far, but there is a lot that needs to be done to define common interfaces between objects.

Whenever I do my own thinking on how such interfaces would be built, I always end up with something akin to Bluetooth GATT profiles, which is... less than ideal. There's certainly a lot to like about GATT, in that it has a lot of different functionality pre-defined. The dream of device and software interoperability is certainly there. But at the same time, vendors in the wild seem to just shove everything into the public use space and vertically integrate with their own stand alone apps, so maybe Lanier was right.

It probably needs to be something more akin to how AR systems attempt to understand their surrounding world. AR apps also run exclusively, but they do have to consider the huge design challenge of not owning the entire environment. Perhaps a model of AR that can't differentiate between the real world and the virtual world that includes other AR apps is the way to go.


love that the email waitlist signup goes to wolframcloud


I've been meaning to use a proper service but it works right now and once we're actually sending I'll import the emails and do an opt-in and such.


no I like it -- it feels like I'm interacting with a person who's using powerful tools


both me and george are avid wolfram users. incredibly good stuff


Except for HoloLens, aren’t they all built on top of Linux?


I mean, Android has the Linux Kernel in there somewhere, but can we really call it a Linux system?


What’s the FOV?


We plan for at least 100deg DFOV.


I'm so joining the wait list

