Fun. I hope they are successful. One of the things that my daughter pointed out about 3D movies is that because they are 3D, your brain thinks it should be able to shift focus but it can't, and that 'fight' gets in the way of enjoying the content. My understanding is that light field optics don't have this issue.
Your daughter will be glad to hear that she's in good company, in noticing the "Vergence-Accommodation conflict"[1] that is unavoidable on two-view stereoscopic displays. You're correct that light field optics don't have this issue -- other options are volumetric displays and "super multi-view" displays (which are, under a certain interpretation, a type of lightfield display).
>One of the things that my daughter pointed out about 3D movies is that because they are 3D your brain thinks it should be able to shift the focus but it can't and that 'fight' gets in the way of enjoying the content.
My two grown daughters and I avoid 3D movies. We have all come to the conclusion that the 2D versions are more enjoyable. Plus our local theater is now digital in 4K. We love that theater experience.
I've not seen many 3D movies, but those I did see were very flat, like cardboard cutouts. I saw some of The Hobbit in 3D and it looked weird; there was no volume to the characters...
>We're going to need a new word for resolution at depth.
Well then I have good news for you - we already have a metric for resolution irrespective of the depth at which an object is rendered, and we've had it for years. That metric is Pixels Per Degree (PPD). PPD corresponds to the way the eye resolves images (visual acuity) and is independent of depth. ~62 PPD is generally the maximum resolution of the human eye, although motion/shapes/contrast/color can alter (increase or decrease) this threshold.
PPD takes into account FOV, binocular overlap, and renderable pixels - all in one clean metric.
The truth of the matter is that when companies use this standard unit, so that all AR/VR display resolutions can be compared on a level playing field, it tends to lift the veil obscuring how poor the actual imaging resolutions truly are.
They clarify that while the acuity limit for a person with 20/20 vision is 60 PPD (20/20 vision resolves about one arcminute, i.e., 60 pixels per degree), average acuity is more like 20/15, equating to 80 PPD. It looks like NVIDIA put the PPD benchmark for an ideal display at ~92 PPD, which would push past the acuity limits of just about every human user. That seems reasonable to me.
As a point of reference, accounting for binocular overlap values (which I had to hunt for), I calculated PPDs for two of the highest pixel-count HMDs out there:
StarVR: 18.6 PPD
Sensics dSight: 20.2 PPD
I don't have the details needed to calculate Avegant or HoloLens PPDs, but I'd venture to say they're at this level or lower. So, as you can see, we still have a long way to go.
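For anyone who wants to sanity-check numbers like these, here is a minimal sketch of the arithmetic as I understand it - the overlap handling and the example values are my own assumptions, not published specs:

    def ppd(h_pixels_per_eye, total_h_fov_deg, binocular_overlap_deg):
        # Each eye's FOV is half the total plus half the shared
        # (overlapping) region.
        per_eye_fov_deg = (total_h_fov_deg + binocular_overlap_deg) / 2.0
        # Angular resolution: per-eye pixels spread over per-eye FOV.
        return h_pixels_per_eye / per_eye_fov_deg

    # Illustrative StarVR-like inputs (2560 px/eye, 210 deg total FOV,
    # ~65 deg overlap assumed) land near the 18.6 PPD quoted above:
    print(ppd(2560, 210, 65))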
One more note - displays don't need a constant PPD across the entire rendered image, thanks to the steep visual acuity drop-off relative to FOV angle (1). So, to the OP's credit, perhaps there is still some work needed on the PPD metric to account for HMDs that use foveated imaging (2), such that all displays (foveated & non-foveated) can still be compared side-by-side.
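To make that drop-off concrete, here is a toy model - the hyperbolic falloff and the constants are textbook-style assumptions on my part, not something from the linked references:

    def required_ppd(eccentricity_deg, foveal_ppd=60.0, e2_deg=2.3):
        # Acuity roughly halves every e2 degrees away from the fovea
        # (hyperbolic, cortical-magnification-style falloff).
        return foveal_ppd * e2_deg / (e2_deg + eccentricity_deg)

    # ~60 PPD needed at the fovea, but only ~6 PPD at 20 deg off-axis:
    print(required_ppd(0), required_ppd(20))

That factor-of-ten headroom is exactly what foveated displays try to cash in.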
Volume pixel (voxel) is already a term of art. I'm not aware of jargon to decouple the sampling precision in different axes, though that probably exists too.
You should be able to convert your smartphone's HD display into a low-resolution light field display by attaching a special lens array. It's like lenticular 3D displays (they refract alternating columns of pixels in two different directions to create a stereo image), just omnidirectional.
I'm not sure how complex the manufacturing and calibration would be, but in theory we could create an attachable lens array for smartphones and use it to display a low-res light field, couldn't we?
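Assuming the optics cooperate, the catch is that every pixel you hand to the lens array becomes angular information instead of image detail. A rough sketch of that trade-off (all pitches and counts here are made-up illustrative values):

    def lenslet_tradeoff(px_w, px_h, pixel_pitch_mm, lenslet_pitch_mm):
        # Pixels under a single lenslet get sent in different directions,
        # so they become angular "views" rather than image resolution.
        views_per_axis = round(lenslet_pitch_mm / pixel_pitch_mm)
        return px_w // views_per_axis, px_h // views_per_axis, views_per_axis

    # A 1920x1080 panel with 0.06 mm pixels under 0.48 mm lenslets:
    # 8x8 views, but only a 240x135 effective image - hence "low-res".
    print(lenslet_tradeoff(1920, 1080, 0.06, 0.48))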
This is a horrible misuse of the term "light-field" for a display.
An actual light-field display would have, for every pixel, maybe 16 sub-pixels, each only visible from a certain vantage point. They represent the rays that travel through each point in space.
This is something that could bring true glasses-free 3D, and would be an ideal VR holographic display if they could make a 16K display that could fit in a headset. You wouldn't even need lenses to focus - just stick the display right up to your face.
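A toy model of that "only visible from a certain vantage point" behavior for one horizontal row of sub-pixels - the 4 views per axis (16 = 4x4) and the viewing geometry are my own illustrative assumptions:

    import math

    def visible_subpixel(pixel_x_mm, eye_x_mm, eye_z_mm,
                         n_views=4, half_cone_deg=20.0):
        # Angle from this pixel to the eye, off the display normal.
        theta = math.degrees(math.atan2(eye_x_mm - pixel_x_mm, eye_z_mm))
        # Quantize the +/- half_cone range into n_views angular bins;
        # the eye only sees the sub-pixel aimed into its bin.
        t = (theta + half_cone_deg) / (2.0 * half_cone_deg)
        return min(n_views - 1, max(0, int(t * n_views)))

    # Slide the eye left to right and a different sub-pixel shows up:
    for eye_x_mm in (-150, -50, 50, 150):
        print(eye_x_mm, visible_subpixel(0.0, eye_x_mm, eye_z_mm=400.0))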
I would tend to agree, which is why I put it in quotes in my article. It is not as egregious as Microsoft's HoloLens calling what they are doing "Holograms" - everyone knows better, but they are all calling mixed reality objects Holograms now. At least the focus planes are doing something similar to Light Fields.
Similarly, it looks like Avegant is calling what they are doing "Light Fields" because Magic Leap was calling Focus/Focal Planes "Light Fields".
Loosely and conceptually, Light Fields work in the horizontal direction relative to the light rays from the image source, whereas Focus Planes work in the vertical direction.
I have not done much with Light Fields personally, but Nvidia has published some good work about them (https://research.nvidia.com/sites/default/files/publications...) using what I would call "classic light fields." Gordon Wetzstein has presented how to do light fields with far fewer sub-elements (https://www.youtube.com/watch?v=itSjM_OzVtA), and there is a lot of other material out there on "compressive light field displays" that he has been working on for a long time. He claims to get decent results with just 2 fields. BTW, I highly recommend everything I have seen from Dr. Wetzstein; he does an excellent job of explaining complex optics concepts.
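For anyone curious what "just 2 fields" can mean mechanically: as I understand the compressive/tensor-display line of work, two stacked multiplicative layers are driven by a non-negative factorization of the target light field. A stripped-down sketch of that core math, with toy sizes and no real display geometry:

    import numpy as np

    rng = np.random.default_rng(0)
    L = rng.random((16, 64))  # toy light field: 16 views x 64 pixels

    # Two non-negative "layer" patterns; the intensity of each emitted
    # ray is (front layer value) * (rear layer value) along that ray.
    f = rng.random(16)
    g = rng.random(64)

    # Rank-1 multiplicative (NMF-style) updates minimizing ||L - f g^T||^2.
    for _ in range(200):
        f *= (L @ g) / (f * (g @ g) + 1e-9)
        g *= (L.T @ f) / (g * (f @ f) + 1e-9)

    # Residual after fitting; random L won't factor well, but real,
    # smooth light fields compress far better.
    print(np.linalg.norm(L - np.outer(f, g)))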
If we look at the human eye, only the off-axis angle of the incoming light matters. It doesn't matter if it's off-axis to the right or to the top, if the angle is the same.
This is because both will result in an identical signal at the retina: the lens has circular symmetry, and the aperture is circular too.
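Stated compactly (my notation, not the parent's): with $\theta_x$ and $\theta_y$ the horizontal and vertical off-axis angles, rotational symmetry of lens and aperture means the retinal point-spread function obeys

$$\mathrm{PSF}(\theta_x, \theta_y) = \mathrm{PSF}\!\left(\sqrt{\theta_x^2 + \theta_y^2}\right),$$

so any two rays with the same off-axis magnitude produce the same blur on the retina.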
Now, I don't know about Light Field VR for goats, though - their pupils are horizontal slits, not circles.