Amount of pixels needed to make VR less crap may set your PC on fire

Put on a virtual reality headset and it's hard to believe that your visual system is being stretched beyond its limit. Individual pixels are still visible and the narrow field of view makes it feel like you're wearing ski goggles. Yet even now VR bombards our visual system with more information than it can process. Engineers …


      1. Sir Runcible Spoon

        Even if, like me, you are extremely* motion sensitive, you can earn your VR legs.

        As long as you stop when you start to feel queasy or hot under the collar, you can retry the next day and it will be marginally better. After a couple of weeks it's no problem.

        I didn't use my PSVR for about a month, and then I got Skyrim. I started to get those hot-under-the-collar feelings again (although not quite as bad as when I first got the headset), but it only took me 2-3 days and it's all good again.

        *if I get motion sick irl it can take about four hours for my stomach/head to settle enough to walk around.

  1. Deltics

    Humans may have foveated vision but we use it to observe a fully resolved world.

    This "problem" doesn't seem to affect 2D VR (i.e. TV/monitor output) from PC's/consoles - human vision is still foveated when viewing VR on a 2D plane, so the "problem" here is - I suspect - less about trying to make stereo-scopic VR "work" but rather to grasp desperately for an excuse as to why the technology has failed to take off as predicted despite being hailed as The Next Big Thing.

    If they ever do produce eyeball-tracking foveated rendering, I suspect the result will be even worse: foveated vision perceiving a pre-foveated render.

    1. Dave 126 Silver badge

      The article wasn't talking about what is needed to make VR fun, but rather what is needed to make it indistinguishable from reality, so comparison to TVs and monitors is of limited use.

      If we ignore that most people have two eyes, we can think about how we could try to render a real-looking scene across the entirety of a user's field of view using a huge bank of monitors (if they are low DPI then no problem - just use more of them and place them further away from the user!). The issue would still be processing power.

      We aren't conscious of only being able to focus sharply on a small area because our eyes move around a lot and our brain builds up an image that it presents to us. There are lots of optical illusions that illustrate this.

  2. Anonymous Coward
    Anonymous Coward

    Why pixels? Why not color palette?

    Instead of reducing the resolution outside of the fovea area, why not reduce the number of color bits from full in the center down to 8 (4?, 2?) at the periphery?
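    For what it's worth, the idea is easy to prototype offline. A minimal sketch, assuming a uint8 HxWx3 numpy frame and a known gaze point; the function name, radii, and bit depths are all illustrative, not from any real VR pipeline:

        import numpy as np

        def quantize_by_eccentricity(frame, gaze_xy, fovea_radius_px):
            # Full 8-bit colour near the gaze point, a crushed palette
            # toward the edges, as the comment proposes.
            h, w, _ = frame.shape
            ys, xs = np.mgrid[0:h, 0:w]
            dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
            # 8 bits/channel in the fovea, 4 mid-periphery, 2 at the edge
            bits = np.where(dist < fovea_radius_px, 8,
                            np.where(dist < 3 * fovea_radius_px, 4, 2))
            step = (256 // (1 << bits)).astype(frame.dtype)[..., None]
            return (frame // step) * step

    The catch: this saves transmission bits, not rendering work - the GPU still has to shade every pixel before the palette gets crushed.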

    1. Dave 126 Silver badge

      Re: Why pixels? Why not color palette?

      And here was me thinking I'd never play a game displayed in EGA or CGA again! :)

  3. Destroy All Monsters Silver badge
    Pint

    That was a good read!

    +1 El Reg

    Have some Greg Egan in return (Permutation City, 1994):

    Paul uncovered his eyes, and looked around the room. Away from a few dazzling patches of direct sunshine, everything glowed softly in the diffuse light: the matte white brick walls, the imitation (imitation) mahogany furniture; even the posters -- Bosch, Dali, Ernst, and Giger -- looked harmless, domesticated. Wherever he turned his gaze (if nowhere else), the simulation was utterly convincing; the spotlight of his attention made it so. Hypothetical light rays were being traced backward from individual rod and cone cells on his simulated retinas, and projected out into the virtual environment to determine exactly what needed to be computed: a lot of detail near the center of his vision, much less toward the periphery. Objects out of sight didn't 'vanish' entirely, if they influenced the ambient light, but Paul knew that the calculations would rarely be pursued beyond the crudest first-order approximations: Bosch's Garden of Earthly Delights reduced to an average reflectance value, a single gray rectangle -- because once his back was turned, any more detail would have been wasted. Everything in the room was as finely resolved, at any given moment, as it needed to be to fool him -- no more, no less. He had been aware of the technique for decades. It was something else to experience it. He resisted the urge to wheel around suddenly, in a futile attempt to catch the process out -- but for a moment it was almost unbearable, just knowing what was happening at the edge of his vision. The fact that his view of the room remained flawless only made it worse, an irrefutable paranoid fixation: No matter how fast you turn your head, you'll never even catch a glimpse of what's going on all around you ...

  4. This post has been deleted by its author

  5. Ubermik

    Aren't they overcomplicating this a bit?

    From how it reads, you need a certain number of pixels for the eye and brain not to struggle, but then they try to drive every pixel independently, which is where the bottleneck comes from.

    Surely the solution would be to go "anadigilogue" - a melding of analogue and digital.

    Make the screen with the required pixel density, but then feed it a lower-resolution image and have silicon that "smears" the picture across the extra pixels in either a gradient or a hard transition, as required (sketched below).

    This gives the eye the density it needs up close, but would allow the hardware to feed in only a monitor's level of resolution, which it is already capable of doing.
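    What's described above is roughly what a display scaler already does. A minimal sketch, assuming a uint8 HxWx3 numpy frame and an integer scale factor: "hard" is plain pixel replication, "gradient" is separable linear interpolation. Neither is drawn from any real headset silicon.

        import numpy as np

        def smear(frame, factor, mode="hard"):
            # Scale a low-res frame up to panel resolution: "hard"
            # replicates each source pixel across a block, "gradient"
            # linearly interpolates between neighbours.
            if mode == "hard":
                return frame.repeat(factor, axis=0).repeat(factor, axis=1)
            h, w = frame.shape[:2]
            ys = np.linspace(0, h - 1, h * factor)
            xs = np.linspace(0, w - 1, w * factor)
            y0, x0 = ys.astype(int), xs.astype(int)
            y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
            fy = (ys - y0)[:, None, None]
            fx = (xs - x0)[None, :, None]
            top = frame[y0][:, x0] * (1 - fx) + frame[y0][:, x1] * fx
            bot = frame[y1][:, x0] * (1 - fx) + frame[y1][:, x1] * fx
            return (top * (1 - fy) + bot * fy).astype(frame.dtype)

    The snag is that smearing adds no detail: wherever the fovea lands, it would still see a monitor's worth of resolution unless the high-res region follows the gaze.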

  6. FlamingDeath Silver badge

    I know kungfoo

  7. Milton

    Touching a nerve

    I feel some sympathy with the post about direct-to-brain interfacing. Having to jump through so many hoops to satisfy the annoyingly choosy Mk#1 Eyeball would just go away if we could plug into the Visual Data Bus (used to be called the Optic Nerve) and feed whatever the hell we like, properly formatted pix, to the brain. Right?

    But no. The problem is, as I understand it (i.e. no expert), that the eye effectively does a lot of the filtering, call it pre-processing work, before it reveals to the brain whatever it thinks the latter should be allowed to worry about. I believe this includes "fixing" the image, by for example inferring that areas of colour are sharply bounded if lines are perceptible between them—even if, in reality, there is bleed between the colours. Also there is selective blindness: simply ignoring quite large and otherwise noticeable features because something more interesting is the focus of concentration. There's also the question of relative sizes, whereby the eye reports erroneously on dimensions because it is taking cues from other parts of a scene.

    We've all seen these. The first is one reason early colour TV was able to use so little broadcast bandwidth: it relied upon the human eye putting in features that just weren't there, in the picture, ever (a digital descendant of this trick is sketched after this comment). The second you'll have noted if you were ever fooled by the gorilla-behind-the-basketball-players vignette: your eye was following the frantic motion of the players and you never so much as noticed that a gorilla had walked across the court. The third is a favourite of trompe l'oeil and other illusions you can find all over the web. (And: how many of the decisions about the *distances* of objects are already made by the time the consciousness "sees" that heavily filtered, processed image? Read up on why good 2D movies are already kinda 3D movies, for some answers.) My wild-assed guess is that the eye is making dozens if not hundreds of modifications to pre-process stuff for the consciousness, and we're just barely scratching the surface with current understanding.

    The eye isn't doing this in isolation from the brain, but the brain isn't doing it in isolation from the eye, either: in fact, arguably, the eye is a highly specialised part of the brain. So, tempting as it may be, we can't simply decode the Visual Data Bus and then shovel our own bits onto it.

    That said, there are some awesomely clever people working in the field of biological vision, and I don't doubt that we will eventually find a way to get images into the brain directly—though perhaps it were more accurate to say, "to get a detailed subjective impression of an optical environment accepted by the brain as valid".

    My guess, FWIW, is that when this happens VR will *still* not be good enough to fool the brain, or convince anyone that they are looking at reality.
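    Following up the colour-TV point above: the digital descendant of that analogue trick is chroma subsampling, which carries colour at a fraction of the luma resolution and lets the eye fill in the rest. A worked example with 4:2:0 at 8 bits (frame sizes only, no codec involved):

        def yuv420_bytes(width, height):
            # Full-resolution luma plane plus two chroma planes at
            # half resolution in each direction.
            luma = width * height
            chroma = 2 * (width // 2) * (height // 2)
            return luma + chroma

        rgb = 1920 * 1080 * 3            # 8-bit RGB baseline: 6,220,800 bytes
        print(yuv420_bytes(1920, 1080))  # 3,110,400 bytes - exactly half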

  8. Thomas Gray

    Monochrome?

    Assuming the problem of foveal focus could be dealt with, would it save processing power to render the surroundings in monochrome, since rods only detect light intensity rather than colour?
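    Roughly what that would look like, as a sketch: a uint8 HxWx3 frame, Rec. 709 luma weights, and an arbitrary foveal disc (all choices here are illustrative). One caveat: cones extend well beyond the fovea, so peripheral vision isn't truly colour-blind.

        import numpy as np

        def grey_periphery(frame, gaze_xy, radius_px):
            # Keep colour inside a foveal disc; collapse everything
            # outside it to a single luma value (Rec. 709 weights).
            h, w, _ = frame.shape
            ys, xs = np.mgrid[0:h, 0:w]
            outside = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1]) > radius_px
            luma = (frame @ np.array([0.2126, 0.7152, 0.0722])).astype(frame.dtype)
            out = frame.copy()
            out[outside] = luma[outside][:, None]  # broadcast grey across RGB
            return out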

  9. BlueTemplar
    Thumb Up

    Great article! Some extra numbers:

    1.) Remember that putting our visual bandwidth in terms of bits/second involves making assumptions not only about resolution, but also about colors, brightness, and framerate!

    (I'm assuming that the quoted "74 gigabytes of visual data are available to us each second" assumes the sRGB color space and 90 Hz? Source?)

    2.) The Japan Broadcasting Corporation's (NHK) research "claimed the tests showed 310 pixels/degree are needed for an image to reach the limit for human resolution" (which is a lot more than the quoted 120 pixels/degree)

    https://www.homecinemaguru.com/can-we-see-4kuhd-on-a-normal-sized-screen-you-betcha/

    (Though you can indeed see from the graph that you start to get into diminishing returns over 80 pixels/degree - which also corresponds to average 20/15 vision - but it *also* looks like the Nyquist sampling requirement doubles that number, so you end up with a minimum of 160 pixels/degree.)

    That Samsung 850ppi display, put in an Oculus CV1, would result in ~23 pixels per degree:

    https://forums.oculusvr.com/community/discussion/comment/521849/#Comment_521849

    The US Air Force's hypothetical 10,300ppi display would therefore result in 282 pixels per degree? (Notice it's close to NHK's research result!)
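    The proportionality behind those last two figures is easy to check: for a fixed optical design, pixels per degree scale linearly with panel density. A back-of-the-envelope sketch, calibrated to the ~23 ppd at 850 ppi from the linked forum post (both baseline numbers are that post's, not a datasheet's):

        def pixels_per_degree(ppi, baseline_ppi=850, baseline_ppd=23):
            # Fixed optics: angular resolution scales linearly with pixel density.
            return ppi * baseline_ppd / baseline_ppi

        print(pixels_per_degree(10_300))  # ~279 - in the ballpark of the 282 above
        print(2 * 80)                     # Nyquist doubling of the 80 ppd knee: 160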

    1. BlueTemplar

      Re: Great article! Some extra numbers:

      On second thought, it's weird that a maximum angular resolution derived with (fixed) televisions in mind would be similar to the maximum angular resolution of (head-following) head-mounted displays!

  10. ProgrammerForHire

    I might be wrong, but the 90 Hz figure is computed considering you need two frames each time, one for each eye.

    So it's 45 Hz overall.

    1. BlueTemplar

      I'm pretty sure that you're wrong:

      https://thevrbase.com/why-does-the-90-hz-refresh-rate-matter-for-virtual-reality/

      You might be confusing it with the "Asynchronous SpaceWarp" visual trickery the Rift does, which allows it to cut the framerate pushed by the graphics card in half, to 45 Hz, with hardly any noticeable difference?

      https://developer.oculus.com/blog/asynchronous-spacewarp/

      1. ProgrammerForHire

        This is confusing. 90 Hz and 60 fps? I thought the numbers should be equal?

        1. BlueTemplar

          If I'm not mistaken, that's 60 fps for the video as if it were shown on a regular screen, and 90 Hz for the actual refresh rate of the HMD's screens.

          Hence some trickery is involved: see my second link. (I probably should have said 45 fps (and 90 Hz) in my previous comment.)
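          Putting the numbers side by side, a quick back-of-the-envelope on the timing (assuming the 90 Hz panel and 45 fps rendering discussed above; per the linked ASW post, the runtime extrapolates the in-between frames):

              PANEL_HZ = 90                 # refresh rate of the HMD's screens
              GPU_FPS = 45                  # frames the graphics card renders under ASW

              panel_budget_ms = 1000 / PANEL_HZ  # ~11.1 ms between refreshes
              gpu_budget_ms = 1000 / GPU_FPS     # ~22.2 ms per rendered frame

              # ASW fills the gap: every other refresh shows a frame extrapolated
              # from the last rendered one, so the panel never waits on the GPU.
              print(panel_budget_ms, gpu_budget_ms, PANEL_HZ - GPU_FPS)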

  11. Mark Simon

    Correction

    Number of pixels.

  12. Indolent Wretch

    Maybe returning to the concept of a vector display (like Asteroids and the old Vectrex gaming system) would be a good idea. That didn't have pixels at all.

    Admittedly the world would look like you're in Battlezone, but that's not necessarily a bad thing.

  13. Anonymous Coward
    Anonymous Coward

    I question the numbers

    I question the numbers in this article.

    Field of view 180 degrees - more like 80 IMHO.

    All of our fine resolution takes place in the central one degree - shouldn't that be more like 10 degrees?

    But I agree we will need high screen refresh rates to prevent nausea when using headsets.

    1. BlueTemplar

      Re: I question the numbers

      No:

      https://en.wikipedia.org/wiki/Human_eye#Field_of_view

      "For both eyes combined (binocular) visual field is 135° vertical and 200° horizontal."

      The high-resolution area of the eye doesn't seem to have a clear cut-off... (until the blind spot at ~15°?):

      https://en.wikipedia.org/wiki/Fovea_centralis

