Touching a nerve
I feel some sympathy with the post about direct-to-brain interfacing. Having to jump through so many hoops to satisfy the annoyingly choosy Mk#1 Eyeball would just go away if we could plug into the Visual Data Bus (used to be called the Optic Nerve) and feed whatever the hell we like, properly formatted pix, to the brain. Right?
But no. The problem, as I understand it (I'm no expert), is that the eye effectively does a lot of the filtering, call it pre-processing work, before it reveals to the brain whatever it thinks the latter should be allowed to worry about. I believe this includes "fixing" the image: for example, inferring that areas of colour are sharply bounded if lines are perceptible between them, even if in reality there is bleed between the colours. There is also selective blindness: simply ignoring quite large and otherwise noticeable features because something more interesting is the focus of concentration. And there is the question of relative sizes, whereby the eye reports erroneously on dimensions because it is taking cues from other parts of the scene.
We've all seen these. The first is one reason early colour TV could squeeze the colour signal into so little extra broadcast bandwidth: it relied upon the human eye putting in sharp colour detail that was never in the transmitted picture at all. The second you'll have noted if you were ever fooled by the gorilla-among-the-basketball-players vignette (the famous "invisible gorilla" selective-attention test): your eyes were following the frantic motion of the players and you never so much as noticed that a gorilla had walked across the court. The third is a favourite of trompe l'oeil and other illusions you can find all over the web. (And how many of the decisions about the *distances* of objects are already made by the time the consciousness "sees" that heavily filtered, processed image? Read up on why good 2D movies are already kinda 3D movies, for some answers.) My wild-assed guess is that the eye is making dozens if not hundreds of modifications to pre-process stuff for the consciousness, and we're barely scratching the surface with current understanding.
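That colour-TV trick lives on in digital video as "chroma subsampling": transmit brightness (luma) at full resolution and colour (chroma) at a fraction of it, and the eye barely notices. Here's a minimal numpy sketch of my own, a toy reconstruction rather than the actual NTSC encoding, using the standard BT.601 luma weights:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split RGB (values in [0, 1]) into BT.601 luma plus two chroma channels."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: the brightness signal
    cb = (b - y) * 0.564                    # scaled blue-difference chroma
    cr = (r - y) * 0.713                    # scaled red-difference chroma
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Invert the split above."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

def subsample(chan, factor=4):
    """Throw away resolution by block-averaging, then blow it back up."""
    h, w = chan.shape
    small = chan.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# Toy image: two colour fields with a hard edge, deliberately placed so the
# edge cuts through a chroma block and gets smeared by the subsampling.
img = np.zeros((16, 16, 3))
img[:, :6] = [0.8, 0.2, 0.2]   # reddish left
img[:, 6:] = [0.2, 0.2, 0.8]   # bluish right

y, cb, cr = rgb_to_ycbcr(img)
# Keep luma at full resolution; degrade only the colour channels.
rebuilt = ycbcr_to_rgb(y, subsample(cb), subsample(cr))

err = np.abs(rebuilt - img).max()
print(f"max per-channel error after 4x chroma subsampling: {err:.3f}")
```

The colour values near the edge come back wrong, but the luma channel, where the eye's spatial acuity actually lives, survives untouched, so the picture still *looks* sharp: the viewer's visual system supplies the crisp colour boundary that was never transmitted.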
The eye isn't doing this in isolation from the brain, but the brain isn't doing it in isolation from the eye, either: in fact, arguably, the eye is a highly specialised part of the brain. So, tempting as it may be, we can't simply decode the Visual Data Bus and then shovel our own bits onto it.
That said, there are some awesomely clever people working in the field of biological vision, and I don't doubt that we will eventually find a way to get images into the brain directly—though it would perhaps be more accurate to say, "to get a detailed subjective impression of an optical environment accepted by the brain as valid".
My guess, FWIW, is that when this happens VR will *still* not be good enough to fool the brain, or convince anyone that they are looking at reality.