And now for something completely different
I'm looking through a window at something outdoors. All the light that my eyes will focus into that object's image first passes through the clear glass of the window.
So when do I get a "window" that detects the frequency (color) and direction of each photon passing through it? That data could first be sorted by which photons would be heading toward a hypothetical eye, then used to compute the image that eye would see, and then reprocessed for whatever other viewpoints are wanted.
Haven't we gotten to the point where we can have a camera without a lens?
(Yes, detecting a photon destroys it, so how do you get direction? In practice there would be quite a few nearly identical photons traveling along the same path, arriving at light speed. So the first transparent layer of detectors catches one, while the next layer simultaneously catches another of identical frequency but twenty detector pixels left and thirty-three up, while a third layer finds one forty-one left and sixty-five up from the top layer's detecting pixel, and so on. Software can sort it out.)
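The sorting the software would do can be sketched roughly like this: treat each detection as a (layer, x, y, frequency) hit, pair up same-frequency hits in different layers to get a ray direction, and check whether further hits lie on the same ray. This is a minimal, hypothetical sketch; the detector geometry, names, and unit spacing are all assumptions, not a real device's API.

```python
import math

# Hypothetical detector-stack geometry (assumed values):
# a hit is (layer_index, pixel_x, pixel_y, frequency).
LAYER_SPACING = 1.0   # distance between transparent detector layers
PIXEL_PITCH = 1.0     # pixel size, same units as layer spacing

def direction_from_hits(hit_a, hit_b):
    """Unit direction vector of the ray implied by two hits
    (of the same frequency) in two different layers."""
    (za, xa, ya, _), (zb, xb, yb, _) = hit_a, hit_b
    dx = (xb - xa) * PIXEL_PITCH
    dy = (yb - ya) * PIXEL_PITCH
    dz = (zb - za) * LAYER_SPACING
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / norm, dy / norm, dz / norm)

def collinear(hit_a, hit_b, hit_c, tol=1e-6):
    """Does a third hit lie on the ray through the first two?
    If so, all three plausibly came from photons on one path."""
    d1 = direction_from_hits(hit_a, hit_b)
    d2 = direction_from_hits(hit_a, hit_c)
    return all(abs(p - q) < tol for p, q in zip(d1, d2))

# Three same-frequency hits drifting 20 pixels left and 33 up
# per layer lie on one ray; an unrelated hit does not.
top    = (0,  0,  0, 540e12)
middle = (1, 20, 33, 540e12)
bottom = (2, 40, 66, 540e12)
stray  = (2,  5,  9, 540e12)
print(collinear(top, middle, bottom))  # same ray
print(collinear(top, middle, stray))   # different ray
```

A real reconstruction would have to match hits by frequency and arrival time across many simultaneous rays, but the geometric core is just this collinearity test.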