Re: Above the belt?
> You can run a lighting model backwards to infer the geometry of a scene
It's that very process of *inferring* which requires prior 'knowledge' when the input data is limited. With two photos of a scene as input it is easy for software to distinguish between a flat plane (a billboard advertising a car) and a 3D object (a real car) - it doesn't require prior 'knowledge' of cars, merely of planes. A single photo doesn't give you that simple path.
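A minimal sketch (my own illustration, not something from the original post) of why two views are enough: the classic stereo relation depth = focal length × baseline / disparity pins down the distance of each matched feature, so a billboard collapses to one plane of depths while a real car spreads over a range. The numbers here are purely hypothetical.

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereo triangulation: depth = f * B / d.
    Given two calibrated photos, the pixel shift (disparity) of each
    feature between views determines its distance from the cameras."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Hypothetical setup: 1000 px focal length, cameras 0.5 m apart.
# Features on a flat billboard all land at nearly the same depth;
# features on a real car spread across a range of depths.
print(depth_from_disparity([50.0, 50.1, 49.9], focal_px=1000.0, baseline_m=0.5))  # ~flat plane
print(depth_from_disparity([40.0, 55.0, 70.0], focal_px=1000.0, baseline_m=0.5))  # real 3D object
```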
When we create a lighting model of a scene we use HDR images of that scene rather than ordinary 24-bit images - basically that means each pixel can be brighter or darker than what can be printed or shown on a monitor, so that lightbulbs aren't merely white the way paper is white, but *bright*. There are systems that attempt to infer limited geometry from an HDR image, but only as regards the position of light and shadow - easy enough when lightbulb pixels hold values an order of magnitude greater than floor or sofa pixels. The results are often a good enough approximation. Normal web images give a far cruder approximation.
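A rough sketch of that "order of magnitude" test, assuming the HDR frame is already loaded as a floating-point numpy array; the factor of ten and the toy pixel values are arbitrary, illustrative choices, not anyone's actual pipeline.

```python
import numpy as np

def find_light_pixels(hdr_rgb, factor=10.0):
    """Flag pixels whose luminance is an order of magnitude above the
    scene median - in an HDR image these are almost certainly emitters
    (lightbulbs, windows), not white paint or paper."""
    luminance = hdr_rgb @ np.array([0.2126, 0.7152, 0.0722])
    return luminance > factor * np.median(luminance)

# Toy HDR frame: mostly ~1.0 'white paper' values, one patch at 500.0
# standing in for a lightbulb. An 8-bit web JPEG would clip both to
# the same white, losing the distinction entirely.
scene = np.ones((4, 4, 3))
scene[1, 1] = 500.0
print(find_light_pixels(scene))
```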