The photo above is 205 degrees wide. If you were there, both ends of the picture would be behind you. Yet it looks plausible, because it satisfies many of your visual expectations. Lines you would expect to be straight look straight, things you know should be vertical are vertical, and sizes and shapes look reasonable. Painters know very well how to exploit such expectations in order to create believable pictures, even of scenes that could not exist in reality, and to make us see space, even though the picture is flat. This aspect of their art is called perspective.
Perspective is about trying to depict the three-dimensional world on a flat surface. This is tricky because what we actually see is not a real image, but a fictional one created by our brain. Leonardo da Vinci called how we see “natural perspective”, and famously declared that it would never be captured on a piece of paper. He was basically right, but that has not stopped generations of artists from trying anyhow. In the process they have invented some useful conventions and some clever tricks, and created many remarkable images. Now, with software partly based on painters’ methods, photographers can join the struggle for natural perspective.
Photography normally deals with images of real scenes, captured objectively by lenses which faithfully perform the rectilinear perspective projection. That projection gave Renaissance painters a powerful way to depict space, and is the foundation of the modern language of perspective. Nevertheless, despite having ‘correct perspective’, relatively few photos give a really convincing sense of space, and many photographers prefer just to create interesting flat patterns. One reason for this is that until recently they could not do much to control perspective. They simply had to match lens focal length to camera-to-subject distance, resulting in a rather boring “photographic perspective” in which depth and field of view are inversely related by immutable laws of optics.
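That inverse relationship between depth and field of view can be made concrete: for an ideal rectilinear lens, the field of view follows directly from focal length and sensor size, so choosing one fixes the other. A minimal sketch, assuming a full-frame 36 mm sensor width (the sensor size is my illustrative assumption, not a detail from the text):

```python
import math

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal field of view of an ideal rectilinear lens, in degrees.

    Assumes a full-frame 36 mm sensor width by default."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

# A wider lens gives a wider view, with no way to choose them independently:
for f in (24, 50, 100):
    print(f"{f} mm -> {horizontal_fov_deg(f):.1f} degrees")
```

Note that the formula can never reach 180 degrees, let alone 205: the arctangent is bounded, which is one way of stating the limitation the next paragraphs escape.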
But in the digital age, we are no longer bound by the relationship between distance and field of view built into our lenses. In fact, we can completely separate an image from the lens that took it. By correcting the geometrical defects of the lens and camera, software can recover an ideal spherical image that records the true direction to every element of the scene. There is no simple way to view a spherical image directly, but there are many possible ways to convert it back into a viewable flat picture. I think of those as ways of re-photographing the subject, using software lenses. And software lenses are not limited to simulating what a glass lens could do.
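To illustrate what "records the true direction to every element of the scene" means in practice: a common way to store a spherical image is an equirectangular map, where each pixel corresponds to a longitude/latitude pair and hence to a unit direction vector. A minimal sketch, with the pixel layout conventions being my assumption rather than what any particular stitcher actually uses:

```python
import math

def equirect_to_direction(u, v, width, height):
    """Unit direction vector for pixel (u, v) of an equirectangular image.

    Assumed layout: u spans longitude -180..180 deg left to right,
    v spans latitude +90..-90 deg top to bottom (a common convention)."""
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    return (math.cos(lat) * math.sin(lon),   # x: right
            math.sin(lat),                   # y: up
            math.cos(lat) * math.cos(lon))   # z: forward

# The centre pixel looks straight ahead:
print(equirect_to_direction(500, 250, 1000, 500))  # -> (0.0, 0.0, 1.0)
```

A "software lens" is then just a rule for laying those direction vectors back out on a flat page.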
One thing glass lenses cannot do is capture really wide fields of view without obvious distortion. Rectilinear lenses grossly over-expand the outer parts of wide images; fish-eye lenses grossly compress those areas, and moreover bend most straight lines into curves. But now we have panorama stitching software that can extract ideal spherical images from photos taken with any lens, combine those seamlessly into a larger spherical image, and render that into flat views in a great variety of ways. One possible result is an image with a really big field of view and believable perspective, like the picture above.
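The contrast between the two lens families shows up in their radial mapping functions: a rectilinear (gnomonic) lens maps an off-axis angle theta to tan(theta), which diverges toward 90 degrees, while an equidistant fisheye maps it to theta itself, compressing the periphery. A quick numerical comparison (the equidistant model is one common fisheye mapping among several):

```python
import math

# Radial image distance (in units of focal length) vs. off-axis angle:
def rectilinear(theta):
    """Gnomonic mapping r = tan(theta): straight lines stay straight,
    but the image stretches without bound approaching 90 degrees."""
    return math.tan(theta)

def fisheye_equidistant(theta):
    """Equidistant fisheye mapping r = theta: bounded everywhere,
    but most straight lines bend into curves."""
    return theta

for deg in (10, 45, 80):
    t = math.radians(deg)
    print(f"{deg:2d} deg  rectilinear {rectilinear(t):5.2f}  "
          f"fisheye {fisheye_equidistant(t):4.2f}")
```

At 80 degrees off axis the rectilinear radius is roughly four times the fisheye radius, which is the over-expansion and compression described above in numbers.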
That picture was re-photographed with Panini-Pro from a partial spherical panorama stitched with PTGui from 30 photos taken at 3 exposure levels. It is not important that I took the photos with a Nikon F 24mm lens on a Canon EOS 7D camera, because the image would look much the same no matter what camera and lens I used to capture the raw data. This is an image of a real scene, but it was made with a virtual camera. And although it is undeniably a photograph, the perspective is certainly not “photographic”. It is my idea of what I might have seen from that spot if my visual field of view were 205 degrees instead of 130, approximated as well as I could with the software tools available to me.
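For the curious: I cannot speak for Panini-Pro's internals, but the published general Pannini projection it is named after gives a sense of how a believable 205-degree view is even possible. It keeps verticals vertical and radial lines straight while holding the horizontal coordinate finite well past 90 degrees. A sketch under that assumption:

```python
import math

def pannini(lon, lat, d=1.0):
    """General Pannini projection of a view direction (lon, lat in radians).

    d is the compression parameter: d = 0 reduces to rectilinear, d = 1
    gives the classic Pannini look. Valid while d + cos(lon) > 0."""
    s = (d + 1.0) / (d + math.cos(lon))
    return (s * math.sin(lon), s * math.tan(lat))

# Unlike tan(lon), the horizontal coordinate stays finite past 90 degrees,
# so a 205-degree-wide view (102.5 degrees each side) can be rendered:
print(pannini(math.radians(102.5), 0.0))
```

The real renderer surely does more than this two-line formula, but the principle is the same: a software lens is free to use any mapping that keeps the picture believable.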
[originally posted in April 2012]