Common photography myths

There are a number of general statements about photography passed off as “the truth”. They are repeated again and again in introductory texts about photography and on the Internet. Repetition, however, doesn't make a false statement true. Here are the most common myths I've encountered:
  1. Autofocus works by measuring the distance to the subject
  2. Unlike linear polarizers, you don't have to turn circular polarizers
  3. A 50 mm lens on 35 mm systems is called “normal” because it delivers about the same view as the human eye
  4. Infrared films record thermal radiation
  5. Wide-angle lenses distort the image
  6. Flash range is increased when using positive flash exposure compensation
  7. The shorter the exposure time, the faster the shutter
  8. Different focal lengths create different perspectives
  9. Tele-photo lenses have a shallow DOF
  10. Macro lenses are only sharp at close distances
  11. Digital cameras have a deeper DOF than film cameras
  12. Medium format lenses have a higher resolution than 35 mm lenses
  13. Using a TC results in different DOF than using a longer base lens
  14. Depth of field around the focus distance is 1/3 towards the camera and 2/3 away from the camera

If you have a candidate for the above list, or if you have a comment, please get in contact.


Myth #1: Autofocus works by measuring the distance to the subject

While there are indeed a few AF systems that measure the distance to the subject, most of the more modern systems don't work that way. A few older systems (mostly in early AF point-and-shoot and some SLR cameras) use ultrasonic sound to measure the distance between the camera and some object in front of the camera. I specifically say “some object” here because it's not necessarily the subject. For example, when you shoot through a window, these systems focus on the glass instead of the real subject behind the window. All in all these so-called “active AF systems” don't work too well. Although they can do some things that modern AF systems can't (e.g. they can focus in total darkness), they are not very precise and don't work well over longer distances.

All modern AF systems are passive, i.e. they don't send out a signal and use the echo to focus; they only look at the light entering the camera through the lens. With the help of phase detectors sitting at distinct points of the viewfinder image, they determine whether the current focus lies in front of or behind the object under the sensor, and roughly how far it is off. They then drive the lens in the right direction until the contrast of the image under the sensor is maximized. So in a way they work just like a photographer focusing manually.

The primary result of this process is, of course, that the lens is focused on the object under the sensor(s). The distance to that subject is a secondary result, but it's neither required for the AF process to work nor is it measured by the AF sensors. The distance is often extracted after focus is acquired, for example by reading the position of the focusing shaft directly, or by calculating it indirectly from the number of turns of the AF drive and information on how one turn translates into a change of distance. The distance information can then be used for other things, e.g. for flash exposure, advanced program exposure or DOF calculations.
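As a toy illustration of the “drive the lens until contrast peaks” part of this process, here is a minimal sketch in Python. It is purely illustrative, not any camera's actual firmware; the contrast curve and step sizes are invented for the example, and note that nothing in it measures a distance:

def autofocus(contrast_at, position=0.0, step=1.0, min_step=0.01):
    # Toy hill-climbing focus loop: nudge the lens until contrast peaks.
    # contrast_at(position) stands in for reading the AF sensor; the
    # distance to the subject is never measured anywhere in this loop.
    while step > min_step:
        if contrast_at(position + step) > contrast_at(position):
            position += step      # contrast improved: keep going this way
        elif contrast_at(position - step) > contrast_at(position):
            position -= step      # better the other way: reverse direction
        else:
            step /= 2             # overshot the peak: search more finely
    return position

# Example with a made-up contrast curve that peaks at lens position 3.2:
best = autofocus(lambda p: 1.0 / (1.0 + (p - 3.2) ** 2))
print(round(best, 2))             # converges near 3.2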

Myth #2: Unlike linear polarizers, you don't have to turn circular polarizers

The word “circular” may have caused this misconception. Of course you still have to turn a circular polarizer. Why would these filters have a rotating mount if you didn't have to turn them? The rotational position of the polarizer selects the direction of polarization: light polarized in this direction can pass the filter, while other light is reduced or even blocked. Being able to make that selection is the entire point of a polarizer, so you have to, and want to, turn it.

What changes with a circular polarizer is the nature of the light leaving the filter. With a linear polarizer the light leaving the filter is polarized mainly in one direction; light with a different polarization has been blocked by the filter. This can cause problems with cameras that use a beam splitter to direct some of the light towards the AF sensors and metering cells. These beam splitters also act like a polarizer. When the direction of the beam splitter is at a 90 degree angle relative to the direction of the polarizing filter, no or very little light will reach the sensors. Depending on the position of the polarizer in front of the lens, the camera will have trouble focusing or may meter incorrectly. This is not what you want.
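The “no or very little light at 90 degrees” behaviour is just Malus's law for polarized light passing a second polarizer. A quick numerical illustration, treating the beam splitter as an ideal linear polarizer (real beam splitters only approximate this):

import math

def transmitted_fraction(angle_degrees):
    # Malus's law: fraction of already linearly polarized light that passes
    # a second polarizer rotated by the given angle.
    return math.cos(math.radians(angle_degrees)) ** 2

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} deg -> {transmitted_fraction(angle):.2f}")
# At 90 deg the fraction is 0.00: essentially nothing reaches the AF and metering sensors.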

Circular polarizers solve this problem by adding a second layer (a quarter-wave plate) to the back of the filter. It turns the linearly polarized light into circularly polarized light, which no longer oscillates in a single fixed direction. This light can pass the beam splitter just like light that never went through a polarizer, and AF and metering work correctly.

Myth #3: A 50 mm lens on 35 mm systems is called “normal” because it delivers about the same view as the human eye

You have already guessed it: this isn't really the case. Human vision is very different from that of a camera. The horizontal angle of view is about 120 to 140 degrees, which translates to a focal length of about 13 mm to 9 mm on a 35 mm camera. However, we don't see sharply over this entire field of view, only at the center. This part of our retina has an angle of view of about 60 degrees, which translates to a 35 mm focal length. The area with the sharpest vision, the one that you use right now to read these lines, is even smaller. Its angle of view is only about 2 degrees, equivalent to a focal length of roughly 1100 mm.

The lateral field of vision is mainly used to detect motion, while the central part is used for precise visual inspection. So you see, human vision is quite different from the view through a 50 mm lens.
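The focal-length equivalents above follow from the usual pinhole relation f = (d/2) / tan(θ/2), where d is the frame dimension and θ the angle of view. Here is a small sketch, assuming the angle is measured across the 43.3 mm diagonal of the 35 mm frame; since the angles above are rough and partly based on the horizontal dimension, the computed values only approximately match the rounded figures in the text:

import math

def focal_length_mm(angle_of_view_deg, frame_dimension_mm=43.3):
    # Pinhole-camera relation: f = (d/2) / tan(theta/2).
    return (frame_dimension_mm / 2) / math.tan(math.radians(angle_of_view_deg) / 2)

for angle in (140, 120, 60, 2):
    print(f"{angle:3d} deg -> {focal_length_mm(angle):6.0f} mm")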

Another theory says 50 mm is the standard focal length because it matches the diagonal of the 35 mm frame [and even that isn't precisely true; the diagonal is about 43.27 mm]. Well, that's how we calculate the standard focal length for a given format, but it's not the reason why 50 mm was actually selected as the standard.

The real reason is much more mundane. When SLRs (or rather: system cameras) became widely available to a large number of amateur photographers, the makers of these systems had to select one lens that they could sell as a standard set together with the camera. Following the logic of economics, it had to be one that could be made well for little money. 50 mm lenses fit these criteria perfectly. They are typically focused by extension (simple mechanism), don't require aspheric elements (simple lens element shapes), don't require elements made of glass with anomalous dispersion (simple materials), and they're not zooms (fewer elements and simpler mechanism). It's easy to make a really good and fast 50 mm lens for little money. Also for this reason, the 50 mm standard lenses are often among the best lenses of a maker's lens lineup. That's why most 35 mm cameras came with a 50 mm lens until recently. The makers could have also selected 40 mm or 55 mm as the standard, but 50 mm probably looked more “even”.

Myth #4: Infrared films record thermal radiation

Simply put, they don't. You can't take a picture of your house with IR film to see where the insulation is leaking. If that worked, the heat of the camera, of the film canister and of the film itself would expose the film. For thermal imaging you usually use completely different devices, mostly electronic video cameras whose imaging sensor is cooled to very low temperatures so that it doesn't expose itself with its own thermal radiation.

Infrared film is not much different from normal film. It's just also sensitive to light at wavelengths longer than visible red, i.e. into the near-infrared range. Thermal radiation from objects at everyday temperatures has wavelengths much longer than what IR film is capable of recording.

Typically, IR films are sensitive to wavelengths between 400 nm and about 800–900 nm. Regular film is sensitive between about 400 nm and 660 nm. An object only emits a noticeable amount of light at 900 nm when it's almost glowing hot!

With IR films you still need a light source, e.g. the sun, and you record the light that is reflected from objects in the scene. Also, if you want only IR light to expose the IR film, you have to use filters that block out almost all other wavelengths. Otherwise the results from monochrome IR films don't look much different from those of normal monochrome films.

Myth #5: Wide-angle lenses distort the image

That misconception may be rooted in the fact that there's only a single word in the English language for different kinds of distortion. It's better to say “lens distortion” when we talk about distortions caused by the lens design, and “perspective distortion” when talking about distortions caused by perspective. Since there are different ways to design a wide-angle lens, there are different degrees of lens distortion resulting from the design. You can find designs that intentionally don't even try to correct distortions. These are called “fish-eye lenses”, delivering not only very wide angles of view of about 180 degrees, but also extreme distortions. Straight lines of the original scene are rendered as strongly bent curves in the image.

On the other end of the spectrum you'll find the more expensive wide-angle primes that display virtually no distortions (it's harder to reduce distortions with zooms, so you can expect considerably stronger distortions even from expensive wide-angle zooms). These render straight lines of the scene as straight lines in the image. These lenses prove that wide-angle lenses do not generally distort the image.

So what happens when you stand in front of a skyscraper and take a photo of it with a wide-angle lens? The image looks strongly distorted at first sight. Well, it is, but it's not lens distortion, it's perspective distortion. When the straight edge of the skyscraper is a straight edge in the image, the lens causes little to no distortion. What you see is distortion caused by perspective, i.e. by the fact that the base of the skyscraper is much closer to you than the tip, so the base appears much bigger. A tele-photo lens would give you the same distortions if you used it from the same position. You usually just don't do this, because you want to fit the entire skyscraper into the frame.

The only way to reduce perspective distortion is to change perspective. Changing the lens doesn't help. If you photograph the skyscraper from a distance, then all parts of the building are at about the same distance from you, and you won't see much perspective distortion. You will probably use a telephoto lens in this situation, so again it looks like avoiding the wide-angle lens reduces distortion. However, this is coincidence, not cause.
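You can put rough numbers on this. The apparent size of an object shrinks roughly with 1/distance, regardless of the lens, so the ratio between the apparent sizes of the skyscraper's base and its tip depends only on where you stand. A sketch with made-up figures (a 200 m tall building, seen from 30 m and from 500 m):

import math

def base_to_tip_size_ratio(viewing_distance_m, building_height_m=200.0):
    # Apparent size scales with 1/distance; the tip is farther away than the
    # base by roughly the building's height (Pythagoras).
    distance_to_tip = math.hypot(viewing_distance_m, building_height_m)
    return distance_to_tip / viewing_distance_m

print(round(base_to_tip_size_ratio(30), 2))   # close up: the base appears about 6.7x larger
print(round(base_to_tip_size_ratio(500), 2))  # far away: only about 1.08x larger

From far away the ratio approaches 1, which is exactly why the distant shot shows so little perspective distortion, whatever lens you put on the camera.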

Myth #6: Flash range is increased when using positive flash exposure compensation

Actually, the opposite is true. But first things first:

An electronic flash is basically a flash tube connected to a capacitor. The batteries of the flash unit charge the capacitor, and the camera closes and opens the circuit between capacitor and tube. With TTL-OTF metering, the camera closes the circuit at the beginning of the exposure, and when it detects that proper exposure has been reached, it opens the circuit, cutting off the flash. “Proper exposure” here means that the sensors have detected a total amount of light that produces a mid-toned image on the given film.

It's obvious that the charge of the capacitor, and therefore the burn duration of the flash tube, is limited. When the capacitor is exhausted before proper exposure is reached, the camera considers the image underexposed. Objects farther away from the flash receive less light from it, so to properly expose objects farther away, the flash has to burn longer. This distance is limited because the charge of the capacitor, and therefore the burn duration, is limited. We call this maximum distance the “flash range”. Objects at this distance or closer can be properly exposed by this flash unit with the given film sensitivity and lens aperture.

Some people think they can “juice up” the flash and extend the flash range by dialing in a positive flash exposure compensation on their camera. But when you understand that the flash works as explained above, you also understand that turning a knob can't increase the maximum charge of the capacitor, and therefore doesn't increase the maximum burn time of the flash tube or the flash range. It would be really neat if you could save a lot of money that way, but it doesn't work.

What flash exposure compensation really does is move the cut-off point. With negative flash exposure compensation you tell the camera to cut off the flash earlier than normal, and with positive flash exposure compensation you tell it to cut off the flash later than normal. The main purpose is to compensate for subjects that are not mid-toned, so that flash exposure doesn't result in images that are mid-toned when the subject is not. So when the subject is too far away and the capacitor is exhausted before the cut-off point, it doesn't help at all to move the cut-off point even farther away.

But didn't I say that doing this even reduces flash range? How is that possible?

When using positive flash compensation, you basically tell the camera that proper exposure is not reached with the normal amount of light, but with more. Since the flash cannot output enough light to reach that level for objects at the maximum flash range, the image is underexposed as far as the camera is concerned. You have to move these objects closer to reach the level given to the camera, thus effectively reducing the flash range for the given level.

Sorry, there are no cheap tricks to increase flash range. You have to either use a stronger flash with a larger capacitor, use a faster film or shoot at wider apertures.

[Real-life modern flash systems are a lot more complex. It would have been too complicated to explain it all here. But these systems still have a limited capacitor charge. So even with the latest whiz-bang flash system, you cannot increase flash range easily.]
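The quantitative side of this is usually expressed with the guide number (GN), which, as mentioned in the comments below, is the aperture number multiplied by the distance at ISO 100. A small sketch of how range scales with aperture and sensitivity; the GN value here is just an example:

import math

def flash_range_m(guide_number_iso100_m, f_number, iso=100):
    # Maximum flash distance: range = GN * sqrt(ISO/100) / f-number.
    return guide_number_iso100_m * math.sqrt(iso / 100) / f_number

gn = 30  # example guide number in meters at ISO 100
print(round(flash_range_m(gn, 5.6), 1))           # ~5.4 m
print(round(flash_range_m(gn, 2.8), 1))           # ~10.7 m: a wider aperture extends the range
print(round(flash_range_m(gn, 5.6, iso=400), 1))  # ~10.7 m: so does a faster film or higher ISO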

Myth #7: The shorter the exposure time, the faster the shutter

Photographers can get a bit sloppy with their terminology. When they say “faster shutter” they actually mean “shorter exposure time”. The two are not really the same. Here's how shutters work:

Focal plane shutters (i.e. shutters located in the camera body between the mirror box and the film, opening and closing vertically or horizontally) consist of two sets of shutter blades. Before the shot, one set is packed away at one side of the frame, and the other set is unfolded to cover the frame. When you take a shot with a long (let's say 1/30 second or longer) exposure time, the following happens: the unfolded set opens, moving towards the side opposite the other set's resting position, uncovering the film beneath it and letting light fall onto the film. It moves at a constant speed until the entire frame is uncovered. This is called the “first curtain”. After some time, the other set starts to unfold and cover the film, starting with the part that was uncovered first. Again, the shutter blades move at a constant speed until the frame is completely covered. This is called the “second curtain”. Every point of the film is uncovered for a certain time, determined by the speed of the shutter blades and the duration the shutter stays fully open. This is called the “exposure time”.

To get shorter exposure times, the first step is to reduce the duration the shutter is fully open. By doing this you reach a point where the second curtain starts to close just as the first curtain becomes fully open. This speed is called the “x-sync speed”. It's important for flash exposure (I won't go into detail here).

If you want even shorter exposure times, you have two options. The first: you can make the shutter blades move faster. The first curtain then takes less time from fully closed to fully open, and you can close the second curtain earlier and faster. There are obvious limits to this strategy. The shutter blades are not weightless, and they have to keep their shape while they move to ensure even exposure across the frame. You cannot accelerate them beyond a certain rate; otherwise the forces of acceleration acting on the mass of the shutter blades would tear them apart.

The second option is to use a trick: it works not by making the shutter blades move faster, but by starting to close the second curtain while the first curtain is not yet fully open. The edges of the first and second curtain move across the film plane in parallel. Light falls onto the film through a moving slit between the two curtains. This way each point of the film is uncovered for a shorter amount of time, and we get the shorter exposure time that we want. For even shorter exposure times you simply make the slit between first and second curtain narrower. You don't have to move the shutter blades any faster.

[Illustrations: shutter movement below x-sync speed; shutter movement above x-sync speed]

All modern cameras with focal plane shutters work that way. The shutter blades move at the same speed for all exposure times. The camera only varies the time the shutter is fully open (for times longer than the x-sync speed) and the width of the moving slit (for times shorter than the x-sync speed). This way the shutter mechanism can be kept simple, cheap and durable. Shorter exposure times can be achieved by more precise timing instead of faster shutter blades.

The shortest exposure time of a camera is not a good indicator for the physical speed of its shutter. The x-sync speed, however, is. As explained above, the “trick” only works for exposure times shorter than the x-sync speed. If you want to make the x-sync speed itself faster, there is no other way than to make the shutter blades move faster.

So how fast do shutter blades move? Let's look at a typical modern camera. It has a vertically moving shutter, an x-sync speed of 1/90 s, and a shortest exposure time of 1/4000 s. In other words, the edge of the shutter has to move a distance of 24 mm in 1/90 of a second. The speed of such a shutter is 2.16 m/s. That's 7.776 km/h, or under 5 mph. If you walk briskly, you're moving faster than the typical shutter. Even the fastest focal plane shutters, with an x-sync time of 1/300 s, move only at about 7.2 m/s. If we didn't use the “trick”, the shutter would have to move these 24 mm in 1/4000 of a second. That would be a speed of 96 m/s, i.e. 345.6 km/h or roughly 215 mph. What a difference!
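The numbers above are nothing more than distance divided by time, plus the width of the moving slit as curtain speed times exposure time. A quick recalculation (24 mm frame height, the x-sync times mentioned above):

def curtain_speed_mps(frame_height_m, travel_time_s):
    # Speed of the shutter curtain edge crossing the frame.
    return frame_height_m / travel_time_s

v = curtain_speed_mps(0.024, 1 / 90)
print(round(v, 2))                                   # 2.16 m/s for a 1/90 s x-sync shutter
print(round(curtain_speed_mps(0.024, 1 / 300), 1))   # 7.2 m/s for a 1/300 s x-sync shutter
print(round(curtain_speed_mps(0.024, 1 / 4000), 1))  # 96 m/s if the frame had to be crossed in 1/4000 s
# Width of the moving slit at 1/4000 s with the 1/90 s shutter:
print(round(v * (1 / 4000) * 1000, 2))               # about 0.54 mm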

Oddly, if you take a photo with an exposure time of 1/4000 s with the above camera, it will take a lot longer than 1/4000 s to make the shot. It will take a bit longer than 1/90 s (the x-sync time plus the delay between first and second curtain, which is the exposure time). There are some cameras (mostly panoramic cameras with swing lenses) that use the same “trick” as focal plane shutters, but for all exposure times. A rotating barrel with a narrow slit exposes the frame from one side to the other. The exposure time is adjusted by making the barrel rotate faster or slower. Just like with focal plane shutters, it takes a lot longer to actually make the shot than the exposure time suggests.

Myth #8: Different focal lengths create different perspectives

People can be very sloppy with terminology. I've noticed that when people say “perspective” they very often mean something else, e.g. “field of view”, and that many are unaware of what “perspective” actually is. This happens even to people who should know better. Taken verbatim from a lens catalog of a camera maker:
[…] Wideangle lenses offer an increased depth-of-field perspective not possible with the human eye. […]
What the hell is a “depth-of-field perspective”!? They probably meant “field of view”.

Perspective has nothing to do with focal length. Perspective describes how a 3D scene is projected onto a 2D canvas, e.g. film, a digital sensor or even your retina. It describes how the objects of the 3D scene appear in the 2D image: which objects are visible, which are obscured by other objects, and how big they appear in relation to each other. Focal length only influences the field of view, i.e. how much of the entire scene appears in the image. Focal length does not change the relative sizes of objects in the image. Perspective is only influenced by the relative positions of the objects in the scene and by the position of the canvas (viewer, camera). If you want one object in the foreground to appear much bigger than another object farther away, it doesn't help to change focal length. The only way to accomplish that is to get closer to the object in the foreground. If you want both to appear about the same size, the only way to accomplish that is to get farther away from both objects.

Perspective is only influenced by your position, and field of view is only influenced by focal length.
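A simple pinhole-camera model makes this concrete: the image size of an object is roughly focal length × (object size / distance), so the size ratio of any two objects in the picture cancels the focal length out entirely. A minimal sketch with made-up sizes and distances:

def image_size_mm(object_size_m, distance_m, focal_length_mm):
    # Pinhole projection: image size is proportional to f * (size / distance).
    return focal_length_mm * object_size_m / distance_m

person = (1.8, 5.0)     # (height in m, distance in m)
tower = (50.0, 200.0)

for f in (20, 50, 200):
    ratio = image_size_mm(*person, f) / image_size_mm(*tower, f)
    print(f"{f:3d} mm lens: person/tower size ratio = {ratio:.2f}")
# The ratio is 1.44 at every focal length; only moving the camera changes it.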

You often hear things like “wide-angle perspective”. There is no such thing. People mean either “wide-angle field of view” or “close-up perspective” here. You also often hear “zooming with your feet”. This doesn't make any sense at all. Zooming changes focal length only (and therefore field of view), and walking changes perspective only. You can't replace one with the other.

So why is it, then, that in wide-angle photos the objects in the foreground look quite large, and in tele-photo shots the scene looks “compressed”? Doesn't this contradict what I just said? No, it's just a coincidence. With wide-angle lenses you can focus quite closely, and if you do get close to foreground objects, they will appear large. The wide-angle lens just allows you to also capture some of the background of the scene. With tele-photo lenses, you often shoot objects that are far away. It's that “being far away” that makes the scene look compressed, not the tele-photo lens. The lens only lets you capture the “right” crop of the entire scene, blocking out the foreground and concentrating on the faraway objects.

In many lens catalogs and books and on many web sites you can see a series of photos of the same scene taken with different lenses. With some of these the perspective is indeed different in different shots. Again, that's just a coincidence. What they don't tell you is that the photographer walked back to capture a foreground object at constant size while using ever longer lenses. But it's this “walking back” that caused the change in perspective, not the longer lenses.

Still not convinced? Here are a few examples:

[Image 1: shot taken with 20 mm focal length. Image 2: crop from image 1. Image 3: shot taken with 100 mm focal length.]

If focal length had any influence on perspective, shouldn't image 3 look radically different from image 2? But except for sharpness, they're identical. So why do they look the same? It's because they were taken from the same position.

Another example:

[Image 4: shot taken with 20 mm focal length. Image 5: shot taken with 20 mm focal length.]

The background is almost identical in both shots, but in image 5 the foreground is much more pronounced. Both were taken with the same wide-angle lens. Why is that? It's because image 5 was taken closer to the stump in the foreground and from a lower angle. It's not the wide-angle lens that emphasizes the foreground, it's the shooting position.

Myth #9: Tele-photo lenses have a shallow DOF

In this short form, the statement is wrong. The accurate version would be “Tele-photo lenses used at short distances and at wide apertures have a shallow DOF”. Maybe that's too unwieldy, but with enough omissions even a correct statement can turn into an incorrect one.

Here are the facts: DOF is a function of aperture and magnification (on film or sensor), and magnification is a function of focal length and distance. When you shoot a longer lens from a greater distance, you can get the same magnification as when shooting a shorter lens from a closer distance. When you also use the same aperture, you get the exact same DOF. So DOF with a longer lens will only be shallow when you also shoot from a short distance.

Many super-tele lenses can't focus particularly close. You often don't get magnifications greater than 1:6 or 1:8. For example, a 600/4 shot at its closest distance of 6 m will have a deeper DOF than a 100/2.8 shot at 80 cm (and at f/4), even though it's six times as long.
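You can check the 600 mm vs. 100 mm example with the common approximation total DOF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion and m the magnification. A sketch, assuming c = 0.03 mm for the 35 mm format and the thin-lens approximation m ≈ f/(distance − f):

def magnification(focal_length_mm, distance_mm):
    # Thin-lens approximation of the on-film magnification.
    return focal_length_mm / (distance_mm - focal_length_mm)

def depth_of_field_mm(f_number, m, coc_mm=0.03):
    # Approximate total DOF, valid for small to moderate magnifications.
    return 2 * f_number * coc_mm * (m + 1) / m ** 2

m_600 = magnification(600, 6000)   # 600 mm lens at 6 m: about 1:9
m_100 = magnification(100, 800)    # 100 mm lens at 80 cm: about 1:7

print(round(depth_of_field_mm(4, m_600), 1))   # about 21.6 mm
print(round(depth_of_field_mm(4, m_100), 1))   # about 13.4 mm: less DOF despite the shorter lens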

Often you get a smoother out-of-focus background with a longer lens. But that's not because DOF is shallower. The longer lens with its narrower angle of view just sees a smaller section of the background, and it's easier to find a smooth section of the background when it's small rather than large. For example, you have to turn a 600 mm lens by only 4° to get a completely new background. To do the same with a 100 mm lens you have to turn it by 24°. So if you're after a smooth background, using a longer lens may be a good idea. But if you're actually after a shallow DOF, using a longer lens may not be enough.

Myth #10: Macro lenses are only sharp at close distances

I don't know if this was ever true; maybe half a century ago. Today's macro lenses are just as sharp at infinity focus as at close distances. You can use a macro lens for general photography just like any other lens.

Myth #11: Digital cameras have a deeper DOF than film cameras

When digital cameras with sensors smaller than the usual 35 mm format were released, all kinds of confusion were created. First, there was the infamous “focal length multiplication factor”, which really just describes a crop. Then came the myth that the same lens, used on a digital camera instead of a film camera, produces a deeper DOF. The myth lives on with full-frame digital cameras.

Technically, this is wrong. The depth of field depends only on magnification (on film or sensor) and aperture. “Magnification” is the size of the image projected onto the film or sensor relative to the size of the object in front of the lens. This relationship does not change, no matter what kind of sensor you put behind the lens or what size it is. With the same magnification and the same aperture, you will always get the same DOF.

For practical purposes, there is some truth to this myth. That's because you usually don't compose your shot for a certain magnification but for a certain framing. For example, you try to fill the frame with some object. With film or sensors of different sizes, this results in different magnifications, and hence in different DOF. With smaller sensors, you shoot at a smaller magnification when you fill the frame with a given object. A smaller magnification leads to a deeper DOF when you use the same aperture. With a larger sensor, e.g. medium format film, the same framing results in a larger magnification, reducing DOF. So DOF indeed appears deeper for smaller sensors and shallower for larger sensors. However, this is not because of the size or nature of the sensor, but because you typically use these formats differently.
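To make the “same framing, different magnification” point concrete: filling the frame with the same object means the magnification scales with the sensor or film width. A sketch using the same DOF approximation as in myth #9, holding the circle of confusion fixed at 0.03 mm (a simplification, as the argument above does implicitly); the sensor widths and the 1 m wide subject are just example values:

def magnification_for_framing(sensor_width_mm, subject_width_mm):
    # Magnification needed to fill the frame width with the subject.
    return sensor_width_mm / subject_width_mm

def depth_of_field_mm(f_number, m, coc_mm=0.03):
    return 2 * f_number * coc_mm * (m + 1) / m ** 2

subject_width = 1000.0  # a 1 m wide subject
for name, width in (("APS-C sensor", 23.5), ("35 mm frame", 36.0), ("6x6 film", 56.0)):
    m = magnification_for_framing(width, subject_width)
    print(f"{name:13s} m = 1:{1 / m:.0f}, DOF at f/4 = {depth_of_field_mm(4, m):.0f} mm")
# Smaller format -> smaller magnification -> deeper DOF for the same framing and aperture.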

Myth #12: Medium format lenses have a higher resolution than 35 mm lenses

Bad news: most often the opposite is true. Most medium format lenses have a lower resolution than comparable lenses for the 35 mm format. The good news is that the larger film format more than makes up for the lower resolution.

For example, if a 35 mm lens can resolve 80 lp/mm (line pairs per millimeter), it resolves 2880 lp over the 36 mm width of a 35 mm frame. A medium format lens only needs to resolve about 50 lp/mm to project 2800 lp onto its 56 mm wide frame. If it can resolve between 50 and 80 lp/mm, it resolves more line pairs on its frame than a 35 mm lens does on a 35 mm frame.
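The arithmetic is simply line pairs per millimeter times frame width:

def total_line_pairs(lp_per_mm, frame_width_mm):
    # Total resolved line pairs across the frame width.
    return lp_per_mm * frame_width_mm

print(total_line_pairs(80, 36))   # 35 mm frame: 2880 lp
print(total_line_pairs(50, 56))   # 6x6 medium format frame: 2800 lp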

Because of the typically lower resolving power of medium format lenses, it also makes little sense to adapt these lenses to 35 mm cameras, at least if you're trying to gain resolution. Such adapters only combine the disadvantages of both systems. With medium format lenses you only gain resolution by also using medium format cameras.

Myth #13: Using a TC on a shorter base lens results in different DOF compared to using a longer base lens

When you mount a TC between the lens and the camera, the TC changes both the true focal length and the true relative aperture of the lens (the physical aperture does not change, but since the true focal length becomes longer, the relative aperture becomes smaller). With a 1.4× TC the focal length becomes 1.4× longer, and the aperture becomes 1 full stop smaller. With a 2× TC the focal length becomes 2× longer, and the aperture becomes 2 full stops smaller.

DOF only depends on true focal length, true relative aperture and focus distance. When you use a TC to make a lens longer, you get all the characteristics of the longer lens, including DOF. For example, when you use a 2× TC on a 200/4 lens, you get the DOF of a 400/8 lens.
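The bookkeeping is simple multiplication: the TC factor scales the focal length, scales the f-number, and costs 2·log2(factor) stops of light. A small sketch:

import math

def with_teleconverter(focal_length_mm, f_number, tc_factor):
    # Effective focal length, effective f-number and light loss in stops.
    stops_lost = 2 * math.log2(tc_factor)
    return focal_length_mm * tc_factor, f_number * tc_factor, stops_lost

print(with_teleconverter(200, 4, 1.4))  # roughly (280, 5.6, 1 stop)
print(with_teleconverter(200, 4, 2.0))  # (400, 8, 2 stops)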

The following two pictures were taken from the same distance. In both shots the focus was on the middle of the ruler. One picture was taken with a 400/4.5 lens at f/8. The other was taken with a 200/4 lens + 2× TC (= 400/8) at f/8. As you can see, DOF is the same in both shots.

[Shot 1: 400/4.5 at f/8. Shot 2: 200/4 + 2× TC at f/8.]

Myth #14: Depth of field around the focus distance is 1/3 towards the camera and 2/3 away from the camera

That's only a very rough rule of thumb, valid for medium magnifications. The distribution of depth of field varies a lot. At larger magnifications, e.g. larger than 1:15 (!), DOF is roughly symmetric, i.e. front DOF and rear DOF are almost the same. As magnification becomes smaller (typically due to increased distance), DOF becomes more and more asymmetric. At some point there is indeed a 1:2 relationship between front DOF and rear DOF. But if you decrease magnification further, rear DOF becomes larger and larger. At some distance, rear DOF even reaches infinity. We call this distance the “hyperfocal distance”. Also see the explanation on the optical formulas page.

As an example, here's a short table of front and rear DOF for 50 mm focal length and f/4 for various distances:

Distance [m]    Front DOF [m]    Rear DOF [m]    Front:rear ratio
   0.5              0.012            0.012            1:1
   1                0.046            0.05             1:1.09
   5                0.968            1.579            1:1.63
  10                3.243            9.231            1:2.85
  15                6.279           38.571            1:6.14
  20                9.796            ∞                1:∞

Moral: If you want to know depth of field, use a DOF calculator or the DOF preview function of your camera.
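For reference, here is a minimal DOF calculator sketch using the standard hyperfocal-distance formulas. It assumes a circle of confusion of 0.03 mm; the table above was apparently computed with slightly different parameters (its rear DOF already reaches infinity at 20 m), so the numbers come out close to, but not exactly, the values listed:

def dof_limits_m(focal_length_mm, f_number, distance_m, coc_mm=0.03):
    # Near and far limits of acceptable sharpness (thin-lens approximation).
    f = focal_length_mm / 1000.0
    c = coc_mm / 1000.0
    hyperfocal = f * f / (f_number * c) + f
    near = distance_m * (hyperfocal - f) / (hyperfocal + distance_m - 2 * f)
    if distance_m >= hyperfocal:
        return near, float("inf")
    far = distance_m * (hyperfocal - f) / (hyperfocal - distance_m)
    return near, far

for d in (0.5, 1, 5, 10, 15, 20):
    near, far = dof_limits_m(50, 4, d)
    print(f"{d:5.1f} m: front DOF {d - near:6.3f} m, rear DOF {far - d:8.3f} m")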



Readers' comments

#1: Comment posted by Steve Jones on November 8th, 2008 - 03:52:48 PM:
I think Myth #8 as documented is misleading. There is indeed a difference in perspective if you use two different focal length lenses to frame similar scenes. I say similar - it's simply impossible to frame a 3D scene identically with lenses of different focal lengths due to perspective differences; you can only frame a 2D object with a plane parallel to that of the sensor using two different focal lengths. However, people often have the option of picturing something like a building with a wide-angle lens or from a greater distance using a longer lens. If the focal length is markedly different, then so will be the perspective.

One of the most obvious examples of this is with portraiture where there is a radical difference between taking two identically framed photos with different focal lengths (and a common sensor size). In general, short focal length lenses will exaggerate features and the longer focal length lens will tend to flatten them. It's for that reason that moderate telephoto lenses such at 85mm or 135mm (of 35mm FF) are often used for such purposes - the effect is considered to be more flattering. Frame the same person with a 28mm lens and the perspective will be very different indeed.

So perspective is a combination of the distance from the scene, and the focal length used. For similarly framed photographs then the perspective will be differ between lenses of different focal lengths.

It is this effect that people are referring to when they state that focal length changes perspective - it's the view of a "similarly framed" object with different focal lengths which necessarily involves changing the distance too. So I'd characterise this as a "part myth".
Michael Hohner answers:
I can only repeat what I wrote above: the change in perspective that you see when you frame a subject identically with different lenses is not caused by changing focal length, but only by changing the distance to the subject. I think it is important to emphasize this, because if you don't know which cause has which effect and only work with some vague idea of it, you're less likely to make a conscious decision about which lens and distance to choose to get the effect you want. I admit that sometimes, when the myth is stated as titled above, it is only for the purpose of simplification, but I suspect just as often it is stated as such out of ignorance.
#2: Comment posted by KGruppe on May 3rd, 2009 - 01:11:36 PM:
Great doubts of your some conclusions.
Myth #12 is your own opinion. I have an experience with medium format Mamiya 75/3,5 lens fitting to Sony A700 DSLR camera. IQ results as Leica-like comparison to or even more.
See please: http://public.fotki.com/Essel33/medium-format-lens-/
Michael Hohner answers:
This is not just an opinion, it's based on MTF measurements of some MF lenses.
#3: Comment posted by KGruppe on May 4th, 2009 - 12:25:44 PM:
Well. But gaussian type lens for RF cameras in medium format have nice MTF.
Which MF lenses you meaning and where I find MTF measuring for productivity understanding?

Some of MF lens have 100 lp/mm resolution as Mamiya lens for example.

http://www.hevanet.com/cperez/MF_testing.html

Mamiya 7 50mm f/4.5 68 76 68 f/4.5
107 96 42 f5.6
107 107 48 f/8
96 96 68 f/11
85 85 68 f/16
54 54 60 f/22

Mamiya 7 80mm f/4 120 120 60 f/4
120 120 68 f5.6
120 107 68 f/8
107 107 76 f/11
76 76 68 f/16
60 60 60 f/22
Michael Hohner answers:
For example, at www.photodo.com you can find many MF lenses with MTF ratings that are significantly lower than that of quality 35 mm lenses. For example, compare the Hasselblad Makro-Planar CF 120mm f/4 with the Minolta 100/2.8 Macro.

Note that I didn't say that there are no MF lenses with higher resolution than 35 mm lenses.

#4: Comment posted by Steve Meredith on May 6th, 2009 - 06:17:26 PM:
Thanks for the interesting essay!

I believe there's an error in your speed calculation in Myth #7 though. Since there are two curtains each only has half of the 1/90s to travel across the frame, which means the speed they travel at will be double what you have stated.
Michael Hohner answers:
The numbers are correct. Both curtains travel in the same direction across the full frame, and if the x-sync time of the camera is 1/90 s, this will take 1/90 s.
#5: Comment posted by Douglas on May 6th, 2009 - 10:33:42 PM:
I would say that using MF lenses is still a viable option on 35mm DSLR cameras. While it's true that MF lenses don't have to be as sharp, they usually are. Now that digital backs have pixel pitches similar to the A900, these lenses are tested and still perform well. Cameras like the A900 only uses the "sweetspot" of the MF lens, so results are even better. I know of photographers who have exhaustive lens tests with the A900, D3x, and 5Dii, and the Zeiss 120 CF is still tops. Plus, a lot of these MF lenses have an advantage, because they only have 4-5 elements, resulting in less flair. IMO, if you already have some Hassie Zeiss lenses, get an adapter, as it's worth your while. :)

#6: Comment posted by scottG on May 12th, 2009 - 12:05:21 AM:
michael, thanks for your great photography essays. I have a question, regarding #7 - are you saying if I'm taking a photo with my Nikon d300 at 1/1000th of a second, of a hummingbird, it really takes 1/250th of a second to take the photo (the sync speed)? if that was the case, wouldn't hummingbirds always come slightly blurry? I'm confused! doesn't shooting at 1/1000th of a second truly "freeze motion"? is there no point in shooting faster than 1/250th to freeze motion?
Michael Hohner answers:
Yes, the process of taking the photo will take about 1/250, even when your exposure time is 1/1000. This is just the time the first and second curtain take to travel across the frame. Of course, each point of the frame will only be exposed for 1/1000.

If you could take two photos of the same moving subject, both at 1/1000, but one with a camera with an x-sync time of 1/60 and the other with an x-sync time of 1/250, you'd probably see a difference.

#7: Comment posted by John Leonhard on July 7th, 2009 - 08:40:10 AM:
The best explanation of focal length vs. perspective that I have come across. Thanks, you're a good teacher, you are particularly articulate and not confusing. (have you other pages of explanatory information that one can read?)
Michael Hohner answers:
Check the menu to the left. This is all there is currently.
#8: Comment posted by gimumancer on October 5th, 2009 - 09:42:20 AM:
hi, i think myth # 9 can be revised more simply to "Tele-photo lenses have shallowER DOF" instead of "Tele-photo lenses used at short distances and at wide apertures have a shallow DOF", my 2 cents...
Michael Hohner answers:
Well, shallower than what? As the section describes, it's more complicated, and focal length is only one of several factors.
#9: Comment posted by Steve Lane on January 26th, 2010 - 05:19:31 PM:
Myth #10 "You can use a macro lens for general photography just like any other lens."

This is not strictly true; I have two macro lenses that only operate from between 2 to 9.5 x life size. Even with no extension, they cannot focus in areas that can be considered OK for general photography. They are strictly high magnification macro lenses. There is some confusion as to what a macro lens is; true macro lenses operate from life size (1:1). The majority of lenses that focus from infinity down to half life size should be considered as 'close-focus' lenses. These are the type that can be considered good for general photography. I have one of these (Zuiko Digital 50mm F2) and it is superb for capturing general images as well as close-ups.
Michael Hohner answers:
There are always exceptions to a rule. The above entry deals mainly with “normal” macro lenses that also focus to infinity.

I'd better not start a discussion about what a “true” macro lens is. Following your definition, Zeiss currently does not have a “true” macro lens for SLRs.

#10: Comment posted by Rizal on January 28th, 2010 - 10:32:34 AM:
A very good article to read, which many misguided about photography that needs to be learned. I hunt continued in the book or the internet about fotografie, but this is very different from artkel-existing article.

Thank you very much I really enjoyed your article.

Yose Rizal
#11: Comment posted by Madeline on February 10th, 2011 - 10:50:00 PM:
Great clear information. Thank you!
#12: Comment posted by Tristan Grimmer on May 27th, 2011 - 09:13:39 PM:
Regarding Myth 8 again. I really think that what it being described here as a myth is incorrect.

"the change in perspective that you see when you frame a subject identically with different lenses is not caused by changing focal length, but only by changing the distance to the subject."

If a subject is framed identically with a large FOV lens, the ratio of its size to a further-away object will be much larger than with a smaller FOV lens. Changing 'the distance to the subject' is simply what needs to be done to achieve 'identical framing'. It is the FOV/focal-length that very directly affects 'perspective'.

In 3D computer graphics we use a set of transformations to get things into 2D picture form:
v' = PVMv where v' is the final 2D position (like in a final photograph) of a point 'v' on an object in the world. Basically, and reading backwards, positions in the world (Mv) are transformed by a view (V). The view is a function of the camera's position/orientation only. After that the positions are in 'eye space' (camera-space). Then we tranform the point by a projection (P)... the projection transformation represents the 'lens' in a real camera. The projection may be parallel (eg orthogonal like CAD drawings, or isometric), or it may be a perspective transformation. To create the perspective transformation you generally need a FOV angle (horiz, vert, or diag), an aspect-ratio, and near and far clipping distances. The focal-distance,film-width/height directly converts to a FOV/aspect/near-plane-dist. The point here is, the focal distance directly affects the perspective matrix... and that's all... it does NOT affect the view transformation at all. In fact, I would go as far as saying: "The primary variable that affects perspective is the focal distance."

Your example with the cropped pictures is very misleading... the 'cropping' is really another (scaling/clipping) transformation and can't be discounted. In fact, that's exactly what widening the FOV does, it pushes more content into the same space, so to undo it you needed to do your scaling/cropping! Again, the point here is that with the 20mm focal distance (larger FOV) I can make the relative sizes of any two objects I choose in the scene be much more different than with the 100mm lens... yes I need to walk around so I can frame, but I, with the 20mm, will be able to achieve a bigger difference than you with the 100mm... and I don't care where you walk or position yourself. In fact, as the focal distance increases, the projection becomes less and less perspective and more and more parallel.

I will concede that this may be a bit of a problem in naming. To many of us, and certainly mathematically, the word 'perspective' is the effect of things closer being bigger than things far away (but there may be some other artistic or common usage I am unaware of). The other extreme is a 'parallel' projection where distance has no bearing on final size. This is why, for example, taking a portrait is better done with a smaller FOV... you don't get the big noses... less perspective, more parallel... having to walk around (or back in this case) is the 'side-effect'... it's what you need to do to frame your shot.

--tristan
Michael Hohner answers:
In your 3D CGI example, the angle of view does not change perspective at all. It merely restricts what parts of the scene will be visible in the rendering. The size relationships of the objects in the scene do not change when you change angle of view. What does influence perspective is the position of the (virtual) camera in the scene. And that's exactly what also happens in the real world: The lens determines only angle of view, and the position of the camera in the scene determines only perspective.

I know that cropping changes the field of view; that's not misleading. It's the whole point of the example. It shows that by changing lenses you only select which parts of the scene are visible in the picture. It does not change the size relationships and the apparent relative positions of the objects in the scene (i.e. “perspective”) unless you also change the position of the camera.

I think you're confusing terminology a lot. When you say “FOV” (and most confusingly, “FOV angle”), you sometimes mean “angle of view”, and sometimes “field of view”. These are two very different things and should never be confused. I'm also not sure what you mean with “focal distance”. Maybe simply “distance”. “Focal length”, in any case, is something different. There's also no such thing as “less perspective” or “more perspective”. “Perspective” is not a quantity.

I suggest that you look around with a small frame on your hand. Holding the frame close to your eye corresponds to a short focal length (wide angle of view), holding it at arm's length corresponds to a long focal length (narrow angle of view). When you don't move your eye, but only move the frame in your hand, what's changing? Only what you see inside the frame vs. outside the frame. The size and position of the objects in the frame relative to each other (i.e. perspective) does not change.

#13: Comment posted by Tristan Grimmer on May 28th, 2011 - 01:12:52 AM:
Hmm... interesting. I can say that I agree 100% with what the end effects are as you describe them... but not necessarily the terminology. I don't think I'm 'confusing' terminology but rather that different fields use terminology differently. I must also admit here I'm in the CG field, not photography. FOV for us IS an angle... it is either the horizontal, vertical, or diagonal angle formed by the 'view frustum' sides, the frustum is often represented as a 4x4 'Perspective Projection' matrix. Unlike cameras that often exhibit some sort of 'lens distortion', the sides of our frustum are perfectly flat, and straight lines always remain straight (unlike a real lens).

Also interestingly, I would use the terms 'Angle of View' and 'FOV' interchangeably (and, not that Wikipedia is the end-all and be-all, but it also agrees: "In photography, angle of view describes the angular extent of a given scene that is imaged by a camera. It is used interchangeably with the more general term field of view.")

I'm curious how you (or photographers in general) define them differently... as I suspect Wikipedia may be wrong in this instance.

I also treat length and distance as synonyms... so when I say focal distance I mean what you call focal length. Even in optics, Front focal length (FFL) and Front focal distance (FFD) are synonyms.

I suppose the crux is, CG does NOT define perspective as the size and position of the objects in the frame relative to each other...that's the 'view' transformation that does that. Rather, perspective is a type of projection, one that compresses space more the further away you get... it defines a 'frustum' with non-parallel sides.

I think of perspective as the degree to which parallel lines in the real world end up being NOT parallel in the end photograph.... or in artistic terms, the amount of 'foreshortening'.

Anyways, thanks for your response! Still curious about your take on 'Angle of View' :)
Michael Hohner answers:
Let me clarify the terminology. Imagine the view from the camera being a pyramid in space, with the tip at the camera and the base directed towards the scene (actually, it should be a cone, not a pyramid, but the rectangular recording medium crops the cone to a pyramid). With this visualisation:

  • Angle between the sides of the pyramid: angle of view
  • Distance between tip of the pyramid and the base: distance, or if you choose to place the focus of the lens there, also focus distance
  • Size of the base of the pyramid: field of view
  • In general, the plane of the base, uncropped: field
  • (while we're at it, flatness of the base, as it may slightly bend in or out: field curvature)

The focal length in photographic terms is not directly visible here. It's a property of the lens and merely translates to an angle of view.

As you see, changing lenses or zooming only changes the angles between the sides of the pyramid, i.e. the angle of view, but not what objects at the base look like in the picture. Walking back and forth (and possibly refocusing) only changes the height of the pyramid and therefore also the size of the base (field of view), but not the angles.

#14: Comment posted by Tristan on May 28th, 2011 - 08:55:20 AM:
Oh yes... to be clear, when I say "more perspective" and "less perspective" I probably should have said "more perspective distortion" and "less perspective distortion". A good example of this can be found here:

http://en.wikipedia.org/wiki/Perspective_distortion_%28photography%29

The cube on the right goes from a high level of perspective distortion to none at all for the final isometric projection.

Again, the claim that "In your 3D CGI example, the angle of view does not change perspective at all" is completely false in the field of CG. The perspective matrix (P) is completely defined by an FOV angle, an aspect ratio, and a near and far clip distance:

http://www.opengl.org/sdk/docs/man/xhtml/gluPerspective.xml

In that particular implementation the fov angle (in radians) is the vertical one... although it could be rewritten to either take a diagonal or horizontal fov angle.
#15: Comment posted by Martin on May 30th, 2011 - 06:40:12 PM:
I think I understand the disagreement that you two are having. Michael's statement is that for a fixed target, shot from a fixed location, perspective is compeletly independent of focal length. Tristan, I think you're trying to say that the shorter focal length allows the maximum amount of perspective distortion, which is also true, but requires you get closer to the target in order to take advantage of this option. Again, for a fixed shooting distance, the amount of perspective distortion is unchanged. Please note, that the example you gave from wikipedia is doing exactly that: the camera is moving away from the target as the focal length is being changed (look at the number of grid squares between the base of the cube and the base of the frame)
#16: Comment posted by jurgen on August 22nd, 2011 - 02:53:48 PM:
the explanation done in myth 8 is 10 % correct to my opinion. Even computer graphics apply the pinhole camera model. First Object coordinates are transformed from world to Camera coordinates, and than to image and pixel coordinates using a perspective projection and a camera matrix. F, the focal length defines the position of the image plane. Changing the focal length only shift this projection plane , a bigger focal length will allow you to project more FOV on the ccd,
but ratios stay the same. Which is of course not the case if you change the camera's position. It fits nicely because I applied the camera model to make simulated images and the explanation here helped me understand. Nice thanks
Michael Hohner answers:
See the discussion above. Focal length in photography does not define the position of the image plane.
#17: Comment posted by Mario Liedtke on January 12th, 2012 - 05:01:33 AM:
I just want to thank you yor that great article!
It teached me one more time, that there are to many wannabes around that make people confuse by using sloppy terms.
Thank you for helping me back on the right way. With the very most of your words I agreed already before started reading, but now I know more exactly WHY!

Thanks a lot!
#18: Comment posted by Jay S on January 31st, 2012 - 02:37:55 AM:
Thanks for a great write-up!

It took me a while to understand the entire concept behind Myth #8, but I do see that you are completely correct with what you say. Ironically, there are two ways to look at it!

The way I see it, perspective is the trajectories that light takes from the scene to the camera sensor... these lines do not change at all when zooming (which is essentially has the same effect as cropping), but they do change when the camera is moved.

However, the point of view many take is that to get the same frame (in terms of foreground) at a different focal length, the camera must be moved. It's the moving that causes the perspective change, though, and the zooming that resizes the foreground/subjects of the frame.

Thanks, and cheers from Canada!
#19: Comment posted by Pranav on January 31st, 2012 - 11:53:56 AM:
Hi,
Myth#6:-
Last line:-
"You have to either use a stronger flash with a larger capacitor, use a faster film or shoot at wider apertures."

I am not able to understand how shooting at wider aperture or faster film(i guess you mean shorter exposure time)can increase the flash range?
Michael Hohner answers:
Flash power is measured as a guide number, which is calculated as the aperture number multiplied by the distance at ISO 100. So for the same GN, using a larger aperture (smaller aperture number) results in a greater flash distance. When you use a larger aperture, more light enters the camera, so objects at a greater distance can be illuminated less by the flash and still be considered exposed correctly.

The same is true for increased film/sensor sensitivity. Objects at greater distance can be illuminated less by the flash and still be considered exposed correctly, because the higher sensitivity compensates.

Exposure time is irrelevant for regular flash and low ambient light.

Also see the Flash Compendium for more details.

#20: Comment posted by Pranav on February 2nd, 2012 - 05:40:19 AM:
Hi,
With increased sensitivity of film or sensor you mean higher ISO number?
Michael Hohner answers:
Yes.
#21: Comment posted by Keith Toh on February 4th, 2012 - 03:49:25 AM:
I want to thank you for this epic article! I grappled with these same myths for years. Reading explanations from so-called experts only served to confuse me even more. In the end I had to discard all pre-assumptions and verify everything from first principles, while piecing together facts from various sources. I finally came to the same conclusions as yourself, but how I wish I discovered your article earlier- it would have saved me years of brainwracking!

My view is that photography has a creative side and a technical side. The former requires imagination, aesthetics and an artistic sense. The latter, however, requires razor-sharp analyses, intellectual rigor and a no-nonsense approach. Photographers who don't appreciate these will confuse themselves and mislead others to no end. Your article addresses the technical aspect of photography authoritatively- like a sharp knife slicing apart an entangled web of photography concepts. Massive kudos to you for that!

To other fellow photographers:

If you still have doubts about Michael's statements, please spend the time to re-read the passages, or even test out his suggestions to convince yourself. I assure you it will be worth the effort- because his article is the closest to the truth I have ever read. If you intend to disagree, please take one word of advice from me: in any debate, agreeing on definitions is crucial. Sloppy or conflicting usage of terminology will render the whole discussion pointless. Michael's usage of terminology is logical and consistent within the field of photography. Terms like 'perspective' may mean different things in other fields, but I think it serves little benefit to impose them here.

Well, that's all from me. Congratulations on a great piece of work, I will surely recommend any photography buff to visit this wonderful page!
#22: Comment posted by Sage on April 28th, 2012 - 09:09:37 AM:
@ Tristan Grimmer,

I wouldn't compare 3D programs to life/photography. There's a lot of "cheating" going on under the hood in regards to perspective and field of view (and lighting and reflections, etc.).

From my understanding, changing the Field of View is akin to positioning the lens so that the image projected on the sensor is larger; i.e. being magnified onto the sensor.

Changing the perspective is getting actually closer to the subject so that it simply fills more of the sensor.

The camera doesn't see the subject differently at different FOV's, it's just a way of selecting a specific area of the scene or a broad area, call it pre-cropping.

When you move closer to the subject the camera does see it differently, relative distance between elements of the subject change. If I'm 25 feet away from you, your ears may only be 5 inches further away than your nose. That 5 inch difference is infinitesimal compared to the overall distance of you and I, only 1.6%. Because of that, they will pretty much look relatively undistorted and in the right position. I can now use my 300mm zoom lens to capture a portrait of your head.

Now lets say I switch to a much wider lens, maybe an 18mm wide angle lens. If I want to fill the frame with your head, I now have to get way closer to you because my field of view is much larger now. Lets say I have to get 1.5 feet away in order to fill the frame with your head. Now that 5 inch difference between your nose and ears is significant, it's 27.7% of the entire distance between you and I. The relative size of your nose to your ears is much different because of the new perspective.

The actual figures are estimated but the logic behind them should be valid.
#23: Comment posted by Perspective on May 17th, 2012 - 12:05:44 AM:
The issue with Number 8 is the idea that the camera determines perspective and not the viewer; that perspective is the location of viewpoint.

But If you asked 100 people if image 1 and 3 (from the first example) were different perspectives, everyone would say yes. Because from the viewer's point of view of perspectives are different (there is more stuff in image 1 not in image 3. This is why they call it a wide-angle perspective vs a tele-photo.

There are really two types of perspective going on here.
1) The distance of subjects to the film plane and their relationships to one another (this does not change with focal length).
2) The way the viewer sees the scene in relation to the space it fills in the frame (this does change with focal length).
Michael Hohner answers:
But #2 is not “perspective”, it's “cropping” or “composition”. That's the whole point.
#24: Comment posted by Richard on June 29th, 2012 - 06:36:37 PM:
Just wanted to say THANKS for a great article - things I was ignorant of or confused about are now clear to me. I really enjoyed reading this.

re: the Perspective vs Focal Length myth - the debate is over, the street scene photos prove the point: focal length has no effect on perspective.

Maybe now the online community of photographers will officially retire the expression "zooming with your feet." I always knew it made no sense; now I understand why.

Thanks again!
#25: Comment posted by Chris on February 28th, 2013 - 03:26:28 PM:
Wow! Epic! Thanks so much for contributing to the enlightenment of people! The discussions/replies to some of the myths are so revealing, and they just illustrate how much more of this kind of work is needed. Very, very good job, Michael!

I'd enhance #8 a little with some more images to include the same-framing, different-focal-length argument, in case you want to invest any more time in this.

The same goes for the shallow-DOF myth #9. It was a long time ago, but I calculated this once, and as I recall it roughly boils down to the rule you outlined already: same framing, same aperture value, same DOF, no matter what the focal length. (Limited, but only slightly, by the differences becoming more visible at larger focal-length differences.)

Thanks again and best regards!
Chris
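To put rough numbers on the rule Chris recalls, here is a minimal Python sketch using the standard hyperfocal/depth-of-field approximation; f/2.8, a 0.03 mm circle of confusion and the subject distances are assumed example values, with the distance scaled in proportion to the focal length so the framing stays the same:

    def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
        """Total depth of field in mm (valid for subjects closer than the hyperfocal distance)."""
        hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
        near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
        return far - near

    # Same framing: the subject distance grows in proportion to the focal length.
    for focal_mm, subject_mm in ((50, 2000), (100, 4000), (200, 8000)):
        print(f"{focal_mm:>3} mm at {subject_mm / 1000:.0f} m, f/2.8: "
              f"DOF = {dof_mm(focal_mm, 2.8, subject_mm):.0f} mm")
    # Roughly 262-263 mm in all three cases -- the differences only become
    # noticeable at extreme focal-length ratios or very short distances.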
#26: Comment posted by Jay on May 25th, 2013 - 05:16:57 AM:
As for myth #3 (50mm lens on 35mm camera), indeed the angle of view is not the reason why it is considered a normal lens, and the angle of view or FOV is not what people mean when they say "it delivers about the same view as the human eye".

It is about 3D perspective, about the relation of distances. When you shoot objects with a wide-angle lens, the perspective is stretched away from you. Objects that are far away appear even further away. Likewise, when you shoot with a telephoto lens, distance appears to be compressed, tightened. Think of Formula 1 cars on TV, shot with a telephoto lens, driving towards you. They're going faster than 300 km/h, yet they hardly seem to move.

(By the way, this effect is used in a cinematographic technique called the 'dolly zoom': you zoom in while backing off, or zoom out while getting closer, so your subject stays framed as it is while the whole background appears to stretch or compress.)

So, both wide-angle and telephoto lenses distort the perception of distance compared to what we humans are used to seeing. A 50mm lens on 35mm cameras is as close as you can get to reproducing the normal perception of distance in photographs (it's slightly above 48mm, so 50mm comes very close).
Michael Hohner answers:
I think you should go right to myth #8 and read that. The reason why the F1 car seems to hardly move is not that it was shot with a telephoto lens, but that it is far away. If you looked at that scene with your bare eyes and concentrated only on the car, it would look the same. Perspective is not influenced by the lens.
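To put rough numbers on that, here is a minimal Python sketch: how fast the car appears to grow in the frame depends only on its distance and speed, never on the lens. The 300 km/h, the two distances and the 0.2 s interval are assumed for illustration:

    # Apparent (angular) size scales with 1/distance, so its growth over a short
    # interval is what makes the car look fast or slow on screen.
    SPEED_M_S = 300 / 3.6     # 300 km/h
    INTERVAL_S = 0.2          # a 0.2 s slice of footage

    for distance_m in (600.0, 60.0):
        closer_m = distance_m - SPEED_M_S * INTERVAL_S
        growth = distance_m / closer_m - 1
        print(f"car {distance_m:.0f} m away: apparent size grows {growth:.0%} in {INTERVAL_S} s")
    # ~3% at 600 m (looks almost frozen, no matter how long the lens is),
    # ~38% at 60 m (clearly rushing towards the camera).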
#27: Comment posted by Jay on May 25th, 2013 - 04:32:32 PM:
@Michael Hohner:
Yes, that is true of course. As such, there is nothing "special" about the 50mm lens. But still, when you are using a telephoto lens, you are bringing the subject closer.

Let me explain it like this.
When you shoot very wide angle (and let's just assume the lens is free of distortion) and want the picture to look "natural" to our eyes, you could simply crop the picture in post to get that "natural view". So yes, technically that has nothing to do with the lens used. But when you're shooting telephoto, you would have to extend the picture for the same effect, which you cannot do unless you shoot a panorama with enough rows and columns to get the same field of view as you would with a 50mm. So, again, yes, it's not something special about the lens.
But the point is, human vision is not used to seeing just a crop of its normal view, and that's what you get with a telephoto lens.
Our two-eye stereo view gives us "real" 3D vision only within a certain range from where we stand, because the distance between our eyes is rather small. Objects further away appear flat to us. We don't actually notice, because our brain knows they're 3D too. But seeing only flattened objects in the frame is not what our normal vision is used to seeing, especially in relation to the distance. Because as soon as objects get closer to us (and that is, by apparent size, what a telephoto lens does), they "un-flatten" and start to look three-dimensional. They don't do that when you use a telephoto lens.
So, to be absolutely precise, I will say: the *field of view* you get when you use a 50mm lens on a 35mm camera is what best reproduces the human perception of distance. You could just as well crop a wider shot to the same FOV or stitch a panorama of telephoto shots to the same FOV, but if you don't want to do that and just ask for a lens that does this without any post-production, it's the 50mm.

The best proof that it is not about the lens: to get the same FOV on a crop camera like an entry-level DSLR with a 1.5x or 1.6x crop factor, you'd have to use a roughly 30mm lens. You could just as well use a 30mm on a full-frame camera and then crop the picture to compensate for the larger sensor. It is all the same. So, once again, yes, it's not the lens but the FOV, but you only get that FOV with a 50mm/30mm lens unless you put in more work than just taking the picture.
Can we agree on this?
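For the field-of-view side of this, here is a minimal Python sketch using the usual diagonal angle-of-view formula, aov = 2 * atan(sensor diagonal / (2 * focal length)); the sensor diagonals are nominal values and the crop "equivalents" are rounded, so the angles only match approximately:

    import math

    def diag_aov_deg(focal_mm, sensor_diag_mm):
        """Diagonal angle of view in degrees."""
        return math.degrees(2 * math.atan(sensor_diag_mm / (2 * focal_mm)))

    setups = [
        ("50 mm on full frame (diag 43.3 mm)", 50, 43.3),
        ("33 mm on 1.5x crop  (diag 28.8 mm)", 33, 28.8),
        ("30 mm on 1.6x crop  (diag 26.8 mm)", 30, 26.8),
    ]
    for label, focal_mm, diag_mm in setups:
        print(f"{label}: {diag_aov_deg(focal_mm, diag_mm):.1f} degrees")
    # All three come out around 47-48 degrees, which is why a roughly 30 mm lens
    # on an APS-C body frames about like a 50 mm lens on full frame.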
#28: Comment posted by Pat on June 19th, 2013 - 01:00:10 AM:
Thanks for clarifying a lot of points for me, but I am now very confused about #7 and shutter times.
If we were taking a photo of, say, a hummingbird and we got motion blur at 1/200 s (my fastest sync speed), would we not get the same blur at 1/4000 because the time taken to expose the sensor completely is still 1/200 s?
If that is true, what is the value of shutter speeds above sync speed?
I am sure I have misunderstood something here.
Michael Hohner answers:
Imagine an object moving vertically in the frame, parallel to the travel of the shutter blades, with the shutter above sync speed (see animation above). For each part of the sensor the object appears reasonably sharp, but as a whole it looks strangely distorted in the direction of the shutter blade movement, as if it were stretched out. Or, if it moves orthogonally to the blades, the top of the object is in one place when it exposes its part of the sensor, while the bottom ends up at a different horizontal position, instead of straight below, when it exposes its part.

For example, see this Wikipedia image.
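As a rough sketch of the second case (a subject moving across the travelling slit), with the curtain travel time taken as roughly the sync speed and an assumed speed of the subject's image on the sensor:

    # Above x-sync the frame is exposed through a travelling slit, so different
    # rows of the sensor see the subject at slightly different times.
    CURTAIN_TRAVEL_S = 1 / 200        # assumed: the slit needs about the sync time to cross the frame
    EXPOSURE_S = 1 / 4000             # the set shutter speed (the "width" of the slit in time)
    SUBJECT_SPEED_MM_S = 400          # assumed speed of the subject's image across the sensor

    blur_mm = SUBJECT_SPEED_MM_S * EXPOSURE_S         # motion blur within any single row
    skew_mm = SUBJECT_SPEED_MM_S * CURTAIN_TRAVEL_S   # horizontal shift between top and bottom rows
    print(f"blur within one row: {blur_mm:.2f} mm, skew across the frame: {skew_mm:.1f} mm")
    # -> each row shows only 0.10 mm of blur, but the top and bottom of the
    #    subject end up 2.0 mm apart on the sensor: the "leaning" distortion
    #    described above.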

The main value of shutter speeds above x-sync is to get correct exposure with the given light, ISO value and aperture. By having, for example, exposure times between 1/250 s and 1/8000 s available, you gain another 5 stops of exposure range before you have to change one of the other parameters.
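As a quick check of that range: each halving of the exposure time is one stop, so the number of stops between two shutter speeds is the base-2 logarithm of their ratio. A minimal Python sketch:

    import math

    def stops_between(slow_s, fast_s):
        """Number of stops gained by going from slow_s to fast_s (each halving = 1 stop)."""
        return math.log2(slow_s / fast_s)

    print(stops_between(1 / 250, 1 / 8000))   # 5.0 -> the 5 extra stops above a 1/250 s x-sync
    print(stops_between(1 / 200, 1 / 4000))   # ~4.3 stops for a camera with 1/200 s sync and 1/4000 s top speed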

#29: Comment posted by carl on July 26th, 2013 - 09:56:11 PM:
Large-format sensors seem to have more depth of field because depth of field is judged off the print, not the sensor. Bigger sensors don't have to be magnified as much, resulting in smaller circles of confusion on the print. Very small-sensor cameras operate at the diffraction limit, so everything is in focus; it just isn't sharp.
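A minimal Python sketch of the enlargement part of this argument; the print size, the sensor diagonals and the 0.03 mm blur spot are assumed example values:

    # Sharpness is judged on the print, and a bigger sensor needs less enlargement,
    # so the same blur spot on the sensor ends up smaller on the print.
    import math

    PRINT_DIAG_MM = math.hypot(203, 305)   # an 8x12 inch print
    BLUR_ON_SENSOR_MM = 0.03               # the same blur circle on every sensor

    sensors = [
        ("1/2.3-inch compact", 7.7),
        ("APS-C", 28.8),
        ("full frame", 43.3),
        ("645 medium format", 69.7),
    ]
    for name, diag_mm in sensors:
        enlargement = PRINT_DIAG_MM / diag_mm
        print(f"{name:<20} enlarged {enlargement:4.1f}x -> blur on print: "
              f"{BLUR_ON_SENSOR_MM * enlargement:.2f} mm")
    # The same on-sensor blur is about 1.4 mm on the print from the small sensor
    # but only about 0.16 mm from the medium-format one, which is why the
    # acceptable circle of confusion is usually scaled with sensor size.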
#30: Comment posted by Wilba on February 23rd, 2014 - 01:26:24 PM:
Thanks for a great article. I have linked to it in a mythbusting collection of my own, which you might find interesting (http://www.dpreview.com/articles/2978485979/busted-digital-photography-myths).
#31: Comment posted by Tim L on May 13th, 2014 - 11:30:15 PM:
Great article. I have a question about panos. Based on #5 and #8, I would conclude that if I created a panorama using a tripod with a properly positioned nodal plate, the image would overlay more or less perfectly with a single image shot on that same tripod using a lens with an FOV equivalent to the pano.

If true, this suggests that the advice I occasionally read about shooting panos with no wider than an "x" mm lens is incorrect. The resolution of the final image will vary, but not its appearance.
Michael Hohner answers:
In theory, you're right. In practice, however, you will have problems finding a wide-angle lens that covers the same angle of view as your typical stitched panorama, and your stitching software and wide-angle lens will probably not produce the same distortions.
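To put rough numbers on that practical problem, here is a minimal Python sketch estimating how wide a single rectilinear lens would have to be to cover the same horizontal angle as a stitched panorama; the 50 mm pano lens, the frame counts and the 30% overlap are assumed examples:

    import math

    SENSOR_WIDTH_MM = 36      # full frame, landscape orientation
    PANO_FOCAL_MM = 50        # assumed lens used for the individual pano frames
    OVERLAP = 0.30            # assumed overlap between neighbouring frames

    per_frame_deg = math.degrees(2 * math.atan(SENSOR_WIDTH_MM / (2 * PANO_FOCAL_MM)))

    for frames in (3, 4, 6):
        total_deg = per_frame_deg * (1 + (frames - 1) * (1 - OVERLAP))
        if total_deg >= 150:
            print(f"{frames} frames: ~{total_deg:.0f} degrees -- beyond any single rectilinear lens")
            continue
        equiv_focal = SENSOR_WIDTH_MM / (2 * math.tan(math.radians(total_deg / 2)))
        print(f"{frames} frames: ~{total_deg:.0f} degrees, single-lens equivalent ~{equiv_focal:.0f} mm")
    # 3 frames -> ~95 degrees (a ~16 mm ultra-wide), 4 frames -> ~123 degrees
    # (~10 mm, about as wide as rectilinear lenses get), 6 frames -> ~178 degrees,
    # which no single lens covers -- the practical limit mentioned above.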
Add your comment to this page

If you have questions, please read the F.A.Q. list first. Many common questions are already answered there. Also please read earlier comments, as they may answer your questions as well.

Do not send messages with unsolicited commercial offerings (SPAM). They will be deleted unanswered. Don't post HTML or similar markup; it will be rejected.

Comments and answers will be published here. For messages that are not intended to be published, please use the contact form instead.
