Shallow Depth Of Field

Q: How do you get a shallow depth of field?

Use a lens with a very wide aperture. And remember that, for the same framing and aperture, a bigger sensor format will give a shallower depth of field than a smaller format.

Q: What is the magic and why would I want shallow depth of field?

Shallow depth of field makes the foreground subject stand out from the background. It works best when the foreground is far from the background rather than close to it, and when the subject is near the camera. If the subject is far from the background but both the subject and the background are far from the camera, they will merge together and the foreground will not stand out. So shallow depth of field works best when the subject is near the camera.

Q: Help me choose?

Well, f1.8 is considered a standard wide aperture. Anything wider than that is where the magic starts, so f1.4 is going to give good separation between the foreground and the background in an image. A wide aperture means the hole in the lens is big compared with narrower f stops, and the bigger the hole, the shallower the depth of field that is possible. That is why bigger formats such as full frame or medium format give shallower depth of field than smaller formats such as APS-C or Micro Four Thirds: for the same field of view they use longer focal lengths, so at the same f stop the hole is physically bigger.
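As a rough sketch of that last point in a few lines of Python (the focal lengths and the 1.5x crop factor are my assumptions for the sake of the example), the physical size of the hole is the focal length divided by the f number:

# A minimal sketch: the entrance pupil (the 'hole') is focal length / f-number.

def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """Diameter of the hole in the lens, in millimetres."""
    return focal_length_mm / f_number

# Same field of view, same f stop: 50mm on full frame vs 35mm on APS-C (1.5x crop assumed).
print(aperture_diameter_mm(50, 1.4))  # ~35.7mm on full frame
print(aperture_diameter_mm(35, 1.4))  # ~25.0mm on APS-C

The bigger physical aperture on the larger format is what buys the shallower depth of field.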

Q: Any downsides?

Yes, you might miss focus completely when the depth of field is very shallow. It is easy to find you focused on a nose or a cheek and not on the eye, for example. Also, wider aperture lenses are more expensive and heavier. They are more expensive to make because the front lens element has to be big enough to open up wide.

And glass is heavy, so wide lenses weigh more.

It’s not necessary to chase super wide apertures on long focal length lenses because they are going to be used at greater distances, and longer focal lengths compress the distance between foreground and background. So there is no advantage in chasing something that isn’t going to show.

A long lens with a widest aperture of f4 is as good as anyone needs. Actually, if you are shooting in poor light – maybe wildlife in the early morning or in a wood – then a bigger aperture of say f2.8 is better. But that is for light gathering rather than for separating foreground from background.

Q: Some numbers?

OK. Let’s clear one thing up and get it out of the way. A 50mm lens on a full frame camera and a 35mm lens on APS-C have roughly the same field of view (the APS-C crop factor is about 1.5, and 35mm × 1.5 is about 52.5mm). So to compare like with like means comparing these two focal lengths.

Take a camera with an APS-C sensor and a 35mm f1.4 lens, and a full frame camera with a 50mm f1.4 lens:

Subject 1.5m away: APS-C depth of field 10 cm, full frame 7 cm
Subject 2.5m away: APS-C depth of field 29 cm, full frame 21 cm
Subject 3.5m away: APS-C depth of field 56 cm, full frame 41 cm

From these numbers we see that depth of field increases the further the subject is from the camera, and also that the difference between full frame and APS-C narrows the further away the subject is.
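For anyone who wants to check figures like these, here is a minimal sketch in Python of the standard depth of field approximation. The circle of confusion values (roughly 0.030mm for full frame and 0.020mm for APS-C) are assumptions, the common defaults most online calculators use, and they reproduce the numbers above to within a centimetre or so.

# A minimal depth of field sketch using the usual thin-lens approximation.
# Circle of confusion values are common defaults and vary between calculators.

def depth_of_field_mm(focal_mm: float, f_number: float, subject_mm: float, coc_mm: float) -> float:
    """Approximate total depth of field in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return far - near

for distance_m in (1.5, 2.5, 3.5):
    aps_c = depth_of_field_mm(35, 1.4, distance_m * 1000, 0.020)
    full_frame = depth_of_field_mm(50, 1.4, distance_m * 1000, 0.030)
    print(f"{distance_m}m  APS-C: {aps_c / 10:.0f} cm  full frame: {full_frame / 10:.0f} cm")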

Depth Of Field

Street musicians in Cambridge

The street musicians were at a Winter Fair in Cambridge. Two of the men are holding Dino Baffetti melodeons, or squeezeboxes.

The two versions of the photograph illustrate how post processing can affect the apparent depth of field.

Depth of field is the distance between the nearest and the farthest objects that are in focus in a photograph.

And depth of field depends on the circle of confusion, which is a way of describing what the eye can and cannot see.

An image might not be perfectly sharp when viewed close up with a magnifying glass, but at a normal viewing distance the eye can only make out blur when it is big enough to be apparent at that distance.

One thing that affects how sharp something looks is how much experience a person has at looking at photographs. Once the eye becomes more practised, small differences in sharpness become more obvious.

That said, if one man has perfect vision while another is older, with cataracts forming and wears glasses, then what is sharp to one will be less sharp to the other.

In other words, the circle of confusion is not an exact science, and one man’s blur will be another man’s ‘sharp enough’.

Whatever the circle of confusion is agreed at, it defines the limits of the depth of field.

And depth of field for a given focal length and subject-to-camera distance varies with the camera format.

There are many formats and I am going to look at just two – the Nikon full frame (24×36mm) and APS-C (23.5×15.6mm) sensors. Each linear dimension of the APS-C sensor is about two thirds the length of the full frame. And it is the linear dimension, not the overall area, that determines what seems sharp.
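As an aside, the circle of confusion values that depth of field calculators use typically scale with that linear dimension. A common convention (an assumption here, not a fixed standard) is the sensor diagonal divided by about 1500:

# Rule-of-thumb circle of confusion: sensor diagonal / 1500.
# The divisor varies between calculators (1440, 1500 and 1730 are all used).
from math import hypot

def circle_of_confusion_mm(width_mm: float, height_mm: float, divisor: float = 1500) -> float:
    return hypot(width_mm, height_mm) / divisor

print(circle_of_confusion_mm(36, 24))      # full frame: ~0.029 mm
print(circle_of_confusion_mm(23.5, 15.6))  # APS-C: ~0.019 mm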

When comparing full-frame and APS-C sensors, for a given focal length and distance from camera to subject, the smaller sensor captures a narrower field of view, and that affects depth of field.

For example, a 50mm lens on an APS-C sensor gives a similar field of view to a 75mm lens on a full frame camera. But at the same aperture the image from the APS-C sensor will have a greater depth of field, because the shorter focal length increases depth of field by more than the smaller circle of confusion reduces it.

Which is best depends on what effect you want. If you are shooting portraits then a larger sensor will enable a shallower depth of field, and for landscapes a smaller sensor will give greater depth of field.

So if you want the foreground to ‘pop’ as it is called, to appear separated from the background, then use the largest aperture and the biggest sensor.

And that is where post processing in Lightroom comes in, because it can simulate that effect after the photo is taken. So careful use of a background blur preset in Lightroom can give APS-C sensors the best of both worlds.

Shift Lenses

Shift lenses are often combined with a tilt mechanism, in which case they are called tilt shift lenses. Tilt mechanisms let you change the plane of focus. Shift lenses keep lines straight – horizontal lines horizontal and vertical lines vertical.

But, and here’s the thing, they allow you to look beyond the normal image circle of the lens.

Imagine you’re looking at a tall building. You tilt your head up and the building seems to be narrower at the top than down at the ground. You know that in reality it is not leaning in and getting narrower, so it doesn’t bother you. But that’s what it looks like.

It’s the same when taking a photo of a tall building with a normal lens. If you tilt the camera up to fit the building within the frame, the sides of the building will look like they’re leaning inward.

And if you took a photo like that, it could be perfectly acceptable. But if you were photographing architecture and wanted to show what the building actually looks like, with vertical verticals, then you can’t unless you use a shift lens.

A shift lens is made in two parts that can move relative to each other. The front part slides upwards, and that way it covers more of the scene. Think of it like two overlapping circles that cover more of the scene than just one circle would.

The only downside to shift lenses is that they are expensive to make and to buy. As and when I get a shift lens for photographing architecture in London, I will link to the photos from here.

AI-Powered Focus and Subject Tracking

In the 23rd October edition of Amateur Photographer, photographer John Bridges asks whether the death of the DSLR was the biggest market misstep of the 21st century. He gives as his reasons the ergonomics of DSLRs – the bigger grip and the optical viewfinder.

He notes that with mirrorless cameras you have to think about how long the battery is going to last. And that although they can shoot at astronomically high frames per second, he asks who actually needs that.

Finally, he notes that used DSLRs are cheap as chips now compared to the price of mirrorless cameras.

Bigger grips aren’t really an issue – it depends on the camera. The grip on a Canon R6 is every bit as pleasant to hold as that on a Nikon D750. Batteries are something of an issue, mainly because DSLRs just go on and on. But it is not a big deal to carry an extra battery and keep the one in the camera topped up.

That leaves the optical versus electronic viewfinder question.

In a recent article I wrote about using a DSLR in 2024. I said how I hankered after using one because I missed looking through an OVF (optical viewfinder). An OVF is as near to directly looking at an object as you can get. It means seeing the object you are photographing. That is, light enters the camera through the lens. It is reflected by mirrors and a prism up and into the viewfinder.

The thing is that when the photographer wants to take a photo, the mirror has to get out of the way. In its rest position the mirror is in the path of the light that needs to reach the sensor. So the mirror springs up and then down again. That’s the ‘reflex’ in single lens reflex.

Mirrorless cameras don’t have a prism and mirror arrangement. In the viewfinder the photographer sees a digital representation of the scene.

So now I have scratched the itch to use an OVF, and I am selling the camera on. That leaves a question. How have EVFs improved since they first appeared, and what does the future promise?

The improvements since they first appeared are easy to describe: lower latency and more dots. Lower latency means that as I move the camera around, the electronic viewfinder keeps up with me smoothly, as though I were looking at the scene through an optical viewfinder. More dots means the scene is clearer, brighter, and more detailed.

I can’t help but think that past a certain point the returns on improvements in ‘more of the same’ will be less easy to get excited about. But how about AI-powered focus and subject tracking?

Exposing Black and White Photographs ‘Correctly’

I put ‘correctly’ in quotes because everyone has their own taste. So what does ‘correctly’ mean here?

What it means here is the way to get the maximum information out of the scene. Put simply, if the shot is underexposed then some of the dark areas may not show detail. Or if overexposed then the highlights might be blown and not show detail.

The Zone System is a method of getting an optimal exposure for black and white photographs. It was created by Ansel Adams, a landscape photographer, and Fred Archer, a portrait photographer.

The method divides the tonal range of a scene into eleven zones from pure black to pure white. Notice that the range starts with Zone zero.

Zone 0: Pure black (no detail)
Zone 1: Near black (minimal detail)
Zone 2: Very dark shadow
Zone 3: Dark shadow with visible texture
Zone 4: Open shadow or dark midtone with good detail
Zone 5: Middle gray (18% gray), average light meter reading
Zone 6: Light gray (skin tone, sunlit grass)
Zone 7: Bright highlights (texture still visible)
Zone 8: Very bright highlights (minimal detail)
Zone 9: Near white (no detail)
Zone 10: Pure white (no detail)

To use it you measure the light in your chosen part of the scene with your light meter. That may be a hand-held meter or the meter built into your camera. Whichever it is, all light meters are built to ‘assume’ that every scene averages out to Zone 5, middle grey. That is of course not true of every scene – a black cat in the snow, for example.

And before we go any further you should know that while it is true the meters in cameras are based on middle grey, modern cameras are also computers. They look at the scene and measure it against a bank of similar scenes in a built-in database. If a camera stores 90,000 scenes then the chances are it has a black cat in the snow in there. So even though the light meter is based on 18% grey Zone 5 and would otherwise be wrong, the camera will correct itself if it recognises the scene.

And even if a camera does not have a built-in database of scenes, it will have metering that can cover most of the scene and then average out the brightness.

And going beyond evaluative metering, cameras now have AI or machine learning, so they learn more scenes the more photographs the photographer takes.

In 2024 with built-in scene recognition and intelligent exposure adjustments we are a long way from Kansas.

So for the rest of this article I am talking about the Zone System used with a hand-held light meter.

The Method

Put your camera on manual exposure. Point the light meter at the part of the scene you want to measure. The part you want to measure is the part in the scene that is important to you. Everything else in the scene will be measured by reference to that.

The meter will always give you an exposure (shutter speed and aperture) for Zone 5. Decide what Zone your chosen area should actually be in. Yes, that means you have to put your brain’s evaluative input into the calculation. Adjust the exposure. If you think the part of the scene you measured is Zone 3, then reduce exposure by two stops. In other words you are saying the following.

My starting point was to meter the brightness of the part of the scene I think is important. Now I want to expose darker than the meter is telling me because in my opinion that part of the scene is not mid grey; it is two stops darker than mid grey. So I reduce exposure by two stops.

That’s it. That’s the Zone system.
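To make the arithmetic concrete, here is a minimal sketch in Python, assuming you keep the aperture fixed and adjust only the shutter speed (you could equally shift the aperture or ISO by the same number of stops):

# A minimal sketch of the Zone System arithmetic: the meter reading places
# the measured area in Zone 5, and each zone is one stop of exposure.

def adjusted_shutter_seconds(metered_seconds: float, target_zone: int) -> float:
    """Shutter time that places the metered area in the chosen zone,
    with the aperture kept fixed."""
    stops = target_zone - 5  # negative means darker than middle grey
    return metered_seconds * 2 ** stops

# Example: the meter says 1/60s and you decide the metered area is Zone 3.
print(adjusted_shutter_seconds(1 / 60, 3))  # ~1/240s, two stops less exposure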

Meanwhile, with digital photography it is just a ‘flick of a switch’, as it were, to make a black and white version of a full colour image.

By the way this is a crop of about one seventh of the full frame of a photo I took of this couple, from across the street with a 50mm lens on a Nikon D750.