Shift Lenses

Shift lenses are often combined with a tilt mechanism, in which case they are called tilt-shift lenses. Tilt mechanisms enable you to change the plane of focus. Shift lenses keep lines straight: horizontal lines stay horizontal and vertical lines stay vertical.

But, and here’s the thing, a shift lens projects a larger-than-normal image circle, and shifting lets you use the parts of it that would normally fall outside the frame.

Imagine you’re looking at a tall building. You tilt your head up and the building seems to be narrower at the top than down at the ground. You know it is not, and it doesn’t bother you, because in reality the building is not leaning in and getting narrower. But that’s what it looks like.

It’s the same when taking a photo of a tall building with a normal lens. If you tilt the camera up to fit the building within the frame, the sides of the building will look like they’re leaning inward.

And if you took a photo like that, it could be perfectly acceptable. But if you are photographing architecture and want to show what the building actually looks like, with vertical verticals, then you can’t do that without a shift lens.

A shift lens is made in two parts that can move relative to each other. The front part slides upwards, and that way it covers more of the scene. Think of it like two overlapping circles that cover more of the scene than just one circle would.

The only downside to shift lenses is that they are expensive to make and to buy. As and when I get a shift lens for photographing architecture in London, I will link to the photos from here.

AI-Powered Focus and Subject Tracking

In the 23rd October edition of Amateur Photographer, photographer John Bridges asks whether the death of the DSLR was the biggest market misstep of the 21st century. Among his reasons he gives the ergonomics of DSLRs, with their bigger grip and optical viewfinder.

He notes that with mirrorless cameras you have to think about how long the battery is going to last. And although they can shoot at astronomically high frame rates, he asks who actually needs that.

Finally, he notes that used DSLRs are cheap as chips now compared to the price of mirrorless cameras.

Bigger grips aren’t really an issue; it depends on the camera. The grip on a Canon R6 is every bit as pleasant to hold as that on a Nikon D750. Batteries are something of an issue, mainly because DSLRs just go on and on. But it is not a big deal to carry an extra battery and keep the one in the camera topped up.

That leaves the optical versus electronic viewfinder question.

In a recent article I wrote about using a DSLR in 2024. I said that I hankered after using one because I missed looking through an OVF (optical viewfinder). An OVF is as near to looking directly at an object as you can get. It means seeing the actual object you are photographing: light enters the camera through the lens and is reflected by a mirror and prism up into the viewfinder.

The thing is that when the photographer wants to take a photo, the mirror has to get out of the way. In its rest position the mirror is in the path of the light that needs to reach the sensor, so it springs up and then back down again. That mirror is the ‘reflex’ in single lens reflex.

Mirrorless cameras don’t have a prism and mirror arrangement. In the viewfinder the photographer sees a digital representation of the scene.

So now I have scratched the itch to use an OVF, and I am selling on the camera. That leaves a question. How have EVFs improved since they first appeared, and what does the future promise?

The improvements since they first appeared are easy to describe: lower latency and more dots. Lower latency means that as I move the camera around and look at the scene, the electronic viewfinder keeps up with me smoothly, as though I were looking at the scene through an optical viewfinder. More dots means the scene is clearer, brighter, and more detailed.

I can’t help but think that past a certain point the returns on improvements in ‘more of the same’ will be less easy to get excited about. But how about AI-powered focus and subject tracking?

Exposing Black and White Photographs ‘Correctly’

I put ‘correctly’ in quotes because everyone has their own taste. So what does ‘correctly’ mean here?

What it means here is exposing so as to get the maximum information out of the scene. Put simply, if the shot is underexposed then some of the dark areas may not show detail, and if it is overexposed then the highlights might be blown and show no detail.
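As an aside, here is a minimal sketch in Python (using NumPy and Pillow) of what ‘lost detail’ looks like in a finished file: pixels clipped to pure black or pure white carry no information. The file name and the idea of simply counting clipped pixels are my own assumptions, purely for illustration.

```python
# Minimal sketch (illustrative only): count clipped pixels in a photo.
# Pixels at 0 are blocked shadows; pixels at 255 are blown highlights.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg").convert("L"))  # 8-bit greyscale

total = img.size
blocked_shadows = np.count_nonzero(img == 0)     # pure black, no detail
blown_highlights = np.count_nonzero(img == 255)  # pure white, no detail

print(f"Blocked shadows:  {blocked_shadows / total:.2%}")
print(f"Blown highlights: {blown_highlights / total:.2%}")
```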

The Zone System is a method of getting an optimal exposure for black and white photographs. It was created by Ansel Adams, a landscape photographer, and Fred Archer, a portrait photographer.

The method divides the tonal range of a scene into eleven zones from pure black to pure white. Notice that the range starts at Zone 0.

Zone 0: Pure black (no detail)
Zone 1: Near black (minimal detail)
Zone 2: Very dark shadow
Zone 3: Dark shadow with visible texture
Zone 4: Dark midtone, darker than middle grey, with good detail
Zone 5: Middle gray (18% gray), average light meter reading
Zone 6: Light gray (skin tone, sunlit grass)
Zone 7: Bright highlights (texture still visible)
Zone 8: Very bright highlights (minimal detail)
Zone 9: Near white (no detail)
Zone 10: Pure white (no detail)

To use it, you measure the light in your chosen part of the scene with your light meter. That may be a hand-held meter or the meter built into your camera. Whichever it is, all light meters are built to ‘assume’ that every scene averages out to Zone 5, middle grey. That is of course not true. A black cat in the snow, for example, is nothing like middle grey: the snow is far brighter and the cat far darker.

And before we go any further you should know that while it is true the meters in cameras are based on middle grey, modern cameras are also computers. They look at the scene and compare it against a bank of similar scenes in a built-in database. If a camera stores 90,000 scenes, then the chances are it has a black cat in the snow in there. So even though the light meter is based on 18% grey (Zone 5) and would otherwise get it wrong, the camera will correct the exposure if it recognises the scene.

And even if a camera does not have a built-in database of scenes, it will have metering that can cover most of the scene and then average out the brightness.

Beyond evaluative metering, cameras now have AI or machine learning, so they learn more scenes the more photographs the photographer takes.

In 2024 with built-in scene recognition and intelligent exposure adjustments we are a long way from Kansas.

So for the rest of this article I am talking about the Zone System used with a hand-held light meter.

The Method

Put your camera on manual exposure. Point the light meter at the part of the scene you want to measure. The part you want to measure is the part in the scene that is important to you. Everything else in the scene will be measured by reference to that.

The meter will always give you an exposure (shutter speed and aperture) for Zone 5. Decide what zone your chosen area should actually be in. Yes, that means you have to put your brain’s evaluative input into the calculation. Then adjust the exposure, remembering that each zone is one stop. If you think the part of the scene you measured is Zone 3, then reduce exposure by two stops. In other words, you are saying the following.

My starting point was to meter the brightness of the part of the scene I think is important. Now I want to expose darker than the meter is telling me, because in my opinion that part of the scene is not mid grey; it is two stops darker than mid grey. So I reduce exposure by two stops.

That’s it. That’s the Zone system.
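To make the arithmetic concrete, here is a minimal sketch in Python. It is not part of the Zone System itself; the function name, the fixed aperture and ISO, and the 1/125 s example reading are my own assumptions for illustration.

```python
# Minimal sketch of Zone System placement (illustrative only).
# Assumes aperture and ISO stay fixed, so exposure changes come from
# shutter speed alone. Each zone is one stop apart from the next.

def place_in_zone(metered_shutter_s: float, target_zone: int) -> float:
    """Return the shutter speed that places the metered area in target_zone.

    The meter's reading corresponds to Zone 5 (middle grey). Moving down a
    zone halves the exposure, so Zone 3 means a quarter of the metered light.
    """
    stops = target_zone - 5               # negative = darker than Zone 5
    return metered_shutter_s * (2 ** stops)

# Example: the meter suggests 1/125 s for the shadow area you care about,
# but you judge that area to belong in Zone 3 (two stops darker).
adjusted = place_in_zone(1 / 125, target_zone=3)
print(f"Shoot at about 1/{round(1 / adjusted)} s")   # about 1/500 s
```

Going the other way works too: placing a bright area such as snow in Zone 7 would mean adding two stops of exposure rather than subtracting them.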

Meanwhile, with digital photography it is just a ‘flick of a switch’, as it were, to make a black and white version of a full colour image. Click on the image to see a large version.

By the way, this is a crop of about one seventh of the full frame of a photo I took of this couple from across the street with a 50mm lens on a Nikon D750.