Are More Pixels Better

First, let’s look at pixel counts on different size sensors.

Suppose we had a 40MP APS-C sensor – would that produce images as good as a 20MP full frame sensor?

Well, noise for a given ISO is lower on a bigger sensor because bigger sensors gather more light. So it is not automatic that a higher pixel count on an APS-C sensor would outperform a more modest pixel count on a full frame sensor.

Small sensors also mean more depth of field for a given framing, so to get the same subject separation you would need a wider aperture (a faster lens).

With those two points out of the way, the question is whether more pixels on a sensor are better than fewer pixels on the same size sensor. And by same size I mean the same physical dimensions, in this case APS-C.

On the principle that a picture is worth a thousand words, here is the full frame from the Fuji X-T50 with its 40MP APS-C sensor, and a crop of about 9% of the full frame.

The full frame is 7728 x 5152 px, which I downscaled to 1000 x 1500 px to display here. The crop is 1622 x 2433 px, also downscaled to 1000 x 1500 px.
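If you want to check that crop percentage yourself, here is a quick Python sketch using the pixel dimensions quoted above (my own arithmetic, not anything from the camera files):

```python
# Rough check of the crop size quoted above.
full_px = 7728 * 5152   # X-T50 full frame, about 39.8MP
crop_px = 1622 * 2433   # the crop shown, about 3.9MP

print(f"{crop_px / full_px:.1%}")  # -> 9.9%, close to the 'about 9%' quoted
```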

One thing I know is that the crop is much more detailed than I could get from a lower pixel count camera with the same size sensor.

And to show you what the deterioration in quality from cropping too far actually means, here is a smaller crop upsized to 1000 x 1500 px.

Perspective Compression In Photography

Perspective compression or lens compression is a visual effect where distant and close objects appear closer together and more flattened in depth. So what compresses them?

Photograph a person’s face close up and the face will appear big compared to distant objects behind them. Those distant objects will appear small and far away. They may be so small that they are hidden behind the face.

Not only that but the outer parts of the face further from the lens than the nose will appear further away than is ‘natural’. The overall effect is to make the face look narrow and the nose big.

In other words, there is very little or no perspective compression or lens compression because the lens is very near the subject.

Lack of compression is not flattering in people, but we don’t mind it in animals because we don’t have a strict idea of what the relative size of nose and ears and the curve of the face should be.

Front view of the face of a sheep, close up

Lens Focal Length

So how, if at all, is perspective compression or lens compression affected by the focal length of a lens?

Long focal lengths create the appearance of compression. That is, subjects in the background appear larger and closer to the foreground, and depth seems flattened. This is not because of the focal length in itself.

It’s because, to frame the subject, a long focal length lens is going to be further away from the subject than a short focal length lens would be.

Short focal lengths create exaggerated perspective: background elements appear smaller and farther away, and depth is stretched. This is because the camera has to be nearer to the subject to frame it than would be the case with a long focal length lens.

What we get from this is that compression is a function of the distance between the camera and the subject, not directly because of the focal length. But focal length influences how far back you need to stand to frame the subject, which in turn affects compression.

Sensor Size

Sensor size doesn’t directly cause compression, but it affects the field of view for a given focal length. A crop sensor has a narrower field of view than a full-frame sensor used with a lens of the same focal length. Therefore we can use a shorter focal length with a smaller sensor to get the same field of view as a full-frame sensor.

Using Perspective Compression

Perspective compression in use means taking advantage of distance. To make a face look more attractive, shoot from a longer distance, which means using a longer focal length lens.

Typically, a full frame lens with a focal length in the region of 135mm to 200mm will flatter a face.

With an APS-C sensor you would get the equivalent compression by using a lens of between 90mm and 135mm.
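The arithmetic is just the usual crop factor division. Here is a minimal sketch, assuming the common APS-C crop factor of about 1.5:

```python
# Convert a full frame focal length to its APS-C framing equivalent.
def apsc_equivalent(full_frame_focal_mm, crop_factor=1.5):
    """Focal length on APS-C that gives the same framing (and so the
    same shooting distance, and the same compression)."""
    return full_frame_focal_mm / crop_factor

for ff in (135, 200):
    print(ff, "->", round(apsc_equivalent(ff)))  # 135 -> 90, 200 -> 133
```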

Cropping

What about cropping the image after you have taken it? For example, an image from a camera with a full-frame equivalent 35mm lens can be cropped to the field of view of a longer focal length lens.

So what are the limits? The answer is that the only limit is how many pixels are left. Too few and the image will be poor quality and useless as a photo, even though it has been cropped to the equivalent of a longer focal length.

Starting with a 20MP sensor doesn’t leave many pixels after a heavy crop. But a high megapixel sensor allows more cropping.

For example, the Fujifilm X100VI has a 35mm full-frame equivalent lens and a 40MP sensor, so it will take heavy cropping. Still, a camera that actually has a longer focal length doesn’t need cropping and doesn’t sacrifice any pixels, which is better.

A Practical Example

At some focal length, the 17MP micro four thirds sensor in the Leica D-Lux 8, which has a zoom range of 24-75mm (full-frame equivalent), is going to overtake the pixel count of a cropped image from the 40MP sensor and 35mm equivalent lens of the Fujifilm X100VI. What is that point?

Short answer: it is at any focal length beyond 53.6mm.

Longer answer: The megapixel ratio of the two sensors is about 2.34:1, and the square root of that is about 1.53. Cropping the X100VI to simulate a focal length longer than 35mm x 1.53, i.e. about 53.6mm (full-frame equivalent), leaves fewer than 17 megapixels. That of course means that the Leica D-Lux 8 will produce a higher-resolution image beyond this point.
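Here is that calculation as a short Python sketch, using the pixel counts quoted in this article:

```python
import math

# Break-even focal length by megapixel (area) count.
x100vi_mp = 7728 * 5152 / 1e6  # about 39.8MP
dlux8_mp = 17.0                # as quoted for the Leica D-Lux 8

linear_crop = math.sqrt(x100vi_mp / dlux8_mp)  # about 1.53
print(round(35 * linear_crop, 1))              # -> 53.6 (mm, full-frame equivalent)
```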

Acuity Is A Function Of Linear Distance

Calculating the megapixel relationship when cropping means comparing the areas of the sensors. But perceived sharpness, or acuity, is a function of linear distance, not of area.

Megapixels measure the total number of pixels, which is a measure of area resolution. When cropping, this drops off by the square of the crop factor.

Acuity relates to perceived sharpness or clarity. It is a function of linear resolution, which is the number of pixels per line or per millimetre across width or height that can be perceived.

After all, if you cannot see something as being sharper then for all intents and purposes it is not.

When comparing file sizes or printing, you care about megapixels because they define the total detail across the image and set limits on enlargement, cropping, and so on.

When comparing sharpness at a specific display size or print size you care about linear resolution or acuity because your eye can only resolve so many lines per inch at a given distance.

While megapixels drop off quadratically, acuity or detail across an image dimension drops linearly with the crop. So we need to do a different calculation.

So looking at the linear pixel count across the sensors, the X100VI is 7728 pixels wide and the Leica is 4496 pixels wide at its maximum 4:3 aspect.

So the linear relationship is 1.72:1, and that equates to a full-frame equivalent focal length of about 60mm before a photo from the Leica will look sharper, or out-resolve the Fuji.
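The same sketch again, but by linear pixel width rather than area:

```python
# Break-even focal length by linear resolution (pixels across the width).
x100vi_width = 7728  # X100VI
dlux8_width = 4496   # D-Lux 8 at its 4:3 aspect, as quoted above

ratio = x100vi_width / dlux8_width
print(round(ratio, 2), round(35 * ratio, 1))  # -> 1.72 60.2
```

So by the acuity measure, the Fuji holds its own a little further into the Leica’s zoom range than the megapixel calculation suggests.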

How Much More Acuity In A 40MP Fuji X Sensor

I am looking at the X-Trans sensors in the X series cameras. I am not looking at the Fuji medium format sensors.

Increased acuity reveals more texture and finer detail. It is most noticeable when making large prints, or with heavy cropping.

It is least put to the test in portraits, because the elements are so large in the frame that the eye makes up detail easily. It is most put to the test in subjects with fine detail such as in landscapes.

So what is the difference in perceived sharpness and fine detail rendering between a 40MP sensor (like in the Fuji X-H2 or X-T5 or X-T50) and a 24MP or 26MP sensor (like in the X-T3, X-T4, X-S10, or X-S20)?

Sensor size definitely affects image quality and allows for more cropping. But what about the number of pixels? Is a higher MP (megapixel) count better?

A 24MP X-Trans III sensor is 6000 x 4000 pixels. A 26MP X-Trans IV sensor is 6240 x 4160 pixels. A 40MP X-Trans V sensor is 7728 x 5152 pixels.

Compared to the two smaller megapixel sensors, the X-Trans V sensor has roughly 53% to 66% more pixels. But it is not pixel count, an area measure, that determines extra sharpness. It is linear resolution that determines acuity. In other words, how much longer the longest side is, all other things being equal. And the 40MP sensor translates to approximately 24% to 29% more linear resolution.
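Here is the arithmetic behind those percentages, as a quick sketch using the pixel widths listed above:

```python
# Area and linear gains of the 7728px-wide X-Trans V over the older sensors.
# All three sensors share the 3:2 aspect ratio, so width ratios are enough.
xtrans_v_width = 7728
for name, width in [("X-Trans III (24MP)", 6000), ("X-Trans IV (26MP)", 6240)]:
    linear = xtrans_v_width / width
    print(f"{name}: area +{linear**2 - 1:.0%}, linear +{linear - 1:.0%}")

# X-Trans III (24MP): area +66%, linear +29%
# X-Trans IV (26MP): area +53%, linear +24%
```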

In photography, acuity refers to the sharpness and clarity of an image, particularly in the details and fine edges. Acutance, a closely related term, describes the edge contrast and the perception of sharpness in an image.

In film photography, some chemical developers increase micro-contrast on edges and give the viewer a perception of increased sharpness. Sharpness tools in applications like Photoshop do a similar thing.

Anyone who has played with the sharpness sliders in Photoshop or Lightroom or any of the other tools knows that it is possible to introduce a white halo around edges. If that is done carefully it can increase the apparent contrast between dark and light edges.

Either way, acuity and acutance are related to the ability of the human eye to see and compare the sharpness of different images. After all, if the human eye can’t see the difference in acuity between a lower megapixel image and a higher megapixel image then there is no difference in practical terms.

So to repeat, increased acuity is most noticeable when making large prints, or with heavy cropping.

Oh yes, and then there’s the fact that lenses need to be able to resolve that detail. For lenses that can resolve a Fuji 40MP sensor, see this article: Fuji X-Mount Lens Release Dates: A Complete List.

DxO PureRAW vs. Photoshop: A Better Way to Process Fuji RAW?

Andy Hutchinson is a no-nonsense photography YouTuber from Australia. A few days ago he talked about some standalone tools that do a better job than tools built into post-processing programs like Photoshop and Lightroom.

One of the tools he described is DxO PureRAW, a demosaicing and noise reduction tool.

What is Demosaicing

Digital camera sensors don’t know what colour light is. To the sensor, it is simply more light or less light. So sensors have a colour filter array that sits over the sensor’s pixels. The array is a mosaic of red, green, and blue colour filters.

When you want to process a RAW image on your computer, the program has to ‘read’ the raw data in the picture, using a demosaicing engine.

Programs like Photoshop and Lightroom have demosaicing engines built into them.

Some other programs use the demosaicing engine built into the operating system of your computer.
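To make the idea concrete, here is a minimal sketch of the simplest kind of demosaicing: bilinear interpolation over a conventional RGGB Bayer mosaic. It is purely illustrative; real engines, and especially anything handling Fuji’s X-Trans pattern, are far more sophisticated.

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Interpolate a single-channel RGGB Bayer mosaic into a full RGB image.

    Each output channel is filled in by averaging the nearest sensor
    pixels that actually sit under a filter of that colour.
    """
    h, w = raw.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1  # red sites
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1  # blue sites
    g_mask = 1 - r_mask - b_mask                        # green sites

    # Kernels that average each pixel's known neighbours of one colour.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    r = convolve(raw * r_mask, k_rb)
    g = convolve(raw * g_mask, k_g)
    b = convolve(raw * b_mask, k_rb)
    return np.dstack([r, g, b])
```

The X-Trans array uses a 6x6 pattern instead of this 2x2 one, which is part of why generic demosaicing engines find Fuji RAF files harder work.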

As Andy Hutchinson describes it, DxO’s approach to decoding RAW digital sensor information is to train their machine learning model to recognise real-world noise patterns and to differentiate between genuine image features and unwanted artifacts.

At the same time, the software runs DxO’s denoising algorithm on the image data.

Because they do the denoising at the same time as demosaicing, you get a purer and cleaner image than you would if you ran the image through a denoising engine after demosaicing.

There is an added advantage to using DxO Pure RAW if you use Fujifilm cameras.

Fujifilm took their own route to the construction of the colour filter array over the sensor (the X-Trans array). The result is that programs like Photoshop and Lightroom have more difficulty demosaicing the RAW images than with cameras that use a Bayer colour filter array (almost all other camera brands).

The benefit in using PureRAW is that its end product is a DNG RAW file rather than a RAF file. So if you then want to use Photoshop or Lightroom on the DNG file, all the difficult work has already been done by PureRAW.

Plus, PureRAW includes a built-in lens softness compensation feature and also corrects for lens vignetting, chromatic aberration, and distortion.

Quite a mouthful. 

Does it work?

I downloaded a trial version of DxO PureRAW and processed a Fuji RAW file with it. I processed the same image with Photoshop and then compared the two images.

What you are looking at is a photo processed with DxO PureRAW, with part of the same image processed in Photoshop overlaid on it.

Compare the two – click twice on the photo below to blow it up to see the detail.

What is Focus By Wire

Let’s start with mechanical focus systems. They have a focus ring that is directly coupled to and moves the lens elements.

Focus by wire uses electronic signals to control focus. The photographer turns the focus ring, but the ring doesn’t move anything directly. Instead, turning the focus ring controls the motor(s) built into the lens. The motor(s) take their instruction from the movement of the focus ring and change the focus.
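Here is a toy sketch of the idea. Everything in it is invented for illustration (no real lens firmware exposes an interface like this): the ring is just an encoder that moves a target, and a speed-limited motor loop chases that target, which is where the felt lag comes from.

```python
class FocusByWireLens:
    """Toy model: the ring sets a target; a motor loop chases it."""

    def __init__(self, steps_per_degree=2, max_step_per_tick=5):
        self.target = 0        # where the ring says focus should be
        self.position = 0      # where the motor actually is
        self.steps_per_degree = steps_per_degree
        self.max_step = max_step_per_tick  # motor speed limit

    def on_ring_turn(self, degrees):
        # The ring never moves glass; it only updates the target.
        self.target += degrees * self.steps_per_degree

    def motor_tick(self):
        # Each tick, the motor moves a limited amount toward the target.
        error = self.target - self.position
        self.position += max(-self.max_step, min(self.max_step, error))

lens = FocusByWireLens()
lens.on_ring_turn(30)       # a fast 30-degree twist of the ring
for _ in range(3):
    lens.motor_tick()
print(lens.target, lens.position)  # 60 15 -> focus still trails the ring
```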

Focus by wire gets its name from the fly-by-wire systems used in aircraft. Except on small aircraft, the pilot doesn’t move the control surfaces on the wings directly, because that would be impossibly hard. Instead, the pilot presses a pedal or turns a dial, and electric motors move the control surfaces.

And because the aircraft is so big, any small errors are not relevant.

In cameras it is different because small movements can be seen and felt. This is true in still photography and in video.

And that has been the source of the criticism of focus by wire – that the systems are laggy and prone to overshoot.

The photographer turns the focus ring quickly, and the system plays catch-up.

Photographers report that they feel divorced from the focusing, which is the exact opposite of what one should feel when trying to take a photo that needs critical focusing. The pressure might be off in a studio or with landscapes, both of which are situations where the photographer has time to focus. But on the street or with any fast paced action, the photographer needs to feel that the response is immediate and consistent.

The situation is improving and some focus by wire systems have smooth focus changes.

It’s helpful to know how the focusing system on a lens feels before you lay down money for it.