How Much More Acuity In A 40MP Fuji X Sensor

I am looking at the X-Trans sensors in Fuji's X series cameras. I am not looking at the Fuji medium format sensors.

Increased acuity reveals more texture and finer detail. It is most noticeable when making large prints, or with heavy cropping.

It is least put to the test in portraits, because the elements are so large in the frame that the eye makes up detail easily. It is most put to the test in subjects with fine detail such as in landscapes.

So what is the difference in perceived sharpness and fine detail rendering between a 40MP sensor (like in the Fuji X-H2 or X-T5 or X-T50) and a 24MP or 26MP sensor (like in the X-T3, X-T4, X-S10, or X-S20)?

Sensor size definitely affects image quality and allows for more cropping. But what about the number of pixels? Is a higher MP (megapixel) count better?

A 24MP X-Trans III sensor is 6,000 x 4,000 pixels. A 26MP X-Trans IV sensor is 6,240 x 4,160 pixels. A 40MP X-Trans V sensor is 7,728 x 5,152 pixels.

Compared to the two smaller sensors, the X-Trans V sensor has roughly 53% more pixels than the 26MP sensor (and about 66% more than the 24MP one). But it is not pixel count that determines extra sharpness. It is linear resolution that determines acuity. In other words, how much longer the longest side is, all other things being equal. And the 40MP sensor translates to roughly 24% more linear resolution than the 26MP sensor, and about 29% more than the 24MP sensor.
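
If you want to check the arithmetic, here is a quick sketch in Python (the pixel dimensions are the ones quoted above; the rounding is mine):

    # Pixel dimensions quoted above for the 26MP X-Trans IV and 40MP X-Trans V sensors.
    xtrans_iv = (6240, 4160)   # ~26 megapixels
    xtrans_v = (7728, 5152)    # ~40 megapixels

    mp_iv = xtrans_iv[0] * xtrans_iv[1] / 1e6    # 25.96
    mp_v = xtrans_v[0] * xtrans_v[1] / 1e6       # 39.81

    pixel_gain = mp_v / mp_iv - 1                 # ~0.53, i.e. ~53% more pixels
    linear_gain = xtrans_v[0] / xtrans_iv[0] - 1  # ~0.24, i.e. ~24% more linear resolution

    print(f"{pixel_gain:.0%} more pixels, but only {linear_gain:.0%} more linear resolution")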

In photography, acuity refers to the sharpness and clarity of an image, particularly in the details and fine edges. Acutance, a closely related term, describes the edge contrast and the perception of sharpness in an image.

In film photography, some chemical developers increase micro-contrast on edges and give the viewer a perception of increased sharpness. Sharpness tools in applications like Photoshop do a similar thing.

Anyone who has played with the sharpness sliders in Photoshop or Lightroom or any of the other tools knows that it is possible to introduce a white halo around edges. If that is done carefully it can increase the apparent contrast between dark and light edges.
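
For the curious, the classic technique behind those sliders is the unsharp mask: blur a copy of the image, take the difference between the original and the blurred copy, and add a scaled version of that difference back. Here is a minimal sketch of the idea, assuming a single-channel image held in a NumPy array. It is not what Photoshop or Lightroom actually run under the hood, just the textbook version:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, radius=2.0, amount=1.0):
        """Classic unsharp mask on a single-channel (grayscale) image.
        Boosts the difference between the image and a blurred copy of itself.
        Push 'amount' too far and you get the familiar halos around edges."""
        image = image.astype(float)
        blurred = gaussian_filter(image, sigma=radius)
        sharpened = image + amount * (image - blurred)
        return np.clip(sharpened, 0, 255).astype(np.uint8)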

Either way, acuity and acutance are related to the ability of the human eye to see and compare the sharpness of different images. After all, if the human eye can’t see the difference in acuity between an image from a lower megapixel sensor and one from a higher megapixel sensor, then there is no difference in practical terms.

So to repeat, increased acuity is most noticeable when making large prints, or with heavy cropping.

Oh yes, and then there’s the fact that lenses need to be able to resolve that detail. For lenses that can resolve a Fuji 40MP sensor, see this article: Fuji X-Mount Lens Release Dates: A Complete List.

DxO PureRAW vs. Photoshop: A Better Way to Process Fuji RAW?

Andy Hutchinson is a no-nonsense photography YouTuber from Australia. A few days ago he talked about some standalone tools that do a better job than tools built into post-processing programs like Photoshop and Lightroom.

One of the tools he described is DxO PureRAW, a demosaicing and noise reduction tool.

What is Demosaicing

Digital camera sensors don’t know what colour light is. To the sensor, it is simply more light or less light. So sensors have a colour filter array that sits over the sensor’s pixels. The array is a mosaic of red, green, and blue colour filters.

When you want to process a RAW image on your computer, the program has to ‘read’ the raw data in the picture, using a demosaicing engine.

Programs like Photoshop and Lightroom have demosaicing engines built into them.

Some other programs use the demosaicing engine built into the operating system of your computer.
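
To make the idea concrete, here is a toy demosaicing routine in Python for the standard Bayer (RGGB) layout, using plain bilinear interpolation. It is nothing like the engines inside Lightroom or DxO, and it assumes a Bayer mosaic rather than Fuji's X-Trans layout, but it shows what 'filling in the missing colours' means:

    import numpy as np
    from scipy.ndimage import convolve

    def bilinear_demosaic(mosaic):
        """Toy bilinear demosaicing of an RGGB Bayer mosaic (a 2D array).
        Missing colours at each pixel are averaged from neighbouring pixels
        that did record that colour."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w))
        b_mask = np.zeros((h, w))
        r_mask[0::2, 0::2] = 1          # red photosites
        b_mask[1::2, 1::2] = 1          # blue photosites
        g_mask = 1 - r_mask - b_mask    # green photosites

        # Green sits on a checkerboard, red and blue on rectangular grids,
        # so they need different interpolation kernels.
        k_green = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        r = convolve(mosaic * r_mask, k_rb)
        g = convolve(mosaic * g_mask, k_green)
        b = convolve(mosaic * b_mask, k_rb)
        return np.stack([r, g, b], axis=-1)   # full-colour RGB image

Real engines do something much more edge-aware than this simple averaging, which is where the quality differences between programs come from.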

As Andy Hutchinson describes it, DxO’s approach to decoding RAW digital sensor information is to train their machine learning model to recognise real-world noise patterns and to differentiate between genuine image features and unwanted artifacts.

At the same time, the software runs DxO’s denoising algorithm on the image data.

Because they do the denoising at the same time as demosaicing, you get a purer and cleaner image than you would if you ran the image through a denoising engine after demosaicing.

There is an added advantage to using DxO PureRAW if you use Fujifilm cameras.

Fujifilm took their own route to the construction of the colour filter array over the sensor (the X-Trans array). The result is that programs like Photoshop and Lightroom have more difficulty demosaicing the RAW images than they do with cameras that use a Bayer colour filter array (almost all other camera brands).

The benefit of using PureRAW is that its end product is a DNG RAW file rather than a RAF file. So if you then want to use Photoshop or Lightroom on the DNG file, the difficult bit has already been done by PureRAW.

Plus, PureRAW includes a built-in lens softness compensation feature and also corrects for lens vignetting, chromatic aberration, and distortion.

Quite a mouthful. 

Does it work?

I downloaded a trial version of DxO PureRAW and processed a Fuji RAW file with it. I processed the same image with Photoshop and then compared the two images.

What you are looking at is a photo processed with DxO PureRAW, with part of the same image processed in Photoshop overlaid on it.

Compare the two – click twice on the photo below to blow it up to see the detail.

What is Focus By Wire

Let’s start with mechanical focus systems. They have a focus ring that is directly coupled to and moves the lens elements.

Focus by wire uses electronic signals to control focus. The photographer turns the focus ring, but the ring doesn’t move the lens elements directly. Instead, the movement of the focus ring is translated into instructions for the motor(s) built into the lens, and the motor changes the focus.

Focus by wire gets its name from the fly-by-wire systems used in aircraft. Except in small aircraft, the pilot doesn’t move the control surfaces on the wings directly because it would be impossibly hard. Instead, the pilot presses a pedal or turns a dial and motors move the control surfaces.

And because the aircraft is so big, any small errors in the response are not noticeable.

In cameras it is different because small movements can be seen and felt. This is true in still photography and in video.

And that has been the source of the criticism of focus by wire – that the systems are laggy and prone to overshoot.

The photographer turns the focus ring quickly, and the system plays catch-up.
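
A toy model makes that catch-up effect easy to see. The numbers and the update scheme below are invented for illustration (real lenses sample the ring and drive the motor in ways the manufacturers don’t publish):

    # Toy model: the ring position is sampled once per tick and the focus
    # motor can only move a limited amount per tick, so a fast ring turn
    # leaves the motor trailing behind. The numbers are invented.
    def simulate_focus_by_wire(ring_positions, max_step=5):
        motor = 0
        trace = []
        for target in ring_positions:
            error = target - motor
            motor += max(-max_step, min(max_step, error))   # rate-limited motor
            trace.append(motor)
        return trace

    ring = [0, 25, 50, 100, 100, 100, 100, 100]   # a quick turn, then held
    print(simulate_focus_by_wire(ring))
    # [0, 5, 10, 15, 20, 25, 30, 35] -- the motor is still catching up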

Photographers report that they feel divorced from the focusing, which is the exact opposite of what one should feel when trying to take a photo that needs critical focusing. The pressure might be off in a studio or with landscapes, both of which are situations where the photographer has time to focus. But on the street or with any fast-paced action, the photographer needs to feel that the response is immediate and consistent.

The situation is improving and some focus by wire systems have smooth focus changes.

It’s helpful to know how the focusing system on a lens feels before you lay down money for it.

Is Noise Bad

In London today the sun set at 16:36. Between then and dusk at 17:15 there was still enough natural light for the human eye to see features in the scene and do most activities.

Compared to the human eye, however, cameras can capture a much more compressed range of gradations from dark to light.

I shot this at 16:59, so about fifteen minutes before dusk.

You have two choices if you want to photograph in this low light – increase the ISO or keep the camera at base ISO and put it on a tripod.

I shot this at ISO 6400 and f3.2 and 1/200th of a second. If I had put the camera on a tripod I would have had to use a slow shutter speed. At base ISO I would have had to shoot at 1/4th of a second and the person coming out of the station would have been a blur.
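
The stop arithmetic behind that is easy to check. In this sketch I assume a base ISO of 125; the exact base ISO varies by camera, but the answer lands in the same region either way:

    import math

    def equivalent_shutter(shutter_s, iso_from, iso_to):
        """Keep the exposure constant: every stop of ISO you give up has to
        come back as a doubling of the shutter time."""
        stops = math.log2(iso_from / iso_to)
        return shutter_s * 2 ** stops

    # 1/200s at ISO 6400, dropped to an assumed base ISO of 125:
    print(equivalent_shutter(1 / 200, 6400, 125))   # ~0.26s, roughly 1/4 of a second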

So what are the downsides of ISO 6400 in poor light?

The photo is very noisy. Look at the close-up of the face of the man in the shadows.

So then you might think noise is a terrible thing. But at a normal viewing distance the photo will look OK. It is only when we get close that we see the noise.

Of course, if I were to print it and stare at it at the same distance as I am from the computer screen, then I would see the noise. But if the print were in a frame and hung on a wall, then a normal viewing distance might be more than two metres (a bit over two yards) and you would hardly see the noise.

Depth Of Field For Different Formats

The brightness of the light reaching the sensor at a given f-stop is the same no matter the size of the sensor. It’s the relationship between the focal length and the size of the hole in the lens that lets the light through that counts. So the exposure is the same, for example, on an APS-C lens with a maximum aperture of f4 and a full frame lens with a maximum aperture of f4. When you think about it, that must be true, because an f-stop is defined as the focal length divided by the diameter of the aperture, and that ratio fixes how brightly the lens illuminates the sensor.

What does change is the depth of field, and to look at that we have to look at equivalent apertures.

Equivalent Apertures

Equivalent aperture is the aperture value on one sensor format that gives the same depth of field (DoF) and background blur as a given aperture on a different sensor format.

For example, on a Canon APS-C sensor with a crop factor of 1.6, an f4 aperture would be equivalent, in depth of field terms, to an aperture of f6.4 on a full-frame sensor. To get the answer of f6.4, multiply the aperture by the crop factor. In this example it is 1.6 x f4, which gives an equivalent aperture of f6.4.

To put it in a more general way, a wide aperture on a full-frame sensor will have a shallower depth of field than the same aperture on a crop sensor.
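
The calculation is simple enough to put in a couple of lines of Python. The crop factors are the usual published ones (1.6 for Canon APS-C, 1.5 for Fuji APS-C):

    def equivalent_aperture(f_number, crop_factor):
        """Aperture giving the same depth of field on full frame as
        f_number does on a sensor with the given crop factor."""
        return f_number * crop_factor

    print(equivalent_aperture(4.0, 1.6))   # Canon APS-C: f6.4
    print(equivalent_aperture(4.0, 1.5))   # Fuji APS-C: f6.0

So an f4 lens on a Fuji body gives roughly the depth of field of f6 on full frame, while the exposure stays the same.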