The Stats That Matter

In a recent post I talked about the minor difficulty of making a meaningful comparison between different cameras’ ability to acquire and track focus. This brought to mind a more general problem about hardware comparisons and the mismatch between what photographers need to know, and what metrics they end up discussing.

Nerdy photographers (rather than cool ones) obsess over specifications. Specifically, they want to quantify answers to questions such as ‘How good is this camera in low light?’, ‘Is this lens or sensor sharp?’, ‘Which of these components will give better image quality?’, ‘How responsive is this imaging system to fast-moving environments?’

Some of those answers are given helpfully in manufacturer specifications and reviews, but some are not. By ‘helpful’ I mean stats quoted in universally comparable, comprehensible, and unambiguous terms. For instance: a Nikon Z6 II weighs XXg, and a Canon R7 weighs XXg. The gram is a reassuringly logical and concrete unit of measurement (unless you’re American). But wait – are the quoted stats with or without the battery and media? Does the camera require an extra grip or tripod plate in use? Will you be using adapted lenses? If so, the weight of the adaptor must be taken into account. Nikon’s FTZ adaptor is much lighter than the Sigma MC11, but the Z6 battery and media are heavier. This all matters when calculating and balancing a gimbal payload.

Megapixels

For stills, the most often quoted metric is megapixels. Megapixels matter. If we’re talking about resolving a scene, more is better. But is it better in every way? In principle, bigger pixels are better, too, with a better SNR. But bigger pixels mandate either fewer of them, or the same number on a bigger sensor. A bigger sensor is (in principle) a more sluggish sensor, with a slower readout, requiring bigger lenses that are harder to design for high resolution: ‘lens megapixels’ matter as much as sensor megapixels.
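As a rough sketch of that trade-off, here is some back-of-the-envelope arithmetic; the 24MP count and the sensor widths below are round-number assumptions, not the figures of any specific camera:

```python
# Back-of-the-envelope pixel-pitch comparison. The sensor widths and the
# 24MP count are nominal round figures, not exact specifications.
def pixel_pitch_um(sensor_width_mm, megapixels, aspect=(3, 2)):
    """Approximate pixel pitch in microns for a 3:2 sensor of a given width."""
    w, h = aspect
    horizontal_pixels = (megapixels * 1e6 * w / h) ** 0.5
    return sensor_width_mm * 1000 / horizontal_pixels

# The same 24MP spread over different sensor sizes:
print(round(pixel_pitch_um(36.0, 24), 2))   # 'full frame' (36mm wide): ~6.0 microns
print(round(pixel_pitch_um(23.5, 24), 2))   # APS-C (c.23.5mm wide):    ~3.92 microns
```

Same pixel count, smaller pitch: the smaller sensor gives up per-pixel light gathering, while the bigger one shifts the burden onto readout speed and lens design.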

Also – how many pixels is ‘enough’? While we prefer more headroom for cropping and potential uses down the line, how many of the images you have ever taken have been printed larger than poster size? Also – the larger the print, the lower the pixel density required. For 99.99% of all images ever captured, c.30MP (to pluck round figures from the air) is bigger than they will ever need to be. The megapixel race of the first phase of the digital revolution has indeed plateaued into a sensible realisation that 10-16MP is adequate for amateur purposes. Few professionals have clients paying a premium for delivering images over 24MP in size.
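To put ‘poster size’ into numbers, a quick sketch; the 300dpi and 150dpi figures are common rules of thumb rather than hard limits, and 6,720 pixels is roughly the width of a 30MP, 3:2 image:

```python
# Rough print-size arithmetic. 300dpi is the usual rule of thumb for prints
# viewed close up; 150dpi is plenty for a poster viewed from across a room.
def print_width_inches(horizontal_pixels, dpi):
    return horizontal_pixels / dpi

# A c.30MP, 3:2 image is roughly 6,720 pixels wide.
for dpi in (300, 150):
    print(f"{dpi} dpi: {print_width_inches(6720, dpi):.1f} inches wide")
# 300 dpi: 22.4 inches wide
# 150 dpi: 44.8 inches wide
```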

Given that almost all images are now viewed on screens rather than in print, even 24MP sets the bar high: today’s class-leading iPhone 14 Pro Max has a display resolution of 1290 x 2796 – a mere 3.6MP (though, granted, it’s a window on to images of potentially unlimited size). A 4K desktop or cinema screen in 16:9 ratio is an 8.3MP image.

Ratios bring us to another wrinkle in megapixel stats. Intuitively, we tend to think about a ‘big image’ being a ‘wide image’ – but megapixels measure area, not length. Buyers of Micro FourThirds cameras wanting to shoot detailed panoramic landscapes may be disappointed to learn that their 20MP camera produces captures 5,184 pixels wide – not much more generous in the horizontal than Nikon’s nominally 16MP D7000, which in practice yields 4,928 pixels. The image ratio is different. Aside from aesthetic considerations (ie, ideal proportions), 4:3 makes sense for a camera trying to be small: tending to square, it makes more efficient use of a smaller image circle. The 3:2 ratio of APS-C and 36x24mm ‘full-frame’ sensors favours photographic styles with wider image ratios such as interiors, landscapes, and the wide world of cinematic and anamorphic moving images. But if you want to shoot square, or ‘academy ratio’, a 4:3 sensor has an important head start in resolution terms. For more on this with relevance to the Fujifilm GFX format, please see this article.
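A quick sketch of the area-versus-width arithmetic (nominal figures only; real sensors use slightly different effective pixel counts, hence the small mismatch with the widths quoted above):

```python
# Megapixels measure area; the aspect ratio decides how much of it goes into width.
def dimensions(megapixels, aspect_w, aspect_h):
    """Approximate pixel dimensions for a given MP count and aspect ratio."""
    width = (megapixels * 1e6 * aspect_w / aspect_h) ** 0.5
    return round(width), round(width * aspect_h / aspect_w)

print(dimensions(20, 4, 3))   # 20MP at 4:3 -> (5164, 3873); 5,184 x 3,888 in practice
print(dimensions(16, 3, 2))   # 16MP at 3:2 -> (4899, 3266); 4,928 x 3,264 in practice
print(dimensions(24, 3, 2))   # 24MP at 3:2 -> (6000, 4000)
```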

Speed

Historically, camera manufacturers raced to outdo the competition in crafting ultra high-precision shutter mechanisms capable of cycling many times per second. It wasn’t long ago that battery grips were sold less on their ability to improve a camera’s ergonomics than on their ability to boost the frame rate. ‘FPS’ is still a headline figure when advertising cameras intended for the capture of rapidly moving subjects, and in the minds of many photographers, it’s still the mark of a ‘fast’ camera. However, times have changed. Most modern cameras shoot at triple the speed of the fastest cameras of twenty years ago.

What hasn’t changed is the relevance of the question: ‘how fast is enough?’ Shooting stills of hummingbirds in flight, at 50 wingbeats a second, will always require shutter speeds faster than 1/500 to freeze motion – the faster the better. The degree to which each photographer wants, or needs, to ‘spray and pray’ varies, along with the speed of their subject. And who doesn’t love to pretend they’re shooting a machine gun instead of a sniper rifle? But FPS isn’t the only metric of speed.

More important, for me, is the micro-behaviour of the camera that lets me attune to the scene I’m shooting without lag, or the intermediation of faffing processes. The responsiveness of the controls matters. Viewfinder lag matters: it’s zero ms with a DSLR – beat that, mirrorless! Shutter lag matters: this is a figure rarely quoted. Card-write and sensor read-out speeds matter. Buffer speed matters. The speed and reliability of eye and subject tracking matter. Not all these ‘mattering stats’ are published, or given sufficient priority.
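To illustrate how those small delays compound, here is a purely hypothetical latency budget; every figure below is invented for the sake of the arithmetic, not a measurement of any particular camera:

```python
# Hypothetical shutter-to-capture latency budget. Every value is invented
# for illustration; none is a measured specification of a real camera.
latency_ms = {
    "viewfinder lag": 20,    # EVF refresh and processing delay (zero for an OVF)
    "AF acquisition": 55,    # time for the AF system to confirm focus
    "shutter lag": 45,       # shutter-release press to start of exposure
    "sensor readout": 15,    # electronic-shutter readout (or blackout)
}
total = sum(latency_ms.values())
print(f"Worst-case delay: {total} ms, roughly 1/{round(1000 / total)} of a second")
```

On those invented numbers, the camera is a seventh of a second behind the scene in front of it, which is a long time for a bird, a ball, or a fleeting expression.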

Having mentioned reliability: speed is nothing without precision, or as my mother used to say: ‘more haste, less speed.’ The Canon R7, for instance, is described in the manufacturer literature as “Ferociously fast, at 15 fps and beyond. It’s capable of capturing up to 15 RAW + JPEG files every second using its mechanical shutter, and up to 30 fps with its silent electronic shutter – all with simultaneous AF tracking and autoexposure.” Well, yes and no. Yes, it can shoot that many images in brief bursts, but not all of them will be in focus, and if the subject or camera moves at all, many elements within the picture will be warped. In extremis, the readout and AF processor are just too slow to keep pace with its shutter. Though this is a fine, swift, high-value camera (one I use regularly), it would more honestly be rated at half its published stats.

For video, this is crucial. Buried in the stats, and rarely compared on a level playing field, is a specific measure of readout speed that governs rolling shutter. What tends to be highlighted is the best-case scenario for each camera, not a like-for-like figure. However, to gain an accurate picture, we need all the stats. Inevitably, rolling shutter tends to be worse at higher frame rates, and with larger, slower sensors than with smaller, lower-resolution ones. But sometimes full frame sensors perform better in APS-C mode than native APS-C sensors. Sometimes APS-C sensors (and even full frame sensors) apply such a drastic crop that they surrender any advantage they have over Micro FourThirds cameras in terms of noise. How do they then compare in terms of rolling shutter? Figures are rarely collated or presented in terms that allow us to make this comparison. So here goes.

The table below compares sensor readout speeds for 4K video at various frame rates and sampling modes. The same figures also indicate how much warping to expect when using the electronic shutter for stills. For comparison, most mechanical shutters have a transit time of less than 5ms, which for the purposes of most still images can be regarded as instantaneous.

Assume that (broadly) all M43 imaging areas (created by native M43 sensors, or by larger sensors operated in a cropped mode) have similar levels of noise – ditto for APS-C sensors compared to cropped full-frame sensors. And that an uncropped larger sensor will have correspondingly less noise. There has been considerable confusion over the presentation of readout speeds in milliseconds (which I’ve seen confused with microseconds) and as fractions of a second. Here I have standardised on milliseconds, converting figures that the manufacturer or independent testers published as fractions of a second. Below 15ms is currently considered ‘acceptable’, but it won’t be in the future – it shouldn’t be acceptable at all.
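As a sketch of how those figures translate into visible skew when panning, here is a rough conversion and estimate; the 60°/s pan speed, the 40° horizontal field of view, and the sample readout times are illustrative assumptions, not measurements of any particular camera:

```python
# Convert published readout figures to milliseconds and estimate how far
# verticals lean during a horizontal pan. The pan speed, field of view, and
# sample readout values are illustrative assumptions, not measured specs.
def to_ms(readout):
    """Crude heuristic: values below 1 are fractions of a second, else already ms."""
    return readout * 1000 if readout < 1 else readout

def skew_fraction(readout_ms, pan_deg_per_s, horizontal_fov_deg=40):
    """Fraction of the frame width by which verticals lean during readout."""
    degrees_swept = pan_deg_per_s * readout_ms / 1000
    return degrees_swept / horizontal_fov_deg

for readout in (1 / 15, 27, 15, 6.9):            # a mix of fractions and ms
    ms = to_ms(readout)
    lean = skew_fraction(ms, pan_deg_per_s=60)   # a brisk 60°/s pan
    print(f"{ms:5.1f}ms readout -> verticals lean by ~{lean:.0%} of the frame width")
```

On those assumptions, a 66.7ms readout smears verticals by around a tenth of the frame width, while anything under 15ms keeps the lean to a couple of per cent – which is why that threshold gets treated as ‘acceptable’.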

Readout Speed Comparison Table

| Camera | Full-frame stills (electronic shutter) | 4K video, full-frame (better for noise) | 4K video, APS-C area (worse for noise) | 4K video, M43 area (worst for noise) |
|---|---|---|---|---|
| Nikon Z6 | — | Oversampled (HQ): 31ms; subsampled: 15.6ms | 15ms | — |
| Canon R7 | N/A | N/A | — | — |
| Sony A7 IV | 14-bit: 66ms | Full width: 27ms | 12-bit 60P: 12.8ms | — |
| Fuji X-H2S | N/A | N/A | 14-bit 60P: 10.6ms; 14-bit 24P: 9.7ms; 12-bit: 5.6ms | — |
| Panasonic GH6 | N/A | N/A | N/A | 24P: 13.3ms |
| Olympus OM1 | N/A | N/A | N/A | 10-bit: 6.9ms |

A crucial aspect of this table is the difference in quality between video modes. DPReview used to publish instructive image comparisons of still frames taken from video that explored the sometimes vast differences between identical-sounding 4K cameras. I hope they return to it, and expand it with results from cameraphones and professional video imaging systems. Especially at lower price points, cameras ask the user to trade speed for quality: the best-looking shooting modes inevitably have worse rolling shutter, and vice versa. More than anything, it is what currently distinguishes cheap cameras from expensive ones.
