Nikon recently announced its new flagship camera, the Z9. This camera represents Nikon’s first mirrorless camera fully geared towards professional photographers and hybrid shooters, with a huge spec list designed for use in the demanding fields of photojournalism, sports, nature, birding, and any other use cases that call for a camera that shoots incredibly fast at high resolution. The $5,500 Z9 is also Nikon’s first camera to omit a traditional mechanical shutter, allowing it to achieve new levels of speed and autofocus performance.
Faster speed is great, especially for sports photographers. But it’s interesting to think about where this technology could be used to take traditional-style cameras in the future. This might be the first step towards larger format cameras adopting the computational smarts that smartphone cameras have been embracing for years.
Nikon has made no mention of things like computational photography for HDR-style photos or the cyclical buffering that smartphones do to simultaneously capture up to nine or 10 frames and combine them with each press of the shutter button. But the new 45.7-megapixel full-frame backside-illuminated stacked CMOS sensor isn’t far off from what has been in phones for years, at least in terms of the core design. This kind of construction uses a sandwiched architecture of sensor, logic board, and dedicated RAM — yielding incredibly fast readout speeds.
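To make that frame-combining idea concrete, here is a toy sketch of multi-frame averaging in Python. This is not Nikon’s or any phone maker’s actual pipeline; real implementations (such as Google’s HDR+) also align frames and weight them before merging, and the frame sizes and noise levels below are invented purely for illustration.

```python
import numpy as np

def merge_burst(frames):
    """Toy multi-frame merge: average a burst of already-aligned frames.

    Averaging N frames cuts random sensor noise by roughly sqrt(N).
    Real smartphone pipelines also align and weight frames, which this
    sketch omits.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a 9-frame burst of a flat gray scene with random sensor noise.
rng = np.random.default_rng(0)
truth = np.full((16, 16), 128.0)          # the "true" scene
burst = [truth + rng.normal(0, 10, truth.shape) for _ in range(9)]
merged = merge_burst(burst)

single_err = np.abs(burst[0] - truth).mean()  # noise in one frame
merged_err = np.abs(merged - truth).mean()    # noise after merging
```

The merged frame lands much closer to the true scene than any single frame does, which is the core reason phones stack bursts instead of trusting one exposure.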
Today, that enables the Z9 to use a full-time electronic shutter with a top shutter speed of 1/32,000 of a second and achieve incredibly quick burst shooting. It can capture 20 frames per second in RAW / JPG at full resolution or as fast as 120fps at 11 megapixels, all without making a sound (optional fake shutter sounds can be enabled as an audible cue). The new Expeed 7 processor and dual CFexpress / XQD card slots give the Z9 a claimed 1,000-shot buffer at full resolution in a high-efficiency compressed RAW, but it’s the fast readout speed of the stacked sensor that could be the key to the computational photography puzzle.
As the first major camera manufacturer to ditch the mechanical shutter, Nikon has put itself ahead of its competitors in the race towards computational photography. Sony’s A1 and A9 lines have already utilized stacked sensors for fast readout speeds, making electronic shutters viable for full-time duty, and Canon’s upcoming R3 will use the same technology. Moving to a fully electronic shutter has been the logical next evolution for cameras, though the onus will be on Nikon to prove its electronic shutter is up to the everyday tasks and demands of pro photographers right now.
To date, efforts from camera manufacturers to implement computational photography have been limited to features like Olympus’s Live ND and Panasonic’s post-focus and in-camera focus stacking. Handy features, yes, but these are sideshows compared to the paradigm shift that full computational photography implemented with every press of the shutter could one day be. OM System, the newly rebranded Olympus, recently promised to utilize computational photography technology in its next camera, but we will have to see if that’s the main focus or just another feature on the side.
Deep learning, which is used in the new Z9’s object detection autofocus system, has also been used to some extent in prior cameras from Olympus, Panasonic, and Canon. It serves to improve autofocus tracking performance, but in the end, a mirrorless camera still captures a single image that is limited by the dynamic range of the sensor.
The primary barrier preventing cameras like the Z9, and other pro- or enthusiast-level mirrorless cameras with stacked sensors, from going fully computational likely lies in data throughput and the image processing pipeline. Ten frames captured from a 45-megapixel full-frame sensor and combined into one file will be many times larger than the same stack of images taken from a smartphone sensor a fraction of the size.
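A back-of-envelope calculation shows the scale of the problem. The sketch below assumes uncompressed 14-bit readout and a 12-megapixel phone sensor; both figures are illustrative assumptions, not manufacturer specifications.

```python
# Back-of-envelope comparison of raw data in a 10-frame burst,
# assuming uncompressed 14-bit readout (illustrative, not a spec).
BITS_PER_PIXEL = 14
FRAMES = 10

def burst_megabytes(megapixels):
    """Uncompressed raw data for one 10-frame burst, in megabytes."""
    bits = megapixels * 1_000_000 * BITS_PER_PIXEL * FRAMES
    return bits / 8 / 1_000_000

full_frame = burst_megabytes(45.7)  # Z9-class sensor: ~800 MB per burst
phone = burst_megabytes(12)         # typical phone sensor: ~210 MB per burst

# The full-frame burst carries nearly 4x the data of the phone burst,
# before any alignment or merging work even begins.
ratio = full_frame / phone
```

That gap is pure pixel count; add alignment, merging, and tone mapping on top, and the processing budget per shutter press balloons accordingly.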
Additionally, cyclical buffering requires constantly writing and rewriting images to the camera’s buffer in the background before you ever press the shutter. Even the Z9’s new processor might not be up to these tasks. In the smartphone space, CPUs are designed to be well-suited for this processing, sometimes even using dedicated hardware, but cameras are not built the same way. It’s possible more innovation at the CPU level is still needed from the camera manufacturers.
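Cyclical buffering itself is conceptually simple: it’s a ring buffer, where the newest frame overwrites the oldest. A minimal sketch in Python, with frames simulated as plain integers:

```python
from collections import deque

# Minimal sketch of a pre-shutter ring buffer: the camera keeps
# overwriting the oldest frames in the background, so the moment the
# shutter fires, the last N frames are already sitting in memory.
BUFFER_FRAMES = 5

ring = deque(maxlen=BUFFER_FRAMES)  # oldest frames fall off automatically

for frame_id in range(12):          # sensor streaming frames continuously
    ring.append(frame_id)           # constant-time overwrite of the oldest

# "Shutter press": grab everything captured just before this instant.
captured = list(ring)
# captured == [7, 8, 9, 10, 11]
```

The data structure is trivial; the hard part, as the paragraph above notes, is sustaining this write-and-discard loop at full sensor resolution without overwhelming the camera’s processor and memory.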
There are some obvious advantages to using computational photography. Most any modern smartphone can create a balanced exposure with subjects nicely lit, shadows full of visible detail, and clouds visible — all in the same frame. Advancements like Night Sight and Night Modes let you do things that are much harder to achieve with a standard camera, while Google continues to bring new computational tricks to keep subjects sharp when in motion, and Apple even allows RAW files with computational data.
On the other hand, a photo taken with even the most advanced mirrorless camera today, while superior in sharpness and resolution, demands some sacrifice, such as blowing out the highlights or crushing the shadow details in high-contrast daytime scenes. Achieving the same look as most smartphones requires at least a bit of post-processing and editing, ideally from a RAW file that must be exported as a JPG or other universal format. Computational photography coming to dedicated camera systems could re-energize the camera market, though it might also take camera manufacturers finally figuring out connected Wi-Fi apps that are not terrible, which is admittedly another tall order.
Cameras like the Z9 may be the bridge to that future, something that might be appreciated even by professional photographers, who could spend less time editing to achieve the look many of their clients seek. It might just make full-size cameras a bit more exciting again, even if it may also further blur the lines of “what is a picture?”
This article was originally posted on theverge.com.