Researchers at the Institut für Mikroelektronik Stuttgart in Germany have released a paper detailing the development of an image sensor that apparently makes it virtually impossible to blow your highlights. It uses “self-resetting pixels”, which don’t clip when they become saturated. Instead, a saturated pixel simply starts counting over, keeping track of the number of times it’s started over.
Traditional CMOS sensors can only record a finite range of exposure for each colour channel of each pixel, which is stored as an integer. On a 14-bit sensor, the value recorded for each of the red, green and blue colour channels of each pixel lies somewhere between zero for total blackness and 16,383 for pure white. Anything that would produce a brightness value higher than 16,383 is lost.
With this new sensor, when a pixel reaches saturation (16,383), it simply resets back to zero and keeps counting up again. The number of times a pixel has had to reset in this way is stored alongside the remaining value, meaning highlights don’t clip and can always be recovered.
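To make the difference concrete, here’s a minimal sketch (in Python, not taken from the paper) of the idea: a conventional pixel throws away anything above full scale, while a self-resetting pixel keeps a wrap-around count alongside the residue, so the original brightness can always be reconstructed. The function names and the 14-bit full scale are purely illustrative.

```python
# A sketch contrasting a conventional pixel, which clips at full scale,
# with a self-resetting pixel that wraps around and counts its resets.
# Names and numbers are illustrative assumptions, not from the paper.

FULL_SCALE = 16_383  # maximum value of a 14-bit pixel (2**14 - 1)

def conventional_pixel(light_value: int) -> int:
    """Anything brighter than full scale is clipped and lost."""
    return min(light_value, FULL_SCALE)

def self_reset_pixel(light_value: int) -> tuple[int, int]:
    """Store the residue plus the number of resets, so nothing is lost."""
    resets, residue = divmod(light_value, FULL_SCALE + 1)
    return resets, residue

def recover(resets: int, residue: int) -> int:
    """Recombine the two numbers into the original brightness."""
    return resets * (FULL_SCALE + 1) + residue

# A highlight roughly three times brighter than full scale:
bright = 50_000
print(conventional_pixel(bright))           # 16383 -> detail gone
print(recover(*self_reset_pixel(bright)))   # 50000 -> detail preserved
```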
The paper describes how each self-reset pixel, with its analog and digital parts, captures an image in four phases (depicted in Fig. 2 of the paper). After an initial reset to SVDD, the floating diffusion (FD) charge-storing node is discharged during the first phase by the photocurrent generated by the light-exposed photodiode while the transfer gate is switched on. If the node reaches the reference voltage Vcomp, it is reset to SVDD and the counter in the digital part of that pixel is increased by one. This continues until the end of the 10 ms integration time, so at the end of phase 1 the counter, which contains the number of self-resets, holds a coarse value of the measured light intensity. In phase 2 this counter value is read out and the counter is simultaneously reset to 0. A residual charge also remains on the floating diffusion node at the end of phase 1, represented by the voltage VFD on that node. In phase 3 this voltage is converted into a fine value for the pixel using a ramp analog-to-digital conversion, and in phase 4 the fine value is read out from the counter and the counter is reset again. Combining the 10-bit coarse and 10-bit fine values results in a linear 20-bit value.
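For the technically curious, here’s a rough sketch of how those two readings could be stitched together. The bit widths (10-bit coarse, 10-bit fine, 20-bit result) come from the paper; everything else (the names, the simple bit-shift combination, the example numbers) is an assumption for illustration, not the researchers’ actual implementation.

```python
# A rough sketch of combining the coarse (self-reset count) and fine
# (ramp ADC of the residual FD voltage) readings into one linear value.
# Bit widths are from the paper; the combination shown here is assumed.

COARSE_BITS = 10   # number of self-resets counted during the 10 ms integration
FINE_BITS = 10     # ramp ADC reading of the residual voltage on the FD node

def combine(coarse: int, fine: int) -> int:
    """Each self-reset is worth one full fine-scale of charge, so the
    coarse count occupies the upper bits of the 20-bit output."""
    assert 0 <= coarse < 2**COARSE_BITS
    assert 0 <= fine < 2**FINE_BITS
    return (coarse << FINE_BITS) | fine   # linear 20-bit pixel value

# e.g. 37 self-resets plus a residue of 512 ADC counts:
print(combine(37, 512))   # 38400, on a 0..1,048,575 (20-bit) scale
```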
What’s incredible is that if this technology can be adapted for future generations of cameras, you’d never have to worry about putting a graduated ND filter on your lens to avoid overexposing a sky. You wouldn’t have to underexpose your images in general to ensure you retain the detail you want in the highlights. Less time would be spent editing and pulling up shadows, and, sheesh, maybe it’d even eliminate the noise that heavy retouching introduces into your images.
Currently this chip is being designed for video, but the lead researcher, Stefan Hirsh, was quoted as saying that “it should also be possible to use for still images”, which gives us photographers some hope. We can’t have the videographers getting all the best toys, after all. The technology is still in its early stages and is quite limited in practicality for now. There’s no timeline on when this tech will be delivered, but we’re hopeful we’ll see some progress in the near future. For those interested and technically inclined, you can read the full paper here. Until the tech gets further along, we’ll just sit here and think about all the things we could create straight out of camera with an “infinite” dynamic range sensor! What would you like to photograph with a sensor like this? How would it change the way you work? Let us know in the comments below.