You sequentially
see the scene in multiple images, with your sensitivity band travelling along
the brightness range. In other words, you make your camera sensitive
to different parts of the brightness range that might exist in the scene, collect
those images, and from those images you compose a single image in which every
region is captured in the best possible way, that is, with detail best
retained. You do that by assembling the final image in pieces,
where each piece comes from the best-exposed individual image. So this is very similar
to the best-focus approach: you just replace focus sensitivity with
brightness sensitivity.
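The piecewise composition described above can be sketched in a few lines of NumPy. This is a minimal illustration, assuming a linear sensor response and known exposure times; the function name and the mid-gray "well-exposedness" score are my own choices, not something prescribed in the lecture.

```python
import numpy as np

def fuse_brackets(images, exposure_times):
    """Compose a radiance map from a bracketed exposure stack.

    For each pixel, pick the capture whose recorded value is best exposed
    (closest to mid-range, so it is neither clipped nor noise-dominated),
    then divide by that capture's exposure time to recover relative radiance.

    images: list of 2-D float arrays with values in [0, 1], one per exposure.
    exposure_times: matching list of exposure times.
    """
    stack = np.stack(images)                 # shape (n, H, W)
    times = np.asarray(exposure_times, dtype=float)
    # "Well-exposedness" score: distance from mid-gray; smaller is better.
    score = np.abs(stack - 0.5)
    best = np.argmin(score, axis=0)          # (H, W) index of best capture per pixel
    picked = np.take_along_axis(stack, best[None], axis=0)[0]
    # Normalise by exposure time -> values proportional to scene radiance.
    return picked / times[best]
```

For example, a scene with one dim and one very bright region can be recovered from a long and a short exposure: the long exposure supplies the dim pixel, the short one supplies the pixel that the long exposure clipped.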
brightness sensitivity.

There is another family of approaches concerning camera sensors, in which you use different ways of controlling those sensors. You could use multiple sensor elements. Here you do things in parallel, just as we had on the focus scale: you have different cameras tuned to different brightness ranges, all images are acquired in parallel, and you combine them into a single high-dynamic-range image, again composing the final image from the individual pieces obtained by the different cameras.

Then there are also methods in which pixel exposure is controlled, where you expose each pixel individually with a different effective exposure time. You can have spatially varying exposure, that is, entirely different exposure times, different rates of exposure, varying spatially across the image, so that different pixels have different sensitivities.

Then, just as we had multiple sensor elements within a pixel, we can have multiple image sensors: parallel image sensors doing the same thing as multiple sensor elements within a pixel.
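The spatially varying exposure idea can be sketched as follows. This is a toy illustration, assuming a linear sensor, a small repeating mosaic of exposure times, and a simple neighbour-averaging fill-in for saturated pixels; the function names and the saturation threshold are assumptions of mine, not part of any particular sensor's design.

```python
import numpy as np

def sve_capture(radiance, pattern):
    """Simulate one capture with a per-pixel exposure mosaic (spatially
    varying exposure): the pattern of exposure times is tiled over the
    sensor, so neighbouring pixels see the scene at different sensitivities."""
    h, w = radiance.shape
    times = np.tile(pattern, (h // pattern.shape[0] + 1,
                              w // pattern.shape[1] + 1))[:h, :w]
    return np.clip(radiance * times, 0.0, 1.0), times

def sve_reconstruct(capture, times, sat=0.99):
    """Recover radiance from a single SVE capture: divide each unsaturated
    pixel by its own exposure time; fill saturated pixels from the mean of
    valid neighbours, which carry a shorter exposure in the mosaic."""
    est = capture / times
    valid = capture < sat
    out = est.copy()
    ys, xs = np.where(~valid)
    for y, x in zip(ys, xs):
        nb = [(y + dy, x + dx) for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
              if 0 <= y + dy < capture.shape[0] and 0 <= x + dx < capture.shape[1]]
        vals = [est[p] for p in nb if valid[p]]
        if vals:
            out[y, x] = np.mean(vals)
    return out
```

With a checkerboard of long and short exposures, every pixel that saturates under the long exposure has short-exposure neighbours to borrow from, which is the point of interleaving the sensitivities spatially.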