Wednesday 30 January 2013

Linear Floating Point HDR


In the real world, light behaves linearly. Turn on two
lightbulbs of equivalent wattage where you previously had
one and the entire scene becomes exactly twice as bright.

A linear color space lets you simulate this effect simply by
doubling pixel values. Because this re-creates the color
space of the original scene, linear pixels are often referred
to as scene-referred values, and doubling them in this manner
can easily send values beyond monitor range.
The Exposure effect in After Effects converts the image
to which it is applied to linear color before doing its work,
unless you specifically tell it not to by checking
Bypass Linear Light Conversion. It internally applies a
0.4545 gamma correction to the image (1 divided by 2.2,
inverting standard monitor gamma) before adjusting.
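
A minimal sketch of that round-trip, assuming a pure power-law 2.2
monitor gamma (the true sRGB curve adds a linear toe near black, so
treat this as an approximation rather than After Effects' exact math):

```python
GAMMA = 2.2

def video_to_linear(v):
    """Undo monitor gamma: an encoded 0.0-1.0 value becomes scene-referred light."""
    return v ** GAMMA

def linear_to_video(v):
    """Re-apply the 1/2.2 (~0.4545) encoding for display."""
    return v ** (1.0 / GAMMA)

v = 0.5
print(video_to_linear(v))                    # ~0.22 in linear light
print(linear_to_video(video_to_linear(v)))   # back to 0.5
```
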
A common misconception is that if you work solely in the
domain of video you have no need for floating point. But
just because your input and output are restricted to the 0.0
to 1.0 range doesn't mean that overbright values above 1.0
won't figure into the images you create. The 11_sunrise.aep
project included on your disc shows how they can add to
your scene even when created on the fly.
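
A small illustration of why, using hypothetical pixel values: in floating
point an overbright value survives intermediate operations and its detail
can be recovered later, whereas an 8-bpc pipeline clips it away immediately.

```python
highlight = 4.0   # a scene-referred value two stops above white (hypothetical)
darken = 0.125    # pull the exposure down three stops

# Floating point keeps the overbright, so detail re-emerges when darkened.
print(highlight * darken)              # 0.5: a readable mid-grey highlight

# Clipping to the 0.0-1.0 video range first throws that information away.
print(min(highlight, 1.0) * darken)    # 0.125: nearly black
```
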
The examples in Table 11.1 show the difference between
making adjustments to digital camera photos in their
native video space and performing those same operations
in linear space. In all cases, an unaltered photograph
featuring the equivalent in-camera effect is shown for
comparison.
The table's first column contains the images brightened
by one stop, an increment on a camera's aperture, which
controls how much light is allowed through the lens. Widening
the aperture by one stop allows twice as much light
to enter. An increase of three stops brightens the image by
a factor of eight (2 × 2 × 2, or 2³).
To double pixel values in video space is to quickly blow out
bright areas in the image. Video pixels are already encoded
with extra brightness and can’t take much more.
The curtain and computer screen lose detail in video space
that is retained in linear space. The linear image is nearly
indistinguishable from the actual photo for which camera
exposure time was doubled (another practical way to
brighten by one stop).
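
A quick numerical sketch of the difference, using a hypothetical bright
pixel and the same simple power-law 2.2 gamma as above:

```python
GAMMA = 2.2
video = 0.7   # a bright pixel as stored in gamma-encoded video space

# One stop brighter in video space: double the encoded value, then clip.
print(min(video * 2.0, 1.0))   # 1.0: blown out, detail gone

# One stop brighter in linear space: decode, double the light, re-encode.
linear = video ** GAMMA                               # ~0.46 scene-referred
brightened = min(linear * 2.0, 1.0) ** (1.0 / GAMMA)
print(brightened)                                     # ~0.96: brighter, but not clipped
```
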

The second column simulates an out-of-focus scene using
Fast Blur. You may be surprised to see an overall darkening
with bright highlights fading into the background—at
least in video space. In linear, the highlights pop much
better. See how the little man in the Walk sign stays bright
in linear but almost fades away in video because of the
extra emphasis given to dark pixels in video space. Squint
your eyes and you notice that only the video image darkens
overall. Because a defocused lens doesn’t cause any less
light to enter it, regular 8 bpc blur does not behave like a
true defocus.
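
A toy example of that behavior, blurring a one-dimensional strip of
hypothetical pixels that contains a single overbright highlight:

```python
GAMMA = 2.2

# A strip of scene-referred pixels: mostly dark, with one overbright highlight.
linear_pixels = [0.05, 0.05, 4.0, 0.05, 0.05]

def box_blur(pixels):
    """3-tap box blur, averaging whatever neighbors exist at the edges."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(i - 1, 0):i + 2]
        out.append(sum(window) / len(window))
    return out

# Blur the light itself, then clip and encode for display.
blurred_linear = [min(v, 1.0) ** (1 / GAMMA) for v in box_blur(linear_pixels)]

# Or blur the already-encoded video values, the 8-bpc way.
video_pixels = [min(v, 1.0) ** (1 / GAMMA) for v in linear_pixels]
blurred_video = box_blur(video_pixels)

print(blurred_linear)  # the highlight spreads and its neighbors stay at white
print(blurred_video)   # the same highlight dims toward the dark background
```
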
The table's third column uses After Effects' built-in motion
blur to simulate the streaking caused by quick panning as
the photo was taken. Pay particular attention to the highlight
on the lamp; notice how it leaves a long, bright streak
in the linear and in-camera examples. Artificial dulling of
highlights is the most obvious giveaway of nonlinear image
processing.
Artists have dealt with the problems of working directly
in video space for years, often without even realizing they are
compensating for it all the time. A perfect example is the Screen
transfer mode, which is additive in nature but whose calculations
are clearly convoluted when compared with the
pure Add transfer mode. Screen uses a multiply-toward-white
function with the advantage of avoiding the clipping
associated with Add. But Add's bad reputation comes from
applying it to bright video-space images, where it clips. Screen was
invented only to help people be productive when working
in video space, without overbrights; in fact, Screen darkens overbrights.
Real light doesn't Screen, it Adds.
Add is the new Screen, Multiply is the new Hard Light, and
many other blending modes fall away completely in linear
floating point.
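
The two formulas side by side make the point; the values here are only
illustrative:

```python
def screen(a, b):
    """Multiply-toward-white: 1 - (1 - a) * (1 - b)."""
    return 1.0 - (1.0 - a) * (1.0 - b)

def add(a, b):
    """Plain summation of light, which is how linear float behaves."""
    return a + b

print(screen(0.5, 0.5))   # 0.75: creeps toward white, never clips
print(add(0.5, 0.5))      # 1.0:  two half-bright lights make full brightness

# Given an overbright (HDR) value, Screen actually reduces it.
print(screen(2.0, 0.5))   # 1.5:  less light than the 2.0 we started with
print(add(2.0, 0.5))      # 2.5:  light simply accumulates
```
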


