Techniques for capturing motion have evolved continuously since the early days of film, yet related artifacts such as flickering, stuttered motion and blur have remained a concern. In this article, we take a deeper look at the science of temporal aliasing, and at how understanding it can improve cinematic quality.
BACKGROUND ON ALIASING
Many modern technologies record and reproduce signals from the real world. Microphones and audio equipment encode sound waves, digital photography quantifies light using arrays of pixels, and cinema cameras record spans of time using discrete frames. In all cases, a central goal is to maximize fidelity within the constraints of a recording medium.
However, whenever a real-world signal is sampled less than twice per cycle of its fastest variation, unrealistic "aliasing" artifacts are likely to appear. Aliasing is pervasive across all of these technologies, and can arise as unnatural motion in cinema or as audible distortions in sound, among other complications. In the diagram below, a false wave is measured when samples are taken too infrequently:
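This folding of a fast signal into a false slow one can be sketched numerically. The short Python snippet below (an illustration, not tied to any particular camera or product) shows that a 9 Hz wave sampled at only 10 Hz produces exactly the same samples as a 1 Hz wave:

```python
import math

def aliased_frequency(signal_hz: float, sample_hz: float) -> float:
    """Apparent frequency after sampling: the true frequency folds
    back around multiples of the sample rate."""
    return abs(signal_hz - round(signal_hz / sample_hz) * sample_hz)

# A 9 Hz wave sampled at only 10 Hz (below the 18 Hz Nyquist rate)
# masquerades as a 1 Hz wave:
f_true, f_sample = 9.0, 10.0
f_false = aliased_frequency(f_true, f_sample)
print(f_false)  # 1.0

# Verify: at every sample instant, the 9 Hz wave and the 1 Hz wave
# have identical values, so the samples cannot tell them apart.
for n in range(10):
    t = n / f_sample
    assert abs(math.cos(2 * math.pi * f_true * t)
               - math.cos(2 * math.pi * f_false * t)) < 1e-9
```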
With digital camera sensors, most of the development has been aimed at reducing aliasing in the form of pixelated edges and moiré patterns. This aliasing happens when the sensor gets tricked into recording false detail from otherwise unresolvable fine textures. It's especially detrimental with video, because it consumes bandwidth that would otherwise encode actual image detail.
The established solution uses something called an optical low-pass filter (OLPF), which effectively blurs detail finer than the resolution of a camera while preserving coarser detail. However, not every OLPF achieves this goal, and this approach does not address all types of aliasing…
A less familiar characteristic of aliasing, but one which is particularly relevant to cinema, is that aliasing can occur in time as well as in space. Perhaps the most common example is when a wheel or propeller appears to rotate more slowly or in the opposite direction:
This can happen more generally whenever the frame rate is less than twice the object's rate of rotation or repetition. For example, if one photographed a clock less frequently than every thirty minutes, the minute hand might appear to rotate counterclockwise. In the diagram below, the left and right clocks are photographed every 15 and 50 minutes, respectively:
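The clock example can be worked out directly. The Python sketch below (illustrative only) wraps the true rotation per frame into the range (-180°, 180°]; a negative result means the motion appears to run backwards, matching the 15-minute and 50-minute photographing intervals described above:

```python
def apparent_step_deg(rotation_per_frame_deg: float) -> float:
    """Smallest per-frame rotation consistent with the samples,
    wrapped into (-180, 180]; a negative value looks like
    backwards (counterclockwise) rotation."""
    step = rotation_per_frame_deg % 360.0
    return step - 360.0 if step > 180.0 else step

# The minute hand turns 360 degrees per hour, i.e. 6 degrees per minute.
# Photographed every 15 minutes: 90 degrees per frame, normal motion.
print(apparent_step_deg(15 * 6))   # 90.0
# Photographed every 50 minutes: 300 degrees per frame, which the eye
# reads as -60 degrees, i.e. the hand appears to move counterclockwise.
print(apparent_step_deg(50 * 6))   # -60.0
```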
Other times, imagery of electronic displays may flicker or appear partially illuminated, and strobes may flash at irregular frequencies. All of these can greatly complicate and reduce the quality of a shoot.
However, temporal aliasing can also manifest itself spatially. Still images with motion blur may develop unrealistic patterns and not look as smooth as they would to our eyes. Camera movements which depict fine texture such as trees or fabric are especially susceptible, and can make footage less pleasing by exacerbating motion stutter. Often these artifacts are not directly attributed to aliasing beforehand, but once they have been removed, the image improves noticeably.
Since standard techniques only address spatial aliasing, entirely new technologies are needed to address temporal aliasing. However, to understand these, we need to take a deeper look at the underlying cause.
With standard cameras, all light over the duration of an exposure contributes equally to the resulting image, regardless of whether this light arrives at the beginning, middle or end of that duration. This is true of cameras with a global shutter as well as those with a rolling or rotary shutter, since all of these simply alternate between obstructing and receiving light to expose each frame:
The key to addressing temporal aliasing is to treat light differently depending on when it arrives. To understand why, it helps to see how aliasing is minimized when downsizing an image. Just as how a video frame is created by averaging light over a span of time, each pixel in a downsized image is created by averaging groups of nearby pixels. To achieve both high resolution and minimal aliasing, pixels closer to the downsized pixel need to contribute more than pixels further away:
Technical Note: This is why bilinear and nearest-neighbor techniques create substantial aliasing, whereas higher-order filters such as the bicubic variants (Mitchell) and windowed-sinc filters (Lanczos) produce very little aliasing while preserving a sharp image. In REDCINE-X PRO®, these can be selected under the options tab when editing or creating an export preset. Also note that pixels further away not only contribute less, but are often also subtracted from the end result (negative filter lobes) to improve sharpness.
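The difference between simply picking samples and averaging them with distance-based weights can be seen with a tiny 1-D example. The Python sketch below (illustrative only; real resamplers use wider kernels in two dimensions) downsizes an alternating fine texture by 2x: nearest-neighbor collapses it into a false solid tone, while a triangle ("tent") filter yields the correct mid gray:

```python
def downsample_tent(samples, factor=2):
    """Downsize a 1-D signal with a triangle ('tent') filter:
    source samples nearer the output position are weighted more
    heavily ([1, 2, 1] / 4 for a 2x reduction), which suppresses
    aliasing. Assumes at least two samples; edges are mirrored."""
    out = []
    n = len(samples)
    for i in range(0, n, factor):
        left = samples[i - 1] if i > 0 else samples[i + 1]
        right = samples[i + 1] if i + 1 < n else samples[i - 1]
        out.append((left + 2 * samples[i] + right) / 4)
    return out

# The finest texture the source can hold: alternating black and white.
texture = [0, 255] * 8
# Nearest-neighbor keeps every other sample, so the texture collapses
# into a false solid black (aliasing):
nearest = texture[::2]
print(nearest)                  # [0, 0, 0, 0, 0, 0, 0, 0]
# The tent filter averages neighbors, yielding the true average tone:
print(downsample_tent(texture))  # [127.5, 127.5, ... , 127.5]
```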
Similarly, light which hits the camera sensor closer to when each frame is shown needs to contribute more to an exposure than light arriving in between frames. One can think of this as a "soft" global shutter as opposed to the standard "hard" shutter:
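The benefit of such temporal weighting can be sketched numerically. The Python model below is purely illustrative (it is not RED's implementation, and the raised-cosine profile is an assumed example of a "soft" shutter): it integrates a 100 Hz flickering light over each 24 fps exposure, once with uniform weighting and once with raised-cosine weighting, and shows that the soft shutter produces far less frame-to-frame brightness variation:

```python
import math

def frame_brightness(light, t0, t1, weight, steps=1000):
    """Numerically integrate a time-varying light level over one
    exposure, weighting each instant by a shutter profile on [0, 1]."""
    total = wsum = 0.0
    for k in range(steps):
        u = (k + 0.5) / steps              # position within the exposure
        w = weight(u)
        total += w * light(t0 + u * (t1 - t0))
        wsum += w
    return total / wsum

hard = lambda u: 1.0                                     # uniform shutter
soft = lambda u: 0.5 - 0.5 * math.cos(2 * math.pi * u)   # raised cosine

# A light flickering at 100 Hz (e.g. 50 Hz mains), filmed at 24 fps
# with a full-frame (360-degree) exposure:
light = lambda t: 0.5 + 0.5 * math.sin(2 * math.pi * 100 * t)
fps = 24
frames_hard = [frame_brightness(light, n / fps, (n + 1) / fps, hard)
               for n in range(24)]
frames_soft = [frame_brightness(light, n / fps, (n + 1) / fps, soft)
               for n in range(24)]

flicker_hard = max(frames_hard) - min(frames_hard)
flicker_soft = max(frames_soft) - min(frames_soft)
print(flicker_soft < flicker_hard)   # True: the soft shutter flickers less
```

The same mechanism is what tames the hard edges of motion blur: light arriving mid-exposure dominates, while light at the start and end fades out gradually.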
The RED MOTION MOUNT® can act as a soft global shutter by using a special filter that globally modulates when and how much light reaches the sensor. It's fully electronic, lies between the sensor and the rear of the lens, and effectively functions as a single pixel liquid crystal screen that controls incoming light. In many ways, one can think of this as a temporal low-pass filter (TLPF), but its benefits extend beyond that.
HOW IT APPEARS
Since a soft global shutter causes the middle of an exposure to contribute more than the start and end, motion blur appears to fade gradually near a subject's edges. This typically appears smoother and more natural:
With real-world subjects, this reduces unnatural edge artifacts in motion-blurred images. In the example below, note how the edges of the beak and feet stand out more when rendered with a hard global shutter:
The smoother blur characteristics also typically produce more pleasing and continuous panning. Judder will become much less apparent, for example, even though the individual frames that make up these pans will likely also appear sharper. In the example below, notice how aliased blurring gives the impression of double eyes on the tiger, and how the background branches are blurred more abruptly:
Although the potential applications are important and diverse, a soft global shutter is still not a substitute for proper camera and exposure technique. When possible, panning speeds should still be sufficiently slow, for example. A safe shutter speed and frame rate combination will also ensure optimal results under otherwise flickering artificial lighting. A soft global shutter is also less influential as the frame rate is increased, but since motion blur and temporal aliasing also decrease accordingly, the fidelity of motion capture still improves overall.
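As a rule of thumb for the flickering-lighting point above, an exposure that spans a whole number of light-intensity cycles averages the flicker away; mains-powered lighting pulses at twice the mains frequency. The small Python helper below (illustrative only) lists such shutter speeds:

```python
def flicker_free_shutters(mains_hz: float, count: int = 4):
    """Shutter speeds (in seconds) whose exposure spans a whole
    number of light-intensity cycles. Mains lighting pulses at
    twice the mains frequency, so one cycle lasts 1/(2*mains)."""
    cycle = 1.0 / (2.0 * mains_hz)
    return [round(n * cycle, 5) for n in range(1, count + 1)]

# 50 Hz regions: multiples of 1/100 s (1/100, 1/50, ...):
print(flicker_free_shutters(50))  # [0.01, 0.02, 0.03, 0.04]
# 60 Hz regions: multiples of 1/120 s (1/120, 1/60, ...):
print(flicker_free_shutters(60))  # [0.00833, 0.01667, 0.025, 0.03333]
```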
Ultimately though, a soft global shutter pushes the limits of what is possible with motion capture, and effectively overcomes many of the traditional trade-offs. Cinematographers ordinarily have to choose between using a lower shutter angle and achieving sharper stills, for example, or using a higher shutter angle and achieving smoother motion. A soft global shutter can achieve both simultaneously.
The end result is a more natural and robust representation of motion that makes the most of a given frame rate. Everything else being equal, panning and motion blur will therefore appear smoother. Flickering from electronic displays and artificial lighting will be reduced or eliminated. Cyclical motion will be represented more accurately. Strobes and flashes will be less likely to depict partial or irregular illumination. Most importantly, all this can be achieved using the same camera, lenses and media as before.
- Part 1 of this tutorial: Global Versus Rolling Shutters.
- See Shutter Angles & Creative Control for how to control motion blur with a traditional shutter.
- To learn more about non-standard frame rates, also see the Intro to Slow Motion and the tutorial on High Frame Rate Video Playback.
- To learn more about spatial as opposed to temporal aliasing, also see Resolution Versus Aliasing.