Turning a Banana Inside Out

In the summer of 1982, the science fiction movie Tron was released two weeks after another cult classic, Blade Runner. Produced by different studios, Disney and Warner Bros. respectively, the films similarly combined the aesthetic of the future with the exaggeration of light. While Blade Runner experimented with the limits of traditional film techniques like matte painting and multi-exposure superimposition, Tron played with illumination by incorporating computer-generated imagery (CGI). To do so, Disney hired the computer graphics company Mathematical Applications Group, Inc. to create many of the film’s action shots. Within this group worked a research scientist named Ken Perlin, the inventor of what would become the industry standard for noise. Noise introduces the effect of randomness to a digital environment, whether in its surfaces, its qualities of light, or in the arrangement of its objects and images.

A fundamental deficiency of CGI in the eighties was its unnatural representation of surfaces and light. Computer rendering had been developed only a decade earlier by a group of engineers at the University of Utah, and the algorithms for surface shading and light calculation were still maturing. The smooth surfaces associated with primitive computer rendering were a product of the digitally simulated environment’s mathematical idealization. Recognizing this, Perlin created an algorithm that made surfaces look imperfect.

While working on the special effects software for Tron, Perlin developed tools aimed at undoing the aesthetic qualities that hindered realistic depiction. Smooth surfaces and clean reflections looked unreal, so noise algorithms were applied to objects to make them bumpy and dirty. With the knowledge he gained in the entertainment industry, Perlin published his seminal paper on noise four years later.1 Perlin Noise, as it is now known, is a computer graphics filter that uses mathematically generated images, or procedural maps, to create surface texture. When overlaid onto pure geometry, the maps simulate the variation and visual complexity found in nature. This was fundamentally different from previous noise techniques because it presented a stochastic, or randomizing, function in three dimensions. Perlin called this effect solid texture, and unlike the many computer graphics projects funded by the government, it was instrumentalized for artistic effects.

Developing Perlin Noise proved to be a complex problem because the result not only had to be seamless but also had to achieve endless variation. This was done by propagating an array of pseudo-randomly directed vectors across a grid, each vector being the locus of a black and white gradient. Combined, the vectors created a web of modulating intensities that gave the effect of a continuous, irregular field. Whereas basic tile-based texture mapping needed only to match figures along its edges, Perlin’s gradient mapping took on the unit’s entirety. Each gradient was blended with its nearest neighbors through interpolation, the mixing of discrete visual information into a continuous field. In erasing the seams between each unit with interpolation, Perlin transitioned from a tile-based to a convoluted structure.
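
For readers who want to see the mechanism, here is a minimal sketch in Python of the gradient scheme just described, written in its now-common pedagogical form rather than as Perlin’s original 1985 implementation; the grid size, random seed, and ASCII rendering are illustrative choices, not values from his paper.

```python
import math
import random

random.seed(7)
GRID = 16  # the gradient lattice repeats every GRID cells, so the field tiles

# one pseudo-randomly directed unit vector per lattice corner
_gradients = {
    (i, j): (math.cos(a), math.sin(a))
    for i in range(GRID)
    for j in range(GRID)
    for a in [random.uniform(0.0, 2.0 * math.pi)]
}

def _fade(t):
    # Perlin's ease curve: its derivatives vanish at 0 and 1, so
    # neighboring cells blend with no visible seam at the boundary.
    return t * t * t * (t * (t * 6 - 15) + 10)

def noise(x, y):
    """Return a 2D gradient-noise value, roughly in [-1, 1]."""
    x0, y0 = math.floor(x), math.floor(y)

    def corner(ix, iy):
        # dot product of a corner's gradient with the offset to (x, y)
        gx, gy = _gradients[(ix % GRID, iy % GRID)]
        return gx * (x - ix) + gy * (y - iy)

    sx, sy = _fade(x - x0), _fade(y - y0)
    # interpolate the four corner contributions into a continuous field
    top = corner(x0, y0) + sx * (corner(x0 + 1, y0) - corner(x0, y0))
    bot = corner(x0, y0 + 1) + sx * (corner(x0 + 1, y0 + 1) - corner(x0, y0 + 1))
    return top + sy * (bot - top)

if __name__ == "__main__":
    # crude ASCII rendering: a continuous, irregular field with no seams
    shades = " .:-=+*#"
    for row in range(24):
        print("".join(
            shades[min(7, int((noise(col * 0.15, row * 0.15) + 1) * 4))]
            for col in range(64)
        ))
```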

This noise algorithm created an efficient model for generating naturalistic scenes by combining the effects of randomness with seamless continuity. Its concerns were similar to those of tile-based texture mapping, where “readymade”2 images were cropped to the minimum size that allowed for maximum variability. To simulate convincingly real materials without being too computationally costly, texture tiling involved several tricks, including rotating, mirroring, and nonperiodic repetition. Image samples were like puzzle pieces; matching boundary figures created the effect of continuity, leaving the middle “free.” Although texture tiling produced a model for continuity and efficiency through repetition, its overall effect was regularity.
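
One of those tricks can be sketched in a few lines. The toy function below, a hypothetical illustration rather than any historical renderer’s code, mirrors alternate copies of a small sample so that edge figures always match at the seams, exactly the puzzle-piece logic described above.

```python
def tile_mirrored(sample, nx, ny):
    """Tile a small texture sample across an nx-by-ny grid, flipping
    alternate copies so boundary pixels always match at the seams.

    `sample` is a 2D list of pixel values (rows of columns).
    """
    rows = []
    for ty in range(ny):
        # flip every other band vertically so top/bottom edges align
        band = sample if ty % 2 == 0 else sample[::-1]
        for src_row in band:
            out = []
            for tx in range(nx):
                # flip every other copy horizontally so left/right edges align
                out.extend(src_row if tx % 2 == 0 else src_row[::-1])
            rows.append(out)
    return rows

# a 2x2 sample tiled into a 4x4 field of mirrored copies
checker = [[0, 1], [1, 0]]
for row in tile_mirrored(checker, 4, 4):
    print(row)
```

The continuity is bought with symmetry: every seam matches by construction, but the mirrored copies advertise their own repetition, which is the regularity the paragraph above describes.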

While noise was productive in making early computer renderings appear more real, it has a complicated history in other visual arts. Its positive or negative associations depend on the relationship between the values of the image and the instrument. Where the value of the instrument is to record the real world and the value of the image is to open a window onto it, noise is undesirable. To put it differently, noise is perceived negatively when it is revealed by an instrument’s technical limits and when the image is tied to the epistemic virtue of objectivity. For example, a digital camera with a small sensor takes bad photos of a dimly lit room because it renders noisy shadows. Deviations from the shadow’s true color and variations in its resolution reveal the camera’s inability to form an accurate reproduction. Noise is faulty because it inserts the instrument into the scene as a dirty window. However, where the value of the instrument is to fabricate the real world and the value of the image is to imitate another medium, noise can have a positive effect. Perlin’s noise algorithm, for example, can simulate “very convincing representations of clouds, fire, water, stars, marble, wood, rock, soap films, and crystal” and can generate natural looking “falling leaves, swaying trees, flocks of birds, and muscular ripping.”3 The realism Perlin Noise added to computer modeling was so appreciated that Perlin won an Academy Award for his advancements to the film industry, and the algorithm remains a computer graphics standard today. In this case, noise is perceived positively because it aids in the construction of real-world environments and because the rendering imitates parallel mediums like motion pictures and photographs.

Perlin Noise

Though mapping a digital environment in a sort of Borgesian 1:1 correspondence4 to the real world would perhaps yield its truest depiction, the limits of computing power require information-processing techniques to balance efficiency and realism. In the construction of computer graphics algorithms, this need for speed has often aligned with the limits of vision. Digital processes cut down the portrayed physical environment, not only through the material lost in the translation from three dimensions to two, but also through the accuracy sacrificed where vision is weakest. For example, an object’s level of detail can decrease at the boundaries of one’s field of view, or fluctuations in tonal contrast can become coarser radiating away from a focal point. In many cases, marrying geometric fidelity with the simulation of perceptual effects allows for the partial construction of an environment that “looks real” in pictures.
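
The perceptual economy described here can be reduced to a toy rule. The sketch below is a hypothetical illustration, not any particular renderer’s API: the function name, the focus point, and the falloff constant are all assumptions chosen to show detail coarsening away from where the eye is presumed to look.

```python
def lod_level(px, py, focus, finest=0, coarsest=4, falloff=0.005):
    # Hypothetical foveated level-of-detail rule: detail is finest at the
    # focal point and coarsens with distance, trading geometric fidelity
    # for speed where the viewer is least likely to notice.
    # `falloff` (levels per pixel of distance) is an assumed tuning knob.
    dx, dy = px - focus[0], py - focus[1]
    distance = (dx * dx + dy * dy) ** 0.5
    return min(coarsest, finest + int(distance * falloff))

# pixels at the focus get level 0 (full detail); the far corner is capped at 4
print(lod_level(960, 540, focus=(960, 540)))  # 0
print(lod_level(0, 0, focus=(960, 540)))      # 4
```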

Consider the images here: a stock photo of a bunch of bananas and the same image with added noise. The sample is still recognizable, and the overall picture creates the effect of an animated transformation. Familiar geometric moves like mirroring, rotating, and scaling are apparent in the yellow figures, and their consistent proportion against a grey background suggests a linear transformation reminiscent of a spherical eversion. Yet, to follow the course of a banana turning inside out leads to many dead ends. One banana that absorbs another in a series of circumscriptions suddenly multiplies; or, a bunch of bananas slowly fattening abruptly morphs into what looks like a grapefruit. Any attempt at finding an unbroken line of reasoning results in syncopated stops, twists, and turns.

Stock Image

TABIO

The perception of the image differs from geometrical eversions or the formal animations familiar to parametric modeling because it seems illogical. Upon closer inspection, the way the bananas play out across the page is not quite right. The image resists the display of totalizing complexity brought on by exhaustive repetition, intricacy, or incremental change. Rather, it borrows from complexity as defined by anthropologist Alfred Gell, wherein a convolution of geometrical relationships frustrates legibility. He argues that complex patterns within the decorative arts, such as labyrinthine screens and indigenous pottery painted with abstract shapes, convey topologies that confound visual perception. Whether the designs are instrumentalized towards defense or personal attachment, they make “tacky” relationships between people and things. To Gell, convolution renders the viewer stuck in the “pleasurable frustration” of figuring it out.5

In a moment where instruments are increasingly adept at opening windows onto the environment and images are easily taken in, noise makes seeing clearly difficult. The tackiness its complexity lends to pictures opens them up for scrutiny, both in what they show and how they are made. When the real world effectively doubles in the digital realm, how can architects find methods for checking out? Perhaps in the pleasures of separating layers upon layers of information, we can peel slowly and see.

Notes

1 Ken Perlin, "An Image Synthesizer," Computer Graphics 19, no. 3 (July 1985): 287–296.
2 Lev Manovich, The Language of New Media (Cambridge: MIT Press, 2001).
3 Perlin, "An Image Synthesizer," 287.
4 Jorge Luis Borges, "On Exactitude in Science," Los Anales de Buenos Aires 1, no. 3 (1946).
5 Alfred Gell, Art and Agency: An Anthropological Theory (Oxford: Oxford University Press, 1998).

Michelle Chang