Electronic environment

From Cinemachines

With the invention and dissemination of television and video technology, the electronic environment entered the cinematic landscape. Its central feature is that the TV enables live transmission of moving images, as opposed to the slow chemical development of emulsion film. But the TV also presents a new materiality in its external features: the moving image is now united with the apparatus itself, as we look directly into the light source instead of at projections of light and shadow on a canvas.

Suddenly, it is as if our retina is the canvas onto which the television shoots its images, and this creates a hypnotic connection to the large picture tube that flickers, hums with its capacitors and tickles your fingers with static electricity when you touch it. Not surprisingly, this monster became the symbol of strangeness and creepiness in horror films such as Poltergeist (1982), Videodrome (1983), and The Ring (1998).<ref>E.g. see Akira Mizuta Lippit's analysis of The Ring (Lippit 2012) for more on this idea.</ref>

The most classic film machine of the electronic environment is the cathode ray tube (CRT), which produces images with an electron gun at the back of the tube that emits a beam of electrons toward the screen. The inside of the screen is coated with phosphor, which lights up in short flashes when the electron beam strikes it. The intensity of the light depends on the voltage in the gun, and the individual points thus act as a kind of "pixel" on the TV, arranged in a scan pattern called a raster. To draw a picture, the electron beam is deflected by magnetic coils around the tube so that it gradually "scans" from left to right and then starts again on the left side one line below. A TV picture consists of 625 lines, and when the beam has drawn the bottom line, it starts again from the top left corner. The time taken to draw a complete picture is called the frame period, which on European TV sets is 1/25 of a second; the time taken to draw one line is called the line period, which is 64 millionths of a second (Jensen 2009). Because each picture is transmitted as two halves, as described below, the screen is nonetheless refreshed 50 times per second. In reality, the TV picture never stands still but is always being reshaped into the next in a continuous raster scan. To the human eye this is invisible, but if you point an electronic camera at a CRT TV, you can observe a flickering line where the image is being drawn. In some of the earliest video art, Nam June Paik put electromagnets on a CRT TV, manipulating the electron beam and bending the screen lines to reveal the signal's design principle.<ref>Beck 1976: 184</ref>
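The timing figures above can be checked with simple arithmetic. The sketch below (a Python illustration, using the standard 625-line and 64-microsecond values) recovers the frame and field rates:

```python
# Raster timing for the 625-line European (PAL) system, as described above.
LINES_PER_FRAME = 625
LINE_PERIOD = 64e-6  # seconds taken to draw one scanline

frame_period = LINES_PER_FRAME * LINE_PERIOD  # time to draw all 625 lines
field_period = frame_period / 2               # one interlaced half-image

print(round(1 / frame_period))  # 25 full pictures per second
print(round(1 / field_period))  # 50 half-images (fields) per second
```

The arithmetic makes the relationship explicit: 625 lines at 64 microseconds each take 1/25 of a second, while each interlaced half-image takes 1/50 of a second.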

The video signal itself is divided into periods of scanlines. Unlike the pixel-by-pixel sampling of the digital and the frame-by-frame sampling of emulsion film, the video signal is transmitted line by line, with each packet corresponding to one horizontal line of the image. Within this division, each line is encoded as a continuously varying electrical signal: the voltage curve carries the varying brightness along the line, together with a synchronization "pulse" emitted each time a new line starts.<ref>National Instruments 2006</ref>
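As a rough illustration of this encoding, the toy model below builds one scanline as a list of voltage samples: a sync pulse marking the line start, followed by the brightness curve. The specific levels (sync at -0.3 V, white at +0.7 V) follow the common 1-volt composite convention; the sample counts are arbitrary:

```python
# Toy model of one analogue scanline as a voltage curve (illustrative only).
SYNC_LEVEL = -0.3   # volts: the pulse that signals "new line"
BLACK, WHITE = 0.0, 0.7

def encode_line(brightness, sync_samples=4):
    """Encode a row of brightness values (0.0..1.0) as voltages,
    prefixed by the horizontal sync pulse that marks the line start."""
    sync = [SYNC_LEVEL] * sync_samples
    picture = [b * WHITE for b in brightness]
    return sync + picture

line = encode_line([0.0, 0.5, 1.0])  # sync pulse, then black -> grey -> white
```

The key point the model captures is that within a line the signal is a single continuous curve; only the sync pulses divide the stream into discrete units.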

To optimize the transmission of data through antennas, cables and storage devices, a method called interlaced scanning was invented to compress the signal. The principle is to send only every second scanline of the picture, so that the TV alternately loads the even and the odd scanlines. Each half-image is called a field, and since a field contains only half the data of a full frame, an interlaced signal can deliver pictures twice as fast as a progressive scan within the same bandwidth.
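The field-splitting principle is easy to express in code. In the minimal sketch below (a list-of-scanlines model), the weave step is where the combing artefact discussed below originates whenever the two fields were captured at different moments:

```python
# Interlacing sketch: split a frame into two fields, then weave them back.
def split_fields(frame):
    """frame: a list of scanlines. Returns (even field, odd field)."""
    return frame[0::2], frame[1::2]

def weave(even, odd):
    """Interleave two fields back into a full frame. If the fields were
    captured at different moments, moving edges 'comb' apart here."""
    frame = []
    for e, o in zip(even, odd):
        frame += [e, o]
    return frame

even, odd = split_fields(list(range(6)))
# even == [0, 2, 4], odd == [1, 3, 5]; weave(even, odd) restores the frame
```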

On CRT displays, the image is typically soft enough that you do not notice it is interlaced, but on computer screens intended for progressive scan you can observe a "combing effect", where moving edges appear torn into alternating lines, as if combed apart.<ref>Luke's Video Guide 2002</ref> A wonderful détournement of this effect can be seen in the music video Send me a copy (2011), in which Albert Alcoz uses interlaced images from an old science-fiction film as a stylistic tool.

As an extension of Mast's distinction between continuous and successive movement, the successive principle can be further divided into three types, cf. Manovich's concept of sampling. In emulsion film, the level of sampling is the discrete frames that divide continuous motion into moments; within each sample (frame), the image remains continuous and cannot be further divided into natural smaller parts. In the same way, the video signal's samples are the discrete scanlines, while the uninterrupted electrical current within each line is continuous. And finally, the digital adds another sampling grid that divides each line into discrete pixels. (Manovich 2001: 28)
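These three sampling regimes can be pictured as nested data structures. The sketch below is purely illustrative; the 720 x 576 grid is the common digital raster for European video, used here only as an example:

```python
# Three regimes of successive sampling (after the distinction above):
film_frame = "one continuous photographic image"   # sampled in time only
video_frame = ["continuous voltage curve"] * 625   # ...plus vertical lines
digital_frame = [[0] * 720 for _ in range(576)]    # ...plus horizontal pixels

# The video frame is discrete across lines but continuous within each line;
# only the digital frame is addressable point by point.
```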

Each pixel is then quantized into a numeric value representing a color, stored as bits that are either 0 or 1. To handle this binary code, the digital computer uses logical operations that are pooled into programs. In contrast, the electronic environment uses analog electrical operations that manipulate the amplitude or frequency of the alternating voltage of the video signal: delaying the signal, adding resistance to the circuit, clipping particular voltage levels, and so on. (Robinson 2008: 23)
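Quantization, the step that turns a continuous voltage into such a binary number, can be sketched in a few lines (an 8-bit depth is assumed here purely for illustration):

```python
# Quantizing a continuous brightness value into a discrete binary code.
def quantize(v, bits=8):
    """Map a normalised brightness v in [0.0, 1.0] to an integer level."""
    levels = (1 << bits) - 1        # 255 distinct steps for 8 bits
    v = max(0.0, min(1.0, v))       # clip voltages outside the valid range
    return round(v * levels)

code = quantize(1.0)                # full white becomes the level 255
as_bits = format(code, "08b")       # '11111111': the stored 0s and 1s
```

Unlike the analog operations described above, which reshape the voltage curve itself, this step discards the continuum entirely: between two adjacent codes, no intermediate value exists.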

The electrical signal can come from an antenna, cable TV, magnetic tape (VHS, Hi8, Betamax) or a camcorder; in principle, however, the output of any device capable of generating low voltages can be translated into TV images, as in the case of Beck's synthesizer. As Frank Popper has noted, this means that video is no longer just a "recording device" that mimics the traditional film camera, but also an "electronic system", i.e. "a complex artistic means that could be manipulated as pictorial or sculptural materials" (Popper 1993: 54). Considered as an "electronic system", the TV in fact acts as a measuring instrument that converts the voltages of a circuit into images, much like an oscilloscope visualizing electric waves.

Many of the image-manipulation options offered by video systems are similar to those we know today from the computer. Throughout the 1980s, however, video had a major advantage: video-synthesized manipulation of signals was immediate, and could therefore be used to create graphics and transitions for live TV. Digital effects, on the other hand, required rendering time in which they were calculated before they could be viewed. Since then the technology has of course taken off, and today most computers can render high-resolution animations and graphics almost instantaneously.

Another key difference between the two environments is that small deviations always occur in analog video circuits. On an advanced video synthesizer, where a setup consisted of hundreds of parameters and patch cables, it was therefore practically impossible to recreate a setup exactly. By contrast, works in the digital environment can easily be stored, re-loaded and fine-tuned, because the digital signal guarantees accurate reproduction. Yet this exact difference is also a point of criticism for video purists, who regard the imprecise inputs and unforeseen results as a central part of the video aesthetic. They describe digital animations as clinical, lacking the warmth and vitality that came with the whims of the analog video system.