Electronic environment
With the invention and dissemination of television and video technology, an electronic environment emerged within the film landscape. Its central feature is that TV can transmit live, instead of waiting for the slow development of emulsion film. But the TV also presents a new materiality in its external features. On the TV, the film image remains with its light source, as we look directly into the light; with the projector, by contrast, we look at light reflected off a canvas. Suddenly it is as if our retina is the canvas onto which the television shoots its images, and this creates a hypnotic connection to the large picture-tube box that can flicker and hiss with the hum of its capacitors and tickle your fingers with static electricity when you hold a hand over it. Not surprisingly, this monstrous apparatus also became a symbol of strangeness and creepiness in horror films such as Poltergeist (1982), Videodrome (1983), and The Ring (1998).1

The most classic film machine of the electronic environment is the cathode ray tube (CRT), which produces images by means of a gun at the back of the tube emitting a beam of electrons toward the screen. The inside of the screen is coated with phosphor, which lights up in short flashes where the electron beam strikes. The intensity of the light depends on the voltage in the gun, and the individual points thus act as a kind of "pixel" in the TV. To draw a picture, the electron beam is deflected by magnetic coils around the tube so that it gradually "scans" from left to right and then starts again at the left side, one line below. A TV picture consists of 625 scanlines, and when the beam has drawn the bottom line, it starts over from the top left corner. The time it takes to draw a full picture is called the frame period, which on European TV is 1/25 second; the time it takes to draw one line is called the line period, which is 64 millionths of a second (Jensen 2009). In reality, the TV picture never stands still but is continually being redrawn into the next in a continuous raster scan.
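The raster timing can be checked with a small sketch using the figures quoted above (625 lines, a 64-microsecond line period); the variable names are illustrative:

```python
# Sketch of European (PAL) raster timing from the figures above.
LINES_PER_FRAME = 625
LINE_PERIOD_S = 64e-6  # one scanline: 64 millionths of a second

# Time to draw a full picture = lines multiplied by the line period.
frame_period = LINES_PER_FRAME * LINE_PERIOD_S
frames_per_second = 1 / frame_period

print(f"frame period: {frame_period * 1000:.0f} ms")  # 40 ms
print(f"frame rate:   {frames_per_second:.0f} fps")   # 25 full frames per second
```

Multiplying the two quoted figures out is what gives the 1/25-second frame period: 625 lines of 64 microseconds each come to 40 milliseconds per full picture.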
To the human eye this cannot be seen, but if you point a digital camera at a tube TV, you will observe a flickering line where the image is being loaded. In some of the earliest video art, Nam June Paik placed electromagnets on a picture-tube TV, manipulating the electron beam and bending the screen lines to reveal the signal's design principle (Beck 1976: 184). The video signal itself is divided into periods of scanlines. Unlike the pixel-by-pixel loading of the digital image and the image-by-image loading of emulsion film, the video signal is loaded line by line, with each packet corresponding to one horizontal line in the image. Within this division, each line is encoded as a varying electrical signal: the voltage curve contains the varying brightnesses along the line, together with a synchronization signal that emits a "pulse" each time a new line starts (National Instruments 2006). In order to transmit as much data as possible over the antennas, a method was also invented for compressing the signal: the so-called interlaced scan. The principle is that only every second scanline of the picture is sent, so that the TV alternately loads the even and the odd scanlines. Each half-image is called a field, and since a field contains only half as much data as a full frame, an interlaced signal can refresh the screen twice as often as a progressive scan using the same bandwidth. On TV screens the image is usually so fluid that you do not notice the interlacing, but on computer screens intended for progressive scan you can observe a "combing effect", where the picture looks as if a comb had torn it apart (Luke's Video Guide 2002). A wonderful détournement of this effect can be seen in the music video Send me a copy (2011), in which Albert Alcoz uses interlaced images from an old science-fiction film as a stylistic device.
In relation to Mast's distinction between continuous and successive movement, the successive principle can now be further divided into three types, cf. Manovich's concept of sampling. Just as emulsion film divides movement into frames, the video and computer environments do the same, but at different levels, depending on the discrete units that make up the signal's smallest parts. In emulsion film, these are the discrete frames that divide continuous motion into moments; within each sample (the discrete frame), the signal is continuous, since the photochemical image cannot be divided into natural smaller parts. The video signal's sample is the discrete scanline, within which the uninterrupted electrical current is continuous. And finally, the digital adds yet another sampling grid that divides the individual scanline into discrete pixels (Manovich 2001: 28). Each pixel is then quantized by a numeric value representing a color, consisting of bits that are either 0 or 1. To handle this binary code, the digital computer uses logical operations that are pooled into programs. By contrast, the electronic environment uses analog electrical operations that manipulate the amplitude or frequency of the alternating voltage that constitutes the video signal. This happens, for example, by delaying the signal, adding resistance to the circuit, filtering out particular voltage ranges, etc. (Robinson 2008: 23). The electrical signal can come from an antenna, cable TV, magnetic tape (VHS, Hi8, Betamax), or a camcorder, but in principle any device capable of generating fluctuating electrical voltage can be translated into TV images, as Beck's synthesizer does. As Frank Popper has noted, this means that video is no longer just a "recording device" that mimics the traditional film camera, but also an "electronic system", i.e. "a complex artistic means that could be manipulated as pictorial or sculptural materials" (Popper 1993: 54).
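The final, digital level of sampling can be illustrated with a small sketch: a continuous brightness curve along one scanline is first sampled into discrete pixels and then quantized into a numeric value of a fixed number of bits. The curve and all parameters here are invented for the illustration:

```python
import math

def sample_line(brightness, num_pixels):
    """Sample a continuous brightness function at discrete pixel positions."""
    return [brightness(x / (num_pixels - 1)) for x in range(num_pixels)]

def quantize(value, bits):
    """Map a brightness in [0, 1] to one of 2**bits discrete levels."""
    levels = 2 ** bits - 1
    return round(value * levels)

# A continuous "voltage curve" along the line, here a smooth ramp
# from dark to bright, sampled into 8 pixels:
line = sample_line(lambda x: (1 - math.cos(math.pi * x)) / 2, num_pixels=8)

# Each sample becomes an 8-bit number between 0 and 255:
pixels = [quantize(v, bits=8) for v in line]
print(pixels)
```

The first step (sampling) is what video already does along the time axis and the line axis; the second step (quantization into bits) is what distinguishes the digital image, since the resulting numbers can be handled by the computer's logical operations.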
Considered as an "electronic system", the TV in fact acts as a measuring instrument that converts the voltages in a circuit into images, and in this way it can be compared to an oscilloscope that visualizes electrical waves. Many of the image-manipulation options offered by video systems resemble those we know today from the computer. Throughout the 1980s, however, video had the advantage that synthesized manipulation of the signal was effectively instantaneous and could therefore be used to make graphics and transitions for live TV. Digital effects, by contrast, required a rendering time in which they were calculated before they could be viewed, but the technology has, as is well known, taken off, and today most computers are capable of rendering high-resolution animations and graphics in real time as well. Another key difference between the two environments is that small deviations always occur in analog video circuits. On an advanced video synthesizer, where a setup consisted of hundreds of parameters and cables, it was therefore practically impossible to recreate a setup exactly. By contrast, works in the digital environment can easily be stored and fine-tuned, because the digital signal can be reloaded with perfect accuracy. The same relationship, however, is also a point of criticism for video purists, who swear by imprecise input with unforeseen results as a central part of the video aesthetic. They dismiss digital animations as clinical, lacking the warmth and vitality that came with the whims of the analog video system.
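The difference in reproducibility can be made concrete with a toy sketch (not a circuit model): every analog component deviates slightly from its nominal value, while a stored digital parameter set is recalled bit for bit. The tolerance and the number of parameters are invented for the illustration:

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

nominal_patch = [0.5] * 100  # 100 knob settings in a hypothetical setup

def analog_recall(patch, tolerance=0.02):
    """Each recreated setting lands only within +/- tolerance of the original."""
    return [p + random.uniform(-tolerance, tolerance) for p in patch]

def digital_recall(patch):
    """A stored digital patch is reproduced exactly, bit for bit."""
    return list(patch)

print(analog_recall(nominal_patch) == nominal_patch)   # False: the setup drifts
print(digital_recall(nominal_patch) == nominal_patch)  # True: an exact copy
```

With a hundred continuously variable parameters, the chance of every one landing exactly on its old value is nil, which is precisely why a complex video-synthesizer setup could never be recreated, while a digital file can.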