The Role of Computers in Visual Art

The beginnings of computer art can be traced back to the 1960s, when three computer scientists began, almost at the same time and independently from one another, to use their computers to create geometrical designs; among them was Frieder Nake, then working at the University of Stuttgart, Germany. Some of Nake's works were shown in the gallery "Wendelin Niedlich" in Stuttgart in November 1965, which can be considered the first contact between the output of a computer system and the Artworld; the reaction of most art critics was rather dismissive. This work analyzes Nake's reply to such criticism in the form of three considerations: (a) the novelty of generative procedures by means of pseudorandom numbers; (b) the evolution of authorship thanks to code parametrization; (c) a recognition of the key role of the audience in the creation of artistic experiences. By means of examples from modern and contemporary art we will show that (a) and (b) only refer to procedures that are indeed made more efficient by the use of computers, but do not need these devices to exist, whereas (c) seems to shed light on a field that is essentially based on today's computing technology, namely, interactive art.


Introduction
If art is one of the oldest human activities (think of the Paleolithic paintings in the cave of Altamira in Spain, for example), computer science is much more recent: the first digital electronic computer was built by John Vincent Atanasoff in the 1930s at the Iowa State College. In spite of its brief history, computer science has spread in the last few decades into so many aspects of our lives that an encounter with art was inevitable. We can interpret such an encounter in different ways: as a clash between radically different aspects of human culture, as a temporary and soon to be forgotten overlap dictated by fashion, or as an intersection of endeavours that makes us rethink some consolidated ideas and form new ones. This work aims at shedding some light in this direction: if not to provide answers, at least to propose some conceptual instruments to tackle one of the freshest and most interesting debates in global culture.
The already daunting task is made even more complicated by the fact that both art and computer science pose serious definitional problems by themselves. If "What is art?" and "What is computer science?" are anything but trivial questions, to which we cannot provide exhaustive answers, how could we effectively study the relation between them? Although on different timescales, both questions have spawned long-standing debates that are far from reaching a conclusion. A synthetic yet very effective overview of the debate on art is provided by Tiziana Andina [1], who helps us determine the main factors at play in the creation, presentation and reception of objects and events when they are considered artworks: these range from Plato's theory on the imitation of nature to the institutional theory proposed by Arthur Danto [3] and perfected by George Dickie [6], according to which an object becomes an artwork the moment its value is recognized by a member of a non-official institution comprised of artists, experts and gallerists: the so-called Artworld.
The debate on the status of computer science is newer but no less heated: once they became aware that the name "computer science" seems to take the focus away from computation and put it on the machines performing computation, a number of researchers put forward renaming proposals, with the aim of stressing the fundamental nature of this field alongside hard sciences like physics and chemistry. Peter Denning, for instance, prefers the term "computing" [5], and the latest and most complete book on the disciplinary status of the field, by Matti Tedre, is titled "The Science of Computing" [18]. Luckily, at least when it comes to our objective, the conceptual debate on computation and computers is not particularly problematic: whether the heart of the discipline resides in the more abstract concept of computation or in the more concrete artifacts implementing it is not critical when we analyze the relation between computer science and art, because that link was indeed born when some computer scientists, in the mid-20th century, started making experiments with their digital computers. It was a new way to use a computer, and not a new type of computation, that opened the door to the possibly new kind of art we want to deal with in this article.
This work takes off from the first encounter of the output of a computer with the Artworld in Section 2, which recounts the early experiments by Frieder Nake and the consequent reaction of the critics; Section 3 discusses two of the three replies with which Nake countered criticism against the use of computers in art, namely, randomness and parametrization, although we will try to show that these are not properties for which a computer-based system is strictly necessary; Section 4 focuses on Nake's third reply, the one on the role of the audience, which, in our opinion, points in the direction of a new kind of artwork, for which the use of a computer is indeed essential; finally, Section 5 concludes.

Computer art: New works, old controversies
To trace the early stages of Computer Art, that is, of the first works made with a computer with an artistic purpose, is rather simple thanks to Nake himself, who has always accompanied his activity as a computer scientist/artist (or "algorist", as the pioneers of this field sometimes called themselves) with a thorough work of chronicling and philosophical analysis, brilliantly summarized in a recent paper [15], in which Nake takes the groundbreaking avant-garde artist Marcel Duchamp as a point of reference in the analysis of the theory behind the use of computer science in the artistic field. Let us first check what these early works of computer art were like and the nature of the criticism against them.

The dawn of computer art
We can trace the beginning of computer art back to the 1960s, when three computer scientists began, almost at the same time and independently of one another, to use their computers to create geometrical designs: Georg Nees at Siemens in Erlangen, Germany, Michael Noll at Bell Labs in New Jersey, and Nake himself at the University of Stuttgart, Germany. Actually, there had already been experiments in the 1950s with computers used for artistic purposes, but we consider these three scientists the initiators of the discipline for at least two reasons: they were the first to use digital computers (whereas those used in the previous decade were analog systems combined with oscilloscopes), and their works were the first to be shown not in the laboratories where they were created, but in real art galleries. The works of Nake, for instance, were shown together with some works by Nees in the gallery "Wendelin Niedlich" in Stuttgart in November 1965.
The works the three algorists proposed are all extraordinarily similar, to the point that it is almost impossible to believe they were developed independently. They all consist of graphical compositions of broken lines with random orientations forming closed or open polygons. Nake himself provides a convincing explanation, quoting Nietzsche, who wrote in 1882 to his secretary Köselitz about a typewriter with only upper-case letters that "our writing instrument attends to our thought": even a very free kind of thought like an artist's creativity follows guidelines determined by the instrument chosen for the creative process. In the case of the three algorists, that instrument was a digital computer of the 1960s with its very limited graphical capabilities, which included little more than a function to trace segments between two points. Nake states that anybody with some artistic ambitions and such an instrument at their disposal would have arrived at results like his own "Random Polygons n. 20" [14]. Before analyzing Nake's work and its aspects, let us take a look at the criticism it raised, including a controversy that has accompanied computer science since its beginnings, even before the birth of computer art.

Criticism and contradictions
A criticism against computer science and, in particular, artificial intelligence was raised ante litteram in 1843 by the English mathematician and aristocrat Ada Lovelace, when she translated the essay by the Italian mathematician Luigi Federico Menabrea on Babbage's Analytical Engine and added some personal notes [13]. In those notes, Lovelace showed an exceptional insight into the possible future applications of machines similar to Babbage's and added some methods she conceived to solve a number of mathematical problems. Lovelace also wrote that one should not expect any originality from the Analytical Engine: it can execute whatever we are able to order it to execute, but it is not able to anticipate any analytical relation or truth. This observation, which has since become known as the "Lovelace objection" to any attempt to create something original by means of a computer, was reprised a century later by Turing in his article "Computing Machinery and Intelligence" [19], in which he proposes the famous test to evaluate a machine's intelligence: anticipating criticism based on the above-mentioned objection against his vision of future machines able to converse like human beings, Turing affirms that the English mathematician would have changed her mind had she been exposed to the possibilities of computer science in the 20th century. Actually, it might have been Turing himself who would have changed his mind, had he still been alive 20 years later to see computer art pioneers deal with the same objection. To be more precise, the criticism they were facing was more specific than the original one by Lovelace, as it referred to the context of art. A typical complaint was the following: since machines simply follow orders, one cannot expect any creativity from them, hence the works of algorists, if they are the result of a creative process, must come entirely from the algorists' minds; algorists are mathematicians or engineers (there were no official computer scientists at the
time), but not artists, so their works are spawned from a process that is not artistic and cannot be considered artworks. The discourse is complex because there is an overlap between at least two different issues: the one in computer science on the capability of computers to create artworks, which can be seen as a specialization of the Lovelace/Turing debate, and the one in art on the essential nature of artworks. Let us begin with the latter, because it is marred with a contradiction that shows us a lesser known shortcoming of the institutional theory of art. The controversy surrounding the algorists' works sheds light upon the following problem: many artists dismissed the works of Nake and his colleagues as simple mathematical games printed on paper, but in fact there were German and American gallerists who decided to show these works in their spaces. In other words: how does one consider works that trigger opposite reactions within the Artworld? The institutional theory does not provide any answer, while Nake treasures Duchamp's words: "All in all, the creative act is not performed by the artist alone; the spectator brings the work in contact with the external world by deciphering and interpreting its inner qualifications and thus adds his contribution to the creative act." [8] Moving beyond the limitations of the existing theories, Duchamp for the first time ascribes to the spectator a primary role in the creation of an artwork. While many artists have decidedly rejected such an idea, Nake embraced it fully in responding to the critics who stated that "Random Polygons" and similar works were "only created by mathematicians and engineers": the works by the three algorists are indeed simple because only mathematicians and engineers had access to computers and were able to use them to design. Of course, if people with an artistic background had tackled programming to create their works, much more interesting computer art could have been produced. Nevertheless, continues
Nake, if the value of a work is established also by the audience, then it does not matter whether the first works of computer art were created by mathematicians or more traditional artists, because the spectators would have surely appreciated the revolutionary content of these lines plotted on paper.What follows aims at investigating whether such content indeed brings a significant disruption, and whether such disruption strictly depends on the use of computers.

Randomness and authorship
Let us not forget the indirect Lovelace/Turing dispute on the possibility of obtaining anything original from a computer, which, after the advent of the algorists, specialized into the question of whether computers can be endowed with any kind of artistic creativity. This problem is tightly connected with a strong contrast between the fundamental principles regulating the workings of a computer and those that deal with human creativity: the rigor of mathematical rules on one side and the absolute freedom of art, especially in the light of the bewildering works of Duchamp (among many others, e.g., Man Ray and Andy Warhol), on the other. In this context, one can argue that, as computers are machines for automated computations comprised of electronic circuitry, it is impossible for them to be creative in the way human beings are, who are sentient biological creatures with a growing experience of the world around them. This may sound like a conclusion, but instead it is our starting point.

The compromise on randomness
Since computers are automatic machines, they work in a deterministic way: unlike a human being, a computer is not able to choose how to move on in solving a problem. A person can make decisions on the basis of past experiences in situations similar to the current one; a computer, obviously, cannot. Since each operation by a computer is determined before its execution, the action flow is entirely established from the beginning, and the only variations one can have depend exclusively on the input data, which in any case need to fall within a planned range. For an example from everyday life, one can think of the computer managing an ATM, which can receive only a restricted set of input data (shown on the ATM's screen) and deterministically responds in accordance with the choices of the customers, unless there is a failure. From this perspective, there is no significant difference between the ATM's computer and the most advanced supercomputer at NASA. Obviously, in the deterministic working of a computer there is no room for any random phenomenon: determinism and randomness are mutually exclusive. We are thus facing two limitations: in its operations a computer can be neither creative nor random. The debate between Lovelace and Turing seems to be over, with a victory for the former. Still, one of the works by Nake is titled "Random Polygons". Is this title deceitful? Not exactly. We need to analyze in more detail how Nake conceived and created his work.
From the perspective of creativity, interpreted as the complex mental processes that led some computer scientists in the 1960s to use their computers to create geometrical designs, we can only acknowledge that the graphical capabilities of the machines back then might have affected the choices of the algorists. Computers indeed had an active role, but only after such choices had been made: those deterministic machines did nothing more than execute the commands given by the human programmers, who had indeed chosen to have polygons drawn. From the perspective of the execution of the idea, instead, computer science provides a very interesting instrument that might look like a trick at first glance, but that poses interesting epistemic questions: pseudorandom numbers. Nake did not program his computer with instructions that explicitly specified the coordinates of all the vertices in his work: those coordinates were computed on the basis of rather complex mathematical functions parametrized with respect to numerical values inside the computer, such as the hour and minutes read from the clock, so that, although resulting from a deterministic computation, they appear random to the human user. This is the trick: a computer is not able to generate random numerical values, but, aptly programmed, it can create figures that look random. Nake had a rather precise idea of the drawing he was going to make, but could not exactly foresee the positions at which the vertices of the polygons would be placed, because he was neither able nor willing to do the math the computer was going to do to establish those positions. Thus, once the work was completed, the artist was looking at a result that was, at least in part, unexpected. Turing, in his reply to the Lovelace objection, wrote that he was positive that computers were not entirely predictable and had the capability to surprise us, in particular with results that human beings would not be able to obtain
immediately; the works based on pseudorandom numbers seem to support his position. The field that exploits pseudorandom numbers to create artworks is called "generative art", named after the generative character of this kind of process.
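To make the mechanism concrete, here is a minimal Python sketch of how pseudorandom polygon vertices can be derived from a deterministic computation seeded by the clock. The generator constants and the function `random_polygon` are our illustrative assumptions, not a reconstruction of Nake's actual program:

```python
import time

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A linear congruential generator: fully deterministic,
    yet its normalized outputs look random to a human observer."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # a value in [0, 1)

def random_polygon(n_vertices, width, height, seed=None):
    """Place polygon vertices at pseudorandom canvas coordinates."""
    if seed is None:
        seed = int(time.time())  # seed taken from the clock
    rng = lcg(seed)
    return [(next(rng) * width, next(rng) * height)
            for _ in range(n_vertices)]

# The same seed always reproduces the same "random" polygon:
assert random_polygon(6, 800, 600, seed=1965) == random_polygon(6, 800, 600, seed=1965)
```

The apparent randomness dissolves as soon as the seed is fixed: the determinism Lovelace pointed out is intact, and the surprise is only epistemic.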
Are computers necessary to create generative artworks? Surely there are several other ways to create randomness (or something that looks like it) without using a computer. In fact, the mathematical functions that yield pseudorandom results could be computed by hand, or one could simply throw some dice. There exists a truly remarkable art catalogue featuring works based on randomness [12], including the one by the French artist François Morellet titled "40,000 carrés" (40,000 squares): a series of 8 silkscreen prints derived from a painting comprised of a grid of 200 × 200 squares, to each of which the artist had associated a number read by a family member from the local phonebook; squares with an even number were painted blue, those with an odd number red. The entire process took most of 1971 to complete. This may be a bizarre example, but it nonetheless shows that randomness in artworks can be achieved without relying on computers. Still, it also shows the advantage of working with computers: the ongoing evolution of digital electronic technology allows for better performance every year, that is, shorter completion times even for the most complex of works. Let us not forget that in 1965, the year "Random Polygons" was shown to the public, a computer with what was considered a reasonable price like the IBM 1620, which came with a price tag of 85,000 US dollars, needed 17.7 milliseconds to multiply two floating-point numbers, whereas as of January 2015 one only needs little more than 900 dollars to build a machine with an Intel Celeron G1830 processor and an AMD Radeon R9 295x2 graphics card (itself endowed with a processor) able to execute more than 11,500 billion floating-point operations per second.
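Morellet's procedure is itself an algorithm, and a computer executes it in milliseconds. The following Python sketch is a hypothetical simplification of his rule (for brevity it maps single digits, rather than full phonebook numbers, to colors):

```python
def morellet_grid(digits, side):
    """Color a side x side grid of squares, reading digits row by row:
    an even digit yields a blue square, an odd digit a red one."""
    if len(digits) < side * side:
        raise ValueError("not enough digits to fill the grid")
    return [["blue" if int(digits[r * side + c]) % 2 == 0 else "red"
             for c in range(side)]
            for r in range(side)]

# A tiny 2 x 2 grid; the original painting used 200 x 200 squares.
print(morellet_grid("4035", 2))  # [['blue', 'blue'], ['red', 'red']]
```

What took the Morellet family most of 1971 completes before the print statement returns; the rule, however, is exactly the same.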
To get a more concrete idea of the effects of this technological evolution, let us take a look at a work by a contemporary generative artist, Matt Pearson, author of a book aptly titled "Generative Art" [16]. The work, called "Tube Clock", is shown in Figure 1. By zooming in enough, one can see that the tubular structure depicted in the work is comprised of thousands of elliptical shapes drawn very close to each other. The artist's basic idea of drawing a series of ellipses along a circular path is affected by a discreet turbulence: pseudorandom noise slightly alters the coordinates of the centers and the sizes of the shapes. The issue of performance must be taken into account again: even if it might be possible to obtain a design like Nake's without the aid of a computer, a work like Pearson's is not easily achievable with manual instruments. It is not only a matter of time (even with the patience of the Morellet family), but also a matter of precision and dexterity.
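The principle behind such a work can be sketched in a few lines of Python (Pearson works with other tools; the function below and its parameter values are our illustrative assumptions, not his code): a circular path of shapes whose centers and sizes are slightly perturbed by pseudorandom noise.

```python
import math
import random

def tube_ring(n_shapes, radius, jitter, seed):
    """Centers and sizes of ellipses laid along a circular path,
    each slightly perturbed by pseudorandom turbulence."""
    rng = random.Random(seed)
    shapes = []
    for i in range(n_shapes):
        theta = 2 * math.pi * i / n_shapes
        # pseudorandom noise slightly alters position and size
        cx = radius * math.cos(theta) + rng.uniform(-jitter, jitter)
        cy = radius * math.sin(theta) + rng.uniform(-jitter, jitter)
        size = 20.0 + rng.uniform(-jitter, jitter)
        shapes.append((cx, cy, size))
    return shapes

# Thousands of near-aligned shapes: trivial for a machine,
# hardly achievable by hand with this precision.
ring = tube_ring(5000, 100.0, 3.0, seed=2011)
```

Drawing 5,000 such ellipses by hand would test even the Morellet family's patience; a computer recomputes the whole set in a fraction of a second for every change of a parameter.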

A new kind of authorship?
The traditional role of the artist fully in control of the creative process seems to have been changed by the introduction of an instrument like the computer, which, even if completely lacking human creativity, is indeed endowed with a characteristic a person is missing: as said before, remarkable computing power. If pseudorandomness makes artists lose a sharp vision of the final results of their efforts, the dramatic increase in computing performance has not only made that vision even more blurred (the more pseudorandom operations per second, the more variability in the final result), but also disclosed new, otherwise hardly reachable landscapes. Artists like Pearson create artworks that are extremely difficult to create without a computer: must we then admit that man has given up power to the machine? Should Pearson share credit with the computers he used? Actually, the final and most important decision is still in the hands of the human artist: which of all the possible results of a pseudorandom computation is the one to be shown to the public? Which one qualifies as an artwork? A computer is able to explore in a short amount of time the vast space of possible solutions, but the last word is the artist's, and there is no way to delegate that decision to the machine. To do so, one would have to write a piece of software encompassing and encoding all the factors affecting the artist's decision (i.e., their upbringing, their cultural background, their taste, the zeitgeist they live in, etc.), but philosophers have argued rather convincingly that any attempt to build an exhaustive and computable list of all the relevant aspects of human thought is doomed to failure [7].
It is important to remark that an artist's dependence on a computer for the creation of an artwork is not an essential characteristic of computer art: just think of a pianist's dependence on their piano, or a painter's on their paintbrushes. From this point of view, a computer is simply a new type of instrument, technologically very advanced, that has recently been added to the wide range of tools at our disposal to make art. Another possible change in artistic authorship that Nake discovered with his pioneering work was not about computing performance (which was rather limited in 1965 anyway), but about abstraction, in the form of a shift from the execution of a material act to the construction of a mathematical model of that act; such construction was made possible and exploitable precisely by computers. In his writings, Nake stresses the distinction between an instrument and a machine, stating that the latter is a much more complex entity, comprising an internal state that evolves through time and the ability to keep track of those changes. By means of a computer, an artist no longer draws a line between A and B; a description takes the place of that action, in the form of a program instruction, which is by its own nature parametric: it does not refer to one specific action only, but to a scheme of which that action is just one instance. As said before, the artist is still in charge of the creative process, but they move away from traditional artistic gestures, shifting from a material to a semiotic dimension: the working space no longer includes brushes and colors, but symbols instead, the symbols computers process automatically.
According to Nake, a change brought by computers into art is that artists no longer create one artwork, but a whole class of artworks: even without relying on pseudorandom numbers, a program can be seen as an instance of a more general set of programs, and a change in one of its numeric parameters will allow for the exploration of that set. These considerations have a universal character and do not depend on the evolution of technology: they were true at the time of Nake's first steps as an algorist and they are still true today. In fact, when we asked Pearson to send us a high-resolution image of his "Tube Clock" for this article, the artist kindly sent us another image "produced by the Tube Clock system", different from the one shown on his website in the form of a thumbnail, whose original version Pearson had lost in the meantime.
If one wonders whether this characteristic is made possible only by the introduction of computers into the creative process, we cannot help pointing to an example from the history of art that shows otherwise. Let us focus on Piet Mondrian's work after 1920, whose compositions of black lines and colored rectangles are considered by many one of the most easily recognized languages in art [4]. The parametrization in the abstract paintings with red, yellow and blue rectangles is evident, and although compositional algorithms with similar results have been elegantly reproduced in the form of software for parametric design [2], the Dutch painter conceived and executed the relevant rules at least 10 years before the creation of the first digital computer, and 40 years before the algorists' early works. Again, the most significant change introduced by computers seems to be related to performance: a program like Mondrimat [10] may enable us to explore the space of abstract rectangular compositions in a much shorter time than with the paint, brushes and canvases that Mondrian used.
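The parametric character of such rule systems can be illustrated with a short Python sketch (our own illustrative reconstruction, unrelated to the actual Mondrimat program): each choice of recursion depth, split range and palette selects one member of a whole class of compositions.

```python
import random

def composition(x, y, w, h, depth, rng,
                palette=("white", "white", "white", "red", "yellow", "blue")):
    """Recursively split a canvas into colored rectangles.
    The parameters select one instance from a class of compositions."""
    if depth == 0 or min(w, h) < 40:
        return [(x, y, w, h, rng.choice(palette))]
    if w >= h:  # split the longer side
        cut = rng.uniform(0.3, 0.7) * w
        return (composition(x, y, cut, h, depth - 1, rng, palette) +
                composition(x + cut, y, w - cut, h, depth - 1, rng, palette))
    cut = rng.uniform(0.3, 0.7) * h
    return (composition(x, y, w, cut, depth - 1, rng, palette) +
            composition(x, y + cut, w, h - cut, depth - 1, rng, palette))

# Two different seeds: two different works from the same class.
work_a = composition(0, 0, 400, 300, 4, random.Random(1920))
work_b = composition(0, 0, 400, 300, 4, random.Random(1921))
```

Varying one numeric parameter, as Nake observed, is enough to step from one artwork to a neighbouring one in the same class.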

Interaction and technological evolution
It is time for some clarification, to avoid making the reader believe that there exists only one kind of computer art, namely pseudorandomness-based generative art, and that the evolution of technology only supports existing processes, without playing a significant role in the expansion of the context of art: both statements are false. Let us analyze the work of one particular artist to disprove them.

The boundaries of interactivity
Scott Snibbe was born in New York four years after "Random Polygons" was shown to the public, so he belongs to a later generation than the first algorists; nevertheless, he can be considered a pioneer in his own way, as he was one of the first artists to work with interactivity by means of computer-controlled projectors. One of his most famous works of this kind is "Boundary Functions", presented for the first time at the "Ars Electronica" festival in Linz, Austria in 1998 and then several other times around the world, concluding at the Milwaukee Art Museum in Wisconsin, USA in 2008 [17]. The work consists of a projection of geometric lines from above onto a platform on the floor, separating the persons on the platform from one another, as shown in Figure 2 (S. Snibbe, "Boundary Functions" (1998), here presented at the NTT InterCommunication Centre in Tokyo, Japan, in 1999; image with GFDL licence retrievable at http://en.wikipedia.org/wiki/ScottSnibbe). The lines are traced in accordance with the positions of the participants, and they draw the relevant Voronoi diagram on the floor, that is, the boundaries of the regions of positions that are closer to one person than to any other. The projected diagram is dynamic: the lines change as people move, so as to always keep a line between any pair of persons on the platform. Snibbe wants to show by means of an artwork that, although we think our personal space entirely belongs to and is completely defined by ourselves, its boundaries are in fact also defined with respect to the people around us, and they often undergo changes that are out of our control. It is meant to be a playful way to stress the importance of the acceptance of others: a message even more charged with meaning, if one considers that the title of this artwork is inspired by the title of the PhD thesis in mathematics of Theodore Kaczynski, also known as the Unabomber. The meaning of the work aside, it is clear that "Boundary Functions" is an example of
non-generative computer art: there is no pseudorandomness involved, because a Voronoi diagram is obtained by a known computable procedure and, given a certain configuration of people on the platform, the artist is able to foresee the result of the computation. In terms of the Lovelace/Turing controversy, the computer holds no surprises for the artist. The surprise is instead all for the audience that takes part in the work: such participation undoubtedly makes a significant difference between Snibbe's work and those by Nake and Pearson. This is another kind of computer art, born from the interaction with the audience, namely "interactive art". The concept of interaction is so general that some specification is needed. Obviously, it is always possible for the audience to interact with an artwork, even a traditional one: an observer can look at a painting from different points of view and obtain a different aesthetic experience every time; moreover, artworks with mirrored surfaces like Anish Kapoor's "Cloud Gate" in Chicago (also known as "the Bean") do invite people to interact with them, in a game of ever-changing deformed reflections. The interaction of an interactive artwork is different, though, because it is necessary for the existence of the work itself: while "Cloud Gate" can be enjoyed also from a distance, without any self-reflection on its surface, there is no experience at all, let alone an aesthetic one, when nobody is standing on the platform of "Boundary Functions". It is when two or more people walk around on it that the work comes to life. Let us recall the words of Duchamp reprised by Nake to defend computer art made by mathematicians and engineers; it is easy to recognize that interactive art grants the audience an even bigger role than what was prescribed by the avant-garde artist: the spectator is not merely required to establish the value of an artwork, but to build, together with the artist, the artwork itself.
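The computation at the heart of "Boundary Functions" is easy to state: every point of the floor belongs to the nearest participant, and the projected lines are where the nearest participant changes. A minimal raster sketch in Python (our illustration of the underlying geometry; Snibbe's installation of course relies on sensing and real-time graphics, not this code):

```python
def nearest_person(point, people):
    """Index of the participant closest to a given floor position."""
    px, py = point
    distances = [(px - x) ** 2 + (py - y) ** 2 for x, y in people]
    return distances.index(min(distances))

def boundary_pixels(width, height, people):
    """Raster approximation of the Voronoi boundaries: floor positions
    where the nearest participant changes between horizontally
    adjacent cells."""
    cells = [[nearest_person((x, y), people) for x in range(width)]
             for y in range(height)]
    boundary = set()
    for y in range(height):
        for x in range(width - 1):
            if cells[y][x] != cells[y][x + 1]:
                boundary.add((x, y))
    return boundary

# Two people facing each other: the boundary is the vertical bisector.
line = boundary_pixels(100, 100, [(10, 50), (90, 50)])
```

The diagram must be recomputed many times per second as people move; it is exactly this real-time requirement that makes the computer essential to the work.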

The necessary evolution
In the context of interactive art it becomes clear that one needs adequately "performant" computers. Let us indulge in a mental experiment: imagine we want to create "Boundary Functions" without computers. How would we proceed? One solution could be to enhance the platform technologically by means of small scales and LEDs: the scales would be organized in a matrix-like structure so that each scale transmits the weight on it to the surrounding scales, and the LEDs of the scales in a state of balance would turn on to mark the boundaries between the people. Another solution could consist in recruiting some assistants who, by means of flashlights modified to project only segments of light, would skilfully trace the boundaries around the people from above. Not only do these solutions appear extremely tricky, but they surely would not ensure the accuracy and the aesthetic experience provided by the actual "Boundary Functions", whose interactivity is made possible by devices that, thanks to their computing power, are able to project the lines of the Voronoi diagram relevant to the audience members currently walking around the platform. Just as Nietzsche's typewriter shaped his way of thinking, many artists have their inspiration enriched by the computing possibilities provided by computers: it is reasonable to think that nobody at the time of "Random Polygons" could have conceived a work like "Boundary Functions", not because the mathematical concept of a Voronoi diagram did not exist (it did), or because there was no algorithm to compute it (there was), but because the computing instruments available at the time would not even have allowed one to imagine that a computer could compute in real time the boundary lines among a group of people moving on a platform. From this perspective, even more respect is due to visionaries like Turing, who more than 50 years ago imagined computers performing operations that are not possible even today
(e.g., conversing with a human), in spite of all the doubts that characterize every prediction about the future.

Conclusions
Whatever the future of computers in general, and of computers in art in particular, it is a fact that today there exists a new endeavour at the intersection of computer science and art, made possible by the birth of computing devices powerful enough to ensure real-time interaction between persons and machines. Interactive art has quickly gained a primary role in the artistic landscape: art historians like Katja Kwastek have recognized its potential to significantly support the search for an adequate art theory and proposed an aesthetics of interaction with digital instruments [9]; philosophers of art like Dominic McIver Lopes have even promoted the concept of interaction to an essential and defining characteristic of computer art in general [11]. In spite of the problems in recognizing universal criteria that define art, interactive art, with its focus on technology and persons, seems to be the discipline that best embodies the zeitgeist, and it surely has the remarkable merit of having given us, on the foundations laid by the pioneers of the mid-20th century, a new kind of artwork that is not achievable by any means other than the most recent computing technology. The fundamental role of the interaction between the spectator and the artwork is a break with the past that may be compared to the one brought about at the beginning of the 20th century by Duchamp. Considering what happened in the following years in terms of the evolution of art, technology and everything in between, we cannot help looking forward to what awaits us in the 21st century.