This essay was originally posted at the International Symposium on Electronic Art (ISEA) site, and has now been archived for the Manuel DeLanda Annotated Bibliography. Please note that some links may thus no longer work.

                  MANUEL DE LANDA
Virtual environments as intuition synthesisers

Emergent Properties

         One of the most exciting new concepts to enter the realm of science in recent decades is that of 'emergent properties'. To be sure, the idea that the properties of a whole are not reducible to those of its individual components has been around for a long time. But its unfortunate association with scientifically disreputable schools of thought such as vitalism has not encouraged scientists and philosophers to consider it seriously.

         Today, the idea that a whole may be more than the sum of its parts is becoming commonplace in physics, biology, economics and other disciplines. In physics for instance, a typical example can be found in metallic alloys. A mixture of two different metals in the right proportions (say, copper and tin) yields a product (bronze) whose strength surpasses the sum of the strengths of either copper or tin taken separately. There is a surplus of strength that emerges, so to speak, out of nowhere. In biology, an often-cited example can be found in the emergent behaviour of insect colonies. In the case of termites, each individual carries enough genetic information to perform a set of rather simple tasks, but none as complex as building an elaborate nest. Yet the complex architecture of a termite nest manages to emerge from the decentralised local actions of hundreds of individuals.

         Although there are many other examples, these two suffice to illustrate the basic principle: complex global behaviour can spontaneously emerge out of the interactions of a population of simple elements. In philosophical terms, the road towards reductionism has been permanently blocked. If the properties of matter and energy at any given level of organisation cannot be explained by the properties of the underlying levels, it follows that biology cannot be reduced to physics, or anthropology to biology. And beyond this, if emergent properties are as pervasive in nature as they seem to be, not only is one idea (that of reductionism) eliminated; a whole method for generating ideas, the method of analysis, may also need to be modified.

         This method, which has dominated Western thought for many centuries, relies on the assumption that a given system can be dissected into its component parts, the latter analysed in detail, then finally added back together to yield the full system again. But this will obviously not account for emergent properties, since these are, by definition, precisely what goes beyond any simple addition of parts. We seem to be in need of a new approach, one that complements analysis with synthesis. This is precisely what virtual environments can provide.

In the words of Chris Langton, one of the leading figures of Artificial Life:

"Biology has traditionally started at the top, viewing a living organism as a complex bio-chemical machine, and worked analytically downwards from there-through organs, tissues, cells, organelles, membranes, and finally molecules-in the pursuit of the mechanisms of life. Artificial Life starts at the bottom, viewing an organism as a large population of simple machines, and works upwards synthetically from there- constructing large aggregates of simple rule governed objects which interact with one another nonlinearly in the support of life-like global dynamics." 1

Artificial Life

         Unlike the discipline of Artificial Intelligence, where the computer is not only used as a research tool but also as a paradigm of what the mind is taken to be like, in Artificial Life there is no question of viewing ecosystems as computers. Computers are simply the means to create virtual environments where populations of simple programs are allowed to interact with one another in the hope that interesting emergent properties will result. Instead of approaching our subject top-down, we approach it bottom-up, setting loose within a virtual environment a whole population of simple interacting entities so that emergent behaviours can be synthesised. Knowledge is generated in this approach by observing the dynamical results of the interactions, in order to generate in ourselves new intuitions as to what is going on in real ecosystems. Hence we convert the computer into an 'intuition synthesiser'.
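
         As a minimal sketch of this bottom-up strategy, consider the classic 'termites and wood chips' toy model (the grid size, number of agents and clustering measure below are arbitrary choices made only for illustration). Each simulated termite wanders at random; if it steps on a chip while empty-handed it picks it up, and if it steps on a chip while carrying one it drops its load in a neighbouring cell. Neither rule mentions piles, yet piles of chips emerge from the population's decentralised activity.

import random

# A toy agent-based model (all parameters are illustrative): "termites" wander
# at random over a toroidal grid scattered with wood chips.
# Rule 1: if you step on a chip while empty-handed, pick it up.
# Rule 2: if you step on a chip while carrying one, drop yours in an empty
#         neighbouring cell.  Neither rule mentions piles, yet piles emerge.

random.seed(0)
SIZE, N_CHIPS, N_TERMITES, STEPS = 40, 150, 25, 80_000

grid = [[False] * SIZE for _ in range(SIZE)]                     # True = wood chip
cells = [(i, j) for i in range(SIZE) for j in range(SIZE)]
for x, y in random.sample(cells, N_CHIPS):
    grid[x][y] = True

termites = [[random.randrange(SIZE), random.randrange(SIZE), False]  # x, y, carrying?
            for _ in range(N_TERMITES)]

def clustered_fraction():
    """Crude clustering measure: fraction of on-grid chips with a chip neighbour."""
    chips = [(i, j) for (i, j) in cells if grid[i][j]]
    with_neighbour = sum(
        1 for (i, j) in chips
        if any(grid[(i + dx) % SIZE][(j + dy) % SIZE]
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)))
    return with_neighbour / len(chips)

def step(t):
    x, y, carrying = t
    x = (x + random.choice((-1, 0, 1))) % SIZE                   # random walk
    y = (y + random.choice((-1, 0, 1))) % SIZE
    if grid[x][y] and not carrying:                              # rule 1: pick up
        grid[x][y], carrying = False, True
    elif grid[x][y] and carrying:                                # rule 2: drop nearby
        nx = (x + random.choice((-1, 1))) % SIZE
        ny = (y + random.choice((-1, 1))) % SIZE
        if not grid[nx][ny]:
            grid[nx][ny], carrying = True, False
    t[:] = [x, y, carrying]

print(f"chips next to other chips, before: {clustered_fraction():.2f}")
for _ in range(STEPS):
    for t in termites:
        step(t)
print(f"chips next to other chips, after:  {clustered_fraction():.2f}")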

         There are, however, some misunderstandings in the current definition of the goals of Artificial Life. In particular, some researchers seem to think that if, for example, one manages to generate within a virtual environment the nest-building behaviour of termites, one has thereby captured some of the essence of life, or the formal basis of living processes. What this formulation overlooks is that the nonlinear dynamics underlying such emergent behaviours are a common property of both living and nonliving matter-energy, as the case of metallic alloys illustrates. Moreover, the exact same 'abstract mechanisms' responsible for the emergence of oscillatory behaviour in some chemical reactions are also behind spontaneous oscillations in human economic systems (e.g. the Kondratiev wave). Similarly, the exact same 'abstract mechanism' behind emergent properties in turbulent flows of water also underlies the coherent behaviour of photons in a laser beam, or the behaviour of countries at the outbreak of war. In other words, it does not seem to matter what the population of elements happens to be (molecules, amoebas, humans). As long as their interactions are nonlinear (and most naturally occurring interactions are), the resulting emergent properties all belong to a small set of possibilities. Hence the philosophical school known as 'essentialism' seems to be condemned to the same future as 'reductionism'. If the essence of coherence in a chemical reaction is the same as that of an economic system, or that of a population of amoebas, then 'essences' (understood as that which makes something what it is) are gone forever.

         Computers, or rather, the virtual environments they allow us to create and explore, were also behind the discovery of the universality of the mechanisms behind emergent properties. This was quite a counter-intuitive idea. Discovering it involved studying the dynamical behaviour of the mathematical models of many different systems, and observing them generate the same behaviour regardless of the model involved. This was the case, for instance, with Mitchell Feigenbaum's discovery of the universality underlying turbulent flows and coherent laser beams (the so-called period-doubling bifurcation).

In James Gleick's words, Feigenbaum:
"... needed to inquire into [the behaviour of number and functions]. He needed-in a phrase that later became a cliche of the new science- to create intuibon... Ordinarily a computer user would construct a problem, feed it in, and wait for the machine to calculate its solution-one problem one solution. Feigenbaum and the chaos researchers that followed, needed more. They needed... to create miniature universes and observe their evolution. Then they could change this feature or that and observe the changed paths that would result."2

Populations

         The question of the 'death of essences' can be approached in another way, stressing the fact that in virtual environments we need to deal with entire populations, not with individual entities. The most crucial concept in the modern formulation of Darwinism is not the old-fashioned 'survival of the fittest' idea (which in most versions simply amounts to the truism 'survival of the survivors') but what has come to be called 'population thinking'. This can best be explained by reference to the old Aristotelian view of animal and plant species. According to that tradition, there was an essence of being a zebra, as well as an essence of being an oak. The actual individual zebras and oaks inhabiting the planet were but imperfect realisations of those essences, so that in a sense all that was truly real was the eternal and unchanging type, not the ever-changing variants. Modern evolutionary theory has stood this on its head. For every real zebra, we can imagine each of its adaptive traits (its hoofs, its camouflage, its mating and feeding habits) as having evolved along different lineages, following different selection pressures. Given enough genetic variation in zebra populations, natural selection simply brought these traits together. Just as they came together, they might not have, had the actual evolutionary history of those populations been different. In other words, while for the Aristotelian tradition the essence was real and the variants were not, in population thinking only the variation is real, while the 'zebra archetype' is not.

         One crucial task today is to extend the insights of population thinking to other areas, such as linguistics. Language, far from being a synchronic structure embodied more or less imperfectly in the actual performances of individual speakers, is just like an animal or plant species: an historically emergent structure. Real languages evolve out of the daily labour of a population of human users, as the constant stylistic variations to which these users submit sounds, words and sentences become selected by a variety of pressures, such as those exerted by a standard language over local dialects, or those exerted by dialects on each other. Once this point of view is adopted, linguists too could make use of virtual environments to gain insights into the processes that give languages their current form. They could, for instance, create a simulation of the populations of English peasants at the turn of the first millennium, who took a language which to us would seem like German and transformed it into something we could recognise as English, all in the process of resisting the Norman invaders and their foreign language, French. One can then imagine the linguistics of the future being not of the analytical Chomskian kind (i.e. dissecting a temporal snapshot of English into generative and transformational rules), but taking instead the synthetic approach: setting loose in a virtual environment populations of rules under stylistic variation and selection pressures, and observing how the structure of English emerges from their dynamical interactions.
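
         As a deliberately crude sketch of such an experiment (the variant names, 'prestige' weights and population size below are invented for illustration and bear no relation to the actual history of English), one can let a population of speakers re-adopt, generation after generation, whichever variant forms they hear most often, with a small bias standing in for the selection pressure of a prestige form:

import random

# A toy population model of variant selection (purely illustrative): each
# speaker uses one variant of a form; every generation speakers re-adopt
# variants in proportion to how often they hear them, weighted by a small
# assumed "prestige" bias acting as a selection pressure.  The variation is
# real; the biased variant tends to crowd the others out and become standard.

random.seed(1)
variants = ["variant_A", "variant_B", "variant_C"]
prestige = {"variant_A": 1.00, "variant_B": 1.05, "variant_C": 0.95}   # assumed biases
speakers = [random.choice(variants) for _ in range(1000)]

for generation in range(60):
    counts  = {v: speakers.count(v) for v in variants}
    weights = [counts[v] * prestige[v] for v in variants]
    # each speaker re-samples a variant from the weighted pool it is exposed to
    speakers = random.choices(variants, weights=weights, k=len(speakers))
    if generation % 15 == 0:
        print(generation, counts)

print("final:", {v: speakers.count(v) for v in variants})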

Game Theory

         Another potential use for virtual environments is to unblock some promising, yet untried, analytical approaches to certain problems. Sometimes, after a specific analysis of a given system has become entrenched in academic circles, better analytical approaches, even if they exist, will remain unexplored due to the inhibiting presence of old solutions. In these circumstances, virtual environments can help researchers synthesise fresh intuitions concerning the unexploited capabilities of analysis. A case in point comes from the discipline of game theory, which attempts the formal study of situations involving conflicts of interest. Game theory has been used extensively in military think-tanks (such as the RAND Corporation) since the early 1950s, as an analytical aid for the exploration of policy alternatives in negotiations between nations. Game theory deals with rather simplified conflictual situations, where the dynamics of the system can be boiled down to the possible payoffs defining the alternatives available to each one of the participants in the conflict. A prototypical situation is the one captured by the famous 'Prisoner's Dilemma'. In this imaginary situation, two prisoners accused of the same crime are separately offered the following deal. They can either accuse their accomplice, or they can claim innocence and avoid betraying their partner. If both claim innocence, they get a mid-sized sentence; if both betray one another, they get a long sentence; finally, if one betrays while the other does not, the former walks out free while the other gets the worst possible punishment. The payoffs are so designed as to put the two participants in a dilemma: the best outcome for both would be to claim innocence and avoid betrayal, yet the temptation to betray and walk out free (plus the fear of getting the 'sucker's payoff') is so great that neither one can trust the other, so they both betray. Beyond the apparent artificiality of the example lies a situation which is rather common in real life: in the dynamics of nuclear arms negotiations, for example, or in the financial panics known as 'bank runs'. In all these cases, the basic moral seems to be 'nice guys finish last'.
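
         The dilemma can be written down as a small payoff table (the sentence lengths below are conventional illustrative numbers, not taken from any particular treatment). Whatever the partner does, betraying yields the shorter sentence, which is why both prisoners end up betraying, and both end up worse off than if neither had:

# The one-shot dilemma as a payoff table.  "C" = claim innocence (cooperate
# with your partner), "D" = betray.  The sentence lengths are illustrative.

SENTENCE = {                 # (my move, partner's move) -> my years in prison
    ("C", "C"): 2,           # both claim innocence: mid-sized sentence
    ("C", "D"): 10,          # I stay quiet, partner betrays: the sucker's payoff
    ("D", "C"): 0,           # I betray, partner stays quiet: I walk out free
    ("D", "D"): 6,           # both betray: long sentence
}

for partner in ("C", "D"):
    cooperate, betray = SENTENCE[("C", partner)], SENTENCE[("D", partner)]
    best = "betray" if betray < cooperate else "cooperate"
    print(f"partner plays {partner}: cooperating costs {cooperate} years, "
          f"betraying costs {betray} years -> best reply is to {best}")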

         This analytical solution to the dilemma had become so entrenched that other possibilities were routinely overlooked. This is particularly clear in situations where the choice between betraying and cooperating recurs over a long period of time. Douglas Hofstadter offers the example of two traders who never see each other, but who leave a bag of goods at a specified place, over and over again. On any one occasion, each trader faces a dilemma similar to the one confronting the prisoners. They can simply leave an empty bag, take the other's goods, and stick the other with the 'sucker's payoff'. But because the situation is iterated, and because presumably both traders would want the exchanges to continue into the future, they have added incentives not to betray one another. Despite this crucial difference, the old analytical solution had become so entrenched that researchers did not see that it did not apply in this more realistic case. It was at this point that a virtual environment came to the rescue. Political scientist Robert Axelrod decided to actually stage this trading situation, only with many participants. He invited people from many places to submit computer programs capable of carrying on virtual trading with one another. Due to the entrenchment of the old solution, most of the programs submitted were of the betraying kind, and yet when the competition was actually carried out, they all lost. The winner, a program called 'TIT_FOR_TAT', always cooperated in the first encounter, showing a sign of good faith to initiate a long trading partnership. Yet, if betrayed, it would retaliate immediately by refusing to trade any more. In addition, it was forgiving, in that after retaliating it was willing to trust other traders again.

         The victory of 'TIT_FOR_TAT' was so counter-intuitive that Axelrod ran the tournament again, only this time informing the participants that 'nice, retaliating and forgiving' programs had won, so as to allow them to design betrayer programs that could take advantage of this knowledge. When the competition was run again, 'TIT_FOR_TAT' won again. The reason is simple: the criterion for winning was not who emerged victorious from single encounters, but who managed to trade the most in the long run. Betrayer programs, though initially successful when confronting 'sucker' programs, soon ran out of partners and ended up losing against 'TIT_FOR_TAT'. Although there is much more to the story than this simple outline may suggest, this will suffice for our purposes here: what Axelrod did was first to synthesise an intuition that had been blocked by the entrenchment of an old solution, and then to prove analytically that his results were indeed correct. Or as Hofstadter has put it:

         "Can cooperation emerge in a world of pure egoists?... Well, as it happens, it has now been demonstrated rigorously and definitively that such cooperation can emerge, and it was done through a computer tournament conducted by political scientist Robert Axelrod... More accurately, Axelrod first studied the ways that cooperation evolved by means of a computer tournament, and when general trends emerged, he was able to spot the underlying principles and prove theorems that established the facts and conditions of cooperation's rise from nowhere."3
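
         A toy version of such a tournament can be staged in a few lines (the strategies and payoff values below are illustrative stand-ins for the dozens of programs actually submitted). Each program sees only its partner's past moves; every pair, including each program matched against a copy of itself, trades for a fixed number of rounds, and total scores are compared.

import itertools

# A toy Axelrod-style round robin with illustrative strategies and payoffs.
# Each program answers "C" (trade honestly) or "D" (betray) given only the
# other's past moves; scores use conventional iterated-dilemma payoff numbers.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(their_past):      return "C" if not their_past else their_past[-1]
def grudger(their_past):          return "D" if "D" in their_past else "C"
def always_defect(their_past):    return "D"
def always_cooperate(their_past): return "C"

STRATEGIES = {"TIT_FOR_TAT": tit_for_tat, "GRUDGER": grudger,
              "ALWAYS_DEFECT": always_defect, "ALWAYS_COOPERATE": always_cooperate}

def match(a, b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)      # each sees the other's history
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

totals = dict.fromkeys(STRATEGIES, 0)
for (na, a), (nb, b) in itertools.combinations_with_replacement(STRATEGIES.items(), 2):
    sa, sb = match(a, b)
    totals[na] += sa
    totals[nb] += sb

for name, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(f"{name:17s} {total}")

         Even in this tiny field, the betraying program quickly runs out of willing partners and finishes last, while the nice but retaliatory programs finish on top.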

Beyond 'Virtual Modernism'

         So far we have given examples of the use of virtual environments as intuition synthesisers in the field of science. A few examples from the world of art could also be examined (e.g. the work of Karl Sims). Yet the majority of virtual reality projects done by artists fall squarely within the realm of modernism. Indeed, nothing I have ever seen going under the label 'postmodern' manages to go beyond the reservoir of formal resources that modernist artists have exploited for almost a century. It is my belief that in order to go beyond this we need a brand new intuition as to what 'representations' really are. That is, if most modernist tactics and strategies have centred on a critique of the role of language, perspective, tonality and other traits of classical representation, then breaking with this approach requires new intuitions about what representations really are.

         One hint as to what the future may bring in this regard comes from the field of Artificial Intelligence (AI). In this field, much of the work involves the use of virtual environments, so the criterion that distinguishes different approaches is not virtuality, but a top-down versus a bottom-up strategy. The approach that dominated AI for its first two decades was analytical and top-down. The basic idea behind this strategy is to build representations (rules, programs, referential symbols) directly into a computer, and to apply logical principles (deductive and inductive) to the handling of these symbolic entities in the hope that something like human intelligence will emerge. The main rival of the symbolic strategy, which goes by the name of 'connectionism', takes the bottom-up approach, building knowledge about the world into the connection patterns and activation dynamics of small populations of extremely simple computers. In the connectionist approach there are no explicit representations built into the system at all. One does not program a connectionist system (a 'neural net'); one trains it to perform a given task, in much the same way that one would train a living creature. Representations, or rather, rule-governed behaviour, emerge spontaneously from these systems, following the same dynamics that characterise most of the self-organising phenomena we have discussed so far (nest-building termites, etc.).
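
         A minimal sketch of what 'training rather than programming' means is a single artificial neuron learning logical AND (far simpler than the networks connectionists actually use; the learning rate and number of passes below are arbitrary). No rule for AND appears anywhere in the code; the behaviour emerges as the connection weights are nudged, example by example, towards the rewarded responses.

import random

# A single artificial neuron trained, not programmed (all parameters are
# illustrative): the perceptron learning rule adjusts two weights and a bias
# until the unit's responses match the rewarded ones for logical AND.

random.seed(0)
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # target: AND
weights  = [random.uniform(-1, 1) for _ in range(2)]
bias     = random.uniform(-1, 1)
rate     = 0.1                                   # assumed learning rate

def output(x):
    return 1 if weights[0] * x[0] + weights[1] * x[1] + bias > 0 else 0

for epoch in range(25):                          # repeated exposure to the examples
    for x, target in examples:
        error = target - output(x)               # perceptron learning rule
        weights[0] += rate * error * x[0]
        weights[1] += rate * error * x[1]
        bias       += rate * error

print("learned weights:", [round(w, 2) for w in weights], "bias:", round(bias, 2))
print("behaviour:", {x: output(x) for x, _ in examples})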

         What this means, basically, is that 'rationalism' (the school of thought that takes rationality to be the essence of the human animal) will go the same way as 'essentialism' and 'reductionism'. Rationality, it turns out, is one of the many things that can emerge from the nonlinear flow of energy, matter and information.

         What this also means is that current philosophical schools that are still based on a critique of representation (such as the work of Baudrillard or Derrida) do not manage to go beyond modernism. Only the work of Gilles Deleuze and Felix Guattari is truly 'postmodern' in this regard (if one still insists on using this silly label).4 It is the nonlinear flow of lavas and magmas that produces the structures (rocks, mountains) that inhabit the geosphere. Similarly, the nonlinear flow of flesh (biomass) through food chains, plus the flow of genetic materials in gene pools, is what creates the structures (animals, plants) that inhabit the biosphere. Linguistic structures must be approached in exactly the same way, as products of a lengthy sedimentation of sounds, words and syntactical constructions, and of their consolidation into structures over the centuries (e.g. the example of English peasants under Norman rule above). Artists could take the vanguard in the exploration of the nonlinear flow of expressive resources out of which the coagulated, stratified structures we call representations emerge. Many entrenched notions as to what constitutes art must be debunked, and for that, virtual environments may one day constitute the perfect tool.

1 Christopher Langton, 'Artificial Life', in C. Langton (ed.), Artificial Life, Addison-Wesley, 1989, p. 2.
2 James Gleick, Chaos: Making a New Science, Viking, New York, 1987, p. 178.
3 Douglas Hofstadter, 'The Prisoner's Dilemma and the Evolution of Cooperation', Metamagical Themas, Basic Books, New York, 1985, p. 720.
4 Gilles Deleuze and Felix Guattari, 'The Geology of Morals', A Thousand Plateaus, University of Minnesota Press, 1987.