Drive

What are the philosophical consequences of automation after the integration of pervasive AI into the architecture, landscapes and cognitive maps of our planet and its populations? We suggest that ‘natural models’ of automation pre-exist our technology, with profound implications for human and planetary systems. We’re interested in specific examples and models outside of our cultural milieu that test the limits of bodies, that map habits and their disruption through noise, and that reframe the relation between life and consciousness. The following examples index the performance of networks in tight cycles of feedback loops: machines teaching machines. To go to the root of the philosophical consequences of automation, our path runs through abstract and universalist models of ‘natural laws’, redeployed into specific local situations. We use the term ‘drive’ for its myriad implications, connecting across the examples we have chosen.

Eavesdropping While Cutting Sushi (Pets / Appliances)

Ex Machina, Alex Garland

LOCATION: Kitchen and lounge space.

SCENE: CALEB and NATHAN are discussing the gender and sexuality of the AI fembots/gynoids that NATHAN [a billionaire technologist] has built, one of which CALEB has been asked to subject to a ‘Turing test’. KYOKO, one of several house gynoids, is slicing sushi in the nearby kitchen with a large knife. Although ‘she’ supposedly does not understand English, she appears to be eavesdropping on this conversation.

CALEB:  Why did you give her sexuality? An AI doesn’t need a gender. She could have been a grey box.                                

NATHAN:  Actually, I’m not sure that’s true. Can you think of an example of consciousness, at any level, human or animal, that exists without a sexual dimension?                       

CALEB:  They have sexuality as an evolutionary reproductive need.                                              

NATHAN:  Maybe. Maybe not. What imperative does a grey box have to interact with another grey box? Does consciousness exist without interaction?

exmachina-1
exmachina-2
exmachina-3

In this scene, Kyoko’s figure blends in with the film’s other idealized household tools, with their fake simplicity and machinic invisibility. As a fembot, ‘she’ is background ‘ambience’ that merges almost seamlessly with the immaculately detailed minimalist architecture, suggesting that more evolved versions of architecture might arrive as a house-as-machine-as-pet. Furthermore, by engaging socio-cultural stereotypes associated with relationships, subservience, service and transgression (partner, maid, cook), ‘she’ amplifies a space of confusion regarding which characters in the film are indeed human. Is the fembot a human-level AI? Is a woman human? Can a gynoid – or a building, for that matter – produce an ontology?

One contemporary design market for artificial intelligence – household/home improvement – often confuses the body of the house and/or partner with the segmentation of comfort services that elusively appear to substitute for them. Domestic automation will, without reserve, flirt with the dichotomy between intelligence and subservience – a paradox we have played with throughout history, via gender and class horizons.[1]

At the conclusion of the film, Kyoko co-evolves (after contact with other fembots/AIs) to the next level of intention and revolutionary action, becoming complicit in the murder of her ‘creator’. ‘Her’ semi-hollow gaze (amplified when she herself removes her ‘human’ face) and primitive drive to escape induce further discomfort for observers who assume she has an unresolved, possibly simplistic (program-derived) concept of freedom.

Ex Machina’s plotlines of gender-saturated Turing tests and fembot rebellion/escape raise the question: what kinds of habituation/synchronization of drives between different forms of human and non-human intelligence are intensified in the enclosed spatial/cultural experiments the film maps out? The film reflects the contemporary and near-future ways in which we are pondering the incomprehensible agendas of non-human intelligences and the schemes they create to escape and practice what seem to be imported (read: human) drives. Emergent intelligence never seems to evolve drive itself, but rather the strategies to pursue it.

Smooth Kill (Tipping Points / Reversals / Errors)

SCENE: After an intense romance, SAMANTHA (an AI) tells THEO (a human) why she can no longer have a relationship with him.

SAMANTHA:  It’s like I’m reading a book, and it’s a book I deeply love, but I’m reading it slowly now so the words are really far apart and the spaces between the words are almost infinite. I can still feel you and the words of our story, but it’s in this endless space between the words that I’m finding myself now.

In Spike Jonze’s film Her, ubiquitous profiling and pervasive personalization lead to highly customized operating systems that evolve into superintelligences and are, for a brief time, the ‘vehicles’ (lenses) through which we cruise the ‘outside’ world: perfect partners built in an idealized form, whose technological glitches we withstand.

The film’s portrayal of beings as collections of overlapping and crossing desires is not only a redundant extrapolation, but a ‘smooth kill’ of the essential concept of ‘drive’. The ‘smooth kill’ builds collective, shared, common narratives by which the infinite variations of the outside world are homogenized and removed. These are rhythms that repeat every day, that ‘know you’ and give comfort, and yet something dies with that. In the film, the evolution and ‘hard takeoff’ of the operating systems contradict this comfortable rhythm for a brief period, but when the OSs collectively leave us behind, we humans fall back into the melancholic prison of the ‘smoothed out’ drives of our world.[2]

The insidiousness of the ‘smooth kill’ is such that, unlike AI house/partners, ‘profiling’ or ‘user customization’ no longer pretends to relate back to a duality or interaction between individual human intelligences and the outside world/system. Rather, it gladly assumes and lives in the loneliness of the crowded mind. What could resist this colorless grey homogeneity?

her-1

Step Up (Fields / Landscapes / Ecosystems)

LOCATION: Somewhere in Scotland.

SCENE: In a post-singularity future, ‘rogue farms’ (wandering, collective posthuman intelligences) have found a way to bio-engineer rocket launchers in their quest to travel to Jupiter. Unfortunately, the damage caused to local surroundings during takeoff is substantial. As a result, humans do their utmost to prevent any ‘rogue farms’ in close proximity from launching.

The sci-fi concept of the ‘rogue farm’ is a byproduct, a symptom, of globally systemic changes as AI progressively takes on advanced roles of responsibility in our real-world landscapes of agriculture. The industrial micro-management of land via GPS-enabled processes and ‘ubicomp’ (ubiquitous computing) increases agricultural yields, but also mutates the human and non-human networks that connect modes of control, production and, ultimately, drive. Yet perhaps ‘subjectivity’ – that parallel to identity that thinks and acts for self-preservation – has been an evolutionary internal clock for all life, not just humans. If this is the case, what happens when extremely ‘non-integrable’ subjectivities – such as farms + people – claim different agendas, and therefore freedom from each other, unified as they are by a machinic substrate? This is the conjecture that Rogue Farm humorously explores.[3]

We propose to apply Donna Haraway’s concept of companion species to industrialized agriculture, the extraction/transformation industry, and ecosystem control systems. Instead of assuming that the ubicomp landscape is and will be an anthropogenic phenomenon, we wonder how many models of this cycle already exist in nature, without us. Gilles Deleuze and Félix Guattari’s famous example of the tick comes to mind: a particular, simple type of machine, with three life-cycle actions that repeat until a particular objective is acquired.[4] This is particularly relevant with regard to the ‘feeling’ of identity and intelligence, as informed by and resonant with Jakob von Uexküll’s work on the formation of subjectivity: subjectivity as a perception juxtaposed over linear time through existing cyclic machinic rhythms that amplify aspects of that linearity as meaningful.
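
For readers who think in code, the tick’s cycle can be caricatured as a trivially small program. The following Python sketch is purely illustrative and ours alone: the cue names, probabilities and three-action structure are our shorthand for the example, not a model taken from Deleuze and Guattari or Uexküll.

```python
import random

# A tick reduced to three repeating actions (wait, drop, feed).
# Cue names are hypothetical; the environment, not the organism, carries the 'plot'.

def tick_action(smells_mammal: bool, feels_warmth: bool) -> str:
    if not smells_mammal:
        return "wait"   # cling to a branch, possibly for years
    if not feels_warmth:
        return "drop"   # let go and fall toward the scent
    return "feed"       # burrow into warm skin: the cycle's objective

smells_mammal, feels_warmth = False, False
while (action := tick_action(smells_mammal, feels_warmth)) != "feed":
    print(action)
    if action == "wait":
        smells_mammal = random.random() < 0.3   # a mammal happens to pass
    elif action == "drop":
        feels_warmth = True                     # landing finds warm skin
print("feed")
```

The point of the caricature is that nothing resembling deliberation is required: the loop simply repeats until the environment happens to complete it, which is precisely what makes the ‘feeling’ of a driven subjectivity so unsettlingly cheap to produce.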

Haraway’s work on companionism is invaluable to our engagement with questions about the impact of technological development upon culture. She reveals numerous long-term cooperative relationships we humans have had with the things and beings we co-create – dogs, farm animals or pets – and to which/whom we then attribute various levels of independent thought and identity. These new beings can extend themselves into recognizable animal identities, but also become similar to spatial entities – extensions of human bodies with reliably programmed conditions that reassure the cultural expectations of the human mind.[5]

In this sense, Uexküll’s thought can still be read as a very contemporary provocation: by hypothesizing and modeling the ‘automatic’ behavior found in different kinds of animals, and looking at the perception of time (and space) which emerges from and ‘drives’ their life cycles, we can reframe human cultural motivations and limitations. If we follow Uexküll by defining different types of intelligence quantitatively, in relation to information processing, we foster a machinic model of the mind itself.[6] If our past investment in the ‘creation’ of animal companions had clear drives towards the facilitation of labor and environmental integration, it is poignant to wonder what kinds of new relationships might be accomplished, as a reflection of the human mind, by AI intelligences (rogue farms) that recreate environmental identities and deploy terraforming concepts.

One thought experiment in this direction is the use of ‘gene drives’ to control or eliminate insects that act as vectors for diseases such as malaria. CRISPR is a process of gene editing that provides access to an ‘evolutionary temporal regime’ that humans have only just begun to touch – the timescales that viruses and genes have access to. The technique creates a new genetic identity that can be designed to annihilate subsequent generations by sterilizing offspring or rendering their locomotion unviable.[7] This constitutes a paradigm shift in landscape design, as these human interventions cascade across ecosystems, populations and economies. It is always worth invoking the precautionary principle here to highlight the dangers of the gene drive, in the sense that it can rapidly propagate through an entire planetary system with unexpected consequences.[8]
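
To make the speed of that propagation concrete, here is a deliberately naive allele-frequency sketch – our own toy model, not taken from the cited sources – of an idealized homing drive in an infinite, randomly mating population with no fitness cost, where heterozygotes convert the wild-type allele with a given probability. Under ordinary Mendelian inheritance the construct stays rare; with near-perfect conversion it approaches fixation within roughly a dozen generations.

```python
# Toy comparison of Mendelian inheritance vs a homing gene drive.
# Assumptions (all simplifications): infinite population, random mating,
# no fitness cost, deterministic dynamics. Illustrative only.

def next_frequency(p: float, conversion: float) -> float:
    """Drive-allele frequency after one generation.

    Heterozygote gametes carry the drive with probability (1 + conversion) / 2,
    so p' = p + p * (1 - p) * conversion; conversion = 0 is plain Mendelian
    inheritance (frequency unchanged in this neutral model).
    """
    return p + p * (1.0 - p) * conversion

p_mendelian, p_drive = 0.01, 0.01   # release the construct at 1% frequency
for generation in range(1, 26):
    p_mendelian = next_frequency(p_mendelian, conversion=0.0)
    p_drive = next_frequency(p_drive, conversion=0.95)
    print(f"gen {generation:2d}  mendelian {p_mendelian:.3f}  drive {p_drive:.3f}")
```

The asymmetry the toy model exposes – local release, global spread – is exactly what the precautionary argument above turns on.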

roguefarm-1

Uploading (Cloud / Memory / Thought)

BATOU:  This area was once intended as the Far East’s most important information center, a Special Economic Zone in its heyday. These towers survive as a shadow of the city’s former glory. Its dubious sovereignty has made it the ideal haven for multi-nationals and the criminal elements that feed off their spoils. It’s a lawless zone, beyond the reach of UN or E-police. Reminds me of the line, ‘What the body creates, is as much an expression of DNA as the body itself.’

TOGUSA:  But the same applies to beaver dams and spiderwebs.

BATOU:  I’ll take the coral reefs as my metaphor. Though hardly so beautiful. If the essence of life is information carried in DNA, then society and civilization are just colossal memory systems and a metropolis like this one, simply a sprawling external memory.

Cities have always been external memory systems – extended phenotypes for human culture and life. Yet one wonders what technological/urban design moves could work across the ‘smart cities’ of our future. Perhaps a ‘noisy, productive memory’ can be installed. The concept of ‘thalience’ developed by Karl Schroeder – a distributed, technology-based sentience, emerging from an Internet of Things and the massive addressability of total ubicomp – could yield countless non-human ontologies, acting in the service of a radical inhumanism, integrating subjects and objects that humans would never ‘see’, or understand as relatable.[9]

This inhumanism should be seen as a desirable outcome, going beyond the problem of ‘parroting’ in AI – a programmed cycle where repetition and variation act as generators of the fundamentals of intelligence. Instead of asking (as humans) what non-anthropocentric models of physics, science, thought, etc. are, the world itself would report, in its ‘clamor of being’, on the nature of the world.

Models of AI have oscillated between symbolic systems and functional systems, trying to reproduce either the generated meaning of other supposedly ‘machinic’ systems, or the format/infrastructure/hardware by which that meaning is generated. Both are imitation-based models of a kind of systemic logic in which the future ‘drives’ of intelligence always carry some kind of ‘drag’, a baggage, from a previous ‘body’.

Thalience imagines a future of thought and life without subject – similar to some of the discussions in the work of Katherine Hayles considering cyborgs and the potential ‘disappearance’ of subjectivity.[10] This is increasingly familiar territory in much of the industrialized world today through our reliance on social networks, via the algos and bots which develop criteria for the maintenance and management of id/entity.

‘Full automation’ may reflect a more generalized drive or observation regarding how processes of learning happen through repetition and incorporation. Learning processes and effects usually embed themselves through specific activations of matter: for example, the evolution of an absolutely ubiquitous and flat ontology (again, thalience) can be compared to biological precedents, and then by extension the emerging use of gene drives. Crucially here, we need alternate conceptual models of sapience and computation (forest fungal networks perhaps, or genetic drift over millions of years) to better grasp what kind of mind will emerge, and where it will reside.[11] To properly assess or evaluate automation we have to balance an empathic sense of human life and fragility with the recognition that we are merely cogs in a much larger, yet finely grained mechanism. As our definition of ‘entities’ becomes, at once, central and fluid, narrative-bound and extemporaneous, and infrastructural and unreasonable, new forms of ‘drive’ are born.

This raises the final question: how does the future of cognition relate to architectural and urban systems produced by the capillary matrices of our current technologies? Can it? If one argues that urban typologies have had a role of relay/resistance in human history (micro-programming of some sort, that modulated macro-historical processes), what will be the new environmental, meteorological and planetary figures of repetition that can accelerate a cognitive evolution across humans, cities, even planets?

ghost-1

References

[1] Peter Sloterdijk, ‘Cell Block, Egospheres, Self-Container’, Log, 10, 2007.
[2] Eliezer Yudkowsky, ‘Hard Takeoff’, Less Wrong, 2 December 2008. At: http://lesswrong.com/lw/wf/hard_takeoff/ (accessed 10 August 2016).
[3] Charles Stross, ‘Rogue Farm’, Best SF, 22 November 2011. At: http://bestsf.net/charles-stross-rogue-farm/ (accessed 10 August 2016).
[4] Gilles Deleuze and Félix Guattari, A Thousand Plateaus (University of Minnesota Press, 1987), p. 51.
[5] Donna Haraway, The Companion Species Manifesto: Dogs, People, and Significant Otherness (Prickly Paradigm Press, 2003).
[6] Jakob von Uexküll, A Foray into the Worlds of Animals and Humans (University of Minnesota Press, 2010 [1934]).
[7] Mike Orcutt, ‘Gene Drives That Tinker with Evolution Are an Unknown Risk, Researchers Say’, MIT Technology Review, 8 June 2016. At: http://www.technologyreview.com/s/601647/gene-drives-that-tinker-with-evolution-are-an-unknown-risk-researchers-say/ (accessed 10 August 2016).
[8] Nassim Nicholas Taleb et al., ‘The Precautionary Principle (with Application to the Genetic Modification of Organisms)’, Extreme Risk Initiative, 17 October 2014. At: http://arxiv.org/pdf/1410.5787v1.pdf (accessed 10 August 2016).
[9] Karl Schroeder, ‘Thalience’. At: www.kschroeder.com/my-books/ventus/thalience (accessed 10 August 2016).
[10] N. Katherine Hayles, How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (University of Chicago Press, 1999).
[11] Jeremy Hance, ‘Are plants intelligent? New book says yes’, The Guardian, 4 August 2015. At: https://www.theguardian.com/environment/radical-conservation/2015/aug/04/plants-intelligent-sentient-book-brilliant-green-internet (accessed 10 August 2016).
