Going Live

With the rise of computational power and networks, cognitive models developed and debated in the postwar decades have finally been put to work. Back then, a philosophical debate on the nature of consciousness raged alongside the burgeoning field of computer science theory, in which artificially intelligent machines served as a thought experiment to question humanity. Yet with the proliferation of data and the centralization of its archives, theoretical practice has moved from conceptual experiments to empirical tests.

The shift is decisive: with it, the criteria of philosophical judgment moved from abstract reason to pragmatic conjecture. Models of artificial intelligence are still put to the test, but they are evaluated according to their results, not their logical consistency or rationality. Artificial intelligence is both a theory and a practice, each of which evolved at different times and speeds. Testing artificial intelligence has only recently become possible thanks to the internet, which serves as its prime laboratory and space of experimentation. But increasingly, artificial intelligence is leaving the web, crossing the digital divide, and being applied to cities and the environment.

There are many analogies that can be drawn today between computers and cities. This formal similarity, as rough as it may be, has allowed network-based logics to begin to shape urban development and the way we (want to) live. Logistics is the general term for such processes. The computer stands apart in the history of machines in that it is not so much a tool as a toolbox, one whose contents can be invented with relative freedom and few constraints. It’s for all of these reasons that artificial intelligence is resistant to our critical faculties. It’s not so much that we don’t understand it, but paradoxically, that we finally can. Artificial intelligence is real. It’s not the perfect model of consciousness we thought we were aiming for, but it’s close enough.

It’s hard to comprehend the potential impact of artificial intelligence. As a system in need of deployment, AI is plagued by scale and ‘boundary issues’. Take basic income, for instance, which often accompanies recent calls for automation. Small-scale tests are currently being developed and rolled out in certain places around the globe, the Dutch city of Utrecht being one of the largest.[1] Yet as soon as the decision needs to be made of who gets it and who doesn’t – be it only nationals, residents, inhabitants, or whatever – a certain political violence must be exacted that undermines the rationale behind the effort and the scientific value of its results. Basic income really only works if it’s universal.[2]

Self-driving cars are another ‘prime’ example of the boundary issues facing artificial intelligence. While early public demos and private tests took place on closed circuits or even set tracks, tests have now begun in ‘dynamic’ urban environments. Last month, Uber reportedly rolled out its first fleet of self-driving vehicles in downtown Pittsburgh, Pennsylvania for an indefinite period (notably, most trials are finite).[3] Yet even before live road tests became a reality, a humbling revelation dawned about the challenges facing the successful implementation of such systems in our cities, our landscapes, and our daily lives: we are the problem. Or in other words, the problem of tomorrow is today.

Left to their own devices, self-driving cars hold immense promise to radically transform urban mobility patterns and upheave the cultures and economies that support them towards more sustainable and equitable ends. But beyond a debate over whether we should be conservative or liberal with the values and practices currently in place today, self-driving cars probe a more fundamental human anxiety about the degree of trust and responsibility we place in the machines we live with. An easy way to wash this problem away is to fabricate a boundary: if we were to remove all other things from the space in which the system is ‘live’ – non-self-driving cars and pedestrians, for example – at least some of this risk would be mitigated, if not eliminated altogether.

Like most ‘problems’ though, the challenges facing the future of AI are not as simple as erecting a wall. Hybridity is a fundamental problem, yes, but if we were to assume it to be the only one, the future itself would be subsumed and lost within the desire for a very particular kind of progress. Artificial intelligence has given rise to what has come to be known as ‘existential risk’ insofar as it throws humanity itself into question, much as its early models questioned what it meant to be human.[4] AI stands to rewrite the logic by which we relate to the support systems our lives depend upon. AI is predicated on ‘locking in’ a political cosmology of actors and the rights distributed among them. AI writes politics with code, yet increasingly into stone and flesh as well.

By throwing humanity into question, systems of artificial intelligence such as self-driving cars allow us to reflect upon some of its most fundamental questions: not whether to kill or not, but which life to take in situations where death is unavoidable.[5] Death has always been factored into infrastructure as a negative externality, and design has responded accordingly. Road barriers prevent people from crossing the highway, for example. But how does one implement a safety feature, emergency airbags for instance, in artificial intelligence? Profanity filters – basic script libraries easily drawn into any chatbot program – are applicable in only so few cases.[6]

Artificial intelligence has finally begun to develop according to models not based on the human brain. Perhaps for this very reason, and despite the fact that we have perhaps never had such a refined and deep understanding of it, there is great fear over our ability to control artificial intelligence. Accidents and mistakes do and will happen, and while we can be careful, we can’t really predict what will happen when AI systems go ‘live’, especially in increasingly large, complex, and fundamental domains. Yet we can speculate and think about what we would want to happen in innumerable instances. The anxieties surrounding artificial intelligence are thus not necessarily about control per se, but rather about our ability to respond to what the future brings (with or without the help of AI, I might add).

The technologies we use on a daily, hourly basis frame our relation to the world and our experience of it. This is nothing particularly new. We design machines, and machines design us. By using them we change ourselves. Yet today, machines are learning to change themselves based on how we use them. This is where that devilish concept of ‘intention’ comes in. Life is messy: unpredictable and dangerous, filled with sentiment. Social engineering has haunted the dreams of visionaries since the rise of the Soviet state, but never before have the potentials to engineer life’s folds been so great. The machines are not coming; they’re already here. We need to learn about machines because we learn from machines, because we make machines. We need to understand the power and potential they have in order to better form an idea of what we want to do with them, what we want them to do, and what we want them to do to us.


[1] Tracy Brown Hamilton, ‘The Netherlands’ Upcoming Money-for-Nothing Experiment’, The Atlantic, 21 June 2016. At: www.theatlantic.com/business/archive/2016/06/netherlands-utrecht-universal-basic-income-experiment/487883/ (accessed 26 August 2016).

[2] Alex Williams and Nick Srnicek, Inventing the Future: Postcapitalism and a World Without Work (Verso, 2015).

[3] Max Chafkin, ‘Uber’s First Self-Driving Fleet Arrives in Pittsburgh This Month’, Bloomberg Businessweek, 18 August 2016. At: www.bloomberg.com/news/features/2016-08-18/uber-s-first-self-driving-fleet-arrives-in-pittsburgh-this-month-is06r7on (accessed 21 August 2016).

[4] Raffi Khatchadourian, ‘The Doomsday Invention’, The New Yorker, 23 November 2015. At: www.newyorker.com/magazine/2015/11/23/doomsday-invention-artificial-intelligence-nick-bostrom (accessed 25 August 2016).

[5] Iyad Rahwan, Jean-Francois Bonnefon, Azim Shariff, et al., ‘Moral Machine’, Scalable Cooperation, MIT Media Lab. At: http://moralmachine.mit.edu (accessed 21 August 2016).

[6] Peter Lee, ‘Learning from Tay’s introduction’, Official Microsoft Blog, 25 March 2016. At: http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction (accessed 21 August 2016).
