Side note- Antihotel: a folly in five tropes

After visiting a notable residence in Kandy, Sri Lanka, I’m writing a piece about its characters (and taking a closer look at the artwork!) for an upcoming issue of Art + Australia this year. This drawing was made in parallel with the writing, and is one of the multitudinous pieces of ephemera I produce outside of the writing itself.

grey mirror Helga

Ex Machina: Postscript

Image source: https://melbourne.sciencegallery.com/sci-film-ex-machina

This text was presented for Science Gallery Melbourne on Friday 17 August 2018, as part of National Science Week. It was delivered as an introduction prior to a screening of the film ‘Ex Machina’ (2014) at Federation Hall, The Victorian College of the Arts, Melbourne.

“Before you settle in to watch the film, I want to get the cogs turning a little around major themes- no spoilers! If you haven’t seen the film before, don’t worry.
 
When I was a smaller person, my Dad made the somewhat naive error of insisting that I learn that game of supreme tactics, operating somewhere between excruciating deliberation and spurts of careless bravado- the game of chess, of course!
He quickly lost his enthusiasm when I started to win, and wanted to deflect my appetite for conquest onto another, perpetually willing competitor. So we loaded up the old Windows XP computer with a CD-ROM chess program- he was off the hook.
 
I had a head full of imaginings around the tale of Deep Blue vs Garry Kasparov, the infamous tournament loss of a human chess champion to a computer. Even so, I made sure I dialled the difficulty settings all the way back to ‘easy’. This was to be my first experience of friction against a perceived algorithmic injustice, an encounter with the disparity between human and non-human action taken towards a common objective- to win the game. Time and time again, as I played, rather than permit my win, the chess program would enforce the next best result: flashing up stalemate- stalemate- stalemate! I remember feeling outraged at the unfairness of it all, at what seemed like a cold ungraciousness in what I imagined to be my opponent’s persona.
 
Dad reminded me, though, that this was a fairly simple case of placing blame on the puppet masters- the developers who had written the chess program’s code. They were responsible for the unyielding, relentless tendencies of their creation. The actions of this program, hardly an autonomous thinker, were only a proxy for the whims of its makers.
 
This brings us to the film we’re here to watch today- Ex Machina. As you might suspect, the story doesn’t offer up such a simple case. This movie is actually one of my favourites in a fairly recent crop of thoughtful sci-fis. Beyond the fanciful, glossy elements of typical sci-fi- ultra-advanced tech, cyberpunk urban sprawl, and very tight jumpsuits- audiences are encouraged to ask big-picture questions of judgement, empathy and conscience beyond the human. These are what’s termed ‘near future’ concerns- not so far away that we’re talking about transcending our bodies and melding into the universal hive-mind, but along the lines of social media becoming a bit more all-consuming, our phones doing more for us than ever, or the internet of things being in pretty much everything: the hardware we already have in our pockets and the software we subscribe to with full consent, in the near future. In Ex Machina, there are three concerns around the future of tech that particularly struck me, and maybe you can think about these as you watch tonight.
 
The first arises from that story of stalemate in my chess adventures, and the agency to take certain actions. Where does the distribution of responsibility fall when the intelligence behind an action is made, rather than born? How can these actions be judged if there are involuntary, prescribed differences in someone’s moral compass and ability to behave? Is it enough just to create a faithful, perfect copy of our own psyches, suspended in wetware gel? Or should we take our tech obsession with upgrades and improvements further and engineer a better version of us- the next unmissable new release, a host of exciting new features, a symbol of advancement beyond the perfect? Machine learning researchers surveyed by a specialist unit at Oxford University answer with some conviction. They predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), driving a truck (by 2027), working in retail (by 2031) or working as a surgeon (by 2053). These researchers believe there is a 50% chance of AI outperforming humans in all tasks within 45 years, and of AI automating all human jobs within 120 years. (1)
 
The title of the film, as some of you are probably aware, relates to the phrase ‘deus ex machina’. The term comes from ancient Greek theatre and means ‘god from the machine’: an intervening character would be flown onto the scene, suspended high above the stage by a mechanical contraption- the machine- and would change the course of the story through powers often unexpected, or supernatural. In the case of tonight’s film, the title itself confronts us with the notion of the machine unencumbered, disconnected from a deity, master or overseer. Granted such elevated levels of autonomy, we are invited to consider an experience of sensation and action- indeed, a consciousness- as it might occur ‘ex machina’: from the machine… alone.
 
Now I recall a few occasions of being in the car with my partner, who assures me that he is a master of navigation, a genuine wayfinder in a world of directionless chaos. Inevitably, of course, we require a helping hand- courtesy of Google Maps. He will protest: “Oh- shut her up! She’s always wrong anyway, she doesn’t know these streets like I do.” Google’s voice navigation is set by default to a female voice, and it’s not a big stretch to imagine an extended persona that goes along with this- one that is specifically gendered. Sure, it’s just a voice- but like Scarlett Johansson’s character in the film Her, sometimes that disembodied voice might be all it takes to embed itself into cultural norms and expectations. If we tap into the archives of tech media cultures- sci-fi gaming, film, graphic novels, and the like- we encounter an entrenched history of female-gendered virtual assistants.
 
I grew up console gaming with a character called Cortana, an A.I. helper in the Halo franchise. Now, a virtual assistant called Cortana issues cheery pronouncements on the weather from the desktop of my Windows PC, in a twangy American woman’s voice. My new Samsung phone rather unfortunately pits an A.I. assistant called Bixby- Samsung proprietary- against the cohabiting, also female-voiced, Google Assistant. Both carefully avoid mention of their Apple counterpart Siri while attending to such menial tasks as turning my Wi-Fi on and off, or calling someone hands-free. These current-day assistants also orchestrate the latest in smart home technology- bona fide domestic goddesses that are not yet seen, only heard.
 
The rub lies in the continual casting of these artificial, abstracted, but identifiably female subjects into a deepening history of servitude and unquestioning compliance. This is not only problematic because it mirrors retrograde gendered arrangements of imbalance and inequality among humans, but because it establishes such a precedent among nonhumans- the sort of thing from which dystopias are dreamed. Much of the control and initiative to shape artificial service assistants or ambient intelligence lies with tech workers, a traditionally male-held domain. This is tied up with the tendency of contemporary society to applaud, glamorise and bestow incredible riches, cultural impact and power upon the tech heroes who design these systems. Gender politics are already at play in how we think about, speak about and regard service technologies today, and this will hardly get simpler as each iteration brings further complexities to the fore.
 
The final point I want to bring up around artificial intelligence is more immediately tangible, gaining greater exposure with every scandal that hits the headlines. This is the surveillance, algorithmic profiling and hijackable features that are embedded in, or accompany, the technology we use today. I have a lot of slightly paranoid discussions about how my internet searches suddenly generate relentless advertising, despite my interest in dog beds or toilet seats being largely a one-off. I’m not a toilet seat connoisseur or collector, you know- can’t we go back to pop-up ads that fill up the screen, strobe crazy colours and say I’m today’s lucky winner??!
 
Tech ethics researcher Katja de Vries argues that the way in which we experience ourselves necessarily passes through a moment of technical mediation- the apparatus we use, the adornments and additions to ourselves in a physical sense, and the way an online portrayal of ourselves defines a sense of who we are. She argues that there is growing societal concern around the impact of algorithmic, computational profiling on our sense of identity. (2) This is most obviously occurring online, where algorithms thrive on reconfiguring human identity, diagnosing an alleged infatuation with toilet seats on the flimsiest of evidence.
 
This sort of thing raises existential questions around who or what influences or manufactures a sense of self, and even opens up the possibility of preference-based discrimination through increasingly detailed, cross-referenced and nuanced examinations of our behaviour by artificial intelligence. With a massive source of information from which to compare us with many, many others like and unlike us- the internet- the way in which algorithms capture us is not a perfect reflection. Rather, a new twist is generated in our stories, and in who we appear to be from an outside point of view.
 
That all being said- I leave you now to enjoy the film!” 

References:

  1. Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. ‘When Will AI Exceed Human Performance? Evidence from AI Experts’. ArXiv:1705.08807 [Cs], 24 May 2017. http://arxiv.org/abs/1705.08807
  2. Vries, Katja de. ‘Identity, Profiling Algorithms and a World of Ambient Intelligence’. Ethics and Information Technology 12, no. 1 (1 March 2010): 71–85. https://doi.org/10.1007/s10676-009-9215-9.

The Plague: Panel

Transcript of a panel presentation for Melbourne Art Book Fair 2018, Art + Australia Journal Issue Two: The Plague.

“… I’m Jess. I operate out of the Victorian College of the Arts, over the road from the National Gallery of Victoria. I’ve worked as a physiotherapist for close to a decade whilst undertaking my studies in Fine Arts, and I’m now undertaking my PhD part time, which- to get a bit too technical- is an investigation of the agents and agencies in transdisciplinary digital art production. An agent is a person or thing that takes an active role or produces a particular effect in a given situation. For my contribution to this issue of Art + Australia, I pulled out one particular type of agent: the non-human- artificial intelligence, or A.I.

Who is affected? Well, it’s all of you humans out there! Those without an Internet connection are probably ok, though… In my essay, I tell the story of a semi-fictional A.I. whose function is to curate and share online media. This particular computer program- which doesn’t quite exist yet- uses a neural network, one of the most powerful machine learning architectures, modelled loosely on the animal or human brain. Such systems are able to predict human behaviour by gathering information over time- learning from experience, in other words. They do exist already- take self-driving cars, of recent controversy, for example. In my scenario, I wonder what would happen if an automated artificial intelligence replaced human labour in the tastemaking of visual media. What if such an entity needed to farm and control social media shares in order to retain resilience and presence- in fact, to ensure its own survival on the social web? In my piece for the journal, this entity begins to commandeer our online visual culture, feeding back to us an increasingly homogeneous diet of images and media.

Aside from the obviously dystopian, science-fiction tone of my story, its concerns have their roots in the here and now. These near-future speculations are the sort of thing we see in frequently bleak series like Westworld and Black Mirror, yet they are part of a potentially real horizon beyond humanity. This has some people a bit worried- and not just the conspiracy theorists! Take the Future of Humanity Institute, operating at the University of Oxford. Their entire mission is the study of existential risks- events that endanger the survival of Earth-originating, intelligent life or that threaten to drastically and permanently destroy our potential for realising a valuable future. This includes the machine-human impact (or cybernetic relationships) of future technologies, such as advancements in artificial intelligence. As Norbert Wiener, originator of the term ‘cybernetics’, has warned us: “To turn a machine off effectively, we must be in possession of information as to whether the danger point has come.” If any of you saw that recent short clip of the Boston Dynamics robot dog opening a door and marching out into the unknown- maybe it already has, and the end is nigh! With all this in mind, I wanted to work with the journal’s theme of Plague to conjure an insidious epidemic of activity, brewing up unseen on the peripheries before it begins to infiltrate mainstream awareness and affect human behaviour. In fact I’m speaking from experience, as this infiltration via fringe culture is how the story first came about.

It began before the editor Ted Colless asked me to write for the journal, sparked by a conversation we’d had years earlier, when I had made up a bit of jargon for an art project I was working on. The project was a kind of online choose-your-own-adventure, or web-page maze, and I wanted my title to suggest many choices, unfurling moments of consequence in a labyrinth of potential options- this being, as we are all no doubt familiar with, the experience of cruising the Internet, clicking on through the chains of links and offshoots in the everyday searches most of us do without much thought. I cobbled together the word “hypersition”, hoping it would encompass hyperactivity and multiple positions, and we could leave it at that.

Ted informed me, however, that alas, “hyperstition” was an existing neologism- a cobbled-together word- once notorious for its use by a particular group of people, though it had been a while since he had last heard about them. Their term “hyperstition”, so similar to my “hypersition”, was one among many experimental terminologies bandied about by a transdisciplinary group of rogue academics, theorists, writers, artists and philosophers operating around the University of Warwick during the ’90s. They called themselves the Cybernetic Culture Research Unit- CCRU, for short. I think their definition of the term “hyperstition” gives a good demonstration of their experimental and unorthodox ideas, articulated as follows:

Hyperstition:
1. Defined as an element of effective culture that makes itself real
2. Fictional quantity functional as a time-travelling device
3. Coincidence intensifier
4. Call to the Old Ones

Well, I was definitely curious then… As you can imagine, there was an almost mystical, cultish quality about the work that the CCRU did in the ’90s. It was deliberately opaque and shrouded in abstract language like this, but it is possible to identify distinct themes within much of their writing, conferences and events. They were strongly interested in ideas around cybernetics, science fiction and futurism, and contributed significantly to ideas around accelerationism: intensifying capitalism to an end point of totalisation or collapse.

It did initially seem that the collective working under the ’90s CCRU umbrella had disbanded and fragmented into obscurity, along with one of their most notorious ringleaders, the philosopher Nick Land. Ted told me back then that the enigmatic Land had either gone legitimately a little crazy- he did have a bit of a maverick reputation- or had simply staged his insanity before disappearing from the scene and from conservative academia. Ted then went on to recommend that I read Nick Land’s anthology, Fanged Noumena. Now, this book isn’t the easiest read- maybe we can use the term ‘read’ a bit loosely here. It’s full of cryptic glyphs and long tracts of dense and feverish writing; it’s more of a visual assault than a casual experience. If we could give this kind of thing a genre title, we could fit it into the category of Speculative Fiction.

Speculative Fiction is described by its advocates as a particular scheme of thinking and writing, one that reflects the shifting and dynamic nature of the universe itself. Speculative fiction can be viewed as a space in which thought can be unshackled from experienced, prescriptive reality. Other futures open up through our imaginings and ideas, giving us agency in creation. Hence, you might find that many pieces of speculative writing do not seem linked to stable rationality or straightforward literary structures. They push against the convention of conveying a nugget of condensed, linear meaning to the reader, and might not always be taken as examples of writing in good and proper taste. Luckily, Art + Australia’s platforms for writing allow us to test those borders of what might be considered good and proper taste.

Now- I took up Ted’s recommendation, tried reading Nick Land’s book, and wrote a blog post about it early last year. I had thought this was a relatively innocuous thing to do- perhaps a few people on Twitter would take an interest in such obscurities- and so I went ahead and shared the post to social media.

Like a swarm, like an epidemic, they started to come out of the proverbial woodwork. Attracted like deep-sea fish to the lure of hashtags and the mention of their guru’s name, Nick Land, a mass of affiliated people started surfacing within my Twitter feed. I was quite suddenly being offered clandestine links to anonymously maintained Google Drive accounts, and having strange manifestos sent my way. As Ben Woodard describes in this issue of Art + Australia, it seems that “the pockets of collectivity that seemed like such a novelty in the 1990s became infinitely pluralised in the 2000s”. I had come into contact with a previously unseen network of self-labelled technoscience enthusiasts, cyberpunks, rave nihilists, xenofeminists, occulture connoisseurs… a whole assortment of verbose niche dwellers, all rallying around the ideals of the CCRU. Indeed, it would seem that some of them are writers, artists and philosophers who have contributed to past and current articles for the Art + Australia journal. It can be very hard to tell, since they all have a tendency to mask themselves behind glitchy avatars and codified pseudonyms when posting online.

Through contact with this newly discovered subculture, I realised that Nick Land himself appears to be very much alive, active and posting tidbits from somewhere in Shanghai to a global community of the faithful, made up of acolytes new and old. As an unfortunate side effect, I also came into contact with what Ben’s article charts- the mass mobilisation and enablement of neoreactionaries and the alt-right movement online. Neoreactionaries are the ugly underbelly of extreme right-wing capitalism, a confusing and controversial spin-off from that CCRU accelerationist proposal. Behind their veil of anonymity, their vitriol is rife across social media, packaged up in meme imagery and hate speech on sites such as 4chan, Reddit and Twitter.

Eventually, this Twitter swarm brought me into contact with the artificial intelligence called Archillect. Archillect is a real-life image miner, and the muse for my contribution to the journal. Just like my exaggerated characterisation, Archillect actively sifts through the metadata of human image-sharing behaviours on social media. Described as a ‘she’ by her human creator, Murat Pak, she acts like a prosthesis for our Internet behaviours, sharing images she predicts we’ll like based on her learned experience. These fairly edgy, quite techno-cool images and gifs caught my attention as they were being spread and shared by this particular Twitter community. Like a virus, it seemed that these selections were contagious, spreading from point to point in a network of happy and willing supporters. For fans of a cybernetic future, the rise of AI’s agency and independence signals a possible entry into the posthuman era. Posthumanism has been described as a “huge shift in the nature of society and our bodies, a mutation brought about by the exposure to simulated images in the traditional media, and the slow penetration of daily life by gadgets from contact lenses to personal computers.” I don’t know about you all, but this infiltration of culture, behaviour and society all sounds pretty familiar- to me, it’s our lived experience as of right now…”

More on reading CCRU outputs, Fanged Noumena and Nick Land here.

References:

- “Hyperstition”: CCRU.net website (now defunct).
- Terranova, Tiziana. ‘Posthuman Unbounded: Artificial Evolution and High-Tech Subcultures’. In The Cybercultures Reader, edited by Barbara Kennedy and David Bell, 268–79. London: Routledge, 2000.
- Brits, Baylee, Prudence Gibson, and Amy Ireland, eds. ‘Introduction’. In Aesthetics After Finitude, 7–20. re.press, 2016.
- Wiener, Norbert. Cybernetics, or, Control and Communication in the Animal and the Machine. New York: M.I.T. Press, 1961, p. 176.