This September I was invited to participate in the Future Innovator’s Summit at the Ars Electronica Festival 2017. This was a wonderful opportunity to spend several activity-packed days discussing some of the biggest questions arising from the emergence of increasingly complex technologies with an interdisciplinary and multicultural group of artists, scientists, philosophers, researchers and designers.
I was delighted that the summit was planned as part of this legendary festival of digital, hybrid and emergent art that has been the venue where a lot of seminal and groundbreaking works, from Stelarc’s “Ear on Arm” to Joe Davis’ “Bacterial Radio”, have been showcased over the years.
The festival is held yearly in the beautiful Austrian city of Linz, which sits on the Danube River. The festival takes place in several locations around the city, but the main venue, POSTCITY, is a former mail distribution center that is mysterious, spacious and labyrinthine, and over the course of the festival it is transformed into a futuristic exhibit filled with mind-blowing installations, performance stages and conference halls. In the past few years, I have visited Linz several times to present research at the International Conference on Computers Helping People with Disabilities (which is unrelated to the Ars Electronica Festival and is usually held at the Johannes Kepler University).
The theme of this year’s festival was Artificial Intelligence: The Other I. In this post, I will describe some of the highlights of my experience (including a brief description of the Future Innovator’s Summit).
Future Innovator’s Summit
The Future Innovator’s Summit (FIS) is an initiative, under the direction of Dr. Hideaki Ogawa, to create an ad-hoc think-tank that runs in parallel with the Ars Electronica Festival and in which participants spend several days exploring creative questions and themes related to the Future. It is supported by the Japanese firm Hakuhodo and the Ars Electronica Futurelab.
This year four interdisciplinary teams worked on the topics of Future Humanity, Future Work and Future Home. I was in one of the two teams focused on the future of humanity.
We started by thinking about three questions:
- How can we be more human?
- How can a machine love a human? And vice versa?
- How can we live as a multiple “I”?
Prior to the summit, we had all sent in some thoughts about these questions to the FIS team. Please see the Appendix below for my extended answers.
Our team consisted of myself, Imre Bard (philosopher/researcher), Xin Liu (media artist/engineer), Alex Reben (roboticist/artist), and Pinar Yoldas (artist/researcher). During the summit, our team (with support from facilitator Fran Miller) decided to focus on a question about the ethics of emerging technologies and specifically Artificial Intelligence:
- What should AI (not) decide for us?
We approached this question from a participatory perspective: we interviewed about 40 people at the festival and compiled their answers in a short video. We then showed this video at a final presentation and provided our own thoughts on the topic. Our interviewees offered many excellent ideas. Some highlights included: “Human intelligence should not become obsolete”; “The machines should not repeat our mistakes”; and “We should not lose our instincts.” We hope that this video was also a small effort towards increasing reflection about the implications of our attitudes towards ourselves and the technologies we build.
The video is being edited by the FIS team and I will put a link to the video once it is posted. There might also be a short documentary produced about the summit that I will also share here once it is available.
Artificial Intelligence: The Other I
“Art helps science understand itself.” -Joe Davis
I walked into a dark abandoned railway hall and came face to face with a man improvising a duet with a disembodied neural network made from his own cells! “CellF” was one of the opening performances, in which media artist Guy Ben-Ary had taken some of his own skin cells through a biopsy and transformed them into stem cells using induced pluripotent stem cell technology. These stem cells were then turned into an autonomous neural network that could respond to incoming sound and produce outgoing signals. The artist played a tune on the piano; the disembodied neural network responded with sound signals of its own, to which the artist at the piano responded in turn.
Another performance with a more indirect relationship to AI was Corpus Nil, in which the body of the artist (Marco Donnarumma) was connected to several biosensors that communicated signals from his body to a digital system, which used a series of algorithms to process them and turn them into musical sounds. The algorithms would “learn” from the input signals and continually adjust the way they translated them into music. In the words of the description on the project’s website: “The player cannot control the instrument, but only learns how to affect it and be affected by it.”
These are just two examples of the many provocative art projects that explored the implications of non-human intelligence through creative expression and art experience. Others included Ad Infinitum, a parasitic robot that captured human arms and forced them to turn a lever by giving them electric shocks, and Samantha, a talking sex doll with multiple personalities. In addition to the art projects, there was a series of academic lectures on the ethics and aesthetics of intelligent systems by diverse participants, including Hiroshi Ishii from MIT and Zenbo Hidaka, a Japanese Zen head monk and AI expert.
I have been fascinated by BioArt, an art practice that uses living media as material, for many years and have often conceptualized my Rafigh project as an edible bioart sculpture (as presented at TEI’s art track in Munich in 2014). Ars Electronica is the premier venue for this art movement and I wasn’t surprised to see many excellent pieces here.
One of the most interesting projects, Regenerative Reliquary by Amy Karle, involved a 3D printed exoskeleton of a human hand that was covered with living stem cells that were growing tissue around the scaffolding during the festival.
In another project, I’m Humanity, Etsuko Yakushimaru, a famous pop star in Japan, pushed the idea of how to record and distribute music to an extreme. She had encoded her song of the same name as a DNA sequence embedded in living cells that regenerate rapidly and have the potential to outlive humans! So she had millions of copies of her song living in a fridge at the conference. A caveat was that with each generation of cells there was the possibility of natural genetic mutation, a phenomenon that might alter the encoded music and make it evolve beyond the artist’s original creation. The song lyrics reflected this: “Stop the evolution — don’t stop it”.
These are just a few examples of the amazing bioartworks presented at the festival this year. Other examples included Until I Die by ::vtol::, in which large samples of the artist’s blood were collected over 18 months and used as batteries to power an interactive sound installation, and K-9_topology, in which artist Maja Smrekar pushed the boundaries of her relationship with her dog companions through a series of projects, including Hybrid Family, which involved her breastfeeding her dog for three months, leading to increased levels of oxytocin in her body, and ARTE_mis, in which she enucleated one of her own ovum cells and used it as a host for a somatic cell of her dog, resulting in a hybrid human/dog cell.
If these projects pique your interest and you want to learn more about bioart, I recommend Bio Art: Altered Realities by William Myers for an overview of a lot of projects and artists in this space, and/or Bioart and the Vitality of Media by Robert E. Mitchell for a more theoretical treatment of the subject.
In addition to the above highlights, there were many other amazing projects at Ars Electronica. I particularly enjoyed the exhibit in the basement of POSTCITY, which included immersive sound installations such as Reading Plan, consisting of a chorus of disembodied actors reading passages from automatically page-turning manuscripts, and Fidgety, a sound installation consisting of a series of speakers and dedicated sound channels remixing the sound of the artist’s heartbeat in creative rhythmic ways.
Other highlights included a feminist book and CD compilation about noise music in Southeast Asia, called “Not Your World Music: Noise in South East Asia“, and “A Study into 21st Century Drone Acoustics“, which looked at the negative psychological impact of living in fear of bomb-carrying drones. This latter work was developed as a “field guide”, similar to a bird-watching guide, with a soundtrack that helped people identify and avoid incoming drones in conflict zones (e.g., parts of Afghanistan).
Appendix: Some thoughts about the future of humanity
I will conclude this post with an appendix that includes my expanded thoughts on the three original questions by the FIS team.
- How can we be more human?
We can be more human by embracing our hybrid existence. I believe the line between the artificial and natural is socially constructed and we are already cyborgs whose lives are entangled with digital technologies like computers and non-digital technologies like bicycles or clothing. I believe we can be more human by embracing and shaping this reality.
Technology is like a mirror that we can use for self-reflection: all the fears that we project onto technology (and AI) have roots in our experiences with each other. For example, we are afraid that once AI is mature enough it might make us obsolete or oppress us. These are in fact things that humans have done to each other for millennia. In the face of technological changes that differentiate us from each other based on our access to and experience with different technologies, it is imperative to think about what future diversity would look like. I’m not worried about a society in which robots and humans have to co-exist. I’m more worried about a society where the distance between us becomes greater due to unequal access to technology. If we think of nature not as separate from us but as something that is part of us, we can start thinking about how to protect and respect it rather than control or use it.
- How can a machine love a human? And vice versa?
I’m actually really interested in the emotional reactions of people to non-human intelligence, especially technologically-mediated intelligence. In order to investigate some of these dynamics, I conducted a research project, Rafigh, in which I designed and evaluated a digital living media system that consisted of a digitally-augmented mushroom colony such that the mushrooms responded to human behavior over a long period of time. In the evaluation, we observed that children and adults responded positively to the system and it created feelings of empathy, responsibility and curiosity in them. Additionally, it supported communication and collaboration between family members. This project shows that it is possible to create hybrid systems that emotionally engage users; we will have to see if these results translate to purely digital systems.
Previous research by Sherry Turkle (and many others) shows that digital technologies that aim to replace us (e.g., emotional robots and virtual caregivers, which she terms “relational objects”) can be confusing to people and negatively impact human relationships. The technologies that she finds most damaging are ones that aim to replace non-existing or deficient human relationships rather than augment healthy ones. Maybe we should instead focus on developing technologies that bring people together and help alleviate some of their anxieties and stress, such that they have more time and energy to explore the human emotions of empathy and love. If technology allows us to worry less than before about the basic questions of survival, such as security, food or health, will we finally learn how to love without being afraid, or will we still wallow in our fears? What is the future of human relationships? How can machines help us love each other and ourselves? Rather than making us obsolete, how can machines help us become more valued and loved by each other?
- How can we live as a multiple “I”?
This question raised more questions than answers in my mind: if it is indeed possible to transcend our limited cognition, emotion and agency in the future through technological means, then what happens to the existing patterns of injustice and inequality? Would they be amplified or challenged in such a future? Would we drift further apart from each other or become closer?
There is always fear and anticipation when encountering the Other. Is this Other our enemy or our friend, our ally or our rival? I believe the degree of our fear or anticipation relates to how much we sense we might lose control or what we would learn about ourselves through the Other.
Imagine how much self-knowledge can come from falling in love, having a child, or learning to connect with people from another culture. If we fail in some of these situations (e.g., if our trust is betrayed or we are left out and hurt), we tend to think of others as threats rather than as possibilities for finding strength and getting acquainted with an unexplored side of ourselves. Will we fight, run or love? This ancient idea of encountering knowledge by stepping into the fire of experience without fear is being repeated today. Why should we have a future world of winners and losers? What if we have a world of dancers, where we sometimes dance the winners’ dance and sometimes the losers’ dance, and eventually we rest in the knowledge that this life is a dance rather than a final irreversible Truth?