I’m developing an XR soundwalk for the La Jolla Playhouse Without Walls Festival, premiering April 27-30, 2023.
This project, Cleaning the Stables, is one chapter of the Herakles Project: it adapts his 5th labor as an immersive soundwalk, reimagining Euripides' story as an individualized XR audio experience with accompanying visuals delivered through viewers' smartphones and headsets. Working from field recordings and photogrammetric models documenting the distinct features of Nebraskan and Southern Californian agriculture and aquaculture, the project surfaces structural similarities and illuminates parallel concerns in the twin breadbaskets of the American Midwest and Southwest. It juxtaposes approaches to precision agriculture, cattle operations, and water rights management in these distinct geographic regions.
Mapped over the site of the Rady Shell, this locative media piece invites the listener to explore a speculative future of geoengineering gone wrong, navigating the narrative space of the work atop the physical space of the festival. Herakles is recast as an agricultural robot; water is scarce, sourced from the dwindling Ogallala Aquifer and Colorado River; "the moving sand dune" of the Midwest has joined its brethren on the western coast; and we are left to reconcile the real costs and possible futures of livestock, agriculture, and scarcity in a changing environment.
The Herakles Project adapts the 12 Labors from Euripides as a series of interlinked XR/AI/VR pieces. The project surfaces the existential question of motive: were the labors a form of expiation for the murder of his wife and children, or were they the cause of the post-traumatic stress – violence begetting violence – that induced the murder of his family? These questions take on new significance in a contemporary frame.
On February 8th, 3:30-5pm Pacific, I’ll be speaking at the UCSD Design Lab.
Design@Large: Designing Human-AI Systems for Creativity and Beyond
This speaker series invites you to engage with the promise and perils of the next big thing in machine intelligence. We invite speakers from the worlds of art, design, technology, and policy to discuss these impressive new capabilities, their limitations, and how we, as designers and students, could harness them to reach new heights in art, music, dance, architecture, fashion, creative writing, and programming.
Announced Speakers: David Danks, Haijun Xia, Robert Twomey, S.B. Divya, James Yu, Michael Terry, and Memo Akten.
This will be in person and live on Zoom. Hope to see you there!
Please join us on Saturday, December 3rd for ON DISPLAY GLOBAL2022 at noon Pacific Standard Time!
The performance is a silent, hour-long human-robot work featuring two robots performing with Hortense Gerardo, a human performer.
We encourage audience members to join the Zoom room to emulate a live sculpture court with audience members interspersed. You will have the option to pin performers, write in the chat, focus on one group, join the sculpture court yourself, or simply observe. https://nyu.zoom.us/…/tJAvdumvqD8vHdQOcR74HzqThzC61X98cdVh
The live stream is a great place to observe the sculpture court as a whole. Please note that the live stream will only show the first page of the Zoom screen. We hope you enjoy our performance!
[Please feel free to share this information widely!]
Our paper, Beyond Classification: The Machinic Sublime, has been published in the Proceedings of POM21. This paper details the human/non-human roundtable performance we developed for Politics of the Machines (POM) 2021. It was a tremendous pleasure to work with Joel Ong, Eunsu Kang, and Kangsan Joshua Jin on the project and paper.
Let us know what you think!
Joel Ong, Robert Twomey, Eunsu Kang, and Kangsan Joshua Jin. Beyond Classification: The Machinic Sublime. In Proceedings of POM 2021. DOI: 10.14236/ewic/POM2021.50
Beyond Classification: The Machinic Sublime (BCMC) emulated an academic roundtable discussion with the authors and three machinic/more-than-human guests. Part performance, part intervention within the context of an academic conference, BCMC introduces a novel and explicitly visible strategy of co-dependency among an array of diverse intelligences through a connected loop of human, machine, and animal agencies. The meteoric rise of AI in recent years can be seen as part of a larger tendency toward deeper, more opaque data collection and analysis techniques that form the dense substratum beneath the proliferation of human-computer interfaces today. To a human developer, the most striking qualities of generative AI are its vastness, non-determinism, and infinitude: explicit themes and qualities of a machinic 'sublime'. How can a human artist/programmer sensibly navigate this multi-dimensional space of latent meaning?
This intervention is an experimental roundtable discussion/performance via web conferencing, a new kind of Turing Test in which success is found not in the plausible simulation of human consciousness through speech, but in the expression of diverse intelligences through new forms of language. In this multi-agent exchange, human interlocutors and non-human partners debate the possibility of a machinic sublime. Together, these interlinked discussions become an emergent system. In this roundtable format, audience interventions are welcome.
Brought to you by Rebecca Shapass (MFA ’23) and Inbar Hagai’s (MFA ’24) curatorial project Touchstone Cinema, with support from the Sylvia & David Steiner Speaker Series, “Vision Machines” is a lineup of moving-image works focused on artists working with/against robots that see. This program reflects upon CMU’s complicated legacy as a major contributor to the field of computer vision.
Works in the program include:
Kuka (2016) by Lyndsay Bloom
Conversations with Bina48 (2014–ongoing) by Stephanie Dinkins
Eye/Machine II (2002) by Harun Farocki
Mine (2009) by Liz Magic Laser
Rover (2017) by Robert Twomey
October 7, 2022, 7:00–8:30 PM
College of Fine Arts, Room 111, Carnegie Mellon University, 4919 Frew Street
My paper, Three Stage Drawing Transfer, has been published in the Proceedings of the ACM on Computer Graphics and Interactive Techniques. I presented it at SIGGRAPH 2022 in Vancouver, BC, as part of the Art Papers program. It explores creative AI, human imagination, and embodied interaction through an interactive drawing performance.
Robert Twomey. 2022. Three Stage Drawing Transfer: Collaborative Drawing Between a Generative Adversarial Network, Co-robotic Arm, and Five-Year-Old Child. Proc. ACM Comput. Graph. Interact. Tech. 5, 4, Article 36 (September 2022), 7 pages. https://doi.org/10.1145/3533614
I presented Three Stage Drawing Transfer as part of the Demos program at ISEA 2022 in Barcelona, Spain. It was wonderful interacting with the attendees: we had great conversations about creative AI, child art, and human-robot interaction.