All posts by Robert

Radio PLAY at ISEA2022

a black and white photo of radio performers rehearsing in a large recording studio.
Orson Welles shown in rehearsal directing his Mercury Theatre on the Air troupe, 1938. (Photo courtesy of Photofest, Inc.)

Together with Ash Smith, Patrick Coleman, and Stephanie Sherman, I will be conducting a day-long workshop on AI co-writing with GPT-3, culminating in a live internet radio play for ISEA 2022.

More information here:

June 11, 2022 in Barcelona, Spain

Embodied Code at CHI22

Embodied Coding Environment, showing annotations, game objects, and nodes/edges.

We will be presenting our Embodied Code project at CHI ’22 as part of the Interactivity program. In both online and in-person formats, we will demo the Embodied Coding Environment and take participants through a short (5-minute) experience with the embodied coding system.

Stay tuned for more info, and to read our extended abstract:

Performance: Artificial Rural Imagination

Carson Center hosts research Flyover Summit Oct. 21–22 (Hixson-Lied College of Fine and Performing Arts, Nebraska)

For the FLYOVER Summit at UNL, together with Ash Smith and Stephanie Sherman, we produced a speculative machine narrative of the event: 11 humans, 1 neural net, and billions of anonymous textual tokens generating micro-narratives to accompany the talks throughout the day. An AI writer’s room.

The Rural AI took over the Carson Center feed for 9 hours on 10/21. Find our micro-narratives and speculative vignettes between:

Start of event:

End of event:

POM21 Berlin – Beyond Classification

A network diagram showing transitions between images in an audiovisual piece
State Transition Diagram for GPT text generation and CLIP/BigGAN image translations

Joel Ong, Eunsu Kang, and I presented an intervention for Politics of the Machines 2021 in Berlin. With three human and non-human pairs—Joel with his Euglena Gracilis (Emotional Sentiment/Light, Text), Eunsu with her Violet (Viola/Speech), and me with my text and image agent (GPT-3 and CLIP/BigGAN/CMA-ES)—we discussed the machinic sublime in a performative roundtable.

Eunsu Kang, Violet (AI), Joel Ong, Euglena (AI), Robert Twomey, Artificial Imagination-1 (AI) in performance.

I look forward to further development of these projects and ideas with the group.

From the POM website:

POM21 Intervention #3

ICER21 Workshop on Embodied Computational Reasoning

Exploring Virtual Reality and Embodied Computational Reasoning

A workshop for ICER 2021, the ACM International Computing Education Research conference.

Date: Saturday, August 14, 11:00 AM – 1:00 PM PDT

Description: The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies hold the potential to transform how we teach and learn computational concepts and coding. This workshop examines how AR/VR can be leveraged in computer science (CS) education within the context of embodied learning. It has been theorized that abstract computational concepts, such as data, operators, and loops, are grounded in embodied representations that arise from our sensorimotor experience of the physical world. For instance, researchers have shown that when CS students describe algorithms, conditionals, and other computational structures, they frequently gesture in ways that suggest they are conceptualizing interactions with tangible objects. Can learning to code become a more intuitive process if lessons take into account these types of embodied conceptual phenomena? This two-hour workshop explores 1) theories of embodiment and 2) new and existing tools and practices that support embodied CS learning — ranging from Papert’s LOGO turtles to a preview of an innovative 3D spatial coding platform for AR/VR under development by our group. Other open-source and commercially available resources will also be examined through streamed video demos and a hands-on break-out session for participants.


Details: See our workshop page at

NU Grant: Design Innovation Core

A photo looking up at the front of the building showing the name of the Johnny Carson Center for Emerging Media Arts.
Johnny Carson Center for Emerging Media Arts

Together with Megan Elliott (director), Jesse Fleming, and Ash Smith, we have won a ~$500k grant to establish a Design Innovation Core as one of the Research Core Facilities in the University of Nebraska system. This will allow us to scale up internal and external research collaborations with the unique capabilities at the Johnny Carson Center.

More details soon!

SIGGRAPH Frontiers Workshop – Measurable Creative AI

A cover slide with the name of the talk. It is yellow with a robot head.

I chaired a workshop on Measurable Creative AI as part of SIGGRAPH Frontiers, including moderating a live Q&A during the conference. We had an amazing lineup of presenters:

  • Kenric Allado-McDowell – K Allado-McDowell is a writer, speaker, and musician. They are the author, with GPT-3, of the book Pharmako-AI, and are co-editor, with Ben Vickers, of The Atlas of Anomalous AI. They record and release music under the name Qenric. Allado-McDowell established the Artists + Machine Intelligence program at Google AI. They are a conference speaker, educator and consultant to think-tanks and institutions seeking to align their work with deeper traditions of human understanding.
  • Stephanie Dinkins – Stephanie Dinkins is a transmedia artist who creates platforms for dialog about race, gender, aging, and our future histories. Dinkins’ art practice employs emerging technologies, documentary practices, and social collaboration toward equity and community sovereignty. She is particularly driven to work with communities of color to co-create more equitable, values grounded social and technological ecosystems. Dinkins is a professor at Stony Brook University where she holds the Kusama Endowed Professor in Art.
  • Ethan Edwards – Ethan Edwards is a researcher in Experiments in Art and Technology (E.A.T.) at Nokia Bell Labs, an initiative which fuses art with engineering to humanize technology. He works directly with scientists and artists to help facilitate collaboration and builds technology which crosses these domains. He is a creative technologist, having graduated with an MFA in Sound Art from Columbia University and has had work featured in museums, galleries, and performances around the world. His independent artwork explores traditional aesthetic themes in radically new media contexts. He has designed and led numerous large scale exhibits at Nokia Bell Labs.
  • Eunsu Kang – Eunsu Kang is an artist, a researcher, and an educator who explores the intersection of art and machine learning, one of the core methods for building AI. She has been making interactive art installations and performances, teaching art-making using machine learning methods, and recently looking into the possibility of creative AI. She is also a co-founder of Women Art AI collective.
  • Sang Leigh – Sang Leigh is an Assistant Professor in the School of Industrial Design at Georgia Institute of Technology. His research focuses on augmenting humans and their creativity through forming a symbiotic and tactile relationship between humans and computers. His Machine Poetics research group investigates novel user interfaces, interactive programming, and human-robot interaction for enhancing our creative processes and learning.

Pre-Recorded Panel – available On Demand, August 1 to registered SIGGRAPH attendees.

Live Discussion – Wednesday, August 11. 9am – 10am PDT/ noon-1pm EDT.

Post-Conference – We will publish all materials on our mCreativeAI website after the event:

RSS 2021 Workshop on Robotics x Arts

J responding to robot arm drawing

I’m pleased to present a new artwork (Three Stage Drawing Transfer) at the RSS 2021 Workshop on Robotics x Arts!

I’ll also be a panelist for the discussion with Ken Goldberg, Kim Baraka, Patricia Alves-Oliveira, and Eunsu Kang. After years of working with mechatronics and various kinds of automation, I’m really looking forward to the conversation with this brilliant group of panelists!