All posts by Robert

SIGGRAPH Art Paper 2022

My paper Three Stage Drawing Transfer has been published in Proceedings of the ACM on Computer Graphics and Interactive Techniques. I presented it at SIGGRAPH 2022 in Vancouver, BC, as part of the Art Papers program. The piece explores creative AI, human imagination, and embodied interaction through an interactive drawing performance.

Robert Twomey. 2022. Three Stage Drawing Transfer: Collaborative Drawing Between a Generative Adversarial Network, Co-robotic Arm, and Five-Year-Old Child. Proc. ACM Comput. Graph. Interact. Tech. 5, 4, Article 36 (September 2022), 7 pages.

SIGGRAPH Art Gallery 2022

My project Three Stage Drawing Transfer was selected for the SIGGRAPH 2022 Art Gallery program in Vancouver, BC, August 8–11, 2022.

I did an interview about the project for the ACM SIGGRAPH Blog.

I also gave a LABS session and participated in two panels.

Radio PLAY at ISEA2022

Orson Welles in rehearsal directing his Mercury Theatre on the Air troupe, 1938. (Photo courtesy of Photofest, Inc.)

Together with Ash Smith, Patrick Coleman, and Stephanie Sherman, I will be conducting a day-long workshop on AI co-writing with GPT-3, culminating in a live internet radio play for ISEA 2022.

More information here:

June 11, 2022, in Barcelona, Spain

Embodied Code at CHI22

Embodied Coding Environment, showing annotations, game objects, and nodes/edges.

We will be presenting our Embodied Code project at CHI’22 as part of the Interactions program. In both online and in-person formats, we will demo the Embodied Coding Environment and take participants through a short (five-minute) experience with the embodied coding system.

Stay tuned for more info and to read our extended abstract:

Performance: Artificial Rural Imagination

Carson Center hosts research Flyover Summit, Oct. 21–22 | Hixson-Lied College of Fine and Performing Arts | Nebraska

For the FLYOVER Summit at UNL, Ash Smith, Stephanie Sherman, and I produced a speculative machine narrative of the event: 11 humans, 1 neural net, and billions of anonymous textual tokens generating micro-narratives to accompany the talks throughout the day. An AI writer’s room.

The Rural AI took over the Carson Center feed for 9 hours on 10/21. Find our micro-narratives and speculative vignettes between:

Start of event:

End of event:

POM21 Berlin – Beyond Classification

A network diagram showing transitions between images in an audio visual piece
State Transition Diagram for GPT text generation and CLIP/BigGAN image translations

Joel Ong, Eunsu Kang, and I presented an intervention at Politics of the Machines 2021 in Berlin. Each of us was paired with a non-human collaborator: Joel with his Euglena gracilis (emotional sentiment/light, text), Eunsu with her Violet (viola/speech), and I with my text and image agent (GPT-3 and CLIP/BigGAN/CMA-ES). Together we discussed the machinic sublime in a performative roundtable.
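The text and image agent pairs GPT-3 output with a latent search: CMA-ES proposes BigGAN latent vectors, and CLIP scores the decoded images against the generated text. The toy sketch below shows only the search loop, with a simplified (1+λ) evolution strategy standing in for full CMA-ES and a stand-in scoring function in place of the CLIP/BigGAN stack; all names and parameters here are my own illustration, not from the piece.

```python
import random

def clip_score(latent):
    """Stand-in for CLIP text-image similarity. In the actual pipeline this
    would decode `latent` with BigGAN and score the image against the GPT-3
    text using CLIP; here we just reward latents near a fixed target."""
    target = [0.5] * len(latent)
    return -sum((a - b) ** 2 for a, b in zip(latent, target))

def evolve_latent(dim=16, sigma=0.5, offspring=8, steps=200, seed=0):
    """Simplified (1+lambda) evolution strategy standing in for CMA-ES:
    sample perturbed children, keep the best if it improves the parent,
    and shrink the step size when no child does."""
    rng = random.Random(seed)
    latent = [rng.gauss(0, 1) for _ in range(dim)]
    best = clip_score(latent)
    for _ in range(steps):
        children = [[x + rng.gauss(0, sigma) for x in latent]
                    for _ in range(offspring)]
        champion = max(children, key=clip_score)
        if clip_score(champion) > best:
            latent, best = champion, clip_score(champion)
        else:
            sigma *= 0.97  # anneal step size on failure
    return latent, best

latent, score = evolve_latent()
print(round(score, 3))
```

In the performance setting, each improved latent yields a new image frame, which is what the state-transition diagram above traces between text and image states.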

Eunsu Kang with Violet (AI), Joel Ong with Euglena (AI), and Robert Twomey with Artificial Imagination-1 (AI) in performance.

I look forward to further development of these projects and ideas with the group.

From the POM website:

POM21 Intervention #3

ICER21 Workshop on Embodied Computational Reasoning

Exploring Virtual Reality and Embodied Computational Reasoning

A workshop for ICER 2021, the ACM International Computing Education Research conference.

Date: Saturday, August 14, 11:00 AM – 1:00 PM PDT

Description: The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies wield the potential to transform how we teach and learn computational concepts and coding. This workshop examines how AR/VR can be leveraged in computer science (CS) education within the context of embodied learning. It has been theorized that abstract computational concepts, such as data, operators, and loops, are grounded in embodied representations that arise from our sensorimotor experience of the physical world. For instance, researchers have shown that when CS students describe algorithms, conditionals, and other computational structures, they frequently gesture in ways that suggest they are conceptualizing interactions with tangible objects. Can learning to code become a more intuitive process if lessons take into account these types of embodied conceptual phenomena? This two-hour workshop explores 1) theories of embodiment and 2) new and existing tools and practices that support embodied CS learning — ranging from Papert’s LOGO turtles to a preview of an innovative 3D spatial coding platform for AR/VR under development by our group. Other open-source and commercially available resources will also be examined through streamed video demos and a hands-on break-out session for participants.
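The LOGO turtle mentioned above is a compact example of the embodied framing the workshop describes: program state is literally a pose in space, and each command reads as a bodily action (step forward, turn left). A minimal sketch of that idea, with names and structure of my own invention rather than from the workshop materials:

```python
import math

class Turtle:
    """Minimal LOGO-style turtle: state is a position plus a heading,
    and commands are phrased as bodily movements."""

    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees, facing east

    def forward(self, dist):
        # Step in the direction the body is currently facing.
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def left(self, angle):
        # Rotate in place, as a person would turn.
        self.heading = (self.heading + angle) % 360

# The embodied reading of LOGO's "repeat 4 [forward 10 left 90]":
# walk a square and end up back where you started.
t = Turtle()
for _ in range(4):
    t.forward(10)
    t.left(90)
print(abs(t.x) < 1e-9 and abs(t.y) < 1e-9)
```

The loop is the kind of structure students reportedly gesture about as if manipulating a tangible object, which is the intuition the AR/VR coding platform aims to build on.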


Details: See our workshop page at