I lead this fabulous team with Patrick Coleman, Ash Smith, Ryan Schmaltz, and Jinku Kim for the NYC Media Lab and Bertelsmann Creative Industries and AI Challenge. Our project employs AI for pre-production (co-creation and world-building) and performance (scripts as live artifacts, real-time AI).
It’s fascinating to work with the other three teams (startups) and to connect with BMG, Penguin Random House, RTL, and others. I’m grateful for the opportunity!
My SIGGRAPH paper, “Communing with Creative AI,” was published in the Proceedings of the ACM on Computer Graphics and Interactive Techniques. Download it here:
Twomey, R. “Communing with Creative AI.” SIGGRAPH 2023 Art Papers, Proceedings of the ACM on Computer Graphics and Interactive Techniques, Volume 6, Issue 2, August 2023, Article No. 28, pp. 1–7. https://doi.org/10.1145/3597633
Over on the No Proscenium podcast, Laura Hess and Noah Nelson had a nice breakdown of our Cleaning the Stables piece from La Jolla Playhouse’s Without Walls Festival 2023. They called it “really joyous, really delightful, irreverent with a hint of poignancy around the environmental themes.”
I’m delighted to speak about GPT, LLMs, and Generative AI at this event on April 19th at the Halıcıoğlu Data Science Institute and San Diego Supercomputer Center.
My panel, “Implications for Healthcare, Business, Research, and Art,” starts at 10:55 PDT / 12:55 CDT and is available both online and in person.
In this presentation, Ash Eliza Smith and Robert Twomey will discuss their collaborative work developing systems for collective co-authorship, speculative world-building, and performative AI. They will describe recent projects including Artificial Rural Imagination for Flyover Country, the AI Radio Play, Cleaning the Stables for the Herakles Project, and the Theater of Latent Possibilities. Together, these projects explore generative AI for real-time performance—creating live, participatory experiences hinging on the improvisatory dynamics of human-machine co-authorship.
Ash Eliza Smith is an artist-researcher who uses storytelling, worldbuilding, and speculative design to shape new realities. With performance as both an object and a lens, Smith works across art + science, between fact + fiction, and with human + non-human agents to re-imagine past and future technologies, systems, and rural-urban ecologies. She is an Assistant Professor of Emerging Media Arts at the University of Nebraska-Lincoln. Smith previously attended the Performance Studies program at NYU’s Tisch School of the Arts and the Visual Arts program at the University of California, San Diego, where she worked as an affiliate of the UCSD Design Lab, a lecturer in the Culture, Art, Technology program, and a speculative designer in residence for the inaugural Birch Fellowship at Scripps Institution of Oceanography. https://asheveryday.com/
Robert Twomey is an artist and engineer exploring poetic intersections of human and machine perception, particularly how emerging technologies transform sites of intimate life. He has presented his work at SIGGRAPH (Best Paper Award), CVPR, ISEA, and the Museum of Contemporary Art San Diego, and his work has been supported by the National Science Foundation, the California Arts Council, Microsoft, Amazon, HP, and NVIDIA. He is an Assistant Professor of Emerging Media Arts with the Johnny Carson Center at the University of Nebraska-Lincoln, and an Artist in Residence with the Arthur C. Clarke Center for Human Imagination, UC San Diego. https://roberttwomey.com/
We will be presenting our Embodied Code project at CHI ’22 as part of the Interactivity program. In both online and in-person formats, we will demo the Embodied Coding Environment and take participants through a short (5-minute) experience with the embodied coding system.
Stay tuned for more info and to read our extended abstract: embodiedcode.net
Exploring Virtual Reality and Embodied Computational Reasoning
A workshop for ICER 2021, the ACM International Computing Education Research conference.
Date: Saturday, August 14, 11:00 AM – 1:00 PM PDT
Description: The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies hold the potential to transform how we teach and learn computational concepts and coding. This workshop examines how AR/VR can be leveraged in computer science (CS) education within the context of embodied learning. It has been theorized that abstract computational concepts, such as data, operators, and loops, are grounded in embodied representations that arise from our sensorimotor experience of the physical world. For instance, researchers have shown that when CS students describe algorithms, conditionals, and other computational structures, they frequently gesture in ways that suggest they are conceptualizing interactions with tangible objects. Can learning to code become a more intuitive process if lessons take into account these types of embodied conceptual phenomena? This two-hour workshop explores 1) theories of embodiment and 2) new and existing tools and practices that support embodied CS learning, ranging from Papert’s LOGO turtles to a preview of an innovative 3D spatial coding platform for AR/VR under development by our group. Other open-source and commercially available resources will also be examined through streamed video demos and a hands-on break-out session for participants.
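For a concrete taste of the embodied framing above, here is a minimal sketch (an illustration for this post, not part of the workshop materials) using Python’s built-in turtle module, a direct descendant of Papert’s LOGO turtles. The loop below maps onto a step-and-turn sequence a learner could physically walk out:

import turtle

# Each iteration is one embodied action pair: step forward, then turn.
# Walking this square yourself is the physical analogue of the loop.
t = turtle.Turtle()
for _ in range(4):     # "repeat 4 times" -- the abstract loop concept
    t.forward(100)     # step forward 100 units
    t.right(90)        # quarter turn to the right
turtle.done()          # keep the drawing window open

The same pattern with range(3) and right(120) walks out a triangle, which makes the loop parameters themselves tangible.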