All posts by Robert

Performance: Artificial Rural Imagination

Carson Center hosts research Flyover Summit Oct. 21-22 | Hixson-Lied College of Fine and Performing Arts | Nebraska

For the FLYOVER Summit at UNL (organized by Ash Smith and Stephanie Sherman), we produced a speculative machine narrative of the event. Eleven humans, 1 neural net, and billions of anonymous textual tokens combined to generate micro-narratives accompanying the talks throughout the day. An AI writers’ room.

The Rural AI took over the Carson Center feed for 9 hours on 10/21. Find our micro-narratives and speculative vignettes between:

Start of event: https://twitter.com/carsoncenterunl/status/1451175473308790784

End of event: https://twitter.com/carsoncenterunl/status/1451302007403266060

POM21 Berlin – Beyond Classification

A network diagram showing transitions between images in an audiovisual piece: state transition diagram for GPT text generation and CLIP/BigGAN image translations.

Joel Ong, Eunsu Kang, and I presented an intervention for Politics of the Machines 2021 in Berlin. Each of us formed a human and non-human pair: Joel with his Euglena gracilis (emotional sentiment/light, text), Eunsu with her Violet (viola/speech), and me with my text and image agent (GPT-3 and CLIP/BigGAN/CMA-ES). Together we discussed the machinic sublime in a performative roundtable.
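The evolutionary search loop behind a text/image agent of this kind can be sketched in miniature. The snippet below is a toy stand-in, not the performance code: a plain (mu, lambda) evolution strategy (a simplified cousin of CMA-ES) steers a latent vector toward a fixed target embedding under cosine similarity. In the real pipeline, the score would come from CLIP comparing a BigGAN-generated image against a GPT-3 text prompt; the `score` function and all dimensions here are illustrative assumptions.

```python
import math
import random

random.seed(0)

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stand-in "CLIP score": similarity of a latent to a fixed target embedding.
DIM = 16
target = [random.gauss(0, 1) for _ in range(DIM)]

def score(latent):
    return cosine(latent, target)

def evolve(generations=60, popsize=20, elite=5, sigma=0.3):
    """Simple (mu, lambda) evolution strategy over the latent vector:
    sample around the mean, keep the best, recenter on their average."""
    mean = [0.1] * DIM  # start away from the zero vector so cosine is defined
    for _ in range(generations):
        pop = [[m + random.gauss(0, sigma) for m in mean] for _ in range(popsize)]
        pop.sort(key=score, reverse=True)
        best = pop[:elite]
        mean = [sum(v[i] for v in best) / elite for i in range(DIM)]
    return mean, score(mean)

latent, fitness = evolve()
print(round(fitness, 3))
```

The same loop structure applies when `score` is replaced by a real CLIP similarity and the latent vector feeds a BigGAN generator; CMA-ES additionally adapts the sampling covariance rather than using a fixed `sigma`.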

Eunsu Kang, Violet (AI), Joel Ong, Euglena (AI), Robert Twomey, Artificial Imagination-1 (AI) in performance.

I look forward to further development of these projects and ideas with the group.

From the POM website:

POM21 Intervention #3

ICER21 Workshop on Embodied Computational Reasoning

Exploring Virtual Reality and Embodied Computational Reasoning

A workshop for ICER 2021, the ACM International Computing Education Research conference.

Date: Saturday, August 14, 11:00 AM – 1:00 PM PDT

Description: The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies hold the potential to transform how we teach and learn computational concepts and coding. This workshop examines how AR/VR can be leveraged in computer science (CS) education within the context of embodied learning. It has been theorized that abstract computational concepts, such as data, operators, and loops, are grounded in embodied representations that arise from our sensorimotor experience of the physical world. For instance, researchers have shown that when CS students describe algorithms, conditionals, and other computational structures, they frequently gesture in ways that suggest they are conceptualizing interactions with tangible objects. Can learning to code become a more intuitive process if lessons take into account these types of embodied conceptual phenomena? This two-hour workshop explores 1) theories of embodiment and 2) new and existing tools and practices that support embodied CS learning – ranging from Papert’s LOGO turtles to a preview of an innovative 3D spatial coding platform for AR/VR under development by our group. Other open-source and commercially available resources will also be examined through streamed video demos and a hands-on break-out session for participants.
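As a small illustration of the embodied framing above, here is a text-only sketch of the LOGO-style turtle idea: a loop is experienced as repeated bodily movement (step forward, turn), and four such steps trace a square that returns the turtle to its starting point. This `Turtle` class is a hypothetical toy, not one of the workshop tools.

```python
import math

class Turtle:
    """A minimal text-only turtle: a position and a heading, no graphics."""
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0  # degrees; 0 points along the +x axis

    def forward(self, dist):
        """Step forward in the direction the turtle is facing."""
        rad = math.radians(self.heading)
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

    def left(self, angle):
        """Rotate the turtle counterclockwise by `angle` degrees."""
        self.heading = (self.heading + angle) % 360

# A loop as embodied repetition: four forward/turn pairs trace a square,
# bringing the turtle back to where it started, facing its original direction.
t = Turtle()
for _ in range(4):
    t.forward(10)
    t.left(90)

print(round(t.x, 6), round(t.y, 6), t.heading)
```

The closed square makes the abstract loop bound (`range(4)`) concrete: the repetition count is literally the number of sides walked.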

Organizers:

Details: See our workshop page at xrdesign.github.io

SIGGRAPH Frontiers Workshop – Measurable Creative AI

I chaired a workshop on Measurable Creative AI as part of SIGGRAPH Frontiers, including moderating a live Q&A during the conference. We had an amazing lineup of presenters:

  • Kenric Allado-McDowell – K Allado-McDowell is a writer, speaker, and musician. They are the author, with GPT-3, of the book Pharmako-AI, and are co-editor, with Ben Vickers, of The Atlas of Anomalous AI. They record and release music under the name Qenric. Allado-McDowell established the Artists + Machine Intelligence program at Google AI. They are a conference speaker, educator and consultant to think-tanks and institutions seeking to align their work with deeper traditions of human understanding.
  • Stephanie Dinkins – Stephanie Dinkins is a transmedia artist who creates platforms for dialog about race, gender, aging, and our future histories. Dinkins’ art practice employs emerging technologies, documentary practices, and social collaboration toward equity and community sovereignty. She is particularly driven to work with communities of color to co-create more equitable, values-grounded social and technological ecosystems. Dinkins is a professor at Stony Brook University, where she holds the Kusama Endowed Professorship in Art.
  • Ethan Edwards – Ethan Edwards is a researcher in Experiments in Art and Technology (E.A.T.) at Nokia Bell Labs, an initiative which fuses art with engineering to humanize technology. He works directly with scientists and artists to help facilitate collaboration and builds technology which crosses these domains. He is a creative technologist, having graduated with an MFA in Sound Art from Columbia University and has had work featured in museums, galleries, and performances around the world. His independent artwork explores traditional aesthetic themes in radically new media contexts. He has designed and led numerous large scale exhibits at Nokia Bell Labs.
  • Eunsu Kang – Eunsu Kang is an artist, a researcher, and an educator who explores the intersection of art and machine learning, one of the core methods for building AI. She has been making interactive art installations and performances, teaching art-making using machine learning methods, and recently looking into the possibility of creative AI. She is also a co-founder of Women Art AI collective.
  • Sang Leigh – Sang Leigh is an Assistant Professor in the School of Industrial Design at the Georgia Institute of Technology. His research focuses on augmenting humans and their creativity through forming a symbiotic and tactile relationship between humans and computers. His Machine Poetics research group investigates novel user interfaces, interactive programming, and human-robot interaction for enhancing our creative processes and learning.

Pre-Recorded Panel – available On Demand from August 1 to registered SIGGRAPH attendees.

Live Discussion – Wednesday, August 11. 9am – 10am PDT/ noon-1pm EDT. https://s2021.siggraph.org/presentation/?id=fwkp_105&sess=sess243

Post-Conference – We will publish all materials on our mCreativeAI website after the event: mcreativeai.org.

RSS 2021 Workshop on Robotics x Arts

J responding to robot arm drawing

I’m pleased to present a new artwork (Three Stage Drawing Transfer) at the RSS 2021 Workshop on Robotics x Arts!

I’ll also be a panelist for the discussion with Ken Goldberg, Kim Baraka, Patricia Alves-Oliveira, and Eunsu Kang. After years of working with mechatronics and various kinds of automation, I’m really looking forward to joining this brilliant group of panelists!

Grant: Cultivating Tools for Imagination in Engineering

Together with Prof. Karcher Morris and postdoctoral scholar Jon Paden, I have been awarded a $45,000 grant from the UC San Diego Course Development and Instructional Improvement Program (CDIIP) to develop and pilot imagination-centered modules for engineers within STEM curricula. This builds on my work as a lecturer in Data Science and in Electrical and Computer Engineering (ML for the Arts), bridging the cultivation of human imagination with STEM education, with imagination as a driver of engagement, retention, and a broadened scope for STEM disciplines. The modules and resources we develop (and publish) will be shaped with an eye toward broad applicability across diverse educational fields.