We will be presenting our Embodied Code project at CHI’22 as part of the Interactions program. In both online and in-person formats, we will demo the Embodied Coding Environment, and take participants through a short (5 minute) experience with the embodied coding system.
For the FLYOVER Summit at UNL (Ash Smith and Stephanie Sherman), we produced a speculative machine narrative of the event: an AI writer's room of 11 humans, 1 neural net, and billions of anonymous textual tokens, generating a machine narrative to accompany the talks throughout the day.
The Rural AI took over the Carson Center feed for 9 hours on 10/21. Find our micro-narratives and speculative vignettes between:
Start of event: https://twitter.com/carsoncenterunl/status/1451175473308790784
End of event: https://twitter.com/carsoncenterunl/status/1451302007403266060
Joel Ong, Eunsu Kang, and I presented an intervention at Politics of the Machines 2021 in Berlin. With three human and non-human pairs—Joel with his Euglena gracilis (Emotional Sentiment/Light/Text), Eunsu with her Violet (Viola/Speech), and me with my text and image agent (GPT-3 and CLIP/BigGAN/CMA-ES)—we discussed the machinic sublime in a performative roundtable.
I look forward to further development of these projects and ideas with the group.
Exploring Virtual Reality and Embodied Computational Reasoning
A workshop for ICER 2021, the ACM International Computing Education Research conference.
Date: Saturday, August 14, 11:00 AM – 1:00 PM PDT
Description: The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies hold the potential to transform how we teach and learn computational concepts and coding. This workshop examines how AR/VR can be leveraged in computer science (CS) education within the context of embodied learning. It has been theorized that abstract computational concepts, such as data, operators, and loops, are grounded in embodied representations that arise from our sensorimotor experience of the physical world. For instance, researchers have shown that when CS students describe algorithms, conditionals, and other computational structures, they frequently gesture in ways that suggest they are conceptualizing interactions with tangible objects. Can learning to code become a more intuitive process if lessons take these types of embodied conceptual phenomena into account? This two-hour workshop explores 1) theories of embodiment and 2) new and existing tools and practices that support embodied CS learning — ranging from Papert’s LOGO turtles to a preview of an innovative 3D spatial coding platform for AR/VR under development by our group. Other open-source and commercially available resources will also be examined through streamed video demos and a hands-on break-out session for participants.
I chaired a workshop on Measurable Creative AI as part of SIGGRAPH Frontiers, including moderating a live Q&A during the conference. We had an amazing lineup of presenters:
Kenric Allado-McDowell – K Allado-McDowell is a writer, speaker, and musician. They are the author, with GPT-3, of the book Pharmako-AI, and are co-editor, with Ben Vickers, of The Atlas of Anomalous AI. They record and release music under the name Qenric. Allado-McDowell established the Artists + Machine Intelligence program at Google AI. They are a conference speaker, educator and consultant to think-tanks and institutions seeking to align their work with deeper traditions of human understanding.
Stephanie Dinkins – Stephanie Dinkins is a transmedia artist who creates platforms for dialog about race, gender, aging, and our future histories. Dinkins’ art practice employs emerging technologies, documentary practices, and social collaboration toward equity and community sovereignty. She is particularly driven to work with communities of color to co-create more equitable, values-grounded social and technological ecosystems. Dinkins is a professor at Stony Brook University, where she holds the Kusama Endowed Professorship in Art.
Ethan Edwards – Ethan Edwards is a researcher in Experiments in Art and Technology (E.A.T.) at Nokia Bell Labs, an initiative which fuses art with engineering to humanize technology. He works directly with scientists and artists to help facilitate collaboration and builds technology which crosses these domains. He is a creative technologist, having graduated with an MFA in Sound Art from Columbia University and has had work featured in museums, galleries, and performances around the world. His independent artwork explores traditional aesthetic themes in radically new media contexts. He has designed and led numerous large scale exhibits at Nokia Bell Labs.
Eunsu Kang – Eunsu Kang is an artist, a researcher, and an educator who explores the intersection of art and machine learning, one of the core methods for building AI. She has been making interactive art installations and performances, teaching art-making using machine learning methods, and recently exploring the possibility of creative AI. She is also a co-founder of the Women Art AI collective.
Sang Leigh – Sang Leigh is an Assistant Professor in the School of Industrial Design at the Georgia Institute of Technology. His research focuses on augmenting humans and their creativity by forming a symbiotic and tactile relationship between humans and computers. His Machine Poetics research group investigates novel user interfaces, interactive programming, and human-robot interaction for enhancing our creative processes and learning.
Pre-Recorded Panel – available On Demand, August 1 to registered SIGGRAPH attendees.