Category Archives: News

POM21 Berlin – Beyond Classification

State transition diagram for GPT text generation and CLIP/BigGAN image translations in an audio-visual piece

Joel Ong, Eunsu Kang, and I presented a performative roundtable for 3 human and 3 non-human agents at Politics of the Machines 2021 in Berlin. In human/non-human pairs—Joel with his Euglena gracilis (Emotional Sentiment/Light, Text), Eunsu with her Violet (Viola/Speech), and me with my GPT-3/CLIP/BigGAN/CMA-ES artificial imagination system—we discussed the machinic sublime.
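For readers curious how the pieces fit together, here is a minimal sketch of the general approach (not the code from the piece itself): CMA-ES evolves a BigGAN latent vector so that the rendered image maximizes CLIP similarity with a text prompt, such as a line of GPT-3 output. The prompt, ImageNet class, and hyperparameters below are placeholders.

```python
# Simplified sketch (not the system's actual code): CMA-ES searches BigGAN's
# latent space for an image that CLIP scores as similar to a text prompt.
# Assumes the pytorch-pretrained-biggan, clip (openai/CLIP), and cma packages.
import cma
import torch
import clip
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device)

prompt = "a lighthouse at the edge of a dark sea"   # placeholder for a GPT-3 line
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

class_vec = torch.zeros(1, 1000, device=device)
class_vec[0, 437] = 1.0            # illustrative ImageNet class ("beacon, lighthouse")

def score(z):
    """Negative CLIP similarity between the prompt and BigGAN's image for latent z."""
    with torch.no_grad():
        noise = torch.tensor(z, dtype=torch.float32, device=device).unsqueeze(0)
        img = gan(noise, class_vec, truncation=0.4)   # (1, 3, 256, 256) in [-1, 1]
        img = torch.nn.functional.interpolate(
            (img + 1) / 2, size=(224, 224), mode="bilinear", align_corners=False)
        feat = clip_model.encode_image(img)           # CLIP's input normalization omitted for brevity
        feat = feat / feat.norm(dim=-1, keepdim=True)
        return -(feat @ text_feat.T).item()

# Evolve the 128-dimensional latent with CMA-ES (black-box, no gradients needed).
es = cma.CMAEvolutionStrategy(truncated_noise_sample(1, 128, 0.4)[0], 0.3)
for _ in range(50):
    candidates = es.ask()
    es.tell(candidates, [score(z) for z in candidates])
best_latent = es.result.xbest   # render once more to produce a keyframe image
```

Chaining prompts this way and treating each optimized image as a state is one way to read the transitions in the diagram above.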

I look forward to further development of these projects and ideas with the group.

ICER21 Workshop on Embodied Computational Reasoning

Exploring Virtual Reality and Embodied Computational Reasoning

A workshop for ICER 2021, the ACM International Computing Education Research conference.

Date: Saturday, August 14, 11:00 AM – 1:00 PM PDT

Description: The increasing sophistication and availability of Augmented and Virtual Reality (AR/VR) technologies hold the potential to transform how we teach and learn computational concepts and coding. This workshop examines how AR/VR can be leveraged in computer science (CS) education within the context of embodied learning. It has been theorized that abstract computational concepts, such as data, operators, and loops, are grounded in embodied representations that arise from our sensorimotor experience of the physical world. For instance, researchers have shown that when CS students describe algorithms, conditionals, and other computational structures, they frequently gesture in ways that suggest they are conceptualizing interactions with tangible objects. Can learning to code become a more intuitive process if lessons take into account these types of embodied conceptual phenomena? This two-hour workshop explores 1) theories of embodiment and 2) new and existing tools and practices that support embodied CS learning – ranging from Papert’s LOGO turtles to a preview of an innovative 3D spatial coding platform for AR/VR under development by our group. Other open-source and commercially available resources will also be examined through streamed video demos and a hands-on break-out session for participants.
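As a small touchstone for the LOGO lineage mentioned above (an illustration, not material from the workshop itself), a turtle program makes the loop concept bodily legible: the loop is experienced as the turtle repeatedly walking forward and turning.

```python
# LOGO-style example using Python's built-in turtle module: the loop "body"
# is enacted as a body in motion, tracing a square on screen.
import turtle

t = turtle.Turtle()
for _ in range(4):
    t.forward(100)   # walk forward 100 steps
    t.left(90)       # turn left 90 degrees
turtle.done()        # keep the drawing window open
```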

Organizers:

Details: See our workshop page at xrdesign.github.io

SIGGRAPH Frontiers Workshop – Measurable Creative AI

I will be hosting a workshop on Measurable Creative AI as part of SIGGRAPH Frontiers, including moderating a live Q&A during the conference. We have an amazing lineup of presenters:

  • Kenric Allado-McDowell – K Allado-McDowell is a writer, speaker, and musician. They are the author, with GPT-3, of the book Pharmako-AI, and are co-editor, with Ben Vickers, of The Atlas of Anomalous AI. They record and release music under the name Qenric. Allado-McDowell established the Artists + Machine Intelligence program at Google AI. They are a conference speaker, educator and consultant to think-tanks and institutions seeking to align their work with deeper traditions of human understanding.
  • Stephanie Dinkins – Stephanie Dinkins is a transmedia artist who creates platforms for dialog about race, gender, aging, and our future histories. Dinkins’ art practice employs emerging technologies, documentary practices, and social collaboration toward equity and community sovereignty. She is particularly driven to work with communities of color to co-create more equitable, values-grounded social and technological ecosystems. Dinkins is a professor at Stony Brook University, where she is the Kusama Endowed Professor in Art.
  • Ethan Edwards – Ethan Edwards is a researcher in Experiments in Art and Technology (E.A.T.) at Nokia Bell Labs, an initiative which fuses art with engineering to humanize technology. He works directly with scientists and artists to help facilitate collaboration and builds technology which crosses these domains. He is a creative technologist, having graduated with an MFA in Sound Art from Columbia University and has had work featured in museums, galleries, and performances around the world. His independent artwork explores traditional aesthetic themes in radically new media contexts. He has designed and led numerous large scale exhibits at Nokia Bell Labs.
  • Eunsu Kang – Eunsu Kang is an artist, a researcher, and an educator who explores the intersection of art and machine learning, one of the core methods for building AI. She has been making interactive art installations and performances, teaching art-making using machine learning methods, and recently looking into the possibility of creative AI. She is also a co-founder of Women Art AI collective.
  • Sang Leigh – Sang Leigh is an Assistant Professor in the School of Industrial Design at the Georgia Institute of Technology. His research focuses on augmenting humans and their creativity through forming a symbiotic and tactile relationship between humans and computers. His Machine Poetics research group investigates novel user interfaces, interactive programming, and human-robot interaction for enhancing our creative processes and learning.

Pre-Recorded Panel – available on demand from August 1 to registered SIGGRAPH attendees.

Live Discussion – Wednesday, August 11, 9:00–10:00 AM PDT / 12:00–1:00 PM EDT. https://s2021.siggraph.org/presentation/?id=fwkp_105&sess=sess243

Post-Conference – We will publish all materials on our mCreativeAI website after the event: mcreativeai.org.

RSS 2021 Workshop on Robotics x Arts

J responding to Robot arm drawing

I’m pleased to present a new artwork (Three Stage Drawing Transfer) at the RSS 2021 Workshop on Robotics x Arts!

I’ll also be a panelist for the discussion with Ken Goldberg, Kim Baraka, Patricia Alves-Oliveira, and Eunsu Kang. After years working with mechatronics and various kinds of automation, I’m really looking forward to this conversation with such a brilliant group of panelists!

Grant: Cultivating Tools for Imagination in Engineering

Together with Prof. Karcher Morris and postdoctoral scholar Jon Paden, I have been awarded a $45,000 grant from the UC San Diego Course Development and Instructional Improvement Program (CDIIP) to develop and pilot imagination-centered modules for engineers within STEM curricula. This builds on my work as a lecturer in Data Science and Electrical and Computer Engineering/ML for the Arts, bridging the cultivation of human imagination with STEM education, with a focus on imagination as a driver of engagement, retention, and a broadened scope for the STEM disciplines. The modules and resources we develop (and publish) will be shaped with an eye toward broad applicability across diverse educational fields.

CMMC CVPR21 Workshop

sculpture television buddha (after Something Pacific) by Robert Twomey

I am co-organizing a workshop on Computational Measurements of Machine Creativity (CMMC) for CVPR21.

Bridging the Gap between Subjective and Computational Measurements of Machine Creativity

While the methods for producing machine creativity have significantly improved, the discussion toward a scientific consensus on measuring the creative abilities of machines has only begun. As Artificial Intelligence becomes capable of solving more abstract and advanced problems (e.g., image synthesis, cross-modal translations), how do we measure the creative performance of a machine? In the world of visual art, subjective evaluations of creativity have been discussed at length. In the CVPR community, by comparison, the evaluation of creative methods has not been as systematized. Our goal in this workshop is to discuss current methods for measuring creativity with experts in creative artificial intelligence as well as with artists. We do not wish to narrow the gap between how humans evaluate creativity and how machines do; instead, we wish to understand the differences and create links between the two so that our machine creativity methods improve.

June 20, 2021, 11:00am – 2:30pm EDT | http://cmmc-cvpr21.com/

UChicago Text to Image Workshop

Machine Imagination workshop flyer

I gave a workshop on Machine Imagination: Text to Image Generation with Neural Networks for faculty and graduate students from the University of Chicago Digital Media Workshop and the Poetry & Poetics Workshop.

Description: With recent advancements in machine learning techniques, researchers have demonstrated remarkable achievements in image synthesis (BigGAN, StyleGAN), textual understanding (GPT-3), and other areas of text and image manipulation. This hands-on workshop introduces state-of-the-art techniques for text-to-image translation, where textual prompts are used to guide the generation of visual imagery. Participants will gain experience with OpenAI’s CLIP network and Google’s BigGAN, using free Google Colab notebooks that they can apply to their own work after the event. We will discuss other relationships between text and image in art and literature; consider the strengths and limitations of these new techniques; and relate these computational processes to human language, perception, and visual expression and imagination. Please bring a text you would like to experiment with!

Workshop link here: https://github.com/roberttwomey/machine-imagination-workshop
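For a flavor of what the notebooks cover, below is a minimal sketch of CLIP-guided BigGAN generation by gradient descent on the latent vector. It illustrates the general technique rather than reproducing the workshop notebooks, and the prompt, class index, and hyperparameters are placeholders.

```python
# Simplified sketch of CLIP-guided BigGAN text-to-image generation:
# optimize a latent so the image's CLIP embedding approaches the prompt's.
import torch
import clip
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device)
for p in list(gan.parameters()) + list(clip_model.parameters()):
    p.requires_grad_(False)        # only the latent is optimized

prompt = "a volcano erupting under a violet sky"   # bring your own text
with torch.no_grad():
    target = clip_model.encode_text(clip.tokenize([prompt]).to(device)).float()
    target = target / target.norm(dim=-1, keepdim=True)

z = torch.randn(1, 128, device=device, requires_grad=True)   # BigGAN latent to optimize
class_vec = torch.zeros(1, 1000, device=device)
class_vec[0, 980] = 1.0            # illustrative ImageNet class ("volcano")
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    img = gan(z, class_vec, truncation=0.4)        # (1, 3, 256, 256) in [-1, 1]
    img = torch.nn.functional.interpolate(
        (img + 1) / 2, size=(224, 224), mode="bilinear", align_corners=False)
    feat = clip_model.encode_image(img).float()    # CLIP's input normalization omitted for brevity
    feat = feat / feat.norm(dim=-1, keepdim=True)
    loss = -(feat * target).sum()                  # negative cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()

# After the loop, rendering gan(z, class_vec, truncation=0.4) once more
# (inside torch.no_grad()) yields the final image for the prompt.
```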

SIGGRAPH SPARKS – Robotics, Electronics, AI

I spoke at the April 30 ACM SIGGRAPH Digital Arts Community SPARKS event on Robotics, Electronics, AI, moderated by Hye Yeon Nam and Jan Searleman.

My talk, From Experimental Human Computer Interaction to Machine Cohabitation: New Directions in Art, Technology, and Intimate Life, explored human-computer cohabitation:

How do we prepare for a future living, working, and learning with machines? What new possibilities arise from the advent of always-on intelligent assistants, affordable co-robotic platforms, and ubiquitous AI? Now that we have invited the machines into our homes, our workplaces, our intimate everyday, how can we reimagine the terms of our human-computer interactions?

Through the presentation of a series of experimental arts projects, this talk addresses our machine-cohabitant future. I will show key previous works building affective surrogates, developing inhabitable smart spaces, and situating machine observers with varying degrees of agency within shared environments. These projects lead into a discussion of my current work building embodied interfaces and staging experimental human-robot interactions. I will raise critical concerns around language and communication, embodied intelligence, and the dynamics of model-limited experience within these contexts.

April 30, 2021 | https://dac.siggraph.org/robotics-electronics-ai/

NSF Grant: Embodied Coding

Rendering of proposed system

We did it! We’ve received a 3-year grant from the National Science Foundation to develop and test an Augmented Reality (AR) environment for collaborative coding. After years of working on NSF-funded projects, this is my first time serving as co-PI:

https://nsf.gov/awardsearch/showAward?AWD_ID=2017042

We’ll be working with high school students from underserved communities in San Diego to study the efficacy of visual, embodied coding, compared to traditional approaches, in promoting computational interest and ability. I can’t wait to start!

This is the second project with my collaborator Ying Wu.