Art and Empathy Lab

Study participants filling out surveys at the San Diego Museum of Art.

This project with co-PI Ying Wu (from the Swartz Center for Computational Neuroscience) establishes a novel framework for understanding physical and emotional responses to art. We are collecting data using a multi-modal approach that leverages innovations in wireless and wearable biosensing to monitor brain and heart activity, wearable eye tracking, and external facial expression and body pose analysis, combined with spatial imaging of the gallery space.

These diverse data, along with participants’ own subjective responses, will be combined to better understand how people respond to art. We can use correlations between intrinsic visual features (extracted with deep convolutional networks and computer vision techniques) and eye gaze, EEG, and behavioral responses to train new generative models for visual art.
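As a concrete illustration of that feature-extraction step, here is a minimal sketch that pulls a pooled feature vector from a pretrained convolutional network. The choice of ResNet-50 (and of its pooled output as the "intrinsic visual features") is an assumption for illustration, not necessarily the lab's actual pipeline.

```python
# Minimal sketch: extract "intrinsic visual features" from an artwork image
# with a pretrained CNN. ResNet-50 and its global-average-pooled output are
# illustrative choices, not necessarily the lab's actual model.
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()
# Drop the classification head; keep everything up to the pooled features.
backbone = torch.nn.Sequential(*list(model.children())[:-1])

def artwork_features(path: str) -> torch.Tensor:
    """Return a 2048-d feature vector for one painting image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feats = backbone(img)          # shape (1, 2048, 1, 1)
    return feats.flatten()             # shape (2048,)

# Vectors like these can then be correlated with gaze heat maps, EEG epochs,
# or behavioral ratings collected for the same painting.
```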

This project is funded by a two-year Research in the Arts grant from the California Arts Council. You can read more about the grant here: https://roberttwomey.com/2019/06/cac-research-in-the-arts-grant-art-and-empathy/

Pupil Labs eye tracker data showing a heat map of visual fixations recorded while viewing a painting.
Co-PI Ying Wu and undergraduate research assistant Sydney preparing the Smarting wireless EEG device for lab tests.

GENERATIVE A+E

Informed by our Art + Empathy research at the San Diego Museum of Art, the projects below use generative AI systems to synthesize new aesthetic experiences. Each of these explorations employs a text-to-image translation pipeline built from OpenAI's CLIP and Google's BigGAN, seeking insight into human imaginative processes by analogy.

Sculptural Carving

For this experiment I am iteratively refining the textual input to a text-to-image translation system to create a visual approximation of an existing artwork (Nam June Paik's Something Pacific). What can we learn from this about how words describe images, for us (as humans)? Also, how do textual descriptions relate to the direct experience of works of visual art? Are textual descriptions condensed, low-information distillations of a work of art, or (when engaged with the human imagination) can words lead us to a cloud of possible artworks that-could-have-been?

Sculptural Carving - Five Successive Approximations of Nam June Paik's Something Pacific (BigGAN - CLIP - CMA-ES)
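The caption above names the search pipeline: CMA-ES iteratively adjusts a BigGAN latent vector so that the generated image scores higher against the text prompt under CLIP. Below is a minimal sketch of that kind of loop, assuming the pytorch-pretrained-biggan, clip, and cma packages; the prompt, ImageNet class, and all hyperparameters are illustrative stand-ins, not the settings used for Something Pacific.

```python
# Minimal sketch of CLIP-guided BigGAN latent search with CMA-ES, in the
# spirit of the "BigGAN - CLIP - CMA-ES" pipeline above. Package choices
# and every hyperparameter here are assumptions for illustration.
import cma
import clip
import torch
from pytorch_pretrained_biggan import BigGAN, one_hot_from_names

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device)

prompt = "a weathered bronze Buddha watching a small television"  # hypothetical
with torch.no_grad():
    text = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    text = text / text.norm(dim=-1, keepdim=True)

# Conditioning class is an assumption; BigGAN requires an ImageNet class.
class_vec = torch.from_numpy(
    one_hot_from_names(["television"], batch_size=1)).to(device)

# CLIP's input normalization constants.
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def loss(z):
    """Negative CLIP similarity between the generated image and the prompt."""
    with torch.no_grad():
        zt = torch.tensor(z, dtype=torch.float32, device=device).unsqueeze(0)
        img = gan(zt, class_vec, truncation=1.0)          # output in [-1, 1]
        img = (img + 1) / 2                               # rescale to [0, 1]
        img = torch.nn.functional.interpolate(img, size=224, mode="bilinear")
        feat = clip_model.encode_image((img - MEAN) / STD)
        feat = feat / feat.norm(dim=-1, keepdim=True)
        return -(feat @ text.T).item()

# CMA-ES searches BigGAN's 128-d noise space for the best-matching image.
es = cma.CMAEvolutionStrategy(128 * [0.0], 0.5)
for _ in range(50):                      # short run, for illustration only
    zs = es.ask()
    es.tell(zs, [loss(z) for z in zs])
best_z = es.result.xbest
```

Because CMA-ES is gradient-free, the loop treats the GAN-plus-CLIP stack as a black box, which is one reason this style of search is easy to swap between generators.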

Reverse Ekphrasis – Artificial Visual Imagination

What do we see when we read? The video below is one of a series of experiments translating poetry into prompts for a text-to-image translation network. Like binocular rivalry, this sets up an opposition between machine-imagined imagery and the imagery in the reader/viewer's mind.

Machine Imagination of James Wright's Lying in a Hammock at William Duffy’s Farm in Pine Island, Minnesota
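As a sketch of the poem-to-prompt step, assuming one frame per poem line: the helper below splits a text into line-level prompts and hands each to a generator callable (for example, the CLIP-guided BigGAN search sketched above). The generator is left as a parameter; nothing here is the project's actual code.

```python
# Minimal sketch of poem-to-prompt translation: each line of the poem
# becomes one prompt, and each prompt drives one machine-imagined frame.
from pathlib import Path
from typing import Callable

def poem_to_prompts(poem: str) -> list[str]:
    """Split a poem into line-level prompts, dropping blank lines."""
    return [line.strip() for line in poem.splitlines() if line.strip()]

def render_poem(poem: str, generate: Callable, out_dir: str = "frames") -> None:
    """Render one frame per poem line.

    `generate` is any prompt -> PIL.Image routine, e.g. a CLIP-guided
    BigGAN search; it is supplied by the caller, not defined here.
    """
    Path(out_dir).mkdir(exist_ok=True)
    for i, prompt in enumerate(poem_to_prompts(poem)):
        generate(prompt).save(Path(out_dir) / f"{i:04d}.png")
```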

Generative Interiors

A walk through the latent space of a domestic interior.

Domestic Interior (BigGAN-CLIP)
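A walk like this is typically rendered by interpolating between latent vectors and decoding each step. The sketch below uses spherical interpolation between two BigGAN noise vectors; the random endpoints, the "home theater" ImageNet class, and the frame count are illustrative assumptions, not the settings behind Domestic Interior.

```python
# Minimal sketch of a latent-space walk: spherically interpolate between
# two BigGAN noise vectors and render the frames in between. Endpoints
# here are randomly sampled for illustration; in practice they could be
# prompt-matched latents from a search like the CMA-ES loop above.
import numpy as np
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample)

def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Spherical interpolation, which stays near the Gaussian shell where
    BigGAN's latents live (plain lerp drifts toward low-norm points)."""
    omega = np.arccos(np.clip(np.dot(a / np.linalg.norm(a),
                                     b / np.linalg.norm(b)), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

gan = BigGAN.from_pretrained("biggan-deep-256").eval()
cls = torch.from_numpy(one_hot_from_names(["home theater"], batch_size=1))
z0, z1 = truncated_noise_sample(batch_size=2, truncation=0.4)

frames = []
for t in np.linspace(0.0, 1.0, 60):     # 60 frames between the two rooms
    z = torch.from_numpy(slerp(t, z0, z1)).float().unsqueeze(0)
    with torch.no_grad():
        frames.append(gan(z, cls, truncation=0.4))  # each frame in [-1, 1]
```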