Generative Art + Empathy

Informed by our Art + Empathy research at the San Diego Museum of Art, the projects below use generative AI systems to synthesize new aesthetic experiences. Each of these explorations pairs OpenAI's CLIP (a text-image matching network) with DeepMind's BigGAN (an image generator) to translate text into images, seeking insight into human imaginative processes by analogy.

Sculptural Carving

For this experiment I am iteratively refining the textual input to a text-to-image system to create a visual approximation of an existing artwork (Nam June Paik’s Something Pacific). What can we learn from this about how words describe images for us as humans? How do textual descriptions relate to the direct experience of works of visual art? Are textual descriptions condensed, low-information distillations of a work of art, or, when engaged with the human imagination, can words lead us to a cloud of possible artworks-that-could-have-been?

Sculptural Carving – Five Successive Approximations of Nam June Paik’s Something Pacific (BigGAN – CLIP – CMA-ES)
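
For readers curious about the mechanics, here is a minimal sketch of how such a pipeline can be wired together: CMA-ES (a gradient-free evolution strategy) searches BigGAN’s latent space for an image whose CLIP embedding is closest to a text prompt. The prompt, ImageNet class, and hyperparameters below are illustrative placeholders, not the settings behind the piece above.

```python
# Minimal sketch of CLIP-guided BigGAN search via CMA-ES. All settings here
# (prompt, class, population, generations) are illustrative assumptions.
# Requires: pip install cma torch git+https://github.com/openai/CLIP pytorch-pretrained-biggan
import cma
import clip
import torch
from pytorch_pretrained_biggan import BigGAN

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
clip_model, _ = clip.load("ViT-B/32", device=device)

# A hypothetical prompt standing in for one refined description of the artwork.
prompt = "a wall of stacked television monitors glowing in a dark gallery"
with torch.no_grad():
    text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

class_vec = torch.zeros(1, 1000, device=device)
class_vec[0, 851] = 1.0  # ImageNet class 851 ("television"); an illustrative choice

# CLIP's input normalization constants.
mean = torch.tensor([0.48145466, 0.4578275, 0.40821073], device=device).view(1, 3, 1, 1)
std = torch.tensor([0.26862954, 0.26130258, 0.27577711], device=device).view(1, 3, 1, 1)

def loss(z_np):
    """Negative CLIP similarity of the generated image to the prompt (CMA-ES minimizes)."""
    z = torch.from_numpy(z_np).float().unsqueeze(0).to(device)
    with torch.no_grad():
        img = gan(z, class_vec, truncation=1.0)  # (1, 3, 256, 256), values in [-1, 1]
        img = torch.nn.functional.interpolate(
            (img + 1) / 2, size=224, mode="bilinear", align_corners=False)
        img_feat = clip_model.encode_image(((img - mean) / std).to(clip_model.dtype))
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    return -(img_feat @ text_feat.T).item()

# BigGAN's latent is 128-dimensional; search from the origin with step size 0.5.
es = cma.CMAEvolutionStrategy(128 * [0.0], 0.5)
for _ in range(50):
    candidates = es.ask()
    es.tell(candidates, [loss(z) for z in candidates])
best_z = es.result.xbest  # latent vector of the closest approximation found
```

Each round of “carving” would then re-run a search like this with a revised prompt, keeping the intermediate results as the successive approximations.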

Reverse Ekphrasis – Artificial Visual Imagination

What do we see when we read? The video below is one of a series of experiments translating poetry into prompts for a text-to-image network. Like binocular rivalry, this sets up an opposition between the machine-imagined imagery and the imagery in the reader/viewer’s own mind.

Machine Imagination of James Wright’s Lying in a Hammock at William Duffy’s Farm in Pine Island, Minnesota
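
A minimal sketch of how such a translation might be staged follows, assuming a hypothetical generate(prompt) helper (for example, a wrapper around the CMA-ES search above) that returns one image frame per line of text; the poem lines shown are placeholders, not Wright’s poem.

```python
# Hypothetical pipeline: one generated frame per line of a poem, joined into
# a video. generate() is a stand-in for any text-to-image call, e.g. a wrapper
# around the CLIP-guided search sketched above.
# Requires: pip install imageio imageio-ffmpeg
import imageio

def generate(prompt: str):
    """Return an HxWx3 uint8 image for a text prompt (stand-in implementation)."""
    raise NotImplementedError  # substitute a real text-to-image call here

poem_lines = [
    "placeholder for the first line of the poem",
    "placeholder for the second line of the poem",
    # ... the remaining lines of the poem
]

frames = [generate(line) for line in poem_lines]
imageio.mimsave("machine_imagination.mp4", frames, fps=1)  # one frame per second
```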

Generative Interiors

A walk through the latent space of a domestic interior.

Domestic Interior (BigGAN – CLIP)
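
A walk like this is commonly produced by interpolating between sampled latent vectors and rendering every intermediate point. The sketch below uses spherical interpolation (slerp), a standard choice for Gaussian latents; the class choice, truncation, and frame counts are all illustrative assumptions.

```python
# Sketch of a latent-space walk: slerp between random BigGAN latents and
# render each intermediate frame. The resulting frames can be encoded to
# video as in the previous sketch (after scaling to uint8).
import numpy as np
import torch
from pytorch_pretrained_biggan import BigGAN, truncated_noise_sample

device = "cuda" if torch.cuda.is_available() else "cpu"
gan = BigGAN.from_pretrained("biggan-deep-256").to(device).eval()
class_vec = torch.zeros(1, 1000, device=device)
class_vec[0, 851] = 1.0  # a single ImageNet class; choose one suited to the scene

def slerp(t, z0, z1):
    """Spherical interpolation between latent vectors z0 and z1."""
    omega = np.arccos(np.clip(
        np.dot(z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)), -1.0, 1.0))
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

waypoints = [truncated_noise_sample(truncation=0.4)[0] for _ in range(4)]
frames = []
for z0, z1 in zip(waypoints, waypoints[1:]):
    for t in np.linspace(0.0, 1.0, 30, endpoint=False):  # 30 frames per segment
        z = torch.from_numpy(slerp(t, z0, z1)).float().unsqueeze(0).to(device)
        with torch.no_grad():
            img = gan(z, class_vec, truncation=0.4)  # (1, 3, 256, 256) in [-1, 1]
        frames.append(((img[0] + 1) / 2).clamp(0, 1).permute(1, 2, 0).cpu().numpy())
```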