This project, with co-PI Ying Wu of the Swartz Center for Computational Neuroscience, establishes a novel framework for understanding physical and emotional responses to art. We are collecting data using a multi-modal approach that leverages innovations in wireless and wearable biosensing to monitor brain and heart activity, wearable eye tracking, and external facial-expression and body-pose analysis, combined with spatial imaging of the gallery space.
These diverse data streams, along with participants’ own subjective responses, will be combined to better understand how people respond to art. Correlations between intrinsic visual features of the artworks (extracted with deep convolutional networks and other computer vision techniques) and eye gaze, EEG, and behavioral responses can then be used to train new generative models for visual art.
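As a minimal sketch of the kind of analysis described above, the snippet below correlates per-artwork visual features with a behavioral measure (gaze dwell time). The feature matrix and dwell times here are simulated placeholders, not project data; in practice the features would come from a convolutional network and the dwell times from the wearable eye tracker.

```python
import numpy as np

# Hypothetical data: rows = artworks, columns = CNN-derived visual features
# (e.g. edge density, colorfulness). Values are simulated for illustration.
rng = np.random.default_rng(0)
features = rng.normal(size=(20, 4))

# Simulated gaze dwell time per artwork, made to depend on the first feature.
dwell_time = 0.8 * features[:, 0] + rng.normal(scale=0.5, size=20)

# Pearson correlation between each visual feature and the behavioral response.
corrs = [np.corrcoef(features[:, j], dwell_time)[0, 1]
         for j in range(features.shape[1])]
print(corrs)
```

Feature–response correlations like these could serve as targets or conditioning signals when training generative models, though the actual modeling pipeline would be considerably richer.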
This project is funded by a two-year Research in the Arts grant from the California Arts Council.