Three Stage Drawing Transfer

Robot and child completing drawing transfer

This project creates a visual-mental-physical circuit between a generative adversarial network (GAN), a co-robotic arm, and a five-year-old child. From source training images to the latent space of a GAN to pen on paper and a final human interpreter, it establishes a flow of visual communication between a number of human and non-human actors. Enmeshed together, these discrete translational stages juxtapose advanced emerging technologies and childlike expression.

The title of the project refers to Dennis Oppenheim’s intimate 1971 performance ‘Two Stage Transfer Drawing’, a direct inspiration for this work. [1] In that piece, Oppenheim staged a photographically documented drawing performance with his son, reconceiving the task of drawing as a mode of intimate, embodied, inter-generational touch-based communication. Here, I have added additional stages of transfer: from the host of absent child artists contributing images to the black-box neural network (GAN) trained on them; through the robotic arm transferring them with pen to paper; to my son’s eyes, where they are seen, named, and rendered through his own hand and mind.

The neural network at the center of this artwork is a StyleGAN2 architecture trained on a collection of over 7,000 children’s drawings gathered by Rhoda Kellogg, a psychologist and nursery school educator researching early graphical expression in young children. All told, from 1948 to 1966 she collected over a million drawings from children 2 to 8 years old, covering the full developmental spectrum from physical scribbling and basic patterns to concrete representational imagery. [2]
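The latent-space traversal pictured further below can be sketched as interpolation between random latent vectors fed to the trained generator. The sketch below shows spherical linear interpolation (slerp), a common way to walk a GAN's Gaussian latent space; the 512-dimensional latent size matches StyleGAN2's default, but the generator call itself is omitted, since it depends on the specific training setup.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent vectors.

    Follows the great-circle arc between z0 and z1, which stays in
    high-density regions of a Gaussian latent space better than a
    straight line does.
    """
    z0n = z0 / np.linalg.norm(z0)
    z1n = z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0n, z1n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * z0 + t * z1  # vectors nearly parallel: fall back to lerp
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # StyleGAN2's default latent dimensionality
z_b = rng.standard_normal(512)

# Eight evenly spaced frames between the two latents; in the actual
# pipeline each frame would be passed to the trained generator to
# produce one drawing in the traversal animation.
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
```

At t = 0 and t = 1 the interpolation returns the endpoint latents exactly, so a traversal video begins and ends on two concrete generated drawings.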

As visual subject matter, these “early graphic expressions” from children recall Jean Dubuffet’s Art Brut (1948), or raw art. [3] Dubuffet collected and exhibited the art of criminals, the insane, children, and so-called primitives, chasing his desire to get at a “raw art”: naive, unschooled, and untrammeled by convention and recognizable cultural habits of expression.

In this project and other related work, I am interested in children’s drawings as subject matter not just for being outside of taste, convention, and learned expression, but for how they show representation and visual language in their moments of genesis, when we are first learning how to communicate. Each of the drawings in the Kellogg collection hints at stories, subjects, and expressive intent that surely existed when it was drawn, but are now irretrievably absent. Here, that absence and mystery is amplified by the images’ reconstitution through a neural network. The outputs of the StyleGAN are childlike but clearly alien: they recall childhood nostalgia, yet are rendered through a co-robotic arm, to be seen, understood, and represented by a human child.

Perhaps a better reference for this generative ML endeavor might be the Surrealists, who also sought to escape the bounds of language, expression, and enculturation. But rather than searching for an alien other, they sought, through sleep deprivation, mind-altering substances, and chance operations, to find the alien within themselves.

These questions of where we search for the other; where we grant agency, autonomy, and intelligence; and why we might wish to escape our own subjectivities speak, I think, to our design and use of emergent AI technologies. Though we call it machine learning or creative AI, these terms deserve critical scrutiny in their new, non-human contexts. I believe that creative interactions between human and non-human actors have the potential to produce mutually revelatory encounters, and that systems like this one might facilitate that process. This project is one model for it.

[1] Dennis Oppenheim – Two Stage Transfer Drawing (1971) https://www.dennisaoppenheim.org/copy-of-new-page
[2] Rhoda Kellogg Child Art Collection http://www.early-pictures.ch/kellogg/archive/en/
[3] The Tate Britain on Art Brut https://www.tate.org.uk/art/art-terms/a/art-brut

This project was exhibited at the Robotics: Science and Systems (RSS) 2021 Workshop on Robotics x Arts.

Traversal of ChildGAN latent image space
GAN generated drawing of a boat
Robot renders GAN generated drawing; J renders interpretation
left: robot rendered image of GAN generated boat; right: child rendered interpretation
Spider Spider (machine generated image at left; child generated image at right)
line drawing of flower shape from GAN network
Drawing sampled from StyleGAN Trained on children’s drawings;
“Are you teaching the robot to draw a flower or a fox?”