with Mike McCrea.
Rover is a mechatronic imaging device inserted into quotidian space, transforming the sights and sounds of the everyday into dreamlike cinematic experience. A kind of machine observer or probe, it knows very little of what it sees.
Using computational light field capture, Rover records the light incident from a scene. Though bounded physically by the system of motors and belts that delimits its two-dimensional plane of travel, it nonetheless explores the world before it. It strains outward from this grid into space it cannot readily inhabit: our space. It records sequential images to document where it is, when it is. Later, through an algorithmic manipulation of those images, we witness its search through a past of imperfect moments, synthesizing dreamlike views of the spaces and scenes it previously inhabited.
While looking, Rover also listens. It records and extracts audio using machine-listening techniques, retrieving sounds we would otherwise dismiss. Just as images are ceaselessly churned by the device, sounds are revisited and reshaped until they are no longer commonplace.
The result is a kind of cinema that follows the logic of dreams: suspended but still mobile, familiar yet infinitely variable in detail. Indeed, the places we visit through Rover's motility are the kinds of places we find ourselves in dreams: cliffside, seaside, bedside, adrift and unable to return home, or trapped in the corners of those homes.
The imagery for this iteration of Rover was captured with a custom mechatronic light field system, designed to be portable and scalable according to the framing and depth required for each scene.
By gathering hundreds of images in a structured way, we are able to create a synthetic camera "aperture" which allows us to resynthesize a scene after the fact, re-focusing, obscuring, and revealing points of interest in real time. The result is a non-linear hybrid between photography and video.
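The shift-and-average idea behind such a synthetic aperture can be sketched in a few lines. This is a minimal numpy illustration under assumed conventions (a planar capture grid, grayscale images, and a simple linear grid-to-pixel-shift mapping), not the project's actual software; the `refocus` helper and its parameters are illustrative:

```python
import numpy as np

def refocus(images, offsets, focus_depth):
    """Shift-and-average synthetic aperture refocusing (sketch).

    images: list of 2-D grayscale arrays captured on a planar grid.
    offsets: (u, v) grid position of each capture, in grid units.
    focus_depth: assumed scale factor mapping grid position to pixel
        shift; scene points whose parallax matches it come into focus,
        while everything else is averaged into blur.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, offsets):
        dy = int(round(v * focus_depth))
        dx = int(round(u * focus_depth))
        # align this view for the chosen depth, then accumulate
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return acc / len(images)
```

Sweeping `focus_depth` after capture is what lets points of interest be revealed or obscured without re-shooting the scene.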
In a somewhat analogous process, audio is recorded at the site of each light field capture and analyzed for events and textures of interest using music information retrieval (MIR), a family of audio analysis and classification techniques. Based on the features discovered in the recordings, sonic moments or textures that might otherwise go unnoticed are exposed and recomposed in concert with the visual system.
Some of the techniques and technologies used include:
- Music Information Retrieval for audio classification (using SCMIR by Nick Collins)
- K-means clustering for ordering sound according to self-similarity
- Visual Structure from Motion for recovering image locations and rectifying all images to a common image plane
- Custom software driving the resynthesis of the light field scenes (controlled via OSC from SuperCollider)
- Real-time audio granulation software written in SuperCollider
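The feature-extraction and clustering step above can be sketched compactly. This is a minimal numpy illustration, not the project's SCMIR/SuperCollider pipeline: the two features chosen (RMS energy and zero-crossing rate), the frame length, and the deterministic k-means initialization are all illustrative assumptions standing in for the richer descriptors an MIR toolkit would provide:

```python
import numpy as np

def frame_features(signal, frame_len=512):
    """Per-frame RMS energy and zero-crossing rate (stand-in features)."""
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)
    return np.stack([rms, zcr], axis=1)

def kmeans(X, k=2, iters=20):
    """Minimal k-means: returns one cluster label per row of X."""
    # deterministic init: spread starting centers along the first feature
    order = np.argsort(X[:, 0])
    centers = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(iters):
        # assign each frame to its nearest center
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
        # move each center to the mean of its assigned frames
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Sorting frames by cluster label (and by distance to the cluster center within each label) is one simple way to order sound by self-similarity, so that like textures can be revisited together.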
Rover was presented at the Black Box 2.0 Festival, May 28 – June 7, 2015.