One of the most important ways that we experience our environment is by manipulating it: we push, pull, poke, and prod to test hypotheses about our surroundings. By observing how objects respond to forces that we control, we learn about their dynamics. Unfortunately, regular video does not afford this type of manipulation – it limits us to observing what was recorded. The goal of our work is to record objects in a way that captures not only their appearance, but their physical behavior as well.
This work is mostly based on the paper “Image-Space Modal Bases for Plausible Manipulation of Objects in Video” by Abe Davis, Justin G. Chen, and Fredo Durand. For more information about that and other related publications, check out the Publications page. For videos about this and related work, check out the Videos page.