Disney Releases Production Assets for R&D and Education

Disney Animation has released two production assets for use in computer graphics research, software development, and education.

The first data set is a large, highly detailed volumetric cloud, intended for a variety of uses ranging from volume-rendering research to movie and game production. The entire data set is available in the “cloud” package on Disney’s web site.
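
For readers who want to inspect the data, here is a minimal sketch using the OpenVDB Python bindings, on the assumption that the package ships the cloud as an OpenVDB volume; the file name below is a placeholder, not taken from the post:

```python
# Minimal inspection sketch (assumptions: OpenVDB format; the file name
# "wdas_cloud.vdb" is a placeholder for whatever the package contains).
import pyopenvdb as vdb

grids, file_metadata = vdb.readAll("wdas_cloud.vdb")
for grid in grids:
    bbox_min, bbox_max = grid.evalActiveVoxelBoundingBox()
    print(f"{grid.name}: {grid.activeVoxelCount()} active voxels, "
          f"index-space bounds {bbox_min} to {bbox_max}")
```

From there the density grid can be sampled or converted for whatever renderer is under test.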

Disney’s Practical Guide to Path Tracing

Path tracing is a method for generating digital images by simulating how light interacts with objects in a virtual world. The path of light is traced by shooting rays into the scene and tracking them as they bounce between objects. Path tracing gets its name from calculating the full path of light from a light source to the camera. Light can potentially bounce between many objects inside the virtual scene: when a ray of light hits a surface, it bounces and spawns new rays, so a path can consist of many rays. By collecting the contributions of the light source and the surfaces along all of the rays in a path, the amount of light arriving at the camera can be calculated. These calculations are used to produce the final image.
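
The core loop is small enough to sketch. The following is a minimal grayscale path tracer in Python; it is not Disney’s code, and the two-sphere scene, uniform hemisphere sampling, and all constants are illustrative assumptions:

```python
import math
import random

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy scene: a large "floor" sphere and a small emissive "lamp" sphere.
SCENE = [
    {"center": (0.0, -100.5, -1.0), "radius": 100.0, "albedo": 0.7, "emit": 0.0},
    {"center": (0.0, 1.5, -1.0), "radius": 0.5, "albedo": 0.0, "emit": 4.0},
]

def hit_sphere(origin, direction, sphere):
    """Return the ray parameter t of the nearest hit, or None on a miss."""
    oc = sub(origin, sphere["center"])
    b = dot(oc, direction)
    c = dot(oc, oc) - sphere["radius"] ** 2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def sample_hemisphere(normal):
    """Pick a uniformly random bounce direction above the surface."""
    while True:
        d = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if 0.0 < dot(d, d) <= 1.0:
            d = normalize(d)
            return d if dot(d, normal) > 0.0 else tuple(-c for c in d)

def trace(origin, direction, max_bounces=5):
    """Follow one light path, summing the emission gathered along it."""
    radiance, throughput = 0.0, 1.0
    for _ in range(max_bounces):
        hits = [(hit_sphere(origin, direction, s), s) for s in SCENE]
        hits = [(t, s) for t, s in hits if t is not None]
        if not hits:
            break  # the ray escaped the scene
        t, sphere = min(hits, key=lambda h: h[0])  # nearest surface
        radiance += throughput * sphere["emit"]    # light emitted toward us
        # Bounce: spawn a new ray from the hit point.
        point = tuple(o + t * d for o, d in zip(origin, direction))
        normal = normalize(sub(point, sphere["center"]))
        direction = sample_hemisphere(normal)
        # Diffuse reflection: albedo/pi BRDF, cosine term, and uniform
        # sampling density 1/(2*pi) combine into a factor 2*albedo*cos.
        throughput *= 2.0 * sphere["albedo"] * dot(direction, normal)
        origin = point
    return radiance

# Averaging many noisy paths per pixel gives the final pixel value.
samples = [trace((0.0, 0.0, 0.0), normalize((0.0, -0.3, -1.0))) for _ in range(256)]
print(f"radiance estimate: {sum(samples) / len(samples):.3f}")
```

Production path tracers build on this loop with importance sampling, explicit light sampling, and Russian-roulette termination to reduce the noise of the naive estimator.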

Simulation-Ready Hair Capture by Disney

Physical simulation has long been the approach of choice for generating realistic hair animations in CG. A persistent drawback of simulation, however, is the need to manually set the physical parameters of the simulation model in order to get the desired dynamic behavior. To alleviate this, researchers have begun to explore methods for reconstructing hair from the real world and even estimating the corresponding simulation parameters through inversion. So far, however, these methods have had limited applicability: dynamic hair capture can only be played back without the ability to edit, and simulation parameters can only be solved for static hairstyles, ignoring dynamic behavior. We present the first method for capturing dynamic hair and automatically determining the physical properties for simulating the observed hairstyle in motion. Since our dynamic inversion is agnostic to the simulation model, the proposed method applies to virtually any hair simulation technique, which we demonstrate using two state-of-the-art hair simulation models. The output of our method is a fully simulation-ready hairstyle, consisting of both the static hair geometry and its physical properties. The hairstyle can be easily edited by adding external forces, changing the head motion, or re-simulating in completely different environments, all while remaining faithful to the captured hairstyle.
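
The paper’s capture pipeline is far more involved, but the idea of simulation-agnostic inversion can be sketched in a few lines: treat the simulator as a black box and search for physical parameters whose simulated motion matches the captured one. The toy below stands in for that idea only; the damped single-oscillator “strand”, the synthetic “captured” trajectory, and all constants are hypothetical, not the paper’s method:

```python
import numpy as np
from scipy.optimize import minimize

def simulate_strand(params, steps=120, dt=1.0 / 24.0):
    """Toy 'strand': a damped oscillator standing in for a real hair
    simulator. Any black-box simulator could be dropped in here."""
    stiffness, damping = params
    x, v = 1.0, 0.0  # initial displacement and velocity
    trajectory = np.empty(steps)
    for i in range(steps):
        a = -stiffness * x - damping * v  # toy restoring + drag forces
        v += a * dt
        x += v * dt
        trajectory[i] = x
    return trajectory

# Synthetic "captured" motion generated with hidden ground-truth parameters.
observed = simulate_strand((12.0, 0.8))

def objective(params):
    # Squared error between simulated and captured trajectories.
    return np.sum((simulate_strand(params) - observed) ** 2)

# Derivative-free search, since the simulator is treated as a black box.
result = minimize(objective, x0=[5.0, 0.1], method="Nelder-Mead")
print("recovered stiffness, damping:", result.x)
```

Because the optimizer only ever calls the simulator as a function, swapping in a different strand model changes nothing about the fitting loop, which is the sense in which such inversion is model-agnostic.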