Vladimir is the founder and CEO of 3Lateral, a multidisciplinary company built around a passion for creating the appearance of life in a digital medium. His career focus is high-fidelity digital humans, spanning capture systems (3D and 4D scanning, appearance capture), modelling and articulation systems, databases, statistical modelling, and engine integration. 3Lateral has received several prestigious awards, including Most Innovative Entrepreneur by EY (2014) and Best Real-Time Graphics and Interactivity (SIGGRAPH 2016, as part of a team comprising Epic Games, Ninja Theory, Cubic Motion and 3Lateral). His company's portfolio includes many AAA game titles, among them Grand Theft Auto IV and V, Red Dead Redemption, Horizon: Zero Dawn, Star Citizen, Until Dawn, Battlefield 1 and The Order: 1886. We had an opportunity to hear about 3Lateral's behind-the-scenes work at the CGA Belgrade conference, where Vladimir exclusively premiered some of their latest developments.
Physical simulation has long been the approach of choice for generating realistic hair animations in CG. A constant drawback of simulation, however, is the necessity to manually set the physical parameters of the simulation model in order to get the desired dynamic behavior. To alleviate this, researchers have begun to explore methods for reconstructing hair from the real world and even to estimate the corresponding simulation parameters through the process of inversion. So far, however, these methods have had limited applicability, because dynamic hair capture can only be played back without the ability to edit, and solving for simulation parameters can only be accomplished for static hairstyles, ignoring the dynamic behavior. We present the first method for capturing dynamic hair and automatically determining the physical properties for simulating the observed hairstyle in motion. Since our dynamic inversion is agnostic to the simulation model, the proposed method applies to virtually any hair simulation technique, which we demonstrate using two state-of-the-art hair simulation models. The output of our method is a fully simulation-ready hairstyle, consisting of both the static hair geometry as well as its physical properties. The hairstyle can be easily edited by adding additional external forces, changing the head motion, or re-simulating in completely different environments, all while remaining faithful to the captured hairstyle.
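The core idea of the abstract — estimating a simulator's physical parameters so that the simulated result reproduces captured motion — can be illustrated with a deliberately simplified sketch. Everything below is an illustrative stand-in, not the paper's method: the "hair strand" is a single damped spring, the parameters are a hypothetical stiffness and damping, and the inversion is a brute-force grid search rather than the optimization a real system would use. What the sketch does share with the described approach is model-agnosticism: the inversion only calls the simulator as a black box and compares its output to the captured trajectory.

```python
import numpy as np

def simulate_strand(stiffness, damping, steps=200, dt=0.01):
    """Toy single-particle 'hair strand': a damped spring pulled toward
    its rest position, integrated with semi-implicit Euler. This stands
    in for a real (arbitrary) hair simulation model."""
    x, v = 1.0, 0.0  # initial displacement and velocity
    traj = []
    for _ in range(steps):
        a = -stiffness * x - damping * v
        v += a * dt
        x += v * dt
        traj.append(x)
    return np.array(traj)

def invert_parameters(captured, grid):
    """Model-agnostic inversion: choose the (stiffness, damping) pair
    whose simulated trajectory best matches the captured one, measured
    by a sum-of-squares error. The simulator is treated as a black box."""
    best, best_err = None, np.inf
    for k in grid:
        for c in grid:
            err = np.sum((simulate_strand(k, c) - captured) ** 2)
            if err < best_err:
                best, best_err = (k, c), err
    return best

# "Capture" a trajectory with hidden ground-truth parameters ...
captured = simulate_strand(stiffness=40.0, damping=2.0)
# ... then recover them by searching a coarse parameter grid.
grid = np.arange(0.5, 60.0, 0.5)
recovered = invert_parameters(captured, grid)
```

Because the recovered parameters (not just a replayed trajectory) are the output, the fitted strand can then be re-simulated under different forces or head motions — the editability the abstract emphasizes over playback-only capture.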