Interview: Oliver Pavićević, 3D programmer

It is hard to describe Oliver Pavićević: 3D programmer, 3D modeller, university lecturer, bass guitarist in the goth band Stardom, VJ, professional photographer; should we also mention that he built his own bass guitar? To define the words “multimedia artist”, it is enough to say: Oliver Pavićević.

  • For a start, introduce yourself to our readers.

I was born in Belgrade, where I lived until 1998, when I moved to Milan to study New Media Design at the Nuova Accademia di Belle Arti academy. That is where I specialized in multimedia directing. I taught myself to program by experimenting with Virtools, a virtual reality authoring tool. I then worked as a teaching assistant and lecturer at my faculty. Nowadays I mostly do interactive design for events and scenography.

  • You are lecturing at a university in Milan?

I am currently teaching at the Cattolica University in Milan, at the Faculty of Humanities, where I teach a course on virtual reality together with Professor Simone Tosoni. The course does not concentrate so much on the technical or artistic aspects of VR as on the sociological aspects of this new technology. It is one of the first courses of this type in Italy.

  • You crossed over to the Unity 3D platform, started researching VR and finished VR Panorama 360 Renderer, which is one of the most successful plug-ins on the Unity Asset Store. How difficult was it to write the code for a plug-in of this scope?

I originally programmed this plug-in for my own needs. Having worked on Oculus as much as I did, the main problem was that Oculus needs a PC, joysticks, cameras, a bunch of equipment that you must take with you and assemble at every presentation. I needed something that could turn any VR project into a 360 stereo video without using stitching software, which is slow and doesn’t give the expected, satisfying results. I rolled up my sleeves and, in my spare time, over the course of two or three weeks, made the first functional version, which was enough to present my work quickly and efficiently on Gear VR.

At that moment, neither YouTube nor Facebook had options for uploading 360 videos. When I posted my work-in-progress on a few forums, people got so interested that I received a bunch of requests to put it on sale. The problem was turning a plug-in made for my personal needs into one that would be user-friendly, stable and reliable across all platforms. I put a lot of effort into additional optimization, compatibility and the user interface, and each day I tried to add new functions, so the plug-in evolved into an offline rendering system, even for standard videos. Thanks to the Unity 5 rendering engine, which has excellent GI, and with the help of a good GPU and SSD disks, it is now possible to render photo-realistic 4K (standard, not 360) animations in record time, seriously close to real-time rendering.

One good example of this technique in use was the promotion of the new Fiat 500, for which I did all of the animations for a 270-degree screen on my own. It was a Cave system that required many animations across multiple screens on a very short deadline, something that would have been impossible with classic offline systems without a big render farm. The result was so good that I even received the prestigious Italian Best Event Award 2015. I am currently working on a new version of VR Panorama which will carry the RT label and will be able to render 360 stereo in real time at Full HD resolution. For those who have capture/stream cards and a good internet connection, it will be able to stream online to multiple devices such as Gear VR and Google Cardboard.
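For readers curious how a 360 stereo frame can come straight out of the engine with no stitching, here is a minimal sketch of the general idea using Unity's later built-in cubemap-to-equirectangular API (Camera.RenderToCubemap and RenderTexture.ConvertToEquirect). It is only an illustration of the approach, not the actual VR Panorama code; the resolutions, eye separation and the omitted encoding step are assumptions.

```csharp
using UnityEngine;

// Minimal sketch: render the scene into left/right-eye cubemaps and project
// them into one over/under equirectangular frame. Illustrative only, not the
// actual VR Panorama implementation.
public class Stereo360Capture : MonoBehaviour
{
    public Camera captureCamera;     // camera used for the capture (assumed to exist in the scene)
    public int cubemapSize = 2048;   // per-face resolution (assumption)
    public int outputWidth = 4096;   // equirect width (assumption)
    public int outputHeight = 4096;  // over/under stereo: 2 x 2048

    RenderTexture cubeLeft, cubeRight, equirect;

    void Start()
    {
        cubeLeft  = new RenderTexture(cubemapSize, cubemapSize, 24) { dimension = UnityEngine.Rendering.TextureDimension.Cube };
        cubeRight = new RenderTexture(cubemapSize, cubemapSize, 24) { dimension = UnityEngine.Rendering.TextureDimension.Cube };
        equirect  = new RenderTexture(outputWidth, outputHeight, 0);
    }

    void LateUpdate()
    {
        // Render each eye into its own cubemap (63 = all six faces).
        captureCamera.stereoSeparation = 0.064f; // interpupillary distance in meters (assumption)
        captureCamera.RenderToCubemap(cubeLeft,  63, Camera.MonoOrStereoscopicEye.Left);
        captureCamera.RenderToCubemap(cubeRight, 63, Camera.MonoOrStereoscopicEye.Right);

        // Project the cubemaps into the top/bottom halves of one equirect frame.
        cubeLeft.ConvertToEquirect(equirect,  Camera.MonoOrStereoscopicEye.Left);
        cubeRight.ConvertToEquirect(equirect, Camera.MonoOrStereoscopicEye.Right);

        // From here the frame would be read back and handed to a video encoder
        // (omitted: ReadPixels plus an external encoder).
    }
}
```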

  • The aforementioned plug-in is also used for shooting music videos; what are the advantages of this way of working?

It is a slightly expanded system: a camera rig that I built with an Oculus Rift mounted on it, so I could get real-time tracking of the camera relatively quickly. I developed this system so that I could do real-time compositing and have a preview of the finished scene at the moment of shooting. Usually I shoot actors on a green screen, and in Unity I have developed my own real-time chroma key, which lets me place the actors directly into the 3D environment. This approach is intuitive because it is possible to move the camera while the musicians play or the actors act. Besides, this technique is great because it makes the lighting easier at the moment when you need to composite real-life characters as faithfully as possible into a virtual scene.
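The keying itself comes down to measuring how far each pixel's color is from the green of the screen and turning that distance into transparency. The sketch below shows that per-pixel logic on the CPU for readability; it is not Oliver's actual shader, and in a real-time setup the same math would run on the GPU over the live camera texture. The key color and thresholds are assumed values.

```csharp
using UnityEngine;

// Minimal CPU sketch of the chroma-key idea: distance from the key color in
// chroma space becomes alpha. A real-time system would do this in a fragment
// shader on the live camera feed instead.
public static class ChromaKeySketch
{
    // Key color and tolerances are illustrative values, not production settings.
    static readonly Color keyColor = new Color(0.0f, 0.8f, 0.2f);
    const float fullKeyDistance  = 0.15f; // at or below this distance the pixel is removed
    const float fullKeepDistance = 0.40f; // at or above this distance the pixel is kept

    public static Color Key(Color src)
    {
        // Compare only the chroma, ignoring brightness, so shadows on the
        // green screen are still keyed out.
        Vector2 srcChroma = ChromaOf(src);
        Vector2 keyChroma = ChromaOf(keyColor);
        float dist = Vector2.Distance(srcChroma, keyChroma);

        float alpha = Mathf.InverseLerp(fullKeyDistance, fullKeepDistance, dist);
        return new Color(src.r, src.g, src.b, alpha);
    }

    // Very rough YCbCr-style chroma: color channels minus luma.
    static Vector2 ChromaOf(Color c)
    {
        float luma = 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
        return new Vector2(c.b - luma, c.r - luma);
    }

    // Example: key an entire frame grabbed from the camera feed.
    public static void KeyTexture(Texture2D frame)
    {
        Color[] pixels = frame.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
            pixels[i] = Key(pixels[i]);
        frame.SetPixels(pixels);
        frame.Apply();
    }
}
```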

  • VR (Virtual Reality) or AR (Augmented Reality)?

Mixed Reality.

All jokes aside, I think VR will develop into several mediums that are connected to each other, and we need to analyze the differences carefully before we can define anything, scientifically or linguistically. Take 360 stereo cinema, which by definition is neither VR nor AR. And it isn’t cinematography either. Calling that medium VR is already accepted, but it is a mistake: VR is a simulation that is immersive and demands the possibility of interaction with the virtual world. In 360 stereo movies that interaction doesn’t exist, though the factor of immersion is there. On the other hand, FPS games have interaction and are much closer to VR than 360 cinema, but because they are played on 2D screens there is no real contact with that reality. From that point of view, 360 cinema is closer to cinematography, yet it isn’t cinematography either, because we cannot apply the definition of cinematography, which is based on subtraction rather than addition. By subtraction I mean that a film is built on subtracting information: with montage we compress time, with optics we zoom in and out, focus on details, change the point of view or blur the background. None of those techniques can be used in 360 cinema, which is closer to theater and will demand a return to some old techniques and a different way of acting, editing, shooting, lighting and effects. I am talking about 360 cinema because it is the most accessible VR medium, and I think it will be the first step toward real VR for the masses and the stepping stone of VR’s commercial future.

  • You are developing your own hologram; tell us more about that project.

Actually, it isn’t a real hologram but a mixed-reality hologram. It is the same technique I use for video shoots, but in this case I place virtual objects into a real scene and interact with them through Kinect. The idea is to use this technique in combination with infographics for presentations at big events, where until now PowerPoint presentations have been used; instead of clicking buttons, in the future presenters will be able to move slides and manipulate 3D objects in space. I am also developing this technology for musical performances, where a musician could “play” and animate virtual 3D forms. The whole idea is, in effect, a test of what HoloLens and similar technologies will bring us.
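As an illustration of “manipulating 3D objects in space”, here is a rough sketch of how a tracked hand position could drive a virtual object in Unity. The Kinect input is hidden behind a hypothetical IHandTracker interface, since the concrete sensor and SDK calls are not described in the interview; the rotation mapping is likewise an assumption.

```csharp
using UnityEngine;

// Hypothetical abstraction over whatever sensor provides hand positions
// (Kinect in Oliver's setup). Not a real SDK interface.
public interface IHandTracker
{
    bool HandVisible { get; }
    Vector3 HandPositionWorld { get; }  // tracked hand position mapped into scene space
}

// Rotates a presentation object based on the tracked hand's movement,
// roughly the "manipulate 3D objects in space" idea.
public class HandManipulatedObject : MonoBehaviour
{
    public MonoBehaviour trackerSource;  // any component implementing IHandTracker (assumption)
    public float rotationSpeed = 200f;   // degrees per meter of hand travel (illustrative)

    IHandTracker tracker;
    Vector3 lastHandPos;
    bool wasVisible;

    void Start()
    {
        tracker = trackerSource as IHandTracker;
    }

    void Update()
    {
        if (tracker == null || !tracker.HandVisible)
        {
            wasVisible = false;
            return;
        }

        Vector3 handPos = tracker.HandPositionWorld;
        if (wasVisible)
        {
            // Horizontal hand movement spins the object; vertical movement tilts it.
            Vector3 delta = handPos - lastHandPos;
            transform.Rotate(Vector3.up,   -delta.x * rotationSpeed, Space.World);
            transform.Rotate(Vector3.right, delta.y * rotationSpeed, Space.World);
        }

        lastHandPos = handPos;
        wasVisible = true;
    }
}
```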

  • What is motion sickness in VR and how can it be avoided?

Luckily, I think it is a thing of the past. Today’s HMDs have low latency and an excellent refresh rate combined with positional tracking, and together these factors help remove motion sickness. Some people are more affected by it than others. My opinion is that people should do less VR in the beginning and give the brain time to recalibrate itself and get used to VR. The most important thing is not to use an HMD with demos on configurations that aren’t optimized and don’t perform well.

  • You have researched presenters as characters in VR. What is their advantage?

The idea of virtual guides is an old one; one of the good examples is Max Headroom, from the show of the same name. I did some serious experiments in that field around 2003-2004, trying to generate lifelike character animations from speech analysis. Kinect, which today would be used for something like that, didn’t exist at the time, but perhaps that was for the best. I think the idea of a virtual character who runs a show or reads the news probably isn’t trending or technologically attractive anymore, but similar systems will find wider use in VR worlds where there is a need for lifelike avatars. Research in that field is very interesting even today because it unites animation with psychology and sociology. I am in the process of experimenting with some AI behaviors based on the analysis of human behavior. One good example is how an AI avatar in a VR world would behave toward us based on body language. VR enables new things here, because it is easier to read the user’s motion through Kinect or through eye tracking (for this I use the Tobii EyeX eye-tracking technology). In our social relations and conventions there are rituals that are defined as accepted or not, and depending on behavior we can create realistic behaviors and animations. An avatar who doesn’t know us might look at us when it senses our presence, but the moment eye contact is made (with the help of eye tracking we have precise information about where the user is looking), it will change its behavior and look somewhere else. How long does that game go on before eye contact is made and some kind of communication starts? How will our looks and motions influence the further behavior of the character? For example, if there is a female character in front of us and we look not into her eyes but at her chest, how would she react? What will the reaction be when we step closer? All of those are factors that can be researched and worked on, giving more character to the virtual person, even without verbal communication.
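To make the eye-contact “game” described above concrete, here is a small sketch of the kind of reaction logic involved: the avatar watches the user and, once mutual eye contact has lasted long enough, breaks it by looking away. The user's gaze is fed in as a simple world-space point, because the actual Tobii EyeX calls are device-specific; the timings and the look-away motion are assumptions.

```csharp
using UnityEngine;

// Sketch of a gaze-aware avatar: it notices the user, and when mutual eye
// contact lasts too long it breaks the contact by looking away.
public class GazeAwareAvatar : MonoBehaviour
{
    public Transform avatarHead;             // bone or object that gets aimed
    public Transform userHead;               // user's head position in the scene
    public float eyeContactRadius = 0.15f;   // how close the user's gaze point must be to the avatar's eyes (meters, assumption)
    public float comfortableContact = 1.5f;  // seconds of eye contact before the avatar looks away (assumption)
    public float lookAwayDuration = 2.0f;

    // Fed every frame from the eye tracker, already mapped into world space by the caller.
    public Vector3 userGazePoint;

    float contactTime;
    float lookAwayTimer;

    void Update()
    {
        bool userLooksAtMe = Vector3.Distance(userGazePoint, avatarHead.position) < eyeContactRadius;

        if (lookAwayTimer > 0f)
        {
            // Currently avoiding eye contact: look off to the side.
            lookAwayTimer -= Time.deltaTime;
            AimHead(userHead.position + userHead.right * 1.5f);
            return;
        }

        if (userLooksAtMe)
        {
            contactTime += Time.deltaTime;
            AimHead(userHead.position);            // return the gaze
            if (contactTime > comfortableContact)
            {
                lookAwayTimer = lookAwayDuration;  // contact held too long: break it
                contactTime = 0f;
            }
        }
        else
        {
            contactTime = 0f;
            AimHead(userHead.position);            // casually watch the user
        }
    }

    void AimHead(Vector3 target)
    {
        Quaternion want = Quaternion.LookRotation(target - avatarHead.position);
        avatarHead.rotation = Quaternion.Slerp(avatarHead.rotation, want, Time.deltaTime * 3f);
    }
}
```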

  • We were all struck by the news of Microsoft dropping Kinect, but you still use its capabilities. Why?

Kinect hasn’t been pulled; Microsoft simply gave buyers the option of an Xbox without it. It is a logical move, and I don’t think the time of Kinect (and similar systems) has passed; on the contrary, Kinect’s time is yet to come. It still has some issues, but it is a technology that will become more serious over time and will be used less for gaming and more in other spheres, such as security, 3D scanning, AR, and production and compositing for 3D or 360 movies. Most importantly, this technology will find its place in the mobile industry. Microsoft used it in HoloLens, Meta in their AR HMD, and Google released Project Tango, which is basically a tablet with a 3D sensor like the one Kinect has.

It was very interesting, especially because the client gave us a free hand and agreed to my idea of using a system that had not been used before. It is video-mapping software (based on Unity 5) that I programmed especially for that occasion; it rendered all the 3D animations in real time on multiple LED screens, running on a relatively normal workstation built around a Titan Z card, with Android devices used as the control surface. Part of the animations were driven by an AI that analyzes the sound and changes the colors, the lighting and the camera moves, while tablets and phones could be used to trigger global animations or influence the AI algorithm. At first sight it is a complex system, but it is actually very easy to use, because the controls were mobile and everything could be controlled over Wi-Fi from any part of the room. Of course, at large events like these the decision to use new technology is a risk, but without risk there is no development.
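In its simplest form, the “AI that analyzes the sound and changes the colors and lighting” can be reduced to reading the audio spectrum each frame and mapping bands of it to scene parameters. The sketch below does exactly that with Unity's AudioListener.GetSpectrumData; the band split and the light mapping are assumptions, far simpler than what ran at the actual event.

```csharp
using UnityEngine;

// Simplest possible audio-reactive rig: sample the spectrum every frame and
// drive a light's color and intensity from the low and high bands.
public class AudioReactiveLight : MonoBehaviour
{
    public Light stageLight;
    public float sensitivity = 40f;  // illustrative gain

    readonly float[] spectrum = new float[256];

    void Update()
    {
        // Spectrum of everything the AudioListener currently hears.
        AudioListener.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        float bass = 0f, treble = 0f;
        for (int i = 0; i < 16; i++)    bass   += spectrum[i];  // low bins (assumed band split)
        for (int i = 128; i < 256; i++) treble += spectrum[i];  // high bins

        // Bass drives brightness, treble shifts the hue toward cyan.
        stageLight.intensity = Mathf.Lerp(stageLight.intensity, bass * sensitivity, Time.deltaTime * 8f);
        stageLight.color = Color.Lerp(Color.red, Color.cyan, Mathf.Clamp01(treble * sensitivity));
    }
}
```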

  • In 2007, you completed an online virtual museum. Can you tell us more about that project?

That was one of the pioneering attempts at Web 3D: a site/portal project that I made with my colleagues Claudio Dalla Bernandina and Andrea Lavezzoli using the Virtools platform. Everything started with the idea of a virtual museum of paintings and sculptures. For that occasion I made a system that would handle scanning the surface of, for example, an oil painting and build a material, with the help of shaders, that realistically describes the painting’s surface, where you can see the brush strokes and micro-structure (normal maps, glossiness, specular): details that are impossible to see in an ordinary photograph. But to realize that kind of project it was necessary to have access to historically important paintings, so the first move was to start with a simpler project, a photo exhibition. We managed to put on six exhibitions with important international and Italian photographers, but in the end, despite high interest, everything fell through because of a lack of funding. Today the story would probably be different, because it is much easier to do something like this with crowd-funding.
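For a sense of what “a material that realistically describes the painting’s surface” means in practice, here is a tiny sketch of wiring scanned maps (albedo, normal, specular/glossiness) into a material. The museum project itself was built in Virtools, so this is only a present-day Unity equivalent using the Standard (Specular setup) shader's property names.

```csharp
using UnityEngine;

// Builds a material for a scanned painting from the captured maps:
// color, normal (brush-stroke relief) and specular/glossiness.
// Unity Standard-shader equivalent of the idea; the original project used Virtools.
public static class ScannedPaintingMaterial
{
    public static Material Build(Texture2D albedo, Texture2D normalMap, Texture2D specGloss)
    {
        var mat = new Material(Shader.Find("Standard (Specular setup)"));

        mat.SetTexture("_MainTex", albedo);          // scanned color of the painting

        mat.SetTexture("_BumpMap", normalMap);       // micro-structure / brush strokes
        mat.EnableKeyword("_NORMALMAP");

        mat.SetTexture("_SpecGlossMap", specGloss);  // varnish and paint reflectivity
        mat.EnableKeyword("_SPECGLOSSMAP");

        return mat;
    }
}
```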

  • You are a photographer in your free time; what are your biggest successes in that domain?

Photography is more of a side activity for me, a childhood love, but I have never thought of it as a proper profession. Sometimes I would shoot in various clubs in Milan, and I had the opportunity to do a couple of shoots for Vogue magazine. Unfortunately, over the last year I have found it hard to make time for photography.

Thank you so much for your time and for the privilege of this interview.

You can contact Oliver via his Facebook page.
