December 9-11 @ 2nd IEEE AIVR Conference 2019 in San Diego, CA, USA
Paper submission deadline:
Notification of acceptance:
Camera-ready deadline:
Workshop date: December 9th, 2019
Ari Shapiro
Title: Digital humans: models of behavior and interactivity
Abstract: As techniques for capturing and generating realistic digital humans become more widely available, the need for realistic movement and behavior becomes more important. The Uncanny Valley effect is more pronounced for moving, as opposed to still, imagery, necessitating higher-fidelity motion replication, such as from motion capture, as well as higher-fidelity behavior models for synthetic movement. This talk explores my work in modeling both the appearance and behavior of digital humans, including capture, rigging, and interactivity.
Chloe LeGendre
Title: Multispectral Illumination in USC ICT's Light Stage X
Abstract: USC ICT's computational illumination system Light Stage X has been used for a variety of different techniques, from studio lighting reproduction to high-resolution facial scanning. In this talk, I'll describe how adding multispectral LEDs to the system has improved color rendition for a variety of such Light Stage techniques, while also enabling higher-resolution facial capture. I will conclude with opportunities for future work on human digitization leveraging multispectral illumination sources.
Kalle Bladin
Title: Automating mass production of digital avatars for VR
Abstract: This talk covers how the Vision and Graphics Lab at USC's ICT is leveraging the latest Light Stage technology to build a database of facial scans. Recent movement toward a convergence of visual quality in real-time and offline rendering, in conjunction with the massive rise of deep learning approaches for processing and recreating human data, has drastically simplified the generation of realistic avatars for VR, a capability previously reserved for high-end visual effects studios employing a multitude of highly specialized artists and engineers. We have developed a pipeline for scanning, preprocessing, and registering expressive facial scans to automate the building of a database that enables training machine learning algorithms to generate highly detailed and visually realistic avatars. This presentation will focus on the main obstacles confronted when building such a database and pipeline, aimed specifically at facial scan data but stretching further by combining multiple data sources and providing automatic rigging, animation, and rendering of a massive number of digital avatars.
Fabien Danieau
Title: Automatic Generation of 3D Facial Rigs
Abstract: Digital humans are key aspects of the rapidly evolving areas of virtual reality, augmented reality, virtual production, and gaming. Even outside of the entertainment world, they are becoming more and more commonplace in retail, sports, social media, education, health, and many other fields. This talk presents a fully automatic pipeline for generating facial rigs of high geometric and textural quality. The rigs are automatically equipped with facial blendshapes for animation. The steps of this pipeline, such as photogrammetry, landmarking, retopology, and blendshape transfer, are detailed. Then two applications are showcased: fast creation of VR avatars and generation of high-quality digital doubles.