INTRODUCTION

Realistic representation of digital humans in AR/VR applications is only possible with the capture of high-quality data and appropriate rendering techniques. While accurate, relightable capture is required to produce assets for realistic avatars, real-time performance capture is also needed for applications such as teleconferencing and teleportation to succeed. At the same time, rendering of photo-real humans is just as important to the immersive experience in virtual scenes. This workshop provides a platform to share some of the most advanced human face/body capture systems, from pore-level high-resolution scanning to rapid motion capture, along with the art of data processing. It will also cover the novel rendering and environment lighting estimation techniques required in AR/VR, such as neural rendering.

We expect that the workshop will inspire novel ideas based on current practices in the field of rendering realistic digital humans and accelerate hardware and software development in the field. A major goal of this workshop is to bring researchers together and foster collaboration. It will also provide a good introduction for researchers who are interested in starting their own work in the field.

NEWS

Ari Shapiro will give a keynote during the workshop!

LOCATION

The workshop will take place during the IEEE 2nd International Conference on Artificial Intelligence & Virtual Reality (AIVR 2019). Check the conference website for relevant information.

SUBMISSION

Authors are invited to submit a technical workshop paper of at most 4 pages in double-column IEEE format, following the official IEEE Manuscript Formatting guidelines. All submissions will go through a double-blind peer-review process. Authors of accepted papers are expected to attend the conference and present their paper at the workshop.

All papers should be submitted via the EasyChair website under the track "IEEE AIVR 2019 - Workshop on From Capture to Rendering of Digital Humans for AR/VR".

Topics include, but are not limited to:

  • Capture systems (hardware / software)
  • Creation of Digital Humans
  • Rendering of Digital Humans
  • Motion capture
  • AR/VR experience with Digital Humans
  • Anything related to Digital Humans

Accepted workshop papers and special session papers will be published in the conference proceedings by IEEE Computer Society Press and included in the IEEE Xplore Digital Library.

RIGHTS

Work submitted to the conference is subject to the IEEE Intellectual Property Rights and Copyright policy. Please read the linked webpages carefully before submitting a contribution.

IMPORTANT DATES

Paper submission deadline: October 10th, 2019
Notification of acceptance: October 20th, 2019
Camera-ready deadline: October 28th, 2019
Workshop date: December 9th, 2019

WORKSHOP PROGRAM

Workshop Date: December 9, 2019

14:00 - 15:45 Session 4B: Workshop CRHD (part 1)
  • 14:00 - 14:10 Introduction
  • 14:10 - 14:55 Keynote Speaker: Ari Shapiro
  • 14:55 - 15:20 Invited Paper: Temporal Interpolation of Dynamic Digital Humans (Irene Viola, Jelmer Mulder, Francesca De Simone, and Pablo Cesar)
  • 15:20 - 15:45 Invited Talk: Fabien Danieau
15:45 - 16:15 Coffee Break

16:15 - 18:00 Session 5B: Workshop CRHD (part 2)
  • 16:15 - 16:45 Invited Talk: Chloe LeGendre
  • 16:45 - 17:00 Paper: Influence of Motion Speed on the Perception of Latency in Avatar Control (Ludovic Hoyet, Clément Spies, Pierre Plantard, Anthony Sorel, Richard Kulpa, and Franck Multon)
  • 17:00 - 17:15 Paper: The Design Process for Enhancing Visual Expressive Qualities of Characters from Performance Capture into Virtual Reality (Victoria Campbell)
  • 17:15 - 17:45 Invited Talk: Kalle Bladin
  • 17:45 - 18:00 Wrap up

INVITED SPEAKERS

Ari Shapiro
Title: Digital humans: models of behavior and interactivity
Abstract: As techniques for capturing and generating realistic digital humans become more widely available, the need for realistic movement and behavior becomes more important. The Uncanny Valley effect is more pronounced for moving, as opposed to still, imagery, necessitating higher-fidelity motion replication, such as from motion capture, as well as higher-fidelity behavior models for synthetic movement. This talk explores my work in modeling both the appearance and behavior of digital humans, including capture, rigging, and interactivity.
Chloe LeGendre
Title: Multispectral Illumination in USC ICT's Light Stage X
Abstract: USC ICT's computational illumination system, Light Stage X, has been used for a variety of different techniques, from studio lighting reproduction to high-resolution facial scanning. In this talk, I'll describe how adding multispectral LEDs to the system has improved color rendition for a variety of such Light Stage techniques, while also enabling higher-resolution facial capture. I will conclude with opportunities for future work on human digitization leveraging multispectral illumination sources.
Kalle Bladin
Title: Automating mass production of digital avatars for VR
Abstract: This talk covers how the Vision and Graphics Lab at USC's ICT is leveraging the latest Light Stage technology to build a database of facial scans. Recent movement toward a convergence of visual quality in real-time and offline rendering, together with the massive rise of deep learning approaches for processing and recreating human data, has drastically simplified the generation of realistic avatars for VR, something previously reserved for high-end visual effects studios employing a multitude of highly specialized artists and engineers. We have developed a pipeline for scanning, preprocessing, and registration of expressive facial scans to automate the building of a database that enables training machine learning algorithms to generate highly detailed and visually realistic avatars. This presentation will focus on the main obstacles confronted when building such a database and pipeline, aimed specifically at facial scan data but stretching further by combining multiple data sources and providing automatic rigging, animation, and rendering of a massive number of digital avatars.
Fabien Danieau
Title: Automatic Generation of 3D Facial Rigs
Abstract: Digital humans are a key aspect of the rapidly evolving areas of virtual reality, augmented reality, virtual production, and gaming. Even outside the entertainment world, they are becoming more and more commonplace in retail, sports, social media, education, health, and many other fields. This talk presents a fully automatic pipeline for generating facial rigs of high geometric and textural quality, automatically rigged with facial blendshapes for animation. The steps of this pipeline, such as photogrammetry, landmarking, retopology, and blendshape transfer, are detailed. Two applications are then showcased: fast creation of VR avatars and generation of high-quality digital doubles.

ORGANIZERS

CONTACT US

If you have any questions, please contact Yajie Zhao (zhao[at]ict.usc.edu), Fabien Danieau (fabien.danieau[at]interdigital.com), or Steve Tonneau (stonneau[at]ed.ac.uk).