Pre-recorded Sessions: From 4 December 2020 | Live Sessions: 10 – 13 December 2020

4 – 13 December 2020

#SIGGRAPHAsia | #SIGGRAPHAsia2020

Featured Sessions

  • Ultimate Supporter
  • Ultimate Attendee

Date: Thursday, December 10th
Time: 10:30am - 12:00pm
Venue: Main Room


Autonomous Digital Humans and Artificial Intelligence

Speaker:
Doug Roble, Digital Domain, United States of America
Darren Hendler, Digital Domain, United States of America

Abstract: In the spring of 2019 we presented a real-time, photorealistic version of Doug that mimicked what the real Doug was doing at the annual TED Conference. Since then, we’ve been working on making him look even better, making it easier to create high-quality characters like him, and giving him the ability to interact with people on his own. Our goal was to create something that looked impeccable and could be driven by a client’s chatbot. In doing this, we had to build an entire autonomous framework to handle all aspects of how a digital human interacts with the world. Self-driving cars? Ha! We’re talking about self-driving people! Machine learning requires data, and we’ve spent considerable effort on our performance capture pipeline. This has had a huge impact on our VFX capabilities and has also made creating realistic, autonomous characters easier. We will discuss many aspects of what we’ve learned as we created this new technology.

Speaker(s) Bio:
As the Senior Director of Software R&D at Digital Domain, Doug leads a world-class team that develops software and conducts research to advance the artistry and technology for feature films and new media of all kinds. Doug has won two Academy Sci-Tech Awards: one for "Track," a groundbreaking computer vision system developed in 1993 and still in use today; and for Digital Domain’s fluid simulation system which set the standard for large-scale water effects when it was released in 2001. Doug is an active member of the Visual Effects branch of the Academy and is the chair of the Academy’s Sci/Tech Awards Committee.

Darren Hendler is a 20+ year veteran of the visual effects industry, where he has contributed his talents to over 18 feature films and scores of commercials. In his role as the Director of the Digital Human Group, Darren is responsible for spearheading new technology to create photo-real creatures and digital humans for feature films, real-time events and new technology platforms. Most recently, Darren completed work on Marvel’s Avengers: Endgame (2019) and Avengers: Infinity War (2018), where he was responsible for overseeing creature development as well as developing new technology to bring Thanos to life.


A Helping Hand: How can digital humans improve lives and accessibility?

Speaker:
Mike Seymour, fxguide, Motus Lab USYD

Abstract: Mike will discuss the development of high-end digital humans and how this work is merging with developments in neural rendering. In particular, he will explain the profound ways such digital humans can help people, with a specific focus on health: the work he and his team, along with key international collaborators, are doing on using photorealistic, individualised CGI digital humans to improve the lives and accessibility of young stroke survivors.

Speaker(s) Bio:
Mike Seymour is a researcher at the University of Sydney. Mike did his PhD researching Digital Humans as a new form of Human-Computer Interface. Mike is co-founder of fxguide.com and Director of the Motus Lab. His expertise and research interests cover the areas of Digital Humans, innovative UX and VR/AR research, the impact of emotion on human-computer interaction, and engaged research with industry partners, especially in the Media and Entertainment space. Mike has extensive experience in industry, having worked and lived in the UK and USA before returning to Sydney. He has previously chaired Real-Time Live! at SIGGRAPH Asia.


How to create your own photo-realistic avatar

Speaker:
Hao Li, Pinscreen, Inc., United States of America

Abstract: As we move toward a future where humans seamlessly interact with human-like virtual agents, existing solutions are either difficult or expensive to produce and often suffer from the Uncanny Valley effect. I will showcase how we overcome these challenges through our latest technological advancements at Pinscreen. I will present an end-to-end cloud-based solution for a fully autonomous and photoreal avatar, as well as our latest methods for making the digitization and personalization of avatars accessible to consumers. I will give a live and unscripted demonstration of our avatar and illustrate a few real use cases, from web-based virtual assistants and hologram-based virtual hosts to virtual fashion influencers.

Speaker(s) Bio:
Hao Li is CEO and Co-Founder of Pinscreen, a startup that builds cutting-edge AI-driven virtual avatar technologies. Before that, he was an Associate Professor of Computer Science at the University of Southern California, as well as the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and capturing their performances for immersive communication, telepresence in virtual worlds, and entertainment. His research involves the development of novel deep learning, data-driven, and geometry processing algorithms. He is known for his seminal work in avatar creation, facial animation, hair digitization, and dynamic shape processing, as well as his recent efforts in preventing the spread of malicious deepfakes. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of the top 35 innovators under 35 by MIT Technology Review in 2013 and was also awarded the Google Faculty Award, the Okawa Foundation Research Grant, as well as the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019. In 2020, he won the ACM SIGGRAPH Real-Time Live! “Best in Show” award. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).


Digital Humans Are Back! (Panel Discussion)

Speaker:
Christophe Hery, Facebook Reality Labs, United States of America
Mike Seymour, fxguide, Motus Lab USYD
Doug Roble, Digital Domain, United States of America
Darren Hendler, Digital Domain, United States of America
Hao Li, Pinscreen, United States of America

Abstract: Following the success of Tokyo 2018, digital humans are back at SIGGRAPH Asia. And this time, they have a clear mission. In the age of COVID-19 and social distancing, these avatars and clones have to entertain us, assist us, and help us communicate. After all, relatable discussions are what we, real humans, long for, even more so when isolation is forced upon us. We have invited researchers who will show the next iterations of virtual companions. These pioneers in the field of telepresence and autonomous agents, as well as visual effects and VR/AR practitioners, will not only present their work and approaches but also expand on the ethical aspects involved and how they envision a bright future for meaningful interactions with these digital humans.

Speaker(s) Bio:
Christophe Hery joined Facebook Reality Labs in 2019. Previously, he worked at Pixar, where he held the position of Senior Scientist. After writing new lighting models and rendering methods for Monsters University and The Blue Umbrella, Christophe continued heading the light transport research group in the studio. Christophe’s latest work includes Finding Dory, Coco and Toy Story 4. An alumnus of Industrial Light & Magic, Christophe previously served as a research and development lead, supporting the facility’s shaders and providing rendering guidance. He was first hired by ILM in 1993 as a senior technical director. During his career at ILM, he received two Technical Achievement Awards from the Academy of Motion Pictures Arts and Sciences.

Mike Seymour is a researcher at the University of Sydney. Mike did his PhD researching Digital Humans as a new form of Human-Computer Interface. Mike is co-founder of fxguide.com and Director of the Motus Lab. His expertise and research interests cover the areas of Digital Humans, innovative UX and VR/AR research, the impact of emotion on human-computer interaction, and engaged research with industry partners, especially in the Media and Entertainment space. Mike has extensive experience in industry, having worked and lived in the UK and USA before returning to Sydney. He has previously chaired Real-Time Live! at SIGGRAPH Asia.

As the Senior Director of Software R&D at Digital Domain, Doug leads a world-class team that develops software and conducts research to advance the artistry and technology for feature films and new media of all kinds. Doug has won two Academy Sci-Tech Awards: one for "Track," a groundbreaking computer vision system developed in 1993 and still in use today; and for Digital Domain’s fluid simulation system which set the standard for large-scale water effects when it was released in 2001. Doug is an active member of the Visual Effects branch of the Academy and is the chair of the Academy’s Sci/Tech Awards Committee.

Darren Hendler is a 20+ year veteran of the visual effects industry, where he has contributed his talents to over 18 feature films and scores of commercials. In his role as the Director of the Digital Human Group, Darren is responsible for spearheading new technology to create photo-real creatures and digital humans for feature films, real-time events and new technology platforms. Most recently, Darren completed work on Marvel’s Avengers: Endgame (2019) and Avengers: Infinity War (2018), where he was responsible for overseeing creature development as well as developing new technology to bring Thanos to life.

Hao Li is CEO and Co-Founder of Pinscreen, a startup that builds cutting-edge AI-driven virtual avatar technologies. Before that, he was an Associate Professor of Computer Science at the University of Southern California, as well as the director of the Vision and Graphics Lab at the USC Institute for Creative Technologies. Hao's work in Computer Graphics and Computer Vision focuses on digitizing humans and capturing their performances for immersive communication, telepresence in virtual worlds, and entertainment. His research involves the development of novel deep learning, data-driven, and geometry processing algorithms. He is known for his seminal work in avatar creation, facial animation, hair digitization, and dynamic shape processing, as well as his recent efforts in preventing the spread of malicious deepfakes. He was previously a visiting professor at Weta Digital, a research lead at Industrial Light & Magic / Lucasfilm, and a postdoctoral fellow at Columbia and Princeton Universities. He was named one of the top 35 innovators under 35 by MIT Technology Review in 2013 and was also awarded the Google Faculty Award, the Okawa Foundation Research Grant, as well as the Andrew and Erna Viterbi Early Career Chair. He won the Office of Naval Research (ONR) Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019. In 2020, he won the ACM SIGGRAPH Real-Time Live! “Best in Show” award. Hao obtained his PhD at ETH Zurich and his MSc at the University of Karlsruhe (TH).


 
