Pre-recorded Sessions: From 4 December 2020 | Live Sessions: 10 – 13 December 2020
4 – 13 December 2020
#SIGGRAPHAsia | #SIGGRAPHAsia2020
Date/Time: 04 – 13 December 2020
All presentations are available in the virtual platform on-demand.
Abstract: In this paper, we propose a robust low-cost motion-capture (mocap) system with sparse sensors. Our system effectively handles sensor drift, noise, and signal-loss issues.
Abstract: We present a casual end-to-end system to capture and visualize real-world VR experiences. We use depth maps derived from multi-plane images in an image-based blending method that produces high-quality visual results.
Abstract: An approach to improving the visibility of virtual renderings in optical see-through augmented reality using colored cast shadows. The main advantage is giving users a strong visual cue for better spatial understanding without relying on parallax effects in situations where the user's movements are restricted.
Abstract: We propose a virtual large-scale 3D urban road environment model that can be used to develop control algorithms for autonomous and electric vehicles using open data and a game engine.
Abstract: This study developed a virtual space globe that displays the spatial information of the universe in three dimensions using a game engine and the Hipparcos catalog.
Abstract: We propose a system that requires minimal work for a cartoon artist to develop a virtual reality-based cartoon.
Abstract: This research creates an interactive dollhouse with projection mapping and sensors. It projects different images based on values acquired from distance and pressure sensors.
Abstract: We present a novel data-driven approach for automatic specular highlight removal from a single image.
Abstract: We extend deferred neural rendering to extrapolate to novel viewpoints more smoothly. The deferred rendering approach has great potential for real-time applications such as VR due to its fast inference time, but the representation is not yet ready for practical applications, mostly because it lacks editing capabilities.
Abstract: We present a novel approach to reconstructing a high-fidelity 3D face model from a single RGB image. We propose an expression-related detail-recovery scheme and a facial-attribute representation.
Abstract: Digital Twin of the Australian Square Kilometre Array Pathfinder (ASKAP) - an extended reality teleoperating framework for telescope monitoring.
Abstract: In this paper we present an extension to Direct Delta Mush that re-integrates non-rigid joint transformations into the algorithm to achieve the desired deformation behavior for squash and stretch animation.
Abstract: In this paper, we present three interaction modes that arouse a sense of social presence for the audience in a VR performance. The modes can be categorized as individual expression (Danmaku commenting) and group effects (empathic atmosphere and content-related cues).
Abstract: The paper presents a deep learning architecture to upsample vertices and their associated features, along with a novel feature loss function designed to accommodate training on both vertices and their associated features.
Abstract: People have to search carefully to find "lucky four-leaf clovers" because of their rarity and the difficulty of distinguishing them. We designed a solution by integrating technologies into this human behavior.
Abstract: We propose a system that can modify the facial expressions of people (or even statues!) from just a few input views captured with a phone.
Abstract: This work presents a method for generating three-dimensional (3D) origami folding animations via latent-space interpolation. The method can generate smooth animation sequences from 3D point cloud data using deep learning.
Abstract: This study presents a process for computing the cost function of a Motion Matching system in parallel across multiple GPU threads.
Abstract: In this work, a new method for 3D modelling and animation inspired by natural processes is introduced, offering a novel growth-based solution using stem voxels encoded in digital-DNA structures.
Abstract: In this paper, we conduct a preliminary study investigating the potential use of a consumer-level brainwave headset, and explore virtual character interaction to enhance immersive storytelling.
Abstract: Modeling backward movements from body sensors in a virtual reality environment
Abstract: This paper proposes a novel immersive virtual 3D body-painting system that provides the various drawing tools and paint effects used in conventional body painting, supporting high-quality work at both the concept-design and pre-production stages. We analyzed the drawing effects of the airbrush and painting brush in collaboration with body-painting experts, and deliver these effects through GPU-based real-time rendering. Our system also provides users with the management functions, such as save/load and undo, that they need to create works in virtual reality.
Abstract: We propose a fully automatic method for indoor scene semantic modeling. The result has been applied to a VR application and is precise enough for haptic touch in VR.
Abstract: We propose a novel interactive method of 3D model generation from character illustrations.
Abstract: We propose an annotation interface for labelling instance-aware semantic labels on full panoramic images, which outperforms other annotation tools in terms of efficiency and quality.
Abstract: An educational platform for engaging 14–140 year-olds with music and technology. The focus is on inexpensively providing entertaining challenges that promote creative problem solving, collaboration, and programming using visual apparatus.
Abstract: Drawing on recent studies in Game Cartography, Emotional Cartography, and Performative Cartography, this paper presents “Playable Cartography” to describe emerging creative practices that use interactive cartographic interfaces in contemporary locative mobile experiences.
Abstract: We performed a particle-based simulation of blood flow and pressure change in the heart, with aorta and left-ventricle models generated from real CT data, using two types of forces for the isovolumetric contraction and for reducing the rigidity of the aortic valve.
Abstract: As digital material for infant education, we propose a stereoscopic gimmick picture book combining a projection-mapping system with a trick-art technique based on the principle of optical illusion.
Abstract: In this paper, we propose a method to refine sparse point clouds of complex structures generated by Structure from Motion in order to achieve improved visual fidelity of ancient Indian heritage sites. We compare our results with the state-of-the-art upsampling networks.
Abstract: SHARIdeas accurately and effectively shares the designer's ideas with the executor, so that the two reach a consensus at the cognitive level.
Abstract: The simulation and sound-reactive system presented here mimics a snake's natural behavioral response to sound. We use a low-cost sound-localization system that operates at 66 Hz.
Abstract: We propose a general approach to reconstructing scenes with large portions of duplicate structure. Experiments show our method successfully reconstructs highly ambiguous scenes, such as multi-floor indoor scenes.
Abstract: We present an MR system for experiencing human body scaling in the virtual world, with tactile stimuli enhanced by an interface that scales according to the player's size.
Abstract: In virtual reality, a motor reaction of the fingers, towards or away from the position of the self-avatar, occurs during embodiment and disappears during disembodiment.
Abstract: We present a mid-air touch Mixed Reality application with hand tracking and haptic feedback to reduce the spread of COVID-19 and other communicable diseases on public kiosks.
Abstract: Always performed together, music and dance share similarities and correspondences. This project explores an innovative strategy for deeper engagement between music and dance. A 1-minute clip from the ballet Giselle Act I (Scene 1.7 Peasant Pas de Deux, Male 1st Variation) is analyzed, and new music clips are generated and developed. Dance movement is motion-captured and transformed into signals, and Benesh Dance Notation is mapped onto music notation written in a similar five-line-stave structure. A New Symphony is created by overlaying the original music and the generated music clips.
Abstract: Variable Rate Ray Tracing is a method that reduces performance cost (over a 30% frame-rate increase) with minimal quality loss, facilitating hardware-accelerated ray tracing for Virtual Reality.
Abstract: We present an innovative, component-based, wireless embedded system on a glove that captures hand motion and provides tactile feedback, operating without the need for a finger-tracking device such as LeapMotion.