Program


Keynote Speakers

Andrew Davison

Imperial College London
Tuesday, 9am

Aaron Courville

University of Montreal
Wednesday, 9am

Kyros Kutulakos

University of Toronto
Thursday, 9am



Conference Program

3DV 2019 will showcase high-quality, single-track oral and spotlight presentations. All papers will also be presented as posters. The main conference runs from 17 to 19 September, in conjunction with the industrial exhibition, preceded by a tutorial on September 16.


Monday September 16, 2019

9:00-12:00 — Tutorial, Room 306AB

  1. Active 3D Imaging Systems: An In-Depth Look

    Marc-Antoine Drouin; Jonathan Boisvert; Guy Godin; Lama Séoud; Frank Billy Djupkep Dizeu

12:00-... — Lunch (on your own)

17:30-20:00 — Welcome Reception, Observatoire de la Capitale

Tuesday September 17, 2019

8:00-9:00 — Morning Coffee and Tea, Hall 310

8:45-9:00 — Welcome Remarks, Room 306AB

9:00-10:00 — Keynote 1: Andrew Davison (Imperial College London), Room 306AB

From SLAM to Spatial AI

To enable the next generation of smart robots and devices that can truly interact with their environments, Simultaneous Localisation and Mapping (SLAM) will progressively develop into a general real-time geometric and semantic 'Spatial AI' perception capability. I will give many examples from our work on gradually increasing visual SLAM capability over the years. However, much research must still be done to achieve true Spatial AI performance. A key issue is how estimation and machine-learning components can be used and trained together as we continue to search for the best long-term scene representations to enable intelligent interaction. Further, to reach the performance and efficiency required by real products, computer vision algorithms must be developed together with the sensors and processors that form full systems. I will cover research on vision algorithms for non-standard visual sensors, as well as concepts for the longer-term future of coupled algorithms and computing architectures.

10:00-17:30 — Industry Expo, Espace Urbain

10:00-10:45 — Spotlight Session 1.1, Room 306AB

  1. Pano Popups: Indoor 3D Reconstruction with a Plane-Aware Network

    Marc C Eder; Pierre Moulon; Li Guan

  2. IoU Loss for 2D/3D Object Detection

    Dingfu Zhou; Jin Fang; Xibin Song; Chenye Guan; Junbo Yin; Yuchao Dai; Yang Ruigang

  3. Pixel-Accurate Depth Evaluation in Realistic Driving Scenarios

    Tobias Gruber; Felix Heide; Mario Bijelic; Werner Ritter; Klaus Dietmayer

  4. SyDPose: Object Detection and Pose Estimation in Cluttered Real-World Depth Images Trained using only Synthetic Data

    Stefan Thalhammer; Timothy Patten; Markus Vincze

  5. NoVA: Learning to See in Novel Viewpoints and Domains

    Benjamin Coors; Alexandru Condurache; Andreas Geiger

  6. Learning Depth from Endoscopic Images

    Ju Hong Yoon; Min-Gyu Park; Youngbae Hwang; Kuk-Jin Yoon

  7. Pairwise Attention Encoding for Point Cloud Feature Learning

    Yunxiao Shi; Haoyu Fang; Jing Zhu; Yi Fang

  8. PC-Net: Unsupervised Point Correspondence Learning with Neural Networks

    Xiang Li; Lingjing Wang; Yi Fang

  9. A Unified Point-Based Framework for 3D Segmentation

    HungYueh Chiang; Yen-Liang Lin; Yueh-Cheng Liu; Winston H. Hsu

  10. Optimising for Scale in Globally Multiply-Linked Gravitational Point Set Registration Leads to Singularity

    Vladislav Golyanik; Christian Theobalt

  11. 3D Neighborhood Convolution: Learning Depth-Aware Features for RGB-D and RGB Semantic Segmentation

    Yunlu Chen; Thomas Mensink; Stratis Gavves

10:45-11:15 — Break

11:15-12:00 — Oral Session 1.1: Stereo and depth prediction, Room 306AB

  1. MVS²: Deep Unsupervised Multi-view Stereo with Multi-View Symmetry

    Yuchao Dai; Zhidong Zhu; ZhiBo Rao; Bo Li

  2. Fast Stereo Disparity Maps Refinement By Fusion of Data-Based And Model-Based Estimations

    Maxime Ferrera; Alexandre Boulch; Julien Moras

  3. Structured Coupled Generative Adversarial Networks for Unsupervised Monocular Depth Estimation

    Mihai O Puscas; Dan Xu; Andrea Pilzer; Nicu Sebe

12:00-13:30 — Lunch, Room 309AB

13:30-14:45 — Oral Session 1.2: Shape analysis and recognition, Room 306AB

  1. DispVoxNets: Non-Rigid Point Set Alignment with Supervised Learning Proxies

    Soshi Shimada; Vladislav Golyanik; Edgar Tretschk; Didier Stricker; Christian Theobalt

  2. Correspondence-Free Region Localization for Partial Shape Similarity via Hamiltonian Spectrum Alignment

    Arianna Rampini; Irene Tallini; Alex Bronstein; Maks Ovsjanikov; Emanuele Rodola

  3. Effective Rotation-invariant Point CNN with Spherical Harmonics kernels

    Adrien Poulenard; Yann Ponty; Marie-Julie Rakotosaona; Maks Ovsjanikov

  4. Structured Domain Adaptation for 3D Keypoint Estimation

    Levi Osterno Vasconcelos; Massimiliano Mancini; Davide Boscaini; Barbara Caputo; Elisa Ricci

  5. Learning Point Embeddings from Shape Repositories for Few-Shot Segmentation

    Gopal Sharma; Evangelos Kalogerakis; Subhransu Maji

14:45-15:15 — Spotlight Session 1.2, Room 306AB

  1. To Complete or to Estimate, that is the Question: A Multi-task Approach to Depth Completion and Monocular Depth Estimation

    Amir Atapour-Abarghouei; Toby Breckon

  2. Frequency Shift Method: a Robust Fringe Projection Technique for 3D Shape Acquisition in the Presence of Strong Interreflections

    Frank Billy Djupkep Dizeu; Jonathan Boisvert; Marc-Antoine Drouin; Maxime Rivard; Guy Godin; Guy Lamouche

  3. Rotation Invariant Convolutions for Deep Learning with Point Clouds

    Zhiyuan Zhang; Binh-Son Hua; David W Rosen; Sai-Kit Yeung

  4. Accurate and Real-time Object Detection based on Bird's Eye View on 3D Point Clouds

    Yi Zhang; Zhiyu Xiang; Chengyu Qiao; Shuya Chen

  5. Photometric Segmentation: Simultaneous Photometric Stereo and Masking

    Bjoern Haefner; Yvain Queau; Daniel Cremers

  6. High-Resolution Augmentation for Automatic Template-Based Matching of Human Models

    Riccardo Marin; Simone Melzi; Emanuele Rodolà; Umberto Castellani

  7. V-NAS: Neural Architecture Search for Volumetric Medical Image Segmentation

    Zhuotun Zhu; Chenxi Liu; Dong Yang; Alan Yuille; Daguang Xu

15:15-17:30 — Poster Session 1 and Break, Espace Urbain

All oral and spotlight presenters also present a poster.

Wednesday September 18, 2019

8:00-9:00 — Morning Coffee and Tea, Hall 310

9:00-10:00 — Keynote 2: Aaron Courville (University of Montreal), Room 306AB

10:00-17:30 — Industry Expo, Espace Urbain

10:00-10:45 — Spotlight Session 2.1, Room 306AB

  1. Revisiting Depth Image Fusion with Variational Message Passing

    Diego Thomas; Akihiro Sugimoto; Ekaterina Sirazitdinova; Rin-ichiro Taniguchi

  2. Distributed Surface Reconstruction from Point Cloud for City-Scale Scenes

    Jiali Han; Shuhan Shen

  3. Web Stereo Video Supervision for Depth Prediction from Dynamic Scenes

    Chaoyang Wang; Oliver Wang; Federico Perazzi; Simon Lucey

  4. Learning to Refine 3D Human Pose Sequences

    Jieru Mei; Xingyu Chen; Chunyu Wang; Alan Yuille; Xuguang Lan; Wenjun Zeng

  5. Reconstruction of As-is Semantic 3D Models of Unorganised Storehouses

    Antonio Adán; David de la Rubia

  6. Dynamic Surface Animation using Generative Networks

    Joao Regateiro; Adrian Hilton; Marco Volino

  7. Decoupled Hybrid 360° Panoramic Stereo Video

    Kevin K Ra; James Clark

  8. Unsupervised Feature Learning for Point Cloud Understanding by Contrasting and Clustering Using Graph Convolutional Neural Networks

    Ling Zhang; Zhigang Zhu

  9. Multi-Person 3D Pose Estimation

    Rishabh Dabral; Abhishek Sharma; Nitesh B Gundavarapu; Ganesh Ramakrishnan; Rahul Mitra; Arjun Jain

  10. Optimal, Non-Rigid Alignment for Feature-Preserving Mesh Denoising

    Florian Gawrilowicz; Andreas Baerentzen

  11. Enhancing Self-supervised Monocular Depth Estimation with Traditional Visual Odometry

    Lorenzo Andraghetti; Panteleimon Myriokefalitakis; Pier Luigi Dovesi; Belen Luque; Matteo Poggi; Alessandro Pieropan; Stefano Mattoccia

10:45-11:15 — Break

11:15-12:00 — Oral Session 2.1: Emerging sensing technologies, Room 306AB

  1. Learning to Think Outside the Box: Wide-Baseline Light Field Depth Estimation with EPI-Shift

    Titus Leistner; Hendrik Schilling; Radek Mackowiak; Stefan Gumhold; Carsten Rother

  2. 360 Surface Regression with a Hyper-Sphere Loss

    Antonis Karakottas; Nikolaos Zioulis; Stamatios Samaras; Dimitrios Ataloglou; Vasileios Gkitsas; Dimitrios Zarpalas; Petros Daras

  3. Asynchronous Multi-Hypothesis Tracking of Features with Event Cameras

    Ignacio Alzugaray; Margarita Chli

12:00-13:30 — Lunch, Room 309AB

13:30-14:45 — Oral Session 2.2: Human modelling and animation, Room 306AB

  1. Towards Accurate 3D Human Body Reconstruction from Silhouettes

    Brandon M Smith; Visesh Chari; Amit Agrawal; James Rehg; Ram Sever

  2. Progression Modelling for Online and Early Gesture Detection

    Vikram Gupta; Sai Kumar Dwivedi; Rishabh Dabral; Arjun Jain

  3. Predicting Animation Skeletons for 3D Articulated Models via Volumetric Nets

    Zhan Xu; Yang Zhou; Evangelos Kalogerakis; Karan Singh

  4. Motion Capture from Pan-Tilt Cameras with Unknown Orientation

    Roman Bachmann; Jörg Spörri; Pascal Fua; Helge Rhodin

  5. Convex Optimisation for Inverse Kinematics

    Tarun Yenamandra; Florian Bernard; Jiayi Wang; Franziska Mueller; Christian Theobalt

14:45-15:20 — Spotlight Session 2.2, Room 306AB

  1. Creating Realistic Ground Truth Data for the Evaluation of Calibration Methods for Plenoptic and Conventional Cameras

    Tim Michels; Arne Petersen; Reinhard Koch

  2. Multi-Spectral Visual Odometry without Explicit Stereo Matching

    Weichen Dai; Yu Zhang; Donglei Sun; Naira Hovakimyan; Pi Li

  3. Simultaneous Shape Registration and Active Stereo Shape Reconstruction using Modified Bundle Adjustment

    Ryo Furukawa; Genki Nagamatsu; Hiroshi Kawasaki

  4. Semantic Segmentation of Sparsely Annotated 3D Point Clouds by Pseudo-labelling

    Katie Xu; Yasuhiro Yao; Kazuhiko Murasaki; Shingo Ando; Atsushi Sagata

  5. SIR-Net: Scene-Independent End-to-End Trainable Visual Relocalizer

    Ryo Nakashima; Akihito Seki

  6. Light Field Compression using Eigen Textures

    Marco Volino; Armin Mustafa; Jean-Yves Guillemaut; Adrian Hilton

  7. Res3ATN - Deep 3D Residual Attention Network for Hand Gesture Recognition in Videos

    Naina Dhingra; Andreas Kunz

  8. Physics-Aware 3D Shape Synthesis

    Jianren Wang; Yihui He

15:20-17:30 — Poster Session 2 and Break, Espace Urbain

All oral and spotlight presenters also present a poster.

19:00-... — Conference Banquet, Manège Militaire

Thursday September 19, 2019

8:00-9:00 — Morning Coffee and Tea, Hall 310

9:00-10:00 — Keynote 3: Kyros Kutulakos (University of Toronto), Room 306AB

Rethinking Structured Light

Even though structured-light triangulation is a decades-old problem, much remains to be discovered about it, with potential ramifications for computational imaging more broadly.

I will focus on two specific aspects of the problem that are influenced by recent developments in our field. First, programmable coded-exposure sensors vastly expand the degrees of freedom of an imaging system, essentially redefining what it means to capture images under structured light. I will discuss our efforts to understand the theory and expanded capabilities of such systems, and to build custom CMOS sensors that realize them. Second, I will outline our recent work on turning structured-light triangulation into an optimal encoding-decoding problem derived from first principles. This opens the way for adaptive systems that can learn on their own how to optimally control their light sources and sensors, and how to convert the images they capture into accurate 3D geometry.

10:00-17:30 — Industry Expo, Espace Urbain

10:00-10:45 — Spotlight Session 3.1, Room 306AB

  1. On the Redundancy Detection in Keyframe-based SLAM

    Patrik Schmuck; Margarita Chli

  2. SIPs: Succinct Interest Points from Unsupervised Inlierness Probability Learning

    Titus Cieslewski; Kosta Derpanis; Davide Scaramuzza

  3. On Object Symmetries and 6D Pose Estimation from Images

    Giorgia Pitteri; Michael Ramamonjisoa; Slobodan Ilic; Vincent Lepetit

  4. AlignNet-3D: Fast Point Cloud Registration of Partially Observed Objects

    Johannes Gross; Aljosa Osep; Bastian Leibe

  5. UnDispNet: Unsupervised Learning for Multi-Stage Monocular Depth Prediction

    Vinay Kaushik; Brejesh Lall

  6. 360-degree Textures of People in Clothing from a Single Image

    Verica Lazova; Eldar Insafutdinov; Gerard Pons-Moll

  7. Adaptive-resolution Octree-based Volumetric SLAM

    Emanuele Vespa; Nils Funk; Paul Kelly; Stefan Leutenegger

  8. Multimodal 3D Human Pose Estimation from a Single Image

    Scott Spurlock; Richard Souvenir

  9. Effective Convolutional Neural Network Layers in Flow Estimation for Omni-directional Images

    Shuang Xie; Po Lai; Robert Laganiere; Jochen Lang

  10. Learning to Translate Between Real World and Simulated 3D Sensors While Transferring Task Models

    Michael Essich; Dennis Ludl; Thomas Gulde; Cristobal Curio

  11. Spherical View Synthesis

    Antonis Karakottas; Nikolaos Zioulis; Stamatis Samaras; Dimitrios Ataloglou; Vasileios Gkitsas; Dimitrios Zarpalas; Petros Daras

10:45-11:15 — Break

11:15-12:00 — Oral Session 3.1: Applications of 3D computer vision, Room 306AB

  1. Sparse-to-Dense Hypercolumn Matching for Long-Term Visual Localization

    Hugo Germain; Guillaume M Bourmaud; Vincent Lepetit

  2. Unified Underwater Structure-from-Motion

    Kazuto Ichimaru; Yuichi Taguchi; Hiroshi Kawasaki

  3. Learned Multi-View Texture Super-resolution

    Audrey Richard; Ian Cherabier; Martin R. Oswald; Vagia Tsiminaki; Marc Pollefeys; Konrad Schindler

12:00-13:30 — Lunch, Room 309AB

13:30-14:45 — Oral Session 3.2: Reconstruction and SLAM, Room 306AB

  1. Online Stability Improvement of Groebner Basis Solvers using Deep Learning

    Wanting Xu; Lan Hu; Manolis C Tsakiris; Laurent Kneip

  2. Surface Reconstruction from 3D Line Segments

    Pierre-Alain Langlois; Alexandre Boulch; Renaud Marlet

  3. Let's Take This Online: Adapting Scene Coordinate Regression Network Predictions for Online RGB-D Camera Relocalisation

    Tommaso Cavallari; Luca Bertinetto; Jishnu Mukhoti; Philip Torr; Stuart Golodetz

  4. Mobile Photometric Stereo Combined with SLAM for Dense 3D Reconstruction

    Maxence Remy; Hideaki Uchiyama; Hiroshi Kawasaki; Diego Thomas; Vincent Nozick; Hideo Saito

  5. Location Field Descriptors: Single Image 3D Model Retrieval in the Wild

    Alexander Grabner; Peter M Roth; Vincent Lepetit

14:45-15:15 — Spotlight Session 3.2, Room 306AB

  1. Adaptive Mesh Texture for Multi-View Appearance Modeling

    Matthieu Armando; Jean-Sebastien Franco; Edmond Boyer

  2. Real-time Multi-material Reflectance Reconstruction for Large-scale Scenes under Uncontrolled Illumination from RGB-D Image Sequences

    Lukas Bode; Sebastian Merzbach; Patrick Stotko; Michael Weinmann; Reinhard Klein

  3. Language2Pose: Natural Language Grounded Pose Forecasting

    Chaitanya Ahuja; Louis-Philippe Morency

  4. Synthesizing Diverse Lung Nodules Wherever Massively: 3D Multi-Conditional GAN-based CT Image Augmentation for Object Detection

    Changhee Han; Yoshiro Kitamura; Akira Kudo; Akimichi Ichinose; Leonardo Rundo; Yujiro Furukawa; Kazuki Umemoto; Yuanzhong Li; Hideki Nakayama

  5. Multiple Point Light Estimation from Low-Quality 3D Reconstructions

    Mike J Kasper; Christoffer Heckman

  6. Fast Non-Convex Hull Computation

    Francisco Gomez-Fernandez; Julian Bayardo; Gabriel Taubin

  7. Incorporating 3D Information into Visual Question Answering

    Yue Qiu; Yutaka Satoh; Ryota Suzuki; Hirokatsu Kataoka

15:15-15:30 — Concluding Remarks, Room 306AB

15:30-17:30 — Poster Session 3 and Break, Espace Urbain

All oral and spotlight presenters also present a poster.


Tutorials

September 16, 2019, 09:00 - 12:00, Room 306AB

  • Active 3D Imaging Systems: An In-Depth Look

    Marc-Antoine Drouin; Jonathan Boisvert; Guy Godin; Lama Séoud; Frank Billy Djupkep Dizeu

    Computer Vision and Graphics Team, Digital Technologies Research Center, National Research Council Canada, Ottawa, Ontario, Canada.

Tutorial overview:

The tutorial aims to provide an advanced understanding of the principles and properties of active 3D imaging sensors, in particular, but not exclusively, those using triangulation with structured light. It will cover the basic principles of 3D imaging systems and explore their specific properties, performance, and limitations. It will also help researchers interested in the modeling and analysis of 3D data to better understand the physical processes underlying their input data, with examples of the advantages of incorporating this physical knowledge into modeling and analysis tasks.


Presentation Instructions

Oral presentations

  • Oral presenters (except spotlights) can use their own laptop. A conference laptop will also be available (Windows with PowerPoint only).
  • The projector will be set up for the 16:9 format at a 1920x1080 (full HD) resolution.
  • Oral talks are limited to 12 minutes. Since the sessions are packed, this limit will be strictly enforced: you must leave the podium once your time is up.
  • An additional 3 minutes per talk is allocated for questions, switching between speakers, and the session chairs' introduction of the next speaker.
  • Oral presenters are also required to present a poster during the poster session on the same day as the oral presentation.

Spotlight presentations

  • All papers accepted as posters are also presented in a "spotlight" session.
  • Spotlight presenters are required to send a video recording of their slides in MP4 format using a 16:9 format at a resolution of 1920x1080 (full HD), with H.264 video encoding (no audio).
  • The video will be played from the control room, so there is no need to connect your laptop.
  • The audio channel from your submitted video will NOT be used: you are still required to present the slides orally as your video plays.
  • The recording must not exceed 3 minutes and 50 seconds. Longer videos will be truncated. The next speaker's video will start at exactly 4 minutes after the beginning of your own video. Timing will be strictly enforced.
  • The session chairs will NOT be introducing each spotlight video. Thus, start your video with a title slide and introduce yourself.
  • Create a single MP4 file named PAPERNUMBER.mp4 where PAPERNUMBER is your 3DV 2019 paper number.
  • Put the MP4 file somewhere shareable (e.g. Dropbox, OneDrive, an FTP server, etc.) and send the link by e-mail to 3dv19tutorials@3dv.org.
  • The deadline is September 6, 2019, Anywhere on Earth (AoE).
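
A minimal sketch of producing a compliant video with ffmpeg (an assumption; any encoder that meets the spec above is fine). The paper number and input filename are placeholders; the script only prints the command so you can review it before running:

```shell
# Hypothetical example: re-encode a slide recording to the spotlight spec.
# Assumes ffmpeg with libx264 support; "slides.mov" is a placeholder input.
PAPER=123                 # replace with your 3DV 2019 paper number
OUT="${PAPER}.mp4"        # required naming: PAPERNUMBER.mp4
# -vf scale=1920:1080 : full-HD 16:9 output
# -c:v libx264        : H.264 video encoding
# -an                 : strip the audio track (it will not be played)
# -t 00:03:50         : hard-cap the duration at 3 min 50 s
CMD="ffmpeg -i slides.mov -vf scale=1920:1080 -c:v libx264 -an -t 00:03:50 $OUT"
echo "$CMD"               # run this command once your recording is ready
```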

Poster sessions

  • Poster boards have a usable area of 92 cm (height) x 183 cm (width). Please use a horizontal layout, as it tends to occupy the space better.
  • Before your poster session begins, please identify the number of your poster in the program and attach the poster to the corresponding stand. A label indicating your paper ID will be placed on the top of your board space. Check with the volunteers or the registration desk if you cannot find your poster stand.
  • Tacks and technical equipment will be available for the mounting of posters.
  • Please remove your poster after the session ends.
  • A poster printing service is not available at the conference, but you can use Planete Multi-Services (a 15-minute walk from the Convention Center).