Publications
2022
- MonoScene: Monocular 3D Semantic Scene Completion. Cao, Anh-Quan, and de Charette, Raoul. In CVPR 2022
MonoScene proposes a 3D Semantic Scene Completion (SSC) framework, where the dense geometry and semantics of a scene are inferred from a single monocular RGB image. Different from the SSC literature, which relies on 2.5D or 3D input, we solve the complex problem of 2D to 3D scene reconstruction while jointly inferring its semantics. Our framework relies on successive 2D and 3D UNets bridged by a novel 2D-3D features projection inspired by optics, and introduces a 3D context relation prior to enforce spatio-semantic consistency. Along with architectural contributions, we introduce novel global scene and local frustum losses. Experiments show we outperform the literature on all metrics and datasets while hallucinating plausible scenery even beyond the camera field of view. Our code and trained models are available at https://cv-rits.github.io/MonoScene/.
@inproceedings{cao2022monoscene,
  title     = {MonoScene: Monocular 3D Semantic Scene Completion},
  author    = {Cao, Anh-Quan and de Charette, Raoul},
  booktitle = {CVPR},
  year      = {2022},
  live_demo = {https://huggingface.co/spaces/CVPR/MonoScene},
}
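The 2D-3D features projection mentioned in the abstract can be illustrated with a minimal sketch: 3D voxel centers are projected onto the image plane with the camera intrinsics, and the 2D UNet features are sampled back into the 3D volume along each voxel's line of sight. The function name, tensor shapes, and pinhole-projection details below are illustrative assumptions, not the authors' released code.

```python
# Minimal, illustrative sketch of a 2D-to-3D feature lifting step (assumed
# names and shapes; not the official MonoScene implementation).
import torch
import torch.nn.functional as F

def project_2d_to_3d(feat_2d, voxel_centers, K, img_size):
    """feat_2d: (C, H, W) 2D UNet feature map.
    voxel_centers: (N, 3) voxel centers in camera coordinates (z > 0 assumed).
    K: (3, 3) camera intrinsics. img_size: (H_img, W_img) in pixels.
    Returns: (N, C) features lifted into the 3D volume."""
    # Pinhole projection of voxel centers to pixel coordinates.
    uvw = voxel_centers @ K.T                 # (N, 3)
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective divide -> (N, 2)

    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    H_img, W_img = img_size
    grid = torch.stack(
        [uv[:, 0] / (W_img - 1), uv[:, 1] / (H_img - 1)], dim=-1
    ) * 2 - 1                                 # (N, 2)

    # Sample the 2D features at the projected locations; voxels projecting
    # outside the image receive zeros.
    sampled = F.grid_sample(
        feat_2d[None],                        # (1, C, H, W)
        grid[None, :, None, :],               # (1, N, 1, 2)
        align_corners=True, padding_mode="zeros",
    )                                         # (1, C, N, 1)
    return sampled[0, :, :, 0].T              # (N, C)
```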
2021
- PCAM: Product of Cross-Attention Matrices for Rigid Registration of Point Clouds. Cao, Anh-Quan, Puy, Gilles, Boulch, Alexandre, and Marlet, Renaud. In ICCV 2021
Rigid registration of point clouds with partial overlaps is a longstanding problem usually solved in two steps: (a) finding correspondences between the point clouds; (b) filtering these correspondences to keep only the most reliable ones to estimate the transformation. Recently, several deep nets have been proposed to solve these steps jointly. We build upon these works and propose PCAM: a neural network whose key element is a pointwise product of cross-attention matrices that permits mixing both low-level geometric and high-level contextual information to find point correspondences. A second key element is the exchange of information between the point clouds at each layer, allowing the network to exploit context information from both point clouds to find the best matching point within the overlapping regions. The experiments show that PCAM achieves state-of-the-art results among methods which, like us, solve steps (a) and (b) jointly via deep nets.
@inproceedings{cao21pcam,
  title     = {{PCAM}: {P}roduct of {C}ross-{A}ttention {M}atrices for {R}igid {R}egistration of {P}oint {C}louds},
  author    = {Cao, Anh-Quan and Puy, Gilles and Boulch, Alexandre and Marlet, Renaud},
  booktitle = {ICCV},
  year      = {2021},
}
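The pointwise product of cross-attention matrices described in the abstract can be sketched as follows: one cross-attention matrix is computed from low-level (geometric) features and one from high-level (contextual) features, and the two are multiplied elementwise so that a pair of points must score high under both views to be matched. The function and variable names below are assumptions for illustration, not the authors' implementation.

```python
# Minimal, illustrative sketch of a product of cross-attention matrices
# (assumed names and shapes; not the official PCAM code).
import torch

def product_of_cross_attention(feat_low_p, feat_low_q, feat_high_p, feat_high_q):
    """feat_*_p: (N, D) features of point cloud P; feat_*_q: (M, D) of Q.
    Returns an (N, M) soft correspondence matrix."""
    # Cross-attention from low-level (geometric) features.
    att_low = torch.softmax(
        feat_low_p @ feat_low_q.T / feat_low_p.shape[1] ** 0.5, dim=-1
    )
    # Cross-attention from high-level (contextual) features.
    att_high = torch.softmax(
        feat_high_p @ feat_high_q.T / feat_high_p.shape[1] ** 0.5, dim=-1
    )
    # Pointwise product: a correspondence must be supported by both
    # geometric and contextual similarity.
    return att_low * att_high
```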