DiffusionNOCS: Managing Symmetry and Uncertainty in Sim2Real Multi-Modal Category-level Pose Estimation

1 Woven by Toyota, 2 Toyota Research Institute, * indicates equal contribution

Poses generated by DiffusionNOCS, used for object placement in a real robotic setting.

Abstract


This work addresses the challenging problem of category-level pose estimation. Current state-of-the-art methods for this task struggle with symmetric objects and with generalizing to new environments when trained solely on synthetic data. We address these challenges by proposing a probabilistic model that relies on diffusion to estimate dense canonical maps, which are crucial for recovering partial object shapes and for establishing the correspondences essential for pose estimation. Furthermore, we introduce critical components that enhance performance by leveraging the strengths of diffusion models with multi-modal input representations. We demonstrate the effectiveness of our method by testing it on a range of real datasets. Despite being trained solely on our generated synthetic data, our approach achieves state-of-the-art performance and unprecedented generalization capabilities, outperforming baselines, even those trained specifically on the target domain.
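The dense canonical maps referred to above are NOCS maps: per-pixel coordinates of the visible object surface expressed in a normalized canonical frame. Once such a map is predicted, the object's pose and scale can be recovered by aligning the predicted canonical coordinates with the back-projected depth points, e.g. with the Umeyama algorithm as in the original NOCS pipeline. The sketch below illustrates that alignment step only; the function name and array shapes are illustrative assumptions, not code from the paper.

import numpy as np

def umeyama_pose(nocs_pts, cam_pts):
    """Align canonical NOCS points (N, 3) to camera-frame points (N, 3).

    Returns (scale, R, t) such that cam_pts ~= scale * (nocs_pts @ R.T) + t.
    """
    mu_src, mu_dst = nocs_pts.mean(0), cam_pts.mean(0)
    src, dst = nocs_pts - mu_src, cam_pts - mu_dst

    # Cross-covariance between the centered point sets and its SVD.
    cov = dst.T @ src / len(nocs_pts)
    U, D, Vt = np.linalg.svd(cov)

    # Keep a proper rotation (det(R) = +1) even for reflective fits.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src ** 2).sum() / len(nocs_pts)   # mean squared deviation of the NOCS points
    scale = (D * np.diag(S)).sum() / var_src
    t = mu_dst - scale * R @ mu_src
    return scale, R, t

In practice, the correspondences come from pixels where both the predicted NOCS value and the depth measurement are valid, and this solver is typically wrapped in a RANSAC loop to reject outliers.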

Method



Handling Symmetry


Thanks to its probabilistic nature, DiffusionNOCS can handle symmetric objects without the special data annotations and heuristics typical of many SOTA methods.
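For a symmetric object (e.g., a bowl or a bottle), several different poses explain the same observation, and a probabilistic NOCS predictor can simply produce several plausible canonical maps for it. A minimal sketch of one way to use this at inference time is shown below: draw several samples from the diffusion model and keep the hypothesis that best explains the observed geometry. The sampler call (sample_nocs_map) and the selection rule are hypothetical and not necessarily the paper's exact procedure; umeyama_pose refers to the alignment sketch above.

import numpy as np

def backproject(depth, mask, K):
    # Back-project masked depth pixels into camera-frame 3D points.
    v, u = np.nonzero(mask)
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)            # (N, 3)

def estimate_pose_multi_hypothesis(rgb, depth, mask, K, num_samples=5):
    cam_pts = backproject(depth, mask, K)
    best = None
    for _ in range(num_samples):
        # One draw from the diffusion model: a dense NOCS map for the masked object
        # (sample_nocs_map is an assumed helper, not a real API).
        nocs_map = sample_nocs_map(rgb, mask)     # (H, W, 3)
        nocs_pts = nocs_map[mask]                 # (N, 3) canonical coordinates
        scale, R, t = umeyama_pose(nocs_pts, cam_pts)
        # Mean alignment residual: how well this hypothesis explains the depth.
        residual = np.linalg.norm(cam_pts - (scale * nocs_pts @ R.T + t), axis=1).mean()
        if best is None or residual < best[0]:
            best = (residual, scale, R, t)
    return best[1:]                               # (scale, R, t) of the best hypothesis

For a symmetric object the different samples typically correspond to rotations about the symmetry axis; any of them is a valid pose, so the residual simply selects a consistent one without symmetry-specific annotations or losses.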


Selectable Inputs


Since our method supports selectable inputs, a single network can generate reconstructions from various input combinations without re-training.
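One simple way to realize selectable inputs is to condition the network on a fixed channel layout and zero out whichever modalities are unavailable, with random input dropout during training so the model learns to cope with any subset. The sketch below illustrates this idea under assumptions about the modality names and channel widths; it is not the paper's exact architecture.

import torch

# Hypothetical modality layout; channel widths are assumptions for this sketch.
MODALITIES = {"rgb": 3, "normals": 3, "features": 32}

def build_condition(inputs, h, w, train=False, p_drop=0.3):
    """Stack available modalities into a fixed-width conditioning tensor.

    Missing (or randomly dropped, during training) modalities are zero-filled,
    so a single network always sees the same channel layout.
    """
    chunks = []
    for name, channels in MODALITIES.items():
        x = inputs.get(name)
        dropped = train and torch.rand(1).item() < p_drop
        if x is None or dropped:
            x = torch.zeros(channels, h, w)
        chunks.append(x)
    return torch.cat(chunks, dim=0)   # (3 + 3 + 32, H, W) conditioning tensor

# At inference the same network accepts any subset, e.g. RGB only:
# cond = build_condition({"rgb": rgb_tensor}, h=160, w=160)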


NOCS REAL275 Benchmark


DiffusionNOCS achieves the best results among SOTA baselines trained on synthetic data on the de facto standard benchmark for category-level pose estimation, NOCS REAL275 (Wang et al., 2019).


Generalization Benchmark


To demonstrate how existing state-of-the-art (SOTA) methods perform in various challenging real-world environments, we introduce a zero-shot Generalization Benchmark consisting of three datasets commonly used for instance-level pose estimation: YCB-V (Xiang et al., 2018), HOPE (Tyree et al., 2022), and TYO-L (Hodan et al., 2018). Our method achieves the best overall performance, even when compared to methods trained on real data.


Our Related Projects


While quaternions are a common choice of rotation representation, a single quaternion cannot capture the ambiguity of an observation. The Bingham distribution is one promising way to handle such ambiguity, but evaluating its negative log-likelihood (NLL) loss requires complicated calculations. In this paper, we introduce a fast-to-compute and easy-to-implement NLL loss function for the Bingham distribution. We also build an inference network and show that our loss function can capture the symmetries of target objects from their point clouds.
In recent years, synthetic data has been widely used in the training of 6D pose estimation networks, in part because it automatically provides perfect annotation at low cost. However, there are still non-trivial domain gaps, such as differences in textures/materials, between synthetic and real data. To solve this problem, we introduce a simulation to reality (sim2real) instance-level style transfer for 6D pose estimation network training. Our approach transfers the style of target objects individually, from synthetic to real, without human intervention.
In this work, we study the complex task of holistic object-centric 3D understanding from a single RGB-D observation. As it is an ill-posed problem, existing methods suffer from low performance for both 3D shape and 6D pose estimation in complex multi-object scenarios with occlusions. We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, 6D object pose and size estimation. Our method detects and reconstructs novel objects without having access to their ground truth 3D meshes.

Citation


@article{diffusionnocs,
    title={DiffusionNOCS: Managing Symmetry and Uncertainty in Sim2Real Multi-Modal Category-level Pose Estimation},
    author={Takuya Ikeda and Sergey Zakharov and Tianyi Ko and Muhammad Zubair Irshad and Robert Lee and Katherine Liu and Rares Ambrus and Koichi Nishiwaki},
    journal={arXiv},
    year={2024}
}