We present a method named iComMa to address the 6D camera pose estimation problem in computer vision. Conventional pose estimation methods typically rely on the target's CAD model or require network training tailored to particular object classes. Some existing methods have achieved promising results in mesh-free object and scene pose estimation by inverting Neural Radiance Fields (NeRF), but they still struggle with adverse initializations such as large rotations and translations. To address this issue, we propose an efficient method for accurate camera pose estimation by inverting 3D Gaussian Splatting (3DGS). Specifically, a gradient-based differentiable framework optimizes the camera pose by minimizing the residual between the query image and the rendered image, requiring no training. An end-to-end matching module enhances the model's robustness against adverse initializations, while minimizing a pixel-level comparing loss aids precise pose estimation. Experimental results on synthetic and complex real-world data demonstrate the effectiveness of the proposed approach under challenging conditions and the accuracy of its camera pose estimates.
Given an initial camera pose, iComMa iteratively optimizes toward the ground-truth pose associated with the query image. At the t-th optimization step, we first render the image corresponding to the camera pose π_t using 3D Gaussian Splatting. We then compute the residuals between the rendered image and the query image, which comprise the matching loss L_m obtained from the end-to-end matching module and the per-pixel comparing loss L_Com. The entire framework is differentiable, enabling the camera pose to be optimized using the gradients derived from minimizing these residuals.
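The render-compare-update loop above can be sketched in a few lines. The following is a minimal toy illustration, not the paper's implementation: a 2D Gaussian blob parameterized by a 2D "pose" stands in for 3DGS rasterization of a full 6D pose, a squared-error pixel loss stands in for the comparing loss L_Com (the matching loss L_m is omitted for brevity), and finite differences stand in for the autograd gradients of the differentiable rendering pipeline. All function names here are hypothetical.

```python
import numpy as np

def render(pose, size=32):
    # Toy differentiable "renderer": a Gaussian blob whose centre is set by
    # the 2D pose. In iComMa this would be 3DGS rasterization at pose pi_t.
    ys, xs = np.mgrid[0:size, 0:size]
    cx, cy = pose
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / 20.0)

def comparing_loss(rendered, query):
    # Per-pixel residual between rendered and query images (a squared-error
    # stand-in for the paper's pixel-level comparing loss).
    return ((rendered - query) ** 2).mean()

def estimate_pose(query, pose0, lr=200.0, steps=300, eps=1e-3):
    # Gradient descent on the pose; central finite differences stand in for
    # the gradients that the differentiable 3DGS framework would provide.
    pose = np.array(pose0, dtype=float)
    for _ in range(steps):
        grad = np.zeros_like(pose)
        for k in range(pose.size):
            d = np.zeros_like(pose)
            d[k] = eps
            hi = comparing_loss(render(pose + d), query)
            lo = comparing_loss(render(pose - d), query)
            grad[k] = (hi - lo) / (2.0 * eps)
        pose -= lr * grad
    return pose

# Synthesize a "query image" from a known pose, then recover that pose
# starting from a perturbed initialization.
gt_pose = np.array([20.0, 12.0])
query = render(gt_pose)
est = estimate_pose(query, pose0=[14.0, 18.0])
```

The key property the sketch preserves is that pose estimation reduces to gradient descent on an image-space residual; the real method additionally uses the matching loss to widen the basin of convergence under large initial rotations and translations.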
Note: iNeRF† is a variant of iNeRF, obtained by inverting Mip-NeRF 360.
@article{sun2023icomma,
title={iComMa: Inverting 3D Gaussian Splatting for Camera Pose Estimation via Comparing and Matching},
author={Sun, Yuan and Wang, Xuan and Zhang, Yunfan and Zhang, Jie and Jiang, Caigui and Guo, Yu and Wang, Fei},
journal={arXiv preprint arXiv:2312.09031},
year={2023}
}