2025-11-25 California Institute of Technology (Caltech)

SAM 3D allows users to pull objects into 3D—even those that are partially obscured, such as this globe—from a single image. The two 3D images of the globe show the untextured mesh version (left) and the textured mesh version (right), as generated by SAM 3D. (Credit: SAM 3D Team/Meta)
<Related information>
- https://www.caltech.edu/about/news/georgia-gkioxari-co-leads-major-3d-perception-model-built-on-ai
- https://arxiv.org/abs/2511.16624
SAM 3D: 3Dfy Anything in Images
SAM 3D Team, Xingyu Chen, Fu-Jen Chu, Pierre Gleize, Kevin J Liang, Alexander Sax, Hao Tang, Weiyao Wang, Michelle Guo, Thibaut Hardin, Xiang Li, Aohan Lin, Jiawei Liu, Ziqi Ma, Anushka Sagar, Bowen Song, Xiaodong Wang, Jianing Yang, Bowen Zhang, Piotr Dollár, Georgia Gkioxari, Matt Feiszli, Jitendra Malik
arXiv, submitted on 20 Nov 2025
DOI: https://doi.org/10.48550/arXiv.2511.16624
Abstract
We present SAM 3D, a generative model for visually grounded 3D object reconstruction, predicting geometry, texture, and layout from a single image. SAM 3D excels in natural images, where occlusion and scene clutter are common and visual recognition cues from context play a larger role. We achieve this with a human- and model-in-the-loop pipeline for annotating object shape, texture, and pose, providing visually grounded 3D reconstruction data at unprecedented scale. We learn from this data in a modern, multi-stage training framework that combines synthetic pretraining with real-world alignment, breaking the 3D “data barrier”. We obtain significant gains over recent work, with at least a 5:1 win rate in human preference tests on real-world objects and scenes. We will release our code and model weights, an online demo, and a new challenging benchmark for in-the-wild 3D object reconstruction.
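The headline metric, "at least a 5:1 win rate in human preference tests," reads most naturally as a wins-to-losses ratio over pairwise comparisons: raters see a SAM 3D reconstruction next to a baseline's output and pick the one they prefer. Below is a minimal sketch of how such a ratio is computed and read; the judgment counts are invented for illustration and are not figures from the paper.

```python
from collections import Counter

# Hypothetical pairwise human-preference judgments: for each test object,
# a rater prefers either "ours" (SAM 3D), "baseline" (a competing method),
# or records a "tie". These counts are made up purely for illustration.
judgments = ["ours"] * 83 + ["baseline"] * 14 + ["tie"] * 3

counts = Counter(judgments)
wins, losses, ties = counts["ours"], counts["baseline"], counts["tie"]

# Win rate expressed as a wins:losses ratio, the form used in the abstract,
# plus the share of decisive (non-tie) comparisons won.
ratio = wins / losses
share = wins / (wins + losses)
print(f"wins={wins}  losses={losses}  ties={ties}")
print(f"win rate {ratio:.1f}:1  ({share:.0%} of decisive comparisons)")
```

Under this reading, a 5:1 win rate corresponds to SAM 3D being preferred in roughly 83% of decisive comparisons against the competing method.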


