Recent advances in 3D reconstruction have achieved remarkable progress in high-quality scene capture from dense multi-view imagery, yet these methods struggle when input views are limited. Various approaches, including regularization techniques, semantic priors, and geometric constraints, have been proposed to address this challenge. More recently, diffusion-based methods have demonstrated substantial improvements by generating novel views from new camera poses to augment training data, surpassing earlier regularization- and prior-based techniques. Despite this progress, we identify three critical limitations in these state-of-the-art approaches: inadequate coverage beyond the peripheries of known views, geometric inconsistencies across generated views, and computationally expensive pipelines. We introduce GaMO (Geometry-aware Multi-view Outpainter), a framework that reformulates sparse-view reconstruction as a multi-view outpainting problem. Instead of generating novel viewpoints, GaMO expands the field of view at existing camera poses, which inherently preserves geometric consistency while providing broader scene coverage. Our approach employs multi-view conditioning and geometry-aware denoising strategies in a zero-shot manner, requiring no additional training. Extensive experiments on Replica and ScanNet++ demonstrate state-of-the-art reconstruction quality across 3, 6, and 9 input views, outperforming prior methods in PSNR and LPIPS while achieving a 25× speedup over state-of-the-art diffusion-based methods, with total processing time under 10 minutes.
Overview of Our Pipeline. Given sparse input views, our method follows a three-stage process. (a) Coarse 3D Initialization: We obtain geometry priors from an initial 3D reconstruction, including an opacity mask and a coarse render that provide essential structural cues. (b) GaMO (Geometry-aware Multi-view Outpainter): Using these geometry priors, GaMO generates outpainted views with enlarged FOV via a multi-view diffusion model. (c) Refined Reconstruction: The outpainted views are used to refine the 3D reconstruction, improving its completeness and consistency.
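Since GaMO outpaints from existing poses rather than sampling new ones, "enlarged FOV" amounts to padding the image canvas while keeping the focal length and pose fixed, so only the intrinsics' principal point and the image size change. The sketch below illustrates this; the helper name `enlarge_fov` and the scale factor `s` are our assumptions, not values taken from the paper.

```python
import numpy as np

def enlarge_fov(K: np.ndarray, hw: tuple, s: float = 1.5):
    """Return intrinsics and image size for a canvas enlarged by factor s.

    Focal length (and hence the camera pose) is unchanged; widening the
    canvas around the same optical center is what enlarges the FOV.
    """
    H, W = hw
    H2, W2 = int(round(s * H)), int(round(s * W))
    K2 = K.copy()
    K2[0, 2] += (W2 - W) / 2.0  # shift principal point to the new canvas center
    K2[1, 2] += (H2 - H) / 2.0
    return K2, (H2, W2)

# Example: a 640x480 view with fx = fy = 500 px.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
K2, (H2, W2) = enlarge_fov(K, (480, 640))
fov_x = 2 * np.degrees(np.arctan(W2 / (2 * K2[0, 0])))  # widened horizontal FOV
```

The original pixels occupy the center of the new canvas, and the border region is what the multi-view diffusion model outpaints in stage (b).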
Overview of GaMO (Geometry-aware Multi-view Outpainter). (a) Multi-view Diffusion Conditioning: Sparse input views are encoded into clean latents and combined with multi-view conditions, including Plücker ray embeddings for the input views (Pr) and for the target view with enlarged FOV (P*t), together with original and augmented Canonical Coordinate Maps (CCMs) and RGB images, providing both geometric and appearance cues for conditioning the diffusion model. (b) Denoising Process: Coarse geometry priors (the opacity mask and coarse render) guide denoising through mask latent blending performed at multiple timesteps (t1, t2, …, tN) with progressive dilation and noise resampling, producing outpainted views with enlarged FOV (c).
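Plücker ray embeddings encode each pixel's ray as a direction plus moment pair (d, o × d), which is how per-view camera conditions like Pr and P*t are typically rasterized into 6-channel maps. A minimal sketch follows; it assumes an OpenCV-style pinhole camera (+z forward), and the function name `plucker_embedding` is ours, not the paper's.

```python
import torch
import torch.nn.functional as F

def plucker_embedding(K: torch.Tensor, c2w: torch.Tensor, H: int, W: int) -> torch.Tensor:
    """Per-pixel Plücker coordinates (d, o x d) for a pinhole camera.

    K:   (3, 3) intrinsics.
    c2w: (4, 4) camera-to-world pose (OpenCV convention, +z forward).
    Returns an (H, W, 6) ray map: unit direction concatenated with moment.
    """
    # Pixel-center grid in image coordinates.
    v, u = torch.meshgrid(
        torch.arange(H, dtype=torch.float32) + 0.5,
        torch.arange(W, dtype=torch.float32) + 0.5,
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1)             # (H, W, 3)
    dirs_cam = (torch.linalg.inv(K) @ pix.unsqueeze(-1)).squeeze(-1)  # back-project
    dirs_world = (c2w[:3, :3] @ dirs_cam.unsqueeze(-1)).squeeze(-1)   # rotate to world
    d = F.normalize(dirs_world, dim=-1)                               # unit directions
    o = c2w[:3, 3].expand_as(d)                                       # camera center
    m = torch.linalg.cross(o, d, dim=-1)                              # moment o x d
    return torch.cat([d, m], dim=-1)                                  # (H, W, 6)
```

For the target embedding P*t, the same function would be called with the enlarged-FOV intrinsics and canvas size from the pipeline sketch above, since the target shares the pose of an input view.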
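The mask latent blending in (b) can be read as a RePaint-style operation: at selected timesteps, the region covered by the opacity mask is replaced with a re-noised latent of the coarse render, and the mask is progressively dilated so the blend boundary relaxes as denoising proceeds. Below is a minimal sketch under those assumptions; `denoise_step`, `add_noise`, and the one-pixel-per-step dilation schedule are placeholders for the paper's actual model, scheduler, and schedule.

```python
import torch
import torch.nn.functional as F

def dilate(mask: torch.Tensor, k: int) -> torch.Tensor:
    """Binary dilation of a (B, 1, h, w) mask in {0, 1} via max-pooling."""
    if k == 0:
        return mask
    return F.max_pool2d(mask, kernel_size=2 * k + 1, stride=1, padding=k)

@torch.no_grad()
def blended_denoise(x_T, coarse_latent, opacity_mask, timesteps,
                    blend_steps, denoise_step, add_noise):
    """
    x_T:           initial noise latent                    (B, C, h, w)
    coarse_latent: encoded coarse render                   (B, C, h, w)
    opacity_mask:  1 where coarse geometry exists          (B, 1, h, w)
    timesteps:     full descending schedule
    blend_steps:   set of ints {t1, ..., tN} at which blending is applied
    denoise_step:  (x_t, t) -> x_{t-1}  (diffusion model + scheduler, assumed)
    add_noise:     (x_0, noise, t) -> x_t  (forward-noising, assumed)
    """
    x = x_T
    dilation = 0
    for t in timesteps:
        x = denoise_step(x, t)
        if int(t) in blend_steps:
            # Assumed schedule: dilate the known region by one pixel per
            # blending step; the paper's exact progression may differ.
            m = dilate(opacity_mask, dilation)
            dilation += 1
            # Noise resampling: re-noise the coarse latent to the current
            # noise level before splicing it into the known region.
            noised = add_noise(coarse_latent, torch.randn_like(x), t)
            x = m * noised + (1.0 - m) * x
    return x
```

With a diffusers-style scheduler, `add_noise` could map to `scheduler.add_noise` and `denoise_step` to one model evaluation followed by `scheduler.step`; these pairings are assumptions about the surrounding stack, not a statement of GaMO's implementation.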