Towards Geometric and Textural Consistency 3D Scene Generation via Single Image-guided Model Generation and Layout Optimization

1 Harbin Institute of Technology, Shenzhen    2 Peng Cheng Laboratory    3 Harbin Institute of Technology
4 Harbin Institute of Technology, Suzhou Research Institute    Corresponding Author

A single image-guided model generation and layout optimization framework that generates 3D scenes with precise textured meshes and spatial details.
(Teaser figure)

Interactive Results

(Interactive previews of generated 3D scene models)

Abstract

In recent years, 3D generation has made great strides in both academia and industry. However, generating 3D scenes from a single RGB image remains a significant challenge, as current approaches often struggle to ensure both object generation quality and scene coherence in multi-object scenarios. To overcome these limitations, we propose a novel three-stage framework for 3D scene generation with explicit geometric representations and high-quality textural details via single image-guided model generation and spatial layout optimization. Our method begins with an image instance segmentation and inpainting phase, which recovers missing details of occluded objects in the input image, thereby enabling complete generation of the foreground 3D assets. Subsequently, our approach captures the spatial geometry of the reference image by constructing a pseudo-stereo viewpoint for camera parameter estimation and scene depth inference, while employing a model selection strategy to ensure optimal alignment between the 3D assets generated in the previous step and the input. Finally, through model parameterization and minimization of the Chamfer distance between point clouds in 3D and 2D space, our approach optimizes layout parameters to produce an explicit 3D scene representation that maintains precise alignment with the input guidance image. Extensive experiments on multi-object scene image sets demonstrate that our approach not only outperforms state-of-the-art methods in the geometric accuracy and texture fidelity of individual generated 3D models, but also offers significant advantages in scene layout synthesis.
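
To make the layout objective above concrete, the sketch below writes the combined 3D/2D Chamfer term in one possible notation; the layout transform T_θ, the image-plane projection π, and the weight λ are illustrative assumptions rather than the paper's exact formulation.

\[
d_{\mathrm{CD}}(X, Y) = \frac{1}{|X|}\sum_{x \in X}\min_{y \in Y}\lVert x - y\rVert_2^2 + \frac{1}{|Y|}\sum_{y \in Y}\min_{x \in X}\lVert x - y\rVert_2^2
\]
\[
\mathcal{L}(\theta_i) = d_{\mathrm{CD}}\big(T_{\theta_i}(P_i),\, Q_i\big) + \lambda\, d_{\mathrm{CD}}\big(\pi(T_{\theta_i}(P_i)),\, \pi(Q_i)\big)
\]

Here P_i denotes points sampled from the selected 3D asset for instance i, Q_i the instance point cloud lifted from the estimated depth, T_{θ_i} the layout transform (scale, rotation, translation), and π the projection onto the reference image plane.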

Method Overview



Our approach accomplishes complex scene generation through three collaborative subtasks. Given a single image as guidance, during the Instance Segmentation and Generation stage we first perform object detection and instance segmentation to obtain instance-specific images, masks, and related information. We then repair imperfect instance images (e.g., the bed) and generate the corresponding 3D assets with a generative model. In the Point Cloud Extraction stage, we estimate camera parameters and the depth map of the input image to extract a complete scene point cloud, which is further segmented using the previously obtained masks to derive an independent point cloud representation for each instance. Additionally, we sample the generated 3D models into point clouds and apply a model selection strategy to choose the 3D assets that best match the instance images. During the Layout Optimization stage, we optimize layout parameters by minimizing the 3D and 2D Chamfer distance between the optimal model point cloud (depicted in red) and the instance point cloud (depicted in green), finally constructing a 3D scene that maintains high consistency with the reference image layout.
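
As a rough illustration of the layout optimization stage, the following Python sketch fits a per-instance scale, rotation, and translation by minimizing the 3D plus projected 2D Chamfer distance with gradient descent. All names here (chamfer, project, optimize_layout, the intrinsics K, the weight w2d) are hypothetical stand-ins under assumed conventions, not the authors' implementation.

import torch

def chamfer(a, b):
    # Symmetric Chamfer distance between point sets a (N, D) and b (M, D).
    d = torch.cdist(a, b)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def skew(k):
    # Skew-symmetric matrix of a 3-vector, built differentiably.
    zero = torch.zeros((), dtype=k.dtype)
    return torch.stack([torch.stack([zero, -k[2],  k[1]]),
                        torch.stack([ k[2],  zero, -k[0]]),
                        torch.stack([-k[1],  k[0],  zero])])

def axis_angle_to_matrix(r):
    # Rodrigues' formula: axis-angle vector (3,) -> rotation matrix (3, 3).
    theta = r.norm() + 1e-8
    K = skew(r / theta)
    return torch.eye(3) + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)

def project(pts, K):
    # Pinhole projection of camera-space points (N, 3) with intrinsics K (3, 3).
    uv = pts @ K.T
    return uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)

def optimize_layout(model_pts, instance_pts, K, steps=500, w2d=0.1):
    # Fit one asset's layout (scale, rotation, translation) to its instance point cloud.
    s = torch.ones(1, requires_grad=True)
    r = torch.zeros(3, requires_grad=True)
    t = instance_pts.mean(0).clone().requires_grad_(True)   # init at instance centroid
    opt = torch.optim.Adam([s, r, t], lr=1e-2)
    for _ in range(steps):
        R = axis_angle_to_matrix(r)
        pts = s * (model_pts @ R.T) + t
        loss = chamfer(pts, instance_pts) \
             + w2d * chamfer(project(pts, K), project(instance_pts, K))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return s.detach(), axis_angle_to_matrix(r).detach(), t.detach()

In a full pipeline, a routine of this kind would run once per instance, after which the fitted assets are composed into the final textured scene.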

BibTeX

@article{tang2025geometrictexturalconsistency3d,
  title={Towards Geometric and Textural Consistency 3D Scene Generation via Single Image-guided Model Generation and Layout Optimization},
  author={Tang, Xiang and Li, Ruotong and Fan, Xiaopeng},
  journal={arXiv preprint arXiv:2507.14841},
  year={2025}
}