PCDreamer: Point Cloud Completion Through Multi-view Diffusion Priors

CVPR 2025

Guangshun Wei1, Yuan Feng1, Long Ma1, Chen Wang1, Yuanfeng Zhou1, Changjian Li2
1Shandong University, 2University of Edinburgh
Teaser Image

Given a partial point cloud input (a), with (b) as a novel view for visualization purposes, the goal of point cloud completion is to produce a complete point cloud that retains both global and local geometric features. Existing methods (c, d) can neither recover local thin structures (e.g., the lamp holder) nor capture the global symmetric parts (e.g., the back supporter of the chair), while our approach (e) faithfully produces the desired shape compared with the ground truth (f).

Abstract

This paper presents PCDreamer, a novel method for point cloud completion. Traditional methods typically extract features from partial point clouds to predict missing regions, but the large solution space often leads to unsatisfactory results. More recent approaches have started to use images as extra guidance, effectively improving performance, but obtaining paired data of images and partial point clouds is challenging in practice. To overcome these limitations, we harness the relatively view-consistent multi-view diffusion priors within large models to generate novel views of the desired shape. The resulting image set encodes both global and local shape cues, which is especially beneficial for shape completion. To fully exploit the priors, we design a shape fusion module that produces an initial complete shape from multi-modality input (i.e., images and point clouds), and a follow-up shape consolidation module that obtains the final complete shape by discarding unreliable points introduced by the inconsistency of the diffusion priors. Extensive experimental results demonstrate our superior performance, especially in recovering fine details.

Method

Method Image

Figure 1. Overview of PCDreamer. Given an input partial point cloud, we design three core modules to complete it. The multi-view image generation module (Sec. 3.1) dreams out multi-view images of the input by leveraging a few large models; the priors within these models serve as the fuel for the completion. The following shape fusion module (Sec. 3.2) effectively fuses the original input and the generated multi-view images with the help of the attention mechanism. A final shape consolidation module (Sec. 3.3) further reduces the inherent inconsistency introduced by the large models, producing a complete, dense, and uniform point cloud with both global and local shape features.
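To make the pipeline concrete, below is a minimal, illustrative PyTorch sketch of the fusion and consolidation stages described above. All module names, tensor shapes, the simple cross-attention fusion, and the confidence-based point filtering are assumptions for illustration only, not the authors' released implementation.

```python
# Illustrative sketch (assumptions, not the official PCDreamer code):
# a cross-attention fusion of point-cloud and multi-view features,
# followed by a confidence-based consolidation that drops unreliable points.
import torch
import torch.nn as nn


class ShapeFusion(nn.Module):
    """Fuse partial-point-cloud features with multi-view image features via cross-attention."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(dim, 3)  # regress coarse point coordinates

    def forward(self, point_feats, view_feats):
        # point_feats: (B, Np, dim) features of the partial point cloud
        # view_feats:  (B, Nv, dim) features pooled from the generated views
        fused, _ = self.attn(query=point_feats, key=view_feats, value=view_feats)
        return self.head(fused)  # (B, Np, 3) coarse complete shape


class ShapeConsolidation(nn.Module):
    """Score points and discard the least reliable ones introduced by view inconsistency."""

    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, coarse, keep_ratio=0.8):
        conf = self.score(coarse).squeeze(-1)            # (B, Np) per-point confidence
        k = int(coarse.shape[1] * keep_ratio)
        idx = conf.topk(k, dim=1).indices                # keep the most reliable points
        return torch.gather(coarse, 1, idx.unsqueeze(-1).expand(-1, -1, 3))


if __name__ == "__main__":
    B, Np, Nv, dim = 2, 512, 6, 256
    point_feats = torch.randn(B, Np, dim)   # stand-in for partial-cloud features
    view_feats = torch.randn(B, Nv, dim)    # stand-in for multi-view image features
    coarse = ShapeFusion(dim)(point_feats, view_feats)
    final = ShapeConsolidation(dim)(coarse)
    print(final.shape)                      # torch.Size([2, 409, 3])
```

In this sketch the consolidation step simply keeps a fixed fraction of the highest-confidence points; the paper's actual module may use a different scoring and upsampling scheme to produce the final dense, uniform point cloud.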

Visual Results (PCN Dataset)

PCN Results

Figure 2. Visual comparisons on the PCN dataset. The second column shows representative depth images selected from the multiple views.

PCN Results with depth images

Figure 3. Visual results, along with multi-view depth images, on the PCN dataset. The generated multi-view depth images corresponding to the last three rows exhibit lower quality and less consistency than those of the first three rows.

BibTeX

@inproceedings{wei2024pcdreamerpointcloudcompletion,
      title={PCDreamer: Point Cloud Completion Through Multi-view Diffusion Priors},
      author={Wei, Guangshun and Feng, Yuan and Ma, Long and Wang, Chen and Zhou, Yuanfeng and Li, Changjian},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2025},
}