Quantifying and Alleviating Co-Adaptation in
Sparse-View 3D Gaussian Splatting

arXiv 2025
Kangjie Chen, Tsinghua University
Yingji Zhong, HKUST
Zhihao Li, Huawei Noah’s Ark Lab
Jiaqi Lin, Tsinghua University
Youyu Chen, Harbin Institute of Technology
Minghan Qin, Tsinghua University
Haoqian Wang, Tsinghua University

TL;DR: This paper introduces the concept of co-adaptation in 3D Gaussian Splatting, analyzes its impact on rendering artifacts, and proposes strategies (Dropout Regularization & Opacity Noise Injection) to reduce it.

Abstract

3D Gaussian Splatting (3DGS) has demonstrated impressive performance in novel view synthesis under dense-view settings. However, in sparse-view scenarios, despite realistic renderings in the training views, 3DGS occasionally manifests appearance artifacts in novel views. This paper investigates these appearance artifacts in sparse-view 3DGS and uncovers a core limitation of current approaches: the optimized Gaussians become overly entangled with one another as they aggressively fit the training views, neglecting the real appearance distribution of the underlying scene and producing appearance artifacts in novel views. The analysis is based on a proposed metric, termed the Co-Adaptation Score (CA), which quantifies the entanglement among Gaussians, i.e., co-adaptation, by computing the pixel-wise variance across multiple renderings of the same viewpoint with different random subsets of Gaussians. The analysis reveals that the degree of co-adaptation is naturally alleviated as the number of training views increases. Based on this analysis, we propose two lightweight strategies to explicitly mitigate co-adaptation in sparse-view 3DGS: (1) random Gaussian dropout; (2) multiplicative noise injection to the opacity. Both strategies are designed to be plug-and-play, and their effectiveness is validated across various methods and benchmarks. We hope that our insights into the co-adaptation effect will inspire the community to achieve a more comprehensive understanding of sparse-view 3DGS.

Co-Adaptation of 3DGS overview
(1) Visualization of 3DGS behaviors under different levels of co-adaptation. Thin gray arrows indicate training views, bold arrows indicate a novel view. Green arrow denotes correct color prediction, while red indicates color errors. (a) Simulates a 3DGS model trained with dense views, where Gaussian ellipsoids contribute evenly to pixel color across views, resulting in accurate rendering from the novel view. (b)(c)(d) Simulate various cases of 3DGS trained under sparse-view settings. (b) and (c) show that co-adaptation in the training views — where Gaussians contribute unequally to pixel colors — results in thin and thick artifacts under novel views. (d) shows a highly co-adapted case where multiple Gaussians with distinct colors collectively overfit a single grayscale pixel in the training view, resulting in severe wrong color artifacts under the novel view.

Quantifying Co-Adaptation

To quantitatively analyze co-adaptation in 3D Gaussian Splatting, we define a Co-Adaptation Score (CA) for each target viewpoint. The key idea is that if a set of Gaussians is overly dependent on one another, randomly removing some of them during rendering will lead to unstable outputs. Specifically, we randomly drop 50% of the Gaussians and render the target view using only the remaining ones. We repeat this process multiple times and measure the pixel-wise variance across the rendered results.

Quantifying Co-Adaptation of 3DGS
(2) Illustration of Co-Adaptation Score (CA) Computation. Higher CA scores indicate more inconsistent renderings, suggesting stronger co-adaptation effects. Lower CA scores reflect more stable and generalizable representations.
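This computation reduces to a short Monte Carlo loop: sample a random keep-mask, render with the surviving Gaussians, and take the variance across renders. Below is a minimal PyTorch sketch of this idea; the render_fn interface (a callable mapping a boolean keep-mask over Gaussians to an image tensor) is a hypothetical wrapper around whatever 3DGS rasterizer is in use, and averaging the per-pixel variance map into a single scalar is our assumption about the final reduction.

import torch

def co_adaptation_score(render_fn, num_gaussians, num_samples=8, keep_ratio=0.5):
    # Monte Carlo estimate of the Co-Adaptation Score (CA) for one viewpoint.
    # render_fn: hypothetical callable taking a boolean keep-mask of shape
    # (num_gaussians,) and returning an (H, W, 3) image tensor.
    renders = []
    with torch.no_grad():
        for _ in range(num_samples):
            # Keep a random ~50% subset of the Gaussians for this pass.
            mask = torch.rand(num_gaussians) < keep_ratio
            renders.append(render_fn(mask))
    stack = torch.stack(renders)  # (num_samples, H, W, 3)
    # Pixel-wise variance across the stochastic renders, averaged to a scalar.
    return stack.var(dim=0, unbiased=False).mean().item()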

Empirical observations on the Co-Adaptation Score (CA) in sparse-view 3DGS. We summarize three phenomena observed during sparse-view 3DGS training:

A. Increased training views reduce co-adaptation. (See Figure 3)

B. Co-adaptation temporarily weakens during early training. (See Figure 4)

C. Co-adaptation is lower at input views than at novel views. (See Figure 4)

Inspired by these empirical findings, we investigate whether suppressing co-adaptation in 3DGS can enhance rendering quality for novel views.

Quantifying Co-Adaptation of 3DGS
(3) Comparison of co-adaptation strength (CA), Gaussian count, and reconstruction quality under varying numbers of training views. (a-b) CA measured as pixel-wise variance for DNGaussian and Binocular3DGS on three target view types: extrapolation (far), interpolation (near), and training. (c) Gaussian count (left axis, in thousands) and average PSNR (right axis, in dB) for both methods. All plots are shown as functions of the number of input views, based on the LLFF “flower” scene.
Quantifying Co-Adaptation of 3DGS
(4) Training dynamics of co-adaptation strength (CA) and reconstruction quality (PSNR) across different LLFF scenes. CA score (left axis) and PSNR (right axis) over training iterations for DNGaussian and Binocular3DGS on the “flower” and “orchids” scenes of the LLFF dataset.

Alleviating Co-Adaptation

We explore regularization strategies to mitigate excessive co-adaptation in 3D Gaussian Splatting:

A. Dropout Regularization. Randomly drops subsets of Gaussians during training to prevent excessive co-adaptation among specific Gaussians, improving generalization to novel views (a minimal sketch of both strategies follows this list).

B. Opacity Noise Injection. Perturbs the opacity parameters with multiplicative noise to reduce deterministic fitting, effectively suppressing spurious co-adaptation and enhancing robustness.

C. Other Strategies. Beyond opacity, we further explore noise injection on other Gaussian attributes and advanced dropout variants. More details can be found in the Appendix of our paper.
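As a rough illustration, both strategies amount to a small perturbation of the Gaussian opacities at each training iteration. The sketch below is ours, not the authors' released code; the drop rate, the noise strength sigma, and the clamping range are assumed values, and the exact hyperparameters in the paper may differ.

import torch

def gaussian_dropout(opacity, drop_rate=0.1):
    # Random Gaussian dropout (training only): zero out a random subset
    # of Gaussians by masking their opacities. drop_rate is hypothetical.
    keep = (torch.rand_like(opacity) > drop_rate).float()
    return opacity * keep

def opacity_noise(opacity, sigma=0.1):
    # Multiplicative opacity noise injection (training only): scale each
    # opacity by a noise factor centered at 1. sigma is hypothetical;
    # clamping keeps the perturbed opacities in a valid range.
    noise = 1.0 + sigma * torch.randn_like(opacity)
    return (opacity * noise).clamp(0.0, 1.0)

At evaluation time the unperturbed opacities are used, analogous to standard dropout; whether to rescale the surviving opacities by 1/(1 - drop_rate) during training is a design choice this sketch leaves out.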

Improvements

LLFF_DTU_quantitative
(5) Quantitative comparison on the LLFF and DTU datasets. We evaluate five sparse-view 3DGS-based methods with and without our proposed co-adaptation suppression strategies (dropout regularization and opacity noise injection). We report PSNR, SSIM, LPIPS, and Co-Adaptation Scores (CA) on both training and novel views to assess reconstruction quality and co-adaptation reduction.
LLFF_DTU_vis
(6) Visual comparison on the LLFF dataset based on 3DGS and Binocular3DGS. Suppressing co-adaptation reduces color noise and improves scene geometry and detail quality.
DTU_blender_vis
(7) Visual comparison on DTU and Blender datasets based on 3DGS and Binocular3DGS. Suppressing co-adaptation leads to clearer appearance representation in novel view rendering.
training_dynamics
(8) Training dynamics (CA and PSNR) of the Binocular3DGS baseline with and without the two training strategies. Both strategies significantly reduce the co-adaptation (CA) of the baseline while improving PSNR.

Concurrent works

Two other concurrent works, DropoutGS and DropGaussian, also use dropout to boost sparse-view 3DGS.

They attribute the effectiveness of dropout to empirical factors—such as reducing overfitting through fewer active splats (DropoutGS), or enhancing gradient flow to distant Gaussians (DropGaussian). We respect these insights and are pleased that several works highlight the benefits of dropout in sparse-view 3DGS. Our work complements these findings by offering a deeper analysis of co-adaptation, with the goal of stimulating broader discussion on more generalizable 3D representations.

Citation

Please use the following citation:
@article{chen2025quantifying,
  title={Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting},
  author={Chen, Kangjie and Zhong, Yingji and Li, Zhihao and Lin, Jiaqi and Chen, Youyu and Qin, Minghan and Wang, Haoqian},
  journal={arXiv preprint arXiv:2508.12720},
  year={2025}
}