TL;DR: This paper introduces the concept of co-adaptation in 3D Gaussian Splatting, analyzes its impact on rendering artifacts, and proposes strategies (Dropout Regularization & Opacity Noise Injection) to reduce it.
3D Gaussian Splatting (3DGS) has demonstrated impressive performance in novel view synthesis under dense-view settings. However, in sparse-view scenarios, despite realistic renderings at the training views, 3DGS occasionally manifests appearance artifacts in novel views. This paper investigates the appearance artifacts in sparse-view 3DGS and uncovers a core limitation of current approaches: the optimized Gaussians are overly entangled with one another to aggressively fit the training views, which leads to a neglect of the real appearance distribution of the underlying scene and results in appearance artifacts in novel views. The analysis is based on a proposed metric, termed Co-Adaptation Score (CA), which quantifies the entanglement among Gaussians, i.e., co-adaptation, by computing the pixel-wise variance across multiple renderings of the same viewpoint with different random subsets of Gaussians. The analysis reveals that the degree of co-adaptation is naturally alleviated as the number of training views increases. Based on this analysis, we propose two lightweight strategies to explicitly mitigate co-adaptation in sparse-view 3DGS: (1) random Gaussian dropout; (2) multiplicative noise injection to the opacity. Both strategies are designed to be plug-and-play, and their effectiveness is validated across various methods and benchmarks. We hope that our insights into the co-adaptation effect will inspire the community to achieve a more comprehensive understanding of sparse-view 3DGS.
To quantitatively analyze co-adaptation in 3D Gaussian Splatting, we define a Co-Adaptation Score (CA) for each target viewpoint. The key idea is that if the Gaussians are overly dependent on one another, then randomly removing part of them during rendering will lead to unstable outputs. Specifically, we randomly drop 50% of the Gaussians and render the target view using only the remaining ones. We repeat this process multiple times and measure the variance across the rendered results.
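The procedure above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `render_fn` is a hypothetical renderer that rasterizes only the Gaussians selected by a boolean mask, and the number of trials and the exact variance aggregation (mean over pixels and channels here) are assumptions.

```python
import numpy as np

def co_adaptation_score(render_fn, num_gaussians, n_trials=8, keep_ratio=0.5, seed=0):
    """Co-Adaptation Score (CA) for one viewpoint: pixel-wise variance across
    renderings of the same view, each using a different random subset of Gaussians.

    render_fn(mask) -- hypothetical renderer; rasterizes only the Gaussians
                       where mask is True and returns an (H, W, 3) image.
    """
    rng = np.random.default_rng(seed)
    renders = []
    for _ in range(n_trials):
        # Randomly keep ~50% of the Gaussians for this rendering pass.
        mask = rng.random(num_gaussians) < keep_ratio
        renders.append(render_fn(mask))
    renders = np.stack(renders)            # (n_trials, H, W, 3)
    # Variance over trials at each pixel, averaged into a single score.
    return renders.var(axis=0).mean()
```

Under this reading, a scene whose appearance is robust to which half of the Gaussians survives yields a low CA, while heavily co-adapted Gaussians yield a high CA.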
Empirical observations on the Co-Adaptation Score (CA) in sparse-view 3DGS. We summarize three empirical phenomena observed during sparse-view 3DGS training:
A. Increased training views reduce co-adaptation. (See Figure 3)
B. Co-adaptation temporarily weakens during early training. (See Figure 4)
C. Co-adaptation is lower at input views than novel views. (See Figure 4)
Inspired by these empirical findings, we investigate whether suppressing co-adaptation in 3DGS can enhance rendering quality for novel views.
We explore regularization strategies to mitigate excessive co-adaptation in 3D Gaussian Splatting:
A. Dropout Regularization. Randomly drops subsets of Gaussians during training to prevent excessive co-adaptation among specific Gaussians, improving generalization to novel views.
B. Opacity Noise Injection. Perturbs the opacity parameters with noise to reduce deterministic fitting, effectively suppressing spurious co-adaptation and enhancing robustness.
C. Other Strategies. Beyond opacity, we further explore noise injection on other Gaussian attributes and advanced dropout variants. More details can be found in the Appendix of our paper.
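Strategies A and B can be sketched as two small perturbations applied to the opacities inside each training iteration. This is an illustrative sketch only: the function names, the drop rate, and the noise scale `sigma` are assumptions, and the paper's exact formulation (e.g., clipping and scheduling) may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_dropout(opacity, drop_rate=0.1):
    """Random Gaussian dropout: zero each Gaussian's opacity with probability
    drop_rate for this iteration, so no Gaussian can rely on a fixed set of
    partners to explain a training pixel."""
    keep = rng.random(opacity.shape) >= drop_rate
    return opacity * keep

def opacity_noise(opacity, sigma=0.1):
    """Opacity noise injection: multiplicative noise on the opacity,
    opacity * (1 + eps) with eps ~ N(0, sigma^2), clipped back to [0, 1]
    to keep it a valid opacity. The randomness discourages deterministic
    fitting that depends on exact opacity values."""
    eps = rng.normal(0.0, sigma, size=opacity.shape)
    return np.clip(opacity * (1.0 + eps), 0.0, 1.0)
```

In a real 3DGS pipeline these perturbations would be applied to the opacity tensor before rasterization during training only, and disabled at evaluation time, analogous to how dropout is used in neural networks.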
There are two concurrent works, DropoutGS and DropGaussian, that also use dropout to boost sparse-view 3DGS.
They attribute the effectiveness of dropout to empirical factors—such as reducing overfitting through fewer active splats (DropoutGS), or enhancing gradient flow to distant Gaussians (DropGaussian). We respect these insights and are pleased that several works highlight the benefits of dropout in sparse-view 3DGS. Our work complements these findings by offering a deeper analysis of co-adaptation, with the goal of stimulating broader discussion on more generalizable 3D representations.

@article{chen2025quantifying,
title={Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting},
author={Chen, Kangjie and Zhong, Yingji and Li, Zhihao and Lin, Jiaqi and Chen, Youyu and Qin, Minghan and Wang, Haoqian},
journal={arXiv preprint arXiv:2508.12720},
year={2025}
}