"3DGS" redirects here. It could also refer to 3D GameStudio, a game development kit.
3D Gaussian splatting (3DGS) is a technique used in the field of real-time radiance field rendering.[5] It enables real-time rendering of high-quality novel views of a scene reconstructed from multiple photos or videos, addressing a long-standing challenge in the field.
The method represents scenes with 3D Gaussians that retain properties of continuous volumetric radiance fields, starting from the sparse points produced during camera calibration (structure-from-motion). It introduces an anisotropic representation of the radiance field using 3D Gaussians, together with interleaved optimization and density control of the Gaussians. A fast visibility-aware rendering algorithm supporting anisotropic splatting is also proposed, tailored to GPU execution.[6]
The method involves several key steps.
The method uses differentiable 3D Gaussian splatting, which is unstructured and explicit, allowing rapid rendering and projection to 2D splats. The covariance of each Gaussian can be thought of as the configuration of an ellipsoid, and can be mathematically decomposed into a scaling matrix and a rotation matrix. The gradients for all parameters are derived explicitly to avoid the overhead of automatic differentiation.
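The ellipsoid parameterization above guarantees a valid (symmetric, positive semi-definite) covariance matrix Σ = R S Sᵀ Rᵀ, where S is a diagonal scaling matrix and R a rotation matrix built from a quaternion. A minimal NumPy sketch of this construction (function names are illustrative, not taken from the authors' code):

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def gaussian_covariance(scale, quat):
    """Build Sigma = R S S^T R^T from per-axis scales and a rotation
    quaternion -- the ellipsoid decomposition of a 3D Gaussian."""
    R = quat_to_rotmat(np.asarray(quat, dtype=float))
    S = np.diag(scale)
    M = R @ S
    return M @ M.T  # symmetric and positive semi-definite by construction

# Example: an axis-aligned ellipsoid (identity rotation)
cov = gaussian_covariance([2.0, 1.0, 0.5], [1.0, 0.0, 0.0, 0.0])
```

Optimizing scale and quaternion separately, rather than the covariance entries directly, is what keeps the covariance physically meaningful throughout gradient descent.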
The optimization creates a dense set of 3D Gaussians that represent the scene as accurately as possible. Each step of rendering is followed by a comparison to the training views available in the dataset.
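The comparison against training views is driven by a photometric loss; the original paper combines an L1 term with a D-SSIM term weighted by λ = 0.2. A hedged sketch of such a loss, using a simplified single-window SSIM rather than the paper's windowed variant:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified whole-image SSIM for images in [0, 1]
    (the paper uses a windowed variant)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2*mx*my + c1) * (2*cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def photometric_loss(rendered, target, lam=0.2):
    """L = (1 - lam) * L1 + lam * D-SSIM, as in the 3DGS paper."""
    l1 = np.abs(rendered - target).mean()
    dssim = 1.0 - ssim_global(rendered, target)
    return (1 - lam) * l1 + lam * dssim

img = np.random.default_rng(0).random((32, 32))
loss_same = photometric_loss(img, img)  # ~0 for identical images
```

During optimization this loss is backpropagated to every Gaussian's position, covariance, opacity, and color, while the density-control step periodically clones, splits, or prunes Gaussians.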
The authors (Kerbl et al.) tested their algorithm on 13 real scenes from previously published datasets and on the synthetic Blender dataset.[8] They compared their method against state-of-the-art techniques such as Mip-NeRF360,[9] InstantNGP,[10] and Plenoxels.[11] The quantitative evaluation metrics used were PSNR, LPIPS, and SSIM.
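Of these metrics, PSNR is the simplest to state: it is a log-scaled inverse of the mean squared error between a rendered image and its ground-truth view, reported in decibels (higher is better). A minimal illustrative sketch, not the authors' evaluation code:

```python
import numpy as np

def psnr(rendered, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images in [0, max_val]."""
    mse = np.mean((rendered - target) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

target = np.ones((16, 16))
value = psnr(0.9 * target, target)  # MSE = 0.01 -> 20 dB
```

SSIM instead compares local luminance, contrast, and structure, and LPIPS measures distance in a deep network's feature space; the three together capture complementary aspects of image quality.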
Their fully converged model (30,000 iterations) achieves quality on par with or slightly better than Mip-NeRF360,[12] but with significantly reduced training time (35–45 minutes versus 48 hours) and faster rendering (real time versus 10 seconds per frame). At 7,000 iterations (5–10 minutes of training), their method achieves quality comparable to InstantNGP[13] and Plenoxels.[14]
For synthetic bounded scenes (the Blender dataset[15]), they achieved state-of-the-art results even with random initialization, starting from 100,000 uniformly random Gaussians.
The method has several limitations. The authors note that some of these could potentially be addressed through future improvements such as better culling approaches, antialiasing, regularization, and compression techniques.
Extending 3D Gaussian splatting to dynamic scenes, 3D Temporal Gaussian splatting incorporates a time component, allowing for real-time rendering of dynamic scenes at high resolution.[16] It represents and renders dynamic scenes by modeling complex motions while maintaining efficiency. The method uses a HexPlane to connect adjacent Gaussians, providing an accurate representation of position and shape deformations. Using only a single set of canonical 3D Gaussians and a learned deformation model, it predicts how they move across timestamps.[17]
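The core idea, keeping one canonical set of Gaussians and deforming their centers as a function of time, can be illustrated with a toy per-Gaussian motion model. This stand-in (a simple polynomial in t) replaces the HexPlane-based deformation network described above purely for illustration:

```python
import numpy as np

def deform(canonical_xyz, t, lin_vel, quad_acc):
    """Toy motion model: displace each canonical Gaussian center by a
    per-Gaussian linear + quadratic term in time. A real 4D/temporal
    splatting method would instead query a learned deformation field
    (e.g. a HexPlane) at (x, y, z, t)."""
    return canonical_xyz + lin_vel * t + 0.5 * quad_acc * t**2

# Four canonical centers at the origin, unit velocity, no acceleration
pts = np.zeros((4, 3))
vel = np.ones((4, 3))
acc = np.zeros((4, 3))
moved = deform(pts, 0.5, vel, acc)  # each center shifts by 0.5 on every axis
```

Because only the deformation parameters vary with time, the (static) rendering pipeline of 3DGS can be reused at every timestamp.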
It is sometimes referred to as "4D Gaussian splatting"; however, this name would imply the use of true 4D Gaussian primitives (parameterized by a 4D mean vector and a 4×4 covariance matrix). Most work in this area still employs 3D Gaussian primitives, applying temporal constraints as an extra optimization parameter.
The technique achieves real-time rendering of dynamic scenes at high resolution while maintaining quality, and shows potential for future applications in film and other media, although current limitations remain regarding the length of motion that can be captured.[18]
3D Gaussian splatting has been adapted and extended across various computer vision and graphics applications, from dynamic scene rendering to autonomous driving simulations and 4D content creation.
Westover, Lee Alan (July 1991). "SPLATTING: A Parallel, Feed-Forward Volume Rendering Algorithm" (PDF). Retrieved October 18, 2023. https://articles.tomasparks.name/publications/Westover1991.pdf
Huang, Jian (Spring 2002). "Splatting" (PPT). Retrieved 5 August 2011. http://web.eecs.utk.edu/~huangj/CS594S02/splatting.ppt
Kerbl, Bernhard; Kopanas, Georgios; Leimkuehler, Thomas; Drettakis, George (2023-07-26). "3D Gaussian Splatting for Real-Time Radiance Field Rendering". ACM Transactions on Graphics. 42 (4): 139:1–139:14. arXiv:2308.04079. doi:10.1145/3592433. ISSN 0730-0301. https://dl.acm.org/doi/10.1145/3592433
Wu, Guanjun; Yi, Taoran; Fang, Jiemin; Xie, Lingxi; Zhang, Xiaopeng; Wei, Wei; Liu, Wenyu; Tian, Qi; Wang, Xinggang (June 2024). "4D Gaussian Splatting for Real-Time Dynamic Scene Rendering". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 20310–20320. arXiv:2310.08528. doi:10.1109/CVPR52733.2024.01920. https://ieeexplore.ieee.org/document/10656774/
Fridovich-Keil, Sara; Yu, Alex; Tancik, Matthew; Chen, Qinhong; Recht, Benjamin; Kanazawa, Angjoo (June 2022). "Plenoxels: Radiance Fields without Neural Networks". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5491–5500. arXiv:2112.05131. doi:10.1109/cvpr52688.2022.00542. ISBN 978-1-6654-6946-3.
Mildenhall, Ben; Srinivasan, Pratul P.; Tancik, Matthew; Barron, Jonathan T.; Ramamoorthi, Ravi; Ng, Ren (2020). "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis". Lecture Notes in Computer Science. Cham: Springer International Publishing. pp. 405–421. doi:10.1007/978-3-030-58452-8_24. ISBN 978-3-030-58451-1. Retrieved 2024-09-25.
Barron, Jonathan T.; Mildenhall, Ben; Verbin, Dor; Srinivasan, Pratul P.; Hedman, Peter (June 2022). "Mip-NeRF 360: Unbounded Anti-Aliased Neural Radiance Fields". 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5460–5469. arXiv:2111.12077. doi:10.1109/cvpr52688.2022.00539. ISBN 978-1-6654-6946-3.
Müller, Thomas; Evans, Alex; Schied, Christoph; Keller, Alexander (July 2022). "Instant neural graphics primitives with a multiresolution hash encoding". ACM Transactions on Graphics. 41 (4): 1–15. arXiv:2201.05989. doi:10.1145/3528223.3530127. ISSN 0730-0301. https://dx.doi.org/10.1145/3528223.3530127
Franzen, Carl (16 October 2023). "Actors' worst fears come true? New 3D Temporal Gaussian Splatting method captures human motion". venturebeat.com. VentureBeat. Retrieved October 18, 2023. https://venturebeat.com/ai/actors-worst-fears-come-true-new-4d-gaussian-splatting-method-captures-human-motion
Chen, Zilong; Wang, Feng; Wang, Yikai; Liu, Huaping (2024-06-16). "Text-to-3D using Gaussian Splatting". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 21401–21412. arXiv:2309.16585. doi:10.1109/cvpr52733.2024.02022. ISBN 979-8-3503-5300-6.
Chen, Li; Wu, Penghao; Chitta, Kashyap; Jaeger, Bernhard; Geiger, Andreas; Li, Hongyang (2024). "End-to-end Autonomous Driving: Challenges and Frontiers". IEEE Transactions on Pattern Analysis and Machine Intelligence. PP (12): 10164–10183. arXiv:2306.16927. doi:10.1109/tpami.2024.3435937. ISSN 0162-8828. PMID 39078757. https://dx.doi.org/10.1109/tpami.2024.3435937
Guédon, Antoine; Lepetit, Vincent (2024-06-16). "SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 5354–5363. arXiv:2311.12775. doi:10.1109/cvpr52733.2024.00512. ISBN 979-8-3503-5300-6.
Keetha, Nikhil; Karhade, Jay; Jatavallabhula, Krishna Murthy; Yang, Gengshan; Scherer, Sebastian; Ramanan, Deva; Luiten, Jonathon (2024-06-16). "SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 21357–21366. doi:10.1109/cvpr52733.2024.02018. ISBN 979-8-3503-5300-6.
Ling, Huan; Kim, Seung Wook; Torralba, Antonio; Fidler, Sanja; Kreis, Karsten (2024-06-16). "Align Your Gaussians: Text-to-4D with Dynamic 3D Gaussians and Composed Diffusion Models". 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE. pp. 8576–8588. arXiv:2312.13763. doi:10.1109/cvpr52733.2024.00819. ISBN 979-8-3503-5300-6.