GS-LTS: 3D Gaussian Splatting-Based Adaptive Modeling for Long-Term Service Robots

Bin Fu, Jialin Li, Bin Zhang, Ruiping Wang, Xilin Chen,
Institute of Computing Technology, Chinese Academy of Sciences
Submitted to IROS 2025

Abstract

3D Gaussian Splatting (3DGS) has garnered significant attention in robotics for its explicit, high-fidelity dense scene representation, demonstrating strong potential for robotic applications. However, 3DGS-based methods in robotics primarily focus on static scenes, with limited attention to the dynamic scene changes essential for long-term service robots. These robots demand sustained task execution and efficient scene updates—challenges current approaches fail to meet. To address these limitations, we propose GS-LTS (Gaussian Splatting for Long-Term Service), a 3DGS-based system enabling indoor robots to manage diverse tasks in dynamic environments over time. GS-LTS detects scene changes (e.g., object addition or removal) via single-image change detection, employs a rule-based policy to autonomously collect multi-view observations, and efficiently updates the scene representation through Gaussian editing. Additionally, we propose a simulation-based benchmark that automatically generates scene change data as compact configuration scripts, providing a standardized, user-friendly evaluation benchmark. Experimental results demonstrate GS-LTS's advantages in reconstruction, navigation, and scene updating—faster and higher-quality updates than the image-training baseline—advancing 3DGS for long-term robotic operations.

Common Scene Changes

Real-world environments involve object changes, as illustrated in the figure. In such environments, the robot must continuously observe the scene, autonomously detect changes, and update its scene representation to maintain accuracy.

GS-LTS System

We propose GS-LTS, a 3DGS-based system designed for long-term service robots in indoor environments. The GS-LTS framework integrates four key modules: (1) the Gaussian Mapping Engine, which constructs a semantic-aware 3DGS representation integrating geometry, visual appearance, and semantics; (2) the Multi-Task Executor, which lets the robot perform downstream tasks such as object navigation using the informative 3DGS representation; (3) the Change Detection Unit, a long-running module that detects scene changes at a specified frequency by comparing the robot's current RGB observations with historical 3DGS-rendered images, localizing altered regions and analyzing change types and positions; and (4) the Active Scene Updater, which, guided by a rule-based policy, directs the robot to collect multi-view images around detected areas and applies pre-editing and fine-tuning to dynamically update the 3DGS representation based on the detected change type and new observations. Together, these components enable the robot to adapt to evolving surroundings while maintaining robust performance over extended periods.
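The Change Detection Unit's render-versus-observation comparison can be illustrated with a minimal sketch. This is not the paper's actual detector; the function name, thresholds, and simple per-pixel differencing are assumptions made for illustration (the paper's single-image change detection is more sophisticated), but it conveys the idea of locating altered regions by differencing a historical 3DGS rendering against the current camera frame:

```python
import numpy as np

def detect_change(rendered_rgb, observed_rgb, threshold=0.15, min_area=200):
    """Compare a historical 3DGS-rendered view with the current camera image.

    Returns a binary change mask and a coarse bounding box of the changed
    region, or (mask, None) when the scene is considered unchanged.
    Threshold and min_area are illustrative values, not tuned parameters.
    """
    # Per-pixel mean absolute difference in normalized RGB space.
    diff = np.abs(rendered_rgb.astype(np.float32) / 255.0
                  - observed_rgb.astype(np.float32) / 255.0).mean(axis=-1)
    mask = diff > threshold
    if mask.sum() < min_area:
        return mask, None  # too few changed pixels: treat as unchanged
    ys, xs = np.nonzero(mask)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())  # (x0, y0, x1, y1)
    return mask, bbox
```

In the GS-LTS pipeline, the returned region would then trigger the rule-based policy to collect multi-view observations around the detected area.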

Experiments


This figure presents qualitative results of GS-LTS, showcasing one representative case for each of three scene change types. The renderings demonstrate that GS-LTS achieves superior and faster scene updates, while the image-training baseline requires significantly more iterations to reach comparable quality.
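The speed advantage over pure image training comes from pre-editing the Gaussian set before fine-tuning. The following is a minimal sketch of what such a pre-edit could look like; the function name, the axis-aligned region test, and the fixed low seed opacity are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def pre_edit_gaussians(means, opacities, change_type,
                       region_min, region_max, seed_points=None):
    """Coarsely edit a Gaussian set before fine-tuning on new observations.

    means:       (N, 3) Gaussian centers
    opacities:   (N,)   per-Gaussian opacities
    change_type: "removed" -> drop Gaussians inside the changed 3D box
                 "added"   -> append low-opacity Gaussians at seed_points
    """
    if change_type == "removed":
        # Keep only Gaussians whose centers fall outside the changed box.
        inside = np.all((means >= region_min) & (means <= region_max), axis=1)
        return means[~inside], opacities[~inside]
    if change_type == "added" and seed_points is not None:
        # Seed new Gaussians with low opacity; fine-tuning refines them.
        new_op = np.full(len(seed_points), 0.1)
        return (np.vstack([means, seed_points]),
                np.concatenate([opacities, new_op]))
    return means, opacities
```

Starting fine-tuning from an already-edited Gaussian set like this, rather than relying on gradient descent alone to dissolve stale geometry, is what makes convergence faster than retraining from images.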

Conclusion

In this work, we introduce GS-LTS, a 3DGS-based system designed for long-term service robots operating in dynamic environments. By integrating object-level change detection, multi-view observation, and efficient Gaussian editing-based scene updates, GS-LTS enables robots to adapt to scene variations over time. Additionally, we propose a scalable simulation benchmark for evaluating object-level scene changes, facilitating systematic assessment and sim-to-real transfer. Experimental results demonstrate that GS-LTS achieves faster and higher-quality scene updates, advancing the applicability of 3DGS for long-term robotic operations.