DynoSAM Project Hub

A Unified Open-Source Framework for Dynamic Object Smoothing and Mapping

The DynoSAM project is a Stereo/RGB-D Visual Odometry pipeline for Dynamic SLAM developed by the ACFR-RPG research group at the University of Sydney. The publications produced as part of this open-source project are listed below.

DynoSAM is a factor-graph based framework that integrates static and dynamic measurements into a unified optimization problem, simultaneously estimating camera poses, the static scene, object motions or poses, and object structures.
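The key dynamic measurement in formulations of this kind is a rigid-body motion constraint: a point m on a rigid object observed at consecutive times k-1 and k satisfies m_k = H_k · m_{k-1}, where H_k is the object's SE(3) motion expressed in the world frame. A minimal numpy sketch of that constraint (illustrative only; the function and variable names here are ours, not DynoSAM's API):

```python
import numpy as np

def se3(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def motion_residual(H_k, m_prev, m_curr):
    """Residual of the rigid-body motion constraint m_k = H_k * m_{k-1}.

    H_k    : 4x4 object motion in the world frame between times k-1 and k
    m_prev : 3-vector, point on the object at time k-1 (world frame)
    m_curr : 3-vector, the same point at time k (world frame)
    Returns the 3-vector error; zero when the point moves rigidly with H_k.
    """
    m_h = np.append(m_prev, 1.0)          # homogeneous coordinates
    predicted = (H_k @ m_h)[:3]
    return m_curr - predicted

# Toy example: object rotates 90 degrees about z and translates by (1, 0, 0).
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
H = se3(Rz, np.array([1.0, 0.0, 0.0]))

m0 = np.array([1.0, 0.0, 0.0])            # point at time k-1
m1 = (H @ np.append(m0, 1.0))[:3]         # point at time k, moved rigidly

print(motion_residual(H, m0, m1))         # -> [0. 0. 0.]
```

In a factor graph, each such residual becomes a factor connecting the object motion variable to the two point variables, and the optimiser drives all residuals towards zero jointly with the camera-pose and static-landmark factors.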

DynoSAM Demo

Example output running on the Oxford Multimotion Dataset (OMD, 'Swinging 4 Unconstrained'). This visualisation was generated using playback after full-batch optimisation.

Parallel-Hybrid Demo

DynoSAM running the Parallel-Hybrid formulation in incremental optimisation mode on an indoor sequence recorded with an Intel RealSense. Playback is at 2x speed.

DynoSAM: Open-Source Smoothing and Mapping Framework for Dynamic SLAM

Jesse Morris, Yiduo Wang, Mikolaj Kliniewski, Viorela Ila

Accepted to: IEEE Transactions on Robotics (T-RO) 2025

Abstract

Traditional Visual Simultaneous Localization and Mapping systems focus solely on static scene structures, overlooking dynamic elements in the environment. Although effective for accurate visual odometry in complex scenarios, these methods discard crucial information about moving objects. By incorporating this information into a Dynamic SLAM framework, the motion of dynamic entities can be estimated, enhancing navigation whilst ensuring accurate localization. However, the fundamental formulation of Dynamic SLAM remains an open challenge, with no consensus on the optimal approach for accurate motion estimation within a SLAM pipeline. Therefore, we developed DynoSAM, an open-source framework for Dynamic Objects SLAM that enables the efficient implementation, testing, and comparison of various Dynamic SLAM optimization formulations. We further propose a novel formulation that encodes a rigid-body motion model in object pose estimation, as well as an error metric agnostic to object frame definition. DynoSAM integrates static and dynamic measurements into a unified optimization problem solved using factor graphs, simultaneously estimating camera poses, static scene, object motion or poses, and object structures. We evaluate DynoSAM across diverse simulated and real-world datasets, achieving state-of-the-art motion estimation in indoor and outdoor environments, with substantial improvements over existing systems. Additionally, we demonstrate DynoSAM's contributions to downstream applications, including 3D reconstruction of dynamic scenes and trajectory prediction, thereby showcasing potential for advancing dynamic object-aware SLAM systems.


Supplementary Video


Citation

BibTeX:


@article{morris2025dynosam,
  author={Morris, Jesse and Wang, Yiduo and Kliniewski, Mikolaj and Ila, Viorela},
  journal={IEEE Transactions on Robotics},
  title={DynoSAM: Open-Source Smoothing and Mapping Framework for Dynamic SLAM},
  year={2025},
  volume={},
  number={},
  pages={1-19},
  keywords={Simultaneous localization and mapping;Accuracy;Vehicle dynamics;Aerodynamics;Cameras;Trajectory;Robots;Pose estimation;Optimization;Kinematics;Dynamic SLAM;Mapping;RGBD Perception},
  doi={10.1109/TRO.2025.3641813}}
          

Online Dynamic SLAM with Incremental Smoothing and Mapping

Jesse Morris, Yiduo Wang, Viorela Ila

Accepted to: IEEE Robotics and Automation Letters (RA-L) 2025

Abstract

Dynamic SLAM methods jointly estimate the static and dynamic scene components; however, existing approaches, while accurate, are computationally expensive and unsuitable for online applications. In this work, we present the first application of incremental optimisation techniques to Dynamic SLAM. We introduce a novel factor-graph formulation and system architecture designed to take advantage of existing incremental optimisation methods and support online estimation. On multiple datasets, we demonstrate that our method achieves camera pose and object motion accuracy equal to or better than the state of the art. We further analyse the structural properties of our approach to demonstrate its scalability and provide insight regarding the challenges of solving Dynamic SLAM incrementally. Finally, we show that our formulation results in a problem structure well-suited to incremental solvers, while our system architecture further enhances performance, achieving a 5x speed-up over existing methods.

Key Contributions

  • A novel Hybrid formulation for Dynamic SLAM that combines the benefits of existing representations.
  • A novel Parallel-Hybrid architecture for Dynamic SLAM that facilitates online and incremental estimation. To the best of our knowledge, this is the first work to apply incremental optimisation methods to the Dynamic SLAM problem.
  • An analysis of the proposed architecture in the context of incremental inference, which highlights the practical trade-offs between accuracy and computation, as well as key challenges specific to Dynamic SLAM.
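The advantage of incremental estimation can be illustrated with a toy linear-Gaussian stand-in (this is not the paper's iSAM2-based machinery, just the underlying idea): in information form, each new frame's measurements only add terms to the accumulated system, so the estimate can be refreshed without rebuilding the whole problem.

```python
import numpy as np

class IncrementalLeastSquares:
    """Toy information-form solver: each frame's linear measurements
    A x = b are folded into Lambda = sum(A^T A) and eta = sum(A^T b),
    so the solve after frame k reuses all work accumulated so far."""

    def __init__(self, dim):
        self.Lambda = np.zeros((dim, dim))  # information matrix
        self.eta = np.zeros(dim)            # information vector

    def update(self, A, b):
        # Fold in one frame's measurements; cost is independent of
        # how many frames were processed before.
        self.Lambda += A.T @ A
        self.eta += A.T @ b

    def solve(self):
        return np.linalg.solve(self.Lambda, self.eta)

# Two scalar states observed over a stream of "frames".
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
solver = IncrementalLeastSquares(dim=2)
A_all, b_all = [], []

for _ in range(20):
    A = rng.normal(size=(3, 2))           # 3 measurements per frame
    b = A @ x_true + 0.01 * rng.normal(size=3)
    solver.update(A, b)
    A_all.append(A); b_all.append(b)

x_incremental = solver.solve()
# The batch solution over the stacked problem matches the incremental one.
x_batch, *_ = np.linalg.lstsq(np.vstack(A_all), np.concatenate(b_all), rcond=None)
print(np.allclose(x_incremental, x_batch))  # -> True
```

Real incremental smoothers such as iSAM2 achieve the same "only pay for what changed" behaviour on nonlinear factor graphs by updating a Bayes tree, which is where the structural properties analysed in the paper come into play.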
(a) Hybrid representation for Dynamic SLAM.
(b) Factor graph design for Hybrid representation.

Supplementary Video


Citation

BibTeX:


@article{morris2025online,
  author={Morris, Jesse and Wang, Yiduo and Ila, Viorela},
  journal={IEEE Robotics and Automation Letters},
  title={Online Dynamic SLAM with Incremental Smoothing and Mapping},
  year={2026},
  volume={},
  number={},
  pages={1-8},
  keywords={Simultaneous localization and mapping;Estimation;Accuracy;Aerodynamics;Optimization;Cameras;Trees (botanical);Bayes methods;Tracking;Scalability;SLAM;Localization;RGB-D Perception},
  doi={10.1109/LRA.2026.3655286}}