marp: true
theme: default
size: 4K
paginate: false

footer: 'Hamid Ebadi'

header: '   SIMLAN Project'

SIMLAN open-source project


Open-Source Simulation for Multi-Camera Robotics

The SIMLAN Framework

Hamid Ebadi, senior researcher at Infotiv AB


Research Projects

  • SMILE-IV: safety assurance framework for transport services

  • ARTWORK: The smart and connected worker


Volvo Projects

Volvo GTO in Tuve, Göteborg:

  • RITA (Robot In The Air): collaborative robot designed to assist with kitting

  • GPSS (Generic Photogrammetry-based Sensor System): ceiling-mounted cameras act as the shared "eyes" of the robot fleet. (more later)


Autonomous Robotics (intro)


Autonomous Robotics

SLAM

  • Vacuum cleaner
  • Simultaneous localization and mapping
  • Reliance on onboard sensors
  • Distributed decision making
  • Communication and synchronization

bg right:45% 100% "SLAM-R Algorithm of Simultaneous Localization and Mapping Using RFID for Obstacle Location and Recognition"


Autonomous Robotics

  • Limited field of view
  • Sensor interference (LiDAR)
  • No global view
    • resolving right-of-way
    • avoiding gridlock
  • Handling challenging environments
    • no landmarks
    • repetitive landmarks
    • dynamic landmarks

bg right:45% 100% "SLAM-R Algorithm of Simultaneous Localization and Mapping Using RFID for Obstacle Location and Recognition"


Video Preview

Watch the Video


Centralised Robotics (pros)

  • GPSS (camera-based)
  • Simpler onboard computation
  • Robots can focus on control rather than perception
  • Lower onboard energy consumption
  • Simpler hardware
  • Easier to maintain and upgrade
  • No need for robot-to-robot communication

bg right:45% 100% "todo"


Centralised Robotics (pros)

  • Improved explainability and accountability
  • Cameras are used for safety monitoring and non-repudiation
  • Improved safety by combining onboard and offboard sensors
  • More flexibility to add ML-based models

bg right:45% 100% ""


Centralised Robotics (cons)

  • Real-time requirements and network latency
  • Centralised processing is a single point of failure
  • Fixed cameras provide localization only, not mapping

Continuously testing these ML systems is challenging.


Open-Source Simulation for Multi-Camera Robotics

The SIMLAN Framework

  • Using simulation for complex human-robot collaboration.
  • Inspired by Volvo Group’s GPSS/RITA
  • Models ceiling-mounted cameras + factory layouts

bg right:50% 100%


SIMLAN: Asset & Environment Modeling (1)

  • Realistic warehouse models
  • Free/Open-source 3D software: FreeCAD, Blender
  • Relevant Assets:
    • shelves
    • pallets
    • boxes, ...
  • Configurable physical properties:
    • collision, inertia, mass, dimensions, visuals

bg contain right:40%


SIMLAN: Asset & Environment Modeling (2)

bg right:40% 100%

  • Sensors:
    • camera
    • semantic segmentation
    • depth sensors
    • collision sensors
  • Static Elements:
    • layouts
    • camera coordinates and orientation
    • ArUco markers on agents

DEMO: SIMLAN physics failures


SIMLAN: Asset & Environment Modeling (3)

  • Dynamic Elements:
    • Pallet truck
    • Forklift
    • Human worker
    • Robotic Arm

bg right:40% 100%


Multi-Agent Support: Namespaces & ROS_DOMAIN_ID

  • Unique namespace + ArUco ID per agent
  • Spawning static & dynamic agents (see the launch sketch below)
  • Localisation and Navigation
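A minimal ROS 2 launch sketch of spawning agents under separate namespaces; the package and executable names are placeholders, not the actual SIMLAN packages, and ROS_DOMAIN_ID would additionally be set per simulation instance via the environment.

```python
# Hypothetical launch file: spawn two agents under separate namespaces.
# Package/executable names are placeholders, not the actual SIMLAN packages.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    nodes = []
    for aruco_id, ns in enumerate(["agent_1", "agent_2"]):
        nodes.append(Node(
            package="simlan_agent",         # placeholder package name
            executable="agent_controller",  # placeholder executable name
            namespace=ns,                   # isolates topics: /agent_1/cmd_vel, ...
            parameters=[{"aruco_id": aruco_id}],
        ))
    return LaunchDescription(nodes)
```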

bg right:40% 100%


Camera Configuration/Calibration

  • Intrinsics: focal lengths, principal point, distortion coefficients
  • Extrinsics: rotation matrix + translation vector
  • Enables precise world-to-pixel projection (see the sketch below)
  • Crucial for image stitching & ArUco localization
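As a concrete illustration of the world-to-pixel projection, a minimal OpenCV sketch; the intrinsic and extrinsic values are invented, not a real SIMLAN camera calibration.

```python
# Illustrative world-to-pixel projection with OpenCV; the calibration
# values below are made up, not a real SIMLAN camera.
import numpy as np
import cv2

# Intrinsics: focal lengths (fx, fy), principal point (cx, cy)
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                    # distortion coefficients (assumed zero here)

# Extrinsics: rotation (Rodrigues vector) + translation, world -> camera
rvec = np.array([np.pi, 0.0, 0.0])    # camera looking straight down (illustrative)
tvec = np.array([0.0, 0.0, 6.0])      # e.g. mounted 6 m above the floor

# A 3D point on the factory floor (world frame, metres)
world_point = np.array([[2.0, 1.5, 0.0]])

pixel, _ = cv2.projectPoints(world_point, rvec, tvec, K, dist)
print(pixel.ravel())                  # (u, v) pixel coordinates in the image
```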

bg right:40% 100%


Bird’s-Eye View & Image stitching

  • Transform world → camera → pixel coordinates
  • Enables stitching of multiple camera feeds
  • camera_bird_eye_view package (see the warp sketch below)

bg right:50% 100%
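A minimal sketch of warping one ceiling-camera frame to a top-down floor view with a homography, assuming four known pixel-to-floor correspondences; the values and file names are invented and do not reflect the camera_bird_eye_view internals.

```python
# Illustrative bird's-eye warp of a single camera feed using a homography.
# The point correspondences are made up; in practice the mapping comes from
# the calibrated intrinsics/extrinsics of each ceiling camera.
import numpy as np
import cv2

frame = cv2.imread("camera_0.png")   # one ceiling-camera frame (placeholder file)

# Four pixel positions and their known positions on the floor plane (in cm)
pixels = np.float32([[420, 180], [880, 190], [900, 640], [400, 650]])
floor = np.float32([[0, 0], [400, 0], [400, 300], [0, 300]])

H = cv2.getPerspectiveTransform(pixels, floor)
top_down = cv2.warpPerspective(frame, H, (400, 300))

# Stitching then amounts to warping every camera into the same floor frame
# and blending the overlapping regions.
cv2.imwrite("camera_0_birdseye.png", top_down)
```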

ArUco Localization

  • Proof-of-concept GPSS system in SIMLAN
  • Uses OpenCV ArUco markers for localization
  • aruco_localization package (see the detection sketch below)
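A minimal OpenCV ArUco detection sketch (OpenCV ≥ 4.7 API); the dictionary and file name are assumptions, not the aruco_localization defaults.

```python
# Illustrative ArUco detection on one ceiling-camera frame.
# Dictionary and file name are assumptions, not aruco_localization defaults.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

frame = cv2.imread("camera_0.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

corners, ids, _ = detector.detectMarkers(gray)
if ids is not None:
    for marker_id, quad in zip(ids.ravel(), corners):
        # Marker centre in pixel coordinates; combined with the camera
        # calibration this yields the agent's pose on the floor.
        u, v = quad[0].mean(axis=0)
        print(f"agent {marker_id}: pixel ({u:.1f}, {v:.1f})")
```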

bg right:35% 100%


ArUco Navigation

  • Input: agent poses published on tf2
  • Nav2 handles navigation (with a fair amount of wiring; see the broadcaster sketch below)
  • Multi-camera robustness
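A minimal rclpy sketch of the kind of wiring involved: broadcasting an ArUco-derived pose on tf2 so Nav2 can consume it. Frame names and the get_aruco_pose() helper are hypothetical placeholders.

```python
# Illustrative tf2 broadcaster for an ArUco-derived pose.
# Frame names and get_aruco_pose() are hypothetical placeholders.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped
from tf2_ros import TransformBroadcaster


def get_aruco_pose():
    """Hypothetical stand-in for the ArUco localization output (x, y, qz, qw)."""
    return 2.0, 1.5, 0.0, 1.0


class ArucoTfBroadcaster(Node):
    def __init__(self):
        super().__init__("aruco_tf_broadcaster")
        self.broadcaster = TransformBroadcaster(self)
        self.timer = self.create_timer(0.1, self.publish_pose)  # 10 Hz

    def publish_pose(self):
        x, y, qz, qw = get_aruco_pose()
        t = TransformStamped()
        t.header.stamp = self.get_clock().now().to_msg()
        t.header.frame_id = "map"               # fixed factory frame
        t.child_frame_id = "agent_1/base_link"  # one frame per namespaced agent
        t.transform.translation.x = x
        t.transform.translation.y = y
        t.transform.rotation.z = qz
        t.transform.rotation.w = qw
        self.broadcaster.sendTransform(t)


def main():
    rclpy.init()
    rclpy.spin(ArucoTfBroadcaster())
```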

bg right:40% 100%


Safety

"Behavior Tree" for Geo-fencing

  • loss of observability
  • restricted area
  • collision
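A minimal behaviour-tree sketch of a geo-fence check using py_trees; py_trees is one common choice in the ROS ecosystem, but the fence limits and robot_position() stub here are invented and this is not necessarily SIMLAN's implementation.

```python
# Illustrative geo-fence check as a behaviour-tree leaf (py_trees 2.x).
# Fence limits and robot_position() are invented for the sketch.
import py_trees


def robot_position():
    """Hypothetical stand-in for the camera-based localization output."""
    return 2.0, 1.5  # (x, y) in metres


class InsideGeoFence(py_trees.behaviour.Behaviour):
    def __init__(self, x_max=10.0, y_max=5.0):
        super().__init__(name="InsideGeoFence")
        self.x_max, self.y_max = x_max, y_max

    def update(self):
        x, y = robot_position()
        if 0.0 <= x <= self.x_max and 0.0 <= y <= self.y_max:
            return py_trees.common.Status.SUCCESS   # keep navigating
        return py_trees.common.Status.FAILURE        # fall through to safety branch


# If the check fails, the selector ticks the placeholder safety behaviour instead.
root = py_trees.composites.Selector("Safety", memory=False)
root.add_children([InsideGeoFence(), py_trees.behaviours.Running(name="EmergencyStop")])
```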

bg right:40% 100%


SIMLAN GPSS video demo


RITA (Robot In The Air): collaborative robot designed to assist with kitting


bg


Panda arm demo
Panda arm and humanoid demo

bg right:30% 100%


Gazebo Actors

  • Gazebo actors: skeleton animation from COLLADA or BVH files and scripted trajectories
  • Gazebo actors are static (scripted trajectories only) and cannot interact physically
  • This limits their behavior to exactly what has been scripted

bg right:30% 100%


Humanoid Motion Capture

bg right:30% 100%

Humanoid robots replicate a real worker's movements.

  • Google MediaPipe pose estimation produces body landmarks (see the sketch below)
  • A neural network translates landmarks to joint controls
  • MoveIt2 handles motion planning and execution of the humanoid in Gazebo
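A minimal MediaPipe Pose sketch of the landmark-extraction step that feeds the landmark-to-joint network; the video file name is a placeholder, and the downstream network and MoveIt2 execution are not shown.

```python
# Illustrative MediaPipe pose-landmark extraction from a recorded worker video.
# The file name is a placeholder; the landmark-to-joint neural network and the
# MoveIt2 execution step are not shown here.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("worker_recording.mp4")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 normalized (x, y, z) landmarks per frame; these are the inputs
        # the neural network maps to humanoid joint commands.
        landmarks = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
        print(len(landmarks), "landmarks")

cap.release()
pose.close()
```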

Humanoid training


Summary of SIMLAN Features

  • Dockerized dev environment
  • Lower barriers for research in robotics/ML
  • Features:
    • Bird’s-eye stitching
    • ArUco-based localization
    • ROS 2 / Nav2 integration
    • Panda arm and humanoid

SIMLAN Use Cases

  • Rapid prototyping of ML-based localization/navigation
  • Reproducible experiments: consistent testing
  • Synthetic data generation for ML models
  • Safety testing without risking physical assets
  • Highly interactive environment for reinforcement learning & genetic algorithm experimentation
  • CI/CD: continuous development
  • V&V: verification and validation of complex, machine-learning-based systems

SIMLAN Use Cases

Testing and Development

  • Cost-efficient
  • Fast
  • Scalable
  • Safe
  • Privacy-friendly
  • Reproducible (CI/CD)

Open source


Technical Highlights

  • Middleware: ROS 2 (Robot Operating System), Jazzy Jalisco
    • Standard interfaces
  • Simulation Engine: Ignition Gazebo, simulating
    • Physics
    • Sensors
  • Developer environment: Docker + VS Code devcontainers (consistency and reproducibility)
  • Documentation: extensive & reproducible

Testing or Development

  • Simulation is for testing ONLY?
  • Pushing simulation toward the entire Software Development Life Cycle (SDLC)

bg 100%


Demo 1, Demo 2


Future Work

Generative AI

  • Style transfer with GANs for higher visual fidelity
  • Forward diffusion process
  • Reverse diffusion process
  • Hallucination
  • Integrate World Foundation Models (e.g., NVIDIA Cosmos)

bg right:40% 100%


bg right:50% 100%


Conclusion

  • SIMLAN: powerful platform for indoor multi-camera robotics
  • Reproducible, scalable, open-source
  • Academia & Industry
  • Roadmap: ML integration, human-robot collaboration, sim-to-real transfer
  • Need your support

Acknowledgements

  • INFOTIV AB
  • SMILE IV (Vinnova grant 2023-00789)
  • EUREKA ITEA4 ArtWork (Vinnova grant 2023-00970)
  • INFOTIV Colleagues: Pär Aronsson, Anton Hill, David Espedalen, Siyu Yi, Anders Bäckelie, Jacob Rohdin, Vasiliki Kostara, Nazeeh Alhosary, Marwa Naili
  • Other contributors: Tove Casparsson, Filip Melberg (Chalmers), Christoffer Johannesson, Sebastian Olsson, Hjalmar Ruscck from Dyno-Robotics, Erik Brorsson (Chalmers/Volvo)
  • Other Partners: Infotiv AB, RISE, Volvo Group, Dyno-Robotics, Chalmers

INFOTIV AB · Dyno-Robotics · RISE Research Institutes of Sweden · CHALMERS · Volvo Group