
Planet ROS

Planet ROS - http://planet.ros.org



ROS Industrial: Industrial Calibration Refresh

A number of usability updates have been made to the ROS-Industrial Industrial Calibration repository, along with a significant refresh of the documentation. The updates include:

  1. The intrinsic and extrinsic calibration widgets were cleaned up to share common infrastructure, reducing total code.
  2. The widgets now live in the application main window, so they can more easily be used in RViz, where the image display cannot (or need not) be in the main widget.
  3. A new widget separates the calibration configuration/data viewer from the results page. This also supports contexts like RViz that are more limited on vertical space, and it makes both pages easier to read.
  4. An “instructions” action and toolbar button have been added to provide information on how to run the calibration apps.

A new documentation page has been created with information and examples to help users get the most out of the calibration tools and achieve the most accurate hand-eye calibration. This includes an example and unit test for camera intrinsic calibration using the 10 x 10 modified circle grid data set.

The updated documentation includes a primer covering the basics of calibration and a “getting started” page with guidance on building the application, the ROS 1 and ROS 2 interfaces, and links to the GUI applications. A Docker setup has also been provided for those who prefer to work from a container.

We look forward to feedback on this latest update and hope the community finds it useful for bringing robust industrial calibration to your robotic perception systems and applications.

[WWW] https://rosindustrial.org/news/2025/8/22/industrial-calibration-refresh

ROS Discourse General: Ros-python-wheels: pip-installable ROS 2 packages

Hello ROS community,

I’d like to share that I’ve been working on making ROS 2 packages pip-installable with a project called ros-python-wheels! A select set of ROS packages is also available from a Python package index that I host on Cloudsmith.io.

Getting started with ROS 2 in Python is as simple as running:

pip install --extra-index-url https://dl.cloudsmith.io/public/ros-python-wheels/kilted/python/simple ros-rclpy[fastrtps]

Key Benefits

This enables a first-class developer experience when working with ROS in Python projects:

Comparison

This approach provides a distinct alternative to existing solutions:


I’d love to hear your thoughts on this project. I’d also appreciate a star on my project if you find this useful or if you think this is a good direction for ROS!

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-python-wheels-pip-installable-ros-2-packages/49688

ROS Discourse General: Announcement: rclrs 0.5.0 Release

We are thrilled to announce the latest official release of rclrs (0.5.0), the Rust client library for ROS 2! This latest version brings significant improvements and enhancements to rclrs, making it easier than ever to develop robotics applications in Rust.

Some of the highlights of this release include:

In addition to the new release of rclrs, we’re also happy to announce that rosidl_generator_rs is now released on the ROS buildfarm ( ROS Package: rosidl_generator_rs ), paving the way for shipping generated Rust messages alongside the existing messages for C++ and Python.

I’ll be giving a talk at ROSCon UK in Edinburgh and at ROSCon in Singapore about ros2-rust, rclrs and all the projects that we’ve been working on to bring Rust support to ROS. Happy to chat with anyone interested in our work, or in Rust in ROS in general :slight_smile:

And if you want to be part of the development of the next release, you’re more than welcome to join us on our Matrix chat channel!

I can’t emphasize enough how much this release was a community effort, we’re happy to have people from so many different affiliations contributing to rclrs. This release wouldn’t have been possible without:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/announcement-rclrs-0-5-0-release/49681

ROS Discourse General: [Project] ROS 2 MediaPipe Suite — parameterized multi-model node with landmarks & gesture events

We’re releasing a small ROS 2 suite that turns Google MediaPipe Tasks into reusable components.
A single parameterized node switches between hand / pose / face, publishes landmarks + high-level gesture events, ships RViz viz (overlay + MarkerArray), and a turtlesim demo for perception → event → behavior.

Newcomer note: This is my first ROS 2 package—I kept the setup minimal so students and prototypers can plug MediaPipe into ROS 2 in minutes.

Turtlesim Demo:

Repo: https://github.com/PME26Elvis/mediapipe_ros2_suite · ROS 2: Humble (CI), Jazzy (experimental) · License: Apache-2.0
CI: Humble — passing (required) · Jazzy — experimental (non-blocking)
Quick start:

ros2 run v4l2_camera v4l2_camera_node
ros2 launch mediapipe_ros2_py mp_node.launch.py model:=hand image_topic:=/image_raw start_rviz:=true
ros2 run turtlesim turtlesim_node & ros2 run mediapipe_ros2_py gesture_to_turtlesim

Feedback & contributions welcome.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/project-ros-2-mediapipe-suite-parameterized-multi-model-node-with-landmarks-gesture-events/49680

ROS Discourse General: New packages and patch release for Jazzy Jalisco 2025-08-20

We’re happy to announce 82 new packages and 705 updates are now available on Ubuntu Noble on amd64 for Jazzy Jalisco.

This sync was tagged as jazzy/2025-08-20 .

:jazzy::jazzy::jazzy:

Package Updates for jazzy

Note that package counts include dbgsym packages which have been filtered out from the list below

Added Packages [82]:

Updated Packages [705]:

Removed Packages [1]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/new-packages-and-patch-release-for-jazzy-jalisco-2025-08-20/49667

ROS Discourse General: Continuous Trajectory Recording and Replay for AgileX PIPER Robotic Arm

Hi ROS Community,

I’m excited to share details about implementing continuous trajectory recording and replay for the AgileX PIPER robotic arm. This solution leverages time-series data to accurately replicate complex motion trajectories, with full code, usage guides, and step-by-step demos included. It’s designed to support teaching demonstrations and automated operations, and I hope it brings value to your projects.

Abstract

This article implements continuous trajectory recording and replay on the AgileX PIPER robotic arm. By recording and reproducing time-series data, it faithfully replicates the arm’s complex motion trajectories. Below we analyze the code implementation and provide complete code, usage guidelines, and step-by-step demonstrations.

Keywords

Trajectory control; Continuous motion; Time series; Motion reproduction; AgileX PIPER

Code Repository

github link: https://github.com/agilexrobotics/Agilex-College.git

Function Demonstration

From Code → to Motion 🤖 The Magic of a Robotic Arm

1. Preparation Before Use

1.1. Preparation Work

1.2. Environment Configuration

2. Operation Steps for Continuous Trajectory Recording and Replay Function

2.1. Operation Steps

  1. Power on the robotic arm and connect the USB-to-CAN module to the computer (ensure that only one CAN module is connected).

  2. Open the terminal and activate the CAN module.

    sudo ip link set can0 up type can bitrate 1000000
    
  3. Clone the remote code repository.

    git clone https://github.com/agilexrobotics/Agilex-College.git
    
  4. Switch to the recordAndPlayTraj directory.

    cd Agilex-College/piper/recordAndPlayTraj/
    
  5. Run the recording program.

    python3 recordTrajectory_en.py
    
  6. Short-press the teach button to enter the teaching mode.

  7. Set the initial position of the robotic arm. After pressing Enter in the terminal, drag the robotic arm to record the trajectory.

  8. After recording, short-press the teach button again to exit the teaching mode.

  9. Precautions before replay:
    When exiting the teaching mode for the first time, a specific initialization process is required to switch from the teaching mode to the CAN mode. Therefore, the replay program will automatically perform a reset operation to return joints 2, 3, and 5 to safe positions (zero points) to prevent the robotic arm from suddenly falling due to gravity and causing damage. In special cases, manual assistance may be needed to return joints 2, 3, and 5 to zero points.

  10. Run the replay program.

    python3 playTrajectory_en.py
    
  11. After successful enabling, press Enter in the terminal to play the trajectory.

2.2. Recording Techniques and Strategies

Motion Planning Strategies:

Before starting the recording, the trajectory to be recorded should be planned:

  1. Starting Position Selection:

    • Select a safe position of the robotic arm as the starting point.
    • Ensure that the starting position is convenient for initialization during subsequent replay.
    • Avoid choosing a position close to the joint limit.
  2. Trajectory Path Design:

    • Plan a smooth motion path to avoid sharp direction changes.
    • Consider the kinematic constraints of the robotic arm to avoid singular positions.
    • Reserve sufficient safety margins to prevent collisions.
  3. Speed Control:

    • Maintain a moderate movement speed that preserves recording quality without dragging out the recording.
    • Appropriately slow down at key positions to improve accuracy.
    • Avoid sudden acceleration or deceleration.

3. Problems and Solutions

Problem 1: No Piper Class


Reason: The currently installed SDK is not the version with API.

Solution: Execute pip3 uninstall piper_sdk to uninstall the current SDK, and then install the 1_0_0_beta version of the SDK according to the method in 1.2. Environment Configuration.

Problem 2: The Robotic Arm Does Not Move, and the Terminal Outputs as Follows

Reason: The teach button was short-pressed during the program operation.

Solution: Check whether the indicator light of the teach button is off. If it is, re-run the program. If not, short-press the teach button to exit the teaching mode first and then run the program.

4. Implementation of Trajectory Recording Program

The trajectory recording program is the data collection module of the system, responsible for capturing the position information of the continuous joint movements of the robotic arm in the teaching mode.

4.1. Program Initialization and Configuration

4.1.1. Parameter Configuration Design

# Whether there is a gripper
have_gripper = True
# Maximum recording time in seconds (0 = unlimited, stop by terminating program)
record_time = 10.0
# Teach mode detection timeout in seconds
timeout = 10.0
# CSV file path for saving trajectory
CSV_path = os.path.join(os.path.dirname(__file__), "trajectory.csv")

Analysis of Configuration Parameters:

4.1.2. Robotic Arm Connection and Initialization

# Initialize and connect to robotic arm
piper = Piper("can0")
interface = piper.init()
piper.connect()
time.sleep(0.1)

Analysis of Connection Mechanism:

4.1.3. Position Acquisition and Data Storage

4.1.3.1. Position Acquisition Function
def get_pos():
    joint_state = piper.get_joint_states()[0]
    if have_gripper:
        '''Get current joint angles and gripper opening distance'''
        return joint_state + (piper.get_gripper_states()[0][0], )
    return joint_state
4.1.3.2. Position Change Detection
current_pos = get_pos()
if current_pos != last_pos:  # Record only when position changes
    wait_time = round(time.time() - last_time, 4)
    print(f"INFO: Wait time: {wait_time:0.4f}s, current position: {current_pos}")
    csv.write(f"{wait_time}," + ",".join(map(str, current_pos)) + "\n")
    last_pos = current_pos
    last_time = time.time()

Position Processing:

Time Processing:
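Off the robot, the change-detection idea can be exercised with a stand-in for get_pos(); the following is an illustrative sketch (record_changes is a made-up helper, not part of the original program):

```python
import io
import time

def record_changes(samples, csv_file):
    """Write a CSV row only when the sampled position differs from the previous one."""
    last_pos = samples[0]
    last_time = time.time()
    for current_pos in samples[1:]:
        if current_pos != last_pos:  # Record only when position changes
            wait_time = round(time.time() - last_time, 4)
            csv_file.write(f"{wait_time}," + ",".join(map(str, current_pos)) + "\n")
            last_pos = current_pos
            last_time = time.time()

buf = io.StringIO()
# Three samples; the repeated middle sample produces no row, so only one row is written
record_changes([(0.0, 0.1), (0.0, 0.1), (0.2, 0.3)], buf)
rows = buf.getvalue().splitlines()
```

This mirrors the delta-recording logic: static periods cost no storage, and each stored row carries the elapsed wait time since the previous change.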

4.1.4. Mode Detection and Switching

print("step 1: Press teach button to enter teach mode")
over_time = time.time() + timeout
while interface.GetArmStatus().arm_status.ctrl_mode != 2:
    if over_time < time.time():
        print("ERROR: Teach mode detection timeout. Please check if teach mode is enabled")
        exit()
    time.sleep(0.01)

Status Polling Strategy:
The program uses the polling method to detect the control mode. This method has the following characteristics:

Timeout Protection Mechanism:
The 10-second timeout setting takes into account the needs of actual operations:

Safety Features of Teaching Mode:

4.1.5. Data Storage

csv = open(CSV_path, "w")
# ... Recording loop ...
csv.write(f"{wait_time}," + ",".join(map(str, current_pos)) + "\n")
# ... End of recording ...
csv.close()

Data Integrity Guarantee:
After each position change the row is written to the file immediately, so data is not lost if the program exits abnormally.
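A defensive variant of that pattern (not the original code, which relies on frequent small writes) flushes explicitly after each row:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "trajectory.csv")
f = open(path, "w")
f.write("0.05,0.1,0.2,0.3,0.4,0.5,0.6,0.07\n")
f.flush()  # push the row from Python's buffer to the OS immediately
# os.fsync(f.fileno())  # optionally force it all the way to disk

# Even before close(), another reader already sees the flushed row
with open(path) as r:
    recovered = r.read()
f.close()
```

With flush() after each row, a crash between writes loses at most the row currently being composed.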

Data Format Selection:
Reasons for Choosing CSV Format for Data Storage:

Data Column Attributes:

4.1.6. Recording Control Logic

csv = open(CSV_path, "w")
input("step 2: Press Enter to start recording trajectory")
last_pos = get_pos()
last_time = time.time()
over_time = last_time + record_time
while record_time == 0 or time.time() < over_time:
    # Recording logic
    time.sleep(0.01)
csv.close()

Time Control Strategy:
The system supports two recording modes:

  1. Timed recording: when record_time > 0, recording stops automatically after the specified duration.
  2. Infinite recording: when record_time == 0, recording continues until the program is terminated manually.

The flexibility of this design:

4.1.7. Complete Code Implementation of Trajectory Recording Program

#!/usr/bin/env python3
# -*-coding:utf8-*-
# Record continuous trajectory
import os, time
from piper_sdk import *

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Maximum recording time in seconds (0 = unlimited, stop by terminating program)
    record_time = 10.0
    # Teach mode detection timeout in seconds
    timeout = 10.0
    # CSV file path for saving trajectory
    CSV_path = os.path.join(os.path.dirname(__file__), "trajectory.csv")

    # Initialize and connect to robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            '''Get current joint angles and gripper opening distance'''
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state

    print("step 1: Press teach button to enter teach mode")
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 2:
        if over_time < time.time():
            print("ERROR: Teach mode detection timeout. Please check if teach mode is enabled")
            exit()
        time.sleep(0.01)

    input("step 2: Press Enter to start recording trajectory")
    csv = open(CSV_path, "w")
    last_pos = get_pos()
    last_time = time.time()
    over_time = last_time + record_time

    while record_time == 0 or time.time() < over_time:
        current_pos = get_pos()
        if current_pos != last_pos:  # Record only when position changes
            wait_time = round(time.time() - last_time, 4)
            print(f"INFO: Wait time: {wait_time:0.4f}s, current position: {current_pos}")
            csv.write(f"{wait_time}," + ",".join(map(str, current_pos)) + "\n")
            last_pos = current_pos
            last_time = time.time()
        time.sleep(0.01)

    csv.close()
    print("INFO: Recording complete. Press teach button again to exit teach mode")

5. Implementation of Trajectory Replay Program

The trajectory replay program is the execution module of the system, responsible for reading recorded position data and controlling the robotic arm to reproduce these positions.

5.1. Parameter Configuration and Data Loading

Replay Parameter Configuration:

# replay times (0 means infinite loop)
play_times = 1
# replay interval in seconds
play_interval = 1.0
# Motion speed percentage (recommended range: 10-100)
move_spd_rate_ctrl = 100
# replay speed multiplier (recommended range: 0.1-2)
play_speed = 1.0

Play times control:
The play_times parameter supports three modes:

Dual Mechanism for Speed Control:
The system provides two speed control methods:

  1. move_spd_rate_ctrl: Controls the overall movement speed of the robotic arm.
  2. play_speed: Controls the time scaling of trajectory replay.

Advantages of this dual control:

Role of Play Interval:
The play_interval parameter plays an important role in continuous replay:
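The interplay of play_speed and play_interval boils down to choosing the delay after each trajectory point; a sketch with point_delay as an illustrative helper (not part of the original program):

```python
def point_delay(recorded_wait, play_speed, is_last_point, play_interval):
    """Delay after a trajectory point: the recorded wait time scaled by
    play_speed, except after the final point, where play_interval applies."""
    if is_last_point:
        return play_interval
    return recorded_wait / play_speed

# play_speed = 2.0 replays twice as fast, halving each recorded wait time
mid_delay = point_delay(0.5, 2.0, False, 1.0)   # delay between points
last_delay = point_delay(0.5, 2.0, True, 1.0)   # pause between replay cycles
```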

5.2. Data Loading: Reading Data Files

try:
    with open(CSV_path, 'r', encoding='utf-8') as f:
        track = list(csv.reader(f))
        if not track:
            print("ERROR: Trajectory file is empty")
            exit()
        track = [[float(j) for j in i] for i in track]  # Convert to float lists
except FileNotFoundError:
    print("ERROR: Trajectory file not found")
    exit()

Exception Handling Strategy:
The program adopts a comprehensive exception handling mechanism to cover common file operation errors:

Data type conversion: The program uses list comprehensions to convert string data to floating-point numbers.

5.3. Safety Stop Function

def stop():
    '''Stop robotic arm; must call this function when first exiting teach mode before using CAN mode'''
    interface.EmergencyStop(0x01)
    time.sleep(1.0)
    limit_angle = [0.1745, 0.7854, 0.2094]  # Arm only restored when joints 2,3,5 are within safe range
    pos = get_pos()
    while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
        time.sleep(0.01)
        pos = get_pos()
    # Restore arm
    piper.disable_arm()
    time.sleep(1.0)

Staged Stop Strategy:
The stop function adopts a phased safety stop strategy:

  1. Emergency stop phase: EmergencyStop(0x01) sends an emergency stop command to immediately halt all joint movements (joints with impedance).
  2. Safe position waiting: Waits for key joints (joints 2, 3, and 5) to move within the safe range.
  3. System recovery phase: Sends a recovery command to reactivate the control system.

Safety Range Design:
The program focuses on the positions of joints 2, 3, and 5 based on the mechanical structure characteristics of the PIPER robotic arm:

The safety angle ranges (10°, 45°, 12°) are set based on the following considerations:

Real-time monitoring mechanism: The program uses real-time polling to monitor joint positions, ensuring the next operation is performed only when safety conditions are met.
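The radian values in limit_angle correspond to the degree figures quoted above, as a quick conversion confirms:

```python
import math

limit_angle = [0.1745, 0.7854, 0.2094]  # thresholds from the stop() function
degrees = [round(math.degrees(a)) for a in limit_angle]  # ≈ [10, 45, 12]
```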

5.4. System Enable Function

def enable():
    while not piper.enable_arm():
        time.sleep(0.01)
    if have_gripper:
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        piper.enable_gripper()
        time.sleep(0.01)
    print("INFO: Enable successful")

Robotic Arm Enable: enable_arm()

Gripper Enable: enable_gripper()

Control Mode Settings:
Parameters for ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00):

5.5. Replay Control Logic

count = 0
input("step 2: Press Enter to start trajectory replay")
while play_times == 0 or abs(play_times) != count:
    for n, pos in enumerate(track):
        piper.move_j(pos[1:-1], move_spd_rate_ctrl)
        if have_gripper and len(pos) == 8:
            piper.move_gripper(pos[-1], 1)
        print(f"INFO: replay #{count + 1}, wait time: {pos[0] / play_speed:0.4f}s, target position: {pos[1:]}")
        if n == len(track) - 1:
            time.sleep(play_interval)
        else:
            time.sleep(pos[0] / play_speed)
    count += 1

Joint Control: move_j()

Gripper Control: move_gripper()

Replay Speed Adjustment Mechanism:
pos[0] / play_speed enables dynamic adjustment of trajectory replay speed:

Advantages of this implementation:

5.6. Complete Code of the Trajectory Replay Program

#!/usr/bin/env python3
# -*-coding:utf8-*-
# Play continuous trajectory
import os, time, csv
from piper_sdk import *

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # replay times (0 means infinite loop)
    play_times = 1
    # replay interval in seconds
    play_interval = 1.0
    # Motion speed percentage (recommended range: 10-100)
    move_spd_rate_ctrl = 100
    # replay speed multiplier (recommended range: 0.1-2)
    play_speed = 1.0
    # CAN mode switch timeout in seconds
    timeout = 5.0
    # CSV file path for saved trajectory
    CSV_path = os.path.join(os.path.dirname(__file__), "trajectory.csv")

    # Read trajectory file
    try:
        with open(CSV_path, 'r', encoding='utf-8') as f:
            track = list(csv.reader(f))
            if not track:
                print("ERROR: Trajectory file is empty")
                exit()
            track = [[float(j) for j in i] for i in track]  # Convert to float lists
    except FileNotFoundError:
        print("ERROR: Trajectory file not found")
        exit()

    # Initialize and connect to robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            '''Get current joint angles and gripper opening distance'''
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state

    def stop():
        '''Stop robotic arm; must call this function when first exiting teach mode before using CAN mode'''
        interface.EmergencyStop(0x01)
        time.sleep(1.0)
        limit_angle = [0.1745, 0.7854, 0.2094]  # Arm only restored when joints 2,3,5 are within safe range
        pos = get_pos()
        while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
            time.sleep(0.01)
            pos = get_pos()
        # Restore arm
        piper.disable_arm()
        time.sleep(1.0)

    def enable():
        while not piper.enable_arm():
            time.sleep(0.01)
        if have_gripper:
            interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
            piper.enable_gripper()
            time.sleep(0.01)
        print("INFO: Enable successful")

    print("step 1: Ensure robotic arm has exited teach mode before replay")
    if interface.GetArmStatus().arm_status.ctrl_mode != 1:
        over_time = time.time() + timeout
        stop()  # Required when first exiting teach mode
        while interface.GetArmStatus().arm_status.ctrl_mode != 1:
            if over_time < time.time():
                print("ERROR: CAN mode switch failed. Please confirm teach mode is exited")
                exit()
            interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
            time.sleep(0.01)
    enable()

    count = 0
    input("step 2: Press Enter to start trajectory replay")
    while play_times == 0 or abs(play_times) != count:
        for n, pos in enumerate(track):
            piper.move_j(pos[1:-1], move_spd_rate_ctrl)
            if have_gripper and len(pos) == 8:
                piper.move_gripper(pos[-1], 1)
            print(f"INFO: replay #{count + 1}, wait time: {pos[0] / play_speed:0.4f}s, target position: {pos[1:]}")
            if n == len(track) - 1:
                time.sleep(play_interval)  # Final point delay
            else:
                time.sleep(pos[0] / play_speed)  # Point-to-point delay
        count += 1

6. Summary

Using the AgileX PIPER robotic arm and its Python SDK, we have implemented continuous trajectory recording and replay, allowing the arm’s trajectories to be recorded and executed repeatedly and providing solid technical support for teaching demonstrations and automated operations.

If you have any questions or feedback about this implementation, feel free to share in the comments. Let’s discuss and improve it together! You can also contact us directly at support@agilex.ai for further assistance.

Thanks,
AgileX Robotics

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/continuous-trajectory-recording-and-replay-for-agilex-piper-robotic-arm/49656

ROS Discourse General: Looking for beta testers: Video Recorder – a black box for your robots

Ever had to review an incident that happened in the field but you weren’t recording a bagfile when it happened? Recording, uploading, and reviewing video and other data from your robot is surprisingly tricky. We wanted to make that easier.

Video Recorder (beta)

We’ve just released a new Video Recorder capability for Transitive in beta. It continuously records defined video tracks in a rolling buffer from ROS topics, USB cameras, RTSP feeds, or GStreamer pipelines, uploads them to the cloud in segments, and makes them playable on the web through a calendar and embeddable video player component.

How you can control it

On the robot, you can control it via a ROS API:

On the web frontend, use a simple JavaScript API to query available recordings by date/time—ideal for embedding in your dashboards.

We’re looking for feedback on:

Want to help?

If you’d like to try it out:

Your early feedback will shape how this tool becomes something that can really help startups with their robot operations.

Looking forward to your insights, questions, and suggestions!

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/looking-for-beta-testers-video-recorder-a-black-box-for-your-robots/49652

ROS Discourse General: 📢 Announcing CRISP: Closing the Gap Between ROS 2 and Robot Learning

Hi everyone,

In our lab we’ve been working on robot learning, in particular with Vision-Language-Action models (VLAs). To make it easier for the community to experiment with these methods on real robots, we decided to open source our package: CRISP :tada:

Our main contributions are:

We also provide detailed documentation so that both novice ROS 2 users and researchers can get started quickly. Our hope is to lower the entry barrier to robot learning and accelerate research in this domain!

:hammer_and_wrench: Features and Notes

:raising_hands: How You Can Get Involved

We’d love to hear your thoughts, suggestions, and experiences.

Cheers,
Daniel San José Pro | Learning Systems and Robotics Lab @ TUM

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/announcing-crisp-closing-the-gap-between-ros-2-and-robot-learning/49625

ROS Discourse General: Logging & Observability Review | Cloud Robotics WG Meeting 2025-08-25

Please come and join us for this coming meeting on Mon, Aug 25, 2025, 4:00–5:00 PM UTC, where the group plans to review all of the research performed so far into Logging & Observability. After hosting guest speakers from Roboto AI, Heex Technologies, and Bagel, the group will review their findings so far and consider what to look into next. The research is intended for a community guide on Logging & Observability, written by the group.

Last meeting, guest speakers Arun Venkatadri and Shouheng Yi came to present Bagel, a new open source project that lets you chat with your robotics data by using AI to search through recorded data. If you’d like to see the meeting, the recording is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/logging-observability-review-cloud-robotics-wg-meeting-2025-08-25/49623

ROS Discourse General: New packages for Humble Hawksbill 2025-08-15

Package Updates for Humble

Added Packages [128]:

Updated Packages [309]:

Removed Packages [0]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/new-packages-for-humble-hawksbill-2025-08-15/49593

ROS Discourse General: UBLOX ZED-X20P Integration Complete - 25Hz NavSatFix

I’ve completed initial UBLOX ZED-X20P integration in the ublox_dgnss package with 25Hz NavSatFix output.

Quick Start

ros2 launch ublox_dgnss ublox_x20p_rover_hpposllh_navsatfix.launch.py -- device_family:=x20p

What’s New

Available Now

Available now on GitHub for local compilation:

Architecture Notes

X20P main interface (0x01ab) fully supported with F9P/F9R compatibility.

UART interfaces (0x050c/0x050d) under investigation - see X20P UART1/UART2 interfaces (0x050c/0x050d) not supported - use main interface (0x01ab) · Issue #48 · aussierobots/ublox_dgnss · GitHub.

Have an X20P?

If you want to test it out and give us feedback, it would be appreciated!

3 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ublox-zed-x20p-integration-complete-25hz-navsatfix/49586

ROS Discourse General: RMW-RMW bridge - is it possible, has anyone done it?

We’re more and more thinking that there should be a RMW-RMW bridge for ROS 2.

Our specific use-case is simple - a microcontroller with MicroROS (thus FastDDS) and the rest would be better with Zenoh RMW. But we can’t use Zenoh in the rest of the system because DDS and Zenoh don’t talk to each other.

I know (or guess) that DDS-based RMWs can interoperate at the DDS level (though it’s incomplete for some combinations, AFAIU).

But if you need to connect a non-DDS RMW, there’s currently no option.

I haven’t dived into RMW details too much yet, but I guess in principle, creating such bridge at the RMW level should be possible, right?

Has anyone tried that? Is it achievable to create something that is “RMW-agnostic”, meaning one generic bridge for any pair (or n-tuple) of RMWs to connect?

Of course, such a solution would hinder performance (all messages would have to be brokered by the bridge), but in our case we only have a uC feeding one IMU stream, odometry, some state and diagnostics, and receiving only cmd_vel and a few other commands. So performance should not be a problem, at least in these simpler cases.

6 posts - 5 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/rmw-rmw-bridge-is-it-possible-has-anyone-done-it/49564

ROS Discourse General: Bagel's New Release -- Cursor Integration

We are thrilled to announce a new integration for our open-source tool, Bagel! Two weeks ago, we presented Bagel at the ROS/PX4 meetup in Los Angeles, and the community’s excitement was incredible. As promised, we’ve integrated Bagel with the Cursor IDE to make robotics development even easier.

You can find the full tutorial here: bagel/doc/tutorials/mcp/2_cursor_px4.ipynb at stage · Extelligence-ai/bagel · GitHub

What is Bagel?

If you’re new to Bagel, it’s a tool that lets you chat with your rosbags using natural language queries. This allows you to quickly get insights from your log files without writing code. For example, you can ask questions like:

Bagel has currently been tested in:


How to Get Involved

Bagel is a community-driven project, and we’d love for you to be a part of it. Your contributions are what will make this tool truly great.

Here are a few ways you can help:

Many people have done so! The community found two bugs and filed two feature requests already!

Thank you for your support!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/bagels-new-release-cursor-integration/49562

ROS Discourse General: Localization of ROS 2 Documentation

Hello, Open Robotics Community,

I’m glad to announce that the :tada: ros2-docs-l10n :tada: project is published now:

The goal of this project is to translate the ROS 2 documentation into multiple languages. Translations are contributed via the Crowdin platform and automatically synchronized with the GitHub repository. Translations can be previewed on GitHub Pages.

13 posts - 4 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/localization-of-ros-2-documentation/49558

ROS Discourse General: Open-sourcing ROS 1 code from RUVU (AMCL reimplementation, planners, controllers and more)

Hey everyone,

As part of the acqui-hire of our startup RUVU, we’re open-sourcing a large portion of the ROS 1 code we’ve built over the years.

While it’s all written for ROS 1 so not immediately plug-and-play for ROS 2 users, we hope some of it might still be useful, inspirational, or a good starting point for your own projects.

Some highlights:

Everything is released under the MIT license, so feel free to fork, adapt, and use anything you find interesting.

We’re not planning on actively maintaining this code right now, but that could change if there’s enough community interest.

If you have questions, ideas, or want to discuss this code, you can reach me here or at my current role at Nobleo Technology.

— The (old) RUVU Team

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/open-sourcing-ros-1-code-from-ruvu-amcl-reimplementation-planners-controllers-and-more/49556

ROS Discourse General: Fixed Position Recording and Replay for AgileX PIPER Robotic Arm

We recently implemented a fixed position recording and replay function for the AgileX PIPER robotic arm using the official Python SDK. This feature allows recording specific arm positions and replaying them repeatedly, which is useful for teaching demonstrations and automated operations.

In this post, I will share the detailed setup steps, code implementation, usage instructions, and a demonstration video to help you get started.

Tags

Position recording, Python SDK, teaching demonstration, position reproduction, AgileX PIPER

Code Repository

GitHub link: https://github.com/agilexrobotics/Agilex-College.git

Function Demonstration

PIPER Robotic Arm | Fixed Position Recording & Replay Demo

Preparation Before Use

Hardware Preparation for PIPER Robotic Arm

Environment Configuration

sudo apt install git
sudo apt install python3-pip
sudo apt install can-utils ethtool
git clone -b 1_0_0_beta https://github.com/agilexrobotics/piper_sdk.git
cd piper_sdk
pip3 install .

Operation Steps for Fixed Position Recording and Replay Function

  1. Power on the robotic arm and connect the USB-to-CAN module to the computer (ensure that only one CAN module is connected).
  2. Open a terminal and activate the CAN module:

sudo ip link set can0 up type can bitrate 1000000

  3. Clone the remote code repository:

git clone https://github.com/agilexrobotics/Agilex-College.git

  4. Switch to the recordAndPlayPos directory:

cd Agilex-College/piper/recordAndPlayPos/

  5. Run the recording program:

python3 recordPos_en.py

  6. Short-press the teach button to enter the teaching mode.

  7. Move the robotic arm to the desired position, press Enter in the terminal to record it, and enter ‘q’ to end the recording.

  8. After recording, short-press the teach button again to exit the teaching mode.

  9. Notes before replay: when exiting the teaching mode for the first time, a specific initialization process is required to switch from the teaching mode to CAN mode. The replay program therefore automatically performs a reset operation that returns joints 2, 3, and 5 to safe positions (zero points), preventing the robotic arm from suddenly falling under gravity and causing damage. In special cases, manual assistance may be required to return joints 2, 3, and 5 to their zero points.
  10. Run the replay program:

python3 playPos_en.py

  11. After enabling succeeds, press Enter in the terminal to play the positions.

Problems and Solutions

Problem 1: There is no Piper class.

Reason: The currently installed SDK is not the version with API.

Solution: Execute pip3 uninstall piper_sdk to uninstall the current SDK, then install the 1_0_0_beta version of the SDK as described in the Environment Configuration section above.

Problem 2: The robotic arm does not move, and the terminal reports an error.

Reason: The teach button was short-pressed during the operation of the program.

Solution: Check whether the indicator light of the teach button is off. If yes, re-run the program; if not, short-press the teach button to exit the teaching mode first and then run the program.

Code/Principle and Parameter Description

Implementation of Position Recording Program

The position recording program is the data collection module of the system, which is responsible for capturing the joint position information of the robotic arm in the teaching mode.

Program Initialization and Configuration

Parameter Configuration Design

#  Whether there is a gripper
have_gripper = True
# Timeout for teaching mode detection, unit: second
timeout = 10.0
# CSV file path for saving positions
CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")

Analysis of configuration parameters:

The have_gripper parameter is a boolean; True means a gripper is attached.

The timeout parameter sets the timeout for teaching-mode detection: if the teaching mode is not entered within 10 s of starting the program, the program exits.

The CSV_path parameter sets the save path of the trajectory file, which defaults to the program's own directory with the file name pos.csv.
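Each line of pos.csv is one comma-separated position: six joint values in radians plus, when have_gripper is True, the gripper opening distance as a seventh column. A round-trip sketch of the format (the values are illustrative):

```python
import csv, io

# One recorded position: six joint radians plus the gripper opening
pos = (0.0, 0.1745, -0.5236, 0.0, 0.3491, 0.0, 0.05)

buf = io.StringIO()
buf.write(",".join(map(str, pos)) + "\n")   # same format the recorder uses

buf.seek(0)
rows = [[float(v) for v in row] for row in csv.reader(buf)]
```

The same comprehension used here to parse a row back into floats is what the replay program applies to the whole file.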

Robotic Arm Connection and Initialization

# Initialize and connect the robotic arm
piper = Piper("can0")
interface = piper.init()
piper.connect()
time.sleep(0.1)

Analysis of connection mechanism:

Piper() is the core class of the API; it wraps the interface and simplifies several common methods.

init() creates and returns an interface instance, which can be used to call some of Piper's more specialized methods.

connect() starts a thread that connects to the CAN port and processes CAN data.

time.sleep(0.1) is added to ensure that the connection is fully established. In embedded systems, hardware initialization usually takes a certain amount of time, and this short delay ensures the reliability of subsequent operations.

Position Acquisition and Data Storage

Implementation of Position Acquisition Function

def get_pos():
    '''Get the current joint radians of the robotic arm and the gripper opening distance'''
    joint_state = piper.get_joint_states()[0]
    if have_gripper:
        return joint_state + (piper.get_gripper_states()[0][0], )
    return joint_state

Mode Detection and Switching

print("INFO: Please click the teach button to enter the teaching mode")
over_time = time.time() + timeout
while interface.GetArmStatus().arm_status.ctrl_mode != 2:
    if over_time < time.time():
        print("ERROR: Teaching mode detection timeout, please check whether the teaching mode is enabled")
        exit()
    time.sleep(0.01)
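This loop is a poll-until-ready pattern with a deadline. Factored into a generic helper it looks like the sketch below (wait_until is an illustrative name, not part of the SDK):

```python
import time

def wait_until(check, timeout_s, poll_s=0.01):
    """Poll check() until it returns True or timeout_s elapses.
    Returns True on success, False on timeout."""
    deadline = time.monotonic() + timeout_s
    while not check():
        if time.monotonic() > deadline:
            return False
        time.sleep(poll_s)
    return True

# e.g. wait up to 10 s for ctrl_mode == 2 (teaching mode):
# ok = wait_until(lambda: interface.GetArmStatus().arm_status.ctrl_mode == 2, 10.0)
```

Using time.monotonic() rather than time.time() keeps the deadline immune to wall-clock adjustments.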

Status polling strategy
The program uses polling to detect the control mode, and this method has the following characteristics:

Timeout protection mechanism:
The 10-second timeout setting takes into account the needs of actual operations:

Safety features of teaching mode:

Data Recording and Storage

count = 1
csv = open(CSV_path, "w")
while input("INPUT: Input q to exit, press Enter directly to record:") != "q":
    current_pos = get_pos()
    print(f"INFO: {count}th position, recorded position:  {current_pos}")
    csv.write(",".join(map(str, current_pos)) + "\n")
    count += 1
csv.close()
print("INFO: Recording ends, click the teach button again to exit the teaching mode")

Data integrity guarantee:
Each position is written to the file object as soon as it is recorded, and the file is closed cleanly at the end; adding an explicit flush after each write would further ensure that no data is lost if the program exits abnormally.
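To make the no-loss-on-crash property explicit, each row can be flushed to disk as soon as it is written. A sketch using a temporary file (the per-row flush and fsync are an addition for illustration, not something the recorder currently does):

```python
import os, tempfile

path = os.path.join(tempfile.mkdtemp(), "pos.csv")
f = open(path, "w")
for pos in [(0.0, 0.1, 0.2), (0.3, 0.4, 0.5)]:
    f.write(",".join(map(str, pos)) + "\n")
    f.flush()              # push the row out of Python's buffer immediately
    os.fsync(f.fileno())   # and onto disk, so it survives a crash
f.close()

with open(path) as g:
    lines = g.read().splitlines()
```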

Data Format Selection
Reasons for choosing CSV format for data storage:

Data column attributes:

Complete Code Implementation of Position Recording Program

#!/usr/bin/env python3
# -*-coding:utf8-*-
# Record positions
import os, time
from piper_sdk import *

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Timeout for teaching mode detection, unit: second
    timeout = 10.0
    # CSV file path for saving positions
    CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")
    # Initialize and connect the robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        '''Get the current joint radians of the robotic arm and the gripper opening distance'''
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state
    
    print("INFO: Please click the teach button to enter the teaching mode")
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 2:
        if over_time < time.time():
            print("ERROR:Teaching mode detection timeout, please check whether the teaching mode is enabled")
            exit()
        time.sleep(0.01)

    count = 1
    csv = open(CSV_path, "w")
    while input("INPUT: Enter q to exit, press Enter directly to record:  ") != "q":
        current_pos = get_pos()
        print(f"INFO:  {count}th position, recorded position: {current_pos}")
        csv.write(",".join(map(str, current_pos)) + "\n")
        count += 1
    csv.close()
    print("INFO: Recording ends, click the teach button again to exit the teaching mode")

Implementation of Position Replay Program

The position replay program is the execution module of the system, responsible for reading the recorded position data and controlling the robotic arm to reproduce these positions.

Parameter Configuration and Data Loading

Replay Parameter Configuration

# Number of replays, 0 means infinite loop
play_times = 1
# replay interval, unit: second, negative value means manual key control
play_interval = 0
# Movement speed percentage, recommended range: 10-100
move_spd_rate_ctrl = 100

Analysis of parameter design:

The play_times parameter supports three replay modes:

The negative-value design of play_interval is an ingenious user-interface choice:

The move_spd_rate_ctrl parameter provides a speed-control function, which is important for different application scenarios:
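The semantics of play_times and play_interval used by the replay loop can be captured in two small predicates (a sketch; the function names are illustrative):

```python
def keep_playing(play_times, completed):
    """play_times == 0 means loop forever; otherwise stop after |play_times| passes."""
    return play_times == 0 or abs(play_times) != completed

def pacing(play_interval):
    """A negative interval means wait for a key press between positions."""
    return "manual" if play_interval < 0 else f"sleep {play_interval}s"
```

keep_playing mirrors the replay loop's `while play_times == 0 or abs(play_times) != count` condition, and pacing mirrors its `if play_interval < 0` branch.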

Data File Reading

try:
    with open(CSV_path, 'r', encoding='utf-8') as f:
        track = list(csv.reader(f))
        if not track:
            print("ERROR: The position file is empty")
            exit()
        track = [[float(j) for j in i] for i in track]    # Convert to a list of floating-point numbers
except FileNotFoundError:
    print("ERROR: The position file does not exist")
    exit()

Exception handling strategies:

Data type conversion:
In the process of converting string data to floating-point numbers, the program uses list comprehensions.
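The conversion itself is a nested list comprehension over csv.reader output. A slightly more defensive variant (illustrative, not the program's actual code) would also skip blank or non-numeric rows instead of crashing on them:

```python
import csv, io

raw = "0.0,0.1,0.2\n\nbad,row,x\n0.3,0.4,0.5\n"
track = []
for row in csv.reader(io.StringIO(raw)):
    if not row:
        continue      # csv.reader yields [] for blank lines
    try:
        track.append([float(v) for v in row])
    except ValueError:
        pass          # skip rows that are not numeric
```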

Safety Stop Function

def stop():
    '''Stop the robotic arm; when exiting the teaching mode for the first time, this function must be called first to control the robotic arm in CAN mode'''
    interface.EmergencyStop(0x01)
    time.sleep(1.0)
    limit_angle = [0.1745, 0.7854, 0.2094]  # The robotic arm can be restored only when the angles of joints 2, 3, and 5 are within the limit range to prevent damage caused by falling from a large angle
    pos = get_pos()
    while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
        time.sleep(0.01)
        pos = get_pos()
    # Restore the robotic arm
    piper.disable_arm()
    time.sleep(1.0)

Staged stop strategy:
The stop function adopts a staged safety stop strategy:

  1. Emergency stop stage: EmergencyStop(0x01) sends an emergency stop command to immediately stop all joint movements (joints with impedance)
  2. Safe position waiting: Wait for key joints (joints 2, 3, and 5) to move within the safe range
  3. System recovery stage: Send a recovery command to reactivate the control system

Safety range design:
The program pays special attention to the positions of joints 2, 3, and 5, which is based on the mechanical structure characteristics of the PIPER robotic arm:

The setting of the safe angle range (10°, 45°, 12°) is based on the following considerations:

Real-time monitoring mechanism: The program uses real-time polling to monitor the joint positions to ensure that the next step is performed only when the safety conditions are met.
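The wait condition in stop() can be expressed as a standalone predicate (a sketch mirroring the limits above; the function name is illustrative):

```python
def joints_in_safe_range(pos):
    """True when joints 2, 3 and 5 (indices 1, 2, 4, in radians) are inside
    the limits the replay program waits for before re-enabling the arm."""
    j2_j3_limit = 0.1745                  # ~10 deg for joints 2 and 3
    j5_upper, j5_lower = 0.7854, 0.2094   # ~45 deg and ~12 deg for joint 5
    return (abs(pos[1]) < j2_j3_limit
            and abs(pos[2]) < j2_j3_limit
            and j5_lower < pos[4] < j5_upper)
```

Note that joint 5 must sit *between* the two limits, matching the `pos[4] < limit` and `pos[4] > limit` pair in the original while condition.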

System Enable Function

def enable():
    '''Enable the robotic arm and gripper'''
    while not piper.enable_arm():
        time.sleep(0.01)
    if have_gripper:
        time.sleep(0.01)
        piper.enable_gripper()
    interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
    print("INFO: Enable successful")

Robotic arm enabling: enable_arm()

Gripper enabling: enable_gripper()

Control mode setting:
ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00). Control mode parameters:

Replay Control Logic

count = 0
input("step 2: Press Enter to start playing positions")
while play_times == 0 or abs(play_times) != count:
    for n, pos in enumerate(track):
        while True:
            piper.move_j(pos[:-1], move_spd_rate_ctrl)
            time.sleep(0.01)
            current_pos = get_pos()
            print(f"INFO: {count + 1}th playback, {n + 1}th position, current position: {current_pos}, target position: {pos}")
            if all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)):
                break
        if have_gripper and len(pos) == 7:
            piper.move_gripper(pos[-1], 1)
            time.sleep(0.5)
        if play_interval < 0:
            if n != len(track) - 1 and input("Enter q to exit, press Enter directly to play:  ") == 'q':
                exit()
        else:
            time.sleep(play_interval)
    count += 1

Joint control: move_j()

Gripper control: move_gripper()

Position control closed-loop system:

  1. Target setting: Send target position commands to each joint through the move_j() function
  2. Status feedback: Obtain the current actual position through the get_pos() function
  3. Error calculation: Compare the difference between the target position and the actual position
  4. Convergence judgment: Consider reaching the target when the error is less than the threshold

Multi-joint coordinated control:
all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)) ensures that the next step is performed only after all six joints are within about 4° (0.0698 rad) of their targets.
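As a standalone predicate the convergence judgment looks like this (a sketch; the name is illustrative, and a trailing gripper column, if present, is ignored because only the first six indices are checked):

```python
def reached(current, target, tol=0.0698):   # ~4 degrees per joint
    """True when all six joints are within tol radians of the target."""
    return all(abs(current[i] - target[i]) < tol for i in range(6))
```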

Gripper control strategy:
The gripper control adopts an independent control logic:

Replay rhythm control:
The program supports three replay rhythms:

Complete Code Implementation of Position Replay Program

#!/usr/bin/env python3
# -*-coding:utf8-*-
# Play positions
import os, time, csv
from piper_sdk import Piper

if __name__ == "__main__":
    # Whether there is a gripper
    have_gripper = True
    # Number of playbacks, 0 means infinite loop
    play_times = 1
    # Playback interval, unit: second; negative value means manual key control
    play_interval = 0
    # Movement speed percentage, recommended range: 10-100
    move_spd_rate_ctrl = 100
    # Timeout for switching to CAN mode, unit: second
    timeout = 5.0
    # CSV file path for saving positions
    CSV_path = os.path.join(os.path.dirname(__file__), "pos.csv")
    # Read the position file
    try:
        with open(CSV_path, 'r', encoding='utf-8') as f:
            track = list(csv.reader(f))
            if not track:
                print("ERROR: Position file is empty")
                exit()
            track = [[float(j) for j in i] for i in track]    # Convert to a list of floating-point numbers
    except FileNotFoundError:
        print("ERROR: Position file does not exist")
        exit()

    # Initialize and connect the robotic arm
    piper = Piper("can0")
    interface = piper.init()
    piper.connect()
    time.sleep(0.1)

    def get_pos():
        '''Get the current joint radians of the robotic arm and the gripper opening distance'''
        joint_state = piper.get_joint_states()[0]
        if have_gripper:
            return joint_state + (piper.get_gripper_states()[0][0], )
        return joint_state    

    def stop():
        '''Stop the robotic arm; this function must be called first when exiting the teaching mode for the first time to control the robotic arm in CAN mode'''
        interface.EmergencyStop(0x01)
        time.sleep(1.0)
        limit_angle = [0.1745, 0.7854, 0.2094]  # The robotic arm can be restored only when the radians of joints 2, 3, and 5 are within the limit range to prevent damage caused by falling from a large radian
        pos = get_pos()
        while not (abs(pos[1]) < limit_angle[0] and abs(pos[2]) < limit_angle[0] and pos[4] < limit_angle[1] and pos[4] > limit_angle[2]):
            time.sleep(0.01)
            pos = get_pos()
        # Restore the robotic arm
        piper.disable_arm()
        time.sleep(1.0)
    
    def enable():
        '''Enable the robotic arm and gripper'''
        while not piper.enable_arm():
            time.sleep(0.01)
        if have_gripper:
            time.sleep(0.01)
            piper.enable_gripper()
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        print("INFO: Enable successful")

    print("step 1:  Please ensure the robotic arm has exited the teaching mode before playback")
    if interface.GetArmStatus().arm_status.ctrl_mode != 1:
        stop()  # This function must be called first when exiting the teaching mode for the first time to switch to CAN mode
    over_time = time.time() + timeout
    while interface.GetArmStatus().arm_status.ctrl_mode != 1:
        if over_time < time.time():
            print("ERROR: Failed to switch to CAN mode, please check if the teaching mode is exited")
            exit()
        interface.ModeCtrl(0x01, 0x01, move_spd_rate_ctrl, 0x00)
        time.sleep(0.01)
    
    enable()
    count = 0
    input("step 2: Press Enter to start playing positions")
    while play_times == 0 or abs(play_times) != count:
        for n, pos in enumerate(track):
            while True:
                piper.move_j(pos[:-1], move_spd_rate_ctrl)
                time.sleep(0.01)
                current_pos = get_pos()
                print(f"INFO: {count + 1}th playback, {n + 1}th position, current position: {current_pos}, target position: {pos}")
                if all(abs(current_pos[i] - pos[i]) < 0.0698 for i in range(6)):
                    break
            if have_gripper and len(pos) == 7:
                piper.move_gripper(pos[-1], 1)
                time.sleep(0.5)
            if play_interval < 0:
                if n != len(track) - 1 and input("INPUT: Enter 'q' to exit, press Enter directly to play:  ") == 'q':
                    exit()
            else:
                time.sleep(play_interval)
        count += 1

Summary

The above implements the fixed position recording and replay function based on the AgileX PIPER robotic arm. By applying the Python SDK, it is possible to record and repeatedly execute specific positions of the robotic arm, providing strong technical support for teaching demonstrations and automated operations.

If you have any questions regarding the use, please feel free to contact us at support@agilex.ai.

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/fixed-position-recording-and-replay-for-agilex-piper-robotic-arm/49533

ROS Discourse General: ROS Kerala | Building a Robotics Career in the USA | Robotics Talk | Jerin Peter | Lentin Joseph

:studio_microphone: ROS Kerala Presents: Robotic Talk Series Topic: Building a Robotics Career in the US – Myths, Challenges & Reality

Join Jerin Peter (Graduate Student – Robotics, UC Riverside) and Lentin Joseph (Senior ROS & AI Consultant, CTO & Co-Founder – RUNTIME Robotics) as they share real-world insights on launching and growing a career in robotics in the United States.

From higher education choices and visa hurdles to mastering ROS and cracking robotics interviews, this talk covers it all. Whether you’re a student, a robotics enthusiast, or a professional looking to go abroad, you’ll find valuable tips and lessons here.

Building a Robotics Career in the USA | Robotics Talk | Jerin Peter | Lentin Joseph

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-kerala-building-a-robotics-career-in-the-usa-robotics-talk-jerin-peter-lentin-joseph/49507

ROS Discourse General: Native rcl::tensor type

We propose introducing the concept of a tensor as a natively supported type in ROS 2 Lyrical Luth. Below is a sketch of how this would work for initial feedback before we write a proper REP for review.

Abstract

Tensors are a fundamental data structure often used to represent multi-modal information for the deep neural networks (DNNs) at the core of policy-driven robots. We introduce rcl::tensor as a native type in rcl: a container for memory that can optionally be externally managed. This type would be supported through all client libraries (rclcpp, rclpy, …), the ROS IDL (rosidl), and all RMW implementations. This enables tensor_msgs ROS messages, modeled on sensor_msgs, which use tensor instead of uint8[]. The default implementation of rcl::tensor operations for creation/destruction and manipulation will be available on all tiers of supported platforms. With the presence of an optional package and an environment variable, a platform-optimized implementation of the rcl::tensor operations can then be swapped in at runtime to take advantage of accelerator-managed memory/compute. Through adoption of rcl::tensor in developer code and ROS messages, we can enable seamless platform-specific acceleration determined at runtime without any recompilation or redeployment.

Motivation

ROS 2 should be accelerator-aware but accelerator-agnostic like other popular frameworks such as PyTorch or NumPy. This enables package developers that conform to ROS 2 standards to gain platform-specific optimizations for free (“optimal where possible, compatible where necessary”).

Background

AI robots and policy-driven physical agents rely on accelerated deep neural network (DNN) model inference through tensors. Tensors are a fundamental data structure to represent multi-dimensional data from scalar (rank 0), vectors (rank 1), and matrices (rank 2) to batches of multi-channel matrices (rank 4). These can be used to encode all data flowing through such graphs including images, text, joint positions, poses, trajectories, IMU readings, and more.

Performing inference with these DNN policies requires the tensors to reside in accelerator memory. ROS messages, however, expect their payloads to reside in main memory, with field types such as uint8[] or multi-dimensional arrays. Payloads must therefore be copied from main memory to accelerator memory and then copied back to main memory after processing in order to populate a new ROS message to publish. This quickly becomes the primary bottleneck for policy inference. Type adaptation in rclcpp provides a partial solution, but it requires all participating packages to take accelerator-specific dependencies and only applies within the client library, so RMW implementations, for example, cannot use accelerator-optimized memory.

Additionally, without a canonical tensor type in ROS 2, a patchwork of different tensor libraries across various ROS packages is causing impedance mismatches with popular deep learning frameworks including PyTorch.

Requirements

Rough Sketch

struct rcl::tensor
{
    std::vector<size_t> shape;   // shape of the tensor
    std::vector<size_t> strides; // strides of the tensor
    size_t rank;                 // number of dimensions

    union {
        void* data;    // pointer to the data in main memory
        size_t handle; // token stored by rcl::tensor for externally managed memory
    };
    size_t byte_size;  // size of the data in bytes

    data_type_enum type; // the data type
};
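The shape/strides fields above follow the usual dense-layout convention: for a contiguous row-major tensor, the strides derive directly from the shape. A sketch of that relationship (strides in elements; the helper name is illustrative):

```python
def contiguous_strides(shape):
    """Row-major strides in elements: the last dimension varies fastest."""
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

# A rank-4 NCHW batch of two 3-channel 4x5 images:
shape = [2, 3, 4, 5]
```

Multiplying each stride by the element size gives byte strides, which is how byte_size relates to shape for a contiguous tensor.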

Core Tensor APIs

Inline APIs available on all platforms in core ROS 2 rcl.

Creation

Create a new tensor from main memory.

Common operations

Manipulations performed on tensors that can optionally be accelerated. The more complete these APIs are, the less fragmented the ecosystem will be, but the higher the burden on implementers. These should be modeled after the PyTorch tensor API and existing C tensor libraries such as libXM, or C++ libraries like xtensor.

Managed access

Provide a way to access elements individually in parallel.

Direct access

Retrieve the underlying data in main memory but may involve movement of data.

Other Conveniences

  1. rcl functions to check which tensor implementation is active.
  2. tensor_msgs::Image to mirror sensor_msgs::Image to enable smooth migration to using tensor type in common ROS messages. Alternative is to add a “union” field in sensor_msgs::Image with the uint8[] data field.
  3. cv_bridge API to convert between cv::Mat and tensor_msgs::Image.

Platform-specific tensor implementation

Without loss of generality, suppose we have an implementation of tensor that uses an accelerated library, such as rcl_tensor_cuda for CUDA. This package provides shared libraries that implement all of the core tensor APIs. Setting the environment variable RCL_TENSOR_IMPLEMENTATION=rcl_tensor_cuda enables loading rcl_tensor_cuda at runtime without rebuilding any other packages. Unlike the native implementation, rcl_tensor_cuda copies the input buffer into a CUDA buffer and uses CUDA to perform operations on that buffer.
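The runtime swap can be pictured as environment-driven dispatch: resolve RCL_TENSOR_IMPLEMENTATION, and fall back to the built-in CPU path when no implementation is loaded. A pure-Python sketch (the registry, the normalize operation, and all names are illustrative; a real implementation would load shared libraries):

```python
import os

def default_normalize(t):
    """Portable CPU fallback shipped with core rcl (illustrative)."""
    lo, hi = min(t), max(t)
    return [(x - lo) / (hi - lo) for x in t]

# Registry standing in for dynamically loadable implementation packages;
# None models a package that is named but not actually installed/loaded.
IMPLEMENTATIONS = {"rcl_tensor_cuda": None}

def load_tensor_impl():
    """Pick the implementation named by the environment, else the default."""
    name = os.environ.get("RCL_TENSOR_IMPLEMENTATION", "")
    impl = IMPLEMENTATIONS.get(name)
    return impl if impl is not None else default_normalize

normalize = load_tensor_impl()   # falls back to the CPU path here
```

The key property this models is that application code calls the same function regardless of which implementation ends up behind it.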

It also provides new APIs for creating a tensor from a CUDA buffer, for checking whether the rcl_tensor_cuda implementation is active, and for accessing the CUDA buffer behind a tensor, available to any other packages that link against rcl_tensor_cuda directly. An RMW implementation linked against rcl_tensor_cuda would query the CUDA buffer backing a tensor and use optimized transport paths to handle it, while a general RMW implementation could simply call rcl_tensor_materialize_bytes and transport the main-memory payload as normal.

Simple Examples

Example #1: rcl::tensor with “accelerator-aware” subscriber

Node A publishes a ROS message containing an rcl::tensor created from main-memory bytes and sends it to a topic Node B subscribes to. Node B happens to be written to first check whether the rcl::tensor is backed by externally managed memory and whether rcl_tensor_cuda is active (which indicates the backing memory is CUDA). Node B has a direct dependency on rcl_tensor_cuda in order to perform this check.

Alternatively, Node B could have also been written with no dependency on any rcl::tensor implementation to simply retrieve the bytes from the rcl::tensor and ignore the externally managed memory flag altogether, which would have forced a copy back from accelerator memory in Scenario 2.

MyMsg.msg
—--------
std_msgs/Header header
tensor payload

Scenario 1: RCL_TENSOR_IMPLEMENTATION = <none>
----------------------------------------------

┌─────────────────┐    ROS Message    ┌─────────────────┐
│   Node A        │ ────────────────► │   Node B        │
│                 │                   │                 │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Create Tensor│ │                   │ │Receive MyMsg│ │
│ │in MyMsg     │ │                   │ │             │ │
│ └─────────────┘ │                   │ └─────────────┘ │
│         │       │                   │         │       │
│         ▼       │                   │         ▼       │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Publish      │ │                   │ │Check if     │ │
│ │MyMsg        │ │                   │ │Externally   │ │
│ └─────────────┘ │                   │ │Managed      │ │
└─────────────────┘                   │ └─────────────┘ │
                                      │         │       │
                                      │         ▼       │
                                      │ ┌─────────────┐ │
                                      │ │Copy         │ │
                                      │ │to Accel Mem │ │
                                      │ └─────────────┘ │
                                      │         │       │
                                      │         ▼       │
                                      │ ┌─────────────┐ │
                                      │ │Process on   │ │
                                      │ │Accelerator  │ │
                                      │ └─────────────┘ │
                                      └─────────────────┘

Scenario 2: RCL_TENSOR_IMPLEMENTATION = rcl_tensor_cuda
--------------------------------------------------------

┌─────────────────┐    ROS Message    ┌─────────────────┐
│   Node A        │ ────────────────► │   Node B        │
│                 │                   │                 │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Create Tensor│ │                   │ │Receive MyMsg│ │
│ │in MyMsg     │ │                   │ │             │ │
│ └─────────────┘ │                   │ └─────────────┘ │
│         │       │                   │         │       │
│         ▼       │                   │         ▼       │
│ ┌─────────────┐ │                   │ ┌─────────────┐ │
│ │Publish MyMsg│ │                   │ │Check if     │ │
│ └─────────────┘ │                   │ │Externally   │ │
└─────────────────┘                   │ │Managed      │ │
                                      │ └─────────────┘ │
                                      │         │       │
                                      │         ▼       │
                                      │ ┌─────────────┐ │
                                      │ │Process on   │ │
                                      │ │Accelerator  │ │
                                      │ └─────────────┘ │
                                      └─────────────────┘

In Scenario 2, the same tensor function call in Node A creates a tensor backed by accelerator memory instead. This allows Node B, which checks for an rcl_tensor_cuda-managed tensor, to skip the extra copy.

Example #2: CPU versus accelerated implementations

SCENARIO 1: RCL_TENSOR_IMPLEMENTATION = <none> (CPU/Main Memory Path)
========================================================================

┌─────────────────────────────────────────────────────────────────────────────┐
│                              CPU/Main Memory Path                           │
└─────────────────────────────────────────────────────────────────────────────┘

┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Create    │    │  Normalize  │    │   Reshape   │    │ Materialize │
│   Tensor    │───▶│  Operation  │───▶│  Operation  │───▶│    Bytes    │
│  [CPU Mem]  │    │   [CPU]     │    │   [CPU]     │    │  [CPU Mem]  │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
        │                   │                   │                   │
        ▼                   ▼                   ▼                   ▼
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Allocate    │    │ CPU-based   │    │ CPU-based   │    │ Return      │
│ main memory │    │ normalize   │    │ reshape     │    │ pointer to  │
│ for tensor  │    │ computation │    │ computation │    │ byte array  │
│ data        │    │ on CPU      │    │ on CPU      │    │ in main mem │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘

Memory Layout:
┌─────────────────────────────────────────────────────────────────────────────┐
│                              Main Memory                                    │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐         │
│  │   Tensor    │  │  Normalized │  │  Reshaped   │  │ Materialized│         │
│  │   Data      │  │   Tensor    │  │   Tensor    │  │    Bytes    │         │
│  │  [CPU]      │  │   [CPU]     │  │   [CPU]     │  │   [CPU]     │         │
│  └─────────────┘  └─────────────┘  └─────────────┘  └─────────────┘         │
└─────────────────────────────────────────────────────────────────────────────┘

SCENARIO 2: RCL_TENSOR_IMPLEMENTATION = rcl_tensor_cuda (GPU/CUDA Path)
=======================================================================

┌─────────────────────────────────────────────────────────────────────────────┐
│                              GPU/CUDA Path                                  │
└─────────────────────────────────────────────────────────────────────────────┘

┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│   Create    │    │  Normalize  │    │   Reshape   │    │ Materialize │
│   Tensor    │───▶│  Operation  │───▶│  Operation  │───▶│    Bytes    │
│  [GPU Mem]  │    │   [CUDA]    │    │   [CUDA]    │    │  [CPU Mem]  │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
        │                   │                   │                   │
        ▼                   ▼                   ▼                   ▼
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Allocate    │    │ CUDA kernel │    │ CUDA kernel │    │ Copy from   │
│ GPU memory  │    │ normalize   │    │ reshape     │    │ GPU to CPU  │
│ for tensor  │    │ computation │    │ computation │    │ memory      │
│ data        │    │ on GPU      │    │ on GPU      │    │ (cudaMemcpy)│
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘

Memory Layout:
┌─────────────────────────────────────────────────────────────────────────────┐
│                              GPU Memory                                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐                          │
│  │   Tensor    │  │  Normalized │  │  Reshaped   │                          │
│  │   Data      │  │   Tensor    │  │   Tensor    │                          │
│  │  [GPU]      │  │   [GPU]     │  │   [GPU]     │                          │
│  └─────────────┘  └─────────────┘  └─────────────┘                          │
└─────────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                              Main Memory                                    │
│                                                                             │
│                                                                             │
│  ┌─────────────┐                                                            │
│  │ Materialized│                                                            │
│  │    Bytes    │                                                            │
│  │   [CPU]     │                                                            │
│  └─────────────┘                                                            │
└─────────────────────────────────────────────────────────────────────────────┘

IMPLEMENTATION NOTES
====================

• Environment variable RCL_TENSOR_IMPLEMENTATION controls which path is taken
• Same API calls work in both scenarios (transparent to user code)
• GPU path requires CUDA runtime and rcl_tensor_cuda package
• Memory management handled automatically by implementation
• Backward compatibility maintained for CPU-only systems
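The key property of the GPU path above is that only the final materialize step crosses back to host memory. A minimal sketch of that pipeline, with "device" memory simulated by a plain Python list (all names hypothetical), might look like:

```python
# Hypothetical sketch of the create -> normalize -> reshape -> materialize
# pipeline in the diagrams above; device memory is simulated with a list.
class FakeDeviceTensor:
    def __init__(self, data):
        self.device_data = list(data)   # stand-in for GPU memory
        self.copies_to_host = 0

    def normalize(self):
        # runs "on device": no host copy happens here
        total = sum(self.device_data) or 1.0
        self.device_data = [x / total for x in self.device_data]
        return self

    def reshape(self, rows, cols):
        # reshape is metadata-only here; data stays on device
        assert rows * cols == len(self.device_data)
        self.shape = (rows, cols)
        return self

    def materialize(self):
        # only this step crosses to host memory
        # (the cudaMemcpy box in the diagram)
        self.copies_to_host += 1
        return bytes(int(x * 255) for x in self.device_data)

t = FakeDeviceTensor([1.0, 1.0, 1.0, 1.0]).normalize().reshape(2, 2)
host_bytes = t.materialize()
assert t.copies_to_host == 1   # exactly one device-to-host copy
assert len(host_bytes) == 4
```

In the CPU scenario the same chain of calls would run entirely in main memory, with materialize returning a pointer rather than triggering a copy.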

Discussion Questions

  1. Should we constrain tensor creation functions to use memory allocators instead? rcl::tensor implementations would need to provide custom memory allocators for externally managed memory, for example.

  2. Do we allow CPU-backed and externally memory-managed tensors to mix within one runtime? What creation pattern would allow precompiled packages to “pick up” accelerated memory dynamically at runtime by default, while also explicitly opting out of it for specific tensors?

  3. Do we need to expose the concept of “streams” and “devices” through the rcl::tensor API, or can that be kept under the abstraction layer? They are generic concepts but may too strongly constrain the underlying implementation. However, exposing them would let developers express stronger intent about how they want their code to be executed in an accelerator-agnostic manner.

  4. What common tensor operations should we support? The more we choose, the higher the burden on the rcl::tensor implementations, but the more standardized and less fragmented our ROS 2 developer base becomes. For example, we do not want fragmentation where packages begin to depend on rcl_tensor_cuda and thus fall back to CPU-only under rcl_tensor_opencl (wlog).

  5. Should tensors have a multi-block interface from the get-go? Assuming a single memory address seems problematic for rank 4 tensors, for example (e.g., sets of images from multiple cameras).

  6. Should the ROS 2 canonical implementation of rcl::tensor be inline or based on an existing, open source library? If so, which one?
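To make question 5 concrete, here is a sketch of a "multi-block" tensor whose per-camera blocks live at separate addresses, so no single base pointer describes the whole rank-4 tensor. All names here are hypothetical, not part of any proposed API:

```python
# Sketch of question 5's multi-block idea: a rank-4 tensor
# (cameras x H x W x C) whose per-camera blocks are separate buffers.
# All names are hypothetical illustrations.
class MultiBlockTensor:
    def __init__(self, blocks, block_shape):
        self.blocks = blocks            # one buffer per camera
        self.block_shape = block_shape  # (H, W, C)

    @property
    def shape(self):
        return (len(self.blocks), *self.block_shape)

    def block_ptr(self, i):
        # A single base address cannot describe this tensor;
        # callers must ask for memory per block.
        return self.blocks[i]

cams = [bytearray(4 * 4 * 3) for _ in range(2)]   # two 4x4 RGB images
t = MultiBlockTensor(cams, (4, 4, 3))
assert t.shape == (2, 4, 4, 3)
assert t.block_ptr(0) is not t.block_ptr(1)   # distinct, non-contiguous blocks
```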

Summary

8 posts - 7 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/native-rcl-tensor-type/49497

ROS Discourse General: ROS 2 Performance Benchmark - Code Release

In our ROS 2 Performance Benchmark tests, we had interesting findings demonstrating potential bottlenecks for message transport in ROS 2 (Rolling). Now we’re excited to release the code, which can be used to reproduce our results. Check it out here!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-performance-benchmark-code-release/49495

ROS Discourse General: ROS 2 Rust Meeting: August 2025

The next ROS 2 Rust Meeting will be on Mon, Aug 11, 2025, at 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

With the recent announcement about OSRF funding for adding Cargo dependency management to the buildfarm, and a few people having questions on that, I would like to reiterate that this meeting is open to everyone - working group member or not. If you want to learn what we’re trying to accomplish, please drop by! We’d love to have you!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-rust-meeting-august-2025/49487

ROS Discourse General: ROS 2 Cross-compilation / Multi architecture development

Hi,

I’m in the process of looking into migrating our indoor service robot from an amd64 based system to the Jetson Orin Nano.

How are you doing development when targeting aarch64/arm64 machines?

My development machine is not the newest, but it is reasonably powerful (AMD Ryzen 9 3900X, 32 GB RAM). Still, it struggles with the officially recommended QEMU-based approach. Even the vanilla osrf/ros Docker image is choppy under emulation; building the actual image or stack, or running a simulated environment, is totally out of the question.

The different pathways I investigated so far are:

I’m interested in your approach to this problem. I imagine that using ARM-based systems in production robots is fairly common practice given the recent advances in this field.
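One common alternative to full-system QEMU emulation is cross-compiling natively on the host with a CMake toolchain file. A minimal aarch64 sketch might look like the following; the compiler names assume the standard `aarch64-linux-gnu` cross toolchain is installed, and paths will differ per setup:

```cmake
# aarch64-toolchain.cmake -- illustrative only; adjust for your toolchain
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_SYSTEM_PROCESSOR aarch64)

set(CMAKE_C_COMPILER aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)

# Search headers and libraries in the target sysroot, not on the host
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
```

This is only a sketch; a real ROS 2 cross build additionally needs a target sysroot containing the ROS dependencies, which is where much of the practical difficulty lies.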

7 posts - 6 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/ros-2-cross-compilation-multi-architecture-development/49449

ROS Discourse General: Why do robotics companies choose not to contribute to open source?

Hi all!

We wrote a blog post at Henki Robotics to share some of our thoughts on open-source collaboration, based on what we’ve seen and learned so far. We thought that it would be interesting for the community to hear and discuss the challenges open-source contributions pose from a company standpoint, while also highlighting the benefits of doing so and encouraging more companies to collaborate together.

We’d be happy to hear your thoughts and if you’ve had similar experiences!

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/why-do-robotics-companies-choose-not-to-contribute-to-open-source/49448

ROS Discourse General: A Dockerfile and a systemd service for starting a rmw-zenoh server

While there’s no official method for autostarting an rmw-zenoh server yet, this might be useful:
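For readers who have not seen the linked post, a minimal sketch of such a systemd unit might look like the following; the ExecStart path is illustrative and depends on where your ROS distribution installs the rmw_zenoh router:

```ini
# /etc/systemd/system/rmw-zenoh-router.service -- illustrative sketch
[Unit]
Description=Zenoh router for rmw_zenoh
After=network-online.target
Wants=network-online.target

[Service]
# Path is hypothetical; point this at your rmw_zenoh router binary
ExecStart=/opt/ros/rolling/bin/rmw_zenohd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now rmw-zenoh-router.service` once the path is adjusted.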

4 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/a-dockerfile-and-a-systemd-service-for-starting-a-rmw-zenoh-server/49438

ROS Discourse General: How to Implement End-to-End Tracing in ROS 2 (Nav2) with OpenTelemetry for Pub/Sub Workflows?

I’m working on implementing end-to-end tracing for robotic behaviors using OpenTelemetry (OTel) in ROS 2. My goal is to trace:

  1. High-level requests (e.g., “move to location”) across components to analyze latency

  2. Control commands (e.g., teleop) through the entire pipeline to motors

Current Progress:

Challenges with Nav2:

Questions:

  1. Are there established patterns for OTel context propagation in ROS 2 pub/sub systems?

  2. How should we handle fan-out scenarios (1 publisher → N subscribers)?

  3. Any Nav2-specific considerations for tracing (e.g., lifecycle nodes, behavior trees)?

  4. Alternative approaches besides OTel that maintain compatibility with observability tools?
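For question 1, one established pattern is carrying a W3C `traceparent` header inside the message itself, which is what OpenTelemetry's propagators implement; fan-out then works naturally because each subscriber starts a child span sharing the same trace ID. This is not an official Nav2 or OTel recipe, just a stdlib-only sketch of the propagation pattern with a dict standing in for a message header field:

```python
# Stdlib-only sketch of W3C "traceparent" propagation over pub/sub,
# the pattern OpenTelemetry's propagators implement. The message layout
# and field names here are hypothetical.
import secrets

def start_trace():
    # 16-byte trace id, 8-byte span id, hex-encoded per the W3C format
    return secrets.token_hex(16), secrets.token_hex(8)

def inject(headers, trace_id, span_id):
    # Publisher side: attach the context to the outgoing message
    headers["traceparent"] = f"00-{trace_id}-{span_id}-01"

def extract(headers):
    # Subscriber side: recover the parent context. In a 1->N fan-out,
    # every subscriber extracts the same trace_id and starts its own
    # child span under the publisher's span.
    _, trace_id, parent_span, _ = headers["traceparent"].split("-")
    return trace_id, parent_span

trace_id, span_id = start_trace()
msg = {"data": "move to location", "headers": {}}
inject(msg["headers"], trace_id, span_id)

got_trace, got_parent = extract(msg["headers"])
assert got_trace == trace_id and got_parent == span_id
```

In real ROS 2 code the carrier would have to be an explicit field in the message definition (or a side channel), since ROS 2 messages have no generic metadata headers; that design choice is essentially what the question is asking about.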

2 posts - 2 participants

Read full topic

[WWW] https://discourse.openrobotics.org/t/how-to-implement-end-to-end-tracing-in-ros-2-nav2-with-opentelemetry-for-pub-sub-workflows/49418

ROS Discourse General: Space ROS Jazzy 2025.07.0 Release

Hello ROS community!

The Space ROS team is excited to announce Space ROS Jazzy 2025.07.0 was released last week and is available as osrf/space-ros:jazzy-2025.07.0 on DockerHub.

Release details

This release includes a significant refactor of our base image's build, making the main container over 60% smaller! Additionally, development images are now pushed to DockerHub to make building with Space ROS and an underlay easier than ever. For an exhaustive list of all the issues addressed and PRs merged, check out the GitHub Project Board for this release here.

Code

Current versions of all packages released with Space ROS are available at:

What’s Next

This release comes 3 months after the last release. The next release is planned for October 31, 2025. If you want to contribute to features, tests, demos, or documentation of Space ROS, get involved on the Space ROS GitHub issues and discussion board.

All the best,

The Space ROS Team

1 post - 1 participant

Read full topic

[WWW] https://discourse.openrobotics.org/t/space-ros-jazzy-2025-07-0-release/49417


2025-08-23 12:17