

Package Summary

This package is the main entry point for Active Scene Recognition (ASR). It contains helpers and resources, including the launch files to start ASR in simulation or on the real mobile robot. Moreover, it includes a customized rviz-configuration file for ASR, databases of recorded scenes and tools which ease the interaction with the ASR system.

Description

This package contains various helpers and resources for our Active Scene Recognition (ASR) system. The package includes the following submodules:

Launch files:

Functionality

This package contains bash scripts that start multiple other modules inside a tmux session. It also contains recorded scenes as sqlite databases and an rviz configuration file. In addition, it provides some Python scripts that ease the interaction with other modules.
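The scripts follow the common tmux pattern of starting each module in its own window. A minimal sketch of that pattern (session name, window names and the launch file are illustrative placeholders, not the exact commands used by the provided scripts):

  # Start a detached tmux session with a roscore in the first window.
  tmux new-session -d -s asr -n roscore 'roscore'
  # Start another module in its own window (launch file name is a placeholder).
  tmux new-window -t asr -n state_machine 'roslaunch asr_state_machine state_machine.launch'
  # Attach to the session to see all windows.
  tmux attach -t asr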

Usage

This module has two main functionalities:

  1. run scene recognition in simulation or on the real robot.
  2. record scenes.

This module also contains an rviz configuration file that includes all required visualization topics for Active Scene Recognition (formerly designated as scene exploration) and related applications. It is located in the rviz_configurations folder and named scene_exploration.rviz. The scene_recordings folder contains some recorded scenes as sqlite database files, which can be configured in sqlitedb.yaml in the asr_recognizer_prediction_ism and asr_ism packages. Depending on the shell script you are running, you will see different tmux windows; their usage and the corresponding modules are described here. If you are wondering what the tmux windows should look like after a successful launch, take a look at the images below.
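To get started with these resources, the following shell commands may help (a sketch; the package name asr_resources_for_active_scene_recognition is an assumption, adjust it to the name of this package in your workspace):

  # Open rviz with the provided ASR configuration.
  rosrun rviz rviz -d $(rospack find asr_resources_for_active_scene_recognition)/rviz_configurations/scene_exploration.rviz

  # List the recorded scene databases that sqlitedb.yaml can point to.
  ls $(rospack find asr_resources_for_active_scene_recognition)/scene_recordings/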

Screenshots of the tmux windows after a successful launch: roscore, ism, nbv, viz_server, state_machine, direct_search_manager, move_mock, ptu_mock, fake_recognizer, recognition_for_grasping, descriptor_surface, aruko, vision and kinect.

The control script (in the control tmux window) can be used to do almost anything the state_machine is capable of, but in a more manual way. The following list describes functions that can be used through the REPL interface or in other Python scripts:

You can use tab completion if you do not know the function names. The file personal_stuff.py is meant for your personal Python code.

Needed packages

There might be additional required packages that the ones listed above depend on.

Needed software

tmux

If you source the ROS environment in your ~/.bashrc, be aware that tmux reads only ~/.profile by default. You might therefore need to source your ROS environment in ~/.profile as well.
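For example (a sketch; replace the ROS distribution and workspace path with the ones you actually use):

  # Make tmux login shells see the ROS environment by sourcing it in ~/.profile.
  echo 'source /opt/ros/kinetic/setup.bash' >> ~/.profile
  echo 'source ~/catkin_ws/devel/setup.bash' >> ~/.profile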

recognition_manual_manager

Contains the manager.py script, which allows you to manually start and stop the detection of single objects. For the script to work, all required object recognizers need to be launched in advance (see above: you can use the start_recognizers script to do so).

To find the object types of a recognizer, you can query the object_type_list service of the asr_object_database, as shown in the example below; for more information see asr_object_database.
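For example:

  # List all object types known to the 'segmentable' recognizer.
  rosservice call /asr_object_database/object_type_list "recognizer: 'segmentable'"
  # The same service works for other recognizer names ('textured' is only an
  # illustration here; see asr_object_database for the available names).
  rosservice call /asr_object_database/object_type_list "recognizer: 'textured'"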

The following commands are supported by the script:

constellation_transformation_tool

Contains the transformation_tool.py script, which allows the transformation of a list of multiple object poses by a predefined rotation and translation. Both the original and the transformed object poses are visualized and printed to the console in a loop until the script is interrupted. The corresponding topic, named "transformation_tool", must be added in rviz. You can restrict the visualization to only the original or only the transformed poses via the namespaces "transformation_orig" and "transformation_new".

Note that currently each object and its pose need to be added directly to the code in the main function. The translation and rotation values also need to be set there before running the script.

In addition, a roscore needs to be up and running before you can launch the script.
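A typical invocation could look like this (a sketch; the package name and the script location within it are assumptions, adjust them to your checkout):

  # Start a roscore in the background (or in a separate terminal).
  roscore &
  # Run the transformation tool directly with Python (path assumed).
  python $(rospack find asr_resources_for_active_scene_recognition)/constellation_transformation_tool/transformation_tool.py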

Available object recognizers

There are currently two recognizers integrated into the system to detect objects:

ROS Nodes

Parameters

asr_next_best_view

asr_state_machine

asr_fake_object_recognition

asr_world_model

asr_recognizer_prediction_ism

asr_ism

Tutorials

Troubleshooting

If running ROS commands from a script (./script.sh) fails in a tmux pane, run the script in the current shell (source ./script.sh) instead.

