
Only released in EOL distros:  

object_manipulation: bayesian_grasp_planner | compressed_pointcloud_transport | current_state_validator | household_objects_database | household_objects_database_msgs | interactive_marker_helpers | object_manipulation_msgs | object_manipulator | point_cloud_server | probabilistic_grasp_planner | rviz_interaction_tools | static_transform_broadcaster

Package Summary

Core functionality for pickup and place tasks; services Pickup and Place action goals.

ROS API

Actions and Services Provided

The object_manipulator provides two SimpleActionServers: one servicing Pickup action goals and one servicing Place action goals (both defined in object_manipulation_msgs).
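
As a minimal illustration, the Python sketch below sends a Pickup goal using actionlib. The action server name and the goal fields shown are assumptions based on the object_manipulation_msgs definitions and typical PR2 launch files; check your own launch files for the exact names.

    #!/usr/bin/env python
    # Minimal sketch: send a Pickup goal to the object_manipulator.
    # Server name and goal fields are assumptions (see text above).
    import rospy
    import actionlib
    from object_manipulation_msgs.msg import PickupAction, PickupGoal

    rospy.init_node('pickup_client_example')

    client = actionlib.SimpleActionClient(
        '/object_manipulator/object_manipulator_pickup', PickupAction)
    client.wait_for_server()

    goal = PickupGoal()
    goal.arm_name = 'right_arm'        # which arm/hand combo to use
    # goal.target = ...                # GraspableObject from perception
    # goal.collision_object_name = ... # object's name in the collision map

    client.send_goal(goal)
    client.wait_for_result()
    result = client.get_result()
    rospy.loginfo('pickup result code: %d', result.manipulation_result.value)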

Actions and Services Required

The object manipulator assumes a number of actions and services are available for it to call. Note that, for the PR2 robot equipped with a gripper, default implementations are available for all of these. The launch files in pr2_object_manipulation_launch will bring up all of these components and make sure all the topics are mapped correctly.

Services internal to the manipulation pipeline:

Services external to the manipulation pipeline:

Arm/Hand Description

The manipulator also requires information about the arm/hand combo used for grasping. This information is expected to be on the parameter server, as described below. An example configuration file with these parameters for the PR2 arm and gripper can be found in the pr2_object_manipulation_launch package.
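
A short sketch of inspecting such a configuration at runtime is shown below; the /hand_description/<arm_name> namespace follows the PR2 example configuration and should be treated as an assumption.

    #!/usr/bin/env python
    # Sketch: print the arm/hand description loaded on the parameter server.
    # The parameter namespace is an assumption based on the PR2 example config.
    import rospy

    rospy.init_node('hand_description_example')

    # rospy returns a whole namespace as a nested dictionary
    hand_description = rospy.get_param('/hand_description/right_arm')
    for name, value in sorted(hand_description.items()):
        rospy.loginfo('%s: %s', name, value)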

A hand description must contain the following:

Running the Manipulation Pipeline

To launch the manipulation pipeline and execute pickup and place tasks using the PR2 robot, tutorials and launch files are provided in pr2_tabletop_manipulation_apps.

Note that, in addition to the functionality covered here, the manipulation pipeline requires external input for sensor processing, identifying graspable objects, managing a collision environment, etc. A default implementation of these, sufficient for running the manipulation pipeline, is available on the tabletop_object_perception page.

Grasping and Placing: Details and Implementation

The main goals of the manipulation pipeline are:

- to execute pickup and place tasks, for both objects recognized in the model database and unknown objects perceived as point clusters
- to perform these tasks in a collision-aware fashion, avoiding unwanted contact between the arm, the grasped object, and the rest of the environment

A Complete Pickup Task

Chronologically, the process of grasping an object goes through the following stages, each detailed in the subsections below:

- object perception
- grasp point selection
- environment sensing and collision map processing
- motion planning
- grasp execution, including reactive grasping using tactile sensing

Placing an object goes through very similar stages. In fact, grasping and placing are mirror images of each other; the main difference is that choosing a good grasp point is replaced by choosing a good location in the environment to place the object.
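
As a counterpart to the pickup sketch above, the following example sends a Place goal; again, the server name and field names are assumptions based on the object_manipulation_msgs definitions.

    #!/usr/bin/env python
    # Sketch: send a Place goal; candidate place locations play the role
    # that candidate grasp points play during pickup.
    # Server name and goal fields are assumptions (see text above).
    import rospy
    import actionlib
    from object_manipulation_msgs.msg import PlaceAction, PlaceGoal

    rospy.init_node('place_client_example')

    client = actionlib.SimpleActionClient(
        '/object_manipulator/object_manipulator_place', PlaceAction)
    client.wait_for_server()

    goal = PlaceGoal()
    goal.arm_name = 'right_arm'
    # goal.place_locations = [...]  # candidate PoseStamped place locations
    # goal.grasp = ...              # the grasp currently holding the object

    client.send_goal(goal)
    client.wait_for_result()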

Object Perception

Object perception is an active area of research; it is not internal to the manipulation pipeline, but rather expected to provide input for grasping. For running the manipulation pipeline, a simple implementation for object perception is provided in tabletop_object_detector.
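
For reference, a minimal sketch of calling that detector from Python is shown below, assuming the TabletopDetection service is advertised on /object_detection (its usual default).

    #!/usr/bin/env python
    # Sketch: ask the tabletop detector for graspable objects.
    # Service name and request fields assume the tabletop_object_detector
    # defaults; verify against your own setup.
    import rospy
    from tabletop_object_detector.srv import TabletopDetection, TabletopDetectionRequest

    rospy.init_node('tabletop_detection_example')
    rospy.wait_for_service('/object_detection')
    detect = rospy.ServiceProxy('/object_detection', TabletopDetection)

    req = TabletopDetectionRequest()
    req.return_clusters = True  # point clusters for unknown objects
    req.return_models = True    # database fits for recognized objects

    res = detect(req)
    rospy.loginfo('detected %d clusters', len(res.detection.clusters))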

Grasp Point Selection

For either a recognized object (with an attached database model id) or an unrecognized cluster, the goal of this component is to generate a list of good grasp points. Here the object to be grasped is considered in isolation: a good grasp point refers strictly to a position of the gripper relative to the object, without any knowledge of the rest of the environment.

For unknown objects, we use a module that operates strictly on the perceived point cluster. Details on this module can be found in the paper Contact-Reactive Grasping of Objects with Partial Shape Information, presented by Willow Garage researchers at the ICRA 2010 Workshop on Mobile Manipulation.

For recognized objects, our model database also contains a large number of pre-computed grasp points. These grasp points have been pre-computed using GraspIt!, an open-source simulator for robotic grasping.

The output from this module consists of a list of grasp points, ordered by desirability (it is recommended that downstream modules in charge of grasp execution try the grasps in the order in which they are presented).
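
A hedged sketch of requesting grasps from such a planner is shown below; it assumes the GraspPlanning service interface from object_manipulation_msgs and a cluster planner advertised on /plan_point_cluster_grasp (the service name may differ in your setup).

    #!/usr/bin/env python
    # Sketch: request grasp points for a perceived object.
    # Service name is an assumption; the request/response fields follow
    # the object_manipulation_msgs/GraspPlanning definition.
    import rospy
    from object_manipulation_msgs.srv import GraspPlanning, GraspPlanningRequest

    rospy.init_node('grasp_planning_example')
    rospy.wait_for_service('/plan_point_cluster_grasp')
    plan_grasps = rospy.ServiceProxy('/plan_point_cluster_grasp', GraspPlanning)

    req = GraspPlanningRequest()
    req.arm_name = 'right_arm'
    # req.target = ...  # GraspableObject: point cluster and/or database model

    res = plan_grasps(req)
    # Grasps are returned ordered by desirability; try them in order.
    for grasp in res.grasps:
        rospy.loginfo('candidate grasp pose: %s', grasp.grasp_pose)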

Environment Sensing and Collision Map Processing

For collision-free operation, it is important to perceive not only the target object, but also the rest of the environment, which contains potential obstacles. Collision-free operation relies on the arm_navigation stack; for the additional sensor processing performed to provide input to the collision environment, see the tabletop_collision_map_processing package.

Motion Planning

Motion planning is also done using the arm_navigation stack; the algorithm used is an RRT variant.

The motion planner reasons about collisions in a binary fashion: a state that does not bring the robot into collision with the environment is considered feasible, regardless of how close the robot is to obstacles. As a result, the planner might compute paths that bring the arm very close to obstacles. To guard against collisions due to sensor errors or miscalibration, we pad the robot model used for collision detection by a fixed amount (normally 2 cm).

Grasp Execution

The motion planner only works in collision-free robot states. Grasping, by nature, means that the gripper must get very close to the object, and that immediately after grasping, the object, now attached to the gripper, is very close to the table. Since we cannot use the motion planner for these states, we introduce the notion of a pre-grasp position: a pre-grasp is very close to the final grasp position, but far enough from the object that the motion planner can compute a path to it. The same applies after pickup: we use a lift position which is far enough above the table that the motion planner can be used from there.

To plan the motion from pre-grasp to grasp and from grasp to lift we use the interpolated_ik_motion_planner, while also doing more involved reasoning about which collisions must be avoided and which are acceptable. For example, during the lift operation (which is guaranteed to be in the "up" direction), we can ignore collisions between the object and the table.
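
A rough sketch of requesting such a plan is shown below; it assumes the PR2 setup where the interpolated IK planner is exposed as a GetMotionPlan-style service named r_interpolated_ik_motion_plan (service names and message packages varied across releases).

    #!/usr/bin/env python
    # Sketch: request an interpolated IK plan from pre-grasp to grasp.
    # Service name and message package are assumptions (see text above).
    import rospy
    from arm_navigation_msgs.srv import GetMotionPlan, GetMotionPlanRequest

    rospy.init_node('interpolated_ik_example')
    rospy.wait_for_service('r_interpolated_ik_motion_plan')
    plan = rospy.ServiceProxy('r_interpolated_ik_motion_plan', GetMotionPlan)

    req = GetMotionPlanRequest()
    # Start state: the arm posed at the pre-grasp position.
    # req.motion_plan_request.start_state = ...
    # Goal: position/orientation constraints describing the grasp pose.
    # req.motion_plan_request.goal_constraints.position_constraints = [...]
    # req.motion_plan_request.goal_constraints.orientation_constraints = [...]

    res = plan(req)
    # res.trajectory.joint_trajectory holds the interpolated path;
    # res.error_code reports whether a valid path was found.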

The motion of the arm for executing a grasp is:

      starting_arm_position
               |
               | <-(motion planner)
               |
              \ /
               V
        pre-grasp position
               |
               | <-(interpolated IK)
               |   (with tactile sensor reactive grasping)
              \ /
               V
         grasp position
      (fingers are closed)
(object model is attached to gripper)
               |
               | <-(interpolated IK)
               |
              \ /
               V
         lift position
               |
               | <-(motion planner)
               |
              \ /
               V
    some desired arm position 

grasp1.png

Interpolated IK path from pre-grasp to grasp planned for a grasp point of an unknown object

grasp2.png

A path to get the arm to the pre-grasp position has been planned using the motion planner and executed.

grasp3.png

The interpolated IK path from pre-grasp to grasp has been executed.

grasp4.png

The object has been lifted. Note that the object's collision model has been attached to the gripper.

Reactive Grasping Using Tactile Sensing

There are many possible sources of error in the pipeline, including errors in object recognition or segmentation, grasp point selection, miscalibration, etc. A common result of these errors is that a grasp thought to be feasible actually fails, either by missing the object altogether or by hitting the object in an unexpected way and displacing it before the fingers close.

We can correct for some of these errors by using the tactile sensors in the fingertips. We use a grasp execution module that slots in during the move from pre-grasp to grasp and uses a set of heuristics to correct for errors detected by the tactile sensors. Details on this module can be found in the paper Contact-Reactive Grasping of Objects with Partial Shape Information, presented by Willow Garage researchers at the ICRA 2010 Workshop on Mobile Manipulation.
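
For completeness, a hedged sketch of invoking this behavior directly is shown below; the pipeline normally calls it internally during the pre-grasp to grasp move, and both the action name and the goal fields are assumptions based on the object_manipulation_msgs definitions.

    #!/usr/bin/env python
    # Sketch: trigger a reactive grasp directly via its action interface.
    # Action server name and goal fields are assumptions (see text above).
    import rospy
    import actionlib
    from object_manipulation_msgs.msg import ReactiveGraspAction, ReactiveGraspGoal

    rospy.init_node('reactive_grasp_example')

    client = actionlib.SimpleActionClient('reactive_grasp/right',
                                          ReactiveGraspAction)
    client.wait_for_server()

    goal = ReactiveGraspGoal()
    goal.arm_name = 'right_arm'
    # goal.final_grasp_pose = ...  # PoseStamped for the intended grasp
    # goal.trajectory = ...        # approach trajectory, pre-grasp to grasp

    client.send_goal(goal)
    client.wait_for_result()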

