Welcome to the second lab session of the CoTeSys-ROS Fall School on Cognition-enabled Mobile Manipulation, dedicated to 2D/3D Perception.
The challenge of the day is to get 2D/3D perception with OpenCV and PCL working both on simulated camera/laser data in Gazebo and on data coming from the robots.
The concrete tasks are:
- recognize and find the pose of several objects using 2D features
- segment planar tables, extract Point Cloud clusters that represent the objects (optional: use VFH (Viewpoint Feature Histogram) to obtain object matches with their pose)
Pre-work
- Make sure you have followed the installation steps from http://wiki/Events/CoTeSys-ROS-School/it correctly, and re-run the following commands:
    rm $HOME/ros/.rosinstall
    rosinstall $HOME/ros /opt/ros/cturtle https://svn.code.sf.net/p/tum-ros-pkg/code/rosinstall/fall_school2010.rosinstall
- Source $HOME/ros/setup.sh and add this line to the bottom of your .bashrc:
    source $HOME/ros/setup.sh
This will guarantee that you will get any changes that we made to the repository.
- Compile the features_2d package:
    rosmake features_2d
- If you get an error about a missing cblas library when compiling calonder_descriptor, check whether /usr/lib/libcblas.so.3gf exists, and if so, run:
    sudo ln -s /usr/lib/libcblas.so.3gf /usr/lib/libcblas.so
- Compile PCL by running:
    rosmake pcl_ros
- Copy the following example point cloud into your pcl_tutorials data directory:
    cp /opt/ros/cturtle/stacks/point_cloud_perception/pcl_tutorials/data/table_scene_mug_stereo_textured.pcd `rospack find pcl_tutorials`/data/
This might take a few minutes, so please be patient. Let us know if you encounter any errors.
- Images and BAG files containing data recorded on the robot are available in /data/public on the desktops and at http://fallschool2010.informatik.tu-muenchen.de/public/ for your laptop.
2D Perception: Planar textured object detection
- Documentation: http://opencv.willowgarage.com/wiki/
  - Important: for detector/descriptor functionality, use the features_2d ROS package!
 
- Create a new package that reads training and test images of a planar object and finds a geometrically consistent match between them.
  - Read the images:
    Mat img1 = imread(argv[1], 0); // load a grayscale image
  - Compute keypoints:
    features_2d::FastFeatureDetector detector(threshold);
    std::vector<KeyPoint> keypoints1;
    detector.detect(img1, keypoints1);
  - Visualize the keypoints:
    Mat drawImg1;
    features_2d::drawKeypoints(img1, keypoints1, drawImg1);
    namedWindow("keypoints", 1);
    imshow("keypoints", drawImg1);
    waitKey(0);
  - Run the program to visually check the number of keypoints and tune the detector threshold.
  - Calculate descriptors:
    features_2d::SurfDescriptorExtractor extractor;
    Mat descriptors1;
    extractor.compute(img1, keypoints1, descriptors1);
  - Find the matches:
    features_2d::BruteForceMatcher<features_2d::L2<float> > matcher;
    vector<features_2d::Match> matches;
    matcher.matchWindowless(descriptors1, descriptors2, matches);
  - Extract the matches into vector<Point2f> arrays and run the homography estimation with RANSAC:
    vector<Point2f> points1, points2;
    // fill the arrays with the matched points ....
    Mat H = findHomography(Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold);
  - Create a set of inlier matches and draw them. Use the perspectiveTransform function to map points with the homography (a sketch of this inlier-selection step follows this list):
    Mat points1Projected;
    perspectiveTransform(Mat(points1), points1Projected, H);
  - Draw the matches using the features_2d::drawMatches function.
  - Test on the data in /data/public/images, which contains cropped object images and test images.
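The inlier-selection step is only outlined above, so here is a minimal sketch of it, assuming points1 and points2 have already been filled from the matches. The function name findInliers and the squared-distance test are our own additions; the header names assume OpenCV 2.2+, so with an older OpenCV you may need <opencv/cv.h> instead.

    #include <opencv2/core/core.hpp>
    #include <opencv2/calib3d/calib3d.hpp>
    #include <vector>

    // Sketch (not the lab solution): return the indices of matches whose training-image
    // point, projected by the estimated homography, lands close to its test-image point.
    std::vector<int> findInliers(const std::vector<cv::Point2f>& points1,
                                 const std::vector<cv::Point2f>& points2,
                                 double ransacReprojThreshold, cv::Mat& H)
    {
      // Estimate the homography with RANSAC, exactly as in the step above.
      H = cv::findHomography(cv::Mat(points1), cv::Mat(points2), CV_RANSAC, ransacReprojThreshold);

      // Map the training-image points into the test image with the homography.
      cv::Mat points1Projected;
      cv::perspectiveTransform(cv::Mat(points1), points1Projected, H);

      // A match is an inlier if its projected point is within the reprojection threshold.
      std::vector<int> inlierIndices;
      for (size_t i = 0; i < points1.size(); ++i)
      {
        cv::Point2f p = points1Projected.at<cv::Point2f>((int)i, 0);
        double dx = p.x - points2[i].x;
        double dy = p.y - points2[i].y;
        if (dx * dx + dy * dy < ransacReprojThreshold * ransacReprojThreshold)
          inlierIndices.push_back((int)i);
      }
      return inlierIndices;
    }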
 
OUR SOLUTION: match_desc.cpp
How do the results depend on the training/test object, the detector and descriptor types, and the RANSAC reprojection threshold?
3D Perception: Segmentation and Recognition
- Go over the PCL tutorials from http://wiki/pcl/Tutorials, chapters 2 (Filtering), 3 (Segmentation), and 4 (Surface). Make sure you understand the basics. 
- Go over the PCL_ROS tutorials from http://wiki/pcl_ros/Tutorials.
  - Start a roscore:
    roscore
  - Start publishing an example PointCloud2 message on the network using:
    rosrun pcl_ros pcd_to_pointcloud `rospack find pcl_tutorials`/data/table_scene_mug_stereo_textured.pcd 1 cloud_pcd:=/scene_pointcloud2
  - Download table_scene_mug_textured.vcg and start RViz with it:
    rosrun rviz rviz -d table_scene_mug_textured.vcg
  - Download and launch some of the tutorials at http://wiki/pcl_ros/Tutorials. Example:
    roslaunch voxel_grid.launch
  - Start dynamic_reconfigure and play with the parameters. Observe the effect of those parameters in RViz:
    rosrun dynamic_reconfigure reconfigure_gui
 
- Create a new PCL launch file that segments the objects on the table using nodelets in Gazebo. Hints:
  - First launch:
    export ROBOT_INITIAL_POSE="-x 3 -Y 3.14"
    roslaunch ias_gazebo ros_fallschool_day2.launch
 
  - To get more points on the table you need to lift the torso. You can easily do this from the command line (a C++ alternative is sketched after this list):
    rostopic pub /torso_controller/position_joint_action/goal pr2_controllers_msgs/SingleJointPositionActionGoal '{header: auto, goal_id: {stamp: now, id: ""}, goal: {position: 0.195, max_velocity: 1.0}}'
 
  - Use pr2_teleop to move the PR2 to get a better viewpoint:
    roslaunch pr2_teleop teleop_keyboard.launch
 
  - The input PointCloud2 data will be available on /full_cloud_filtered. Use RViz to visualize it.
  - Chain two PassThrough/VoxelGrid filters to filter the data along two dimensions. Example:
    <node pkg="nodelet" type="nodelet" name="passthrough" args="load pcl/PassThrough pcl_manager" output="screen">
      <remap from="~input" to="/full_cloud_filtered" />
      <rosparam>
        filter_field_name: z
        filter_limit_min: 0.5
        filter_limit_max: 1.5
        input_frame: base_link
        output_frame: base_link
      </rosparam>
    </node>
    <node pkg="nodelet" type="nodelet" name="voxel_grid" args="load pcl/VoxelGrid pcl_manager" output="screen">
      <remap from="~input" to="/passthrough/output" />
      <rosparam>
        filter_field_name: y
        filter_limit_min: -0.5
        filter_limit_max: 0.5
      </rosparam>
    </node>
    The input_frame and output_frame parameters transform the data into a coordinate frame whose z-axis points up and that is centered at the robot. See the TF tutorials for more information.
  - To force the planar segmentation to extract a horizontal plane, constrain the plane model in your pcl/SACSegmentationFromNormals nodelet entry in the launch file, e.g. by setting its axis and eps_angle parameters (if your pcl_ros version exposes them) so that only planes whose normal is (anti)parallel to the z-axis are accepted.
  - The z-axis is pointing upwards, which means the normal of the horizontal plane is pointing down (remember what I said during the lecture: normals are flipped towards the origin (0,0,0) of the coordinate system!). You therefore need to change the parameters for ExtractPolygonalPrismData from
    height_min: 0
    height_max: 0.5
    to
    height_min: -0.5
    height_max: 0
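If you would rather lift the torso from a node than from the command line (see the torso hint above), here is a minimal C++ actionlib sketch. It assumes the actionlib and pr2_controllers_msgs packages are among your dependencies and reuses the same goal values as the rostopic command; adapt them as needed.

    #include <ros/ros.h>
    #include <actionlib/client/simple_action_client.h>
    #include <pr2_controllers_msgs/SingleJointPositionAction.h>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "torso_lifter");

      // Connect to the same action interface used by the rostopic command above.
      actionlib::SimpleActionClient<pr2_controllers_msgs::SingleJointPositionAction>
          torso_client("torso_controller/position_joint_action", true);
      torso_client.waitForServer();

      // Same goal values as in the command-line example.
      pr2_controllers_msgs::SingleJointPositionGoal goal;
      goal.position = 0.195;
      goal.max_velocity = 1.0;
      torso_client.sendGoal(goal);
      torso_client.waitForResult(ros::Duration(30.0));
      return 0;
    }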
 
- Try the same things on some of the BAG files recorded from the real sensors. Use
rosbag play <bag_file>
What changes? What fails? What parameters do you need to change to make things work?
- Bonus 1: Since you have all the points belonging to the table (/project_plane_inliers/output), write a node that computes a new approach pose for the robot, replacing the checkerboard used for navigation on Day 1 (a minimal sketch follows below). 
- Bonus 2: Implement a cluster recognition and pose identification system using VFH; the second sketch below shows how to compute the per-cluster descriptor. (Nico, Zoli) 
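For Bonus 1, a minimal sketch of such a node is given below. It subscribes to the table inliers as a pcl::PointCloud (via pcl_ros/point_cloud.h), computes their centroid and publishes a PoseStamped. The table_approach_pose topic name, the 0.6 m standoff and the drive-toward-the-centroid strategy are our own placeholder choices, and header locations may differ slightly between PCL versions.

    #include <ros/ros.h>
    #include <geometry_msgs/PoseStamped.h>
    #include <pcl_ros/point_cloud.h>   // allows subscribing with pcl::PointCloud<T> directly
    #include <pcl/point_types.h>
    #include <pcl/common/centroid.h>
    #include <Eigen/Core>
    #include <cmath>

    ros::Publisher pose_pub;

    void tableCallback(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& table)
    {
      // Centroid of the table inliers (already in base_link if you used
      // input_frame/output_frame: base_link in the filter chain above).
      Eigen::Vector4f centroid;
      pcl::compute3DCentroid(*table, centroid);

      // Placeholder approach strategy: stop 0.6 m short of the table centroid,
      // facing it along the line from the robot origin to the centroid.
      double yaw = std::atan2(centroid[1], centroid[0]);
      double dist = std::sqrt(centroid[0] * centroid[0] + centroid[1] * centroid[1]);
      double standoff = 0.6;   // arbitrary value, tune for your table and robot

      geometry_msgs::PoseStamped pose;
      pose.header.frame_id = table->header.frame_id;
      pose.header.stamp = ros::Time::now();
      pose.pose.position.x = (dist - standoff) * std::cos(yaw);
      pose.pose.position.y = (dist - standoff) * std::sin(yaw);
      pose.pose.orientation.z = std::sin(yaw / 2.0);
      pose.pose.orientation.w = std::cos(yaw / 2.0);
      pose_pub.publish(pose);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "table_approach_pose");
      ros::NodeHandle nh;
      pose_pub = nh.advertise<geometry_msgs::PoseStamped>("table_approach_pose", 1);
      ros::Subscriber sub = nh.subscribe("/project_plane_inliers/output", 1, tableCallback);
      ros::spin();
      return 0;
    }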
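For Bonus 2, the core of a VFH-based recognizer is one VFHSignature308 histogram per extracted cluster, which you then match (e.g. with a kd-tree) against signatures computed from training views of the objects. The sketch below uses current PCL class names (pcl::VFHEstimation, pcl::search::KdTree); the fall-school PCL checkout may use slightly different headers and search-tree classes (e.g. pcl::KdTreeFLANN), and the k = 20 neighbourhood size is an arbitrary choice.

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/features/normal_3d.h>
    #include <pcl/features/vfh.h>
    #include <pcl/search/kdtree.h>

    // Compute the single VFH descriptor of one object cluster.
    pcl::PointCloud<pcl::VFHSignature308>::Ptr
    computeVFH(const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cluster)
    {
      pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

      // Estimate per-point normals on the cluster.
      pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
      pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
      ne.setInputCloud(cluster);
      ne.setSearchMethod(tree);
      ne.setKSearch(20);               // neighbourhood size, tune for your data
      ne.compute(*normals);

      // One 308-bin histogram describes the whole cluster (viewpoint dependent).
      pcl::PointCloud<pcl::VFHSignature308>::Ptr vfh(new pcl::PointCloud<pcl::VFHSignature308>);
      pcl::VFHEstimation<pcl::PointXYZ, pcl::Normal, pcl::VFHSignature308> vfh_est;
      vfh_est.setInputCloud(cluster);
      vfh_est.setInputNormals(normals);
      vfh_est.setSearchMethod(tree);
      vfh_est.compute(*vfh);
      return vfh;                      // vfh->points[0].histogram holds the 308 values
    }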
Advanced task: use 2D inliers together with PCL to find the object's point cloud in 3D
- Subscribe to image messages and check that you are receiving them (use highgui; a subscriber sketch follows below).
- Run the recognition and visualize the inliers, sending out an image message.
  - Test using the available bag files.
- Test on a robot in realtime
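A minimal image subscriber for the first step might look like the sketch below. It uses the current cv_bridge API (cv_bridge::toCvShare); the cturtle-era cv_bridge instead provides the older sensor_msgs::CvBridge class, and the /wide_stereo/left/image_rect_color topic is only a placeholder, so use whatever image topic your bag file or robot publishes.

    #include <ros/ros.h>
    #include <image_transport/image_transport.h>
    #include <cv_bridge/cv_bridge.h>
    #include <opencv2/highgui/highgui.hpp>

    void imageCallback(const sensor_msgs::ImageConstPtr& msg)
    {
      // Convert the ROS image to a cv::Mat and show it with highgui.
      cv_bridge::CvImageConstPtr cv_img = cv_bridge::toCvShare(msg, "bgr8");
      cv::imshow("view", cv_img->image);
      cv::waitKey(3);
    }

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "image_listener");
      ros::NodeHandle nh;
      cv::namedWindow("view");
      image_transport::ImageTransport it(nh);
      // Placeholder topic name: remap it to the camera topic of your bag file / robot.
      image_transport::Subscriber sub =
          it.subscribe("/wide_stereo/left/image_rect_color", 1, imageCallback);
      ros::spin();
      return 0;
    }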
 
- Subscribe to a point cloud message and visualize it in rviz
- Subscribe to the CameraInfo message in order to project 3D points into the left image (see the sketch after this list).
- Identify the 3D points of the inliers using the point cloud.
- Find a checkerboard and 3D position of its corners
- Publish the checkerboard and object poses to rviz.
  - Test on bag files first, then on a robot in real time.
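To connect the 2D inliers with the point cloud, you can project candidate 3D points into the left image using the camera model built from the CameraInfo message and keep the ones that land near an inlier pixel. The sketch below assumes the 3D points are already expressed in the camera's optical frame (use TF to transform them first); the function name pointsNearInliers and the max_pixel_dist threshold are our own.

    #include <ros/ros.h>
    #include <sensor_msgs/CameraInfo.h>
    #include <image_geometry/pinhole_camera_model.h>
    #include <opencv2/core/core.hpp>
    #include <vector>

    image_geometry::PinholeCameraModel cam_model;

    // Keep the camera model up to date from the CameraInfo topic.
    void cameraInfoCallback(const sensor_msgs::CameraInfoConstPtr& info)
    {
      cam_model.fromCameraInfo(info);
    }

    // Project 3D points (given in the camera optical frame) into the left image and
    // keep those that fall close to a 2D inlier.
    std::vector<cv::Point3d> pointsNearInliers(const std::vector<cv::Point3d>& cloud_points,
                                               const std::vector<cv::Point2d>& inliers_2d,
                                               double max_pixel_dist)
    {
      std::vector<cv::Point3d> selected;
      for (size_t i = 0; i < cloud_points.size(); ++i)
      {
        cv::Point2d uv = cam_model.project3dToPixel(cloud_points[i]);
        for (size_t j = 0; j < inliers_2d.size(); ++j)
        {
          double dx = uv.x - inliers_2d[j].x;
          double dy = uv.y - inliers_2d[j].y;
          if (dx * dx + dy * dy < max_pixel_dist * max_pixel_dist)
          {
            selected.push_back(cloud_points[i]);
            break;
          }
        }
      }
      return selected;
    }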
 
