

people: face_detector | leg_detector | people_msgs | people_tracking_filter | people_velocity_tracker

Package Summary

Face detection in images.


Provides detections of faces in stereo camera data. The face detector employs the OpenCV face detector (based on a cascade of Haar-like features) to obtain an initial set of detections. It then prunes false positives using stereo depth information. The depth information is used to predict the real-world size of the detected face, which is then preserved as a true face detection only if the size is realistic for a human face. This removes the majority of false positives given by the OpenCV detector.
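The size check can be sketched with a pinhole camera model. This is a simplified illustration, not the package's actual code; the bound names mirror the node's face_size_min_m/face_size_max_m parameters, and the default values here are assumptions:

```python
# Simplified illustration of the depth-based pruning step (not the
# package's actual implementation). Under a pinhole camera model, a
# detection w pixels wide at depth z metres has a real-world width of
# roughly w * z / fx, where fx is the focal length in pixels.

def is_realistic_face(box_width_px, depth_m, fx_px,
                      face_size_min_m=0.1, face_size_max_m=0.5):
    """Keep a detection only if its predicted metric size is plausible."""
    if depth_m <= 0:          # no usable depth -> treat as a false positive
        return False
    size_m = box_width_px * depth_m / fx_px
    return face_size_min_m <= size_m <= face_size_max_m

# 100 px wide at 1 m with fx = 525 px -> ~0.19 m wide: plausible
print(is_realistic_face(100, 1.0, 525.0))   # True
# 100 px wide at 4 m -> ~0.76 m wide: too large for a human face
print(is_realistic_face(100, 4.0, 525.0))   # False
```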

In the two images below, all of the bounding boxes are detections from the OpenCV frontal face detector. The blue detection does not have sufficient depth information (due to low illumination) and hence will be considered a false positive by the face_detector node. The red detection has an unrealistic real-world size, and will be considered a false positive. Only the green detections are returned as true face detections.

(Figure: two raw camera images with face detection bounding boxes overlaid.)



The face_detector node can be used in two ways: continuously or as an action. In continuous mode, it detects faces in the entire image stream. In action mode, an action goal starts face detection, and the detector processes images until it has found at least one face.
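The mode is selected with the do_continuous parameter. A minimal launch sketch (the node and executable names follow the package's own launch files; the camera topic remappings required in practice are omitted here):

```xml
<launch>
  <node pkg="face_detector" type="face_detector" name="face_detector" output="screen">
    <!-- true: detect faces on every incoming frame;
         false: wait for an action goal and stop once a face is found -->
    <param name="do_continuous" type="bool" value="true" />
  </node>
</launch>
```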

Action Result

face_positions (people_msgs/PositionMeasurement)

Subscribed Topics

Stereo input:

  <camera>/left/<image> (sensor_msgs/Image)
  <camera>/disparity (stereo_msgs/DisparityImage)
  <camera>/left/camera_info (sensor_msgs/CameraInfo)
  <camera>/right/camera_info (sensor_msgs/CameraInfo)

RGB-D input:

  <camera>/rgb/<image> (sensor_msgs/Image)
  <camera>/depth_registered/<image> (sensor_msgs/Image)
  <camera>/rgb/camera_info (sensor_msgs/CameraInfo)
  <camera>/depth_registered/camera_info (sensor_msgs/CameraInfo)

Published Topics

face_detector/people_tracker_measurements (people_msgs/PositionMeasurement)
face_detector/faces_cloud (sensor_msgs/PointCloud)


Parameters

~classifier_name (string)
~classifier_filename (string)
~classifier_reliability (double (0-1))
~do_continuous (bool)
~do_publish_faces_of_unknown_size (bool)
~do_display (bool)
~face_size_min_m (double)
~face_size_max_m (double)
~max_face_z_m (double)
~face_separation_dist_m (double)
~use_rgbd (bool)
~approx_sync (bool)
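For example, the depth-based size filter described above is controlled by the face size parameters. A hedged sketch of setting them in a launch file (the values are illustrative assumptions, not the package defaults):

```xml
<node pkg="face_detector" type="face_detector" name="face_detector">
  <!-- Illustrative values: accept detections whose predicted real-world
       width is between 10 cm and 50 cm, and closer than 8 m -->
  <param name="face_size_min_m" type="double" value="0.1" />
  <param name="face_size_max_m" type="double" value="0.5" />
  <param name="max_face_z_m" type="double" value="8.0" />
</node>
```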

Example Usage

See face_detector/launch/face_detector.<camera>.launch for how to run the face detector continuously, or face_detector/launch/face_detector_action.<camera>.launch for how to run it as an action. The launch files above are stored in the package repository.

A possible pitfall is the argument names (in face_detector.rgbd.launch, for example):

  <arg name="camera" default="camera" />
  <arg name="depth_ns" default="depth_registered" />
  <arg name="image_topic" default="image_rect_color" />
  <arg name="depth_topic" default="image_rect_raw" />
  <arg name="fixed_frame" default="camera_rgb_optical_frame" />

camera and depth_ns together form the namespace prefix, and *_topic forms the final element of the topic name to subscribe to. With the defaults above, the node subscribes to camera/rgb/image_rect_color for the color image and camera/depth_registered/image_rect_raw for depth.

See this answer for the topic names you may need to correct.
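The composition can be sketched as plain string joining (a hedged illustration; the rgb sub-namespace for the color image follows the subscribed topics listed above):

```python
# Hedged sketch of how the launch arguments combine into topic names.
# The "rgb" sub-namespace for the color image follows the subscribed
# topics listed above; adjust to match your driver's actual topics.

def rgbd_topics(camera="camera",
                depth_ns="depth_registered",
                image_topic="image_rect_color",
                depth_topic="image_rect_raw"):
    """Return (color_topic, depth_topic) as the node would subscribe."""
    color = "/".join([camera, "rgb", image_topic])
    depth = "/".join([camera, depth_ns, depth_topic])
    return color, depth

print(rgbd_topics())
# ('camera/rgb/image_rect_color', 'camera/depth_registered/image_rect_raw')
```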

Config example with Xtion

  <arg name="camera" default="camera" />
  <arg name="image_topic" default="image_raw" />
  <arg name="depth_topic" default="image_raw" />
  <arg name="fixed_frame" default="camera_rgb_optical_frame" />
  <arg name="depth_ns" default="depth_registered" />
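Equivalently, these overrides can be passed to the package's RGB-D launch file from your own launch file (a sketch, assuming the face_detector.rgbd.launch file mentioned above):

```xml
<launch>
  <include file="$(find face_detector)/launch/face_detector.rgbd.launch">
    <!-- Xtion drivers publish unrectified image_raw topics by default -->
    <arg name="image_topic" value="image_raw" />
    <arg name="depth_topic" value="image_raw" />
  </include>
</launch>
```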

Example with robots

face_detector also works on robots in simulation. The examples below show how to run this package using the camera on a PR2 or a Turtlebot. With some additional packages, you can also visualize the face detection results in RViz.

  1. Install some useful packages (jsk_pr2_startup depends on jsk_rviz_plugins, which provides a feature to show face_detector results in RViz):

    sudo apt-get install ros-$ROS_DISTRO-jsk-pr2-startup 
  2. Run the robot's launch files:

    For PR2:

    roslaunch pr2_gazebo pr2_empty_world.launch
    roslaunch face_detector facedetector_rgb_pr2.launch  (*a)

    For Turtlebot:

    roslaunch turtlebot_gazebo turtlebot_world.launch
    roslaunch turtlebot_miscsample_cpp facedetector_rgb.launch (*b)
  3. Then run RViz using jsk_pr2_startup package:

    roslaunch jsk_pr2_startup rviz.launch
  4. (Simulation only) Add a standing person model in Gazebo: Insert -> Standing person, then place it at a location the robot can see.

  5. You may want to open the Image plugin in RViz and set its Image Topic to /head_mount_kinect/rgb/image_raw, to verify that the robot can see the person's face.

  6. You may want to move the robot so that the person is in its field of view.

    (*a)...Requires this pull request to be merged and released. (*b)...Requires (pull request TBA)


(Figure: sample screenshot of face detection running in Gazebo and RViz.)



2019-06-15 12:40