Reference
WhyCon is composed of a shared library, which encapsulates the main functionality, and a ROS package which employs this library in order to process images and compute the pose of the targets.
This node provides the main functionality: tracking and localization of the patterns.
parameters:
- targets: mandatory parameter indicating how many targets are to be tracked in the image
- outer_diameter: diameter (in meters) of outer portion (black) of circle
- inner_diameter: diameter (in meters) of inner portion (white) of circle
- max_attempts: maximum number of attempts to detect a given circle while processing a single frame
- max_refine: maximum number of refinement steps to be performed after succesfully detecting a given circle (1 = no refine)
- axis: filename (no extension) of the file containing the user-defined transform (see below). If not supplied, an identity transformation is used.
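For reference, below is a minimal sketch of how these parameters could be set on the parameter server before the node starts. The node name "whycon" and the numeric values are assumptions for illustration; in practice these parameters are usually set in a launch file.

```python
#!/usr/bin/env python
import rospy

if __name__ == '__main__':
    rospy.init_node('whycon_param_setup')

    # Parameter names come from the list above; the "/whycon" namespace and the
    # numeric values are example assumptions -- adjust them to your own setup
    # (normally these would be set in a launch file instead).
    rospy.set_param('/whycon/targets', 1)             # number of targets in the image
    rospy.set_param('/whycon/outer_diameter', 0.122)  # outer (black) diameter, in meters
    rospy.set_param('/whycon/inner_diameter', 0.050)  # inner (white) diameter, in meters
    rospy.set_param('/whycon/max_attempts', 1)        # detection attempts per circle per frame
    rospy.set_param('/whycon/max_refine', 1)          # 1 = no threshold refinement
```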
By changing max_attempts and max_refine you can balance precision and robustness against the computational time needed to process each frame. Since detection is very fast, you can probably increase these values without losing camera frames.
In order to detect a circle, a threshold value separating the black and white portions needs to be determined. When a circle is first detected, a candidate threshold is tried; if detection fails, a new threshold is used. Once a circle is detected, a better threshold is computed as the average between the black and white portions of the circle. This means that, for a given image, the threshold for a given circle can be improved iteratively until convergence. The parameter max_refine sets the maximum number of these refinement iterations.
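The detect/refine loop can be pictured with the short sketch below. It only illustrates the logic described above and is not the library's actual code: detect_circle and black_white_average are hypothetical stand-ins for the detector internals, passed in as callables.

```python
def detect_with_refinement(image, detect_circle, black_white_average,
                           thresholds, max_refine):
    """Illustrative sketch of the detect/refine loop (not WhyCon's real code).

    detect_circle(image, threshold)    -- hypothetical detector, returns a circle or None
    black_white_average(image, circle) -- hypothetical helper: mean of black/white intensities
    thresholds                         -- candidate thresholds (at most max_attempts of them)
    """
    circle, threshold = None, None

    # Try candidate thresholds until the circle is detected at all.
    for threshold in thresholds:
        circle = detect_circle(image, threshold)
        if circle is not None:
            break
    if circle is None:
        return None  # detection failed for this frame

    # Refine the threshold using the detected circle itself; max_refine = 1 means no refinement.
    for _ in range(max_refine - 1):
        new_threshold = black_white_average(image, circle)
        if abs(new_threshold - threshold) < 1:  # converged
            break
        threshold = new_threshold
        circle = detect_circle(image, threshold)

    return circle
```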
subscribed topics:
- /camera/image_rect_color [sensor_msgs/Image]: calibrated camera images
- /camera/camera_info [sensor_msgs/CameraInfo]: camera calibration (the camera needs to be calibrated)
advertised topics:
- ~/image_out [sensor_msgs/Image]: output image showing detected circles
- ~/visualization_marker [visualization_msgs/Marker]: allows the targets to be displayed as markers in rviz
- ~/points [whycon/PointArray]: the targets' positions in image space, as a list of 2D pixel coordinates
- ~/poses [geometry_msgs/PoseArray]: the targets' poses in camera space (before any transformation is applied)
- ~/trans_poses [geometry_msgs/PoseArray]: the targets' poses in user-defined space, after applying the transformation obtained during axis-setting (see the corresponding node).
advertised services:
- ~/reset [std_srvs/Empty]: reset detection (reset any tracking information and re-detect all circles)
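As a minimal sketch of how a client could consume this interface, the snippet below subscribes to the pose topic and calls the reset service. It assumes the detection node runs under the name "whycon" (so the private names resolve to /whycon/poses and /whycon/reset); adjust the names to match your launch configuration.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseArray
from std_srvs.srv import Empty

def poses_callback(msg):
    # Each pose is the estimated position/orientation of one target in camera space.
    for i, pose in enumerate(msg.poses):
        rospy.loginfo("target %d at (%.3f, %.3f, %.3f)",
                      i, pose.position.x, pose.position.y, pose.position.z)

if __name__ == '__main__':
    rospy.init_node('whycon_client')

    # Topic/service names assume the detector node is named "whycon".
    rospy.Subscriber('/whycon/poses', PoseArray, poses_callback)

    # Reset detection (e.g. after occlusion) so all circles are re-detected.
    rospy.wait_for_service('/whycon/reset')
    reset = rospy.ServiceProxy('/whycon/reset', Empty)
    reset()

    rospy.spin()
```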
This node performs the axis-setting referred to above: it obtains a transformation from camera space to world space. This allows a user-defined position in the real world to serve as the coordinate frame in which target positions are reported.
When the targets are known to move on a plane (for example, you are tracking a ground robot and the pattern is placed on the robot, parallel to the floor), a 3D->2D transform can be computed. This not only guarantees that the transformed coordinates lie on the corresponding plane, expressed in a user-defined coordinate system, but also greatly improves the precision of the system.
When working in 3D space, a 3D->3D transform can be obtained.
NOTE: the 3D->3D transform was previously available and was removed during the introduction of 3D->2D transforms. This capability will be added back soon.
For a tutorial on how to perform the axis-setting, see the tutorials.
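To give an idea of what the 3D->2D case amounts to, the sketch below treats it as a homography between the image plane and the user-defined world plane, computed from four hypothetical reference points (for example, calibration circles placed during axis-setting). This is only an illustration of the concept, not the node's actual implementation; all coordinates shown are made-up example values.

```python
import numpy as np
import cv2

# Pixel coordinates of four reference circles as seen by the camera
# (hypothetical example values from an axis-setting step).
image_points = np.array([[102.0, 388.0],
                         [515.0, 392.0],
                         [108.0,  61.0],
                         [521.0,  64.0]], dtype=np.float32)

# The same four points expressed in the user-defined plane, in meters
# (here: a 1m x 1m square with the origin at the first circle).
plane_points = np.array([[0.0, 0.0],
                         [1.0, 0.0],
                         [0.0, 1.0],
                         [1.0, 1.0]], dtype=np.float32)

# Homography mapping image pixels to plane coordinates.
H, _ = cv2.findHomography(image_points, plane_points)

# Map a detected target from pixel coordinates onto the plane.
target_px = np.array([[[310.0, 220.0]]], dtype=np.float32)  # shape (1, 1, 2)
target_plane = cv2.perspectiveTransform(target_px, H)
print("target on plane (m):", target_plane[0, 0])
```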
parameters:
- outer_diameter: diameter (in meters) of outer portion (black) of circle
- inner_diameter: diameter (in meters) of inner portion (white) of circle
- axis: filename (no extension) of the file to be written, containing the computed transform
subscribed topics:
- /camera/image_rect_color [sensor_msgs/Image]: calibrated camera images
- /camera/camera_info [sensor_msgs/CameraInfo]: camera calibration (the camera needs to be calibrated)
advertised topics:
- ~/image_out [sensor_msgs/Image]: output image showing the calibration circles used to define the axis.
If you want to estimate the 6DoF pose of a robot with WhyCon, a single pattern is not sufficient, since the yaw angle cannot be recovered (due to the symmetry of the pattern). For full absolute 6DoF pose estimation, three circles placed in an "L" shape can be used.
TODO: this node is still being written; it currently works assuming a 3D->2D transformation was used.
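As a rough illustration of why three circles suffice, the sketch below recovers the yaw (and handedness) of an "L"-shaped arrangement from the planar positions of its three circles. The point labels and frame convention are assumptions for illustration; the node under development may use a different formulation.

```python
import math

def yaw_from_l_shape(corner, x_leg, y_leg):
    """Estimate the yaw of an L-shaped arrangement of three circles on a plane.

    corner -- (x, y) of the circle at the corner of the L
    x_leg  -- (x, y) of the circle along the marker's local x axis
    y_leg  -- (x, y) of the circle along the marker's local y axis
    """
    # Heading of the local x axis expressed in the plane frame.
    yaw = math.atan2(x_leg[1] - corner[1], x_leg[0] - corner[0])

    # The sign of the cross product tells whether the local frame is right-handed
    # (i.e. the y leg sits 90 degrees counter-clockwise from the x leg).
    cross = ((x_leg[0] - corner[0]) * (y_leg[1] - corner[1])
             - (x_leg[1] - corner[1]) * (y_leg[0] - corner[0]))
    right_handed = cross > 0

    return yaw, right_handed

# Example: corner at the origin, x leg pointing 45 degrees "north-east".
yaw, rh = yaw_from_l_shape((0.0, 0.0), (0.5, 0.5), (-0.3, 0.3))
print(math.degrees(yaw), rh)  # ~45.0, True
```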