Computer Vision News - May 2024

Fig. 1: Experimental setup. The setup includes a computer running ROS2 with RViz on Ubuntu 22.04, the camera on a stand, the robot arm, and the objects.

There is nothing special about this setup: simply position the camera so that the AprilTag and the objects are clearly visible. Also, choose objects that fit into the robot's gripper, and avoid reflective ones, as they can disrupt the depth camera's infrared-based depth sensing; for the same reason, keep direct sunlight out of the scene.

For the robot to detect the position of each object and pick it up, we first need to know the position of the camera relative to the arm. We could do this by manually measuring the offset between the camera's color optical frame and the robot's 'base_link' frame; however, this method is extremely time-consuming and prone to errors. Instead, we utilize the apriltag_ros ROS2 package to determine the transformation of the AprilTag visual fiducial marker on the arm's end-effector relative to the camera's color optical frame. Afterward, the transformation from the camera's color optical frame to the arm's 'base_link' frame is computed and then published as a static transform; a minimal sketch of this composition is given at the end of this section.

To run the perception pipeline, run the following launch command in a terminal (note that you should first install ROS2 and the Python-ROS API from the previous parts):
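As an illustration of what such a command can look like (the package, launch file, and robot_model value below are assumptions about the specific stack in use, not taken from the article), a perception pipeline launch might be invoked as:

    ros2 launch interbotix_xsarm_perception xsarm_perception.launch.py robot_model:=px100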

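To make the calibration step concrete, here is a minimal Python sketch of the idea, not the article's actual code: it takes the tag pose reported by apriltag_ros in the camera's color optical frame, combines it with the tag's known pose relative to 'base_link' (obtainable from the arm's kinematics), and publishes the resulting camera pose as a static transform. The frame names and example pose values are assumptions.

    # Minimal sketch, assuming example poses and frame names; not the article's code.
    import numpy as np
    import rclpy
    from geometry_msgs.msg import TransformStamped
    from rclpy.node import Node
    from scipy.spatial.transform import Rotation as R
    from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster

    def pose_to_matrix(xyz, quat_xyzw):
        # Build a 4x4 homogeneous transform from a translation and a quaternion.
        T = np.eye(4)
        T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
        T[:3, 3] = xyz
        return T

    class CameraToBasePublisher(Node):
        def __init__(self, T_cam_tag, T_base_tag):
            super().__init__('camera_to_base_publisher')
            # (base_link -> camera) = (base_link -> tag) * (tag -> camera)
            T_base_cam = T_base_tag @ np.linalg.inv(T_cam_tag)

            msg = TransformStamped()
            msg.header.stamp = self.get_clock().now().to_msg()
            msg.header.frame_id = 'base_link'                  # assumed frame name
            msg.child_frame_id = 'camera_color_optical_frame'  # assumed frame name
            x, y, z = T_base_cam[:3, 3]
            msg.transform.translation.x = float(x)
            msg.transform.translation.y = float(y)
            msg.transform.translation.z = float(z)
            qx, qy, qz, qw = R.from_matrix(T_base_cam[:3, :3]).as_quat()
            msg.transform.rotation.x = float(qx)
            msg.transform.rotation.y = float(qy)
            msg.transform.rotation.z = float(qz)
            msg.transform.rotation.w = float(qw)
            self.broadcaster = StaticTransformBroadcaster(self)
            self.broadcaster.sendTransform(msg)

    def main():
        rclpy.init()
        # Example values only: the tag pose in the camera frame (as apriltag_ros
        # would report it) and the tag pose in base_link (from the arm's kinematics).
        T_cam_tag = pose_to_matrix([0.05, -0.02, 0.60], [0.0, 0.0, 0.0, 1.0])
        T_base_tag = pose_to_matrix([0.10, 0.00, 0.25], [0.0, 0.0, 0.0, 1.0])
        rclpy.spin(CameraToBasePublisher(T_cam_tag, T_base_tag))

    if __name__ == '__main__':
        main()

Once this static transform is published, tf2 can chain it with the arm's kinematic tree, so any point detected in the camera frame can be expressed in 'base_link' coordinates for picking.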