After developing our low-level APIs in the previous part, we now need to write a new script that contains our mission planner. Let's call this script px100_vision_IK (or any name that you like). In this file we will import our custom-defined library, as well as the vision module and some other basic libraries, by adding the imports shown in the first sketch below.

Next, we need to define some constants required for the vision module's operation, as well as the standard homogeneous transformation for the basket (or any other object, depending on your design) pose. Remember that the vision module works with a number of reference frames: the camera's reference frame (the vision module references the 3D coordinate data with respect to the camera's optical center), the arm tag reference frame (the frame of the AprilTag attached to the robot arm, which is captured by the camera at the very beginning to determine where the end-effector stands relative to the camera's optical center), and finally the robot's base frame (which is required to reference everything with respect to the robot's base instead of the camera's optical center). This can be done as in the second sketch below.

We now have all the global variables and dependencies needed to design our mission planner; it is time to write our main function and create a robot object, a cloud interface object (for 3D point cloud and depth data generation), an arm tag interface object (for AprilTag identification), and an inverse kinematics object, as in the third sketch below.
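First, the imports. This is a minimal sketch assuming the standard Interbotix Python interfaces (interbotix_xs_modules and interbotix_perception_modules); the module name px100_low_level and the class InverseKinematics are placeholders for the custom low-level library you built in the previous part, so substitute whatever names you actually used:

import numpy as np

# standard Interbotix arm and perception interfaces
from interbotix_xs_modules.arm import InterbotixManipulatorXS
from interbotix_perception_modules.armtag import InterbotixArmTagInterface
from interbotix_perception_modules.pointcloud import InterbotixPointCloudInterface

# custom low-level API from the previous part (hypothetical module/class names)
from px100_low_level import InverseKinematics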
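Next, the constants and reference frames. The frame names below follow the defaults of the Interbotix perception pipeline and are an assumption about your setup, and the basket transformation is only a placeholder you would replace with the pose measured for your own design:

ROBOT_MODEL = "px100"

# reference frames used by the vision module
REF_FRAME = "camera_color_optical_frame"        # camera's optical-center frame
ARM_TAG_FRAME = ROBOT_MODEL + "/ar_tag_link"    # AprilTag attached to the arm
ARM_BASE_FRAME = ROBOT_MODEL + "/base_link"     # robot's base frame

# homogeneous transformation (4x4) of the basket pose with respect to the
# robot's base frame: identity rotation plus an example translation in metres
# (placeholder values -- adjust to your own design)
T_BASKET = np.array([
    [1.0, 0.0, 0.0,  0.30],
    [0.0, 1.0, 0.0, -0.10],
    [0.0, 0.0, 1.0,  0.05],
    [0.0, 0.0, 0.0,  1.00],
])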
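Finally, the main function and the four objects. Again, this is only a sketch: the three Interbotix constructors are the standard ones from the packages imported above, while InverseKinematics stands in for the IK class from your own library:

def main():
    # robot object: gives access to the PX100 arm and gripper
    bot = InterbotixManipulatorXS(ROBOT_MODEL, "arm", "gripper")

    # cloud interface object: 3D point cloud and depth data generation
    pcl = InterbotixPointCloudInterface()

    # arm tag interface object: AprilTag identification, used to locate the
    # arm with respect to the camera's optical center
    armtag = InterbotixArmTagInterface()

    # inverse kinematics object from our custom library (hypothetical class)
    ik = InverseKinematics(bot)

    # ... mission-planner logic goes here ...

if __name__ == "__main__":
    main()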
RkJQdWJsaXNoZXIy NTc3NzU=