See Eye to Eye

The pipeline of SEE includes:

1. Object isolation. This step differs between the source domain, where ground truth boxes are available and can be used to crop the point cloud, and the target domain, where image instance segmentation with clustering is employed. This point seems to be an essential and particularly challenging one. It can be observed in Figure 2, where loosely fitted instance masks and calibration errors lead to the inclusion of points that do not belong to the main object (vehicles). The solution seems to be a well-calibrated lidar and camera pair with minimal viewpoint misalignment. Hopefully, this can approximate the ideal scenario of ground truth bounding boxes in the target domain, which is discussed later on in the article.

Figure 2: Issues with background points in the KITTI dataset

2. Surface completion. The Ball-Pivoting Algorithm is used to interpolate a triangle mesh and recover the object's geometry. This seems to work well in addressing the issue of partial occlusion (most recurrent in driving datasets).

3. Point sampling. Since points at closer ranges typically yield more confident detections, the triangle meshes obtained in step 2 are upsampled using Poisson disk sampling. This emulates the point density of objects at a closer range, which should generally improve performance unless there were errors in the isolation phase for that particular object. A short code sketch of steps 2 and 3 appears at the end of this section.

The SEE method is validated on three public datasets (Waymo, KITTI, nuScenes) and a novel one (the Baraja Spectrum-Scan™ dataset), on the "Car" or "Vehicle" class. The difference between the public datasets is shown in Figure 3, where it is possible to observe the effect of different types of lidars and scan patterns on ring separation.
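For readers who want to experiment with the geometric steps above, here is a minimal sketch of steps 2 and 3 using Open3D, which ships off-the-shelf implementations of ball-pivoting reconstruction and Poisson disk sampling. This is not the authors' code: the input file name, the normal-estimation parameters, the ball radii and the output point count are illustrative assumptions that would need tuning to a real lidar's point spacing.

import open3d as o3d

# Load an isolated object point cloud, e.g. a vehicle cropped from a lidar scan.
# The file name is a placeholder for illustration only.
pcd = o3d.io.read_point_cloud("isolated_vehicle.pcd")

# Ball-pivoting reconstruction needs per-point normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))

# Step 2 (surface completion): interpolate a triangle mesh with the
# Ball-Pivoting Algorithm. The radii are illustrative values.
radii = o3d.utility.DoubleVector([0.1, 0.2, 0.4])
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

# Step 3 (point sampling): resample the mesh with Poisson disk sampling so the
# object's point density resembles that of an object seen at close range.
dense_pcd = mesh.sample_points_poisson_disk(number_of_points=4096)

print(f"original points: {len(pcd.points)}, resampled points: {len(dense_pcd.points)}")

The resampled cloud can then be fed to the detector in place of the original sparse object points, which is the intuition behind SEE's density normalisation.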