Computer Vision News - October 2021
“A computer vision model can predict some good landing spots,” Stefan explains. “The best method to choose is an AI model. Semantic segmentation can semantically parse the scene and find the best spots. We know the model is 99 per cent accurate, but we still don’t want to hit anyone on the head, so we take those landing spots and put a safety goal around them. We use a second sensor, either the thermal imager or a LiDAR sensor, look at that predicted spot, and validate whether it is really safe. With the thermal imager we can check whether there are people, animals, or vehicles there; those are the most critical things we want to avoid. With the LiDAR sensor, we can estimate whether the scene is geometrically flat and whether there is any dynamic object in sight. With two deterministic passes to validate it, we can then say, okay, this spot is safe.”

This idea is not entirely new in the automated domain, but it has rarely been combined with AI. The model must be validated with a large amount of labeled data, which is expensive. Deploying the model in a different domain, a different country for example, requires domain adaptation, which significantly lowers the cost compared with labeling that new domain from scratch.

“We have a simulation engine where all the flights and the labels can be generated,” Stefan tells us. “We also have a lot of labeled data from companies and are working on how we can transfer this data to different domains. We call it cross-domain AI. That is our core product. We have been working on a pure software product called Visionairy perception software, which has several features for UAVs and aviation.”

Landing Sensor Fusion AI Systems for Autonomous Mobility
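The two-pass validation Stefan describes can be sketched in a few lines. This is a hypothetical illustration, not Spleenlab's actual implementation: the function names, thresholds, and the simple "hot pixel" and "height variance" tests are all assumptions standing in for real thermal-object detection and LiDAR geometry checks.

```python
"""Illustrative sketch of validating an AI-predicted landing spot with two
deterministic sensor passes (thermal + LiDAR), as described in the article.
All names and thresholds are assumptions, not the product's real code."""
import numpy as np

# Assumed thresholds, chosen for illustration only.
HOT_THRESHOLD_C = 30.0   # thermal pixels above this may be a person/animal/vehicle
MAX_HEIGHT_STD_M = 0.05  # max height std-dev for a "geometrically flat" patch

def thermal_clear(thermal_patch: np.ndarray) -> bool:
    """Pass 1 (thermal imager): no warm objects in the candidate patch."""
    return bool(np.all(thermal_patch < HOT_THRESHOLD_C))

def lidar_flat(height_patch: np.ndarray) -> bool:
    """Pass 2 (LiDAR): the patch is geometrically flat (low height variance)."""
    return float(np.std(height_patch)) < MAX_HEIGHT_STD_M

def spot_is_safe(thermal_patch: np.ndarray, height_patch: np.ndarray) -> bool:
    """A model-predicted spot is accepted only if both deterministic passes agree."""
    return thermal_clear(thermal_patch) and lidar_flat(height_patch)
```

The key design point is that the safety decision rests on the deterministic checks, not on the segmentation model's confidence: the model only proposes candidates, and either sensor pass can veto a spot.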