Computer Vision News - December 2020

Experiment 2: This experiment investigates the effect of mixing real and simulated data to train the grasping model, showing that performance improves substantially when simulated data adapted with simulation-to-real methods is included. Specifically, for the first setup (grasping of unseen objects), even with a large available dataset of 580,000 real-world trials, the RL-CycleGAN success rate rises from 87% to 94%. Similar results are obtained on the second setup. (For readers who like to see things in code, a short illustrative sketch of this kind of data mixing follows at the end of this review.)

Experiment 3: Finally, the authors experiment with fine-tuning the grasping models using on-policy real data. For this, the amount of off-policy real data used to train RL-CycleGAN is reduced to 5,000 episodes, and they find that, compared with state-of-the-art methods based only on on-policy data, RL-CycleGAN reaches the same performance with fewer episodes and no domain randomization.

Conclusion: Tested on two different robot grasping setups, RL-CycleGAN achieves impressive performance. This work represents a significant improvement for real-world vision-based robotics. Have a look at the examples of robot grasping performed by RL-CycleGAN and make up your own mind about this interesting work.

We are closing this year’s reviews on research with this exciting insight on robotics and deep learning. We look forward to many more stimulating papers in 2021!
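As promised above, here is a minimal sketch of what mixing real episodes with sim-to-real adapted simulated episodes might look like in practice. It is purely illustrative: the helper names (real_episodes, sim_episodes, adapt_sim_to_real) are our own placeholders, not part of the paper's code, and the 50/50 mixing ratio is just an assumption for the example.

```python
import random

def mixed_batch(real_episodes, sim_episodes, adapt_sim_to_real,
                batch_size=256, real_fraction=0.5):
    """Build one training batch from real and adapted simulated transitions.

    real_episodes / sim_episodes: lists of (obs, action, reward, next_obs) tuples.
    adapt_sim_to_real: a sim-to-real generator (in the paper, the RL-CycleGAN
    generator) that maps simulated images to realistic-looking ones.
    """
    n_real = int(batch_size * real_fraction)
    n_sim = batch_size - n_real

    # Real transitions are used as-is.
    batch = random.sample(real_episodes, n_real)

    # Simulated observations are passed through the sim-to-real generator
    # before being mixed into the same batch.
    for obs, action, reward, next_obs in random.sample(sim_episodes, n_sim):
        batch.append((adapt_sim_to_real(obs), action, reward,
                      adapt_sim_to_real(next_obs)))

    random.shuffle(batch)
    return batch
```

The resulting batch can then be fed to whatever off-policy RL update the grasping model uses; the key idea the experiments highlight is simply that adapted simulated data can stand in for a large fraction of costly real-world trials.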
