Computer Vision News - December 2020

Research

RL-CycleGAN

by Marica Muffoletto

Every month, Computer Vision News selects a research paper to review. For the end of the year, we had plenty of choice given the recent works presented at MICCAI and CVPR; we decided to take inspiration from the latter and review the paper RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real. We are indebted to the authors (Kanishka Rao, Chris Harris, Alex Irpan, Sergey Levine, Julian Ibarz, Mohi Khansari) for allowing us to use their images to illustrate this review. You can find their paper at this link.

Introduction:

This paper focuses on the problem of training deep neural networks to perform complex tasks, such as grasping objects. To this end, Reinforcement Learning (RL) is often used to learn visual representations, but this can require large amounts of task-specific data, which are hard to obtain. A possible solution is to train such systems in simulation and then transfer the learned representations to real problems. Of course, the simulation-to-reality gap needs to be taken into consideration, and it is usually addressed through manual, task-specific engineering. To make this step automatic, the authors propose a method that employs generative models to translate simulated images into realistic ones, combined with an RL-scene consistency loss for image translation that enforces that the Q-values predicted by an RL-trained Q-function are invariant under the domain adaptation (DA) transformation; a sketch of this idea follows below. The method is aptly named RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real, an extension of the CycleGAN for unpaired image-to-image translation introduced here.
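To make the consistency constraint concrete, here is a minimal sketch of how such a loss could look in PyTorch. The names generator_sim2real and q_network are hypothetical stand-ins for the paper's CycleGAN generator and RL-trained Q-function; this illustrates the invariance idea described above, not the authors' actual implementation.

```python
import torch.nn.functional as F

# Hypothetical components (names are ours, not from the paper):
#   generator_sim2real: CycleGAN generator translating simulated -> "realistic" images
#   q_network:          RL-trained Q-function mapping (image, action) -> Q-value
def rl_scene_consistency_loss(generator_sim2real, q_network, sim_images, actions):
    """Penalize changes in predicted Q-values caused by the sim-to-real translation."""
    adapted_images = generator_sim2real(sim_images)  # translated "realistic" images
    q_sim = q_network(sim_images, actions)           # Q-values on simulated images
    q_adapted = q_network(adapted_images, actions)   # Q-values after translation
    # Zero when the translation leaves the task-relevant content of the scene intact
    return F.mse_loss(q_adapted, q_sim)
```

Intuitively, the GAN is free to restyle textures and lighting, but any change that alters what the Q-function predicts about the scene (for example, moving or removing a graspable object) is penalized.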
