Computer Vision News - December 2022
Chelsea Finn

In simulation, if the robot falls down, you can just reset it and have it try to run again. But in the real world, you can't just reset the robot back to where it started. If it fell down, you need to actually pick it back up. We've been trying to develop algorithms that allow robots to learn autonomously, where if they fall down, they can get back up, or if they push an object into the corner of the workspace, they're able to get it back to the main part of the workspace. This is one problem we've been looking at. At first, it can be very tedious to train algorithms with resets, but when we try to develop algorithms that work without resets, there have been times when we run an algorithm in simulation and it works beautifully, then we put it on the robot, and it's just not working or learning at all; it gets stuck. It's hard to tell what the difference is between the learning process in simulation and the learning process in the real world, and to understand why it's not working in the real world.

You said before that the credit goes to the software when it works. Should we also credit the software when it does not work?

Yeah, absolutely. I think there are a number of mismatches between simulation and the real world that could be the cause. One could be that in simulation you can read camera images instantaneously, whereas in the real world there's latency, there's a delay. If your algorithm isn't prepared to handle that delay in the camera images, then it might not work, and it will fail in a way that doesn't tell you the camera images are delayed and that this is why it's not working. You kind of need to figure out what the issue is.

The public imagines a robot as a humanoid with arms, a head, and eyes, which isn't always true. What kind of machines are you working on?
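Finn's description of learning without resets maps onto a simple loop structure. The Python toy below is purely illustrative, not her lab's actual method: a hypothetical 1-D workspace, a placeholder policy, and a scripted recovery behavior stand in for the learned recovery she describes. The key point is that when the agent reaches a bad state, it recovers instead of calling a reset.

```python
import random

# Illustrative toy only: a 1-D "workspace" where the agent alternates
# between two goals, and a stuck region stands in for a fallen robot or
# an object pushed into a corner.
GOAL, STUCK_LIMIT = 5, 8

def policy(pos, goal):
    # Placeholder policy: a noisy step toward the goal. A learning agent
    # would also update its parameters from each transition here.
    step = 1 if goal > pos else -1
    return step if random.random() < 0.7 else -step

def recover(pos):
    # Scripted recovery: walk back toward the center of the workspace,
    # the analog of a robot standing back up after a fall instead of a
    # human picking it up and resetting it.
    while abs(pos) > GOAL:
        pos += -1 if pos > 0 else 1
    return pos

def train_without_resets(steps=10_000):
    pos, goal, successes = 0, GOAL, 0
    for _ in range(steps):
        pos += policy(pos, goal)
        if pos == goal:
            successes += 1
            goal = -goal          # alternate goals so no reset is needed
        elif abs(pos) > STUCK_LIMIT:
            pos = recover(pos)    # real-world substitute for env.reset()
    return successes

if __name__ == "__main__":
    print("tasks completed with zero manual resets:", train_without_resets())
```

The structural point is the `recover` branch: where a simulator would simply call a reset, a real-world training loop has to produce actions that restore a workable state on its own.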
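Her camera-latency example also suggests one common mitigation: training against a simulated camera that delays frames the way a real one does. The wrapper below is a generic sketch, not any particular simulator's API; `render_fn` and `delay_steps` are assumed names introduced for illustration.

```python
from collections import deque

import numpy as np

class DelayedCamera:
    """Wrap an instantaneous simulated camera so reads lag by a fixed
    number of ticks, mimicking real-world camera latency."""

    def __init__(self, render_fn, delay_steps=3):
        self.render_fn = render_fn
        # Pre-fill with stale copies of the first frame so early reads
        # return something, just as a real pipeline starts out stale.
        self.buffer = deque([render_fn()] * delay_steps)

    def read(self):
        # Enqueue the freshest simulated frame, hand back the oldest:
        # the policy always sees an image delay_steps ticks old.
        self.buffer.append(self.render_fn())
        return self.buffer.popleft()

if __name__ == "__main__":
    clock = {"tick": 0}

    def fake_render():
        # Stand-in for a simulator's instantaneous render call.
        clock["tick"] += 1
        return np.full((2, 2), clock["tick"])

    cam = DelayedCamera(fake_render, delay_steps=3)
    for _ in range(6):
        print(int(cam.read()[0, 0]))   # lags the freshest frame by 3 ticks
```

Training with such a wrapper means the failure Finn describes, an algorithm silently unable to cope with delayed images, can show up in simulation, where it is cheap to debug, rather than only on the robot.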