ICCV Daily 2021 - Thursday
“We were able to show an improvement on the kind of normal scenes people use NeRF for,” Jon reveals. “The main improvement is that NeRF would fail catastrophically in certain circumstances, and this was whenever there was significant scale variation. If I take a bunch of pictures of an object, in NeRF we would always take these pictures from the same distance. That worked great because whenever we saw something it was at the same distance, so NeRF’s issues with aliasing and scale didn’t pose a problem for us. But then if we started moving the camera in or out, this became a big problem. If we zoomed the camera in too far, we were shooting these small rays that corresponded to small pixels, but the model didn’t know the pixels were small, so we would see ghosting and weirdness. If the camera was far back, we’d shoot these rays that corresponded to big pixels, but it didn’t know the pixels were big and we’d get bad aliasing. You can see this in the video results for the paper.”
“For big regions, we didn’t want to descend all the way down into the tree,” Jon explains. “The way we got the math working for this is we said we have this volume of space, and we don’t want to just featurize one point in that space; we want to featurize the entire volume. The way we’re going to do that is we’re going to marginalize over the volume. We’re going to integrate out the entire volume of space. We compute what we call the integrated positional encoding. This is the expected positional encoding as you integrate over this volume. It is a hard thing to do, but if you approximate this conical frustum as just a standard multivariate Gaussian, this integration has a closed form.”
The result is a nice map that simplifies out, and you can say, ‘I’m going to featurize this conical sub-frustum using these positional encodings.’ The math simplifies greatly. If the volume is very big, then the high frequencies get zeroed out, and if the volume is very small, the high frequencies stick around. The neural network that then takes these features has a way of saying, ‘I know I’m exactly here,’ or it can say, ‘I know I’m somewhere in this whole space, but I don’t know where.’ The model can reason about scale naturally. It knows if something is big, it knows if something is small, and importantly, if something is big, it doesn’t have false information about where it might be at these higher frequencies. They just don’t exist. The frequencies get completely deleted.
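The closed form Jon describes follows from a standard identity: if x is Gaussian with mean mu and variance var, then the expected value of sin(2^l x) is sin(2^l mu) scaled by exp(-4^l var / 2), and likewise for cosine. A minimal NumPy sketch of this idea (function name, frequency count, and shapes are illustrative, not the paper's actual implementation) shows how large volumes damp high frequencies toward zero while small volumes leave them intact:

```python
import numpy as np

def integrated_positional_encoding(mu, var, num_freqs=4):
    """Expected positional encoding of a Gaussian region N(mu, var).

    For x ~ N(mu, var): E[sin(2^l x)] = sin(2^l mu) * exp(-4^l var / 2),
    and similarly for cos, so the integral over the volume is closed-form.
    mu, var: arrays of shape (D,) giving per-dimension mean and variance.
    """
    freqs = 2.0 ** np.arange(num_freqs)        # frequencies 1, 2, 4, 8, ...
    scaled_mu = freqs[:, None] * mu            # (L, D)
    scaled_var = freqs[:, None] ** 2 * var     # variance scales as 4^l
    damping = np.exp(-0.5 * scaled_var)        # big var -> damping ~ 0
    return np.concatenate([damping * np.sin(scaled_mu),
                           damping * np.cos(scaled_mu)], axis=-1).ravel()

# A big volume (large variance) zeroes out the high frequencies;
# a tiny volume keeps them, matching the scale behavior described above.
big = integrated_positional_encoding(np.array([0.7]), np.array([10.0]))
small = integrated_positional_encoding(np.array([0.7]), np.array([1e-4]))
```

With a large variance the highest-frequency entries of `big` are driven essentially to zero, so the network receives no (false) fine-position signal for that region, which is exactly the scale reasoning described in the quote.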