CVPR Daily - Wednesday
Leveraging 2D Data to Learn Textured 3D Mesh Generation

Paul Henderson is a British postdoc at IST Austria. His work is about using 2D data to learn textured 3D shapes. He speaks to us ahead of his oral presentation today (Wednesday).

There are currently few datasets that contain 3D data, whereas there are many datasets of images. This work proposes learning a generative model that can generate 3D colored shapes using 2D images as data, without any 3D annotations. To do this, the model is trained to explain the images as a 3D object rendered over a 2D background. These 3D objects are forced to lie on a low-dimensional manifold. The model is trained like a variational autoencoder: it is forced to reconstruct its training images and to minimize the divergence between the distribution of reconstructed shapes and some prior.

The work also introduces a new parametrization for 3D shapes that is designed to ensure they cannot contain intersecting faces. If you generate the vertices of a mesh with a neural network, you normally end up with lots of triangles intersecting each other. To define a parametrization which ensures that cannot happen, the method sets up a linear programming problem and solves it inside the neural network.

Paul explains: “The training process is a variational autoencoder, which means it uses a CNN to encode the input images, and then in order to reconstruct them, it uses another neural network to produce mesh vertices. Then it uses a differentiable renderer to render that 3D mesh over the background into a 2D image.”

One real-world benefit of this work is 3D content creation. Many films and games need 3D assets generated. Normally, a human artist would manually build these shapes using…

“A generative model that can generate 3D colored shapes using 2D images as data without any 3D annotations.”
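To make the pipeline Paul describes more concrete, here is a minimal sketch of that kind of setup: a CNN encoder producing a latent code, a decoder producing mesh vertices and colors, a differentiable renderer compositing the result over the background, and a VAE-style loss combining reconstruction with a KL divergence to a prior. This is not the authors' code; the layer sizes, the vertex count, and the placeholder renderer are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """CNN that encodes an image into the mean and log-variance of a latent code."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)

    def forward(self, img):
        h = self.conv(img)
        return self.to_mu(h), self.to_logvar(h)


class MeshDecoder(nn.Module):
    """Maps a latent code to mesh vertex positions and per-vertex colors."""
    def __init__(self, latent_dim=64, num_vertices=642):
        super().__init__()
        self.num_vertices = num_vertices
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_vertices * 6),  # xyz + rgb per vertex
        )

    def forward(self, z):
        out = self.net(z).view(-1, self.num_vertices, 6)
        return out[..., :3], torch.sigmoid(out[..., 3:])  # vertices, colors


def render_over_background(vertices, colors, background):
    """Stand-in for a real differentiable mesh renderer (e.g. a soft rasterizer).
    It only blends the mean vertex color over the background so the sketch runs
    and gradients reach the decoder; a real renderer would rasterize the faces."""
    fg = colors.mean(dim=1)[:, :, None, None].expand_as(background)
    return 0.5 * fg + 0.5 * background  # `vertices` would drive the rasterization


def vae_loss(img, background, encoder, decoder):
    mu, logvar = encoder(img)
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
    vertices, colors = decoder(z)
    recon = render_over_background(vertices, colors, background)
    recon_loss = F.mse_loss(recon, img)                            # reconstruct the image
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # divergence to prior
    return recon_loss + kl


# Toy usage: random images and backgrounds in place of a real dataset.
encoder, decoder = Encoder(), MeshDecoder()
loss = vae_loss(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64), encoder, decoder)
loss.backward()
```

The sketch only shows how the encoder, decoder, renderer and loss fit together; in the actual system the differentiable renderer and the mesh parametrization carry the real weight.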
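The article does not spell out the non-intersection parametrization, but one way to read “solving a linear program inside the network” is as projecting raw network outputs onto a feasible set defined by linear constraints, so the final values always satisfy them. The toy example below illustrates only that idea; the constraint matrix is invented, it is not the paper's formulation, and this version ignores differentiability.

```python
import numpy as np
from scipy.optimize import linprog

# Toy feasible set: per-vertex offsets d with |d0 - d1| <= 0.5 and -1 <= d <= 1,
# standing in for "neighbouring faces cannot cross". Purely illustrative.
A_ub = np.array([[1.0, -1.0],
                 [-1.0, 1.0]])
b_ub = np.array([0.5, 0.5])

raw = np.array([0.9, -0.2])        # raw offsets proposed by the network

# Pick the feasible offsets that best align with the raw proposal by maximizing
# raw . d, i.e. minimizing -raw . d, subject to the linear constraints.
res = linprog(c=-raw, A_ub=A_ub, b_ub=b_ub, bounds=[(-1.0, 1.0)] * 2, method="highs")
safe_offsets = res.x               # guaranteed to satisfy every constraint
print(safe_offsets)                # [1.0, 0.5] for this toy instance
```

In the real method, the linear program would be built from the mesh itself and solved in a way that keeps the whole pipeline trainable end to end.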