CVPR Daily - Wednesday
Which was enough to impress you?

Which was enough to impress us, but also, we knew that could never succeed. There's a compounding probability of failure, because the system did the thing that all computer vision engineers love, which is: if I knew what I was doing 30 milliseconds ago, then surely I can use that information to work out what I'm doing now. What we did was we didn't use that, and that was the crazy thing.

I now have a question for the engineer that you feel you are. What is the next boundary that you would love to break?

I would love us to be able to train much smaller neural networks, because I feel very uncomfortable whenever I see two floating-point numbers getting multiplied together. Making them 16-bit floats doesn't make me much more comfortable.

Well, that cost is considered acceptable today.

Yes, but someday we're going to have to do it with much less compute. Obviously, today the HoloLens is a head-mounted device. We want to think about the number of milliwatts it takes to compute an answer to some question. It's happening. One of my colleagues, Manik Varma in India, has a VGG-net kind of architecture which is one kilobyte in size, whereas you normally think of neural networks as having millions of parameters.

In the medical world, there are diseases which are so rare that you never have enough samples to put them into a large state-of-the-art neural network.

Absolutely, and that's one of the focuses of my research group, All Data AI. What that means is big data and small data. Take your medical example: you might have two training examples of this important disease, and maybe millions of examples of other situations. How do you train using tiny amounts of data? We know the answer to that is essentially to use Bayesian methods. Bayesian methods are great when you lack training data. Sometimes you see a world where there's data everywhere. It's like the phrase, 'Water, water everywhere, and not a drop to drink.' You have data everywhere, but only two training examples. There's a whole question there about what we can do with semi-supervised learning and unsupervised learning, and I think our strength in variational inference almost looks like it's being overshadowed by this deep learning revolution, but it's crucially important for real-world problems.

Now a question for the human behind the engineer. What boundary would you like to break in this community in order to work better together?

Someday we're going to have to figure…
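To make the discomfort about floating-point multiplies concrete, here is a minimal sketch of weight-only int8 quantization in NumPy. The per-tensor scaling scheme and the toy matrix sizes are illustrative assumptions only; this is not HoloLens code or Manik Varma's one-kilobyte architecture.

    import numpy as np

    # Minimal post-training quantization sketch (illustrative only):
    # map float32 weights to int8 with one per-tensor scale, then
    # approximate y = W @ x using integer weights plus a single rescale.

    rng = np.random.default_rng(0)
    W = rng.standard_normal((4, 8)).astype(np.float32)   # toy weight matrix
    x = rng.standard_normal(8).astype(np.float32)        # toy input vector

    scale = np.abs(W).max() / 127.0                      # per-tensor scale factor
    W_q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)

    y_float = W @ x                                      # full-precision reference
    y_quant = (W_q.astype(np.int32) @ x) * scale         # int8 weights, rescaled once

    print(np.max(np.abs(y_float - y_quant)))             # small quantization error

Storing 8-bit weights instead of 32-bit floats shrinks the model by a factor of four, and keeping the bulk of the arithmetic in integers is the kind of milliwatts-per-answer saving the conversation is pointing at.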
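The "two training examples" point can be made concrete with the simplest Bayesian model there is. In this Beta-Bernoulli sketch the prior parameters are assumptions chosen for illustration, not the All Data AI group's actual method; it shows how a prior keeps an estimate sensible when the data is tiny.

    # Bayesian estimation with only two training examples (illustrative).
    # Beta-Bernoulli: estimate the rate of a rare disease marker after
    # observing two positive cases, under a prior that says it is rare.

    data = [1, 1]                    # two observed positive cases
    alpha0, beta0 = 1.0, 9.0         # assumed prior: marker present in ~10% of cases

    alpha = alpha0 + sum(data)               # posterior alpha = 3
    beta = beta0 + len(data) - sum(data)     # posterior beta = 9

    mle = sum(data) / len(data)              # maximum likelihood: 1.0 (overconfident)
    posterior_mean = alpha / (alpha + beta)  # 3/12 = 0.25 (prior tempers two samples)

    print(f"MLE: {mle:.2f}, posterior mean: {posterior_mean:.2f}")

The maximum-likelihood estimate jumps to certainty from two samples, while the posterior mean moves only modestly away from the prior, which is exactly why Bayesian methods are great when you lack training data.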