Computer Vision News - August 2019

TensorFlow Eager Execution

As you can see, eager execution makes things a lot easier! In the rest of this article, I will implement an MNIST classifier to show you some cool things we can do with eager execution. Hopefully, after reading this article you will be able to write your own networks using eager execution. We will now dive into building an MNIST classifier. We will not focus on achieving state-of-the-art accuracy, but rather on the eager execution style of writing.

We begin by enabling eager execution in TensorFlow, which takes just one line:

import tensorflow as tf
tf.enable_eager_execution()

Next, as all practical pipelines require, we will use the sklearn API to load the digits dataset. We will also apply some basic preprocessing to the data to define the features and the target. It looks like this:

from sklearn.datasets import load_digits

digits = load_digits()
X = digits.data
y = digits.target
y = tf.one_hot(y, 10, dtype='float64')
images = digits.images

Now for the fun part. We will explain each of the special features we use until we reach the complete network. Eager execution includes automatic differentiation, which means we can define any function we desire and compute its gradient with a simple command. In contrast to the computation-graph scheme, the eager gradient calculation defines the gradient as a simple Python function. This, of course, gives us flexibility and the opportunity to play with the gradient; we might even use it to build our own optimizers. In the following example, we define a loss function and then a function that calculates the gradient of this loss:
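A minimal sketch of such a pair of functions, assuming a Keras model named model and the one-hot targets y defined above (the names loss and grad are illustrative choices, not fixed by the article), could look like this:

def loss(model, inputs, targets):
    # Forward pass in eager mode: calling the model returns logits directly.
    logits = model(inputs)
    # Cross-entropy between the one-hot targets and the predicted logits.
    return tf.losses.softmax_cross_entropy(onehot_labels=targets, logits=logits)

def grad(model, inputs, targets):
    # GradientTape records the forward computation so gradients can be taken.
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    # Differentiate the loss with respect to every trainable variable.
    return tape.gradient(loss_value, model.trainable_variables)

The returned gradients are plain tensors, so they can be inspected, clipped, or otherwise modified before being passed to an optimizer's apply_gradients call, which is exactly the flexibility mentioned above.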
