At training time this will let us iterate over batches and train our net with SGD. Now we are ready to build our network. This can be done in many different ways; we could even define the network using the Keras API. Here I will define a very simple network for the sake of the example, using track_layer to add Dense layers to the network. Our network code is the following:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

class MNISTModel(tfe.Network):
    def __init__(self):
        super(MNISTModel, self).__init__()
        # track_layer registers each layer's variables with the network
        self.layer1 = self.track_layer(tf.layers.Dense(units=100))
        self.layer2 = self.track_layer(tf.layers.Dense(units=100))
        self.layer3 = self.track_layer(tf.layers.Dense(units=50))
        self.layer4 = self.track_layer(tf.layers.Dense(units=10))

    def call(self, input):
        # Forward pass: feed the input through the four Dense layers
        result = self.layer1(input)
        result = self.layer2(result)
        result = self.layer3(result)
        result = self.layer4(result)
        return result

model = MNISTModel()

In the last line I created an instance of the class, which we will use later. Note that we could add any number of layers to the above class, and we could also define custom activations, convolutions, dilations and other features.

Finally, we are ready to begin training our network. Given everything we have defined up to now, we can iterate through the batches, compute the loss function and the gradients, and then minimize our objective. We do this by defining the optimizer, the accuracy object and the step object. Then we iterate for 500 epochs; in each epoch we go through all of our batches and perform the optimization step. We also track the change in our loss and accuracy and print it once in a few iterations. It looks like this:
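The training loop itself is not reproduced on this page, so what follows is a minimal sketch of such a loop under the same TF 1.x eager API. The dataset variable (the tf.data.Dataset of image/label batches built earlier), the learning rate and the logging interval are assumptions for illustration, not the article's exact code:

import tensorflow as tf
import tensorflow.contrib.eager as tfe

# Optimizer, accuracy metric and global step, as described above
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)  # assumed learning rate
accuracy = tfe.metrics.Accuracy()
step = tf.train.get_or_create_global_step()

for epoch in range(500):
    # 'dataset' stands for the tf.data.Dataset of batches built earlier
    for images, labels in tfe.Iterator(dataset):
        # Record the forward pass so the tape can differentiate the loss
        with tf.GradientTape() as tape:
            logits = model(images)
            loss = tf.losses.sparse_softmax_cross_entropy(labels=labels,
                                                          logits=logits)
        # Compute gradients w.r.t. the network's tracked variables and apply SGD
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(zip(grads, model.variables), global_step=step)
        # Update the running accuracy with this batch's predictions
        accuracy(tf.cast(labels, tf.int64),
                 tf.argmax(logits, axis=1, output_type=tf.int64))
    if epoch % 50 == 0:  # print once in a few iterations
        print('Epoch %d: loss %.4f, accuracy %.4f'
              % (epoch, loss.numpy(), accuracy.result().numpy()))

Because everything executes eagerly, loss and accuracy here are ordinary values that can be printed as the loop runs, which is exactly what makes this style so convenient to debug.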