Computer Vision News - August 2019

def loss(model, X, desired_y):
    predicted_y = model(X)
    return tf.reduce_mean(tf.square(predicted_y - desired_y))

def grad(model, inputs, targets):
    with tf.GradientTape() as tape:
        loss_value = loss(model, inputs, targets)
    return loss_value, tape.gradient(loss_value, model.trainable_variables)

Above, model(X) is our neural network, which we will build later in this article. The grad() function we defined returns the loss value as well as the gradients at the given point. Note that with tensorflow.contrib.eager we can also compute the gradient of any Python function, not necessarily one containing TensorFlow variables (a short sketch of this appears at the end of this section).

Another cool feature of eager execution is the iterator. When eager execution is enabled, dataset objects support iteration. This becomes very useful when we want to split our data into batches: instead of manually iterating through the data and splitting it ourselves, we can use the dataset object to get access to the batches. For example, in the code snippet below we iterate through the data with the iterator object and, for each batch, print the percentage of samples belonging to the first class:

train_dataset = tf.data.Dataset.from_tensor_slices((X, y))
for xBatch, yBatch in tfe.Iterator(train_dataset.batch(32)):
    print('Percentage of first class: ', tf.reduce_mean(yBatch[:, 0]).numpy() * 100, '%')
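To see how these pieces fit together, here is a minimal sketch of one pass over the data in eager mode, reusing the loss() and grad() functions defined above. The single dense-layer model, the random data and the optimizer settings are our own illustrative choices, not the network built later in the article.

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tf.enable_eager_execution()

# Assumes the loss() and grad() functions from earlier in this article are defined.
# Illustrative model and data: one dense layer mapping 10 features to 3 one-hot classes.
model = tf.keras.Sequential([tf.keras.layers.Dense(3, activation='softmax')])
X = tf.random_normal([256, 10])
y = tf.one_hot(tf.random_uniform([256], maxval=3, dtype=tf.int32), depth=3)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

train_dataset = tf.data.Dataset.from_tensor_slices((X, y))
for xBatch, yBatch in tfe.Iterator(train_dataset.batch(32)):
    # grad() returns the batch loss and the gradients with respect to
    # the model's trainable variables.
    loss_value, grads = grad(model, xBatch, yBatch)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    print('Batch loss:', loss_value.numpy())

Each call to apply_gradients() updates the weights immediately: no session, no graph building, just ordinary Python control flow.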
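As for taking gradients of an arbitrary Python function, the snippet below is a small sketch using tfe.gradients_function, which wraps a plain Python function and returns a new function that computes its derivative; the function f here is just a made-up example.

import tensorflow as tf
import tensorflow.contrib.eager as tfe

tf.enable_eager_execution()

# An ordinary Python function with no TensorFlow variables in it.
def f(x):
    return x ** 2 + 3.0 * x

# grad_f(x) returns the derivative df/dx evaluated at x.
grad_f = tfe.gradients_function(f)

print(f(2.0))        # 10.0
print(grad_f(2.0))   # derivative 2*x + 3 at x = 2, i.e. 7.0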
