Computer Vision News - November 2022

Computer Vision Tool

Shuffling the dataset

Of course, we need to add a variable of randomness, so let's shuffle the dataset!

```python
X_trains, Y_trains = unison_shuffled_copies(X_traine, Y_traine)
X_tests, Y_tests = unison_shuffled_copies(X_teste, Y_teste)

# Provide the same seed and keyword arguments to the fit and flow methods
seed = rand_sv
batch_s = 120

strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")

model = get_unet_light(img_rows=128, img_cols=128)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=log_dice_loss,
    metrics=[dice_coef],
)

# Understand the model
model.summary()
```

I won't include the output for brevity, but you'll see all the layers. Notice the shape of the test set:

```python
X_tests.shape  # (2380, 128, 128, 3)
```

```python
# Model training step
if train:
    history = model.fit(
        X_trains, Y_trains,
        epochs=Epochs,
        batch_size=128,
    )
```

Saving the weights

An important part which you should always remember is to save the weights during training!

```python
if train:
    model.load_weights(os.path.join('/content/drive/My Drive/Medical Image/cnn_term_paper/model2_cup_last_check_point.hdf5'))

# Evaluate the model with the test data
result = model.evaluate(X_tests[0:700, :, :, :], Y_tests[0:700, :, :, :])
print("log dice loss for test set = ", result[0], ' ||| ',
      "dice coefficient (accuracy) for test set = ", result[1])
```
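The `unison_shuffled_copies` helper used above is not defined in this excerpt. A minimal sketch of what it likely does, assuming NumPy arrays, is to apply one shared random permutation to both arrays so each image stays aligned with its mask:

```python
import numpy as np

def unison_shuffled_copies(a, b):
    # Shuffle two arrays with the SAME random permutation so that
    # image-mask pairs stay aligned after shuffling.
    assert len(a) == len(b)
    p = np.random.permutation(len(a))
    return a[p], b[p]
```

Because the same index permutation `p` is applied to both arrays, element i of the shuffled images still corresponds to element i of the shuffled masks.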
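The custom `dice_coef` metric and `log_dice_loss` loss passed to `model.compile` are also not shown here. As an illustration only (a NumPy sketch, not the article's Keras-backend implementation), the Dice coefficient measures overlap between predicted and ground-truth masks, and the loss is its negative log:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # Dice coefficient: 2*|A ∩ B| / (|A| + |B|),
    # with a smoothing term to avoid division by zero on empty masks.
    y_true_f = y_true.flatten()
    y_pred_f = y_pred.flatten()
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

def log_dice_loss(y_true, y_pred):
    # Taking -log of the Dice score gives a loss that is 0 at perfect
    # overlap and grows as overlap shrinks.
    return -np.log(dice_coef(y_true, y_pred))
```

A Keras version would use `tf.keras.backend` ops instead of NumPy so the loss stays differentiable, but the formula is the same.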
