Computer Vision News - November 2021
... on semi-supervised learning at UCL

• Proxy-labelling (e.g. self-training, co-training)
Proxy-labelling aims to generate pseudo-labels for unlabelled data in order to enlarge the dataset and obtain more training samples. Self-training, which I implemented, first trains a model on the labelled data. Pseudo-labels are then generated iteratively for (portions of) the unlabelled images and used in the next training iteration. Co-training uses two (or more) models, trained simultaneously, which generate pseudo-labels for each other's unlabelled data after the initial training on labelled data. The models are expected to agree on correct predictions and disagree on errors.

• Consistency regularisation (e.g. temporal ensembling, mean teacher)
Consistency regularisation rests on the assumption that different perturbations of the same input should produce the same output. Instead of treating predictions as ground truth, the distance between outputs is minimised to enforce consistency.

• Proxy-labelling with hybrid methods (e.g. MixMatch, FixMatch)
MixMatch combines entropy minimisation and consistency regularisation. FixMatch first predicts pseudo-labels from weakly augmented data and then uses them as the ground truth for strongly augmented versions of the same images.

• GAN (e.g. SGAN)
Semi-Supervised Generative Adversarial Networks (SGAN) consist of a generator network and a discriminator network. The generator produces synthetic samples, while the discriminator learns to classify its inputs into the real classes plus an additional "fake" class.

[Figure: Semi-supervised learning using a student-teacher approach with consistency loss and exponential moving average (EMA).]
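The self-training loop described above can be sketched in a few lines. This is a toy illustration, not the implementation from the article: it stands in a nearest-centroid "model" for the real network, and the function names (`fit_centroids`, `self_train`), the confidence threshold, and the number of rounds are all assumptions for the sketch.

```python
import numpy as np

def fit_centroids(X, y):
    # Toy stand-in for a real model: one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_proba(centroids, X):
    # Softmax over negative distances as a stand-in for class confidence.
    d = np.stack([np.linalg.norm(X - mu, axis=1) for mu in centroids.values()])
    e = np.exp(-d)
    return (e / e.sum(axis=0)).T  # shape (n_samples, n_classes)

def self_train(X_lab, y_lab, X_unlab, threshold=0.8, rounds=3):
    # 1) Train on labelled data; 2) pseudo-label confident unlabelled
    # samples; 3) add them to the training set and repeat.
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        model = fit_centroids(X, y)
        if len(pool) == 0:
            break
        proba = predict_proba(model, pool)
        conf, pred = proba.max(axis=1), proba.argmax(axis=1)
        keep = conf >= threshold
        if not keep.any():
            break
        # Promote confident predictions to pseudo-labels.
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, pred[keep]])
        pool = pool[~keep]
    return fit_centroids(X, y)
```

The threshold matters: only predictions the current model is confident about are promoted, which limits (but does not eliminate) the risk of training on its own mistakes.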
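The student-teacher idea with EMA mentioned above reduces to two small operations, sketched here in plain numpy. This is a minimal illustration of the mechanism, not the article's code; the parameter names and the smoothing factor `alpha` are assumptions.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    # Teacher parameters track an exponential moving average of the
    # student's parameters, instead of being trained directly.
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

def consistency_loss(p_student, p_teacher):
    # Penalise disagreement between student and teacher predictions (MSE),
    # rather than treating either prediction as ground truth.
    return float(np.mean((np.asarray(p_student) - np.asarray(p_teacher)) ** 2))
```

After each training step the student is updated by gradient descent, the teacher by `ema_update`; the consistency loss pulls the student's predictions on perturbed inputs towards the more stable teacher's.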
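The FixMatch recipe for unlabelled data (pseudo-label from the weak view, supervise the strong view) can be sketched as a single loss function. Again a hedged toy: real FixMatch works on logits inside a full training loop, and the function name and threshold `tau` here are illustrative assumptions.

```python
import numpy as np

def fixmatch_unlabelled_loss(p_weak, p_strong, tau=0.95):
    # p_weak, p_strong: (n, C) predicted class probabilities for weakly
    # and strongly augmented views of the same unlabelled images.
    p_weak, p_strong = np.asarray(p_weak), np.asarray(p_strong)
    conf = p_weak.max(axis=1)        # confidence of the weak-view prediction
    pseudo = p_weak.argmax(axis=1)   # hard pseudo-label from the weak view
    mask = conf >= tau               # keep only confident pseudo-labels
    if not mask.any():
        return 0.0
    # Cross-entropy of the strong-view prediction against the pseudo-label.
    ce = -np.log(p_strong[mask, pseudo[mask]] + 1e-12)
    return float(ce.mean())
```

Low-confidence samples contribute nothing, so early in training, when predictions are unreliable, the unlabelled loss stays small and grows only as the model becomes more certain.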