Computer Vision News - July 2018
Jingya explains that previous re-identification work has focused on the supervised setting, which needs a large amount of pairwise labelled data. In this work, she and her co-authors jointly learn the attribute and identity spaces to obtain better feature representations for person re-identification. In terms of challenges, she says that because this work uses surveillance video, it is unlike most computer vision work, which relies on well-annotated, well-structured data: here the setting is unsupervised. Surveillance video also suffers from poor image quality, the background is often blurred, and viewing conditions vary across different camera locations. She adds that person re-identification is more challenging than facial recognition.

Jingya explains more about the computer vision methods used: "The first thing we introduced was a progressive knowledge fusion mechanism via an encoder-decoder intermediate space. Second, we proposed a normal domain adaptation method, but with a consistency scheme, so that we can transfer the knowledge in the attribute space. The reason we transfer the knowledge in the attribute space is that, for person re-identification, the ID labels from different domains and datasets are independent – they have no overlap – but in our study we want to transfer the knowledge. This is an open-set recognition problem. In our work, we introduce the attribute space because it is a more uniform space for transferring knowledge, since the datasets share the most common descriptions."

The next step is to deploy a better domain adaptation model. Since this is an open-set domain adaptation problem, it is very challenging in the re-identification setting, and Jingya would like to explore that aspect further.

Jingya Wang presented her work on a poster at CVPR on June 19.
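To make the joint attribute-identity idea more concrete, here is a minimal PyTorch sketch of the general scheme Jingya describes: a shared backbone feeds an identity classifier and an attribute predictor, and a small encoder-decoder maps identity features into the attribute space so the two branches can be tied together by a consistency term. All module names, dimensions, and loss weights below are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointAttributeIdentityNet(nn.Module):
    """Illustrative sketch: shared features feed an identity head and an
    attribute head; an encoder-decoder projects identity features into the
    attribute space so the two branches can be aligned. All sizes here are
    assumptions for illustration, not the paper's configuration."""

    def __init__(self, in_dim=2048, feat_dim=256, num_ids=751, num_attrs=27):
        super().__init__()
        # Stand-in for a CNN backbone's pooled features -> embedding.
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.id_head = nn.Linear(feat_dim, num_ids)      # source-domain IDs
        self.attr_head = nn.Linear(feat_dim, num_attrs)  # shared attributes
        # Encoder-decoder "intermediate space" from identity features to
        # attribute predictions (the knowledge-fusion idea, much simplified).
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.decoder = nn.Linear(64, num_attrs)

    def forward(self, x):
        f = self.backbone(x)
        return self.id_head(f), self.attr_head(f), self.decoder(self.encoder(f))

def joint_loss(id_logits, attr_logits, attr_from_id, id_labels, attr_labels,
               w_attr=1.0, w_cons=1.0):
    # Identity classification on the labelled source domain.
    loss_id = F.cross_entropy(id_logits, id_labels)
    # Multi-label attribute prediction; attributes are shared across domains.
    loss_attr = F.binary_cross_entropy_with_logits(attr_logits, attr_labels)
    # Consistency: attribute estimates derived from identity features should
    # agree with the attribute branch. This term needs no identity labels,
    # so in principle it can also be applied on an unlabelled target domain.
    loss_cons = F.mse_loss(torch.sigmoid(attr_from_id),
                           torch.sigmoid(attr_logits).detach())
    return loss_id + w_attr * loss_attr + w_cons * loss_cons

# Tiny smoke test on random data.
model = JointAttributeIdentityNet()
x = torch.randn(8, 2048)                       # dummy pooled image features
ids = torch.randint(0, 751, (8,))              # identity labels (source only)
attrs = torch.randint(0, 2, (8, 27)).float()   # binary attribute labels
joint_loss(*model(x), ids, attrs).backward()
```

Because the attribute branch and the consistency term need no identity labels, an objective of this shape can keep training on an unlabelled target domain, which is what makes the attribute space a convenient bridge between datasets with disjoint identity sets.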