Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/601905
Title: Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels
Authors: Yanwei Fu;Timothy M. Hospedales;Tao Xiang;Jiechao Xiong;Shaogang Gong;Yizhou Wang;Yuan Yao
Subject: outlier detection; subjective visual properties; robust ranking
Year: 2016
Publisher: IEEE
Abstract: The problem of estimating subjective visual properties from images and videos has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune annotation outliers/errors, and thus require a large number of pairwise labels to be collected. More importantly, as a local outlier detection method, majority voting is ineffective at identifying outliers that cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning-to-rank problem that tackles outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels to minimise a cost corresponding to global inconsistency of ranking order, and (2) the outlier detection and learning-to-rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations.
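The joint formulation described in the abstract can be illustrated with a simplified sketch (not the paper's exact algorithm): treat each pairwise label as a noisy measurement of a score difference, and jointly fit global item scores plus a sparse per-label outlier term via an L1 penalty. The function name `robust_rank` and the parameter `lam` are hypothetical choices for this illustration.

```python
import numpy as np

def robust_rank(pairs, y, n_items, lam=0.5, iters=200):
    """Jointly estimate global scores s and sparse outlier terms e from
    pairwise labels, where y[k] = +1 means annotators said pairs[k][0]
    beats pairs[k][1]. Minimises ||y - C s - e||^2 / 2 + lam * ||e||_1
    by alternating a least-squares s-step with soft-thresholding on e."""
    y = np.asarray(y, dtype=float)
    m = len(pairs)
    # Incidence matrix: (C s)_k = s_i - s_j for the k-th compared pair,
    # so minimising the residual enforces *global* ranking consistency.
    C = np.zeros((m, n_items))
    for k, (i, j) in enumerate(pairs):
        C[k, i], C[k, j] = 1.0, -1.0
    P = np.linalg.pinv(C)  # fixed across iterations, precompute once
    s, e = np.zeros(n_items), np.zeros(m)
    for _ in range(iters):
        s = P @ (y - e)    # fit scores to the "cleaned" labels
        r = y - C @ s      # residual of each pairwise label
        # Soft-threshold: only labels that disagree strongly with the
        # globally consistent ranking get a nonzero outlier term.
        e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return s, e

# Items truly ranked 0 > 1 > 2; the last label (1 beats 0) is an outlier.
pairs = [(0, 1), (0, 1), (1, 2), (0, 2), (1, 0)]
s, e = robust_rank(pairs, [1, 1, 1, 1, 1], n_items=3)
# s recovers the order 0 > 1 > 2, and |e| is largest for the flipped
# label, even though that pair would tie 2-vs-1 under local vote counts
# if it were repeated -- the global structure disambiguates it.
```

Because the objective is jointly convex in (s, e), this alternating scheme converges to the global optimum of the sketched problem; it is meant only to convey the idea of coupling outlier detection with global rank estimation, under the stated assumptions.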
URI: http://localhost/handle/Hannan/155219
http://localhost/handle/Hannan/601905
ISSN: 0162-8828
volume: 38
issue: 3
Appears in Collections: 2016

Files in This Item:
File: 7159107.pdf (1.13 MB, Adobe PDF)