Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/601905
Full metadata record
DC Field: Value (Language)
dc.contributor.author: Yanwei Fu (en_US)
dc.contributor.author: Timothy M. Hospedales (en_US)
dc.contributor.author: Tao Xiang (en_US)
dc.contributor.author: Jiechao Xiong (en_US)
dc.contributor.author: Shaogang Gong (en_US)
dc.contributor.author: Yizhou Wang (en_US)
dc.contributor.author: Yuan Yao (en_US)
dc.date.accessioned: 2020-05-20T08:57:18Z
dc.date.available: 2020-05-20T08:57:18Z
dc.date.issued: 2016 (en_US)
dc.identifier.issn: 0162-8828 (en_US)
dc.identifier.other: 10.1109/TPAMI.2015.2456887 (en_US)
dc.identifier.uri: http://localhost/handle/Hannan/155219 (en_US)
dc.identifier.uri: http://localhost/handle/Hannan/601905
dc.description: (en_US)
dc.description.abstract: The problem of estimating subjective visual properties from images and videos has attracted increasing interest. A subjective visual property is useful either on its own (e.g. image and video interestingness) or as an intermediate representation for visual recognition (e.g. a relative attribute). Due to its ambiguous nature, annotating the value of a subjective visual property for learning a prediction model is challenging. To make the annotation more reliable, recent studies employ crowdsourcing tools to collect pairwise comparison labels. However, using crowdsourced data also introduces outliers. Existing methods rely on majority voting to prune the annotation outliers/errors, and thus require a large number of pairwise labels to be collected. More importantly, as a local outlier detection method, majority voting is ineffective in identifying outliers that can cause global ranking inconsistencies. In this paper, we propose a more principled way to identify annotation outliers by formulating the subjective visual property prediction task as a unified robust learning to rank problem, tackling outlier detection and learning to rank jointly. This differs from existing methods in that (1) the proposed method integrates local pairwise comparison labels to minimise a cost that corresponds to global inconsistency of ranking order, and (2) the outlier detection and learning to rank problems are solved jointly. This not only leads to better detection of annotation outliers but also enables learning with extremely sparse annotations. (en_US)
dc.publisher: IEEE (en_US)
dc.relation.haspart: 7159107.pdf (en_US)
dc.subject: outlier detection; subjective visual properties; robust ranking (en_US)
dc.title: Robust Subjective Visual Property Prediction from Crowdsourced Pairwise Labels (en_US)
dc.type: Article (en_US)
dc.journal.volume: 38 (en_US)
dc.journal.issue: 3 (en_US)
dc.journal.title: IEEE Transactions on Pattern Analysis and Machine Intelligence (en_US)
Appears in Collections: 2016

Files in This Item:
File          Description    Size       Format
7159107.pdf                  1.13 MB    Adobe PDF
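
Illustrative sketch: the abstract above describes casting subjective visual property prediction as a robust learning-to-rank problem in which global item scores and annotation outliers are estimated jointly from crowdsourced pairwise labels. The Python sketch below illustrates only that general idea and is not the authors' published formulation or code: it fits global scores to pairwise labels by least squares while an L1-regularised per-comparison outlier term absorbs labels that contradict the global ranking. The objective, the alternating least-squares/soft-thresholding solver, the regularisation weight lam, and all function and variable names (e.g. robust_rank) are assumptions made for this sketch, and the demo data is synthetic.

# Minimal, illustrative sketch of robust ranking from pairwise comparisons
# with joint sparse-outlier detection. NOT the paper's exact model; it only
# mirrors the idea in the abstract: fit global scores to local pairwise
# labels while a sparse outlier term soaks up globally inconsistent labels.
import numpy as np

def robust_rank(n_items, comparisons, lam=1.0, n_iters=200):
    """Estimate global scores s and per-comparison outlier offsets e.

    comparisons: list of (i, j, y) where y = +1 means item i was judged
                 to rank above item j (y = -1 for the opposite).
    Sketch objective: min_{s,e} 0.5 * sum_k (y_k - (s_i - s_j) - e_k)^2
                                + lam * ||e||_1
    """
    m = len(comparisons)
    # Incidence matrix C: each comparison row has +1 at item i, -1 at item j.
    C = np.zeros((m, n_items))
    y = np.zeros(m)
    for k, (i, j, lab) in enumerate(comparisons):
        C[k, i], C[k, j], y[k] = 1.0, -1.0, float(lab)

    s = np.zeros(n_items)
    e = np.zeros(m)
    # Tiny ridge term keeps the rank-deficient normal equations solvable.
    A = C.T @ C + 1e-6 * np.eye(n_items)
    for _ in range(n_iters):
        # (1) Scores given outliers: ordinary least squares.
        s = np.linalg.solve(A, C.T @ (y - e))
        # (2) Outliers given scores: soft-thresholding (L1 proximal step).
        r = y - C @ s
        e = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)
    return s, e

if __name__ == "__main__":
    # Synthetic demo: 6 items with ground-truth order 0 < 1 < ... < 5,
    # 25 random pairwise labels, roughly 15% of them flipped as outliers.
    rng = np.random.default_rng(0)
    true_scores = np.arange(6, dtype=float)
    pairs = [(i, j) for i in range(6) for j in range(6) if i != j]
    comps = []
    for i, j in rng.permutation(pairs)[:25]:
        y = 1.0 if true_scores[i] > true_scores[j] else -1.0
        if rng.random() < 0.15:
            y = -y
        comps.append((int(i), int(j), y))
    s, e = robust_rank(6, comps, lam=0.8)
    print("recovered order:", np.argsort(-s))       # should roughly be 5, 4, 3, 2, 1, 0
    print("labels flagged as outliers:", int(np.count_nonzero(e)))

The design choice worth noting is that the objective is jointly convex in the scores and the outlier offsets, so the simple alternating scheme above (least squares for the scores, soft-thresholding for the outliers) converges; comparisons whose residual exceeds the threshold lam end up with a nonzero outlier offset and can be pruned or down-weighted.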