Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/233489
Title: Co-Bootstrapping Saliency
Authors: Huchuan Lu;Xiaoning Zhang;Jinqing Qi;Na Tong;Xiang Ruan;Ming-Hsuan Yang
Year: 2017
Publisher: IEEE
Abstract: In this paper, we propose a visual saliency detection algorithm that explores the fusion of various saliency models via bootstrap learning. First, an original bootstrapping model, which combines both weak and strong saliency models, is constructed. In this model, image priors are exploited to generate an original weak saliency model, which provides training samples for a strong model. A strong classifier is then learned from the samples extracted from the weak model and used to classify all superpixels in an input image as salient or non-salient. To further improve detection performance, the multi-scale saliency maps of the weak and strong models are each integrated. The final result is the combination of the weak and strong saliency maps. The original model indicates that the overall performance of the proposed algorithm is largely affected by the quality of the weak saliency model. We therefore propose a co-bootstrapping mechanism that integrates the advantages of different saliency methods to construct the weak saliency model, which addresses this problem and achieves better performance. Extensive experiments on benchmark data sets demonstrate that the proposed algorithm outperforms state-of-the-art methods.
URI: http://localhost/handle/Hannan/233489
volume: 26
issue: 1
More Information: 414-425
Appears in Collections:2017

Files in This Item:
File          Size     Format
7742419.pdf   6.16 MB  Adobe PDF
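The bootstrapping pipeline the abstract outlines (weak map from image priors → confident samples → strong classifier → combined map) can be sketched as follows. This is an illustrative simplification, not the paper's method: the weak model here is a hypothetical contrast-against-border prior, and a nearest-centroid scorer stands in for the boosted strong classifier; superpixel features, thresholds `lo`/`hi`, and the equal-weight combination are all assumptions.

```python
import numpy as np

def weak_saliency(features, border_mean):
    # Hypothetical image prior: superpixels whose features differ most
    # from the (assumed background-like) image-border mean score higher.
    return np.linalg.norm(features - border_mean, axis=1)

def bootstrap_saliency(features, border_mean, lo=0.3, hi=0.7):
    """Sketch of the weak -> strong bootstrapping loop, one superpixel per row."""
    weak = weak_saliency(features, border_mean)
    weak = (weak - weak.min()) / (weak.max() - weak.min() + 1e-12)  # to [0, 1]

    # Confident superpixels under the weak map become training samples.
    pos = features[weak >= hi]   # likely salient
    neg = features[weak <= lo]   # likely background

    # Stand-in "strong classifier": nearest-centroid on the bootstrapped
    # samples (the paper learns a boosted classifier instead).
    mu_pos, mu_neg = pos.mean(axis=0), neg.mean(axis=0)
    d_pos = np.linalg.norm(features - mu_pos, axis=1)
    d_neg = np.linalg.norm(features - mu_neg, axis=1)
    strong = d_neg / (d_pos + d_neg + 1e-12)  # nearer salient centroid -> higher

    # Final map: combination of the weak and strong saliency maps.
    return 0.5 * weak + 0.5 * strong
```

In the full algorithm this would run at multiple superpixel scales, with the per-scale weak and strong maps integrated before the final combination.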