Please use this identifier to cite or link to this item:
http://localhost/handle/Hannan/235795
Title: | Fully Deep Blind Image Quality Predictor
Authors: | Jongyoo Kim; Sanghoon Lee
Year: | 2017
Publisher: | IEEE
Abstract: | In general, owing to the benefits obtained from original information, full-reference image quality assessment (FR-IQA) achieves relatively higher prediction accuracy than no-reference image quality assessment (NR-IQA). By fully utilizing reference images, conventional FR-IQA methods have been investigated to produce objective scores that are close to subjective scores. In contrast, NR-IQA does not consider reference images; thus, its performance is inferior to that of FR-IQA. To alleviate this accuracy discrepancy between FR-IQA and NR-IQA methods, we propose a blind image evaluator based on a convolutional neural network (BIECON). To imitate FR-IQA behavior, we adopt the strong representation power of a deep convolutional neural network to generate a local quality map, similar to FR-IQA. To obtain the best results from the deep neural network, replacing hand-crafted features with automatically learned features is necessary. To apply the deep model to the NR-IQA framework, three critical problems must be resolved: 1) lack of training data; 2) absence of local ground truth targets; and 3) different purposes of feature learning. BIECON follows the FR-IQA behavior using the local quality maps as intermediate targets for conventional neural networks, which leads to NR-IQA prediction accuracy that is comparable with that of state-of-the-art FR-IQA methods.
URI: | http://localhost/handle/Hannan/235795 |
Volume: | 11
Issue: | 1
More Information: | Pages 206-220
Appears in Collections: | 2017 |
Files in This Item:
File | Size | Format
---|---|---
7782419.pdf | 1.38 MB | Adobe PDF
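
The abstract above outlines a two-stage idea: a patch-level CNN is first trained to reproduce a local quality map supplied by an FR-IQA metric (which provides both local ground-truth targets and plentiful training patches), and the per-patch predictions are then pooled and regressed onto the subjective score. The sketch below illustrates that pipeline in PyTorch as a minimal, hypothetical example; the layer sizes, the 32x32 patch size, the use of SSIM-map means as stage-1 targets, and the mean/std pooling are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch of a BIECON-style two-stage pipeline.
# All hyperparameters and the choice of FR-IQA target are assumptions.
import torch
import torch.nn as nn

PATCH = 32  # assumed patch size


class PatchQualityCNN(nn.Module):
    """Small CNN mapping an image patch to a scalar local-quality estimate."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (PATCH // 4) ** 2, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, patches):
        return self.head(self.features(patches)).squeeze(-1)


def stage1_step(model, opt, patches, local_quality_targets):
    """Stage 1: regress each patch onto its FR-IQA local-quality value
    (e.g., the mean of an SSIM map over that patch). No human labels are
    needed here, which addresses the shortage of training data."""
    opt.zero_grad()
    pred = model(patches)
    loss = nn.functional.mse_loss(pred, local_quality_targets)
    loss.backward()
    opt.step()
    return loss.item()


class ImageScoreRegressor(nn.Module):
    """Stage 2: pool the per-patch predictions of one image and regress
    the pooled statistics onto the subjective (MOS/DMOS) score."""

    def __init__(self):
        super().__init__()
        self.regressor = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, patch_scores):  # patch_scores: (num_patches,)
        stats = torch.stack([patch_scores.mean(), patch_scores.std()]).unsqueeze(0)
        return self.regressor(stats).squeeze()


if __name__ == "__main__":
    # Random stand-in data; a real run would crop patches from distorted
    # images and compute FR-IQA maps against their pristine references.
    cnn = PatchQualityCNN()
    opt = torch.optim.Adam(cnn.parameters(), lr=1e-4)
    patches = torch.rand(16, 3, PATCH, PATCH)   # 16 random patches
    targets = torch.rand(16)                    # stand-in local-quality values
    print("stage-1 loss:", stage1_step(cnn, opt, patches, targets))

    scorer = ImageScoreRegressor()
    with torch.no_grad():
        per_patch = cnn(patches)                # local quality estimates
        print("predicted score:", scorer(per_patch).item())
```

In an actual training setup, stage 1 would run over patches from the distorted images in an IQA database, and stage 2 would be fine-tuned on the images' subjective scores, which is how the intermediate local-quality targets bridge the gap between NR-IQA and FR-IQA accuracy described in the abstract.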