Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/178903
Title: Blind Deep S3D Image Quality Evaluation via Local to Global Feature Aggregation
Authors: Heeseok Oh;Sewoong Ahn;Jongyoo Kim;Sanghoon Lee
Year: 2017
Publisher: IEEE
Abstract: Previously, no-reference (NR) stereoscopic 3D (S3D) image quality assessment (IQA) algorithms have been limited to the extraction of reliable hand-crafted features based on an understanding of the insufficiently revealed human visual system or natural scene statistics. Furthermore, compared with full-reference (FR) S3D IQA metrics, it is difficult to achieve competitive quality score predictions using the extracted features, which are not optimized with respect to human opinion. To cope with this limitation of the conventional approach, we introduce a novel deep learning scheme for NR S3D IQA in terms of local to global feature aggregation. A deep convolutional neural network (CNN) model is trained in a supervised manner through two-step regression. First, to overcome the lack of training data, local patch-based CNNs are modeled, and the FR S3D IQA metric is used to approximate a reference ground-truth for training the CNNs. The automatically extracted local abstractions are aggregated into global features by inserting an aggregation layer in the deep structure. The locally trained model parameters are then updated iteratively using supervised global labeling, i.e., subjective mean opinion score (MOS). In particular, the proposed deep NR S3D image quality evaluator does not estimate the depth from a pair of S3D images. The S3D image quality scores predicted by the proposed method represent a significant improvement over those of previous NR S3D IQA algorithms. Indeed, the accuracy of the proposed method is competitive with FR S3D IQA metrics, having ~91% correlation in terms of MOS.
URI: http://localhost/handle/Hannan/178903
volume: 26
issue: 10
More Information: pp. 4923-4936
Appears in Collections:2017

Files in This Item:
File          Size     Format
7973187.pdf   4.55 MB  Adobe PDF