Please use this identifier to cite or link to this item: http://localhost:80/handle/Hannan/146715
Title: Joint Learning of Multiple Regressors for Single Image Super-Resolution
Authors: Kai Zhang;Baoquan Wang;Wangmeng Zuo;Hongzhi Zhang;Lei Zhang
Subject: Image super-resolution|joint learning|mixture of experts|local learning|linear regression
Year: 2016
Publisher: IEEE
Abstract: Using a global regression model for single image super-resolution (SISR) generally fails to produce visually pleasing output. The recently developed local learning methods provide a remedy by partitioning the feature space into a number of clusters and learning a simple local model for each cluster. However, in these methods the space partition is conducted separately from local model learning, so a large number of local models is required to achieve satisfying performance. To address this problem, we propose a mixture of experts (MoE) method to jointly learn the feature space partition and the local regression models. Our MoE consists of two components: gating network learning and local regressor learning. An expectation-maximization (EM) algorithm is adopted to train the MoE on a large set of LR/HR patch pairs. Experimental results demonstrate that the proposed method achieves results comparable or superior to state-of-the-art SISR methods with far fewer local models and less running time, providing a highly practical solution for real applications.
URI: http://localhost/handle/Hannan/146715
ISSN: 1070-9908; 1558-2361
Volume: 23
Issue: 1
Pages: 102-106
Appears in Collections: 2016

Files in This Item:
File          Size       Format
7339441.pdf   930.67 kB  Adobe PDF
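The abstract describes a mixture-of-experts pipeline: a gating network soft-partitions the LR feature space while local linear regressors are fitted jointly, with EM alternating between posterior responsibilities and weighted refits. A minimal sketch of that idea on synthetic data is below; all names, the toy patch features, and the single-gradient-step gating update are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of joint MoE learning (softmax gating + local linear
# regressors, trained EM-style). Toy data stands in for LR/HR patch pairs.
import numpy as np

rng = np.random.default_rng(0)
K, D, N = 4, 8, 2000                    # experts, feature dim, samples

X = rng.standard_normal((N, D))         # stand-in LR patch features
true_W = rng.standard_normal((K, D))
# cluster label depends on the features (sign quadrant of the first two
# coordinates), so a linear gating network can actually learn the partition
z = (X[:, 0] > 0).astype(int) + 2 * (X[:, 1] > 0).astype(int)
y = np.einsum('nd,nd->n', X, true_W[z]) + 0.1 * rng.standard_normal(N)

V = np.zeros((K, D))                    # gating weights
W = rng.standard_normal((K, D))         # experts; random init breaks symmetry
sigma2 = 1.0                            # fixed noise variance, for simplicity

for _ in range(30):
    # E-step: posterior responsibility of each expert for each sample
    logits = X @ V.T
    logits -= logits.max(axis=1, keepdims=True)
    prior = np.exp(logits)
    prior /= prior.sum(axis=1, keepdims=True)
    resid = y[:, None] - X @ W.T
    post = prior * np.exp(-0.5 * resid ** 2 / sigma2)
    post /= post.sum(axis=1, keepdims=True)

    # M-step: responsibility-weighted ridge regression per expert
    for k in range(K):
        r = post[:, k]
        A = (X * r[:, None]).T @ X + 1e-6 * np.eye(D)
        W[k] = np.linalg.solve(A, (X * r[:, None]).T @ y)
    # one gradient step moving the gating distribution toward the posteriors
    V += (1.0 / N) * (post - prior).T @ X

# at test time only the gating network routes (no HR information available)
logits = X @ V.T
logits -= logits.max(axis=1, keepdims=True)
gate = np.exp(logits)
gate /= gate.sum(axis=1, keepdims=True)
pred = np.einsum('nk,nk->n', gate, X @ W.T)
mse = float(np.mean((pred - y) ** 2))
print(round(mse, 3))
```

Because partition and regressors are learned jointly, each expert specializes to the region the gating assigns it, which is the abstract's argument for needing far fewer local models than separate clustering-then-regression schemes.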