Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/220010
Title: Rotational Invariant Dimensionality Reduction Algorithms
Authors: Zhihui Lai;Yong Xu;Jian Yang;Linlin Shen;David Zhang
Year: 2017
Publisher: IEEE
Abstract: A common intrinsic limitation of traditional subspace learning methods is their sensitivity to outliers and to image variations of the object, since they use the L<sub>2</sub> norm as the metric. In this paper, a series of methods based on the L<sub>2,1</sub>-norm is proposed for linear dimensionality reduction. Since the L<sub>2,1</sub>-norm-based objective function is robust to image variations, the proposed algorithms can perform robust image feature extraction for classification. We use different ideas to design different algorithms and obtain a unified rotational invariant (RI) dimensionality reduction framework, which extends the well-known graph embedding framework to a more generalized form. We provide comprehensive analyses of the essential properties of the proposed framework and show that the optimization problems have globally optimal solutions when all the orthogonal projections of the data space are computed and used. Experimental results on popular image datasets indicate that the proposed RI dimensionality reduction algorithms achieve competitive performance compared with previous L<sub>2</sub>-norm-based subspace learning algorithms.
URI: http://localhost/handle/Hannan/220010
volume: 47
issue: 11
More Information: pp. 3733-3746
Appears in Collections:2017

Files in This Item:
File: 7502120.pdf
Size: 2.21 MB
Format: Adobe PDF
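
The abstract's key claim is that the L<sub>2,1</sub>-norm is rotationally invariant: summing the L<sub>2</sub> norms of a data matrix's rows is unchanged when every sample is rotated by the same orthogonal transform, which is what makes the objective robust in a way the entrywise L<sub>1</sub> norm is not. A minimal NumPy sketch (the function name `l21_norm` and the row-wise convention are illustrative assumptions, not code from the paper):

```python
import numpy as np

def l21_norm(X):
    # L2,1 norm: sum of the L2 norms of the rows, sum_i ||x_i||_2
    # (illustrative convention; the paper may apply it column-wise)
    return np.sum(np.linalg.norm(X, axis=1))

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))  # 5 samples in R^3

# Random orthogonal matrix (a rotation/reflection) via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))

# Rotating every sample leaves each row's L2 norm, and hence the
# L2,1 norm, unchanged -- the "rotational invariant" property
print(np.isclose(l21_norm(X), l21_norm(X @ Q)))  # True
```

Because outliers contribute only through the single L<sub>2</sub> norm of their row rather than through squared entries, large deviations are penalized linearly instead of quadratically, which is the source of the robustness the abstract describes.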