Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/643769
Title: Approximate Orthogonal Sparse Embedding for Dimensionality Reduction
Authors: Zhihui Lai;Wai Keung Wong;Yong Xu;Jian Yang;David Zhang
Subject: image recognition|dimensionality reduction|elastic net|manifold learning|sparse projections
Year: 2016
Publisher: IEEE
Abstract: Locally linear embedding (LLE) is one of the most well-known manifold learning methods. As the representative linear extension of LLE, orthogonal neighborhood preserving projection (ONPP) has attracted widespread attention in the field of dimensionality reduction. In this paper, a unified sparse learning framework is proposed by introducing sparsity, or L1-norm learning, which further extends the LLE-based methods to sparse cases. Theoretical connections between ONPP and the proposed sparse linear embedding are established. The optimal sparse embeddings derived from the proposed framework can be computed by iterating between a modified elastic net and a singular value decomposition. We also show that the proposed model can be viewed as a general model for sparse linear and nonlinear (kernel) subspace learning. Based on this general model, sparse kernel embedding is also proposed for nonlinear sparse feature extraction. Extensive experiments on five databases demonstrate that the proposed sparse learning framework outperforms existing subspace learning algorithms, particularly in small-sample-size cases.
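The abstract describes computing the sparse embedding by alternating between a modified elastic net step and an SVD step. The paper's exact objective is not given here, so the following is only a minimal sketch of that generic alternating scheme (in the style of sparse PCA-type solvers): with an orthogonal matrix A fixed, each sparse projection column of B is fit by an elastic-net regression; with B fixed, A is updated from the SVD of X^T X B. All variable names, the toy data, and the regularization settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))  # toy data matrix (samples x features); illustrative only
k = 3                               # number of sparse projections to learn

# Initialize A with the top-k right singular vectors of X (orthonormal columns).
_, _, Vt = np.linalg.svd(X, full_matrices=False)
A = Vt[:k].T                        # shape (features, k)

B = np.zeros_like(A)                # sparse projection matrix to be learned
for _ in range(20):
    # Elastic-net step: with A fixed, each column of B is the sparse
    # regression of the projected data X A[:, j] onto X.
    for j in range(k):
        enet = ElasticNet(alpha=0.01, l1_ratio=0.5,
                          fit_intercept=False, max_iter=5000)
        enet.fit(X, X @ A[:, j])
        B[:, j] = enet.coef_
    # SVD step: with B fixed, A is the orthogonal Procrustes solution
    # obtained from the SVD of X^T X B.
    U, _, Vt2 = np.linalg.svd(X.T @ X @ B, full_matrices=False)
    A = U @ Vt2

Z = X @ B  # low-dimensional embedding through the sparse projections
```

The Procrustes update keeps A's columns orthonormal at every iteration, which is what makes the learned sparse projections "approximately orthogonal" in spirit; the elastic-net penalty (a mix of L1 and L2) is what drives entries of B to exact zeros.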
URI: http://localhost/handle/Hannan/176764
http://localhost/handle/Hannan/643769
ISSN: 2162-237X
2162-2388
Volume: 27
Issue: 4
Appears in Collections:2016

Files in This Item:
File: 7102762.pdf | Size: 2.25 MB | Format: Adobe PDF