Please use this identifier to cite or link to this item: http://localhost:80/handle/Hannan/146679
Title: Joint Feature Selection and Subspace Learning for Cross-Modal Retrieval
Authors: Kaiye Wang;Ran He;Liang Wang;Wei Wang;Tieniu Tan
subject: coupled feature selection|half-quadratic minimization|cross-modal retrieval|subspace learning
Year: 2016
Publisher: IEEE
Abstract: Cross-modal retrieval has recently drawn much attention due to the widespread existence of multimodal data. It takes one type of data as the query to retrieve relevant data objects of another type, and generally involves two basic problems: the measure of relevance and coupled feature selection. Most previous methods focus only on solving the first problem. In this paper, we aim to deal with both problems in a novel joint learning framework. To address the first problem, we learn projection matrices to map multimodal data into a common subspace, in which the similarity between different modalities of data can be measured. In the learning procedure, ℓ21-norm penalties are imposed on the projection matrices separately to solve the second problem, which selects relevant and discriminative features from different feature spaces simultaneously. A multimodal graph regularization term is further imposed on the projected data, which preserves the inter-modality and intra-modality similarity relationships. An iterative algorithm is presented to solve the proposed joint learning problem, along with its convergence analysis. Experimental results on cross-modal retrieval tasks demonstrate that the proposed method outperforms state-of-the-art subspace approaches.
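The core idea in the abstract can be illustrated with a minimal sketch: learn a projection matrix under a row-sparse ℓ21 penalty by half-quadratic (iteratively reweighted) minimization, then project both modalities into a shared subspace where they can be compared. This is not the authors' exact formulation (it omits the multimodal graph regularizer and uses a simple shared target matrix T as an assumed coupling); it only shows the reweighting mechanics.

```python
import numpy as np

def l21_reweighted_projection(X, T, lam=1.0, n_iter=50, eps=1e-8):
    """Approximately solve  min_U ||X U - T||_F^2 + lam * ||U||_{2,1}
    via half-quadratic minimization: each step solves a weighted ridge
    problem with row weights 1 / (2 ||u_i||_2), which drives whole rows
    of U toward zero (i.e., discards irrelevant features)."""
    d = X.shape[1]
    XtX, XtT = X.T @ X, X.T @ T
    U = np.linalg.solve(XtX + lam * np.eye(d), XtT)  # ridge warm start
    for _ in range(n_iter):
        row_norms = np.sqrt((U ** 2).sum(axis=1))
        w = 1.0 / (2.0 * row_norms + eps)            # eps avoids division by zero
        U = np.linalg.solve(XtX + lam * np.diag(w), XtT)
    return U

# Toy data: two modalities (hypothetical "image" and "text" features)
# coupled through a shared c-dimensional target T.
rng = np.random.default_rng(0)
n, d1, d2, c = 100, 20, 15, 5
T = rng.standard_normal((n, c))
X = rng.standard_normal((n, d1))
Y = rng.standard_normal((n, d2))

U = l21_reweighted_projection(X, T)
V = l21_reweighted_projection(Y, T)
# Both modalities now live in the same c-dim subspace; cross-modal
# relevance can be scored by inner products between projected samples.
sim = (X @ U) @ (Y @ V).T
print(U.shape, V.shape, sim.shape)
```

In the full method, the two penalties are applied to the two projection matrices jointly within one objective, and a graph term ties the projected samples together; the reweighting trick above is the standard device that makes the non-smooth ℓ21 term tractable in closed-form updates.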
Description: 
URI: http://localhost/handle/Hannan/146679
ISSN: 0162-8828
volume: 38
issue: 10
More Information: 2010
2023
Appears in Collections:2016

Files in This Item:
File: 7346492.pdf    Size: 2.87 MB    Format: Adobe PDF