Please use this identifier to cite or link to this item: http://localhost:80/handle/Hannan/155289
Title: Robust Visual Knowledge Transfer via Extreme Learning Machine-Based Domain Adaptation
Authors: Lei Zhang;David Zhang
Subject: Domain adaptation|extreme learning machine|cross-domain learning|knowledge adaptation
Year: 2016
Publisher: IEEE
Abstract: We address the problem of visual knowledge adaptation by leveraging labeled patterns from a source domain and a very limited number of labeled instances in a target domain to learn a robust classifier for visual categorization. This paper proposes a new extreme learning machine (ELM)-based cross-domain network learning framework, called ELM-based Domain Adaptation (EDA). It allows us to learn a category transformation and an ELM classifier with random projection by jointly minimizing the ℓ2,1-norm of the network output weights and the learning error. The unlabeled target data, as useful knowledge, are also integrated as a fidelity term to guarantee stability during cross-domain learning: this term minimizes the matching error between the learned classifier and a base classifier, so that many existing classifiers can be readily incorporated as base classifiers. The network output weights can not only be determined analytically but are also transferable. In addition, manifold regularization with a Laplacian graph is incorporated, which benefits semi-supervised learning. We further propose a multi-view extension of the model, referred to as MvEDA. Experiments on benchmark visual datasets for video event recognition and object recognition demonstrate that our EDA methods outperform existing cross-domain learning methods.
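The "ELM classifier with random projection" that the abstract builds on can be illustrated with a minimal sketch of a standard regularized ELM: input weights are random and never trained, and the output weights are determined analytically in closed form. This is a generic illustration under assumed names and parameters, not the paper's full EDA objective (it omits the ℓ2,1-norm, the fidelity term, and the manifold regularizer):

```python
import numpy as np

def train_elm(X, T, n_hidden=100, reg=1e-2, seed=0):
    """Basic extreme learning machine: random hidden layer, output
    weights solved analytically by ridge least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (fixed)
    b = rng.standard_normal(n_hidden)                # random biases (fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Closed-form output weights: beta = (H^T H + reg*I)^-1 H^T T
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: two well-separated Gaussian classes with one-hot targets
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
T = np.repeat(np.eye(2), 50, axis=0)
W, b, beta = train_elm(X, T, n_hidden=50)
acc = (predict_elm(X, W, b, beta).argmax(1) == T.argmax(1)).mean()
```

Because the output weights are a linear least-squares solution, they come from a single matrix solve rather than iterative training, which is what makes them "analytically determined" and cheap to recompute or transfer across domains.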
URI: http://localhost/handle/Hannan/155289
ISSN: 1057-7149
1941-0042
volume: 25
issue: 10
More Information: pp. 4959-4973
Appears in Collections:2016

Files in This Item:
File: 7539280.pdf (3.22 MB, Adobe PDF)