Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/195319
Full metadata record

DC Field                  Value                                     Language
dc.contributor.author     Zhizhen Chi                               en_US
dc.contributor.author     Hongyang Li                               en_US
dc.contributor.author     Huchuan Lu                                en_US
dc.contributor.author     Ming-Hsuan Yang                           en_US
dc.date.accessioned       2013                                      en_US
dc.date.accessioned       2020-04-06T07:43:38Z                      -
dc.date.available         2020-04-06T07:43:38Z                      -
dc.date.issued            2017                                      en_US
dc.identifier.other       10.1109/TIP.2017.2669880                  en_US
dc.identifier.uri         http://localhost/handle/Hannan/195319     -
dc.description.abstract   Visual tracking addresses the problem of identifying and localizing an unknown target in a video, given the target specified by a bounding box in the first frame. In this paper, we propose a dual network to better utilize features among layers for visual tracking. It is observed that features in higher layers encode semantic context, while their counterparts in lower layers are sensitive to discriminative appearance. Thus, we exploit the hierarchical features in different layers of a deep model and design a dual structure to obtain better feature representations from various streams, which is rarely investigated in previous work. To highlight geometric contours of the target, we integrate the hierarchical feature maps with an edge detector as coarse prior maps to further embed local details around the target. To leverage the robustness of our dual network, we train it with random patches measuring the similarities between the network activation and target appearance, which serves as a regularization to enforce the dual network to focus on the target object. The proposed dual network is updated online in a unique manner based on the observation that the target being tracked in consecutive frames should share more similar feature representations than those in the surrounding background. It is also found that, for a target object, the prior maps can help further enhance performance by passing messages into the output maps of the dual network. Therefore, an independent component analysis with reference algorithm is employed to extract target context using the prior maps as guidance. Online tracking is conducted by maximizing the posterior estimate on the final maps with stochastic and periodic updates. Quantitative and qualitative evaluations on two large-scale benchmark data sets show that the proposed algorithm performs favorably against state-of-the-art methods.   en_US
dc.format.extent          2005,                                     en_US
dc.format.extent          2015                                      en_US
dc.publisher              IEEE                                      en_US
dc.relation.haspart       7857085.pdf                               en_US
dc.title                  Dual Deep Network for Visual Tracking     en_US
dc.type                   Article                                   en_US
dc.journal.volume         26                                        en_US
dc.journal.issue          4                                         en_US
Appears in Collections:2017

Files in This Item:
File            Size       Format
7857085.pdf     3.19 MB    Adobe PDF