Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/163876
Title: End-to-End Comparative Attention Networks for Person Re-Identification
Authors: Hao Liu;Jiashi Feng;Meibin Qi;Jianguo Jiang;Shuicheng Yan
Year: 2017
Publisher: IEEE
Abstract: Person re-identification across disjoint camera views has been widely applied in video surveillance, yet it remains a challenging problem. One of the major challenges lies in the lack of spatial and temporal cues, which makes it difficult to deal with large variations in lighting conditions, viewing angles, body poses, and occlusions. Recently, several deep-learning-based person re-identification approaches have been proposed and have achieved remarkable performance. However, most of those approaches extract discriminative features from the whole frame at one glimpse, without differentiating the various parts of the person to be identified. To cope with large appearance variation, it is essential to examine multiple highly discriminative local regions of the person images in detail through multiple glimpses. In this paper, we propose a new soft attention-based model, <italic>i.e.</italic>, the end-to-end comparative attention network (CAN), specifically tailored for the task of person re-identification. The end-to-end CAN learns to selectively focus on parts of pairs of person images after taking a few glimpses of them and adaptively <italic>comparing</italic> their appearance. The CAN model is able to learn which parts of the images are relevant for discerning persons and automatically integrates information from different parts to determine whether a pair of images belongs to the same person. In other words, our proposed CAN model simulates the human perception process to verify whether two images show the same person. Extensive experiments on four benchmark person re-identification data sets, including CUHK01, CUHK03, Market-1501, and VIPeR, clearly demonstrate that our proposed end-to-end CAN significantly outperforms well-established baselines and offers new state-of-the-art performance.
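The core mechanism the abstract describes is soft attention: each local region of a person image receives a learned relevance weight, and the attended feature is a weighted sum over regions. The sketch below is not the paper's CAN architecture; it is a minimal, self-contained illustration of that weighting step, with all names (`attend`, the toy part features, and the fixed relevance scores) hypothetical.

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(part_features, scores):
    """Soft attention: weighted sum of per-part feature vectors.

    part_features: list of feature vectors, one per local region.
    scores: one relevance score per region (learned in a real model).
    """
    weights = softmax(scores)
    dim = len(part_features[0])
    return [sum(w * f[d] for w, f in zip(weights, part_features))
            for d in range(dim)]

# Toy example: three body-part features for one image; the first part
# gets a higher relevance score, so it dominates the attended feature.
parts = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
scores = [2.0, 0.1, 0.1]  # fixed here purely for illustration
glimpse = attend(parts, scores)
```

In the full model such glimpses would be produced recurrently for both images of a pair and compared to decide whether they depict the same person; here a single weighted pooling step stands in for that process.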
URI: http://localhost/handle/Hannan/163876
volume: 26
issue: 7
More Information: pp. 3492-3506
Appears in Collections:2017

Files in This Item:
File: 7918589.pdf  Size: 3.09 MB  Format: Adobe PDF