Please use this identifier to cite or link to this item: http://localhost/handle/Hannan/208968
Title: Temporal Coherence-Based Deblurring Using Non-Uniform Motion Optimization
Authors: Congbin Qiao;Rynson W. H. Lau;Bin Sheng;Benxuan Zhang;Enhua Wu
Year: 2017
Publisher: IEEE
Abstract: Non-uniform motion blur due to object movement or camera jitter is a common phenomenon in videos. However, state-of-the-art video deblurring methods can introduce artifacts and may fail to handle motion blur caused by complex object or camera movements. In this paper, we propose a non-uniform motion model to deblur video frames. The proposed method is based on superpixel matching in the video sequence to reconstruct sharp frames from blurry ones. To identify a suitable sharp superpixel to replace a blurry one, we enrich the search space with a non-uniform motion blur kernel, and use a generalized PatchMatch algorithm to handle rotation, scale, and blur differences in the matching step. Instead of using a pixel-based or regular patch-based representation, we adopt a superpixel-based representation, and use color and motion to group similar pixels. Our non-uniform motion blur kernels are estimated from the motion field of these superpixels, and our spatially varying motion model exploits spatial and temporal coherence to find sharp superpixels. Experimental results show that the proposed method can reconstruct sharp video frames from frames blurred by complex object and camera movements, and performs better than state-of-the-art methods.
URI: http://localhost/handle/Hannan/208968
volume: 26
issue: 10
More Information: pp. 4991-5004
Appears in Collections: 2017

Files in This Item:
File: 7990170.pdf    Size: 3.52 MB    Format: Adobe PDF
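The core matching step the abstract describes (finding a sharp superpixel that, once blurred by the estimated non-uniform kernel, best explains the observed blurry region) can be illustrated with a minimal sketch. This is not the paper's implementation: the actual method uses a generalized PatchMatch search over rotation, scale, and blur on superpixels, whereas the brute-force search, the `filter2d_same` and `match_blurry_patch` helpers, and the fixed horizontal kernel below are hypothetical simplifications for illustration only.

```python
import numpy as np

def filter2d_same(img, kernel):
    """Minimal 'same'-size 2D correlation with zero padding.

    Stand-in for applying an estimated motion-blur kernel to a
    sharp candidate patch (illustrative; not the paper's code).
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def match_blurry_patch(blurry, candidates, kernel):
    """Pick the sharp candidate whose blurred version best explains
    the observed blurry patch, by sum of squared differences.

    The paper accelerates this search with generalized PatchMatch;
    here a brute-force scan keeps the idea visible.
    """
    costs = [np.sum((filter2d_same(c, kernel) - blurry) ** 2)
             for c in candidates]
    return int(np.argmin(costs))
```

The key design point mirrored here is that candidates are compared in the *blurry* domain: blurring each sharp candidate with the estimated kernel avoids ever deconvolving the noisy observation.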