Motion-Blur-Free Image Capturing by Exposure Splitting and Pixel Tracking

The motion of a camera or of an object in a scene causes motion blur, because the point of light convergence shifts on the camera sensor during the exposure time. Shortening the exposure time decreases motion blur but increases shot noise, which arises from fluctuations in the amount of light detected by the sensor. Our goal is to capture an image free of both motion blur and increased shot noise. Optical stabilization by lens or sensor shift is adopted in many cameras, but this approach requires special optical hardware. Moreover, since it cannot deal with the motion of objects in a scene, it is ineffective in, for example, capturing an image of a walking person with a hand-held camera. We propose a method of motion-blur-free capturing by exposure splitting. In this method, we capture a sequence of short-exposure images by splitting the exposure time, and then integrate the images using registration by pixel tracking. Since the registration and integration rely only on software, the method requires no special optical hardware and is applicable to conventional cameras that have multi-shot functions. Moreover, the pixel-wise registration enables the method to deal with camera shake and object motion simultaneously.
Among previous approaches that require no special hardware, deconvolution restores a blurred image; however, it assumes spatially homogeneous motion and fails when an object moves independently of its background. A few methods of image capturing by exposure splitting have been proposed, but they assume spatially or temporally homogeneous motion and cannot fully deal with camera shake involving in-plane rotation. In addition, the studies of these methods have not theoretically presented the principle of exposure splitting. Some consumer cameras provide stabilization functions that integrate multiple images, but their details are not published.
Exposure splitting is based on exposure shortening and image integration. An image captured in a short exposure time contains little motion blur but much shot noise. Owing to the probabilistic nature of shot noise, we can reduce it by integrating multiple images of the same scene. Meanwhile, by registering the images accurately, we can integrate them without introducing additional motion blur. Therefore, by registering and integrating short-exposure images, we can suppress both motion blur and shot noise.
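The probabilistic argument above can be checked with a small simulation. Shot noise follows a Poisson distribution, and the sum of independent Poisson variables is again Poisson with the summed mean, so integrating the short exposures of a static, perfectly registered scene recovers the same signal-to-noise ratio as a single long exposure. The patch size, photon count, and number of splits below are illustrative choices, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene: a uniform patch receiving `mean_photons`
# photons per pixel over the full exposure time.
mean_photons = 1000.0
n_splits = 16  # number of short exposures the full exposure is split into

# One long exposure: pixel values fluctuate with Poisson shot noise.
long_exposure = rng.poisson(mean_photons, size=(256, 256))

# n_splits short exposures, each receiving 1/n_splits of the light,
# summed back into a single integrated image.
short_exposures = rng.poisson(mean_photons / n_splits,
                              size=(n_splits, 256, 256))
integrated = short_exposures.sum(axis=0)

# Each short exposure alone is far noisier relative to its signal,
# but the integrated image matches the long exposure: both have
# noise standard deviation close to sqrt(mean_photons).
print(long_exposure.std(), integrated.std())
```

For this scene both printed values are near sqrt(1000) ≈ 31.6, while a single short exposure has a much lower signal-to-noise ratio, which is why the integration step is essential.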
In the proposed method, we split a total exposure time and capture a sequence of short-exposure images. The number of splits should be as large as possible, subject to the noise immunity of the camera and the brightness of the scene. We then perform pixel tracking over the sequence using the large displacement optical flow (LDOF) tracker, a state-of-the-art pixel tracker that can also detect occlusions. Finally, we integrate the images in the sequence into a single image: for each pixel, the resulting value is the sum of the intensity values at the positions where the tracker locates that pixel. If the pixel is occluded in some images, the sum is relatively small, so we scale it up accordingly.
We conducted experiments to demonstrate the effectiveness of the proposed method. We compared images resulting from four capturing methods: capturing over the full exposure time without splitting, capturing with a shorter exposure time, a blind deconvolution method, and the proposed method. First, we captured images of a controlled scene and compared them qualitatively and quantitatively, in terms of appearance and peak signal-to-noise ratio (PSNR). Second, we captured images of a practical scene and compared their appearance. The proposed method produced better results than the other methods in both experiments. However, we found that the pixel tracking in the proposed method is computationally heavy.
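For reference, the quantitative measure used in the controlled-scene experiment is the standard peak signal-to-noise ratio, computed against a ground-truth image as:

```python
import numpy as np

def psnr(reference, image, peak=255.0):
    """Peak signal-to-noise ratio in decibels.

    reference : ground-truth image
    image     : image under evaluation (same shape as reference)
    peak      : maximum possible pixel value (255 for 8-bit images)
    """
    diff = reference.astype(np.float64) - image.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher values indicate a result closer to the ground truth; a noise-free, blur-free capture maximizes this score.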
A remaining task is to improve the computational efficiency of the pixel tracking: we need to develop a more efficient pixel-tracking algorithm than the LDOF tracker.