Accurate reconstruction of a real space in CG is needed for VR to give users an immersive sense of reality. To reconstruct a real space with multiple cameras, their locations must be estimated in advance. The factorization method is commonly used to estimate multi-camera locations. It requires, for each 3D point, the set of 2D points onto which that point is projected on the image plane of each camera. We call such a 3D point a feature point, and the acquisition of its set of projected points feature point acquisition. Multi-camera location estimation can thus be divided into two processes: feature point acquisition and camera location calculation.
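For reference, the low-rank structure that the factorization method exploits can be sketched as follows. This is a minimal, noise-free affine (Tomasi-Kanade-style) factorization in NumPy, intended only to illustrate the camera-location-calculation step, not the extended method discussed in this paper:

```python
import numpy as np

def factorize(W):
    """Rank-3 affine factorization of a measurement matrix W (2F x P),
    which stacks the x and y image coordinates of P feature points seen
    by F cameras. Returns M (2F x 3) and S (3 x P) such that
    W - row_mean(W) ~= M @ S (up to an affine ambiguity)."""
    W0 = W - W.mean(axis=1, keepdims=True)  # register to the centroid
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    root = np.sqrt(s[:3])
    M = U[:, :3] * root          # camera (motion) part
    S = root[:, None] * Vt[:3]   # 3D point (shape) part
    return M, S

# Synthetic check: 4 cameras (8 rows) observing 10 points, noise-free,
# so the measurement matrix is exactly rank 3.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3)) @ rng.normal(size=(3, 10))
M, S = factorize(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))  # True
```

With noisy projections the rank-3 truncation becomes a least-squares approximation, which is why the coordinate errors discussed below degrade the calculated camera locations.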
Several methods have been proposed as extensions of the factorization method, but they do not address the feature point acquisition process. Multi-camera location estimation should be treated as the sequence of the two processes. The projected points obtained in the feature point acquisition process usually contain coordinate errors, and these errors lower the accuracy of the camera location calculation. We propose a multi-camera location estimation method that is robust to such errors. Our method is also easy to operate because it is fully automatic.
In this paper, we discuss the following three issues. First, we describe how to acquire feature points automatically for an arbitrary camera arrangement. We use the center of a sphere as the feature point, assuming that the center of the sphere's projected circle coincides with the projection of the sphere's center. By detecting the center of the projected circle, the projected points of the feature point can be acquired automatically on all camera images. By moving the sphere and observing it with all cameras simultaneously, feature points are acquired incrementally.
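Assuming the sphere has already been segmented into a binary silhouette mask (segmentation itself is outside this sketch), the center-detection step might look like the following. `sphere_center_pixel` is a hypothetical helper, not the thesis implementation, and the silhouette centroid is only an approximation of the true projected center, since a sphere generally projects to an ellipse rather than a circle:

```python
import numpy as np

def sphere_center_pixel(mask):
    """Return the centroid (x, y) of a binary silhouette mask as an
    estimate of the projected sphere center. Hypothetical helper: the
    centroid only approximates the projection of the 3D center."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # sphere not visible in this image
    return float(xs.mean()), float(ys.mean())

# Toy silhouette: a filled disc of radius 5 centered at pixel (12, 8).
yy, xx = np.mgrid[0:20, 0:25]
mask = (xx - 12) ** 2 + (yy - 8) ** 2 <= 25
print(sphere_center_pixel(mask))  # (12.0, 8.0) by symmetry
```

Running this detector on the synchronized images of all cameras at each sphere position yields one row of correspondences per position, which is how the feature points accumulate incrementally.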
Second, we discuss the number of feature points sufficient for the location calculation process. By using the smallest number of feature points that achieves the desired accuracy, we reduce the total time of the location estimation process. To find this number, we repeatedly add feature points and recalculate the multi-camera locations until the calculation satisfies the desired accuracy.
Third, we discuss how to eliminate the influence of coordinate errors in the projected points so as to maintain the accuracy of the camera location calculation at each repetition. We define the reliability of a feature point according to its epipolar error. By selecting feature points based on this reliability, we remove those with large errors from the location calculation.
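One standard way to measure epipolar error, which the reliability criterion above could be built on, is the symmetric point-to-epipolar-line distance for a camera pair. The sketch below assumes a known fundamental matrix `F`; the threshold value is a hypothetical illustration, not the paper's setting:

```python
import numpy as np

def epipolar_error(F, x1, x2):
    """Symmetric point-to-epipolar-line distance (in pixels) for N
    correspondences x1, x2 of shape (N, 2), given a fundamental
    matrix F satisfying x2^T F x1 = 0 for ideal correspondences."""
    h1 = np.hstack([x1, np.ones((len(x1), 1))])
    h2 = np.hstack([x2, np.ones((len(x2), 1))])
    l2 = h1 @ F.T  # epipolar lines of x1 in image 2
    l1 = h2 @ F    # epipolar lines of x2 in image 1
    resid = np.abs(np.sum(h2 * l2, axis=1))  # |x2^T F x1|
    d2 = resid / np.hypot(l2[:, 0], l2[:, 1])
    d1 = resid / np.hypot(l1[:, 0], l1[:, 1])
    return 0.5 * (d1 + d2)

# Pure horizontal translation: F = [[0,0,0],[0,0,-1],[0,1,0]], so ideal
# corresponding points share the same y coordinate.
F = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
x1 = np.array([[3.0, 5.0], [3.0, 5.0]])
x2 = np.array([[7.0, 5.0], [7.0, 6.0]])
err = epipolar_error(F, x1, x2)
keep = err <= 0.5  # hypothetical reliability threshold
print(err, keep)   # first point kept, second rejected
```

Dropping correspondences whose error exceeds the threshold before recalculating the camera locations is what keeps the accumulated errors from contaminating each repetition.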
We conducted experiments with both simulated and real data. In the simulation, we placed four cameras and several 3D points arbitrarily in their shared view space, added errors to the projected points, and then estimated the camera locations. In the experiment with real data, we used a ball whose radius was about 10 cm; by observing it with four cameras arranged in a room, we estimated the camera locations. In both experiments, the average epipolar error decreased as feature points with large errors were removed by our method. In the real-data experiment, the average epipolar error converged to subpixel accuracy after six repetitions of the location calculation.