Lecture video has been used in distance learning and lecture archives, and there has been much work on producing lecture videos automatically.
These lecture videos must allow viewers to easily understand the contents of the lecture.
To produce such lecture videos, it is important to shoot appropriate subjects.
We aim to shoot the subjects that need to be seen in order to understand the contents of the lecture.
To obtain video that includes these subjects,
we define the combination of subjects that should be shot to understand the contents of the lecture
as the "lecture situation".
By recognizing the lecture situation at each moment,
it becomes possible to obtain such video.
We aim to recognize these lecture situations from the lecturer's motion,
which is sensory information.
Previous research has discussed how to recognize the position and motion of the lecturer for automatic shooting of lecture video. The position and motion of the lecturer can be recognized uniquely from the information observed by the camera at each moment. In recognizing the lecture situation, on the other hand, the lecturer produces various kinds of motions and positions within the same lecture situation, so it is difficult to recognize the lecture situation uniquely from the motions and positions observed at each moment. In this research, we propose a method to recognize lecture situations using the frequency of occurrence of the lecturer's motions and positions, together with the probability of transition between lecture situations. We represent these statistical features with a hidden Markov model (HMM). In our experiments, we verified that the proposed method can recognize lecture situations accurately. We also confirmed that lecture video shot based on the lecture situation is useful for understanding the lecture contents.
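To illustrate the idea, the following is a minimal sketch of HMM-based situation recognition using the Viterbi algorithm. The hidden states stand for lecture situations and the observation symbols for discrete lecturer motions; all state names, motion symbols, and probability values here are illustrative assumptions, not the actual model or parameters from this research.

```python
# Hypothetical sketch: recognize "lecture situations" (hidden states)
# from observed lecturer motions using an HMM and the Viterbi algorithm.
# All names and probabilities below are made-up illustrative values.

states = ["explain_board", "explain_slide", "talk_to_students"]

# Initial probabilities of each lecture situation (assumed values)
start = {"explain_board": 0.5, "explain_slide": 0.3, "talk_to_students": 0.2}

# Transition probabilities between lecture situations (assumed values)
trans = {
    "explain_board":    {"explain_board": 0.7, "explain_slide": 0.2, "talk_to_students": 0.1},
    "explain_slide":    {"explain_board": 0.2, "explain_slide": 0.7, "talk_to_students": 0.1},
    "talk_to_students": {"explain_board": 0.3, "explain_slide": 0.3, "talk_to_students": 0.4},
}

# Emission probabilities: frequency of each motion in each situation (assumed)
emit = {
    "explain_board":    {"write": 0.6, "point": 0.2, "face_front": 0.2},
    "explain_slide":    {"write": 0.1, "point": 0.6, "face_front": 0.3},
    "talk_to_students": {"write": 0.1, "point": 0.1, "face_front": 0.8},
}

def viterbi(obs):
    """Return the most likely sequence of lecture situations for the
    observed sequence of lecturer motion symbols."""
    # Probability of the best path ending in each state at time 0
    V = [{s: start[s] * emit[s][obs[0]] for s in states}]
    back = [{}]  # backpointers for path reconstruction
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            # Best predecessor state for s at time t
            prob, prev = max(
                (V[t - 1][p] * trans[p][s] * emit[s][obs[t]], p) for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Backtrack from the most likely final state
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        last = back[t][last]
        path.append(last)
    return path[::-1]
```

With these toy parameters, repeated writing motions are decoded as board explanation, while sustained facing of the audience is decoded as talking to the students; the transition probabilities smooth over momentary ambiguity in individual observations, which is exactly the role they play in the proposed method.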