Abstract


Image Synthesis for Presenting Facial Information of Learners in e-Learning System


One of the major advantages of recent e-learning systems over conventional web-based training is that learning logs of the learners are stored in the system. From these logs, teachers can find out when each learner accessed the course materials and how each learner answered the quizzes. However, they cannot know how each learner studied the materials, that is, which object in the material he or she focused on and with what facial expression. In a real lecture, the teacher adapts the lecture by observing the facial expressions and focus of attention of the learners, because this information is useful for estimating their degree of understanding and interest. The same holds for learning with an e-learning system. In this paper, we propose a system that stores data on the facial expression and focus of attention of each learner and presents these data to the teachers.

It is not easy to recognize the facial expressions of learners automatically, because these expressions are not clearly distinct from one another. For detecting the focus of attention, special devices such as eye-mark cameras are often used, but wearing such devices during e-learning is inconvenient and frustrating for the learners. In this paper, we therefore propose to present facial images of the learners from which the teachers themselves can read their facial expressions and focus of attention. To convey the focus of attention, an image of the course material is presented together with the facial image so that the teachers can see which object in the material each learner is looking at. These images serve as a learning log in which the facial expressions and focus of attention of each learner are recorded.

To present the facial expressions of the learners to the teachers, we use real facial images captured by a camera. To present the focus of attention, we use images in which the learner's face appears together with the monitor so that their positional relation can be recognized easily. If such an image is created from a viewpoint that is not in front of the monitor, the content of the material looks distorted; if the viewpoint is not in front of the learner's face, the learner's line of sight cannot be recognized easily from the image. Hence, we need to place the viewpoint behind the monitor, where it faces both the learner's face and the monitor, and observe the learner's face through the monitor.

Since such an image cannot be obtained by a camera installed around the monitor, we synthesize a facial image from a virtual viewpoint using images captured by a stereo camera installed around the monitor, and overlay the image of the course material on the synthesized facial image. In this process, we first place the virtual viewpoint on the line passing through the center of the learner's face and the center of the monitor. To make the facial image sufficiently large compared with the image of the material while preserving the learner's focus on the material, the monitor is scaled down along the pyramid whose apex is at the center of the face and whose base is the monitor, keeping the distance between the virtual viewpoint and the scaled monitor equal to the focal length of the camera.
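As a rough geometric sketch of this scaling step (not code from the thesis), assume the monitor is a planar rectangle roughly perpendicular to the line from the face center to the virtual viewpoint; the names face_center, monitor_corners, virtual_viewpoint, and focal_length below are illustrative only.

    import numpy as np

    def scale_monitor_toward_face(face_center, monitor_corners, virtual_viewpoint, focal_length):
        """Shrink the monitor toward the face center along the pyramid whose apex is
        the face center and whose base is the monitor, so that the scaled monitor
        lies at the camera's focal length from the virtual viewpoint."""
        face_center = np.asarray(face_center, dtype=float)        # (3,)
        corners = np.asarray(monitor_corners, dtype=float)        # (4, 3) monitor corners
        viewpoint = np.asarray(virtual_viewpoint, dtype=float)    # (3,) on the face-monitor line

        # Viewing axis from the face center toward the virtual viewpoint.
        axis = viewpoint - face_center
        axis = axis / np.linalg.norm(axis)

        # Distances along the axis from the face center to the monitor plane
        # and to the virtual viewpoint (monitor assumed perpendicular to the axis).
        monitor_dist = np.dot(corners[0] - face_center, axis)
        viewpoint_dist = np.dot(viewpoint - face_center, axis)

        # Scale so that the scaled monitor plane sits exactly focal_length
        # in front of the virtual viewpoint.
        scale = (viewpoint_dist - focal_length) / monitor_dist

        # Scale each corner about the face center (the apex of the pyramid).
        return face_center + scale * (corners - face_center)

Scaling about the face center keeps every monitor point on the same line of sight from the learner's face, which is why the learner's focus on the material is preserved while the face is made larger relative to the material.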

The frontal face of the learner is recovered by fitting a 3D model that approximates the learner's face to the 3D positions of facial features, including the centers of the eyes and the mouth, extracted from the stereo-camera images, and by mapping each part of the stereo-camera images onto the corresponding part of the model as its texture. Observing the model from the virtual viewpoint described above yields the frontal facial image. The image of the course material is mirrored and overlaid on the frontal facial image so that the relation between the learner's focus and the course material is preserved. Finally, the overlaid image is mirrored again to restore the proper orientation of the material.
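The double mirroring can be sketched as follows, assuming the frontal facial image and the material image have already been rendered and aligned as arrays of the same size; the alpha blend is an assumption for illustration, not necessarily the compositing used in the thesis.

    import numpy as np

    def overlay_material_on_face(frontal_face, material, alpha=0.5):
        """Overlay the course material on the frontal facial image with double mirroring:
        the material is flipped so its overlap with the face matches what the learner
        actually looks at, and the combined image is flipped back so the material
        reads in its proper orientation."""
        mirrored_material = material[:, ::-1]                      # first horizontal flip
        blended = (1.0 - alpha) * frontal_face + alpha * mirrored_material
        return blended[:, ::-1].astype(frontal_face.dtype)         # flip back: material upright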

Using WebCT as an example of an e-learning system, we created the facial images in order to verify how well teachers can recognize the learners' focus of attention by observing them. On the monitor, a window displays real facial images of the learners, and clicking on a real facial image opens a window that displays the overlaid image of that learner. These windows were presented to test subjects, who were asked which object in the window the learner was looking at. As a result, the subjects answered correctly in 75% to 80% of the cases.

