When real images are combined with computer-generated images, a mismatch in image quality between the two can destroy the sense of reality. Was this also a problem?

- Matsui
- Yes, it was. During the early stages of development, the CG images were often too bright and sharp, so they didn't fit properly within the real space. We had to adjust the CG rendering, including its coloring and shading, so that contours and colors blended naturally with the real scene.
So, you have a function that adjusts the virtual images automatically so that they match the brightness of the real space?
- Matsui
- The technology hasn't been fully developed into a product yet, but we're experimenting with it.
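The kind of automatic brightness matching mentioned above can be sketched very simply. The following is my own illustrative assumption, not Canon's actual algorithm: scale the CG image so that its mean luminance matches that of the captured real-space image.

```python
# Illustrative sketch (assumption, not the actual product algorithm):
# match the mean brightness of a CG image to the real camera image
# by applying a single global gain to the CG pixel values.

def match_brightness(cg_pixels, real_pixels):
    """Scale CG grayscale values so their mean equals the real image's mean."""
    cg_mean = sum(cg_pixels) / len(cg_pixels)
    real_mean = sum(real_pixels) / len(real_pixels)
    gain = real_mean / cg_mean          # single global correction factor
    return [min(255, v * gain) for v in cg_pixels]

# A CG image that renders too bright (mean 220) is pulled down
# toward the real image's mean of 110:
match_brightness([200, 220, 240], [100, 110, 120])
```

A production system would of course work per channel and account for shading and contrast, not just a global gain, but the principle is the same.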
How do you create the shadows of a virtual object in real space?
- Matsui
- Shadows are cast onto a flat plane. By inputting data about the location and direction of the light source in real space, the shadows of the virtual objects can be created automatically, and they change depending on the position of the user. However, these shadows are just flat shadows. In real space, even flat shadows can appear jagged and uneven depending on the texture of the surface they fall on, and can look as if they are draped over other objects. Unfortunately, we aren't yet able to produce such flexible shadows. We need to develop technology that can determine where and how shadows should fall based on the real-space images, but that's a project for the future.
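Casting a flat shadow from a known light source is a standard planar projection; the sketch below (my own simplified example, not the actual implementation) projects a vertex of a virtual object onto a horizontal ground plane along the light direction measured in real space.

```python
# Sketch of the flat-shadow technique described above: project each
# vertex of a virtual object onto the ground plane (y = plane_y)
# along the direction of a pre-measured real-space light source.

def project_shadow(vertex, light_dir, plane_y=0.0):
    """Project a 3-D vertex onto the plane y = plane_y along light_dir."""
    x, y, z = vertex
    dx, dy, dz = light_dir
    if dy == 0:
        raise ValueError("light direction is parallel to the plane")
    t = (y - plane_y) / dy              # parameter along the light ray
    return (x - t * dx, plane_y, z - t * dz)

# A vertex 2 m above the floor, lit from directly overhead,
# casts its shadow point straight down onto the floor:
project_shadow((1.0, 2.0, 3.0), (0.0, 1.0, 0.0))   # -> (1.0, 0.0, 3.0)
```

Repeating this for every vertex flattens the whole object onto the plane, which is exactly why such shadows cannot drape over uneven surfaces or other objects.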
What about light reflected from the virtual objects?
- Matsui
- We can render reflected light, although it's not as realistic as the reflections in real space. The real-space light source data is input in advance. This is the same mechanism used to create the shadows.
If you pass your hand over a virtual object, the image quality becomes slightly rough, with the image of the hand being prioritized. How is this done?
- Matsui
- By applying the principle of stereo measurement to the hand alone, the system recognizes the hand region from its skin tone and determines how it relates, in depth, to the surrounding CG image.
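The per-pixel decision this implies can be illustrated with a small compositing rule. This is my own hedged reconstruction of the idea, not the actual system: a skin-tone mask plus a stereo depth estimate decide whether the real hand or the CG object is drawn in front at each pixel.

```python
# Illustrative per-pixel occlusion rule (an assumption, not the
# actual implementation): the real hand pixel wins only where it is
# detected as skin AND measured to be nearer than the CG surface.

def composite_pixel(real_rgb, cg_rgb, is_skin, hand_depth, cg_depth):
    """Return the pixel to display. Depths are distances from the
    camera; smaller means nearer."""
    if is_skin and hand_depth < cg_depth:
        return real_rgb                 # the hand occludes the CG object
    return cg_rgb                       # otherwise the CG object is drawn

# A hand at 0.4 m passing in front of a CG object at 0.8 m:
composite_pixel((200, 150, 120), (30, 30, 200), True, 0.4, 0.8)
```

Because the stereo measurement is applied only to the skin-tone region, the mask boundary can be coarse, which would explain the slight roughness the interviewer noticed around the hand.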
User operates a (virtual) copying machine using the HMD
(Monitor displays what the user is seeing)
So the pursuit of image quality that makes virtual objects appear as real as possible continues?
- Matsui
- We would like to continue to pursue technology capable of automatically gathering the information needed to render real space, such as lighting, materials and background objects.
You mean technology that can automatically recognize real space conditions?
- Matsui
- Yes. It's a question of making the MR technology user-friendly, which means not having to spend too much time on position and image-quality alignment. We're searching for the ideal format.
- Aratani
- The ideal format might still be a long way off, but user-friendliness is steadily improving. Before performing position matching, it used to take a lot of time and effort to measure the 3-D placement of the markers set out in real space, and to pre-calibrate the relative positions and orientations of the image sensors and video camcorders mounted on the HMD. But now we can complete the setup simply by recording the markers placed in real space and pressing the "Calculate" button on the GUI (Graphical User Interface). In addition, we've developed an SDK (Software Development Kit) that allows the camcorders' position and orientation data to work smoothly with the rendering application.
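What the "Calculate" button solves is a registration problem: finding the camera pose that best aligns the observed markers with their surveyed real-space positions. A full solution estimates rotation and translation together; the toy sketch below (my own simplification, not the SDK's method) assumes the rotation is already known and recovers only the translation, which reduces to a difference of centroids.

```python
# Toy sketch of marker-based registration (assumption: rotation is
# known and taken as identity). The least-squares translation that
# aligns marker positions measured in the camera frame with their
# surveyed world-frame positions is the difference of the centroids.

def estimate_translation(world_pts, camera_pts):
    """Least-squares translation mapping camera-frame marker
    positions onto world-frame positions (identity rotation)."""
    n = len(world_pts)
    cw = [sum(p[i] for p in world_pts) / n for i in range(3)]    # world centroid
    cc = [sum(p[i] for p in camera_pts) / n for i in range(3)]   # camera centroid
    return tuple(cw[i] - cc[i] for i in range(3))

# Four floor markers, seen from a camera offset by (2, 3, 5):
world = [(0, 0, 0), (2, 0, 0), (0, 2, 0), (2, 2, 0)]
cam   = [(-2, -3, -5), (0, -3, -5), (-2, -1, -5), (0, -1, -5)]
estimate_translation(world, cam)   # -> (2.0, 3.0, 5.0)
```

In practice the orientation must be solved as well, typically from the 2-D marker observations, which is exactly the pre-calibration step the GUI now automates.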

