Breaking news
The latest results achieved by the project consortium.

Vergence simulator

Figure: (top left) 3D view of a vergence simulation experiment. (top right) Horizontal and vertical disparity maps. (middle right) Disparity maps masked (weighted) by the foveal region. (bottom left) Rendered views from the left and right eyes. (bottom right) Vergence angle plot.

The first version of the vergence simulator is ready. The simulator was originally developed as a testing platform for developing and modeling vergence control strategies: it is much safer (and, in the end, much cheaper) to test the algorithms on a model before switching to a real robotic head setup. At the same time, the simulator gives researchers almost unlimited flexibility. It is easy to adapt it to virtually any task that requires precise information about the 3D scene, which is why we have also used it for synthetic stereo image database generation, ground-truth optical flow computation and disparity statistics estimation.

The simulator consists of three core components:
    - a robotic head model,
    - a model of the environment, and
    - a ray-tracing rendering engine.

Robotic head model
The robotic head model has a number of parameters. Some (e.g. interocular distance (baseline), focal length, camera resolution and camera field of view) are usually set once and kept fixed during a simulation, while others (head position, head orientation, gaze direction and vergence of the “eyes”) can be controlled from outside the model.
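The split between fixed and externally controllable parameters could be sketched as follows. This is a hypothetical Python illustration only (the actual simulator is implemented in Matlab, and all names and default values here are assumptions, not the project's code):

```python
from dataclasses import dataclass

@dataclass
class HeadModel:
    """Illustrative head-model state: fixed camera parameters are set
    once before a simulation; pose and vergence are updated from
    outside the model on every step."""
    # fixed during a simulation
    baseline: float = 0.065            # interocular distance [m]
    focal_length: float = 0.004        # [m]
    resolution: tuple = (320, 240)     # [pixels]
    fov_deg: float = 60.0              # field of view [deg]
    # controllable from outside the model
    position: tuple = (0.0, 0.0, 0.0)        # head position [m]
    orientation: tuple = (0.0, 0.0, 0.0)     # yaw, pitch, roll [deg]
    gaze: tuple = (0.0, 0.0)                 # azimuth, elevation [deg]
    vergence_deg: float = 0.0

    def set_gaze(self, azimuth, elevation, vergence_deg):
        """Update the controllable gaze state."""
        self.gaze = (azimuth, elevation)
        self.vergence_deg = vergence_deg

head = HeadModel()
head.set_gaze(5.0, -2.0, 3.5)
```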

Environment model
In the simulator the environment can be modeled by means of basic 2D primitives (triangles, rectangles), 3D primitives (boxes, tetrahedra) and lights. Primitives can be combined into more complex structures. Objects can be static or moving, and the user has full control over the dynamics of the objects and lights. The same is true for the shading parameters of the objects (color or textures) and the properties of the light sources (color, intensity, type).
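A minimal sketch of such a primitive with simple translation dynamics, again as a hypothetical Python illustration (the actual simulator is Matlab-based; the class and parameter names are invented for this example):

```python
class Primitive:
    """Illustrative scene primitive: a set of 3D vertices with a color
    and a constant translation velocity (static objects use zero velocity)."""
    def __init__(self, vertices, color=(1.0, 1.0, 1.0), velocity=(0.0, 0.0, 0.0)):
        self.vertices = [list(v) for v in vertices]  # mutable copies
        self.color = color
        self.velocity = velocity

    def step(self, dt):
        """Advance the dynamics: translate every vertex by velocity * dt."""
        for v in self.vertices:
            for i in range(3):
                v[i] += self.velocity[i] * dt

# a triangle one unit in front of the cameras, moving along +x at 0.1 units/s
tri = Primitive([(0, 0, 1), (1, 0, 1), (0, 1, 1)], velocity=(0.1, 0.0, 0.0))
tri.step(1.0)
```

More complex dynamics (rotations, externally supplied trajectories) would replace the constant-velocity `step` with an arbitrary user function.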

Ray-tracing engine
This is the most important and most complex component of the simulator. As existing ray-tracers do not give full access to their internal data, we decided to develop our own ray-tracing engine from scratch. Matlab was chosen as the software development environment. On the one hand, this gives us a flexible and convenient interface to all other components and modules (which are implemented mostly in Matlab). On the other hand, Matlab guarantees cross-platform compatibility.
The most important features of the engine are:
    - multiple-view rendering: an arbitrary number of independently configurable (focal length, resolution, field of view) cameras,
    - objects can be shaded with textures or plain colors (useful for fast preview generation),
    - an arbitrary number of independently configurable (position, type, color, velocity) lights,
    - motion of the objects and lights can be controlled by simple rotation/translation velocity parameters or by external functions,
    - the engine can compute accurate horizontal and vertical disparities during the rendering process,
    - the same holds for optical flow.
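The geometry underlying the vergence angle and the disparities mentioned above is standard: for a symmetric fixation at distance d straight ahead with interocular baseline b, the vergence angle is 2·atan(b / 2d), and the angular disparity of another point is the difference of its vergence angle from that of the fixation point. A small Python sketch of this textbook geometry (not the project's actual code):

```python
import math

def vergence_angle(baseline, distance):
    """Vergence angle (degrees) needed to fixate a point at `distance`
    straight ahead, given interocular `baseline` (same units)."""
    return math.degrees(2.0 * math.atan2(baseline / 2.0, distance))

def angular_disparity(baseline, depth, fixation_distance):
    """Angular disparity (degrees) of a point at `depth` on the gaze line
    while the eyes fixate at `fixation_distance`; positive = crossed
    disparity (point nearer than fixation)."""
    return vergence_angle(baseline, depth) - vergence_angle(baseline, fixation_distance)

# fixating a point 0.5 m ahead with a 6.5 cm baseline
theta = vergence_angle(0.065, 0.5)          # about 7.4 degrees
near = angular_disparity(0.065, 0.4, 0.5)   # nearer point: positive disparity
```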

Nikolay Chumerin
Computational Neuroscience Research Group
Laboratorium voor Neuro- en Psychofysiologie
