Breaking news
The latest results achieved by the project consortium.

A Virtual Reality Simulator for Active Vision Systems

An indoor scene acquired by the Konica Minolta Vivid 910 laser scanner. For a given fixation point, the active vision simulator provides the anaglyph (obtained by superimposing the left and right views) and the horizontal and vertical ground-truth disparity maps.

The novelty of the proposed approach is to use virtual reality as a tool to simulate the behavior of physical systems, in particular the visual system of a robot, rather than to render visual information perceptually for human users. We have developed a tool to simulate a realistic virtual environment that can be used:

  • To assist in the design of the kinematics of the robot's vision system (e.g. a stereoscopic camera pair).

  • To assist in the design of the interaction between the movements of the stereoscopic camera pair and the visual information acquired by the system.

Algorithmic benchmarks
In 3D computer vision, and in stereoscopic vision in particular, it is extremely important to assess progress in the field quantitatively. In particular, it is important to have:

  • Ground truth data (disparities) for each possible left/right image pair

  • Depth of each point of the scene to benchmark vergence/version algorithms

Behavioral benchmarks
In the design and implementation of active vision systems it is important to assess the performance of the robotic head. In particular, it is important to be able to reproduce a task while varying:

  • The observed scene and its parameters

  • The parameters of the vision system

  • The strategies implemented to solve the given task.

The simulator is implemented using the C++ programming language, the OpenGL libraries and the Coin3D toolkit (www.coin3d.org). To obtain a stereoscopic visualization of the scene suitable for mimicking an active stereo vision system, rather than for making a human perceive stereoscopy, we have modified the SoCamera node of the Coin3D toolkit. In this way we have obtained a fast tool, capable of handling the commonly used 3D modeling formats (e.g. VRML and OpenInventor) and the data acquired by a three-dimensional laser scanner (Konica Minolta Vivid 910). Moreover, the tool allows access to the buffers used for the 3D rendering of the scenes.
The figure shows an indoor scene acquired by the 3D laser scanner. The 3D data and the textures are loaded into the virtual simulator; the left and right projections and the horizontal and vertical ground-truth disparity maps are then obtained for each possible fixation point.

The tool is currently used to create a database of stereo pairs, with data about the vergence point and the ground-truth disparities, available at www.pspc.unige.it/Research/vr.html.

    M. Chessa, S.P. Sabatini, F. Solari
    Department of Biophysical and Electronic Engineering (DIBE), University of Genoa
