Welcome to the EYESHOTS Project, a collaborative three-year project funded by the European Commission through its Cognitive Systems, Interaction, Robotics Unit (E5) under the Information and Communication Technologies (ICT) programme.
The project started from the idea of investigating the cognitive value of eye movements during active exploration of the peripersonal space. In particular, we argued that, to interact effectively with the environment, humans use complex motor strategies at the ocular level (possibly extended to other body parts, e.g., the head and arms, thus exploiting multimodal feedback) to extract information for building representations of 3D space that are coherent and stable over time.
All the EYESHOTS processing modules build on distributed representations in which sensory and motor aspects coexist, explicitly or implicitly. The models rely on a hierarchy of learning stages at different levels of abstraction, ranging from the coordination of binocular eye movements (e.g., learning disparity-vergence servos), to the definition of contingent saliency maps (e.g., learning object detection properties), up to the development of a sensorimotor representation for bidirectional eye-arm coordination. In our opinion, this can be considered an interesting methodological result of the project. Distributed coding, indeed, avoids a strict sequentialization of sensory and motor processes, which is desirable for the development of cognitive abilities at a pre-interpretative (i.e., sub-symbolic) level, e.g., when a system must learn binocular eye coordination, handle the inaccuracies of its motor system, and actively measure the space around it.
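As a rough illustration of the first learning stage mentioned above, a disparity-vergence servo can be thought of as a feedback loop that adjusts the vergence angle in proportion to the measured binocular disparity, driving the disparity of the fixated target toward zero. The following is a minimal sketch under simplified assumptions (the sensor model, the small-angle geometry, and the gain value are our own illustrative choices, not the project's actual implementation):

```python
def measured_disparity(target_depth, vergence_angle):
    # Hypothetical sensor model: disparity is proportional to the mismatch
    # between the current vergence angle and the angle that would fixate
    # the target (small-angle approximation, unit baseline).
    ideal_angle = 1.0 / target_depth
    return ideal_angle - vergence_angle

def vergence_servo(target_depth, gain=0.5, steps=50):
    # Proportional control loop: each step reduces the residual disparity
    # by a factor of (1 - gain), so it converges geometrically to zero.
    angle = 0.0
    for _ in range(steps):
        d = measured_disparity(target_depth, angle)
        angle += gain * d
    return angle

# After convergence the residual disparity is near zero.
angle = vergence_servo(target_depth=2.0)
print(abs(measured_disparity(2.0, angle)) < 1e-3)  # → True
```

In the project itself, such a servo is *learned* rather than hand-tuned, and the disparity signal comes from distributed binocular representations rather than an analytic sensor model.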
This website is a repository of downloadable material and a showcase for the main results and research products of the project.
[more details on the project's key research actions]
Funding: European Commission (FP7-ICT, grant no. 217077)
Start date: 1 Mar 2008