Overview
The objective of the project is to define new models and implement advanced tools for audio-video analysis, synthesis and representation, providing essential technologies for the implementation of large-scale virtual and augmented environments. The work aims to make man-machine interaction as natural as possible, grounded in everyday human communication through speech, facial expressions and body gestures. Man-to-machine interaction will be based on coherent analysis of the audio and video channels to perform either low-level tasks or high-level interpretation and data fusion, such as speech emotion understanding or facial expression classification. Machine-to-man interaction, on the other hand, will be based on human-like audio-video feedback simulating a “person in the machine”. A common software platform will be developed by the project for the creation of Internet-based applications. A case study application will be developed, demonstrated and evaluated.
Partners
Curtin University of Technology
Australia
www.computing.edu.au
École Polytechnique Fédérale de Lausanne, Virtual Reality Laboratory (VRLab)
Switzerland
vrlab.epfl.ch
Elan Informatique
France
www.elan.fr
Heudiasyc Centre National de la Recherche Scientifique
France
www.hds.utc.fr
Informatics and Telematics Institute
Greece
www.iti.gr
Linköpings universitet
Sweden
www.isy.liu.se/en
MIRALab, University of Geneva
Switzerland
www.miralab.ch
Tecnologia Automazione Uomo scrl
Italy
www.tau-online.it
Umeå University
Sweden
www.umu.se
Università di Genova – Dipartimento di Informatica Sistemistica e Telematica
Italy
www.dist.unige.it
University of Maribor
Slovenia
www.dsplab.uni-mb.si
Universitat Politècnica de Catalunya (UPC)
Spain
www-tsc.upc.es
Winteractive
France
www.winteractive.fr