This is a NOS-HS NORDCORP project with participating research groups at the universities of Gothenburg, Copenhagen and Helsinki
Project coordinator: Jens Allwood, Professor
SSKKII Interdisciplinary Center, Department of Applied Information Technology
IT Faculty, University of Gothenburg, 412 96 Gothenburg, Sweden
Project partners: Patrizia Paggio and Costanza Navaretta, Senior Researchers
University of Copenhagen, Centre for Language Technology (CST)
Njalsgade 140-142, Bldg 25, 2300 Copenhagen, Denmark
(Also Visiting professor, University of Tartu)
University of Helsinki, Institute of Behavioural Sciences
PO BOX 9, FIN-00014 University of Helsinki, Finland
SSKKII Interdisciplinary Center & Dept of Linguistics, University of Gothenburg
Box 200, SE 405 30 Gothenburg, Sweden
The purpose of the project is to carry out collaborative research involving the development and analysis of multimodal spoken language corpora in the Nordic countries. The corpora are annotated resources in which the various modalities involved in human communication, or human-computer interaction, are recorded and annotated at many different levels. This makes it possible to study how manual gestures, head movements, facial expressions and body posture interact with speech in face-to-face communication. The findings can be used for a number of purposes, among them to develop models of multimodal communication for the design of embodied communicative agents in computer interfaces to databases and of robots. Multimodal corpora for different languages and cultures can also serve as the basis for comparative research, which can inform the design and adaptation of multimodal agents for use in different countries.
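As a concrete illustration of such multi-level annotation, the sketch below represents each modality as a tier of time-aligned annotations and queries which head movements co-occur with speech. The tier names, labels and time spans are invented for illustration and do not come from the project's actual coding scheme.

```python
# A minimal sketch of tiered multimodal annotation (hypothetical data,
# not the project's real coding scheme).

from dataclasses import dataclass

@dataclass
class Annotation:
    start: float  # seconds
    end: float
    label: str

# One tier per modality, each a list of time-aligned annotations.
tiers = {
    "speech": [Annotation(0.0, 1.2, "yes I think so"),
               Annotation(1.5, 2.0, "mm")],
    "head":   [Annotation(0.1, 0.9, "nod"),
               Annotation(1.4, 2.1, "tilt")],
}

def overlaps(a: Annotation, b: Annotation) -> bool:
    """True if the two annotations overlap in time."""
    return a.start < b.end and b.start < a.end

# Find head movements co-occurring with speech segments.
cooccurrences = [(s.label, h.label)
                 for s in tiers["speech"]
                 for h in tiers["head"]
                 if overlaps(s, h)]
print(cooccurrences)  # [('yes I think so', 'nod'), ('mm', 'tilt')]
```

Tools such as ANVIL or ELAN store annotations in essentially this tier-based, time-aligned form, which is what makes cross-modal queries of this kind possible.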
The project will
- further develop research in this field, building on our earlier results.
- establish and pursue closer cooperation aimed at building multimodal corpora for Danish, Swedish, Finnish and Estonian with a set of standardized coding features that will make comparative studies possible.
- carry out a number of specified studies testing hypotheses on multimodal communicative interaction.
- develop, extend and adapt models of multimodal interactive communication management that can serve as a basis for interactive systems.
- apply machine learning techniques in order to test the possibilities for automatic recognition of manual gestures, head movements and facial expressions with different interactive communication functions.
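The machine-learning step in the last point can be sketched as a simple classifier trained on features extracted from the annotated recordings. The features (vertical vs. horizontal motion amplitude) and labels below are hypothetical stand-ins; a real study would derive them from the annotated video, and would likely use a richer model than the 1-nearest-neighbour baseline shown here.

```python
# A minimal sketch of classifying head movements (nod vs. shake) from
# motion features. All data here is invented for illustration.

import math

# Hypothetical training data:
# (vertical_amplitude, horizontal_amplitude) -> label
train = [
    ((0.8, 0.1), "nod"),
    ((0.7, 0.2), "nod"),
    ((0.1, 0.9), "shake"),
    ((0.2, 0.8), "shake"),
]

def classify(features, train):
    """1-nearest-neighbour classification by Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(train, key=lambda item: dist(features, item[0]))[1]

print(classify((0.75, 0.15), train))  # nod
print(classify((0.15, 0.85), train))  # shake
```

The same pattern scales to the functional categories mentioned above (e.g. feedback-giving vs. turn-managing gestures) by swapping in the corpus's annotated labels and features.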
The project team has a well-established network for cooperation
- with Nordic researchers on Multimodal Interaction from the MUMIN Nordic Network.
- with Nordic researchers on Spoken Language from the NORDTALK Nordic Network (NorFA).
- internationally with researchers on Embodied Communication in Humans and Machines through the ZiF Working Group (2005-2006).
- internationally with researchers on Spoken Language corpora from the WORLDTALK network.
- internationally with the European Network of Excellence SSPNet (Social Signal Processing).
Samples of corpora, transcriptions and coding schemas will be available from this webpage.