This is a NOS-HS NORDCORP project with participating research groups at the universities of Gothenburg, Copenhagen and Helsinki.

Project coordinator

Jens Allwood, Professor

SSKKII Interdisciplinary Center, Department of Applied Information Technology
IT Faculty, University of Gothenburg, 412 96 Gothenburg, Sweden

Project partners

Patrizia Paggio and Costanza Navaretta, Senior Researchers

University of Copenhagen, Centre for Language Technology (CST)
Njalsgade 140-142, Bldg 25, 2300 Copenhagen, Denmark

Kristiina Jokinen, Adjunct Professor of Language Technology

(also Visiting Professor, University of Tartu)
University of Helsinki, Institute of Behavioural Sciences
PO BOX 9, FIN-00014 University of Helsinki, Finland

Elisabeth Ahlsén, Professor

SSKKII Interdisciplinary Center & Department of Linguistics, University of Gothenburg
Box 200, SE 405 30 Gothenburg, Sweden


The purpose of the project is to carry out collaborative research involving the development and analysis of multimodal spoken language corpora in the Nordic countries. The corpora are annotated resources in which the various modalities involved in human communication, or human-computer interaction, are recorded and annotated at many different levels. This makes it possible to study how manual gestures, head movements, facial expressions and body posture interact with speech in face-to-face communication. The findings can be used for a number of purposes, among them to develop models of multimodal communication for the design of embodied communicative agents in computer interfaces to databases and of robots. Multimodal corpora for different languages and cultures can also serve as the basis for comparative research, which can inform the design and adaptation of multimodal agents for use in different countries.
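
To make the multi-level annotation concrete, the following is a minimal sketch in Python of time-aligned annotation tiers and of how co-occurrences between modalities can be queried. The tier names and labels are illustrative assumptions, not the project's actual coding scheme.

    from dataclasses import dataclass, field

    @dataclass
    class Interval:
        """One time-aligned annotation: a label spanning [start, end) seconds."""
        start: float
        end: float
        label: str

    @dataclass
    class Tier:
        """One annotation level, e.g. speech transcription or head movement."""
        name: str
        intervals: list = field(default_factory=list)

    def overlapping(a, b):
        """Pairs of intervals from two tiers that overlap in time, e.g. to
        find which head movements co-occur with which spoken words."""
        return [(x, y) for x in a.intervals for y in b.intervals
                if x.start < y.end and y.start < x.end]

    # Illustrative data; the labels are hypothetical, not the project's scheme.
    speech = Tier("speech", [Interval(0.0, 0.4, "ja"), Interval(0.4, 1.1, "precis")])
    head = Tier("head", [Interval(0.1, 0.9, "Nod")])

    for word, move in overlapping(speech, head):
        print(f"'{word.label}' co-occurs with head movement '{move.label}'")

Annotations of this kind are typically produced in dedicated annotation tools (e.g., ANVIL or ELAN) and then exported in a comparable time-aligned form for analysis.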

The project will

  1. further develop research building on the results we have previously obtained in this field.
  2. initiate and pursue closer cooperation with the aim of establishing multimodal corpora for Danish, Swedish, Finnish and Estonian with a set of standardized coding features that will make comparative studies possible.
  3. carry out a number of specified studies testing hypotheses on multimodal communicative interaction.
  4. develop, extend and adapt models of multimodal interactive communication management that can serve as a basis for interactive systems.
  5. apply machine learning techniques in order to test the possibilities for automatic recognition of manual gestures, head movements and facial expressions with different interactive communication functions, as sketched below.
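
As a concrete illustration of point 5, the following minimal sketch trains a standard classifier (scikit-learn) to predict the communicative function of annotated head-movement segments from simple motion features. The features, labels and data are hypothetical placeholders under assumed preprocessing, not the project's actual annotation scheme or results.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Each row describes one annotated head-movement segment:
    # [duration in seconds, mean angular velocity, number of repetitions].
    # These feature choices are illustrative assumptions.
    X = np.array([
        [0.4, 1.2, 2],   # short repeated nod
        [0.3, 1.0, 3],
        [0.8, 0.5, 1],   # slow single head turn
        [0.9, 0.4, 1],
        [0.5, 1.4, 2],
        [0.7, 0.6, 1],
    ])
    # Hypothetical communicative-function labels for each segment.
    y = np.array(["feedback", "feedback", "turn", "turn", "feedback", "turn"])

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=3)
    print("cross-validated accuracy:", scores.mean())

In practice such features would be extracted from the annotated corpora themselves, and the cross-validated accuracy would indicate how well the chosen communicative functions can be recognized automatically.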


Research networks

The project team has a well-established network for cooperation.

Project material

Samples of corpora, transcriptions and coding schemes will be available from this webpage.
