Modeling sound gestures in multimodal human-computer interaction.

Speaker
Antonio Rodà - Università di Udine
Date and time
Tuesday 10 July 2007 at 5:00 PM - Aula Verde (COVAR series)
Venue
Ca' Vignal - Piramide, Floor 0, Sala Verde
Contact
Marco Squassina
Publication date
27 June 2007

Abstract

Multimodal interfaces, because they provide the user with multiple modes of interaction with a system, require combining signals acquired from different kinds of sensors. The talk will focus on an interaction paradigm that combines movement and sound in a multimodal environment. This paradigm is based on extending the concept of gesture to sounds, and it is validated by several experiments investigating the communicative capabilities of sound gestures. The use of multivariate analysis and pattern recognition methods for defining a semantic space, in which qualities of sound and gesture can be represented at a higher level of abstraction, will also be discussed.
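As a minimal sketch of the kind of analysis mentioned above, the snippet below projects combined sound and gesture descriptors onto a low-dimensional space using principal component analysis; the feature names, the synthetic data, and the choice of scikit-learn's PCA are illustrative assumptions, not the descriptors or methods presented in the talk.

# Minimal sketch: deriving a low-dimensional "semantic" space from fused
# sound and gesture descriptors via PCA. All data and feature names here
# are hypothetical placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical per-performance descriptors: three sound features
# (e.g. loudness, brightness, roughness) and three gesture features
# (e.g. velocity, acceleration, extent) for 50 recorded performances.
sound_features = rng.normal(size=(50, 3))
gesture_features = rng.normal(size=(50, 3))

# Fuse the two modalities into a single observation matrix.
X = np.hstack([sound_features, gesture_features])

# Standardize each descriptor, then keep the first two principal
# components as axes of an abstract semantic space.
X_std = StandardScaler().fit_transform(X)
semantic_space = PCA(n_components=2).fit_transform(X_std)

print(semantic_space.shape)  # (50, 2): one point per performance

In such a space, performances with similar expressive qualities would ideally cluster together, which is what makes the lower-dimensional representation useful as a higher level of abstraction.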





