Gesturally parameterized sound and video synthesis

Sha Xin Wei - Topological Media Lab, Concordia University, Canada
Date and time
Tuesday, October 4, 2005 at 5:30 PM (coffee & tea at 5:00 PM)
Ca' Vignal - Piramide, Floor 0, Hall Verde
Programme Director
Davide Rocchesso
Publication date
September 6, 2005


Current computer hardware permits the real-time synthesis of time-based media such as video and sound textures based on physical models. These models, like variants of the wave equation, the Navier-Stokes equation for turbulent flow, and richer models used in music synthesis, have dozens to hundreds of continuous parameters. Some of these parameters can be provided by functions of sensor data from cameras or other physical sensors: accelerometers, photometers, force sensors and so forth.
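As a minimal sketch of the idea, the following Python fragment steps a damped 1-D wave equation by finite differences and lets a (simulated) sensor stream modulate two of its continuous parameters, wave speed and damping, frame by frame. The specific mapping in `sensor_to_params` is a hypothetical illustration, not a mapping described in the talk; a real installation would substitute readings from cameras, accelerometers, or force sensors.

```python
import numpy as np

def step_wave(u_prev, u_curr, c, damping, dt=0.01, dx=0.1):
    """One finite-difference step of the damped 1-D wave equation.
    c (wave speed) and damping are continuous parameters that a
    gesture/sensor stream could modulate in real time."""
    lap = np.zeros_like(u_curr)
    lap[1:-1] = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_next = (2 * u_curr - u_prev + (c * dt) ** 2 * lap
              - damping * dt * (u_curr - u_prev))
    u_next[0] = u_next[-1] = 0.0  # fixed boundaries
    return u_next

def sensor_to_params(sensor_value):
    """Hypothetical mapping: a normalized sensor reading in [0, 1]
    (e.g. accelerometer magnitude) drives wave speed and damping."""
    c = 0.5 + 1.5 * sensor_value          # larger gestures -> faster waves
    damping = 2.0 * (1.0 - sensor_value)  # calmer input -> faster decay
    return c, damping

# Excite the medium with a pulse, then run frames while a stand-in
# sensor signal (here a slow sinusoid) drives the parameters.
n = 64
u_prev = np.zeros(n)
u_curr = np.zeros(n)
u_curr[n // 2] = 1.0
for frame in range(100):
    sensor = 0.5 + 0.5 * np.sin(frame / 10.0)  # placeholder for real sensor data
    c, damping = sensor_to_params(sensor)
    u_prev, u_curr = u_curr, step_wave(u_prev, u_curr, c, damping)
```

With dt=0.01 and dx=0.1 the scheme stays within the CFL stability bound for the parameter range above; richer models (Navier-Stokes, physical-model music synthesis) expose far more such parameters, which is where the mapping problem becomes interesting.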

This provides a phenomenologically rich responsive medium and an experimental apparatus for the study of intentional and non-intentional gesture. One technical problem is how to map gesture or movement to rich temporal media in ways that are a-linguistically learnable, yet plausibly rich and expressive.
