information

Type: Seminar / Conference
Performance location: Ircam, Salle Igor-Stravinsky (Paris)
Duration: 01 h 01 min
Date: July 1, 2015

For sonic interactive systems, defining user-specific mappings between the sensors that capture a performer's gestures and the parameters of a sound engine can be a complex task, especially when a large network of sensors controls a high number of synthesis variables. Generative techniques based on machine learning can compute such mappings only if users provide a sufficient number of examples embedding an underlying learnable model. Instead, combining automated listening with unsupervised learning techniques can minimize the effort and expertise required to implement personalized mappings, while raising the perceptual relevance of the control abstraction. The vocal control of sound synthesis is presented as a challenging context for this mapping approach.
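As a rough illustration of the pipeline described above, the sketch below pairs an automated-listening stage (spectral descriptors extracted from a vocal recording) with an unsupervised learner (here PCA) that derives a low-dimensional control space without any labeled gesture-to-sound examples. The library choices (librosa, scikit-learn), the descriptor set, the file name, and the number of synthesis parameters are all assumptions made for illustration, not the actual system presented in the seminar.

```python
# A minimal sketch, assuming Python with librosa and scikit-learn;
# file name, feature set, and parameter count are hypothetical.
import numpy as np
import librosa
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler

N_SYNTH_PARAMS = 4  # assumed number of synthesis variables to control


def listen(audio, sr):
    """Automated listening: per-frame timbral descriptors of the voice."""
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    centroid = librosa.feature.spectral_centroid(y=audio, sr=sr)
    return np.vstack([mfcc, centroid]).T  # shape: (frames, descriptors)


def learn_mapping(features):
    """Unsupervised learning: no labeled examples needed; PCA finds a
    low-dimensional control space from descriptor statistics alone."""
    in_scaler = MinMaxScaler().fit(features)
    pca = PCA(n_components=N_SYNTH_PARAMS).fit(in_scaler.transform(features))
    # Rescale the projected space so every control lands in [0, 1].
    out_scaler = MinMaxScaler().fit(
        pca.transform(in_scaler.transform(features)))
    return in_scaler, pca, out_scaler


def to_synth_params(in_scaler, pca, out_scaler, frame):
    """Project one descriptor frame onto [0, 1]^N synthesis controls."""
    z = pca.transform(in_scaler.transform(frame[None, :]))
    return np.clip(out_scaler.transform(z)[0], 0.0, 1.0)


# Usage: fit on a short vocal recording, then drive a synth frame by frame.
audio, sr = librosa.load("voice_example.wav", sr=None, mono=True)
feats = listen(audio, sr)
in_scaler, pca, out_scaler = learn_mapping(feats)
params = to_synth_params(in_scaler, pca, out_scaler, feats[0])
print(params)  # e.g. four normalized values to send to the sound engine
```

Because the mapping is learned from the statistics of the performer's own voice rather than from hand-labeled input-output pairs, each user obtains a personalized control abstraction at negligible annotation cost, which is the point of the unsupervised approach sketched here.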
