Spectral models attempt to parameterize sound at the basilar membrane of the ear. Thus, sound representations and transformations in these models should be closely linked to perception. However, the perceived quality is highly dependent on the analysis stage. For decades, researchers have devoted considerable effort to improving the precision of sound analysis, and yet this quality is still not sufficient for demanding applications. One approach is to try to improve the analysis methods even further, though without guarantee of success, since theoretical bounds may exist that indicate the minimal error (i.e. maximal quality) reachable without extra information (blind approach). Another approach is to inject some information. This can be prior knowledge about the sound sources and/or the way the human auditory system will perceive them (computational auditory scene analysis approach). But when access to the compositional process is available, another option is to use some bits of the ground truth as additional information to help the analysis process. This is the concept of “informed analysis” (as opposed to the blind approach), used recently to improve sound source separation. The additional information can be embedded in the sound signal itself, using audio watermarking techniques. The stereo mix can then be stored on a CD-audio, in a manner fully backward compatible with standard CD players, while permitting an enhanced sound analysis thanks to the additional information. The precision of the analysis is improved well beyond the limitations of the blind approach. This helps invert the compositional process, bridging the gap between the composer and the listener, and opens up impressive new applications, such as “active listening”, which enables the listener to interact with the sound while it is played, like a composer of electroacoustic music. The musical parameters (e.g. loudness or spatial location) of the sound entities (sources) present in the musical mix stored on the CD can thus be changed interactively, with high perceived quality.
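The following Python sketch only illustrates the general principle of hiding side information in the audio signal itself. It is a minimal, hypothetical example based on least-significant-bit substitution in 16-bit PCM samples; it is not the watermarking scheme actually used in the DReaM project, which relies on more robust, higher-capacity techniques. The function names, the 8-bit quantized gain values, and the single-channel signal are all assumptions made for illustration.

```python
import numpy as np

def embed_lsb(mix_pcm16, side_bits):
    """Hide side-information bits in the least significant bit of each
    16-bit PCM sample of the mix (a toy, low-capacity watermark)."""
    marked = mix_pcm16.copy()
    n = min(side_bits.size, marked.size)
    marked[:n] = (marked[:n] & ~1) | side_bits[:n]
    return marked

def extract_lsb(marked_pcm16, n_bits):
    """Recover the embedded bits at playback time (e.g. from the CD-audio track)."""
    return (marked_pcm16[:n_bits] & 1).astype(np.uint8)

# Hypothetical side information: per-frame gains of one sound source,
# quantized to 8 bits each before embedding.
rng = np.random.default_rng(0)
mix = (rng.standard_normal(44100) * 3000).astype(np.int16)  # stand-in for one channel of the mix
gains = np.array([200, 180, 90, 255], dtype=np.uint8)
bits = np.unpackbits(gains)

marked = embed_lsb(mix, bits)
recovered = np.packbits(extract_lsb(marked, bits.size))
assert np.array_equal(recovered, gains)  # the decoder recovers the gains exactly
```

In a complete system, parameters recovered this way would feed an informed source-separation stage and let the listener change them interactively; here the embedding merely toggles the least significant bit of each sample, which is essentially inaudible at 16-bit resolution.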
Sylvain Marchand received the M.Sc. degree in algorithmics and the Ph.D. degree, while carrying out research in computer music and sound modeling, from the University of Bordeaux 1, France, in 1996 and 2000, respectively. He was appointed Associate Professor at the LaBRI (Computer Science Laboratory), University of Bordeaux 1, in 2001. He was also a member of the “Studio de Création et de Recherche en Informatique et Musique Electroacoustique” (SCRIME). Since 2011, he has been Professor at the University of Brest, France, where he heads the Image and Sound curriculum. He is the leader of the French ANR DReaM project and a member of the scientific committee of the international conference on Digital Audio Effects (DAFx). He is an active IEEE Senior Member and was an associate editor of the IEEE Transactions on Audio, Speech, and Language Processing. Prof. Marchand is particularly involved in musical sound analysis, transformation, and synthesis. He focuses on spectral representations, taking perception into account. His main research topics include sinusoidal models, analysis/synthesis of deterministic and stochastic sounds, sound localization/spatialization (“3D sound”), separation of the sound entities (sources) present in polyphonic music, and “active listening” (enabling the user to interact with the musical sound while it is played).