VisualVoice / AdaptiveControl


Current work on sound synthesis includes re-implementing the machine learning component and the adaptive interface mapping from a learned database to target vowel production. In parallel, we are developing a system that connects data from the gestural controllers to an articulated face model built in the ArtiSynth modelling framework. A dictionary of phonemes and facial configurations, combined with interpolation schemes, will provide the coordinated mapping between hand gesture, acoustics, and visible facial movement.
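The dictionary-plus-interpolation idea can be illustrated with a minimal sketch. The phoneme entries, parameter names, and numeric values below are all hypothetical placeholders, not the project's actual data; the sketch only shows how a lookup table of facial configurations could be blended to produce intermediate face poses.

```python
# Hypothetical phoneme-to-facial-configuration dictionary: each vowel maps to
# a vector of facial articulation parameters (e.g. jaw opening, lip
# protrusion, lip spread). The values are illustrative only.
PHONEME_CONFIGS = {
    "a": [0.9, 0.1, 0.3],
    "i": [0.2, 0.0, 0.9],
    "u": [0.3, 0.8, 0.1],
}

def interpolate_configs(p0, p1, t):
    """Linearly blend the configurations for phonemes p0 and p1.

    t in [0, 1]: 0 gives p0's configuration, 1 gives p1's.
    A real system might use cosine or spline interpolation instead.
    """
    c0, c1 = PHONEME_CONFIGS[p0], PHONEME_CONFIGS[p1]
    return [(1.0 - t) * a + t * b for a, b in zip(c0, c1)]

# A face pose halfway between /a/ and /i/:
blend = interpolate_configs("a", "i", 0.5)
```

In a full pipeline, `t` would be driven by the gestural controller data, and the interpolated parameter vector would be applied to the articulated ArtiSynth face model at each animation frame.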