
Undergraduate opportunities

  • 496 projects
    • Portable, wireless gesture-to-speech device
    • Max/MSP objects
    • Face synthesis models
  • Summer/NSERC USRA students
  • COGS students

Co-op/Summer student position available

Position: Gesture Interface Developer
Organization: UBC Media and Graphics Interdisciplinary Centre (MAGIC)
Activity: Adaptive control of a Gesture-to-Speech Synthesizer
Work Term: 4 months (Summer 2010), with the possibility of extension


We are creating a new system that uses hand gestures to synthesize audiovisual speech and song by first converting hand gestures into vocal articulator (e.g., tongue, jaw, lip, vocal cord) parameters of a vocal tract model. The system, called DIVA (DIgital Ventriloquized Actor), is adaptive in that it learns the mapping between a person's gestures and the sounds produced. This approach reduces learning time and increases actors' ability to master the production of complex, expressive vocal behaviour. The articulatory synthesis system, ArtiSynth, is used to incorporate new techniques for fast simulation of both vowels and consonants. DIVAs will be used in three composed stage works of increasing complexity, to be performed in Canada and internationally.
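The core idea above — learning a mapping from gesture measurements to articulator parameters — can be sketched in miniature. The following is a hypothetical illustration only, not the actual DIVA implementation: all dimensions, variable names, and the choice of a simple linear least-squares fit are assumptions made for the sake of the example.

```python
import numpy as np

# Hypothetical sketch: learn a linear mapping from hand-gesture features
# (e.g., flex-sensor and hand-position readings) to vocal-articulator
# parameters (e.g., tongue height, jaw opening, lip rounding, pitch).
# Dimensions and data are illustrative, not taken from the real system.

rng = np.random.default_rng(0)

n_samples = 200
n_gesture_dims = 6      # assumed gesture feature count
n_articulator_dims = 4  # assumed articulator parameter count

# Simulated training data: paired gesture readings and target articulator
# parameters, as might be collected while a performer imitates reference
# sounds during a calibration session.
true_map = rng.normal(size=(n_gesture_dims, n_articulator_dims))
gestures = rng.normal(size=(n_samples, n_gesture_dims))
articulators = gestures @ true_map \
    + 0.01 * rng.normal(size=(n_samples, n_articulator_dims))

# Fit the gesture-to-articulator mapping by least squares: this is the
# "learns the mapping" step in its simplest possible form.
learned_map, *_ = np.linalg.lstsq(gestures, articulators, rcond=None)

# At performance time, each incoming gesture frame would be converted to
# articulator parameters that drive the vocal-tract synthesizer.
new_gesture = rng.normal(size=(1, n_gesture_dims))
params = new_gesture @ learned_map
```

A real system would use richer, likely nonlinear models and stream sensor frames in real time; the sketch only shows the shape of the mapping problem.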

This project combines hardware interfacing, computer graphics, physical modeling, machine learning and Java and C coding to create the gesture-to-speech synthesis system.

The successful applicant will be fluent in C/C++ and will have experience with hardware interfacing, software engineering, and user interface design. Strong math skills are important. Familiarity with software development environments (e.g., Eclipse), Mac OS X, Mac sound routines, machine learning, and computer graphics would be an asset, as would some knowledge of linguistics or speech sciences and/or the Max/MSP programming language.

Responsibilities may include:

  1. working with the software development, experimental, and performance teams
  2. interfacing USB and other types of devices in Java and/or C environments
  3. implementing user interface components to make the synthesis system easy to use for performers
  4. making the system robust and mobile for experimental and stage use
  5. implementing the adaptive gesture-to-speech mapping
  6. documenting the architecture of the system and how to use it
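Responsibility 5, the adaptive mapping, could also be done incrementally rather than in one batch fit, so the system keeps adjusting while the performer calibrates. The sketch below is a hedged illustration using a least-mean-squares (LMS) style update; the dimensions, learning rate, and the assumption of a linear mapping are all hypothetical choices for the example.

```python
import numpy as np

# Hypothetical sketch: adapt a gesture-to-articulator mapping online, one
# sample at a time, with a small gradient (LMS-style) step. The "target"
# matrix stands in for the performer's true gesture-to-sound relationship.

rng = np.random.default_rng(1)
n_in, n_out = 6, 4                       # assumed feature/parameter counts
target = rng.normal(size=(n_in, n_out))  # stand-in for the true mapping
W = np.zeros((n_in, n_out))              # mapping being adapted
lr = 0.05                                # illustrative learning rate

for _ in range(2000):
    x = rng.normal(size=n_in)            # one incoming gesture frame
    y = x @ target                       # reference articulator parameters
    err = x @ W - y                      # prediction error for this frame
    W -= lr * np.outer(x, err)           # nudge the mapping toward the target

max_err = np.max(np.abs(W - target))
```

The appeal of an online update is that the mapping tracks the performer during use instead of requiring a separate retraining pass.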

If you have an interest in gesture, speech, and performance, this is an exciting project to be involved in. Hours are flexible and the work environment is academic and informal. This is an excellent opportunity to work on a real-world system with a broad audience, one that will be used to advance both science and music.

Salary: $8,000 to $10,000 per term depending upon qualifications.

Contact: Dr. John Lloyd, Computer Science Dept., 822-4946, or Dr. Sidney Fels, Dept. of Electrical and Computer Engineering, 822-5338.