VisualVoice / Summary
Working Materials

As described in Background / Outline, the implementation of a talking face has three requirements: to run a face model ready to receive data values, to enable the user to create viseme mappings, and to run the system in performance mode. To gain a better understanding of what needs to be accomplished, it is instructive to examine the limitations of the software and programming languages involved:

i) The DIVA code base - The DIVA project is coded in MAX/MSP, a visual language consisting of objects with inlets and outlets connected by "patch cords". It is well designed for audio processing, but can be difficult to debug and is susceptible to crashing, since many operations often run in parallel.

ii) Artisynth software - Artisynth is a Java project which contains both the backend software implementation and all relevant models, including the KuraFace. The KuraFaceModel is a class in the Artisynth project; the face is run by instantiating this class.

The task of implementing a talking face can be interpreted as a translator, sender and receiver: an object in the DIVA project converts phoneme data to face parameter values and sends these values along, to be received by Artisynth's KuraFace. It is necessary, then, to find a way to communicate values from MAX to Java. In addition, the whole system must be launched from within the DIVA project, and thus the KuraFace must be started from MAX. This presents two challenges:

i) Communicating values from MAX to Java.
ii) Launching Artisynth's KuraFace from MAX.

The difficulty lies in MAX's limitations: MAX's set of included objects neither supports communication with Java nor is capable of starting other system processes. Fortunately, there are tools which allow us to circumvent these difficulties:

i) MXJ - MAX supports the creation of external objects written in Java, using MXJ, a set of Java classes which provide the methods required of MAX objects.

ii) The Java Socket class - Java contains an implementation of the socket connection, allowing a client and server to connect through a socket and pass data to one another.

iii) The Java Process class - Java contains a class capable of starting system processes, as on a command line.

Thus, MXJ can be used to write Java objects in MAX which are capable both of starting a system process and of creating a socket connection to transfer information; a sketch of such an object is given below. We now have all the materials required to tackle the requirements: the user mapping tools and the processing of map files can be implemented in MAX, and an MXJ object can be used to launch Artisynth and connect to it.
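The following is a minimal sketch of such an mxj external in Java: it launches Artisynth as a system process and then connects to it as a socket client, forwarding any list of values arriving at its inlet over the connection. The class name KuraFaceClient, the launch command, the port number and the fixed startup delay are assumptions made for the sake of the example, not the project's actual values.

    import com.cycling74.max.Atom;
    import com.cycling74.max.DataTypes;
    import com.cycling74.max.MaxObject;

    import java.io.PrintWriter;
    import java.net.Socket;

    // Sketch of an mxj external: starts Artisynth, then acts as socket client.
    public class KuraFaceClient extends MaxObject {

        private Process artisynth;   // the Artisynth process started from MAX
        private PrintWriter out;     // text stream over the socket connection

        public KuraFaceClient() {
            declareInlets(new int[] { DataTypes.ALL });   // receives PC value lists
            declareOutlets(new int[] { DataTypes.ALL });  // reports status
        }

        // A "start" message from MAX launches Artisynth and connects to it.
        public void start() {
            try {
                // Launch Artisynth as a system process (hypothetical command).
                artisynth = new ProcessBuilder(
                        "artisynth", "-model", "KuraFaceModel").start();

                // Give the server time to come up, then connect as a client
                // (port 5000 is an assumption; it must match the server side).
                Thread.sleep(3000);
                Socket socket = new Socket("localhost", 5000);
                out = new PrintWriter(socket.getOutputStream(), true);
                post("connected to KuraFace");
            } catch (Exception e) {
                post("could not start or connect to Artisynth: " + e);
            }
        }

        // A list of PC values at the inlet is forwarded as one line of text.
        protected void list(Atom[] values) {
            if (out == null) return;
            StringBuilder line = new StringBuilder();
            for (Atom a : values) line.append(a.getFloat()).append(' ');
            out.println(line.toString().trim());
        }
    }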
Implementation Algorithm

The progress of work towards the implementation of a talking face can be divided into six steps. A short summary of each task is given below; sketches of the step 1 receiver and the step 3 map handling follow the list.

i) Step 1 - set up Artisynth to act as a receiver, starting the KuraFace automatically and waiting as a server for a socket connection (see the first sketch below).

ii) Step 2 - write an mxj object with an inlet for PC values which connects to Artisynth as a client and sends the PC values along.

iii) Step 3 - extend the mxj object from step 2 to read a map file containing phoneme-to-PC vector mappings, use this to convert phoneme values to PC values, and send these to the KuraFace (see the second sketch below).

iv) Step 4 - create an object capable of sending streams of phoneme data to the object from step 3, in order to visually test the face's transitions from one phoneme to the next.

v) Step 5 - create a set of tools enabling the user to create their own viseme mappings and save them as map files.

vi) Step 6 - embed and link the step 3 and step 5 objects in the DIVA project, so that the user can edit and save map files, and then load them in performance mode when starting up the talking face.
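Step 1 concerns the receiving side. Below is a minimal sketch of a server that waits for the MAX client to connect and then reads lines of whitespace-separated PC values; the port number, the line format and the applyPCValues() hook are illustrative assumptions rather than the actual KuraFaceModel interface.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Sketch of the Artisynth-side receiver: waits as a server for the MAX
    // client, then applies each incoming line of PC values to the face.
    public class KuraFaceReceiver implements Runnable {

        private static final int PORT = 5000;   // assumed; must match the MAX client

        public void run() {
            try (ServerSocket server = new ServerSocket(PORT);
                 Socket client = server.accept();            // block until MAX connects
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {

                String line;
                while ((line = in.readLine()) != null) {
                    String[] tokens = line.trim().split("\\s+");
                    double[] pc = new double[tokens.length];
                    for (int i = 0; i < tokens.length; i++) {
                        pc[i] = Double.parseDouble(tokens[i]);
                    }
                    applyPCValues(pc);   // hand the parameters to the face model
                }
            } catch (Exception e) {
                System.err.println("KuraFace receiver stopped: " + e);
            }
        }

        // Placeholder: in the real project this would set the KuraFaceModel's
        // principal-component parameters and update the rendering.
        private void applyPCValues(double[] pc) {
            // kuraFaceModel.setPCWeights(pc);   // hypothetical call
        }
    }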
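The translation in step 3 amounts to loading a map file and looking up each incoming phoneme. The sketch below assumes a simple text format - one phoneme symbol followed by its PC values per line, for example "AA 0.8 -0.2 0.1" - whereas the actual map files written by the step 5 tools may be structured differently. Inside the mxj object, an incoming phoneme would be passed to lookup() and the resulting vector sent over the socket exactly as the raw PC values are in step 2.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.HashMap;
    import java.util.Map;

    // Sketch of the phoneme-to-PC translation: loads a map file and looks up
    // the PC vector associated with a phoneme symbol.
    public class VisemeMap {

        private final Map<String, float[]> map = new HashMap<String, float[]>();

        // Load a map file in which each non-empty line is "<phoneme> <pc1> <pc2> ...".
        public void load(String path) throws Exception {
            BufferedReader in = new BufferedReader(new FileReader(path));
            String line;
            while ((line = in.readLine()) != null) {
                String[] tokens = line.trim().split("\\s+");
                if (tokens.length < 2) continue;   // skip blank or malformed lines
                float[] pc = new float[tokens.length - 1];
                for (int i = 1; i < tokens.length; i++) {
                    pc[i - 1] = Float.parseFloat(tokens[i]);
                }
                map.put(tokens[0], pc);
            }
            in.close();
        }

        // Translate a phoneme symbol into its PC vector (null if unmapped).
        public float[] lookup(String phoneme) {
            return map.get(phoneme);
        }
    }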