UCSF - Synthetic Speech from Neural Decoding

UCSF decodes neural activity to produce synthesized speech

Our goal was to help UCSF show how the muscles of the vocal tract move in concert to create speech, which in this case took the form of a synthesized voice.

Production Stages

  1. Process the audio

    We received the audio from UCSF along with transcripts describing the content of the synthetic voice recordings (a first processing sketch follows this list).

  2. Develop the look

    Together with UCSF, we designed the look of the vocal tract. The goal was a visual style that matched the feel of the synthesized voice.

  3. Animate the model

    By processing the synthesized-voice audio clips we were able to automatically animate our chosen model of the vocal tract (see the second sketch after this list).

  4. Rendering and compositing

    The rendering was done in multiple passes that were composited together to arrive at the final “diagnostic” visual style we were aiming for (see the third sketch after this list).
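
As an illustration of stage 1, here is a minimal sketch of one way a clip can be reduced to animation-friendly data: read the audio and compute a per-frame loudness envelope. The WAV reader, the mono 16-bit assumption, and the function name are ours for illustration, not UCSF's actual pipeline.

```python
import wave

import numpy as np

def loudness_envelope(path, frame_ms=10.0):
    """Per-frame RMS loudness of a mono 16-bit WAV clip."""
    with wave.open(path, "rb") as wav:
        sr = wav.getframerate()
        raw = wav.readframes(wav.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0
    hop = int(sr * frame_ms / 1000.0)           # samples per analysis frame
    n = len(samples) // hop
    frames = samples[: n * hop].reshape(n, hop)
    return np.sqrt((frames ** 2).mean(axis=1))  # one RMS value per frame
```

Each envelope can then be paired with its transcript line, so every animated shot stays traceable to the sentence it voices.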
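
For stage 3, the second sketch shows a common way to drive animation automatically from audio: smooth the envelope, normalize it, and resample it into keyframes for a jaw-open style control. The control name, the 24 fps rate, and the smoothing window are assumptions standing in for whatever rig the vocal-tract model actually used.

```python
import numpy as np

def envelope_to_keys(envelope, fps=24.0, frame_ms=10.0, smooth=5):
    """Convert a per-frame loudness envelope into (frame, value)
    keyframes for a jaw-open control: 0.0 closed, 1.0 fully open."""
    kernel = np.ones(smooth) / smooth
    env = np.convolve(envelope, kernel, mode="same")  # de-jitter
    env = env / max(env.max(), 1e-9)                  # normalize to [0, 1]
    t_audio = np.arange(len(env)) * (frame_ms / 1000.0)
    t_anim = np.arange(0.0, t_audio[-1], 1.0 / fps)
    values = np.interp(t_anim, t_audio, env)          # resample to anim fps
    return list(enumerate(values))                    # [(frame, openness), ...]
```

In production the resulting keys would be written onto the model's rig inside the animation package, and the same envelope could drive several articulators with different gains.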
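
Finally, a third sketch of the multi-pass idea behind stage 4: each render pass is a float image, and the composite layers them into the final frame. The specific passes (beauty, glow, wireframe) and blend weights are assumptions chosen to evoke a diagnostic look; the project's actual pass breakdown isn't specified in the write-up.

```python
import numpy as np

def composite(beauty, glow, wireframe, glow_gain=0.6, wire_opacity=0.3):
    """Blend float RGB passes (H x W x 3 arrays in [0, 1]) into one frame:
    additive glow on top of the beauty pass, then a wireframe overlay."""
    out = beauty + glow_gain * glow                          # additive glow
    out = (1.0 - wire_opacity) * out + wire_opacity * wireframe
    return np.clip(out, 0.0, 1.0)
```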
