TIGA Games Industry Awards 2012 Finalist


Speech Graphics was honoured to be a finalist at the TIGA Games Industry Awards 2012.

As TIGA puts it on their website: “As trade body for the UK Games industry, we’ve set up our awards with a difference. As well as focusing on the best games in the industry over the last 12 months, we want to highlight best practice and reward those in the industry contributing to its long-term innovation.”
More details of the awards can be found on TIGA’s website.

Speech Graphics at the Symposium for Facial Analysis and Animation 2012

Speech Graphics was out in force last week in Vienna at the 3rd International Symposium on Facial Analysis and Animation. As a co-sponsor of the event, we were happy to see so many researchers from academia and industry gather in an impressive setting to discuss the latest developments in facial tracking, facial retargeting, facial scanning, and many other topics related to facial animation. We are extremely pleased to be part of such a strong community and look forward to participating in future events.

Interview with Gregor Hofer at GDC 2012

Scottish Development International interviewed Gregor about Speech Graphics at the 2012 Game Developers Conference in San Francisco in March.

In the Light Stage

Our team recently paid a visit to the Institute for Creative Technologies (ICT) at USC in Los Angeles and discussed the future of facial synthesis and animation with research scientists there working on virtual humans, simulation, and graphics. This photo of Gregor Hofer and Michael Berger was taken in one of several Light Stages used to create photorealistic 3D characters; this particular Light Stage is dedicated to facial capture. We look forward to future collaborations with ICT.

Automating nonverbal facial animation

Some folks at Icrontic e-mailed us this suggestion:

“Some friends and I had a brief discussion about the technology you are developing. This idea came up: ‘I wonder if they could work with linguistic experts and statisticians to develop databases that essentially mapped common facial expressions to common verbal expressions. For example, you see something cute and say “awwww”: your eyebrows lift and you make the “isn’t that cute” face. If they could automate that and create an API for it along with their lower jaw/mouth/tongue animations, then I think they would be in serious business here.’ And a colleague responded: ‘The man you would be looking for is Dr. Paul Ekman and his creation, FACS. He figured out the emotional significance of every facial expression and mapped it to the specific muscles involved.’ So there you have it; perhaps this idea helps you.”

Yes! There is definitely a need to automate non-verbal facial activity as well, especially in the upper face, and in fact this is something we’re actively working on, based on the correlations between speech and non-speech behavior that you rightly point out. And you’re right: Ekman is definitely the man in terms of understanding the muscular composition of these expressions and their connection to psychological states. From there, the trick is to get reasonable and robust predictions of nonverbal events from speech. We are open to using speech-based information from various levels (acoustics, syntax, lexical semantics) in a statistical framework. In addition, we need to be able to synthesize those events with natural-looking dynamics.
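To make that concrete, here is a minimal, purely illustrative sketch of the kind of speech-to-expression mapping under discussion: a crude acoustic heuristic that flags sharp pitch rises (a common correlate of prosodic emphasis) and maps them to the FACS Action Units for a brow raise. The function name, thresholds, and toy pitch contour are hypothetical choices for this example only, not a description of our production pipeline.

import numpy as np

# FACS Action Units for a brow raise (per Ekman's coding system):
# AU1 = inner brow raiser, AU2 = outer brow raiser.
BROW_RAISE_AUS = ("AU1", "AU2")

def detect_brow_raises(f0, frame_rate=100.0, rise_threshold=20.0, window=0.3):
    """Flag frames where pitch rises sharply, as a crude proxy for the
    prosodic emphasis that often co-occurs with eyebrow raises.

    f0             : array of fundamental frequency values in Hz (0 = unvoiced)
    frame_rate     : number of f0 frames per second
    rise_threshold : minimum rise in Hz across the window to fire an event
    window         : look-ahead span in seconds
    """
    hop = max(1, int(window * frame_rate))
    events = []
    for i in range(len(f0) - hop):
        # Only compare voiced frames, and fire when the rise is steep enough.
        if f0[i] > 0 and f0[i + hop] > 0 and f0[i + hop] - f0[i] > rise_threshold:
            events.append({"time": i / frame_rate, "aus": BROW_RAISE_AUS})
    return events

# Toy demo with a synthetic pitch contour rising from 120 Hz to 200 Hz.
t = np.linspace(0.0, 1.0, 100)
f0 = 120.0 + 80.0 * t
for event in detect_brow_raises(f0)[:3]:
    print("%.2fs -> raise brows (%s)" % (event["time"], ", ".join(event["aus"])))

In a real system, the hand-tuned threshold would be replaced by a model trained on recorded speech paired with facial motion data, and overlapping detections would be merged into single, well-timed events before driving the rig.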

This is certainly an easier problem than speech (which has insane dynamical properties and also has to be perfectly in sync with an audio channel), but it still demands really good fidelity to be convincing to us humans, who are innately specialized in reading the tiniest movements on faces. It will certainly be a nice complement to our existing capabilities with lip sync.

Some other near-term strategies for the upper face are close on the horizon, so stay tuned!

We’re set to Launch at GDC 2012

We are having our big launch at the Game Developers Conference Expo in San Francisco (March 7–9). We’ll be demonstrating our technology producing high-fidelity lip sync in a wide variety of languages. We look forward to presenting our new service offerings and our plan to transform the way the game industry thinks about speech.

Come visit us at stand #1843.