... It works by analyzing acoustic features to create a stream of animation data that is then mapped onto a character’s facial poses. The result, Nvidia says, is “accurate lip-sync and emotional expressions,” and the imagery can be rendered offline for pre-scripted content or streamed in real time for dynamic characters. ...