• Security: data is accessible to the customer only, files are automatically erased once processed.
• Quality: all orders are manually checked before and after processing to ensure the best quality.
• Performance: support is provided so that each order is fully functional.

Talking Avatar

Facial rigging ready for text-to-speech & video-tracking animation

Transform your 3D head models into expressive talking characters!
Talking Avatar is a service that generates the blendshapes and rigging for your character head, making it fully compliant with most standard Text-to-Speech (Amazon Polly, Oculus Lipsync, IBM Watson) and mobile face-tracking (FaceLink in Unreal, Rokoko for Unity, FaceCap, Hyprface) solutions. From photorealistic Virtual Influencers and stylized VTubers to cartoonish Virtual Assistants: upload your static 3D model and receive a rigged FBX avatar ready to be integrated into your 3D Chatbot and conversational AI agent applications in Unity/Unreal, with full head motion, speech, gaze and emotion capabilities.

21 speech visemes and 52 ARKit/ARCore facial blendshapes

Flexible, robust and easy to use, Talking Avatar smoothly incorporates any pre-computed or real-time animation, no matter the modeling style or the animation technique.

To automatically get your rigged head character in FBX, simply purchase this service, upload your 3D head model and download the result within 24 hours!

If you want to fine-tune the facial animations generated from text-to-speech and/or face tracking for CGI or real-time animation, order the Maya option and you will receive a complete Maya rig in addition to the FBX export.

Download a sample

Option:
  • FBX
  • Maya

Price: €499.00

Presenting the Talking Avatar face rig

Talking Avatar is a service that generates the blendshapes and rigging for your character head, making it fully compliant with most standard Text-to-Speech (Amazon Polly, Oculus Lipsync, IBM Watson) and mobile face-tracking solutions (FaceLink in Unreal Engine, Rokoko for Unity, FaceCap, Hyprface...).

The rig encompasses joints controlling rigid head motion, eye gaze and jaw opening, as well as the 52 ARKit/ARCore facial expression blendshapes and the 21 viseme blendshapes related to speech.
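
As an illustration, here is a minimal Python sketch of the per-frame data such a rig consumes. The field names are hypothetical; the actual joint and blendshape naming follows the ARKit/ARCore nomenclature delivered with the order.

    from dataclasses import dataclass, field

    # Hypothetical per-frame pose for the rig described above: rotations for
    # the joints plus normalized weights in [0, 1] for the 73 blendshapes.
    @dataclass
    class AvatarPose:
        head_rotation: tuple = (0.0, 0.0, 0.0)  # rigid head motion (Euler angles)
        gaze_left: tuple = (0.0, 0.0)           # left eye joint yaw/pitch
        gaze_right: tuple = (0.0, 0.0)          # right eye joint yaw/pitch
        jaw_open: float = 0.0                   # jaw joint opening
        blendshapes: dict = field(default_factory=dict)  # 52 ARKit + 21 viseme weights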

52 ARKit 2.0 / ARCore 1.7 blendshapes for facial tracking

Our set of 52 blendshapes closely follows the ARKit 2 and ARCore 1.7 documentation, including the new target shape attached to the tongue.
The set is delivered with the ARKit or ARCore nomenclature unless you request otherwise, and comes ready to be animated with Apple iPhones embedding the TrueDepth sensor or with Android 7 devices using a standard camera.
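
As a small illustration, the Python sketch below applies one streamed frame of tracking coefficients to the rig's channels. It assumes the capture app emits one JSON object per frame keyed by the ARKit coefficient names; set_blendshape_weight is a hypothetical stand-in for your engine's API.

    import json

    def apply_tracking_frame(frame_json, set_blendshape_weight):
        # Because the rig keeps the ARKit/ARCore nomenclature, the tracker's
        # coefficient names match the blendshape channels one-to-one.
        coefficients = json.loads(frame_json)
        for name, weight in coefficients.items():
            set_blendshape_weight(name, max(0.0, min(1.0, float(weight))))

    # Example frame: a left-eye blink with a slightly open jaw.
    apply_tracking_frame(
        '{"eyeBlinkLeft": 1.0, "jawOpen": 0.25, "tongueOut": 0.0}',
        lambda name, w: print(name, "->", round(w, 2)),
    )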

21 viseme blendshapes for automatic text-to-speech and lipsync, or manual keyframe speech animation

Visemes represent the deformations a face makes while speaking. Visemes are fewer in number than phonemes and depend on the language at hand, but a common set is enough to qualitatively emulate most speech animation driven by natural language processing.
Our set of 21 blendshapes gathers all the widened and rounded consonants, as well as the monophthong vowels, involved in standard text-to-speech engines and lipsync technologies. You can thus use the Microsoft Speech API or Amazon Polly to implement your 3D Chatbot, or rely on the Oculus Lipsync plugin for Unity and Unreal to drive the animation of your Virtual Assistant from plain text or a streamed voice. These inputs can either be predefined or generated by your conversational AI engine, such as IBM Watson or NVIDIA Jarvis.
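
For example, here is a minimal sketch, using Amazon Polly through the boto3 AWS SDK, of fetching the viseme timing events that would drive these blendshapes. The mapping from Polly's viseme codes to the rig's 21 channels is left as a hypothetical lookup.

    import json
    import boto3  # AWS SDK for Python; requires configured AWS credentials

    polly = boto3.client("polly")

    # Request speech marks instead of audio: Polly returns newline-delimited
    # JSON events, each with a millisecond timestamp and a viseme code.
    response = polly.synthesize_speech(
        Text="Hello, I am your talking avatar.",
        VoiceId="Joanna",
        OutputFormat="json",
        SpeechMarkTypes=["viseme"],
    )

    for line in response["AudioStream"].read().decode("utf-8").splitlines():
        mark = json.loads(line)  # e.g. {"time": 125, "type": "viseme", "value": "p"}
        print(mark["time"], mark["value"])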

Naturally, you can combine face tracking with text-to-speech/lipsync animation to mix emotions and speech, and the Maya option also empowers you to keyframe or edit the resulting animation in Maya.
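
One simple way to picture that mix, sketched below in Python: viseme channels come from the lipsync layer while expression channels come from the tracker, combined here with a per-channel maximum. Engines may expose more elaborate blending, and the viseme channel name is hypothetical.

    def mix_layers(tracking, speech):
        # Per-channel maximum: speech weights win on viseme channels,
        # tracked expressions pass through everywhere else.
        mixed = dict(tracking)
        for channel, weight in speech.items():
            mixed[channel] = max(mixed.get(channel, 0.0), weight)
        return mixed

    frame = mix_layers(
        {"mouthSmileLeft": 0.6, "mouthSmileRight": 0.6},  # emotion from tracking
        {"viseme_PP": 0.9},                               # "p" viseme from lipsync
    )
    print(frame)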

Rig any 3D head

Talking Avatar works for any type of 3D character regardless of its morphology, whether it’s a scanned head, a photorealistic head model or a cartoonish character. It preserves the user's topology and fits perfectly within any 3D pipeline.

How does this service work?

All you have to do is purchase this service and upload your 3D model. You will then automatically get it back within 24 hours fully rigged, with the complete set of joints and blendshapes, directly generated onto your mesh and perfectly adapted to the morphology of your character.

FBX & Maya Delivery Options

All users can choose between two delivery options:
- Option FBX: an .fbx file with the 73 rigged blendshapes, importable into any 3D software and engine.
- Option Maya: a package containing the same FBX file plus a rigged Maya scene with the identical set, for keyframe animation or for editing the resulting text-to-speech and face-tracking animation.
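
As a quick sanity check after delivery, a sketch like the following, run inside Blender's Python console as one example host, imports the FBX and counts the blendshape channels (shape keys), which should total 73 on the head mesh. The file path is a placeholder.

    import bpy

    bpy.ops.import_scene.fbx(filepath="/path/to/talking_avatar.fbx")

    for obj in bpy.context.selected_objects:
        if obj.type == "MESH" and obj.data.shape_keys:
            # key_blocks includes the neutral "Basis" shape; exclude it.
            channels = [k.name for k in obj.data.shape_keys.key_blocks
                        if k.name != "Basis"]
            print(obj.name, len(channels), "blendshape channels")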

What should I upload to order Polywink's Talking Avatar?

A 3D model in a neutral pose (eyes open, mouth shut) plus two additional elements: eyes and inner mouth (teeth, gums, tongue).
Additional features such as eyelashes, eyebrows or beards should be included.
Separate geometries are accepted: they will be combined into a single mesh so that the result works with iPhone X and Android face tracking.
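
If you prepare the model in Blender, a small pre-upload check along these lines can confirm the required parts are present. The name keywords are hypothetical; what matters is that the geometry itself exists in the scene.

    import bpy

    REQUIRED_PARTS = ["eye", "teeth", "tongue"]  # plus the head itself

    mesh_names = [o.name.lower() for o in bpy.context.scene.objects
                  if o.type == "MESH"]
    print("Meshes in scene:", mesh_names)

    for part in REQUIRED_PARTS:
        found = any(part in name for name in mesh_names)
        print(part, "found" if found else "MISSING - add before upload")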

Data sheet

Blendshapes: 73
Compatibility: Android 7 with ARCore 1.7 or later
Compatibility: iOS 11 with ARKit 2 or later
Delivery format: .FBX, or .MB + .FBX for the "Maya" option
Delivery time: 24h

Download

POLYWINK_TA_73_SAMPLE

Sample of POLYWINK - Talking Avatar, released under a Creative Commons Attribution-NonCommercial-NoDerivatives license.

Download (96.63M)