Publisher | Tiny Angle Labs
---|---
File size | 24.85 MB
Number of files | 18
Latest version | 1
Latest release date | 2023-08-14 02:01:37
First release date | 2019-07-17 08:21:07
Supported Unity versions | 2018.4.2 or higher
SpeechBlend provides accurate, real-time lip syncing in Unity.
SpeechBlend analyzes the audio from any Audio Source and uses machine learning to predict realistic mouth shapes (visemes).
Currently the following viseme blendshape sets are supported:
- Daz Studio (Genesis 2/3/8)
- Character Creator 3
- iClone (v5.x/v6.x)
- Adobe Fuse (Mixamo Rigging)
- Any character model with similar blendshapes to the above
Now with WebGL Support!
To use SpeechBlend, just drop the component onto your character, select the voice audio source and the head mesh blendshapes, and you're ready to go!
SpeechBlend can also drive a single jaw joint or "mouth open" blendshape for simple mouth tracking of your audio. For realistic lip syncing, a character model with viseme blendshapes is recommended.
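The simple "mouth open" mode described above amounts to mapping short-window audio loudness to a blendshape weight. A minimal sketch of that idea (not SpeechBlend's actual implementation; the function name and `sensitivity` parameter are hypothetical):

```python
import math

def mouth_open_weight(samples, sensitivity=4.0):
    """Map a short window of audio samples (floats in [-1, 1])
    to a 'mouth open' blendshape weight in [0, 100]."""
    if not samples:
        return 0.0
    # Root-mean-square loudness of the window
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    # Scale and clamp to the 0-100 range Unity blendshape weights use
    return min(100.0, rms * sensitivity * 100.0)

# Silence keeps the mouth closed; a loud window pins it open
print(mouth_open_weight([0.0] * 256))        # 0.0
print(mouth_open_weight([0.5, -0.5] * 128))  # 100.0 (clamped)
```

In Unity you would feed each frame's audio window into a function like this and apply the result with `SkinnedMeshRenderer.SetBlendShapeWeight` (or a jaw-joint rotation); viseme-based tracking replaces this single loudness value with a predicted weight per viseme blendshape.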
You can even lip-sync your own voice live with microphone input! Check out the included demo to see how.
Many options are available to tweak the viseme prediction to get the realistic look you want at the right performance level.