| Publisher | RF Solutions |
| --- | --- |
| File size | 244.02 MB |
| Number of files | 419 |
| Latest version | 1 |
| Latest release date | 2024-10-21 11:58:14 |
| First release date | 2024-01-22 12:10:16 |
| Supported Unity versions | 2018.4.2 or higher |
This is the next step in the evolution of the depth-sensor examples (incl. "Azure Kinect and Femto Bolt Examples", "Kinect-v2 Examples", etc.). Instead of a depth sensor, this asset uses a plain web camera or a video recording as input, and AI models to provide depth estimation, body tracking and other data streams. The package contains over thirty demo scenes.
The avatar demo scenes show how to use user-controlled avatars in your scenes, the gesture demo shows how to use discrete and continuous gestures in your projects, the fitting-room demos show how to overlay or blend the user's body with virtual models, the background-removal demo shows how to display user silhouettes on a virtual background, etc. Short descriptions of all demo scenes are available in the online documentation.
This package works with plain web cameras and with video clips that can be played by the Unity video player. It can be used with all versions of Unity (Free, Plus & Pro).
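For orientation, here is a minimal sketch of how a webcam feed is typically captured as a texture in Unity, using the standard WebCamTexture API. This is not the package's own input pipeline (the asset ships its own manager components for camera and video input); the class and field names below are hypothetical.

```csharp
// Minimal webcam-capture sketch, independent of this package.
// Attach to a GameObject and assign a RawImage to display the feed.
using UnityEngine;
using UnityEngine.UI;

public class WebcamFeed : MonoBehaviour
{
    public RawImage targetImage;       // UI element that displays the camera image
    private WebCamTexture camTexture;

    void Start()
    {
        // Open the default web camera and start streaming frames into a texture.
        camTexture = new WebCamTexture();
        targetImage.texture = camTexture;
        camTexture.Play();
    }

    void OnDestroy()
    {
        // Release the camera when the object is destroyed.
        if (camTexture != null)
            camTexture.Stop();
    }
}
```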
How to run the demo scenes:
1. Create a new Unity project (Unity 2023.2 or later, required by Sentis).
2. Open the Package Manager window in the Unity editor, click the '+' button, select 'Install package by name' from the menu and enter 'com.unity.sentis', then press Enter or click 'Install'. This installs Sentis, the Unity package for AI model inference.
3. Import this package into the Unity project.
4. Open 'File / Build Settings' and switch the platform to 'PC, Mac & Linux Standalone', with target architecture 'Intel 64-bit'.
5. In 'Build Settings' click the 'Player Settings' button and make sure 'Color Space' is set to 'Gamma'.
6. Check that 'Direct3D11' is the first option in the 'Auto Graphics API for Windows' list, in 'Player Settings / Other Settings / Rendering'. The editor-script sketch after this list can verify steps 4 to 6.
7. First, open and run a demo scene that includes body tracking, from a subfolder of the 'ComputerVisionExamples/DemoScenes' folder (e.g. the AvatarDemo1 or OverlayDemo2 scene). Stand in front of the camera to calibrate. This is needed only once, to estimate the camera's intrinsic parameters.
8. Open and run a demo scene of your choice from a subfolder of the 'ComputerVisionExamples/DemoScenes' folder. Short descriptions of all demo scenes are available in the online documentation.
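If you want to double-check the project settings from steps 4 to 6, the editor-only sketch below reads them via Unity's PlayerSettings API and logs the result. It assumes a Windows 64-bit standalone target; the class name CveSetupCheck and the menu path are made up for this example and are not part of the package.

```csharp
// Editor-only sketch (place under an 'Editor' folder in your project).
// It only checks the settings from steps 4-6 above and logs the result.
using UnityEditor;
using UnityEngine;
using UnityEngine.Rendering;

public static class CveSetupCheck
{
    [MenuItem("Tools/Check CVE Project Settings")]
    public static void Check()
    {
        // Step 4: the active build target should be 64-bit standalone.
        BuildTarget target = EditorUserBuildSettings.activeBuildTarget;
        Debug.Log("Build target: " + target +
            (target == BuildTarget.StandaloneWindows64 ? " (OK)" : " (expected StandaloneWindows64)"));

        // Step 5: the color space should be Gamma (Linear is not supported yet).
        Debug.Log("Color space: " + PlayerSettings.colorSpace +
            (PlayerSettings.colorSpace == ColorSpace.Gamma ? " (OK)" : " (expected Gamma)"));

        // Step 6: Direct3D11 should be the first graphics API for Windows standalone.
        GraphicsDeviceType[] apis = PlayerSettings.GetGraphicsAPIs(BuildTarget.StandaloneWindows64);
        bool d3d11First = apis.Length > 0 && apis[0] == GraphicsDeviceType.Direct3D11;
        Debug.Log("First graphics API for Windows: " +
            (apis.Length > 0 ? apis[0].ToString() : "none") +
            (d3d11First ? " (OK)" : " (expected Direct3D11)"));
    }
}
```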
Current limitations:
1. Because of the intensive GPU utilization, please use the package on desktop platforms only for now.
2. Please don't use the Linear color space in your project for now. It causes issues with the AI model inference.
3. If possible, please avoid tracking more than one user at the moment.
One request:
Please don't share this package or its demo scenes in source form with others, or as part of public repositories, without my explicit consent.
Troubleshooting:
* Please note that this is the first release of the CVE package, so don't expect everything to be perfect. If you get any issues or errors, please contact me to report them, then have some patience until they get resolved.
* If you get errors in the console, like 'Texture2D' does not contain a definition for 'LoadImage' or 'Texture2D' does not contain a definition for 'EncodeToJPG', please open the Package Manager, select 'Built-in packages' and make sure the 'Image Conversion' and 'Physics 2D' packages are enabled.
* For other known issues, please look here.
Documentation:
* The basic documentation is available in the Readme-pdf file in the package.
* The online documentation is available here.
Third-Party Software:
This asset uses AI models for monocular depth estimation under the MIT License, as well as AI models and scripts for detecting the landmarks of human bodies in an image or video, under the Apache License, Version 2.0. See the 'Third-Party-Notices.txt' file in the package for details.