ABOUT THIS SOFTWARE
FaceRig is a program enabling anyone with a webcam to digitally embody awesome characters. It is meant to be an open creation platform so everybody can make their own characters, backgrounds or props and import them into FaceRig.
FaceRig is being developed by Holotech Studios, and the image-based face tracking SDK it uses is provided by ULSee Inc. (In the past the tracking technology was provided by Visage Technologies.)
FaceRig has three versions:
- FaceRig Classic is the base version of FaceRig. It allows home non-profit use and even limited monetization on YouTube/Twitch or similar, as long as the commercial aspect is not significant. We consider use commercially significant on any avenue (channel) that generates more than $500 in monthly revenue. This includes ad-based revenue and voluntary donations (one-time or recurring, such as Patreon). If you are under this threshold you can use Classic; if you are above it, you need to upgrade to Pro or Studio.
- the FaceRig Pro DLC allows you to monetize videos on YouTube/Twitch regardless of monthly revenue, but not content exclusive to paid/subscription-based services such as YouTube Red (for content that is exclusive to YouTube Red or similar you will need FaceRig Studio).
FaceRig Pro does not add new assets, technical features or improvements over FaceRig Classic. For more details, take a look at the Pro DLC description.
- the FaceRig Studio version allows commercial use, as well as access to the expression mo-cap data. For more details about this version visit https://facerig.com
We believe that FaceRig will entertain many gamers, modelling and animation artists, members of various fandoms, streamers, web casters, YouTubers and their audiences.
For now we're focusing on tracking and rendering the portrait with its expressions, but we aim to do more in the future. The FaceRig end goal is to provide a full-featured real-time digital actor set for home use.

Key Features:
- Real-time head and expression tracking in an input video stream (aided by audio analysis).
- Combining tracked data with additional virtual puppeteering input.
- On-the-fly animation retargeting of the tracked data, applying the animation to a user-selected 3D model, with audio processing (voice alteration).
- Rendering and lighting the animated model in real time, against a user-selected background.
- Encoding the rendered video and sending it further as output from a virtual webcam (it basically intercepts webcam input, and swaps the images captured by your real webcam with the fantastic content before sending it further).
- An interface for tuning the parameters of all the operations above.
- Customizable avatars.
- Open creation platform: you can make your own models (outside of FaceRig), import and use them as your avatars. The models need to be created according to a set of published specs.
Numeric export of the tracked expression data (exporting the motion capture data to use in other applications) is a feature reserved for the Studio version.
FaceRig Classic and Pro only output already rendered video and audio.
FaceRig is an indie initiative, and its development so far has been successfully crowd-funded through Indiegogo. A warm thank you goes to our amazing backers for their support and enthusiasm.