    Get Started With NVIDIA ACE

    NVIDIA ACE is a suite of digital human technologies that bring game characters and digital assistants to life with generative AI. NVIDIA ACE encompasses technology for every aspect of the digital human—from speech and translation, to vision and intelligence, to realistic animation and behavior, to lifelike appearance.


    Cloud Deployment: NVIDIA NIM

    NVIDIA NIM™ microservices are easy-to-use inference microservices for LLMs, VLMs, ALMs, speech, animation, and more that accelerate the deployment of AI on any cloud or data center. Try out the latest NVIDIA NIM microservices today at ai.nvidia.com.
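    As a minimal sketch of calling a hosted NIM microservice: the endpoints expose an OpenAI-compatible REST API. The endpoint URL and model name below are illustrative assumptions; substitute the values published for the microservice you actually deploy.

```python
# Sketch: calling a NIM endpoint over its OpenAI-compatible chat-completions
# API, using only the standard library. The URL and model name are example
# values, not the only options.
import json
import urllib.request

API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "meta/llama-3.1-8b-instruct") -> dict:
    """Assemble the JSON payload the chat-completions endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
        "max_tokens": 256,
    }

def ask_nim(prompt: str, api_key: str) -> str:
    """POST the payload and return the first completion's text."""
    request = urllib.request.Request(
        API_URL,
        data=json.dumps(build_chat_request(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    return body["choices"][0]["message"]["content"]
```

    Only `build_chat_request` runs locally; `ask_nim` performs the network call and requires a valid API key from ai.nvidia.com.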

    ACE Agent

    For dialog management and RAG workflows.

    Animation Graph

    For animation blending, playback and control.

    Audio2Face-2D

    Animate a person’s portrait photo using audio with support for lip sync, blinking and head pose animation.

    Audio2Face-3D

    For audio-to-3D facial animation and lip sync. On-device support coming soon.

    NeMo Retriever

    Seamlessly connect custom models to diverse business data and deliver highly accurate responses for AI applications using RAG.

    Omniverse RTX Renderer

    For streaming ultra-realistic visuals to any device.

    Riva Automatic Speech Recognition

    For speech-to-text workloads.

    Riva Neural Machine Translation

    For text translation in up to 32 languages.

    Riva Text-to-Speech

    For text-to-speech synthesis. On-device inference coming soon.

    Unreal Engine 5 Renderer

    This microservice allows you to use Unreal Engine 5.4 to customize and render your avatars.

    VoiceFont

    Create a unique voice with 30 seconds of reference data.


    Cloud Deployment: NVIDIA AI Blueprints

    The Digital Human for Customer Service NVIDIA AI Blueprint is a cutting-edge solution that allows enterprises to create 2D and 3D animated digital human avatars, enhancing user engagement beyond traditional customer service methods.

    Digital Human Customer Service AI Blueprint

    Powered by NVIDIA ACE, Omniverse RTX™, Audio2Face™, and Llama 3.1 NIM microservices, this blueprint integrates seamlessly with existing generative AI applications built using RAG.


    PC Deployment: In-Game Inferencing (IGI) SDK

    The IGI SDK streamlines AI model deployment and integration for PC application developers. The SDK preconfigures the PC with the necessary AI models, engines, and dependencies. It orchestrates in-process AI inference for C++ games and applications and supports all major inference backends across different hardware accelerators (GPU, NPU, CPU).

    Learn More

    PC Deployment: ACE On-Device Models

    ACE on-device models enable agentic workflows for autonomous game characters. These characters can perceive their environment, understand multimodal inputs, strategically plan a set of actions, and execute them in real time, providing dynamic experiences for players.

    Audio2Face-3D (Authoring)

    Generate high-quality, audio-driven facial animation offline using the Autodesk Maya reference application. Utilize the Audio2Face-3D service through a simple, streamlined interface or dive into the source code to develop your own custom clients.

    Audio2Face-3D (Runtime)

    Use AI to convert streaming audio to facial blendshapes for real-time lip sync and facial animation. Use our Audio2Face-3D plugin for Unreal Engine 5 alongside a configuration sample to enhance your MetaHuman. Audio2Face-3D 3.0 is coming in April with on-device Unreal Engine 5 support.

    E5 Large Unsupervised

    Embedding model for retrieval-augmented generation (RAG). Provides context and memory for autonomous agents.

    Llama-3.2-3B-Instruct

    Agentic small language model that enables better role-play, retrieval-augmented generation (RAG), and function-calling capabilities.

    Mistral-7B-Instruct

    Agentic small language model that enables better role-play, retrieval-augmented generation (RAG), and function-calling capabilities. This model works across any GPU architecture that supports ONNX Runtime and DirectML.
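    As a hedged sketch of what "supports ONNX Runtime and DirectML" looks like in practice: loading a model with the DirectML execution provider preferred and a CPU fallback. The provider identifiers are the documented ONNX Runtime names; the model path is a placeholder.

```python
# Sketch: execution-provider selection for an on-device ONNX model.
# "model.onnx" is a placeholder path, not a real artifact name.

def pick_providers(available: list[str]) -> list[str]:
    """Prefer DirectML (GPU) when present; always keep the CPU fallback."""
    preferred = ["DmlExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available] or ["CPUExecutionProvider"]

def load_session(model_path: str = "model.onnx"):
    """Create an inference session on the best available provider."""
    import onnxruntime as ort  # third-party: pip install onnxruntime-directml
    providers = pick_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)
```

    ONNX Runtime walks the provider list in order, so placing `DmlExecutionProvider` first uses the GPU when the DirectML build is installed and falls back to CPU otherwise.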

    Mistral-Nemo-Minitron Family

    Agentic small language models that enable better role-play, retrieval-augmented generation (RAG), and function-calling capabilities. They come in 8B, 4B, and 2B parameter sizes to fit your VRAM and performance requirements. The on-device models run on NVIDIA GPUs and any CPU.

    Nemovision-4B-Instruct

    Agentic multimodal small language model that gives game characters visual understanding of the real world and on-screen actions for more context-aware responses.

    Access from Cloud

    On-Device via NVIGI (coming soon)

    Documentation

    Nemotron-Mini-4B-Instruct

    Agentic small language model that enables better role-play, retrieval-augmented generation (RAG), and function-calling capabilities.

    Riva ASR

    Transcribes human speech to text in real time.

    Access from Cloud

    On-Device via NVIGI (coming soon)

    Documentation

    Whisper ASR

    Takes an audio stream as input and returns a text transcript. Compatible with NVIDIA GPUs and any CPU.
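    As a rough illustration of that audio-in, transcript-out contract, here is the open-source `whisper` Python package; this is an assumption for illustration, since the ACE packaging of Whisper ships through the IGI SDK rather than this package.

```python
# Sketch: transcribing an audio file with the open-source openai-whisper
# package and rendering its timestamped segments. The model size ("base")
# and file path are example choices.

def format_segments(segments: list[dict]) -> str:
    """Render Whisper's segment list as timestamped transcript lines."""
    return "\n".join(
        f"[{seg['start']:6.2f}s -> {seg['end']:6.2f}s] {seg['text'].strip()}"
        for seg in segments
    )

def transcribe(path: str) -> str:
    """Run Whisper on an audio file and return a formatted transcript."""
    import whisper  # third-party: pip install openai-whisper
    model = whisper.load_model("base")   # downloads weights on first use
    result = model.transcribe(path)      # result["segments"] carries timings
    return format_segments(result["segments"])
```

    `transcribe` needs the model weights and an audio file on disk; `format_segments` is a pure function over the segment dictionaries Whisper returns.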


    PC Deployment: Engine Plugins and Samples

    Plugins and samples for Unreal Engine developers looking to bring their MetaHumans to life with generative AI on RTX PCs.

    NVIDIA ACE Unreal Engine 5 Reference

    The NVIDIA Unreal Engine 5 reference showcases NPCs that interact through natural language. This workflow includes an on-device Audio2Face plugin for Unreal Engine 5 alongside a configuration sample.


    ACE Tools

    Technologies for customization and simple deployment.

    Autodesk Maya ACE

    Streamline facial animation in Autodesk Maya or dive into the source code to develop your own plugin for the digital content creation tool of your choice.

    Avatar Configurator

     Build and configure custom characters with base, hair, and clothes.

    Unified Cloud Services Tools

    Simplify deployment of multimodal applications.


    ACE Examples

    Get started with ACE microservices below. These video tutorials provide tips for common digital human use cases.

    Text to Gesture

    Create Sentiment Analysis and Send Audio to A2X and AnimGraph (00:44)

    Connect All Microservices in UCF (6:34)

    Reallusion Character

    Exporting Character From Reallusion Character Creator and Preparing Character in Audio2Face (11:07)

    Setup and Streaming Through a Reference App and Fine Tuning (14:41)

    Stylised Avatar

    Making and Animating a Stylised 3D Avatar From Text Inputs (1:43)

    Make Vincent Rig Compatible For UE5 and A2X Livelink (5:35)

    Make Vincent Blueprint Receive A2X Animation Data (11:53)

    Create Python App to Generate Audio From Text and Animate Vincent (8:17)
