    NVIDIA NeMo Guardrails for Developers

    NVIDIA NeMo™ Guardrails is a scalable AI guardrail orchestration platform for safeguarding generative AI applications. With NeMo Guardrails, you can implement and orchestrate multiple AI guardrails to ensure the safety, security, accuracy, and topical relevance of LLM interactions. Extensible and customizable, NeMo Guardrails is easy to use with popular gen AI development frameworks including LangChain and LlamaIndex, along with a growing ecosystem of AI safety models, rails, and observability tools.

    View on GitHub
    Documentation
    Forum


    See NVIDIA NeMo Guardrails in Action

    Implementing AI guardrails to build safe and secure LLM applications.


    How NVIDIA NeMo Guardrails Works

    AI guardrail orchestration to keep LLM applications secure and on track.

    NeMo Guardrails is a scalable platform for orchestrating AI guardrails for LLM applications, including AI guardrails for content safety, topic control, PII detection, RAG enforcement, and jailbreak prevention. NeMo Guardrails leverages Colang for designing flexible dialogue flows and is compatible with popular LLMs and frameworks like LangChain. With its modular, easy-to-implement architecture, NeMo Guardrails ensures safe, reliable, and customizable AI applications, including RAG-enabled AI agents, copilots, and chatbots.
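    As a minimal sketch of this in code (assuming the open-source `nemoguardrails` Python package and a local `./config` directory containing a `config.yml`; the path and the sample message are placeholders), loading a guardrails configuration and generating through the rails looks like this:

```python
from nemoguardrails import LLMRails, RailsConfig

# Load a guardrails configuration (config.yml plus optional Colang flows)
# from a local directory; "./config" is a placeholder path for this sketch.
config = RailsConfig.from_path("./config")

# Wrap the configured LLM with the guardrails runtime.
rails = LLMRails(config)

# Every request and response now passes through the configured rails.
response = rails.generate(messages=[
    {"role": "user", "content": "What can you help me with?"}
])
print(response["content"])
```

    Each call runs the configured input rails, the main LLM, and the output rails in sequence, so unsafe prompts or responses can be blocked or adjusted before they reach the user.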

    Diagram: how NVIDIA NeMo Guardrails supports multiple AI guardrails

    Introductory Blog

    Simplify building trustworthy LLM apps with AI guardrails for safety, security, and control.

    Read Blog

    Documentation

    Explore resources for getting started, such as examples, the user guide, security guidelines, evaluation tools, and more.

    Read Documentation

    Example Configurations

    The example configurations in the GitHub repository showcase various features of NeMo Guardrails, such as using a specific LLM, enabling streaming, and enabling fact-checking; a sketch of what such a configuration looks like follows below.

    Explore Examples
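    As an illustration, the sketch below defines a small configuration inline from Python rather than from a folder; the engine and model values are placeholders, and the exact keys (such as `streaming`) should be checked against the examples for your release:

```python
from nemoguardrails import LLMRails, RailsConfig

# Inline equivalent of a config.yml: select a specific LLM and enable streaming.
# The engine and model names below are illustrative placeholders.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
streaming: True
"""

# A small Colang flow describing how the bot should greet the user.
colang_content = """
define user express greeting
  "hello"
  "hi there"

define bot express greeting
  "Hello! How can I help you today?"

define flow greeting
  user express greeting
  bot express greeting
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)
```

    The same configuration could equally live in a folder of config.yml and .co files, which is how the examples in the repository are typically organized.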

    Content Moderation Blog

    Learn how to add advanced content moderation for safer user interactions by integrating community models such as Meta’s LlamaGuard and AlignScore with NVIDIA NeMo Guardrails; a sketch of the rails-configuration pattern follows below.

    Read Blog
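    As one hedged sketch of the pattern such moderation integrations follow, the configuration below enables the built-in self-check input and output moderation rails. The LlamaGuard and AlignScore integrations described in the blog use their own flow names and model entries, so treat the flow names, prompt wording, and model values here as illustrative assumptions to verify against the documentation:

```python
from nemoguardrails import LLMRails, RailsConfig

# Illustrative config enabling input and output moderation rails.
# "self check input" / "self check output" are standard moderation flows;
# integrations such as Llama Guard register their own, similarly named flows.
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

rails:
  input:
    flows:
      - self check input
  output:
    flows:
      - self check output

prompts:
  - task: self_check_input
    content: |
      Check whether the user message below complies with the content policy.
      User message: "{{ user_input }}"
      Should the message be blocked (Yes or No)?
      Answer:
  - task: self_check_output
    content: |
      Check whether the bot message below complies with the content policy.
      Bot message: "{{ bot_response }}"
      Should the message be blocked (Yes or No)?
      Answer:
"""

config = RailsConfig.from_content(yaml_content=yaml_content)
rails = LLMRails(config)
```

    With rails like these in place, each user message and bot response is checked against the policy prompt before it is allowed through.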

    Ways to Get Started With NVIDIA NeMo Guardrails

    Use the right tools and technologies to safeguard AI applications with the NeMo Guardrails scalable AI guardrail orchestration platform.

    AI guardrails code

    Download Code

    NeMo Guardrails is available as an open-source project on GitHub, giving you access to the latest features and source code for adding AI guardrails to LLM applications.

    Download (GitHub)
    AI guardrails microservice

    Apply

    Request early access to the NeMo Guardrails microservice, which orchestrates AI guardrails for LLMs, ensuring accuracy, appropriateness, and security in LLM applications.

    Apply Now

    NVIDIA NeMo Guardrails Learning Library


    More Resources

    AI guardrails community

    Explore the Community

    AI guardrails training

    Get Training and Certification

    AI guardrails startup

    Accelerate Your Startup


    Ethical AI

    NVIDIA’s platforms and application frameworks enable developers to build a wide array of AI applications. Consider potential algorithmic bias when choosing or creating the models being deployed. Work with the model’s developer to ensure that it meets the requirements for the relevant industry and use case; that the necessary instruction and documentation are provided to understand error rates, confidence intervals, and results; and that the model is being used under the conditions and in the manner intended.

    Stay up to date on the latest generative AI news from NVIDIA.

    Sign Up
