Vision language models (VLMs) have transformed video analytics by enabling broader perception and richer contextual understanding compared to traditional computer vision (CV) models. However, challenges like limited context length and lack of audio transcription still exist, restricting how much video a VLM can process at a time. To overcome this, the NVIDIA AI Blueprint for video search and…
As AI workloads grow in complexity and scale—from large language models (LLMs) to agentic AI reasoning and physical AI—the demand for faster, more scalable compute infrastructure has never been greater. Meeting these demands requires rethinking system architecture from the ground up. NVIDIA is advancing platform architecture with NVIDIA ConnectX-8 SuperNICs, the industry’s first SuperNIC to…
The integration of NVIDIA NIM microservices into Azure AI Foundry marks a major leap forward in enterprise AI development. By combining NIM microservices with Azure’s scalable, secure infrastructure, organizations can now deploy powerful, ready-to-use AI models more efficiently than ever before. NIM microservices are containerized for GPU-accelerated inferencing for pretrained and customized…
NVIDIA Agent Intelligence toolkit is an open-source library for efficiently connecting and optimizing teams of AI agents. It focuses on enabling developers to quickly build, evaluate, profile, and accelerate complex agentic AI workflows—systems in which multiple AI agents collaborate to perform tasks. The Agent Intelligence toolkit acts as a unifying framework that integrates existing…
It’s 10 p.m. on a Tuesday when the phone rings at the Sapochnick Law Firm, a specialized law practice in San Diego, California. The caller, a client of the firm, is anxious. They received an important letter containing potentially life-changing news and have urgent questions for their lawyer. The client quickly realizes the Sapochnick team likely left the office hours ago…
The age of passive AI is over. A new era is beginning, where AI doesn’t just respond—it thinks, plans, and acts. The rapid advancement of large language models (LLMs) has unlocked the potential of agentic AI systems, enabling the automation of tedious tasks across many fields, including cybersecurity. Traditionally, AI applications in cybersecurity have focused primarily on detecting…
This is the first post in the LLM Benchmarking series, which shows how to use GenAI-Perf to benchmark the Meta Llama 3 model when deployed with NVIDIA NIM. Researchers from the University College London (UCL) Deciding, Acting, and Reasoning with Knowledge (DARK) Lab leverage NVIDIA NIM microservices in their new game-based benchmark suite, Benchmarking Agentic LLM and VLM Reasoning On Games…
Enterprise data is constantly changing. This presents significant challenges for maintaining AI system accuracy over time. As organizations increasingly rely on agentic AI systems to optimize business processes, keeping these systems aligned with evolving business needs and new data becomes crucial. This post dives into how to build an iteration of a data flywheel using NVIDIA NeMo…
Build a high-performance agentic AI system using the open-source NVIDIA Agent Intelligence toolkit — contest runs May 12 to May 23.
AI is no longer just about generating text or images—it’s about deep reasoning, detailed problem-solving, and powerful adaptability for real-world applications in business and in financial, customer, and healthcare services. Available today, the latest Llama Nemotron Ultra reasoning model from NVIDIA delivers leading accuracy among open-source models across intelligence and coding benchmarks…
As more enterprises integrate LLMs into their applications, they face a critical challenge: LLMs can generate plausible but incorrect responses, known as hallucinations. AI guardrails—or safeguarding mechanisms enforced in AI models and applications—are a popular technique to ensure the reliability of AI applications. This post demonstrates how to build safer…
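The guardrail pattern described above can be sketched in a few lines. This is a generic illustration, not the NeMo Guardrails API: the blocked-term policy and the `apply_output_rail` helper are assumptions made up for this example, standing in for a real content-safety check that screens a model response before it reaches the user.

```python
# Generic output-guardrail sketch (illustrative; not the NeMo Guardrails API):
# screen a model response against a simple policy before returning it,
# falling back to a safe refusal when the policy is violated.

BLOCKED_TERMS = {"ssn", "credit card number"}  # hypothetical policy terms

def apply_output_rail(response: str) -> str:
    """Return the response unchanged, or a refusal if it violates the policy."""
    lowered = response.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "I can't share that information."
    return response
```

Real guardrail frameworks layer many such checks (content safety, topic control, jailbreak detection) on both inputs and outputs, often backed by classifier models rather than keyword lists.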
Industrial enterprises are embracing physical AI and autonomous systems to transform their operations. This involves deploying heterogeneous robot fleets that include mobile robots, humanoid assistants, intelligent cameras, and AI agents throughout factories and warehouses. To harness the full potential of these physical AI-enabled systems, companies rely on digital twins of their facilities…
Since the release of ChatGPT in November 2022, the capabilities of large language models (LLMs) have surged, and the number of available models has grown exponentially. With this expansion, LLMs now vary widely in cost, performance, and specialization. For example, straightforward tasks like text summarization can be efficiently handled by smaller, general-purpose models. In contrast…
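The cost-versus-capability tradeoff above is commonly handled with a model router. The sketch below is a minimal, hypothetical illustration: the model names and the keyword heuristic in `classify_task` are assumptions for demonstration only; production routers typically use a trained classifier or an LLM judge instead.

```python
# Hypothetical model-routing sketch: send simple tasks to a cheap model
# and complex reasoning tasks to a larger, more capable one.

SMALL_MODEL = "small-general-model"    # cheap; fine for summarization, etc.
LARGE_MODEL = "large-reasoning-model"  # expensive; for multi-step reasoning

def classify_task(prompt: str) -> str:
    """Rough keyword heuristic; a real router would use a trained classifier."""
    complex_markers = ("prove", "derive", "multi-step", "plan", "debug")
    if any(marker in prompt.lower() for marker in complex_markers):
        return "complex"
    return "simple"

def route(prompt: str) -> str:
    """Return the model to use for this prompt."""
    return LARGE_MODEL if classify_task(prompt) == "complex" else SMALL_MODEL
```

The key design choice is that routing happens before inference, so the expensive model is only paid for when the task appears to need it.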
With emerging use cases such as digital humans, agents, podcasts, images, and video generation, generative AI is changing the way we interact with PCs. This paradigm shift calls for new ways of interfacing with and programming generative AI models. However, getting started can be daunting for PC developers and AI enthusiasts. Today, NVIDIA released a suite of NVIDIA NIM microservices on…
The growing volume and complexity of medical data—and the pressing need for early disease diagnosis and improved healthcare efficiency—are driving unprecedented advancements in medical AI. Among the most transformative innovations in this field are multimodal AI models that simultaneously process text, images, and video. These models offer a more comprehensive understanding of patient data than…
With the release of NVIDIA Agent Intelligence toolkit—an open-source library for connecting and optimizing teams of AI agents—developers, professionals, and researchers can create their own agentic AI applications. This tutorial shows you how to develop apps in the Agent Intelligence toolkit through an example of AI code generation. We build a test-driven coding agent using LangGraph and reasoning…
As agentic AI systems evolve and become essential for optimizing business processes, it is crucial for developers to update them regularly to stay aligned with ever-changing business and user needs. Continuously refining these agents with AI and human feedback ensures that they remain effective and relevant. NVIDIA NeMo microservices is a fully accelerated, enterprise-grade solution designed…
NVIDIA announced the release of NVIDIA Dynamo today at GTC 2025. NVIDIA Dynamo is a high-throughput, low-latency open-source inference serving framework for deploying generative AI and reasoning models in large-scale distributed environments. The framework boosts the number of requests served by up to 30x when running the open-source DeepSeek-R1 models on NVIDIA Blackwell.
With the recent advancements in generative AI and vision foundation models, VLMs present a new wave of visual computing wherein the models are capable of highly sophisticated perception and deep contextual understanding. These intelligent solutions offer a promising means of enhancing semantic comprehension in XR settings. By integrating VLMs, developers can significantly improve how XR…
Learn from and connect with leading AI developers building the next generation of AI agents.
Safeguarding AI agents and other conversational AI applications to ensure safe, on-brand, and reliable behavior is essential for enterprises. NVIDIA NeMo Guardrails offers robust protection with AI guardrails for content safety, topic control, jailbreak detection, and more, along with tools to evaluate and optimize guardrail performance. In this post, we explore techniques for measuring and optimizing your AI…
AI agents are transforming business operations by automating processes, optimizing decision-making, and streamlining actions. Their effectiveness hinges on expert reasoning, enabling smarter planning and efficient execution. Agentic AI applications could benefit from the capabilities of models such as DeepSeek-R1. Built for solving problems that require advanced AI reasoning…
Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size, they are not practical given the resource constraints that many companies face. Small language models (SLMs) bridge quality and cost with a smaller resource footprint. SLMs are a subset of language models that tend to…
Vision language models (VLMs) are evolving at a breakneck speed. In 2020, the first VLMs revolutionized the generative AI landscape by bringing visual understanding to large language models (LLMs) through the use of a vision encoder. These initial VLMs were limited in their abilities, only able to understand text and single image inputs. Fast-forward a few years and VLMs are now capable of…
Chip and hardware design presents numerous challenges stemming from its complexity and advancing technologies. These challenges result in longer turnaround time (TAT) for optimizing performance, power, area, and cost (PPAC) during synthesis, verification, physical design, and reliability loops. Large language models (LLMs) have shown a remarkable capacity to comprehend and generate natural…
Explore visually perceptive AI agents, the latest vision AI technologies, hands-on training, and inspiring deployments.
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long-thinking, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then selecting the best one…
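The sample-then-select idea above can be sketched as a best-of-N loop. This is a minimal illustration under stated assumptions: `sample_answer` stands in for a stochastic model call and `score_answer` for a verifier or reward model; both are made up for this example, and the arithmetic task is chosen so the verifier is exact.

```python
# Best-of-N sketch of inference-time scaling: spend extra compute by sampling
# several candidate answers, then keep the one the verifier scores highest.
import random

def sample_answer(question: str, rng: random.Random) -> int:
    """Stand-in for a noisy model: answers 7 * 8 with some random error."""
    return 56 + rng.choice([-2, -1, 0, 1, 2])

def score_answer(question: str, answer: int) -> float:
    """Stand-in verifier: rewards proximity to the true answer, 56."""
    return 1.0 if answer == 56 else -abs(answer - 56)

def best_of_n(question: str, n: int, seed: int = 0) -> int:
    """Sample n candidates and return the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [sample_answer(question, rng) for _ in range(n)]
    return max(candidates, key=lambda a: score_answer(question, a))
```

Raising `n` trades inference compute for accuracy: with more samples, the chance that at least one candidate scores well grows, which is the core intuition behind test-time scaling.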
At NVIDIA, the Sales Operations team equips the Sales team with the tools and resources needed to bring cutting-edge hardware and software to market. Managing this across NVIDIA’s diverse technology is a complex challenge shared by many enterprises. Through collaboration with our Sales team, we found that they rely on internal and external documentation…
AI agents present a significant opportunity for businesses to scale and elevate customer service and support interactions. By automating routine inquiries and enhancing response times, these agents improve efficiency and customer satisfaction, helping organizations stay competitive. However, alongside these benefits, AI agents come with risks. Large language models (LLMs) are vulnerable to…
This post was originally published July 29, 2024, but has been extensively revised with NVIDIA AI Blueprint information. Traditional video analytics applications and their development workflow are typically built on fixed-function, limited models that are designed to detect and identify only a select set of predefined objects. With generative AI, NVIDIA NIM microservices…
Agentic AI, the next wave of generative AI, is a paradigm shift with the potential to revolutionize industries by enabling AI systems to act autonomously and achieve complex goals. Agentic AI combines the power of large language models (LLMs) with advanced reasoning and planning capabilities, opening a world of possibilities across industries, from healthcare and finance to manufacturing and…
Agentic AI workflows often involve the execution of large language model (LLM)-generated code to perform tasks like creating data visualizations. However, this code should be sanitized and executed in a safe environment to mitigate risks from prompt injection and errors in the returned code. Sanitizing Python with regular expressions and restricted runtimes is insufficient…
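A common baseline for the isolation described above is to run generated code in a separate process with a timeout. The sketch below is illustrative only: it shows the process-isolation pattern, not a complete security boundary; a production sandbox would add OS-level controls (containers, seccomp, no network, resource limits), and the `run_untrusted` helper is a name invented for this example.

```python
# Illustrative sketch: execute untrusted, model-generated Python in a fresh
# interpreter process with a timeout. This limits blast radius but is NOT a
# full sandbox on its own.
import subprocess
import sys

def run_untrusted(code: str, timeout_s: float = 2.0) -> tuple[int, str, str]:
    """Run `code` in a new isolated interpreter; return (rc, stdout, stderr)."""
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env/site
        capture_output=True,
        text=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired on runaway code
    )
    return proc.returncode, proc.stdout, proc.stderr
```

Because the child is a separate process, crashes and infinite loops in the generated code cannot take down the agent itself, and its output can be inspected before being fed back into the workflow.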
AI agents powered by large language models (LLMs) help organizations streamline and reduce manual workloads. These agents use multilevel, iterative reasoning to analyze problems, devise solutions, and execute tasks with various tools. Unlike traditional chatbots, LLM-powered agents automate complex tasks by effectively understanding and processing information. To avoid potential risks in specific…
When interfacing with generative AI applications, users have multiple communication options—text, voice, or digital avatars. Traditional chatbot or copilot applications have text interfaces where users type in queries and receive text-based responses. For hands-free communication, speech AI technologies like automatic speech recognition (ASR) and text-to-speech (TTS) facilitate…
The exponential growth of visual data—ranging from images to PDFs to streaming videos—has made manual review and analysis virtually impossible. Organizations are struggling to transform this data into actionable insights at scale, leading to missed opportunities and increased risks. To solve this challenge, vision-language models (VLMs) are emerging as powerful tools…
AI agents are emerging as the newest way for organizations to increase efficiency, improve productivity, and accelerate innovation. These agents are more advanced than prior AI applications, with the ability to autonomously reason through tasks, call out to other tools, and incorporate both enterprise data and employee knowledge to produce valuable business outcomes. They’re being embedded into…
For any data center, operating large, complex GPU clusters is not for the faint of heart! There is a tremendous amount of complexity. Cooling, power, networking, and even seemingly benign things like fan replacement cycles all must be managed effectively and governed well in accelerated computing data centers. Managing all of this requires an accelerated understanding of the petabytes of telemetry data…
Immerse yourself in NVIDIA technology with our full-day, hands-on technical workshops at our AI Summit in Washington, D.C. on October 7, 2024.
Learn how to build high-performance solutions with NVIDIA visual AI agents that help streamline operations across a range of industries.
NVIDIA NIM, part of NVIDIA AI Enterprise, now supports tool-calling for models like Llama 3.1. It also integrates with LangChain to provide you with a production-ready solution for building agentic workflows. NIM microservices provide the best performance for open-source models such as Llama 3.1 and are available to test for free from NVIDIA API Catalog in LangChain applications.