LangChain – NVIDIA Technical Blog News and tutorials for developers, data scientists, and IT admins 2025-05-30T21:51:33Z http://www.open-lab.net/blog/feed/ Zenodia Charpy <![CDATA[Build Your First Human-in-the-Loop AI Agent with NVIDIA NIM]]> http://www.open-lab.net/blog/?p=91339 2024-12-12T19:38:38Z 2024-11-21T22:45:13Z AI agents powered by large language models (LLMs) help organizations streamline and reduce manual workloads. These agents use multilevel, iterative reasoning to...]]>

AI agents powered by large language models (LLMs) help organizations streamline and reduce manual workloads. These agents use multilevel, iterative reasoning to analyze problems, devise solutions, and execute tasks with various tools. Unlike traditional chatbots, LLM-powered agents automate complex tasks by effectively understanding and processing information. To avoid potential risks in specific…
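The plan–approve–execute loop this excerpt describes can be sketched in framework-free Python. Everything here is illustrative: `plan_step` stands in for the LLM planner, and the `approve` callback is the human in the loop.

```python
from typing import Callable, Dict, List

def run_agent(
    plan_step: Callable[[List[str]], str],   # stand-in for the LLM planner
    tools: Dict[str, Callable[[], str]],     # named tools the agent may call
    approve: Callable[[str], bool],          # the human in the loop
    max_iters: int = 5,
) -> List[str]:
    """Plan iteratively; execute a tool only after a human approves it."""
    history: List[str] = []
    for _ in range(max_iters):
        action = plan_step(history)
        if action == "DONE":                 # planner signals completion
            break
        if not approve(action):              # human rejects: record and re-plan
            history.append(f"rejected:{action}")
            continue
        history.append(f"ran:{action}={tools[action]()}")
    return history
```

In a real deployment the approval callback would surface the proposed action to a person (or a policy service) instead of a lambda.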

Source

]]>
20
Xhoni Shollaj <![CDATA[Create a Custom Slackbot LLM Agent with NVIDIA NIM and LangChain]]> http://www.open-lab.net/blog/?p=89825 2025-02-17T05:12:38Z 2024-11-19T17:00:00Z In the dynamic world of modern business, where communication and efficient workflows are crucial for success, AI-powered solutions have become a competitive...]]>

In the dynamic world of modern business, where communication and efficient workflows are crucial for success, AI-powered solutions have become a competitive advantage. AI agents, built on cutting-edge large language models (LLMs) and powered by NVIDIA NIM, provide a seamless way to enhance productivity and information flow. NIM, part of NVIDIA AI Enterprise, is a suite of easy-to-use…

Source

]]>
1
Amit Bleiweiss <![CDATA[Evaluating Medical RAG with NVIDIA AI Endpoints and Ragas]]> http://www.open-lab.net/blog/?p=89625 2024-11-07T23:29:42Z 2024-10-01T16:00:00Z In the rapidly evolving field of medicine, the integration of cutting-edge technologies is crucial for enhancing patient care and advancing research. One such...]]>

In the rapidly evolving field of medicine, the integration of cutting-edge technologies is crucial for enhancing patient care and advancing research. One such innovation is retrieval-augmented generation (RAG), which is transforming how medical information is processed and used. RAG combines the capabilities of large language models (LLMs) with external knowledge retrieval…
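Evaluating a medical RAG pipeline means checking whether answers are grounded in the retrieved evidence. A crude word-overlap proxy sketches the idea; libraries such as Ragas compute a proper faithfulness metric with an LLM judge, so this toy scorer is illustrative only.

```python
import re

def faithfulness_score(answer: str, contexts: list[str]) -> float:
    """Fraction of answer words that also appear in the retrieved contexts.

    A hypothetical, rule-based stand-in for an LLM-judged faithfulness metric.
    """
    words = re.findall(r"[a-z]+", answer.lower())
    support = set(re.findall(r"[a-z]+", " ".join(contexts).lower()))
    if not words:
        return 0.0
    return sum(w in support for w in words) / len(words)
```

A score near 1.0 suggests the answer stays close to the retrieved passages; a low score flags possible hallucination worth a closer look.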

Source

]]>
0
Amit Bleiweiss <![CDATA[Enhancing RAG Pipelines with Re-Ranking]]> http://www.open-lab.net/blog/?p=86037 2024-10-28T21:56:26Z 2024-07-30T16:00:00Z In the rapidly evolving landscape of AI-driven applications, re-ranking has emerged as a pivotal technique to enhance the precision and relevance of enterprise...]]>

In the rapidly evolving landscape of AI-driven applications, re-ranking has emerged as a pivotal technique to enhance the precision and relevance of enterprise search results. By using advanced machine learning algorithms, re-ranking refines initial search outputs to better align with user intent and context, thereby significantly improving the effectiveness of semantic search.
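The re-ranking step described here is easy to sketch: take the candidates from a fast first-stage retriever and re-score them with a more expensive relevance model. The `score` function below is a placeholder for a cross-encoder or a re-ranking service.

```python
from typing import Callable, List, Tuple

def rerank(
    query: str,
    candidates: List[str],
    score: Callable[[str, str], float],  # e.g. a cross-encoder relevance model
    top_n: int = 3,
) -> List[Tuple[str, float]]:
    """Re-score an initial candidate list and return the top_n by relevance."""
    scored = [(doc, score(query, doc)) for doc in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_n]
```

The design point is the two-stage trade-off: the cheap retriever keeps recall high over millions of documents, while the costly scorer only runs on a short list.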

Source

]]>
0
Aditi Bodhankar <![CDATA[Building Safer LLM Apps with LangChain Templates and NVIDIA NeMo Guardrails]]> http://www.open-lab.net/blog/?p=83057 2025-02-04T19:52:06Z 2024-05-31T21:37:43Z An easily deployable reference architecture can help developers get to production faster with custom LLM use cases. LangChain Templates are a new way of...]]>

An easily deployable reference architecture can help developers get to production faster with custom LLM use cases. LangChain Templates are a new way of creating, sharing, maintaining, downloading, and customizing LLM-based agents and chains. The process is straightforward. You create an application project with directories for chains, identify the template you want to work with…

Source

]]>
0
Amit Bleiweiss <![CDATA[Tips for Building a RAG Pipeline with NVIDIA AI LangChain AI Endpoints]]> http://www.open-lab.net/blog/?p=81895 2025-03-11T16:19:32Z 2024-05-08T16:00:00Z Retrieval-augmented generation (RAG) is a technique that combines information retrieval with a set of carefully designed system prompts to provide more...]]>

Retrieval-augmented generation (RAG) is a technique that combines information retrieval with a set of carefully designed system prompts to provide more accurate, up-to-date, and contextually relevant responses from large language models (LLMs). By incorporating data from various sources such as relational databases, unstructured document repositories, internet data streams, and media news feeds…
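The retrieve-then-prompt pattern behind RAG can be sketched in a few lines. This toy version ranks documents by word overlap instead of vector similarity, and `llm` is any callable that takes a prompt; both are stand-ins for real embedding retrieval and a hosted model endpoint.

```python
def answer_with_rag(question: str, corpus: list[str], llm, k: int = 2) -> str:
    """Retrieve the k most-overlapping docs, then ground the prompt in them."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    context = "\n".join(ranked[:k])
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)
```

The carefully designed system prompt is the piece doing the grounding: the model is instructed to answer only from the retrieved context.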

Source

]]>
7
Chintan Patel <![CDATA[New LLM: Snowflake Arctic Model for SQL and Code Generation]]> http://www.open-lab.net/blog/?p=81484 2024-05-07T16:53:04Z 2024-04-27T00:42:50Z Large language models (LLMs) have revolutionized natural language processing (NLP) in recent years, enabling a wide range of applications such as text...]]>

Large language models (LLMs) have revolutionized natural language processing (NLP) in recent years, enabling a wide range of applications such as text summarization, question answering, and natural language generation. Arctic, developed by Snowflake, is a new open LLM designed to achieve high inference performance while maintaining low cost on various NLP tasks. Arctic is…

Source

]]>
0
Chintan Patel <![CDATA[Mistral Large and Mixtral 8x22B LLMs Now Powered by NVIDIA NIM and NVIDIA API]]> http://www.open-lab.net/blog/?p=80850 2024-06-06T14:50:14Z 2024-04-22T17:00:00Z This week's model release features two new NVIDIA AI Foundation models, Mistral Large and Mixtral 8x22B, both developed by Mistral AI. These cutting-edge...]]>

This week's model release features two new NVIDIA AI Foundation models, Mistral Large and Mixtral 8x22B, both developed by Mistral AI. These cutting-edge text-generation AI models are supported by NVIDIA NIM microservices, which provide prebuilt containers powered by NVIDIA inference software that enable developers to reduce deployment times from weeks to minutes. Both models are available through…

Source

]]>
1
Amanda Saunders <![CDATA[Develop Custom Enterprise Generative AI with NVIDIA NeMo]]> http://www.open-lab.net/blog/?p=80360 2025-02-17T05:27:49Z 2024-03-27T20:00:00Z Generative AI is transforming computing, paving new avenues for humans to interact with computers in natural, intuitive ways. For enterprises, the prospect of...]]>

Generative AI is transforming computing, paving new avenues for humans to interact with computers in natural, intuitive ways. For enterprises, the prospect of generative AI is vast. Businesses can tap into their rich datasets to streamline time-consuming tasks, from text summarization and translation to insight prediction and content generation. But they must also navigate adoption challenges.

Source

]]>
0
Jacob Liberman <![CDATA[How to Take a RAG Application from Pilot to Production in Four Steps]]> http://www.open-lab.net/blog/?p=79558 2024-10-28T21:58:37Z 2024-03-18T22:00:00Z Generative AI has the potential to transform every industry. Human workers are already using large language models (LLMs) to explain, reason about, and solve...]]>

Generative AI has the potential to transform every industry. Human workers are already using large language models (LLMs) to explain, reason about, and solve difficult cognitive tasks. Retrieval-augmented generation (RAG) connects LLMs to data, expanding the usefulness of LLMs by giving them access to up-to-date and accurate information. Many enterprises have already started to explore how…

Source

]]>
0
Jess Nguyen <![CDATA[Video: Build a RAG-Powered Chatbot in Five Minutes]]> http://www.open-lab.net/blog/?p=78248 2024-05-02T16:46:56Z 2024-02-27T21:30:00Z Retrieval-augmented generation (RAG) is exploding in popularity as a technique for boosting large language model (LLM) application performance. From highly...]]>

Retrieval-augmented generation (RAG) is exploding in popularity as a technique for boosting large language model (LLM) application performance. From highly accurate question-answering AI chatbots to code-generation copilots, organizations across industries are exploring how RAG can help optimize processes. According to State of AI in Financial Services: 2024 Trends, 55%

Source

]]>
0
Benedikt Schifferer <![CDATA[Evaluating Retriever for Enterprise-Grade RAG]]> http://www.open-lab.net/blog/?p=78222 2024-10-28T21:59:05Z 2024-02-23T19:02:26Z The conversation about designing and evaluating Retrieval-Augmented Generation (RAG) systems is a long, multi-faceted discussion. Even when we look at retrieval...]]>

The conversation about designing and evaluating Retrieval-Augmented Generation (RAG) systems is a long, multi-faceted discussion. Even when we look at retrieval on its own, developers selectively employ many techniques, such as query decomposition, re-writing, building soft filters, and more, to increase the accuracy of their RAG pipelines. While the techniques vary from system to system…
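Query decomposition, one of the techniques named here, splits a compound question into sub-queries that are retrieved separately. Production systems typically ask an LLM to do the split; a rule-based version is enough to show the shape of the transformation.

```python
import re

def decompose_query(question: str) -> list[str]:
    """Split a compound question on coordinating markers into sub-queries.

    A hypothetical rule-based stand-in for LLM-driven query decomposition.
    """
    parts = re.split(r"\band\b|;", question)
    return [p.strip(" ?") + "?" for p in parts if p.strip(" ?")]
```

Each sub-query is then sent to the retriever on its own, and the union of results is passed to the generator.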

Source

]]>
0
Tanay Varshney <![CDATA[Build an LLM-Powered API Agent for Task Execution]]> http://www.open-lab.net/blog/?p=77925 2024-05-02T16:46:58Z 2024-02-21T21:30:00Z Developers have long been building interfaces like web apps to enable users to leverage the core products being built. To learn how to work with data in your...]]>

Developers have long been building interfaces like web apps to enable users to leverage the core products being built. To learn how to work with data in your large language model (LLM) application, see my previous post, Build an LLM-Powered Data Agent for Data Analysis. In this post, I discuss a method to add free-form conversation as another interface with APIs. It works toward a solution that…

Source

]]>
0
Prabhu Ramamoorthy <![CDATA[Accelerating Inference on End-to-End Workflows with H2O.ai and NVIDIA]]> http://www.open-lab.net/blog/?p=75946 2024-11-20T23:02:13Z 2024-01-04T14:00:00Z Data scientists are combining generative AI and predictive analytics to build the next generation of AI applications. In financial services, AI modeling and...]]>

Data scientists are combining generative AI and predictive analytics to build the next generation of AI applications. In financial services, AI modeling and inference can be used for solutions such as alternative data for investment analysis, AI intelligent document automation, and fraud detection in trading, banking, and payments. H2O.ai and NVIDIA are working together to provide an end-to-end…

Source

]]>
1
Hayden Wolff <![CDATA[RAG 101: Retrieval-Augmented Generation Questions Answered]]> http://www.open-lab.net/blog/?p=75743 2024-11-20T23:02:36Z 2023-12-18T19:44:42Z Data scientists, AI engineers, MLOps engineers, and IT infrastructure professionals must consider a variety of factors when designing and deploying a RAG...]]>

Data scientists, AI engineers, MLOps engineers, and IT infrastructure professionals must consider a variety of factors when designing and deploying a RAG pipeline: from core components like the LLM to evaluation approaches. The key point is that RAG is a system, not just a model or set of models. This system consists of several stages, which were discussed at a high level in RAG 101…
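The point that RAG is a system of stages, not a single model, can be made concrete with a tiny pipeline runner: each stage (ingest, retrieve, rerank, generate) is a function, and the system is their ordered composition. The stages used in the test are trivial placeholders.

```python
from typing import Any, Callable, List

def run_pipeline(query: str, stages: List[Callable[[Any], Any]]) -> Any:
    """Thread a query through ordered stages, e.g. retrieve -> rerank -> generate.

    Each stage receives the previous stage's output, so stages can be
    swapped or evaluated independently, which is the operational point of
    treating RAG as a system.
    """
    state: Any = query
    for stage in stages:
        state = stage(state)
    return state
```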

Source

]]>
2
Hayden Wolff <![CDATA[RAG 101: Demystifying Retrieval-Augmented Generation Pipelines]]> http://www.open-lab.net/blog/?p=75493 2024-08-22T21:46:12Z 2023-12-18T19:44:31Z Large language models (LLMs) have impressed the world with their unprecedented capabilities to comprehend and generate human-like responses. Their chat...]]>

Large language models (LLMs) have impressed the world with their unprecedented capabilities to comprehend and generate human-like responses. Their chat functionality provides a fast and natural interaction between humans and large corpora of data. For example, they can summarize and extract highlights from data or replace complex queries such as SQL queries with natural language.

Source

]]>
1
Ike Nnoli <![CDATA[Create Lifelike Avatars with AI Animation and Speech Features in NVIDIA ACE]]> http://www.open-lab.net/blog/?p=74159 2024-11-20T23:02:47Z 2023-12-04T22:00:00Z NVIDIA today unveiled major upgrades to the NVIDIA Avatar Cloud Engine (ACE) suite of technologies, bringing enhanced realism and accessibility to AI-powered...]]>

NVIDIA today unveiled major upgrades to the NVIDIA Avatar Cloud Engine (ACE) suite of technologies, bringing enhanced realism and accessibility to AI-powered avatars and digital humans. These latest animation and speech capabilities enable more natural conversations and emotional expressions. Developers can now easily implement and scale intelligent avatars across applications using new…

Source

]]>
0
Tanay Varshney <![CDATA[Building Your First LLM Agent Application]]> http://www.open-lab.net/blog/?p=74179 2025-01-09T03:33:26Z 2023-11-30T19:12:44Z When building a large language model (LLM) agent application, there are four key components you need: an agent core, a memory module, agent tools, and a...]]>

When building a large language model (LLM) agent application, there are four key components you need: an agent core, a memory module, agent tools, and a planning module. Whether you are designing a question-answering agent, multi-modal agent, or swarm of agents, you can consider many implementation frameworks, from open-source to production-ready. For more information, see Introduction to LLM…
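The four components named in the excerpt map naturally onto a small class: a planning callable, a tool registry, a memory list, and a `step` method acting as the agent core. This is a structural sketch only; the post's actual architecture is richer, and the planner here is a stub for an LLM.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    planner: Callable[[str], str]              # planning module: picks a tool
    tools: Dict[str, Callable[[str], str]]     # agent tools
    memory: List[str] = field(default_factory=list)  # memory module

    def step(self, user_input: str) -> str:
        """Agent core: plan, call the chosen tool, remember the result."""
        tool_name = self.planner(user_input)
        result = self.tools[tool_name](user_input)
        self.memory.append(result)
        return result
```

Keeping the four pieces separate is what lets you swap a rule-based planner for an LLM, or an in-process list for an external memory store, without rewriting the core loop.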

Source

]]>
0
Nigel Nelson <![CDATA[Deploy Large Language Models at the Edge with NVIDIA IGX Orin Developer Kit]]> http://www.open-lab.net/blog/?p=72986 2024-05-02T16:47:03Z 2023-11-15T17:30:00Z As large language models (LLMs) become more powerful and techniques for reducing their computational requirements mature, two compelling questions emerge....]]>

As large language models (LLMs) become more powerful and techniques for reducing their computational requirements mature, two compelling questions emerge. First, what is the most advanced LLM that can be run and deployed at the edge? And second, how can real-world applications leverage these advancements? Running a state-of-the-art open-source LLM like Llama 2 70B, even at reduced FP16…

Source

]]>
0
Yi Dong <![CDATA[Announcing NVIDIA SteerLM: A Simple and Practical Technique to Customize LLMs During Inference]]> http://www.open-lab.net/blog/?p=68954 2024-05-02T16:47:04Z 2023-10-11T14:30:00Z With the advent of large language models (LLMs) such as GPT-3, Megatron-Turing, Chinchilla, PaLM-2, Falcon, and Llama 2, remarkable progress in natural language...]]>

With the advent of large language models (LLMs) such as GPT-3, Megatron-Turing, Chinchilla, PaLM-2, Falcon, and Llama 2, remarkable progress in natural language generation has been made in recent years. However, despite their ability to produce human-like text, foundation LLMs can fail to provide helpful and nuanced responses aligned with user preferences. The current approach to improving…

Source

]]>
0
Phoebe Lee <![CDATA[Power Your Business with NVIDIA AI Enterprise 4.0 for Production-Ready Generative AI]]> http://www.open-lab.net/blog/?p=70509 2024-05-02T16:47:06Z 2023-09-12T21:00:00Z Crossing the chasm and reaching its iPhone moment, generative AI must scale to fulfill exponentially increasing demands. Reliability and uptime are critical for...]]>

Crossing the chasm and reaching its iPhone moment, generative AI must scale to fulfill exponentially increasing demands. Reliability and uptime are critical for building generative AI at the enterprise level, especially when AI is core to conducting business operations. NVIDIA is investing its expertise into building a solution for those enterprises ready to take the leap.

Source

]]>
0
Rich Harang <![CDATA[Securing LLM Systems Against Prompt Injection]]> http://www.open-lab.net/blog/?p=68819 2024-07-08T20:08:30Z 2023-08-03T18:43:12Z Prompt injection is a new attack technique specific to large language models (LLMs) that enables attackers to manipulate the output of the LLM. This attack is...]]>

Prompt injection is a new attack technique specific to large language models (LLMs) that enables attackers to manipulate the output of the LLM. This attack is made more dangerous by the way that LLMs are increasingly being equipped with "plug-ins" for better responding to user requests by accessing up-to-date information, performing complex calculations, and calling on external services through…
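One mitigation pattern for the plug-in risk described here is to screen external content before it is spliced into the prompt. The sketch below uses a naive phrase blocklist; real defenses are much broader (privilege separation, output filtering, classifier-based detection), and the phrase list is illustrative only.

```python
def guard_external_text(
    text: str,
    banned: tuple[str, ...] = ("ignore previous instructions", "system prompt"),
) -> str:
    """Flag plug-in or retrieved content that carries instruction-like phrases
    before it reaches the LLM prompt. A toy blocklist, not a real defense."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in banned):
        return "[blocked: possible prompt injection]"
    return text
```

The deeper fix the post points toward is architectural: treat all plug-in output as untrusted data, never as instructions.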

Source

]]>
0
Ike Nnoli <![CDATA[Generative AI Sparks Life into Virtual Characters with NVIDIA ACE for Games]]> http://www.open-lab.net/blog/?p=65490 2024-11-20T23:04:23Z 2023-05-29T03:30:00Z Generative AI technologies are revolutionizing how games are conceived, produced, and played. Game developers are exploring how these technologies impact 2D and...]]>

Generative AI technologies are revolutionizing how games are conceived, produced, and played. Game developers are exploring how these technologies impact 2D and 3D content-creation pipelines during production. Part of the excitement comes from the ability to create gaming experiences at runtime that would have been impossible using earlier solutions. The creation of non-playable characters…

Source

]]>
0
Annamalai Chockalingam <![CDATA[NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems]]> http://www.open-lab.net/blog/?p=63745 2024-11-20T23:04:35Z 2023-04-25T13:00:00Z Large language models (LLMs) are incredibly powerful and capable of answering complex questions, performing feats of creative writing, developing, debugging...]]>

Large language models (LLMs) are incredibly powerful and capable of answering complex questions, performing feats of creative writing, developing and debugging source code, and so much more. You can build incredibly sophisticated LLM applications by connecting them to external tools, for example reading data from a real-time source, or enabling an LLM to decide what action to take given a user's…

Source

]]>
1