Siddharth Sharma – NVIDIA Technical Blog News and tutorials for developers, data scientists, and IT admins 2025-03-20T17:01:18Z http://www.open-lab.net/blog/feed/ Siddharth Sharma <![CDATA[NVIDIA cuML Brings Zero Code Change Acceleration to scikit-learn]]> http://www.open-lab.net/blog/?p=97091 2025-03-20T17:01:18Z 2025-03-18T17:42:25Z Scikit-learn, the most widely used ML library, is popular for processing tabular data because of its simple API, diversity of algorithms, and compatibility with...]]>

Scikit-learn, the most widely used ML library, is popular for processing tabular data because of its simple API, diversity of algorithms, and compatibility with popular Python libraries such as pandas and NumPy. NVIDIA cuML now lets data scientists and machine learning engineers keep using familiar scikit-learn APIs and Python libraries while harnessing the power of CUDA on…
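As an illustration, here is a minimal sketch of the kind of plain scikit-learn script the zero-code-change accelerator targets. The dataset, hyperparameters, and script name are invented for the example; the `python -m cuml.accel` launch mode is assumed to be available in a recent cuML release.

```python
# Plain scikit-learn code; the script itself needs no cuML imports.
# Assumption: with a recent cuML installed, the same file can be launched
# GPU-accelerated, unchanged, as:  python -m cuml.accel train_rf.py
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic tabular dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```

Run on a CPU-only machine, this executes on the standard scikit-learn backend; the accelerator mode dispatches supported estimators such as `RandomForestClassifier` to GPU implementations without code changes.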

Source

]]>
Siddharth Sharma <![CDATA[Join the First NVIDIA LLM Developer Day: Elevate Your App-Building Skills]]> http://www.open-lab.net/blog/?p=72618 2023-11-16T19:16:46Z 2023-11-06T21:37:40Z NVIDIA LLM Developer Day is a virtual event providing hands-on guidance for developers exploring and building LLM-based applications and services. You can gain...]]>

NVIDIA LLM Developer Day is a virtual event providing hands-on guidance for developers exploring and building LLM-based applications and services. You can gain an understanding of key technologies, their pros and cons, and explore example applications. The sessions also cover how to create, customize, and deploy applications using managed APIs, self-managed LLMs…

Source

]]>
0
Siddharth Sharma <![CDATA[SDKs Accelerating Industry 5.0, Data Pipelines, Computational Science, and More Featured at NVIDIA GTC 2023]]> http://www.open-lab.net/blog/?p=62370 2024-05-07T19:32:22Z 2023-03-22T16:00:00Z At NVIDIA GTC 2023, NVIDIA unveiled notable updates to its suite of NVIDIA AI software for developers to accelerate computing. The updates reduce costs in...]]>

At NVIDIA GTC 2023, NVIDIA unveiled notable updates to its suite of NVIDIA AI software for developers to accelerate computing. The updates reduce costs in several areas, such as data science workloads with NVIDIA RAPIDS, model analysis with NVIDIA Triton, AI imaging and computer vision with CV-CUDA, and many more. To keep up with the newest SDK advancements from NVIDIA, watch the GTC keynote…

Source

]]>
0
Siddharth Sharma <![CDATA[Explainer: What Is Conversational AI?]]> http://www.open-lab.net/blog/?p=54534 2024-06-05T22:05:49Z 2022-12-05T20:00:00Z Real-time natural language understanding will transform how we interact with intelligent machines and applications.]]>

Real-time natural language understanding will transform how we interact with intelligent machines and applications.

Source

]]>
0
Siddharth Sharma <![CDATA[New SDKs Accelerating AI Research, Computer Vision, Data Science, and More]]> http://www.open-lab.net/blog/?p=54866 2024-05-07T19:33:29Z 2022-09-21T15:20:00Z NVIDIA revealed major updates to its suite of AI software for developers including JAX, NVIDIA CV-CUDA, and NVIDIA RAPIDS. To learn about the latest SDK...]]>

NVIDIA revealed major updates to its suite of AI software for developers including JAX, NVIDIA CV-CUDA, and NVIDIA RAPIDS. To learn about the latest SDK advancements from NVIDIA, watch the keynote from CEO Jensen Huang. Just today at GTC 2022, NVIDIA introduced JAX on NVIDIA AI, the newest addition to its GPU-accelerated deep learning frameworks. JAX is a rapidly growing…

Source

]]>
0
Siddharth Sharma <![CDATA[Build Speech AI in Multiple Languages and Train Large Language Models with the Latest from Riva and NeMo Framework]]> http://www.open-lab.net/blog/?p=45648 2023-06-12T20:54:30Z 2022-03-28T16:00:00Z Major updates to Riva, an SDK for building speech AI applications, and a paid Riva Enterprise offering were announced at NVIDIA GTC 2022 last week. Several key...]]>

Major updates to Riva, an SDK for building speech AI applications, and a paid Riva Enterprise offering were announced at NVIDIA GTC 2022 last week. Several key updates to NeMo, a framework for training large language models, were also announced. Riva offers world-class accuracy for real-time automatic speech recognition (ASR) and text-to-speech (TTS) skills across multiple…

Source

]]>
0
Siddharth Sharma <![CDATA[Major Updates to NVIDIA AI Software Advancing Speech, Recommenders, Inference, and More Announced at NVIDIA GTC 2022]]> http://www.open-lab.net/blog/?p=45446 2023-03-22T01:19:12Z 2022-03-22T17:02:00Z At GTC 2022, NVIDIA announced major updates to its suite of NVIDIA AI software, for developers to build real-time speech AI applications, create high-performing...]]>

At GTC 2022, NVIDIA announced major updates to its suite of NVIDIA AI software for developers to build real-time speech AI applications, create high-performing recommenders at scale, optimize inference in every application, and more. Watch the keynote from CEO Jensen Huang to learn about the latest advancements from NVIDIA. Today, NVIDIA announced Riva 2.0…

Source

]]>
0
Siddharth Sharma <![CDATA[ICYMI: New AI Tools and Technologies Announced at NVIDIA GTC Keynote]]> http://www.open-lab.net/blog/?p=39300 2023-03-22T01:16:48Z 2021-11-09T19:08:00Z At NVIDIA GTC this November, new software tools were announced that help developers build real-time speech applications, optimize inference for a variety of...]]>

At NVIDIA GTC this November, new software tools were announced that help developers build real-time speech applications, optimize inference for a variety of use cases, improve open-source interoperability for recommender systems, and more. Watch the keynote from CEO Jensen Huang to learn about the latest NVIDIA breakthroughs. Today, NVIDIA unveiled a new version of NVIDIA Riva with a…

Source

]]>
0
Siddharth Sharma <![CDATA[NVIDIA Announces Riva Speech AI and Large Language Modeling Software For Enterprise]]> http://www.open-lab.net/blog/?p=39420 2024-07-24T21:26:46Z 2021-11-09T19:06:00Z NVIDIA recently unveiled new breakthroughs in NVIDIA Riva for speech AI, and NVIDIA NeMo for large-scale language modeling (LLM). Riva is a GPU-accelerated...]]>

NVIDIA recently unveiled new breakthroughs in NVIDIA Riva for speech AI and NVIDIA NeMo for large language models (LLMs). Riva is a GPU-accelerated speech AI SDK for enterprises to generate expressive, human-like speech for their brand and virtual assistants. NeMo is an accelerated training framework for speech and NLU that now has the capabilities to develop large-scale language models…

Source

]]>
0
Siddharth Sharma <![CDATA[NVIDIA Releases Updates and New Features in CUDA-X AI Software]]> http://www.open-lab.net/blog/?p=38636 2022-08-21T23:52:53Z 2021-10-14T18:09:14Z NVIDIA CUDA-X AI is a deep learning software stack for researchers and software developers to build high performance GPU-accelerated applications for...]]>

NVIDIA CUDA-X AI is a deep learning software stack for researchers and software developers to build high-performance GPU-accelerated applications for conversational AI, recommendation systems, and computer vision. Learn what’s new in the latest releases of the CUDA-X AI tools and libraries. For more information on NVIDIA’s developer tools, join live webinars, training, and “Connect with the…

Source

]]>
0
Siddharth Sharma <![CDATA[Real-Time Natural Language Processing with BERT Using NVIDIA TensorRT (Updated)]]> http://www.open-lab.net/blog/?p=34688 2023-06-12T21:08:51Z 2021-07-20T13:00:00Z This post was originally published in August 2019 and has been updated for NVIDIA TensorRT 8.0. Large-scale language models (LSLMs) such as BERT, GPT-2, and...]]>

This post was originally published in August 2019 and has been updated for NVIDIA TensorRT 8.0. Join the NVIDIA Triton and NVIDIA TensorRT community to stay current on the latest product updates, bug fixes, content, best practices, and more. Large-scale language models (LSLMs) such as BERT, GPT-2, and XL-Net have brought exciting leaps in accuracy for many natural language processing…

Source

]]>
0
Siddharth Sharma <![CDATA[Speeding Up Deep Learning Inference Using NVIDIA TensorRT (Updated)]]> http://www.open-lab.net/blog/?p=34881 2022-10-10T18:51:45Z 2021-07-20T13:00:00Z This post was updated July 20, 2021 to reflect NVIDIA TensorRT 8.0 updates. NVIDIA TensorRT is an SDK for deep learning inference. TensorRT provides APIs and...]]>

This post was updated July 20, 2021 to reflect NVIDIA TensorRT 8.0 updates. NVIDIA TensorRT is an SDK for deep learning inference. TensorRT provides APIs and parsers to import trained models from all major deep learning frameworks. It then generates optimized runtime engines deployable in the datacenter as well as in automotive and embedded environments. This post provides a simple…

Source

]]>
5
Siddharth Sharma <![CDATA[ICYMI: New AI Tools and Technologies Announced at GTC 2021 Keynote]]> http://www.open-lab.net/blog/?p=30063 2024-10-28T19:06:39Z 2021-04-12T19:39:00Z At GTC 2021, NVIDIA announced new software tools to help developers build optimized conversational AI, recommender, and video solutions. Watch the keynote from...]]>

At GTC 2021, NVIDIA announced new software tools to help developers build optimized conversational AI, recommender, and video solutions. Watch the keynote from CEO Jensen Huang for insights on all of the latest GPU technologies. Today NVIDIA announced major conversational AI capabilities in NVIDIA Riva that will help enterprises build engaging and accurate applications for their…

Source

]]>
0
Siddharth Sharma <![CDATA[Announcing Megatron for Training Trillion Parameter Models and NVIDIA Riva Availability]]> http://www.open-lab.net/blog/?p=30236 2023-12-30T00:45:19Z 2021-04-12T19:38:00Z Conversational AI is opening new ways for enterprises to interact with customers in every industry using applications like real-time transcription, translation,...]]>

Conversational AI is opening new ways for enterprises to interact with customers in every industry using applications like real-time transcription, translation, chatbots, and virtual assistants. Building domain-specific interactive applications requires state-of-the-art models, optimizations for real-time performance, and tools to adapt those models with your data. This week at GTC…

Source

]]>
0
Siddharth Sharma <![CDATA[Speeding Up Deep Learning Inference Using TensorRT]]> http://www.open-lab.net/blog/?p=17026 2022-10-10T18:51:44Z 2020-04-22T00:39:30Z Looking for more? Check out the hands-on DLI training course: Optimization and Deployment of TensorFlow Models with TensorRT. This...]]>

Source

]]>
5
Siddharth Sharma <![CDATA[Real-Time Natural Language Understanding with BERT Using TensorRT]]> http://www.open-lab.net/blog/?p=15432 2022-10-10T18:51:43Z 2019-08-13T13:00:19Z Large scale language models (LSLMs) such as BERT, GPT-2, and XL-Net have brought about exciting leaps in state-of-the-art accuracy for many natural language...]]>

Large-scale language models (LSLMs) such as BERT, GPT-2, and XL-Net have brought about exciting leaps in state-of-the-art accuracy for many natural language understanding (NLU) tasks. Since its release in October 2018, BERT (Bidirectional Encoder Representations from Transformers) remains one of the most popular language models and still delivers state-of-the-art accuracy at the time of writing.

Source

]]>
11
Siddharth Sharma <![CDATA[Object Detection on GPUs in 10 Minutes]]> http://www.open-lab.net/blog/?p=15047 2022-08-21T23:39:32Z 2019-06-26T19:00:39Z Object detection remains the primary driver for applications such as autonomous driving and intelligent video analytics. Object detection applications require...]]>

Object detection remains the primary driver for applications such as autonomous driving and intelligent video analytics. Object detection applications require substantial training using vast datasets to achieve high levels of accuracy. NVIDIA GPUs excel at the parallel compute performance required to train large networks and produce trained models for object detection inference.

Source

]]>
8
Siddharth Sharma <![CDATA[How to Speed Up Deep Learning Inference Using TensorRT]]> http://www.open-lab.net/blog/?p=12738 2022-10-10T18:51:43Z 2018-11-08T15:00:52Z Looking for more? Check out the hands-on DLI training course: Optimization and Deployment of TensorFlow Models with TensorRT...]]>

Source

]]>
18
Siddharth Sharma <![CDATA[Neural Machine Translation Inference with TensorRT 4]]> http://www.open-lab.net/blog/?p=17146 2023-03-14T19:00:03Z 2018-07-18T19:00:00Z Neural machine translation exists across a wide variety consumer applications, including web sites, road signs, generating subtitles in foreign languages, and...]]>

Neural machine translation exists across a wide variety of consumer applications, including websites, road signs, subtitle generation in foreign languages, and more. TensorRT, NVIDIA’s programmable inference accelerator, helps optimize and generate runtime engines for deploying deep learning inference apps to production environments. NVIDIA released TensorRT 4 with new features to accelerate…

Source

]]>
2
Siddharth Sharma <![CDATA[TensorRT 4 Accelerates Neural Machine Translation, Recommenders, and Speech]]> http://www.open-lab.net/blog/?p=10726 2023-03-14T19:00:22Z 2018-06-19T13:00:45Z NVIDIA has released TensorRT 4 at CVPR 2018. This new version of TensorRT, NVIDIA’s powerful inference optimizer and runtime engine provides: New Recurrent...]]>

NVIDIA has released TensorRT 4 at CVPR 2018. This new version of TensorRT, NVIDIA’s powerful inference optimizer and runtime engine, provides new recurrent layers among other features. Additional features include the ability to execute custom neural network layers using FP16 precision and support for the Xavier SoC through NVIDIA DRIVE AI platforms. TensorRT 4 speeds up deep learning inference applications such as neural machine…

Source

]]>
0
Siddharth Sharma <![CDATA[TensorRT Integration Speeds Up TensorFlow Inference]]> http://www.open-lab.net/blog/?p=9984 2022-08-21T23:38:49Z 2018-03-27T17:33:00Z Update, May 9, 2018: TensorFlow v1.7 and above integrates with TensorRT 3.0.4. NVIDIA is working on supporting the integration for a wider set of configurations...]]>

Update, May 9, 2018: TensorFlow v1.7 and above integrates with TensorRT 3.0.4. NVIDIA is working on supporting the integration for a wider set of configurations and versions. We’ll publish updates when these become available. Meanwhile, if you’re using , simply download the TensorRT files for Ubuntu 14.04, not 16.04, no matter what version of Ubuntu you’re running.

Source

]]>
40