Shashank Prasanna – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins
Feed: http://www.open-lab.net/blog/feed/ (last updated 2022-08-21)

Kubernetes For AI Hyperparameter Search Experiments
Shashank Prasanna | Published 2018-12-14 | http://www.open-lab.net/blog/?p=13162

The software industry has recently seen a huge shift in how software deployments are done, thanks to technologies such as containers and orchestrators. While container technologies have been around for some time, credit goes to Docker for making containers mainstream by greatly simplifying the process of creating, managing, and deploying containerized applications. We’re now seeing a similar paradigm shift for…
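A minimal sketch of the idea behind the post, assuming a hyperparameter grid of learning rates and batch sizes: each combination becomes its own containerized training run that an orchestrator such as Kubernetes could schedule in parallel. The image name trainer:latest and the script train.py are hypothetical placeholders, not taken from the original post.

    # Hypothetical sketch: enumerate a hyperparameter grid and emit one
    # containerized training command per combination. "trainer:latest" and
    # "train.py" are placeholder names.
    import itertools

    learning_rates = [0.1, 0.01, 0.001]
    batch_sizes = [32, 64, 128]

    for lr, bs in itertools.product(learning_rates, batch_sizes):
        # Each command could be submitted as an independent container job,
        # letting an orchestrator like Kubernetes run the experiments in parallel.
        print(f"docker run trainer:latest python train.py --lr {lr} --batch-size {bs}")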

Source

RAPIDS Accelerates Data Science End-to-End
Shashank Prasanna | Published 2018-10-15 | http://www.open-lab.net/blog/?p=12361

Today’s data science problems demand a dramatic increase in the scale of data, as well as in the computational power required to process it. Unfortunately, the...
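As a rough illustration of the GPU-accelerated data science workflow RAPIDS targets, the sketch below loads a CSV into GPU memory with cuDF and runs a groupby aggregation. The file name and column names are placeholders assumed for the example.

    # Minimal sketch, assuming the RAPIDS cuDF package is installed and a CSV
    # file "transactions.csv" with "category" and "amount" columns (placeholders).
    import cudf

    df = cudf.read_csv("transactions.csv")              # load data directly into GPU memory
    summary = df.groupby("category")["amount"].mean()   # GPU-accelerated groupby/aggregation
    print(summary.head())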

Source

TensorRT 3: Faster TensorFlow Inference and Volta Support
Shashank Prasanna | Published 2017-12-04 | http://www.open-lab.net/blog/parallelforall/?p=8664

NVIDIA TensorRT is a high-performance deep learning inference optimizer and runtime that delivers low-latency, high-throughput inference for deep learning applications. NVIDIA released TensorRT last year with the goal of accelerating deep learning inference for production deployment. In this post, we’ll introduce TensorRT 3, which improves performance versus previous versions and includes new…
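As a rough sketch of the kind of workflow the post covers, the example below builds a reduced-precision TensorRT engine from an ONNX model using the Python API. It reflects a recent TensorRT release rather than the TensorRT 3 interfaces described in the post, and the model path is a placeholder.

    # Minimal sketch, assuming a recent TensorRT Python package and an ONNX model
    # exported from a trained network; "model.onnx" is a placeholder path.
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError("ONNX parse failed")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # reduced precision for higher throughput

    # Serialize an optimized engine that can be deployed for low-latency inference.
    serialized_engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:
        f.write(serialized_engine)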

Source

Deploying Deep Neural Networks with NVIDIA TensorRT
Shashank Prasanna | Published 2017-04-03 | http://www.open-lab.net/blog/parallelforall/?p=7674
[Figure 1: NVIDIA TensorRT provides 16x higher energy efficiency for neural network inference with...]

Editor’s Note: An updated version of this post, with additional tutorial content, is now available. See “How to Speed Up Deep Learning Using TensorRT”. NVIDIA TensorRT is a high-performance deep learning inference library for production environments. Power efficiency and speed of response are two key metrics for deployed deep learning applications, because they directly affect the user experience…

Source

Deep Learning for Object Detection with DIGITS
Shashank Prasanna | Published 2016-08-11 | http://www.open-lab.net/blog/parallelforall/?p=6931
[Figure 1: A screenshot of DIGITS 4 showing the input image (top) and the final result with...]

Today we’re excited to announce the availability of NVIDIA DIGITS 4. DIGITS 4 introduces a new object detection workflow and DetectNet, a new deep neural network for object detection that enables data scientists and researchers to train models that can detect instances of faces, pedestrians, traffic signs, vehicles and other objects in images. Object detection is one of the most challenging…

Source

Production Deep Learning with NVIDIA GPU Inference Engine
Shashank Prasanna | Published 2016-06-20 | http://www.open-lab.net/blog/parallelforall/?p=6823
[Figure 1: NVIDIA GPU Inference Engine (GIE) provides even higher efficiency and performance for...]

[Update, September 13, 2016: GPU Inference Engine is now TensorRT.] Today at ICML 2016, NVIDIA announced its latest Deep Learning SDK updates, including DIGITS 4, cuDNN 5.1 (CUDA Deep Neural Network Library), and the new GPU Inference Engine. NVIDIA GPU Inference Engine (GIE) is a high-performance deep learning inference solution for production environments. Power efficiency and speed of response…

Source

Deep Learning for Computer Vision with MATLAB and cuDNN
Shashank Prasanna | Published 2015-10-27 | http://www.open-lab.net/blog/parallelforall/?p=6024

Deep learning is becoming ubiquitous. With recent advancements in deep learning algorithms and GPU technology, we are able to solve problems once considered impossible in fields such as computer vision, natural language processing, and robotics. Deep learning uses deep neural networks, which have been around for a few decades; what’s changed in recent years is the availability of large labeled…

Source
