PCI Express (PCIe) – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins
Feed: http://www.open-lab.net/blog/feed/ (last updated 2025-03-26)

Charu Chaubal – Choosing a Server for Deep Learning Training
http://www.open-lab.net/blog/?p=46092
Published 2022-04-06, updated 2023-06-12

Deep learning has come to mean the most common implementation of a neural network for performing many AI tasks. Data scientists use software frameworks such as TensorFlow and PyTorch to develop and run DL algorithms. By this point, there has been a lot written about deep learning, and you can find more detailed information from many sources. For a good high-level summary, see What's the…

Source

Adam Thompson – GPUDirect Storage: A Direct Path Between Storage and GPU Memory
http://www.open-lab.net/blog/?p=15376
Published 2019-08-06, updated 2022-08-21

As AI and HPC datasets continue to increase in size, the time spent loading data for a given application begins to place a strain on the total application's performance. When considering end-to-end application performance, fast GPUs are increasingly starved by slow I/O. I/O, the process of loading data from storage to GPUs for processing, has historically been controlled by the CPU.
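GPUDirect Storage addresses this by letting storage DMA directly into GPU memory through the cuFile API. Below is a minimal sketch (not taken from the post) of that path; it assumes the cuFile library is installed, uses an illustrative file name and buffer size, and omits error handling.

```c
/* Minimal sketch: read a file straight into GPU memory with the GPUDirect
 * Storage cuFile API. File name, size, and lack of error handling are for
 * illustration only. */
#define _GNU_SOURCE
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const size_t size = 1 << 20;                  /* read 1 MiB */
    int fd = open("data.bin", O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);                    /* destination buffer in GPU memory */

    cuFileDriverOpen();                           /* initialize the GDS driver */

    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);        /* register the file with cuFile */

    /* DMA the first `size` bytes of the file directly into devPtr,
     * with no bounce buffer in CPU memory. */
    ssize_t n = cuFileRead(handle, devPtr, size, 0, 0);
    printf("read %zd bytes into GPU memory\n", n);

    cuFileHandleDeregister(handle);
    cuFileDriverClose();
    cudaFree(devPtr);
    close(fd);
    return 0;
}
```

Compared with a conventional read() into host memory followed by cudaMemcpy, this removes the CPU bounce buffer from the data path.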

Source

Mark Harris – How NVLink Will Enable Faster, Easier Multi-GPU Computing
http://www.open-lab.net/blog/parallelforall/?p=4058
Published 2014-11-14, updated 2022-08-21

Accelerated systems have become the new standard for high performance computing (HPC) as GPUs continue to raise the bar for both performance and energy efficiency. In 2012, Oak Ridge National Laboratory announced what was to become the world's fastest supercomputer, Titan, equipped with one NVIDIA GPU per CPU – over 18,000 GPU accelerators. Titan established records not only in absolute…
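One concrete thing a faster GPU-to-GPU link buys is cheaper peer-to-peer transfers. The sketch below (not from the post) uses the standard CUDA peer-access calls to copy a buffer directly from GPU 0 to GPU 1; on NVLink-connected GPUs that traffic rides NVLink rather than PCIe. It assumes a machine with at least two peer-capable GPUs, and the buffer size is illustrative.

```c
/* Minimal sketch: direct GPU 0 -> GPU 1 copy via CUDA peer-to-peer access.
 * Assumes two peer-capable GPUs; error handling is omitted. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    const size_t bytes = 64u << 20;               /* 64 MiB */
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("GPUs 0 and 1 cannot access each other\n"); return 0; }

    void *src = NULL, *dst = NULL;
    cudaSetDevice(0);
    cudaMalloc(&src, bytes);
    cudaDeviceEnablePeerAccess(1, 0);             /* device 0 may access device 1 */

    cudaSetDevice(1);
    cudaMalloc(&dst, bytes);
    cudaDeviceEnablePeerAccess(0, 0);             /* device 1 may access device 0 */

    /* Copy directly between the two GPUs, without staging through host memory. */
    cudaMemcpyPeer(dst, 1, src, 0, bytes);
    cudaDeviceSynchronize();
    printf("copied %zu bytes from GPU 0 to GPU 1\n", bytes);

    cudaFree(dst);
    cudaSetDevice(0);
    cudaFree(src);
    return 0;
}
```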

Source

Denis Foley – NVLink, Pascal and Stacked Memory: Feeding the Appetite for Big Data
http://www.open-lab.net/blog/parallelforall/?p=3097
Published 2014-03-25, updated 2022-08-21

For more recent info on NVLink, check out the post, "How NVLink Will Enable Faster, Easier Multi-GPU Computing". NVIDIA GPU accelerators have emerged in High-Performance Computing as an energy-efficient way to provide significant compute capability. The Green500 supercomputer list makes this clear: the top 10 supercomputers on the list feature NVIDIA GPUs. Today at the 2014 GPU Technology…
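As a rough illustration of the kind of workload a fast CPU-GPU link feeds, CUDA unified memory lets a single allocation be touched from both host and device code, with data moving across the interconnect on demand. The sketch below is generic CUDA, not code from the post; the kernel and sizes are illustrative only.

```c
/* Minimal sketch of CUDA unified memory: one allocation visible to both CPU
 * and GPU, with data moving over the CPU<->GPU link on demand. */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void scale(float *x, int n, float a) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main(void) {
    const int n = 1 << 20;
    float *x = NULL;
    cudaMallocManaged(&x, n * sizeof(float));     /* visible to host and device */

    for (int i = 0; i < n; ++i) x[i] = 1.0f;      /* initialize on the CPU */

    scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f);  /* data migrates to the GPU */
    cudaDeviceSynchronize();

    printf("x[0] = %f\n", x[0]);                  /* and back on CPU access */
    cudaFree(x);
    return 0;
}
```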

Source
