Magnum IO GPUDirect Storage
A Direct Path Between Storage and GPU Memory
As datasets increase in size, the time spent loading data can limit application performance. GPUDirect Storage creates a direct data path between local or remote storage, such as NVMe or NVMe over Fabrics (NVMe-oF), and GPU memory. By enabling a direct memory access (DMA) engine near the network adapter or storage, it moves data into or out of GPU memory without burdening the CPU.
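Applications use this path through the cuFile API that ships with GDS. The sketch below, a minimal example rather than a complete program, reads a file directly into GPU memory; the file path and transfer size are placeholder assumptions, and error checking is abbreviated.

```c
/* Minimal sketch: read a file straight into GPU memory via the cuFile API.
 * Assumes a GDS-enabled system; path and size are illustrative placeholders. */
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void) {
    const size_t size = 1 << 20;                 /* 1 MiB, arbitrary */
    cuFileDriverOpen();                          /* initialize the GDS driver */

    int fd = open("/mnt/nvme/data.bin", O_RDONLY | O_DIRECT);
    CUfileDescr_t descr = {0};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t fh;
    cuFileHandleRegister(&fh, &descr);           /* register the file with GDS */

    void *devPtr = NULL;
    cudaMalloc(&devPtr, size);                   /* destination in GPU memory */
    cuFileBufRegister(devPtr, size, 0);          /* optional: pre-register buffer */

    /* DMA from storage into GPU memory; no CPU bounce buffer */
    ssize_t n = cuFileRead(fh, devPtr, size, 0 /*file_offset*/, 0 /*devPtr_offset*/);
    printf("read %zd bytes\n", n);

    cuFileBufDeregister(devPtr);
    cuFileHandleDeregister(fh);
    cudaFree(devPtr);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Registering the destination buffer with cuFileBufRegister is optional but avoids per-call registration overhead on repeated transfers.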
Key Features of v1.13.1
The following features have been added in v1.13.1:
- Support for NVMe P2PDMA, which eliminates the need for custom MOFED patches and the nvidia-fs.ko driver
- Added support for Amazon FSx for Lustre
- Improved unregistered buffer IO performance
- P2P mode enabled for DDN EXAScaler on Grace Hopper system
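Which of these data paths is actually active on a given system can be checked with the gdscheck utility that is installed with GDS; the path below assumes a default CUDA toolkit layout and may differ on your system.

```shell
# Print GDS platform and configuration support (drivers, filesystems, P2P status).
# Path assumes the default CUDA toolkit install location.
/usr/local/cuda/gds/tools/gdscheck -p
```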
Software Download
GPUDirect Storage v1.13.1 Release
NVIDIA Magnum IO GPUDirect? Storage (GDS) is now part of CUDA.
See https://docs.nvidia.com/gpudirect-storage/index.html for more information.
Resources
- Read the blog: Accelerating IO in the modern data center: Magnum IO storage partnerships
- NVIDIA Magnum IO SDK
- Read the blog: Optimizing data movement in GPU applications with the NVIDIA Magnum IO developer environment
- Read the blog: Accelerating IO in the modern data center: Magnum IO architecture
- Watch the webinar: NVIDIA GPUDirect Storage: Accelerating the data path to the GPU
- NVIDIA-Certified Systems configuration guide
- NVIDIA-Certified Systems
- Contact us at gpudirectstorageext@nvidia.com