Our educational resources are designed to give you hands-on, practical instruction about using the Jetson platform. With step-by-step videos from our in-house experts, you will be up and running in no time.


    Get started on your AI learning today

    NVIDIA’s Deep Learning Institute (DLI) delivers practical, hands-on training and certification in AI at the edge for developers, educators, students, and lifelong learners. This is a great way to get the critical AI skills you need to thrive and advance in your career. You can even earn certificates to demonstrate your understanding of Jetson and AI when you complete these free, open-source courses. Enroll Now >

    Jetson Generative AI Lab

    The Jetson Generative AI Lab is your gateway to bringing generative AI to the world. Explore tutorials on text generation, text-plus-vision models, image generation, and distillation techniques. Access resources to run these models on NVIDIA Jetson Orin. Experience real-time performance with vision LLMs and the latest one-shot ViTs. Deploy game-changing capabilities locally. Join the generative AI revolution and start today. Try Out Now >



    Two Days to a Demo

    Two Days to a Demo is our introductory series of deep learning tutorials for deploying AI and computer vision to the field with NVIDIA Jetson.


    Metropolis APIs and Microservices on Jetson

    Discover how NVIDIA Metropolis APIs and microservices can accelerate your vision AI applications for the edge on Jetson Orin. Building vision AI applications for the edge can often require long, costly development cycles. A powerful new collection of Metropolis APIs and microservices helps you accelerate the development and deployment of vision AI on Jetson from years to just months.

    JetPack 4.6 Deep Dive and Demo

    Get an in-depth understanding of the features included in JetPack 4.6, including demos on select features. NVIDIA Jetson experts will also join for Q&A to answer your questions. JetPack SDK powers all Jetson modules and developer kits and enables developers to develop and deploy AI applications that are end-to-end accelerated. JetPack 4.6 is the latest production release and includes important features like Image-Based Over-The-Air update, A/B root file system redundancy, a new flashing tool to flash internal or external storage connected to Jetson, and new compute containers for Jetson on NVIDIA GPU Cloud (NGC).

    Accelerate Computer Vision and Image Processing using VPI 1.1

    VPI, the fastest computer vision and image processing library on Jetson, now adds Python support. Accelerate your OpenCV implementation with VPI algorithms, which offer significant speedups on both CPU and GPU. Come and learn how to write the most performant vision pipelines using VPI. We'll cover all the new algorithms in VPI 1.1, included in JetPack 4.6, focusing on the recently added developer preview of Python bindings. Learn how this library gives you an easy and efficient way to use the computing capabilities of Jetson-family devices and NVIDIA dGPUs.

    Protecting AI at the Edge with the Sequitur Labs EmSPARK Security Suite

    With the accelerated deployment of AI and machine learning models at the edge, IoT device security is critical. Security at the device level requires an understanding of silicon, cryptography, and application design. Learn about implementing IoT security on the Jetson platform: the critical elements of a trusted device; how to design, build, and maintain secure devices; and how to protect AI/ML models at the network edge with the EmSPARK Security Suite, including lifecycle management.

    NVIDIA JetPack 4.5 Overview and Feature Demo

    Develop high-performance AI applications on Jetson with end-to-end acceleration with JetPack SDK 4.5, the latest production release supporting all Jetson modules and developer kits. This release features an enhanced secure boot, a new Jetson Nano bootloader, and a new way of flashing Jetson devices using NFS. It also includes the first production release of VPI, the hardware-accelerated Vision Programming Interface. Get a comprehensive overview of the new features in JetPack 4.5 and a live demo for select features. Our Jetson experts answered questions in a Q&A.

    Implementing Computer Vision and Image Processing Solutions with VPI

    Get a comprehensive introduction to the VPI API. You'll learn how to build complete and efficient stereo disparity-estimation pipelines using VPI that run on Jetson-family devices. VPI provides a unified API to both CPU and NVIDIA CUDA algorithm implementations, as well as interoperability between VPI, OpenCV, and CUDA.

    Multimedia API Overview

    This video gives an overview of the Jetson multimedia software architecture, with emphasis on camera, multimedia codec, and scaling functionality to jump start flexible yet powerful application development.

    Develop a V4L2 Sensor Driver

    The video covers camera software architecture, and discusses what it takes to develop a clean and bug-free sensor driver that conforms to the V4L2 media controller framework.

    Introduction to Jetson OTA Update

    This presentation covers Over-the-Air (OTA) Update, which enables you to update NVIDIA Jetson devices and the host computers used for Jetson development.

    Episode 0: Introduction to OpenCV

    Learn to write your first ‘Hello World’ program on Jetson with OpenCV. You’ll learn a simple compilation pipeline with Midnight Commander, cmake, and OpenCV4Tegra’s mat library, as you build for the first time.

    Episode 1: CV Mat Container

    Learn to work with mat, OpenCV’s primary container. You’ll learn memory allocation for a basic image matrix, then test a CUDA image copy with sample grayscale and color images.

    Episode 2: Multimedia I/O

    Learn to manipulate images from various sources: JPG and PNG files, and USB webcams. Run standard filters such as Sobel, then learn to display and output back to file. Implement a rudimentary video playback mechanism for processing and saving sequential frames.

    Episode 3: Basic Operations

    Start with an app that displays an image as a Mat object, then resize or rotate it, or detect Canny edges, and display the result. Then, to ignore the high-frequency edges of the image's feather, blur the image and run the edge detector again. With larger window sizes, the feather's edges disappear, leaving behind only the more significant edges in the input image.

    Episode 4: Feature Detection and Optical Flow

    Take an input MP4 video file (footage from a vehicle crossing the Golden Gate Bridge) and detect corners in a series of sequential frames, then draw small marker circles around the identified features. Watch as these demarcated features are tracked from frame to frame. Then, color the feature markers depending on how far they move frame to frame. This simplistic analysis allows points distant from the camera—which move less—to be demarcated as such.

    Episode 5: Descriptor Matching and Object Detection

    Use features and descriptors to track the car from the first frame as it moves from frame to frame. Store ORB descriptors in a Mat and match the features with those of the reference image as the video plays. Learn to filter out extraneous matches with the RANSAC algorithm. Then multiply points by a homography matrix to create a bounding box around the identified object. The result isn't perfect, but try different filtering techniques and apply optical flow to improve on the sample implementation. Getting good at computer vision requires both parameter-tweaking and experimentation.
