Training Duration: 3 sessions (6 hours per session)
PLEASE NOTE: This is a LIVE INSTRUCTOR-LED training event delivered ONLINE.
Course Description
This course describes how to use the Vitis™ AI development platform in conjunction with DNN algorithms, models, inference and training, and frameworks on cloud and edge computing platforms. The emphasis of this course is on:
- Illustrating the Vitis AI tool flow
- Utilizing the architectural features of the Deep Learning Processor Unit (DPU)
- Optimizing a model using the AI quantizer and AI compiler
- Utilizing the Vitis AI Library to optimize pre-processing and post-processing functions
- Creating a custom platform and application
- Deploying a design
What's New
- All modules: Support for PyTorch added
- All modules: DPU names updated with new naming conventions
- New lab: Vitis AI Library (VART) for Cloud
- New module: Creating a Vitis Embedded Acceleration Platform
Who Should Attend?
Software and hardware developers, AI/ML engineers, data scientists, and anyone who needs to accelerate their software applications using AMD devices
Prerequisites
- Basic knowledge of machine learning concepts
- Neural Networks Explained - Machine Learning Tutorial for Beginners (video)
- How Convolutional Neural Networks Work (video)
- Deep learning frameworks (such as TensorFlow, PyTorch, and Caffe)
- Comfort with the C, C++, or Python programming languages
- Software development flow
- For a comprehensive introduction to deep learning principles, we suggest Practical Deep Learning or Practical Deep Learning Online
Software Tools
- Vitis AI development environment
- Vivado Design Suite
Hardware
Architecture: AMD SoCs and Adaptive SoCs
Skills Gained
After completing this comprehensive training, you will have the necessary skills to:
- Describe AMD machine learning solutions with the Vitis AI development environment
- Describe the supported frameworks, network models, and pre-trained models for cloud and edge applications
- Utilize DNN algorithms, models, inference and training, and frameworks on cloud and edge computing platforms
- Use the Vitis AI quantizer and AI compiler to optimize a trained model
- Use the architectural features of the DPU processing engine to optimize a model for an edge application
- Identify the high-level libraries and APIs that come with the AMD Vitis AI Library
- Create a custom hardware overlay based on application requirements
- Create a custom application using a custom hardware overlay and deploy the design
Course Outline
- Introduction to the Vitis AI Development Environment
Describes the Vitis AI development environment, which consists of the Vitis AI development kit for AI inference on AMD hardware platforms, including both edge devices and Alveo accelerator cards. {Lecture}
- Overview of ML Concepts
Overview of ML concepts such as DNN algorithms, models, inference and training, and frameworks. {Lecture}
- Frameworks Supported by the Vitis AI Development Environment
Discusses the support for many common machine learning frameworks such as Caffe, TensorFlow, and PyTorch. {Lecture}
- Setting Up the Vitis AI Development Environment
Demonstrates the steps to set up a host machine for developing and running AI inference applications on cloud or embedded devices. {Demo}
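As a point of reference, a quick sanity check of a freshly set-up host might look like the Python sketch below. This is a minimal outline, assuming the Vitis AI Docker image with its PyTorch conda environment is in use; the module names are those shipped with Vitis AI, but verify against your installed version.
```python
# Quick check that the Vitis AI PyTorch environment is usable.
# Assumes the Vitis AI Docker image (vitis-ai-pytorch conda env) is active.
import torch

try:
    import pytorch_nndct  # Vitis AI quantizer plug-in for PyTorch
    print("Vitis AI PyTorch quantizer found")
except ImportError:
    print("pytorch_nndct not found; activate the vitis-ai-pytorch environment")

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```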
- AI Optimizer
Describes the AI optimizer, which can prune a trained model by up to 90%. This topic is for advanced users and will be covered in detail in the Advanced ML training course. {Lecture}
- AI Quantizer and AI Compiler
Describes the AI quantizer, which supports model quantization, calibration, and fine-tuning. Also describes the AI compiler tool flow. With these tools, deep learning algorithms can be deployed to the Deep Learning Processor Unit (DPU), which is an efficient hardware platform running on an AMD FPGA or SoC. {Lecture, Lab}
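To give a flavor of the lab flow, post-training quantization of a PyTorch model roughly follows the sketch below. This is a minimal outline, assuming the pytorch_nndct quantizer API shipped with Vitis AI; the torchvision ResNet-18 model, input shape, and the elided calibration/evaluation loops are placeholders.
```python
import torch
from torchvision.models import resnet18
from pytorch_nndct.apis import torch_quantizer  # Vitis AI PyTorch quantizer

# Stand-in model; in the lab this would be your trained float model
model = resnet18()
dummy_input = torch.randn(1, 3, 224, 224)

# 1) Calibration pass: insert fake-quantize nodes and gather activation ranges
quantizer = torch_quantizer("calib", model, (dummy_input,))
quant_model = quantizer.quant_model
# ... run a representative calibration dataset through quant_model here ...
quantizer.export_quant_config()

# 2) Test pass: evaluate the quantized model, then export the deployable model
quantizer = torch_quantizer("test", model, (dummy_input,))
quant_model = quantizer.quant_model
# ... run at least one evaluation batch through quant_model here ...
quantizer.export_xmodel()
```
The exported model is then compiled for the target DPU (for example, with the vai_c_xir command-line compiler) before it can run on the board.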
- AI Profiler and AI Debugger
Describes the AI profiler, which provides layer-by-layer analysis to help identify performance bottlenecks. Also covers debugging the results produced by the DPU. {Lecture}
- Introduction to the Deep Learning Processor Unit (DPU)
Describes the Deep Learning Processor Unit (DPU) and its variants for edge and cloud applications. {Lecture}
- DPUCADX8G Architecture Overview
Overview of the DPUCADX8G architecture, supported CNN operations, and design considerations. {Lecture}
- DPUCZDX8G Architecture Overview
Overview of the DPUCZDX8G architecture, supported CNN operations, DPU data flow, and design considerations. {Lecture}
- Vitis AI Library
Reviews the Vitis AI Library, which is a set of high-level libraries and APIs built for efficient AI inference with the DPU. It provides an easy-to-use and unified interface for encapsulating many efficient and high-quality neural networks. {Lecture, Labs} Note that the edge flow version of the lab is not available in the OnDemand curriculum because an evaluation board is required for the entirety of the lab.
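As an illustration of the runtime side of these labs, a DPU inference call through the VART Python API typically looks like the following sketch. It is a minimal outline, assuming a compiled .xmodel and the xir/vart Python packages from the Vitis AI runtime; the file name, buffer shapes, and int8 data type are placeholder assumptions that depend on your model.
```python
import numpy as np
import vart  # Vitis AI Runtime (VART)
import xir   # Xilinx Intermediate Representation graph API

# Load the compiled model and locate the DPU subgraph
graph = xir.Graph.deserialize("compiled_model.xmodel")  # placeholder file name
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(s for s in subgraphs
                    if s.has_attr("device") and s.get_attr("device").upper() == "DPU")

# Create a runner and query its tensor shapes
runner = vart.Runner.create_runner(dpu_subgraph, "run")
input_tensor = runner.get_input_tensors()[0]
output_tensor = runner.get_output_tensors()[0]

# Placeholder buffers; real code fills the input with pre-processed image data,
# and the dtype depends on the model's quantization (int8 assumed here)
input_data = [np.zeros(tuple(input_tensor.dims), dtype=np.int8)]
output_data = [np.zeros(tuple(output_tensor.dims), dtype=np.int8)]

# Submit the job to the DPU and wait for completion
job_id = runner.execute_async(input_data, output_data)
runner.wait(job_id)
```
The Vitis AI Library layers higher-level model classes and pre-/post-processing routines on top of this runtime interface.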
- Creating a Custom Hardware Platform with the DPU Using the Vivado Design Suite Flow (Edge)
Illustrates the steps to build a Vivado Design Suite project, add the DPUCZDX8G IP, and run the design on a target board. {Lab}
- Creating a DPU Kernel Using the Vitis Environment Flow (Edge)
Illustrates the steps to build a Vitis unified software platform project that adds the DPU as the kernel (hardware accelerator) and to run the design on a target board. {Lab}
- Creating a Vitis Embedded Acceleration Platform (Edge)
Describes the Vitis embedded acceleration platform, which provides product developers an environment for creating embedded software and accelerated applications on heterogeneous platforms based on FPGAs, Zynq™ SoCs, and Alveo data center cards. {Lecture}
- Creating a Custom Application (Edge)
Illustrates the steps to create a custom application, including building the hardware and Linux image, optimizing the trained model, and using the optimized model to accelerate a design. {Lab}