
Essential Edge AI ONLINE

Standard Level - 4 sessions (6 hours per session)


PLEASE NOTE: This is a LIVE INSTRUCTOR-LED training event delivered ONLINE. It covers the same scope and content as a scheduled in-person class and delivers comparable learning outcomes.


Essential Edge AI is designed for engineers who need a practical understanding of deploying trained Neural Network models to constrained edge devices. 

From principles and procedures to important rules and helpful tricks, the course gives attendees a system-level perspective on embedding deep learning model inference into an application and on connecting it to the other parts of the system that make it useful.

The practical side of the training is based around hypothetical Edge AI applications that step through the process from planning and pre-processing, through creating Neural Network models, to inference and deployment. These exercises comprise approximately 50% of class time.


This course is aimed at Deep Learning practitioners who have a trained model and wish to deploy it for an application on one or more of the following constrained edge device types: Linux based single board computers (x64 or ARM), Neural Network Accelerators, or 32-bit Microcontrollers (such as the Cortex-M4).

Please note that the course does not delve into the details of how to train a deep learning model or the basics of different neural network architectures; these details are covered in the Practical Deep Learning course. This course also does not discuss the inference of large models (such as those used for natural language processing), which are likely to run on cloud servers.

By the end of the course, attendees will be able to:

  • Plan and pre-process data for deep learning training and inference
  • Describe the architectures of Fully Connected, Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) models
  • Apply Transfer Learning to customize models to application-specific datasets
  • Classify time series data using CNNs
  • Convert models between different formats (ONNX, TFLite)
  • Perform model inference using different runtimes and Python / C++ APIs
  • Port a trained model to a microcontroller using TinyML (TFLite Micro)
  • Compile a model for a neural network accelerator
  • Perform Object Detection using a Single Shot Detector (SSD) model
  • Host AI models on an IoT device and provide inference using MQTT or REST APIs
  • Deploy AI models using Docker containers

Attendees should be familiar with and have experience of working with neural network models. Specifically, attendees should have:

  • An understanding of different types of neural network models
  • Experience of using Keras APIs to create and train models
  • An understanding of metrics needed to evaluate a trained model
  • Some background knowledge of edge devices, such as Linux based single board computers (x64 or ARM), Neural Network Accelerators or 32-bit Microcontrollers (such as Cortex-M4 based devices)

Attendees should also have good working knowledge of a programming language like Python or C/C++. Prior attendance of Doulos Practical Deep Learning training would be an advantage.

Please contact Doulos directly to discuss and assess your specific experience against the pre-requisites.

Doulos training materials are renowned for being the most comprehensive and user-friendly available. Their style, content and coverage are unique in the training world and have made them sought-after resources. The materials include:

  • Fully indexed class notes creating a complete reference manual
  • Jupyter Notebooks containing complete working code for all of the neural networks presented during the training class, which you can use after the class for revision or as the basis for your own networks.
  • Source code on GitHub for exercises using a Linux SBC, a Neural Network Accelerator and a Microcontroller.

Introduction

Supervised learning • Inference engines at the edge • Edge AI components • Edge AI applications • Data acquisition
Practicals:  Try out model training environments based on CPU and GPU
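
As a flavour of this first practical (a minimal sketch, not the course's lab code), TensorFlow can be asked which compute devices a training environment exposes; the device lists will differ between the CPU- and GPU-based setups tried out in class.

    import tensorflow as tf

    # List the devices TensorFlow can see in this training environment.
    # An empty GPU list means training will fall back to the CPU.
    print("GPUs:", tf.config.list_physical_devices("GPU"))
    print("CPUs:", tf.config.list_physical_devices("CPU"))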

Data Planning & Pre-processing

CRISP-DM Methodology & MLOps for Edge AI • Use case of audio feature pre-processing • Split audio file for training • Conversion of audio segments to spectrogram • Training data preparation
Practicals:  Use SoX (Sound eXchange) software to trim, filter, and plot Spectrograms • View Spectrograms before training • Organize spectrograms into folders for training • Work with Environment Sound Classification (ESC) dataset
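
The practical itself uses SoX on the command line; purely as an illustration of the same idea, the sketch below computes and saves a spectrogram of one audio segment in Python with SciPy and Matplotlib. The file paths are hypothetical and the output folder is assumed to exist.

    import matplotlib.pyplot as plt
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    # Load one trimmed audio segment (assumes a mono WAV; path is illustrative).
    sample_rate, samples = wavfile.read("segments/dog_bark_001.wav")

    # Compute the spectrogram: rows are frequency bins, columns are time frames.
    freqs, times, Sxx = spectrogram(samples, fs=sample_rate, nperseg=512)

    # Plot on a log scale so quieter components remain visible, then save the
    # image so it can be filed into a per-class folder for CNN training.
    plt.pcolormesh(times, freqs, 10 * np.log10(Sxx + 1e-10), shading="auto")
    plt.xlabel("Time [s]")
    plt.ylabel("Frequency [Hz]")
    plt.savefig("spectrograms/dog_bark/dog_bark_001.png")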

Creating Neural Network (NN) Models for Edge

Review of different kinds of neural networks • Convolutional Neural Network (CNN) using 2D and 1D convolution • Simple Recurrent Neural Network model • Transfer Learning • Chaining NN models
Practicals:  Code (using Keras) simple Convolutional and Recurrent neural networks using small datasets • Write code to use pretrained models (such as MobileNet) for transfer learning
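
As an illustration of the transfer learning step (a sketch only, with an assumed input size and class count rather than the course dataset), a pretrained MobileNetV2 feature extractor can be frozen and given a new classification head in Keras:

    import tensorflow as tf

    # Load MobileNetV2 trained on ImageNet, without its classification head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained feature extractor

    # Add a small head for an application-specific dataset (here, 4 classes).
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(4, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)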

Time Series

Terminology – univariate, multivariate, regression, classification • Pre-processing using Pandas • Time Series as supervised learning problem • Keras Time Series Generator API • Time Series classification using CNN • RNN for Time Series Data  • Time Series Database
Practicals: Set up time series dataset as supervised learning task • Perform CNN based fingerprint analysis of sections of time series data
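
A minimal sketch of framing a time series as a supervised learning problem with the Keras TimeseriesGenerator API and a small 1D CNN; the synthetic sine-wave data and window length are placeholders for the dataset used in class.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

    # Illustrative univariate series; in class this would come from a sensor log.
    series = np.sin(np.linspace(0, 100, 2000)).astype("float32").reshape(-1, 1)

    # Frame the series as supervised learning: 32 past samples -> next sample.
    gen = TimeseriesGenerator(series, series, length=32, batch_size=64)

    # A small 1D CNN that regresses the next value from each window.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv1D(16, 3, activation="relu", input_shape=(32, 1)),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(gen, epochs=2)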

Edge AI Hardware

Constrained Inference platforms • Inference server • Linux based SBC • 32-bit Microcontroller • NN Accelerator • GPU • FPGA
Practicals: Take a picture, play and record audio using the SBC • Read sensor data over a serial port from the SBC and Microcontroller
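
For the sensor-reading part of this practical, a minimal sketch using the pyserial package might look as follows; the device path and baud rate are assumptions that depend on the actual board and host OS.

    import serial  # pyserial

    # Port name and baud rate are illustrative; adjust for the board in use.
    port = serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1.0)

    try:
        for _ in range(10):
            # Each reading arrives as one text line from the microcontroller.
            line = port.readline().decode("utf-8", errors="replace").strip()
            if line:
                print("sensor reading:", line)
    finally:
        port.close()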

Model Formats

Model Formats • Open Neural Network Exchange Format (ONNX)   •  TensorFlow Lite (TFLite) • Viewing Model Graph • Model Format Conversion • Model Quantization
Practicals:  Train and convert Scikit-Learn (ML) model to ONNX • Train and convert Keras Models to ONNX and TFLite
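
A sketch of the two conversions, assuming a saved Keras model and using the tf2onnx package as one possible route to ONNX; the file names are illustrative.

    import tensorflow as tf
    import tf2onnx

    model = tf.keras.models.load_model("audio_cnn.h5")  # illustrative path

    # Keras -> TFLite
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_bytes = converter.convert()
    with open("audio_cnn.tflite", "wb") as f:
        f.write(tflite_bytes)

    # Keras -> ONNX (some models also need an explicit input_signature)
    onnx_model, _ = tf2onnx.convert.from_keras(model, opset=13)
    with open("audio_cnn.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())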

Model Inference

Inferencing steps • Input Tensor Shape • ONNX runtime, TFLite Interpreter Python methods • TFLite C++ Classes
Practicals: Convert a Scikit-learn model to ONNX format • Perform inference of an NN model using ONNX/TFLite formats in a Python environment • Infer a TFLite model using C++ classes
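
As a taste of the Python inference APIs, the sketch below runs the same (hypothetical) converted model through ONNX Runtime and the TFLite Interpreter; the input shape is an assumption and must match the real model.

    import numpy as np
    import onnxruntime as ort
    import tensorflow as tf

    x = np.random.rand(1, 64, 64, 1).astype(np.float32)  # illustrative input tensor

    # ONNX Runtime inference
    sess = ort.InferenceSession("audio_cnn.onnx")
    input_name = sess.get_inputs()[0].name
    onnx_out = sess.run(None, {input_name: x})[0]

    # TFLite Interpreter inference
    interp = tf.lite.Interpreter(model_path="audio_cnn.tflite")
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]
    interp.set_tensor(inp["index"], x)
    interp.invoke()
    tflite_out = interp.get_tensor(out["index"])

    # Both runtimes should agree on the predicted class.
    print(onnx_out.argmax(), tflite_out.argmax())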

TinyML

TinyML Implementation frameworks • TFLite Micro • Converting Keras Model • Setting up and using TFLite Micro Interpreter
Practicals:  Quantize Fully Connected TFLite Model • Convert Model to C array for storing on the MCU • Read sensor value and execute model using TinyML (TFLite Micro) on a Cortex-M4 device
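
An illustrative sketch of full-integer quantization followed by conversion of the TFLite flatbuffer to a C array for linking into the firmware; the model, its input shape and the representative data are placeholders.

    import numpy as np
    import tensorflow as tf

    model = tf.keras.models.load_model("gesture_fc.h5")  # illustrative model

    def representative_data():
        # Yield a few representative inputs so the converter can calibrate ranges.
        for _ in range(100):
            yield [np.random.rand(1, 128).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_data
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_bytes = converter.convert()

    # Emit the model as a C array (what `xxd -i` does) for inclusion in firmware.
    with open("model_data.cc", "w") as f:
        f.write("const unsigned char g_model[] = {\n")
        f.write(",".join(str(b) for b in tflite_bytes))
        f.write("\n};\nconst unsigned int g_model_len = %d;\n" % len(tflite_bytes))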

Neural Network Accelerator

Neural Network Accelerator platforms  • Model compiler and workload partitioning • Executing Model on Accelerator • Working with multiple accelerators
Practicals:  Quantize and compile CNN Model for NN Accelerator • Run compiled model on accelerator and compare execution time with SBC
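
How a model reaches the accelerator is vendor specific (some flows use an offline compiler, others a runtime delegate). Purely as an illustration of the timing comparison, the sketch below loads a TFLite model twice, once CPU-only and once through a delegate library whose name is only a placeholder, and times the average inference.

    import time
    import numpy as np
    import tensorflow as tf

    x = np.random.rand(1, 224, 224, 3).astype(np.float32)  # illustrative input

    def timed_inference(interpreter, runs=50):
        # Average the latency of repeated invocations of one interpreter.
        interpreter.allocate_tensors()
        inp = interpreter.get_input_details()[0]["index"]
        interpreter.set_tensor(inp, x)
        start = time.perf_counter()
        for _ in range(runs):
            interpreter.invoke()
        return (time.perf_counter() - start) / runs

    # CPU-only interpreter on the SBC.
    cpu = tf.lite.Interpreter(model_path="mobilenet.tflite")

    # Accelerator via a delegate; the library name is a vendor-specific placeholder.
    delegate = tf.lite.experimental.load_delegate("libvendor_npu_delegate.so")
    npu = tf.lite.Interpreter(model_path="mobilenet.tflite",
                              experimental_delegates=[delegate])

    print("CPU: %.2f ms" % (1000 * timed_inference(cpu)))
    print("NPU: %.2f ms" % (1000 * timed_inference(npu)))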

Object Detection

MobileNet architecture • Object Detection using MobileNet based SSD • Object Detector Output • Decoding and boxing detected output
Practicals: Use object detection code for determining if a car has been parked beyond a stipulated time
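
A sketch of decoding the post-processed outputs of a MobileNet-SSD TFLite model (boxes, class ids, scores); the model file is illustrative and the output tensor ordering varies between exported models, so it should be checked against get_output_details().

    import numpy as np
    import tensorflow as tf

    interp = tf.lite.Interpreter(model_path="ssd_mobilenet.tflite")
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    outs = interp.get_output_details()

    # One camera frame resized to the detector's input size (dummy data here).
    frame = np.zeros(inp["shape"], dtype=inp["dtype"])
    interp.set_tensor(inp["index"], frame)
    interp.invoke()

    # Typical SSD outputs: boxes, class ids, scores (ordering is model dependent).
    boxes = interp.get_tensor(outs[0]["index"])[0]   # [N, 4] normalized (ymin, xmin, ymax, xmax)
    classes = interp.get_tensor(outs[1]["index"])[0]
    scores = interp.get_tensor(outs[2]["index"])[0]

    for box, cls, score in zip(boxes, classes, scores):
        if score > 0.5:
            print("class %d at %s, score %.2f" % (int(cls), np.round(box, 2), score))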

Monitor Model Inference and Performance

Accessing Model Inference using MQTT • Model inference using REST API • Monitoring Model Drift   
Practicals: Communicate model inference output using MQTT and a Flask web server
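
A minimal sketch of publishing inference results over MQTT with the paho-mqtt client; the broker address and topic are assumptions (and paho-mqtt 2.x additionally expects a CallbackAPIVersion argument when constructing the client).

    import json
    import time
    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("broker.local", 1883)   # broker address is illustrative
    client.loop_start()

    def publish_result(label, score):
        # Publish each inference result so other parts of the system can subscribe.
        payload = json.dumps({"label": label, "score": float(score), "ts": time.time()})
        client.publish("edge/camera1/inference", payload, qos=1)

    publish_result("dog_bark", 0.93)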

Deploy Model to Edge Device

Enumerate Model Inference Dependencies • Create Dockerfile for Model Inference Container • Push Model Inference Container to Edge Device
Practicals: Create ONNX/TFLite inference Dockerfiles • Use the Dockerfiles to build Docker containers and test them on the Edge device
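
The Dockerfile itself is written during the practical; as a companion sketch, the kind of entrypoint script such a container might run is shown below: a small Flask service wrapping a TFLite model, with paths and the request format as assumptions.

    # app.py: illustrative entrypoint a model-inference container might run.
    import numpy as np
    import tflite_runtime.interpreter as tflite
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    interp = tflite.Interpreter(model_path="/models/audio_cnn.tflite")
    interp.allocate_tensors()
    inp = interp.get_input_details()[0]
    out = interp.get_output_details()[0]

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body holding the already pre-processed input tensor.
        x = np.array(request.get_json()["input"], dtype=np.float32).reshape(inp["shape"])
        interp.set_tensor(inp["index"], x)
        interp.invoke()
        return jsonify({"output": interp.get_tensor(out["index"]).tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)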

Edge AI use cases

Edge AI application planning template • Discussion on creating Edge AI solution for different use cases

For on-site, team-based training, please contact Doulos about tailoring this course to suit your particular target hardware and software environments.

Course Dates

16 Dec 2024 ONLINE Americas
03 Feb 2025 ONLINE EurAsia
01 Apr 2025 ONLINE EurAsia
22 Apr 2025 ONLINE Americas

Looking for team-based training, or other locations?

Complete an enquiry form and a Doulos representative will get back to you.

Price on request
