October 9, 2021

Xavier Geerinck (@XavierGeerinck)

Video Analytics - Edge

For a recent use case I want to run an AI model on the edge that infers a real-time video stream as quickly as possible. Normally this is quite difficult, as inference might lag behind the real-time stream, forcing us to timestamp the stream to compensate for the lagging inference model. Luckily for us, Azure has the "Azure Video Analyzer" service that can help us with this. But how do we develop a module with it and deploy it on IoT Edge?

Prerequisites

Before we get started, let's take care of some prerequisites:

Create a Video Analyzer account in Azure

Open up the Azure Portal and create the Video Analyzer service with an Edge Module in it. From there we need to copy the provisioning token:

Create IoT Edge Device

Now that we have that, we need to create an IoT Edge device to deploy the module on. We will use this not only for a real deployment, but also for simulation to ease development (I will initially be running under WSL).

  1. VS Code -> CTRL + P -> Create Azure IoT Edge Device
  2. Save the connection string somewhere

If you can't find the connection string anymore, you can right-click the device in VS Code to retrieve it, provided you have the Azure IoT Hub extension installed.
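If you later want to script against the device, the connection string can be parsed with plain shell tools. A small sketch (the connection string below is a made-up example):

```shell
# Hypothetical connection string, for illustration only
CONN="HostName=myhub.azure-devices.net;DeviceId=my-edge-device;SharedAccessKey=abc123="
# Split on ';', pick the DeviceId field, and strip the key name
DEVICE_ID=$(echo "$CONN" | tr ';' '\n' | grep '^DeviceId=' | cut -d= -f2)
echo "$DEVICE_ID"  # -> my-edge-device
```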

Configure IoT Edge Simulator Locally

As a last prerequisite, we have to configure the IoT Edge Simulator so we can develop locally. Run the commands below to do so.

Note: awaiting https://github.com/Azure/iotedgehubdev/issues/352 (hence the install from git below)

# Install Python
pyenv install 3.8.10
pyenv global 3.8.10
# Install iotedgehubdev (latest)
# pip install iotedgehubdev --upgrade --force-reinstall
pip install git+https://github.com/Azure/iotedgehubdev@<branch-or-tag>
# Install Docker Compose (Linux Only)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# Enable PyEnv binaries under sudo
# required for sudo iotedgehubdev
sudo sed -r -i -e "s#Defaults\s+secure_path=\"([^\"]+)\"#Defaults secure_path=\"\1:$HOME/.pyenv/shims\"#" /etc/sudoers
# Configure the Device
sudo iotedgehubdev setup -c "<DEVICE_CONN_STRING>"

Now the simulator is set up and we can get started!

In case of issues, you can remove it completely by following https://github.com/Azure/iotedgehubdev/issues/221#issuecomment-536382620

If you broke WSL's /etc/sudoers as I did, you can fix it by running wsl -u root visudo in a Windows Terminal.
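Editing /etc/sudoers with sed is exactly how it breaks, so it can help to dry-run the secure_path rewrite on a sample line first. A sketch (using a fixed path instead of $HOME so the output is predictable):

```shell
# Dry-run of the secure_path rewrite on a sample line, NOT the real /etc/sudoers
LINE='Defaults        secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"'
echo "$LINE" | sed -r -e "s#Defaults\s+secure_path=\"([^\"]+)\"#Defaults secure_path=\"\1:/home/me/.pyenv/shims\"#"
```

Only once the printed line looks right should the same expression be pointed at the real file.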

1. IoT Edge - Video Stream Simulator

The first module we will create is a video stream simulator that transforms a video file into an RTSP stream. A normal webcam or IP camera outputs such a stream, but with a simple simulator we don't have to connect a webcam every time during development.

Configuring the device

To configure this device, we can run the script below that will create a local user account and initialize the folders containing our video files:

Note: this was taken from https://aka.ms/ava-edge/prep_device which we could run through bash -c "$(curl -sL https://aka.ms/ava-edge/prep_device)"

#!/usr/bin/env bash
####################################################################################################
# This script is designed to prepare an edge device.
#
# It does the following:
# - for the rtspsim module, it creates a `samples` folder and downloads sample media into it
# - it creates folders on the host that the edge module will mount
# - it creates a local user and group with restricted access for the edge module to use
####################################################################################################
# create the local group and user for the edge module
# these are mapped from host to container in the deployment manifest in the desire properties for the module
sudo groupadd -g 1010 localedgegroup
sudo useradd --home-dir /home/localedgeuser --uid 1010 --gid 1010 localedgeuser
sudo mkdir -p /home/localedgeuser
# create folders to be used by the rtspsim module
sudo mkdir -p /home/localedgeuser/samples
sudo mkdir -p /home/localedgeuser/samples/input
sudo curl https://lvamedia.blob.core.windows.net/public/camera-300s.mkv --output /home/localedgeuser/samples/input/camera-300s.mkv
sudo curl https://lvamedia.blob.core.windows.net/public/lots_284.mkv --output /home/localedgeuser/samples/input/lots_284.mkv
sudo curl https://lvamedia.blob.core.windows.net/public/lots_015.mkv --output /home/localedgeuser/samples/input/lots_015.mkv
sudo curl https://lvamedia.blob.core.windows.net/public/t2.mkv --output /home/localedgeuser/samples/input/t2.mkv
sudo curl https://lvamedia.blob.core.windows.net/public/retailshop-15fps.mkv --output /home/localedgeuser/samples/input/retailshop-15fps.mkv
# give the local user access
sudo chown -R localedgeuser:localedgegroup /home/localedgeuser/
# set up folders for use by the Video Analyzer module
# these are mounted in the deployment manifest
# !NOTE! these folder locations must match the folders used in `deploy-modules.sh` and ultimately the IoT Edge deployment manifest
# general app data for the module
sudo mkdir -p /var/lib/videoanalyzer
sudo chown -R localedgeuser:localedgegroup /var/lib/videoanalyzer/
sudo mkdir -p /var/lib/videoanalyzer/tmp/
sudo chown -R localedgeuser:localedgegroup /var/lib/videoanalyzer/tmp/
sudo mkdir -p /var/lib/videoanalyzer/logs
sudo chown -R localedgeuser:localedgegroup /var/lib/videoanalyzer/logs
# output folder for file sink
sudo mkdir -p /var/media
sudo chown -R localedgeuser:localedgegroup /var/media/
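Before running the script for real (it needs sudo and creates users), the folder layout it produces can be sanity-checked under a temporary prefix. A sketch:

```shell
# Recreate the folder layout under a temporary prefix to inspect it safely
PREFIX=$(mktemp -d)
mkdir -p "$PREFIX/home/localedgeuser/samples/input"
mkdir -p "$PREFIX/var/lib/videoanalyzer/tmp" "$PREFIX/var/lib/videoanalyzer/logs"
mkdir -p "$PREFIX/var/media"
# Confirm the deepest folders exist before applying the same commands for real
[ -d "$PREFIX/var/lib/videoanalyzer/logs" ] && echo "layout ok"
```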

2. IoT Solution - Deployment Manifest

We can now deploy our entire IoT solution to our device (or to the simulator). Create a file named deployment.template.json in the root with the content from the Azure Video Analyzer repository.

In my case I adapted the Hub version to 1.2.0

Since we copied the file over as boilerplate, some placeholder variables still need to be replaced.

So let's substitute the values:

PROVISIONING_TOKEN="<YOUR_AVA_PROVISIONING_TOKEN>" # the token used for provisioning the edge module
VIDEO_OUTPUT_FOLDER_ON_DEVICE="/home/xanrin/Projects/iot-prototyping-team/customers/thinkthings/video-analytics-edge/VideoAnalyticsEdge/demo" # the folder where the file sink will store clips
VIDEO_INPUT_FOLDER_ON_DEVICE="/home/xanrin/Projects/iot-prototyping-team/customers/thinkthings/video-analytics-edge/VideoAnalyticsEdge/demo" # the folder where rtspsim will look for sample clips
APPDATA_FOLDER_ON_DEVICE="/home/xanrin/Projects/iot-prototyping-team/customers/thinkthings/video-analytics-edge/VideoAnalyticsEdge/demo" # the folder where the Video Analyzer module will store state
# Make sure the directory exists
mkdir -p /home/xanrin/Projects/iot-prototyping-team/customers/thinkthings/video-analytics-edge/VideoAnalyticsEdge/demo
sed -i "s@\$AVA_PROVISIONING_TOKEN@${PROVISIONING_TOKEN}@g" deployment.template.json
sed -i "s@\$VIDEO_OUTPUT_FOLDER_ON_DEVICE@${VIDEO_OUTPUT_FOLDER_ON_DEVICE}@g" deployment.template.json
sed -i "s@\$VIDEO_INPUT_FOLDER_ON_DEVICE@${VIDEO_INPUT_FOLDER_ON_DEVICE}@g" deployment.template.json
sed -i "s@\$APPDATA_FOLDER_ON_DEVICE@${APPDATA_FOLDER_ON_DEVICE}@g" deployment.template.json
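The substitution pattern (sed with @ as the delimiter so the paths don't clash with /) can be verified on a throwaway template first; the file name and token value below are made up for illustration:

```shell
# Dry-run the token substitution against a throwaway template file
echo '{"provisioningToken": "$AVA_PROVISIONING_TOKEN"}' > /tmp/demo.template.json
PROVISIONING_TOKEN="example-token"
sed -i "s@\$AVA_PROVISIONING_TOKEN@${PROVISIONING_TOKEN}@g" /tmp/demo.template.json
cat /tmp/demo.template.json  # -> {"provisioningToken": "example-token"}
```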

3. Run in Simulator

We can now run the IoT Solution in our simulator by running VS Code -> CTRL + P -> Build and Run IoT Edge Solution in Simulator -> Select the deployment.template.json file.

This will boot up everything and run it in our simulator.

Reference

  • https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/get-started-detect-motion-emit-events-portal
  • https://github.com/Azure/video-analyzer/
  • https://github.com/Azure/video-analyzer/tree/main/setup (scripts for setting up demo resources)
  • https://github.com/Azure/video-analyzer/blob/main/setup/general-sample-setup.modules.json (deployment manifest for IoT Hub)

Old

Local running container:

"avaedge": {
  "version": "1.0",
  "type": "docker",
  "status": "running",
  "restartPolicy": "always",
  "settings": {
    "image": "${MODULES.avaedge}",
    "createOptions": {}
  }
},

TO STRUCTURE

  1. We create the sim
    • Should have prebaked video files
    • Should output an RTSP stream

-> Copied from avaedge and made some adaptations to remove the bind and make the user experience smoother

  2. We create AVA Edge and configure it
    • I had to edit the Twin Config?! Since properties.desired was not taken over into the module twin...
      • Right-click the module in the Azure IoT Hub device explorer
      • Edit Module Twin Config
      • Add the lines under avaedge -> properties.desired
      • Click "Update Module Twin" at the top left in VS Code
      • Restart the entire sim
  3. We create a custom AVA Edge AI Model
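For reference, the desired properties I added to the module twin looked roughly like the fragment below. This is only a sketch based on my notes and the general-sample-setup.modules.json manifest; the exact property names (e.g. applicationDataDirectory, ProvisioningToken) should be verified against the manifest in the Azure Video Analyzer repository.

```json
{
  "applicationDataDirectory": "/var/lib/videoanalyzer",
  "ProvisioningToken": "<YOUR_AVA_PROVISIONING_TOKEN>",
  "diagnosticsEventsOutputName": "diagnostics",
  "operationalEventsOutputName": "operational",
  "logLevel": "information",
  "allowUnsecuredEndpoints": true,
  "telemetryOptOut": false
}
```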

TO STRUCTURE - DEBUGGING

-> container is mcr.microsoft.com/media/video-analyzer

-> we have 2 binds: /var/lib/videoanalyzer and /var/media

  • /var/lib/videoanalyzer
    • /var/lib/videoanalyzer/debuglogs: seems to hold the logs?
  • /var/media: seems to be used for recording video to a folder
    • can be controlled through the environment variable: VIDEO_OUTPUT_FOLDER_ON_DEVICE

-> the rtsp sim module:

more:

  • they have a USB-to-RTSP module as well: https://github.com/Azure/video-analyzer/tree/main/edge-modules/sources/USB-to-RTSP
  • how to use a custom AI module: https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/analyze-live-video-custom-vision?pivots=programming-language-csharp
  • line crossing explained: https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/use-line-crossing

  • https://github.com/Azure-Samples/video-analyzer-iot-edge-csharp/tree/main/src/edge
  • https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/production-readiness
  • https://docs.microsoft.com/en-us/azure/azure-video-analyzer/video-analyzer-docs/http-extension-protocol
  • https://thewindowsupdate.com/2021/08/24/design-an-ai-enabled-nvr-with-ava-edge-and-intel-openvino/


Xavier Geerinck © 2020
