# Creating an Azure IoT Edge Golden Image

So you are working at production scale and suddenly notice that you have to provision not just 1, not 10, but 1000s of devices. What do you do?

This is where you start automating. Rather than doing everything manually, you ideally:

  1. Create an image
  2. Flash an SD Card
  3. Install the SD Card
  4. Boot-up the device

Applying this to Azure IoT Edge, I will walk you through how you can create your own Azure IoT Edge image for your devices. This way we can follow a previous blog post to automatically register our devices with IoT Hub on boot!

## Packer Installation

To do so, we will be utilizing a tool named Packer. Packer was created by HashiCorp to create "Golden Images" for a target platform through Builders. These builders are responsible for generating an image for that target platform (e.g. an image saved to Azure, an image that runs on Hyper-V, …).

### Installing Packer with ARM support

In our case, we want to utilize Packer to create an image that can run on our target devices, which are ARM-based. Packer doesn't offer this natively, but a community-supported builder plugin can do just that.

### Packer

Let's get started on creating our own image by installing Packer itself.

```bash
# Install Go (required for Packer)
# See: https://github.com/canha/golang-tools-install-script
wget -q -O - https://git.io/vQhTU | bash

# Install dependencies
sudo apt install -y git unzip qemu qemu-user-static e2fsprogs dosfstools

# Install Packer
# see version at: https://www.packer.io/downloads
PACKER_VERSION=1.7.4
wget "https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip"
unzip packer_${PACKER_VERSION}_linux_amd64.zip
sudo mv packer /usr/local/bin/
```
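As a quick sanity check, you can confirm that both Go and Packer ended up on your `PATH`:

```bash
# Both commands should print a version number
go version
packer version
```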

### ARM for Packer

Since ARM support is not included natively, we will have to install a plugin for it.

```bash
# Download the Packer ARM Builder
git clone https://github.com/mkaczanowski/packer-builder-arm

# Build it
cd packer-builder-arm
go mod download
go build

# Install to /usr/local/bin (packer will discover plugins with prefix packer-*)
sudo mv packer-builder-arm /usr/local/bin
```
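A simple way to confirm the plugin binary ended up where Packer can discover it (next to the `packer` binary in this setup):

```bash
# Packer discovers external plugins named packer-builder-* alongside its own binary
ls -l /usr/local/bin/packer /usr/local/bin/packer-builder-arm
```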

## Configuring

Once Packer is installed and the ARM plugin has been added, we can get started on creating our own custom image. Let's recap what our image should look like:

  1. Use a Jetson Nano with Ubuntu: We want to utilize a base Ubuntu image, compiled for a Jetson Nano, as our source.
  2. Add a custom script for IoT Edge with DPS configuration: We utilize DPS to set up our device; this script will take in the Scope ID and Primary Key of a Group Enrollment and automatically provision our device in IoT Hub (a sketch of the key idea follows this list).
  3. Basic System Configuration: We want to configure time settings, SSH access, admin users, cleanup, … so we add that as well.
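To give an idea of what step 2 boils down to: with a DPS symmetric-key group enrollment, each device derives its own key by HMAC-SHA256-signing its registration ID with the group enrollment key. Below is a minimal, hypothetical sketch of that derivation; the actual `05-install-iot-edge.sh` lives in the repository linked further down, and the `DPSSCOPEID` / `DPSGROUPENROLLMENTKEY` names simply mirror the environment variables the Packer template (shown later) passes in.

```bash
#!/bin/bash
# Hypothetical sketch of the device-key derivation for a DPS symmetric-key
# group enrollment; the real 05-install-iot-edge.sh is in the linked repository.

REGISTRATION_ID="$(hostname)"   # assumption: the hostname doubles as the registration ID

# Derive the per-device key: HMAC-SHA256 over the registration ID,
# keyed with the (base64-decoded) group enrollment key
KEY_BYTES=$(echo -n "${DPSGROUPENROLLMENTKEY}" | base64 --decode | xxd -p -u -c 1000)
DEVICE_KEY=$(echo -n "${REGISTRATION_ID}" \
  | openssl sha256 -mac HMAC -macopt hexkey:"${KEY_BYTES}" -binary \
  | base64)

echo "Provisioning with scope ID ${DPSSCOPEID} and registration ID ${REGISTRATION_ID}"
echo "Derived device key: ${DEVICE_KEY}"
```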

Create a folder structure that looks like the one below:

```bash
.
├── jetson-azure-iot-edge.json    # Packer Configuration Details
├── scripts
│   ├── 00-clean-nvidia-jetson.sh # Cleanup Nvidia Jetson base image (remove GUI)
│   ├── 01-configure-system.sh    # Configure the system (hostname, DNS, upgrading)
│   ├── 02-configure-ssh.sh       # Allow SSH Access
│   ├── 03-configure-chrony.sh    # Time Synchronisation
│   ├── 04-create-admin-user.sh   # Admin Login
│   └── 05-install-iot-edge.sh    # IoT Edge Configuration on Boot
```
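To give an idea of what these scripts contain, here is a hypothetical sketch of `04-create-admin-user.sh`, consuming the environment variables that the Packer template (shown below) passes in; the actual scripts are in the repository linked below.

```bash
#!/bin/bash
# Hypothetical sketch of 04-create-admin-user.sh; the real script is in the repository.
set -euo pipefail

if [ "${ADMINCREATE}" = "true" ]; then
  # Create the admin group and user with the IDs and groups passed in by Packer
  groupadd --gid "${ADMINGID}" "${ADMINGROUP}"
  useradd --create-home \
    --uid "${ADMINUID}" --gid "${ADMINGID}" --groups "${ADMINGROUPS}" \
    --shell "${ADMINSHELL}" --comment "${ADMINGECOS}" \
    "${ADMINUSER}"
  echo "${ADMINUSER}:${ADMINPASSWD}" | chpasswd

  # Optionally install an authorised SSH key for the new user
  if [ -n "${ADMINSSHAUTHORISEDKEY}" ]; then
    install -d -m 700 -o "${ADMINUSER}" -g "${ADMINGROUP}" "/home/${ADMINUSER}/.ssh"
    echo "${ADMINSSHAUTHORISEDKEY}" > "/home/${ADMINUSER}/.ssh/authorized_keys"
    chown "${ADMINUSER}:${ADMINGROUP}" "/home/${ADMINUSER}/.ssh/authorized_keys"
    chmod 600 "/home/${ADMINUSER}/.ssh/authorized_keys"
  fi
fi
```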

The main `jetson-azure-iot-edge.json` file will contain the configuration for how Packer should build our image. In it we specify that:

* It should define variables for our scripts
* It should use the Nvidia Jetson Nano SD Card image as its base
* It should copy our scripts to `/tmp/packer/scripts`
* It should execute the different scripts, with the variables passed in as environment variables at build time

> For the full source code, feel free to check my [Public Repository](https://github.com/XavierGeerinck/PublicProjects/tree/master/Azure-IoT-Edge/PackerImage/NvidiaJetsonNano)

Our Packer template (`jetson-azure-iot-edge.json`) will then look like this:

**jetson-azure-iot-edge.json:**

```json
{
  "variables": {
    "dps_group_enrollment_key": "NOTSETPASSBYCLI",
    "dps_scope_id": "NOTSETPASSBYCLI",
    "admin_create": "true",
    "admin_user": "admin",
    "admin_group": "admin",
    "admin_passwd": "admin",
    "admin_uid": "1000",
    "admin_gid": "1000",
    "admin_groups": "cdrom,floppy,sudo,audio,dip,video,plugdev,netdev,systemd-journal",
    "admin_shell": "/bin/bash",
    "admin_gecos": "Administrator",
    "admin_ssh_authorised_key": ""
  },
  "builders": [
    {
      "type": "arm",                                     // The builder type
      "file_urls": [                                     // Our base image URL
        "https://developer.nvidia.com/jetson-nano-sd-card-image-r3221"
      ],
      "file_checksum": "7b87e26d59c560ca18692a1ba282d842", // Our base image checksum
      "file_checksum_type": "md5",                       // Our base image checksum type
      "file_target_extension": "zip",                    // The extension/compression of the image
      "image_build_method": "reuse",                     // Reuse disk image or create from scratch?
      "image_path": "jetson-nano.img",                   // Where to unpack to
      "image_size": "12G",                               // The size of the image
      "image_type": "dos",                               // Partition Table Scheme
      "image_partitions": [                              // Specifications of the partitions of the image
        {
          "name": "root",
          "type": "83",
          "start_sector": "24576",
          "filesystem": "ext4",
          "size": "12G",
          "mountpoint": "/"
        }
      ],
      "image_chroot_env": [                              // The shell environment that is passed to chroot
        "PATH=/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"
      ],
      "qemu_binary_source_path": "/usr/bin/qemu-aarch64-static",      // Where is qemu situated?
      "qemu_binary_destination_path": "/usr/bin/qemu-aarch64-static"  // Where will we copy it to?
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [ "mkdir -p /tmp/packer/scripts" ]
    },
    {
      "type": "file",
      "source": "./scripts",
      "destination": "/tmp/packer/"
    },
    {
      "type": "shell",
      "remote_folder": "/tmp/packer",
      "skip_clean": true,
      "environment_vars": [
        "ADMINCREATE={{user `admin_create`}}",
        "ADMINGECOS={{user `admin_gecos`}}",
        "ADMINGID={{user `admin_gid`}}",
        "ADMINGROUP={{user `admin_group`}}",
        "ADMINGROUPS={{user `admin_groups`}}",
        "ADMINPASSWD={{user `admin_passwd`}}",
        "ADMINSHELL={{user `admin_shell`}}",
        "ADMINSSHAUTHORISEDKEY={{user `admin_ssh_authorised_key`}}",
        "ADMINUID={{user `admin_uid`}}",
        "ADMINUSER={{user `admin_user`}}",
        "DPSGROUPENROLLMENTKEY={{user `dps_group_enrollment_key`}}",
        "DPSSCOPEID={{user `dps_scope_id`}}"
      ],
      "scripts": [
        "./scripts/00-clean-nvidia-jetson.sh",
        "./scripts/01-configure-system.sh",
        "./scripts/02-configure-ssh.sh",
        "./scripts/03-configure-chrony.sh",
        "./scripts/04-create-admin-user.sh",
        "./scripts/05-install-iot-edge.sh"
      ]
    }
  ],
  "post-processors": []
}
```
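Before kicking off a build, it doesn't hurt to let Packer validate the template first. Note that Packer's legacy JSON templates don't support comments, so if you copy the annotated listing above you'll need to strip the `//` comments.

```bash
# Check the template syntax and variable interpolation before a full build
packer validate jetson-azure-iot-edge.json
```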

## Building

Finally, building our image is as simple as running the command below, which sets the variables defined in the variables section, more specifically `dps_scope_id` and `dps_group_enrollment_key`.

We can override the other variables in the same way by passing them on the CLI; otherwise, Packer will use their default values.

```bash
sudo packer build \
  -var 'dps_scope_id=' \
  -var 'dps_group_enrollment_key=' \
  jetson-azure-iot-edge.json
```

This should take around 12 minutes (mainly due to the removal of packages and upgrading of the system), but once it finishes you will have a `jetson-nano.img` image that can be flashed to an SD card!
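As an illustration of that last step, the image can be flashed with a tool like balenaEtcher or simply with `dd`. The `/dev/sdX` device name below is a placeholder; double-check which device is your SD card before writing, as this is destructive.

```bash
# Write the generated image to the SD card (replace /dev/sdX with your card!)
sudo dd if=jetson-nano.img of=/dev/sdX bs=4M status=progress conv=fsync
```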

## Conclusion

After following this article, you should be left with an image that can be installed on an SD card and which, when booted, completely configures a brand-new device and registers it with the Azure IoT Hub Device Provisioning Service (DPS). All without us having to log in on the device, configure the network, install IoT Edge, configure DPS, configure the hostname, …

If we had to do the above on 1000s of devices, this would save us ~10 min per device (roughly the build time), thus saving ~10,000 minutes!

## Troubleshooting

When working with complex software such as the above (more specifically QEMU virtualization), errors are bound to happen. In my case, I sometimes received the following error:

* `==> arm: Failed to find binfmt_misc for /usr/bin/qemu-aarch64-static under /proc/sys/fs/binfmt_misc`
  * To resolve this, I had to reinstall the qemu-user-static environment with `sudo apt install --reinstall qemu-user-static`; more information can be found in the WSL Interop Issue. A quick way to check the binfmt_misc registrations follows below.
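To quickly check whether the aarch64 handler is actually registered with the kernel (assuming the standard binfmt_misc mount point), you can inspect `/proc` directly:

```bash
# List the registered binfmt_misc handlers; a qemu-aarch64 entry should be present
ls /proc/sys/fs/binfmt_misc/
cat /proc/sys/fs/binfmt_misc/qemu-aarch64
```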

## References

I always love to include some of the references I checked that were invaluable while writing this. Below you can find the ones I used for this blog post.