Simplifying AI Edge deployment with Azure Percept

This post has been republished via RSS; it originally appeared at: New blog articles in Microsoft Tech Community.

As a technology strategist for West Europe, I have had multiple discussions with customers on IoT projects and enabling AI on the edge. A big portion of those conversations revolved around how to utilize Microsoft's Azure AI models and how to deploy them on the edge, so I was excited when Azure Percept was announced with the capability to bring Azure cognitive capabilities to the edge in a simplified way. I got my hands on an Azure Percept Devkit to play around with and discover its potential, and it did not disappoint. In this blog post I want to give a quick tour of the device, its components, and how you can start deploying use cases in a quick and simple manner.

 


 

What is Azure Percept?

Azure Percept is a software/hardware stack that allows you to quickly deploy AI modules on the edge utilizing different sensor feeds such as video and audio. The applications range from low-code/no-code scenarios using Azure Cognitive Services to more advanced use cases where you can write and deploy your own algorithms.

As it stands the stack consists of a software component and a hardware component.

  • Azure Percept Studio is the launch point for creating edge AI models and solutions. It easily integrates your edge AI-capable hardware with Azure services such as IoT Hub, Cognitive Services, and more.
  • Azure Percept Devkit is an edge AI development kit that enables you to develop audio and vision AI solutions with Azure Percept Studio.

 

Components

Azure Percept Carrier Board:

  • NXP i.MX 8M processor
  • Trusted Platform Module (TPM) version 2.0
  • Wi-Fi and Bluetooth connectivity


 

You can check more details in the Azure Percept DK datasheet.

 

Azure Percept Vision:

  • Intel Movidius Myriad X (MA2085) vision processing unit (VPU)
  • RGB camera sensor


You can check more details in the Azure Percept Vision datasheet.

 

Azure Percept Audio:


You can check more details in the Azure Percept Audio datasheet.

 

Connect to Azure Percept Studio:

1 – Open Azure Percept Studio


2 – Head to Devices to view the devices that are currently connected


3 – Choose your device and click on the Vision tab to see the different options for the vision module


4 – Click on View your device stream and it will start streaming directly from the camera in real time (the device must be on the same network as the PC you are connecting from)


5 – The stream provided automatic object detection out of the box and recognized the object as my reading chair
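Behind the stream overlay, the vision module reports each detection with a label and a confidence score. The exact telemetry schema can vary by module version, so the snippet below is only a minimal sketch assuming a hypothetical message shape (`label`, `confidence`, `bbox`); it shows the typical first step of filtering out low-confidence detections before acting on them.

```python
import json

def filter_detections(message_body: str, min_confidence: float = 0.6):
    """Keep only detections at or above the confidence threshold.

    The message schema here (label/confidence/bbox) is an assumption
    for illustration; check your module's actual telemetry format.
    """
    detections = json.loads(message_body).get("detections", [])
    return [d for d in detections if d.get("confidence", 0.0) >= min_confidence]

# Example telemetry payload (hypothetical schema)
sample = json.dumps({
    "detections": [
        {"label": "chair", "confidence": 0.91, "bbox": [0.1, 0.2, 0.4, 0.6]},
        {"label": "book",  "confidence": 0.32, "bbox": [0.5, 0.5, 0.7, 0.8]},
    ]
})

print(filter_detections(sample))  # only the chair survives the 0.6 cutoff
```

Thresholding like this is usually tuned per use case: a higher cutoff trades missed objects for fewer false alarms.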

 

Deploying a Custom Vision Model:

Next, I tried the default object detection model on several other objects and found that it didn't recognize my watch, so I started building a no-code custom vision model to identify watches.


1 - Head to Overview and create a vision Prototype


2 – Fill in the details and create your prototype


3 – In the image capture tab I was able to take photos of my watch. You can take the photos manually or use automatic image capture. I ended up taking 15 photos
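Fifteen photos is no accident: Custom Vision requires at least 15 images per tag before it will train. If you are scripting your capture pipeline, a quick check like the sketch below (the `ready_to_train` helper and its input shape are my own illustration, not part of any SDK) can tell you which tags still need more images:

```python
from collections import Counter

MIN_IMAGES_PER_TAG = 15  # Custom Vision's minimum per tag before training

def ready_to_train(tagged_images):
    """tagged_images: list of (filename, tag) pairs.
    Returns a dict of tags that are still short, mapped to how many
    more images each needs; an empty dict means you can train."""
    counts = Counter(tag for _, tag in tagged_images)
    return {tag: MIN_IMAGES_PER_TAG - n
            for tag, n in counts.items() if n < MIN_IMAGES_PER_TAG}

photos = [(f"watch_{i}.jpg", "watch") for i in range(15)]
print(ready_to_train(photos))  # {} -> every tag has enough images
```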

 


4 – In the next tab, click the Open Project in Custom Vision link and you will be directed to your project gallery


5 – Click on the Untagged option to find all your saved photos


6 – Click on the photos and start tagging them


7 – Click Train to start training your Custom Vision Model


8 – Once training is complete you can review the results. If you are satisfied, return to the Azure portal
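The results page reports metrics such as precision and recall for the trained iteration. As a refresher, both derive from simple counts of true positives, false positives, and false negatives; here is a small standalone sketch of that arithmetic (the counts are made up for illustration):

```python
def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP / (TP + FP): of everything detected, how much was right.
    Recall = TP / (TP + FN): of everything present, how much was found."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Say 12 of 15 watch instances were detected (TP=12, FN=3)
# and 2 detections were not actually watches (FP=2):
p, r = precision_recall(tp=12, fp=2, fn=3)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.86 recall=0.80
```

Low recall usually means the model needs more (or more varied) training images; low precision means it is over-triggering and the tags may need cleaning up.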


9 – Go to the final tab within the custom vision setup prototype and deploy your model


10 – Azure Percept was able to identify my watch, as well as a different watch that was not in the training data set.

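A detail worth knowing when you move beyond the portal: object-detection models trained in Custom Vision return each bounding box in normalized coordinates (left, top, width, and height as fractions of the frame, in the 0-1 range). To draw the box on an actual frame you scale by the frame dimensions; a minimal sketch:

```python
def to_pixels(box: dict, frame_w: int, frame_h: int):
    """Convert a Custom Vision-style normalized bounding box
    (left/top/width/height in the 0-1 range) to integer pixel
    coordinates (x, y, w, h) for a frame of the given size."""
    return (
        int(box["left"] * frame_w),
        int(box["top"] * frame_h),
        int(box["width"] * frame_w),
        int(box["height"] * frame_h),
    )

# A detection covering part of a 1920x1080 frame:
print(to_pixels({"left": 0.25, "top": 0.4, "width": 0.2, "height": 0.3}, 1920, 1080))
# → (480, 432, 384, 324)
```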

 

Impressions and next steps:

All in all, Azure Percept has been very easy to operate and to connect to Azure. Deploying the custom vision model was very streamlined, and I am looking forward to discovering more and diving deeper into the world of Azure Percept. We are also already in discussion with multiple customers on different use cases in the Critical Infrastructure domain, where edge AI plays a huge role in combination with other components such as Azure Digital Twins and Azure Maps, which I am excited to explore over the coming period.

 

Learn more about Azure Percept:

