Microsoft’s Vision AI Developer Kit is now generally available


In May 2018, during its annual Build developer conference in Seattle, Microsoft announced a partnership with Qualcomm to develop what it described as a developer kit for computer vision applications. That partnership bore fruit in the Vision AI Developer Kit, a hardware base built on Qualcomm’s Vision Intelligence Platform that is designed to run AI models locally and integrate with Microsoft’s Azure ML and Azure IoT Edge cloud services. The kit became available to select customers last October.

Today, Microsoft and Qualcomm announced that the Vision AI Developer Kit (manufactured by eInfochips) is now broadly available from distributor Arrow Electronics for $249. A software development kit containing Visual Studio Code with Python modules, prebuilt Azure IoT deployment configurations, and a Vision AI Developer Kit extension for Visual Studio is on GitHub, along with a default module that recognizes 183 different objects.

Microsoft principal program manager Anne Yang notes that the Vision AI Developer Kit can be used to create apps that ensure every person on a construction site is wearing a hardhat, for instance, or to detect whether items are out of stock on a store shelf. “AI workloads include megabytes of data and potentially billions of calculations,” she wrote in a blog post. “In hardware, it is now possible to run time-sensitive AI workloads on the edge while also sending outputs to the cloud for downstream applications.”
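
To make that edge-to-cloud flow concrete, here is a minimal sketch of what an IoT Edge module written in Python might look like when forwarding local inference results upstream with the azure-iot-device SDK. The detection payload, the placeholder inference function, and the "output1" route name are illustrative assumptions, not the kit's shipped code:

```python
# Minimal sketch of an Azure IoT Edge module that forwards local
# inference results to the cloud. The payload shape and the "output1"
# route name are illustrative assumptions; the actual route to IoT Hub
# is configured in the Edge deployment manifest.
import json
import time

from azure.iot.device import IoTHubModuleClient, Message


def run_local_inference():
    # Placeholder for on-device vision inference (e.g., via SNPE).
    # Returns a hypothetical detection result.
    return {"label": "hardhat", "confidence": 0.94, "timestamp": time.time()}


def main():
    # Reads connection details from the IoT Edge runtime environment.
    client = IoTHubModuleClient.create_from_edge_environment()
    client.connect()
    try:
        detection = run_local_inference()
        msg = Message(json.dumps(detection))
        msg.content_type = "application/json"
        msg.content_encoding = "utf-8"
        # "output1" must match a route defined in the deployment manifest.
        client.send_message_to_output(msg, "output1")
    finally:
        client.disconnect()


if __name__ == "__main__":
    main()
```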

Vision AI Developer Kit

Programmers tinkering with the Vision AI Developer Kit can tap Azure ML for AI model creation and monitoring, and Azure IoT Edge for model management and deployment. They can build a vision model by uploading tagged pictures to Azure Blob Storage and letting the Azure Custom Vision Service do the rest, or by using Jupyter notebooks and Visual Studio Code to design and train custom vision models with Azure Machine Learning (AML), converting the trained models to DLC format, and packaging them into an IoT Edge module.
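
As a sketch of the first path, the snippet below uploads locally tagged training images to Azure Blob Storage with the azure-storage-blob SDK. The container name and the folder-per-tag layout are assumptions for illustration; Custom Vision would then be pointed at the uploaded data:

```python
# Minimal sketch: push tagged training images to Azure Blob Storage.
# The container name and the folder-per-tag layout ("hardhat/",
# "no_hardhat/") are illustrative assumptions. Assumes the container
# already exists.
import os

from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = os.environ["AZURE_STORAGE_CONNECTION_STRING"]


def upload_tagged_images(root_dir: str, container_name: str = "training-images"):
    service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
    container = service.get_container_client(container_name)
    for tag in os.listdir(root_dir):  # one subfolder per tag
        tag_dir = os.path.join(root_dir, tag)
        if not os.path.isdir(tag_dir):
            continue
        for filename in os.listdir(tag_dir):
            blob_name = f"{tag}/{filename}"  # tag encoded in the blob path
            with open(os.path.join(tag_dir, filename), "rb") as data:
                container.upload_blob(name=blob_name, data=data, overwrite=True)


if __name__ == "__main__":
    upload_tagged_images("./dataset")
```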

Concretely, the Vision AI Developer Kit, which runs Yocto Linux, has a Qualcomm QCS603 system-on-chip at its core, paired with 4GB of LPDDR4X memory and 64GB of onboard storage. An 8-megapixel camera sensor capable of recording in 4K UHD handles footage capture, while a four-microphone array captures sounds and commands. The kit connects via Wi-Fi (802.11b/g/n, 2.4GHz/5GHz), and it also has an HDMI out port, audio in and out ports, and a USB-C port for data transfer, in addition to a microSD card slot for additional storage.

The Snapdragon Neural Processing Engine (SNPE) within Qualcomm’s Vision Intelligence 300 Platform powers the on-device execution of the aforementioned containerized Azure services, making the Vision AI Developer Kit the first “fully accelerated” platform supported end-to-end by Azure, according to Yang. “Using the Vision AI Developer Kit, you can deploy vision models at the intelligent edge in minutes, regardless of your current machine learning skill level,” she said.
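
For a rough sense of how a converted model gets exercised on the device, the sketch below wraps SNPE's snpe-net-run command-line tool, which the SNPE SDK ships for executing a .dlc model against a list of raw input tensors. The file names, the flag choices, and the assumption that the tool is on the device's PATH are all illustrative:

```python
# Minimal sketch: run a converted .dlc model through SNPE's
# snpe-net-run command-line tool (shipped with the SNPE SDK).
# File names and the assumption that snpe-net-run is on PATH
# are illustrative.
import subprocess


def run_dlc_model(dlc_path: str, input_list: str, output_dir: str = "output"):
    # input_list is a text file listing raw input tensor files,
    # one per line, in the format snpe-net-run expects.
    cmd = [
        "snpe-net-run",
        "--container", dlc_path,
        "--input_list", input_list,
        "--output_dir", output_dir,
        "--use_dsp",  # offload execution to the Hexagon DSP
    ]
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    run_dlc_model("model.dlc", "image_list.txt")
```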

The Vision AI Developer Kit has a rival in Amazon’s AWS DeepLens, which lets developers run deep learning models locally on a bespoke camera to analyze and take action on what it sees. For its part, Google recently made available the Coral Dev Board, a hardware kit for accelerated AI edge computing that ships alongside a USB camera accessory.


