Making machine learning work within the resource-constrained environment of an embedded device can become a quagmire. This article works through the design issues raised by a hypothetical device with significant ML components.
ednasia.com, Oct. 15, 2019 –
Machine learning (ML) is hard; making it work within the resource-constrained environment of an embedded device can easily become a quagmire. In light of this harsh reality, anyone attempting to implement ML in an embedded system must consider, and frequently revisit, the design aspects crucially affected by the ML requirements. A bit of upfront planning makes the difference between project success and failure.
For this article, our focus is on building commercial-grade applications with significant, or even dominant, ML components. We'll use a hypothetical scenario in which you have a device, or better yet an idea for one, that will perform complex analytics, usually in something close to real time, and deliver results in the form of network traffic, user data displays, machine control, or all three. The earlier you are in the design process, the better positioned you'll be to adjust your hardware and software stack to match the ML requirements; the available tools, especially at the edge, are neither mature nor general-purpose. Also keep in mind that the more flexible you are, the better your odds of building a viable product.
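To make that scenario concrete, here is a minimal sketch of the kind of processing loop such a device might run. Every name in it (read_sensor, run_inference, and the three output paths) is a hypothetical placeholder rather than a call into any particular library; the sketch only illustrates the overall shape: sample, run inference on-device, and deliver results over the network, to a display, and to machine control.

```c
#include <stdio.h>

/* Hypothetical placeholders for the device's sensor, model, and output
 * paths; none of these names come from a real library or the article. */
static float read_sensor(void)          { return 0.72f; }  /* e.g., an ADC sample */
static float run_inference(float x)     { return (x > 0.5f) ? 1.0f : 0.0f; }
static void  send_over_network(float y) { printf("net: %.1f\n", y); }
static void  update_display(float y)    { printf("ui:  %.1f\n", y); }
static void  drive_actuator(float y)    { printf("ctl: %.1f\n", y); }

int main(void)
{
    /* Near-real-time loop: sample, infer on-device, then deliver results
     * as network traffic, a user data display, and machine control. */
    for (int tick = 0; tick < 3; ++tick) {  /* would be while (1) on real hardware */
        float sample = read_sensor();
        float result = run_inference(sample);
        send_over_network(result);
        update_display(result);
        drive_actuator(result);
    }
    return 0;
}
```

On real hardware, run_inference would typically wrap a quantized model running under a framework such as TensorFlow Lite for Microcontrollers, and each output path would impose its own latency, power, and memory constraints on the design.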
Let's start by describing a hypothetical device and then work through some of the ML-related issues that will affect its design.