Automation technologies developed rapidly thanks to electrification at the beginning of the twentieth century and the invention of the semiconductor in its second half. However, automation has typically been limited to tightly controlled environments such as factories, where every scenario can be anticipated and incorporated into the design of the associated systems. The real world is far less predictable, so acceptance of autonomous systems has been relatively low due to safety concerns. Yet the benefits automation promises are too great to ignore: it could, for example, give people with paraplegia back their freedom of movement.
Machine learning (ML) algorithms can have a decisive influence on the development of autonomous systems. For developers of embedded systems, it is fascinating to combine these efficient algorithms, modeled on the human brain, with inexpensive yet powerful microcontrollers and sensors. This technological combination gave rise to what is known as edge computing, which promises billions of affordable embedded electronic systems that interact with the physical world almost instantaneously by processing data locally rather than over an Internet connection. As a result, edge computing can provide full ML-enabled capabilities even in the most remote locations with no connectivity. Overall, edge computing represents a revolution in automation, in both scale and capability.
With this revolution, embedded systems developers are challenged to redesign a wide range of consumer and industrial products, leveraging ML technologies to make them safer, easier to use, or more efficient. Companies like Microchip Technology offer affordable yet high-performance development boards that let developers explore ML-centric technologies and quickly incorporate them into product prototypes. We'll explore how rapid prototyping can be accomplished using Microchip Technology's MPLAB X Integrated Development Environment (IDE) and its family of 32-bit microcontrollers and microprocessors.
Machine Learning: Brain Research Meets Semiconductors
In humans, all experiences of the physical world are processed by the roughly 100 billion neurons that make up the brain. Its ability to learn and adapt, combined with its exceptional energy efficiency, makes the biological brain a true feat of natural engineering. We are still decades away from artificially reproducing the brain's complete functionality (true artificial general intelligence, or AGI). However, thanks to new machine learning technologies, we can already reproduce certain sub-functions of the brain. For example, machine vision algorithms can give electronic devices the ability to identify and classify objects in a camera's field of view.
Why is that important? The widespread adoption of automation means that people and technology will interact more frequently and in increasingly risky ways. To minimize these risks, machines must be able to better perceive and understand their surroundings. Machine vision is one such mechanism, giving devices the ability to perceive and understand physical three-dimensional space. From a practical point of view, detecting the presence of a person in a physical space is a capability with far-reaching implications for numerous use cases in areas such as safety, security, elder care, and childcare.
Powerful ML algorithms also require powerful hardware. Microchip offers a wide range of 32-bit microprocessors and microcontrollers that meet almost any performance and cost requirement of developers building AI-enabled product lines. Microchip makes it easy to develop and test these solutions with ML evaluation kits such as the EV18H79A or the EV45Y33A. The VectorBlox Accelerator Software Development Kit (SDK) enables the design of low-power, small-form-factor AI/ML applications on Microchip's PolarFire® Field Programmable Gate Arrays (FPGAs). FPGAs are well suited for edge AI applications, including inference in environments with limited computing power.
This is because FPGAs can deliver more giga-operations per second (GOPS) with greater power efficiency than central processing units (CPUs) or graphics processing units (GPUs). Developers can implement their algorithms on PolarFire FPGAs to meet the growing demand for energy-efficient inference in edge applications. Moreover, the VectorBlox Accelerator SDK requires no prior knowledge of FPGA design: with it, developers can program energy-efficient neural networks in C/C++.
Integrating image processing algorithms into microcontroller hardware demands additional knowledge and skills from embedded systems developers. To support this, Microchip has partnered with several AI-focused startups to integrate their AI training solutions directly into the MPLAB X IDE. The first such solution is the NanoEdge AI Suite from Cartesiam. The NanoEdge AI Library is a tool for searching C-based AI libraries and integrating them into your own embedded firmware project. With AI Studio, an embedded developer can abstract away the details of signal processing and ML model training. The result is a static library that can be called from the main .c file and run on any of Microchip's Arm Cortex-based microcontrollers.
Edge Impulse is a comprehensive TinyML training and deployment pipeline that covers data set collection, digital signal processing (DSP), ML algorithm training, testing, and generation of highly efficient inference code for various sensor, audio, and vision applications. Thanks to an MPLAB X IDE plug-in, training data from almost all of Microchip's 32-bit Arm microcontrollers can be transferred quickly to Edge Impulse. In addition, Microchip has partnered with Motion Gestures to provide a unique mechanism for adding gesture recognition capabilities to embedded systems.
Motion Gestures' tools give developers pattern recognition capabilities for gestures based on motion, touch, and vision. Developers can use Motion Gestures' preconfigured gesture library or train their own gestures with a smartphone app. A plug-in for the MPLAB X IDE lets developers combine the Motion Gestures software library with libraries for various Microchip sensors (e.g., capacitive touch sensors or inertial measurement units, IMUs).
MPLAB X IDE is a powerful and highly extensible development suite for many of Microchip's microcontrollers and digital signal processors. It is available for Windows, macOS, and Linux, and it offers several features of great interest to embedded developers, including data visualization, an I/O pin viewer, and even a web-based version that lets developers access their source code from any computer.
A basic project can give you the confidence and skills to develop more sophisticated machine vision projects of your own using Microchip Technology's 32-bit microprocessors and microcontrollers. As mentioned earlier, machine vision can be helpful in a wide variety of safety and security applications. Instead of lighting an LED, a general-purpose input/output (GPIO) pin could trigger a relay that cuts power to heavy machinery when a person enters an area they shouldn't be in. Or a security device could trigger an alarm if someone is detected after hours.
Of course, developers are not limited to identifying people. ML algorithms can be trained to identify and classify any number of object types. There are also use cases where visual recognition is not required and other demands must be met instead: ML audio identification algorithms could be substituted to trigger outputs based on sounds rather than images. Regardless of the type of input signal, the hardware and software tools from Microchip and its AI startup partners provide a quick and easy workflow for bringing ML capabilities to the edge.