MicroFlow: A Rust TinyML Compiler for Neural Network Inference on Embedded Systems

CARNELOS, MATTEO
2022/2023

Abstract

MicroFlow is an open-source TinyML compiler, written in Rust, that enables the deployment of Neural Networks on embedded systems. It is specifically designed for efficiency and robustness, making it suitable for applications in critical environments. To achieve these objectives, MicroFlow employs a compiler-based inference engine approach, coupled with Rust's memory-safety guarantees and language features. It fills a gap left by most existing solutions, such as TensorFlow Lite for Microcontrollers, the Embedded Learning Library, and ARM-NN, which are written in C++ and do not provide the same level of portability, efficiency, and robustness. MicroFlow successfully deployed Neural Networks on highly resource-constrained devices, including bare-metal 8-bit microcontrollers with only 2 kB of RAM. Furthermore, experimental results showed that MicroFlow used 30% less Flash memory and 21% less RAM than TensorFlow Lite for Microcontrollers when deploying a MobileNet for person detection on an ESP32. MicroFlow also achieved faster inference than other state-of-the-art engines on medium-sized networks, such as a TinyConv speech-command recognizer, and comparable performance on larger models. Overall, the experimental results demonstrated the efficiency of MicroFlow and its suitability for deployment in highly critical environments where resources are limited.
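The compiler-based approach mentioned in the abstract contrasts with interpreter-based engines: instead of parsing a model file at runtime, the network's structure and weights are emitted as plain Rust code at build time, so the firmware needs no model parser and no heap allocation. The following is only an illustrative sketch of that idea, not MicroFlow's actual API; the layer shape, weights, and function names are hypothetical.

```rust
// Illustrative sketch of compiler-based inference: a code generator would
// emit the weights below as `const` data and the layer as a plain function,
// so inference uses only fixed-size stack arrays (no heap, no interpreter).
// All names and values here are hypothetical, not MicroFlow's real output.

// Parameters for a 2-input, 2-output fully connected layer.
const WEIGHTS: [[f32; 2]; 2] = [[0.5, -0.25], [1.0, 0.75]];
const BIAS: [f32; 2] = [0.1, -0.1];

/// Dense layer with ReLU activation over fixed-size arrays, the kind of
/// fully static code that suits `no_std` bare-metal targets.
fn dense_relu(input: [f32; 2]) -> [f32; 2] {
    let mut out = [0.0f32; 2];
    for i in 0..2 {
        let mut acc = BIAS[i];
        for j in 0..2 {
            acc += WEIGHTS[i][j] * input[j];
        }
        out[i] = acc.max(0.0); // ReLU
    }
    out
}
```

Because every buffer size is known at compile time, the compiler can check memory usage statically and the binary carries no runtime graph representation, which is what makes footprints like 2 kB of RAM feasible.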
AI
TinyML
Rust
Neural Networks
Embedded Systems
Files in this item:

File: Carnelos_Matteo.pdf
Open Access from 03/07/2024
Size: 1.09 MB
Format: Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/46961