MicroFlow-ODT: On-Device Training of Quantized Neural Networks in Rust
ARTICO, GIOVANNI
2025/2026
Abstract
In the last decade, neural networks have been adopted for a variety of tasks that are not easily addressed by conventional algorithms, especially in the case of intelligent autonomous systems. This has increased the demand for frameworks that allow neural networks to be deployed on resource-constrained devices, both for inference and for training. Among these devices, microcontrollers are widespread in the Internet of Things because of their low power consumption. Neural networks can empower them to yield more meaningful information without transmitting the raw data, but microcontrollers are severely limited in memory and computational power. For this reason, quantized neural networks are employed, which trade some accuracy for a significantly reduced memory and runtime footprint. However, models trained on benchmark datasets often encounter a distribution shift when deployed in real environments (e.g., due to lighting changes, sensor noise, or different terrain types), making fine-tuning on local data essential to maintain accuracy. While various systems have been proposed for neural network inference on microcontrollers, few exist for fine-tuning quantized neural networks. This is due to several challenges, chief among them the high memory requirements and the difficulty of training with low-precision arithmetic. In this thesis, an extension of the existing MicroFlow (MF) inference engine, written in Rust, is proposed to implement On-Device Training (ODT), called MicroFlow-ODT (MF-ODT). Following MicroFlow's methodology, the system determines all memory requirements at compile time, ensuring stability for low-resource applications. In particular, the proposed algorithm computes the gradient directly on the quantized weights and uses integer-valued gradients. This approach significantly reduces the runtime footprint by minimizing the need for floating-point operations.
However, these changes necessitate modifications to the standard backpropagation algorithm to accommodate the quantized architecture. The system is evaluated on two fronts: standard image-classification datasets executed on a desktop machine, and a terrain-classification task executed directly on a mobile robot. The results demonstrate that MicroFlow-ODT can be deployed on microcontrollers with less than a few hundred kilobytes of memory, while achieving accuracy improvements of more than 20% on a real-world terrain-classification task with an ESP32 and 520 KB of RAM.
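The compile-time memory guarantee mentioned above can be illustrated with Rust const generics: when every buffer is a fixed-size array whose dimensions are type parameters, the total memory footprint is fixed at compile time and no heap allocation is needed. The following is a minimal sketch under that assumption; the `Dense` type and its layout are hypothetical and are not MicroFlow-ODT's actual API.

```rust
// Illustrative sketch (hypothetical type, not MicroFlow-ODT's API):
// a fully connected layer whose sizes are const generic parameters,
// so all buffers are statically sized and live on the stack.
struct Dense<const IN: usize, const OUT: usize> {
    weights: [[i8; IN]; OUT], // quantized (int8) weights
    biases: [i32; OUT],       // biases kept in a wider accumulator type
}

impl<const IN: usize, const OUT: usize> Dense<IN, OUT> {
    // Integer-only forward pass: int8 multiply with i32 accumulation,
    // the usual pattern for quantized inference on microcontrollers.
    fn forward(&self, input: &[i8; IN]) -> [i32; OUT] {
        let mut out = [0i32; OUT];
        for (o, row) in out.iter_mut().zip(self.weights.iter()) {
            *o = row
                .iter()
                .zip(input.iter())
                .map(|(&w, &x)| (w as i32) * (x as i32))
                .sum::<i32>();
        }
        for (o, &b) in out.iter_mut().zip(self.biases.iter()) {
            *o += b;
        }
        out
    }
}
```

Because `IN` and `OUT` are known at compile time, the compiler can reject configurations that would not fit in RAM before the firmware ever runs, which is the stability property the abstract refers to.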
| File | Size | Format |
|---|---|---|
| Artico_Giovanni.pdf (embargo until 09/04/2027) | 3.76 MB | Adobe PDF |
https://hdl.handle.net/20.500.12608/106480