WeedEye: A Vision-Based Robotic System for Weed Detection in Open Farming
DIAB, KHALED
2025/2026
Abstract
This paper focuses on detecting weeds in agricultural fields using deep learning, with the detected locations communicated to a robot that eliminates them. The paper covers the project's phases: preparing a dataset collected from a grass field, annotating the classified objects in several ways, training different deep learning models under diverse parameter and dataset conditions, evaluating the training results, and finally running inference with those models on unseen data to assess their performance. The study also analyzes and identifies the optimal camera placement on the robot, ensuring maximum visibility and accurate perception of the detected objects; factors such as viewing angle and distance are considered to improve detection performance and reliability. The paper presents the tools and environment setup used in the study, along with a detailed explanation of the workflow from data collection to model evaluation. Finally, it lists the obstacles and challenges encountered during the project and offers recommendations for future work to adapt the system to different crop types and environmental conditions.

| File | Size | Format |
|---|---|---|
| Khaled_Diab_2053399.pdf (under embargo until 09/04/2027) | 34.57 MB | Adobe PDF |
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.
https://hdl.handle.net/20.500.12608/106495