Graph-Based Representation Learning for Assembly State Recognition Using Synthetic-Driven Part Detection
Mattia Toffolon
2025/2026
Abstract
Assembly processes in industrial environments are prone to errors that can reduce efficiency and increase costs. Automatic recognition of the current assembly state from visual data can support operators by providing timely feedback and reducing mistakes. An effective Assembly State Recognition system should not only achieve high accuracy on known states, but also generalize to unseen states, remain scalable as parts change, and be robust to execution errors. This thesis addresses Assembly State Recognition using the IndustReal dataset, which consists of egocentric videos of operators assembling and maintaining a toy vehicle. Initial experiments show that naive solutions, such as training a task-specific object detector, fail to meet the required generalization and scalability properties. The problem is therefore approached from a representation learning perspective, where video frames are mapped into an embedding space using contrastive learning. While prior work relies on generic visual features extracted directly from video frames as input to the embedding model, such representations lack explicit information about the object’s structural composition, which limits their discriminative properties. The main contribution of this thesis is a structure-aware representation that explicitly captures part-level assembly information. Two custom object detection models are trained to identify individual parts and the fully assembled object, enabling the filtering of unassembled parts. To ensure scalability and avoid costly manual annotation, a synthetic data generation pipeline is introduced, which produces realistic and balanced training data directly from 3D models of the assembly states. The resulting part detections are used to construct graphs that encode the assembly structure depicted in each frame. These graphs are processed by a custom Graph Neural Network–based model to produce graph embeddings, which act as a substitute for the image-based visual features. 
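The graph-building and embedding step described above can be sketched as follows. This is a minimal illustrative example, not the thesis implementation: the one-hot node features, the pixel-tolerance adjacency rule between bounding boxes, and the random projection weights are all assumptions made for the sake of a runnable sketch.

```python
import numpy as np

def boxes_touch(a, b, tol=5):
    """True if two (x1, y1, x2, y2) boxes overlap or lie within `tol` pixels."""
    return not (a[2] + tol < b[0] or b[2] + tol < a[0] or
                a[3] + tol < b[1] or b[3] + tol < a[1])

def build_graph(detections, num_classes):
    """Nodes: one-hot part class per detection; edges: spatial adjacency."""
    feats = np.zeros((len(detections), num_classes))
    for i, (cls, _) in enumerate(detections):
        feats[i, cls] = 1.0
    adj = np.zeros((len(detections), len(detections)))
    for i, (_, bi) in enumerate(detections):
        for j, (_, bj) in enumerate(detections):
            if i != j and boxes_touch(bi, bj):
                adj[i, j] = 1.0
    return feats, adj

def graph_embedding(feats, adj, weight):
    """One mean-aggregation message-passing layer, then mean pooling."""
    deg = adj.sum(1, keepdims=True) + 1.0      # +1 accounts for the self-loop
    hidden = (feats + adj @ feats) / deg       # average node and neighbour features
    hidden = np.maximum(hidden @ weight, 0.0)  # linear projection + ReLU
    return hidden.mean(axis=0)                 # permutation-invariant graph readout

# Example frame: a chassis (class 0) adjacent to a wheel (class 1).
dets = [(0, (10, 10, 100, 60)), (1, (95, 40, 130, 80))]
f, a = build_graph(dets, num_classes=4)
w = np.random.default_rng(0).normal(size=(4, 8))
emb = graph_embedding(f, a, w)
print(emb.shape)  # (8,)
```

The mean-pool readout makes the embedding invariant to detection order, which is the property that lets such a graph vector stand in for frame-level visual features.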
In addition, multi-modal approaches are investigated, exploring how graph-based and visual features can be combined to leverage the complementary strengths of both representations. Experimental results demonstrate that the proposed graph-based representation encodes meaningful and discriminative information, leading to a consistent improvement in Assembly State Recognition performance. Notably, the approach yields a significant gain in generalization capabilities, while remaining robust to errors and naturally scalable to new assembly configurations.
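One simple way to combine the two modalities for recognition is late fusion followed by nearest-reference lookup in the embedding space. The sketch below is an illustrative assumption, not the method evaluated in the thesis: the weighting scheme, the random reference embeddings, and the state labels are hypothetical.

```python
import numpy as np

def l2n(v, eps=1e-8):
    """L2-normalize a vector, guarding against zero norm."""
    return v / (np.linalg.norm(v) + eps)

def fuse(visual_emb, graph_emb, alpha=0.5):
    """Late fusion: normalize each modality, weight, and concatenate."""
    return np.concatenate([alpha * l2n(visual_emb),
                           (1 - alpha) * l2n(graph_emb)])

def recognize(query, references):
    """Assign the state whose reference embedding has highest cosine similarity."""
    sims = {label: float(l2n(query) @ l2n(ref))
            for label, ref in references.items()}
    return max(sims, key=sims.get)

# Hypothetical reference set: one fused embedding per known assembly state.
rng = np.random.default_rng(1)
refs = {f"state_{k}": fuse(rng.normal(size=16), rng.normal(size=8))
        for k in range(3)}

# A query frame whose fused embedding lies close to state_2.
query = refs["state_2"] + 0.01 * rng.normal(size=24)
print(recognize(query, refs))  # state_2
```

Because recognition reduces to similarity search over reference embeddings, adding a new assembly state only requires adding its reference vector, which is one way a contrastive embedding space stays scalable to new configurations.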
The text of this website © Università degli Studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 license.
https://hdl.handle.net/20.500.12608/106862