
Efficient Asynchronous Graph-Based Techniques for Neuromorphic Vision Sensing

VISONÀ, MATTEO
2021/2022

Abstract

Neuromorphic vision sensors (also referred to as event cameras) are novel bio-inspired vision sensors that work radically differently from traditional cameras. Instead of capturing frames at a fixed rate, they measure per-pixel brightness changes asynchronously. Each time the brightness change at a pixel exceeds a threshold, that pixel triggers an event, represented by the pixel location, timestamp, and polarity of the change. The output of an event camera is therefore a continuous stream of events. Event cameras bring several advantages over traditional cameras: low latency, high temporal resolution, high dynamic range, low motion blur, and an extremely condensed representation of the signal, which leads to low power consumption. However, due to their asynchronous nature, events cannot be processed directly by existing high-performing methods designed for images. The best-performing approaches for event data convert events into image-like representations, which are then processed by standard Convolutional Neural Networks (CNNs). However, these approaches discard both the sparsity and the high temporal resolution of events, leading to high latency and poor efficiency. To tackle this problem, recent methods have adopted Graph Convolutional Neural Networks (GCNs), which process events as spatio-temporal graphs that are inherently sparse. These approaches preserve the favorable intrinsic properties of events but, up to now, have not matched the performance of dense methods. We believe that graph-based approaches can be vastly improved. In this work we extend current approaches, obtaining improvements in both performance and efficiency. We evaluate our approach on object classification and object detection tasks, achieving state-of-the-art efficiency, state-of-the-art detection performance, and classification performance close to that of the best-performing dense methods.
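The two representations contrasted in the abstract can be illustrated with a minimal sketch (not taken from the thesis; all function names and parameters here are illustrative assumptions): each event is an (x, y, t, polarity) tuple, a dense method accumulates events into an image-like histogram that a CNN can consume, and a graph method instead connects events that are close in space-time, preserving sparsity.

```python
# Illustrative sketch only: hypothetical helpers, not the thesis' actual pipeline.
import numpy as np

def events_to_histogram(events, height, width):
    """Dense representation: accumulate events into a 2-channel image
    (one channel per polarity), discarding fine temporal structure."""
    hist = np.zeros((2, height, width), dtype=np.int32)
    for x, y, t, p in events:
        hist[p, y, x] += 1
    return hist

def events_to_graph(events, radius=2.0, time_scale=1e-3):
    """Sparse spatio-temporal graph: each event is a node; connect two
    events if their (x, y, scaled-t) coordinates lie within `radius`."""
    coords = np.array([(x, y, t * time_scale) for x, y, t, _ in events])
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if np.linalg.norm(coords[i] - coords[j]) <= radius:
                edges.append((i, j))
    return edges

# Three synthetic events: (x, y, t in microseconds, polarity).
events = [(3, 4, 1000, 1), (4, 4, 1200, 0), (10, 2, 5000, 1)]
hist = events_to_histogram(events, height=8, width=16)
edges = events_to_graph(events)
```

Note how the histogram flattens the timestamps away entirely, while the graph keeps only the events themselves plus their neighborhood structure; this is the sparsity/latency trade-off the abstract describes.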
neuromorphic sensing
computer vision
GCNN
event camera
processing
Files in this item:

Visona_Matteo.pdf — Adobe PDF, 10.08 MB (restricted access)

Use this identifier to cite or link to this item: https://hdl.handle.net/20.500.12608/39272