Visible Light Communication Attacks on Autonomous Vehicle vSLAM Systems

Alagappan, Vinu Varshith
2023/2024

Abstract

Autonomous vehicles (AVs) depend heavily on Visual Simultaneous Localization and Mapping (vSLAM) systems for real-time navigation, obstacle detection, and decision-making. Because vSLAM relies on camera input to interpret road features such as traffic lights, vehicle brake lights, and turn signals, it is vulnerable to adversarial light-based attacks wherever these visual cues can be manipulated. This project investigates how Visible Light Communication (VLC) techniques can be exploited to subtly modulate external light sources, such as the headlights and brake lights of nearby vehicles, in order to deceive an AV's vSLAM system. Such manipulations could induce false perceptions of braking, lane changes, or traffic congestion, triggering unsafe AV responses such as abrupt braking, improper lane changes, or incorrect navigation adjustments. The study presents a threat model detailing attacker objectives and capabilities, followed by a simulation-based experimental framework for evaluating the effects of light-based disruptions on vSLAM systems. The findings underscore the need for stronger security in AV perception systems to guard against adversarial light-based manipulation.
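
As a rough, self-contained illustration of the attack surface (not code from the thesis), the Python sketch below simulates a VLC-style on-off modulation of a brake-light patch in a synthetic camera frame and measures how it perturbs ORB feature detection, the front end used by ORB-SLAM. The frame rate, modulation frequency, patch geometry, and brightness levels are all illustrative assumptions, and OpenCV's ORB detector stands in for a full vSLAM pipeline.

import cv2
import numpy as np

# Illustrative sketch: on-off keying (OOK) of a "brake light" patch,
# as a VLC transmitter would emit, observed by a 30 fps camera.
# All scene parameters here are assumptions, not from the thesis.
FPS = 30          # assumed camera frame rate (Hz)
MOD_HZ = 7.5      # assumed modulation frequency (Hz)
N_FRAMES = 16

orb = cv2.ORB_create(nfeatures=500)

# Synthetic road scene: uniform gray background with a bright light patch.
frame = np.full((480, 640, 3), 90, dtype=np.uint8)
light_roi = (slice(200, 240), slice(300, 360))  # (rows, cols) of the light

for i in range(N_FRAMES):
    img = frame.copy()
    t = i / FPS
    # 50% duty-cycle square wave: the light toggles between bright and dim.
    on = (t * MOD_HZ) % 1.0 < 0.5
    img[light_roi] = 255 if on else 120
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    kps = orb.detect(gray, None)
    # Count keypoints near the modulated patch (small margin around it):
    # these appear and vanish between frames as the light toggles,
    # which undermines frame-to-frame feature association.
    near = sum(1 for k in kps
               if 290 <= k.pt[0] < 370 and 190 <= k.pt[1] < 250)
    print(f"frame {i:02d} light={'on ' if on else 'off'} keypoints_near_light={near}")

Because ORB-SLAM associates keypoints across consecutive frames, features that flicker with the light's duty cycle appear and disappear between frames and degrade data association; this is one concrete mechanism by which the modulations described above could corrupt localization.
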
Keywords: Visual SLAM, Autonomous Vehicles, VLC, ORB-SLAM, Adversarial Attacks

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/80278