Leveraging Graph of Thoughts and Large Language Models for Advanced Vulnerability Detection

MIAZZO, NICHOLAS
2023/2024

Abstract

Vulnerability detection aims to automate the analysis of software systems to discover security flaws and defects, known as vulnerabilities. In recent years, many studies have explored the use of Large Language Models (LLMs) for this task, leveraging the knowledge and reasoning skills they acquire through training on large text and source-code datasets. Despite the potential highlighted by these works, LLMs often struggle to correctly explain the root causes of vulnerabilities, raising questions about their effectiveness. This project aims to improve LLMs' classification and explainability capabilities by adopting a novel reasoning methodology from the literature known as the Graph of Thoughts. Although this methodology has shown promising results on logical and mathematical tasks, it had never been applied to vulnerability detection. Testing and evaluating the resulting vulnerability detection technique demonstrated its potential to improve LLMs' classification and reasoning capabilities.
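The Graph-of-Thoughts methodology the abstract refers to composes intermediate LLM outputs ("thoughts") as graph nodes that can be branched, scored, and merged, rather than following a single linear chain. The sketch below illustrates that pattern for vulnerability classification; it is not the thesis's implementation. The `query_llm` function, the scoring heuristic, and all names are hypothetical placeholders standing in for real LLM calls and prompts.

```python
# Minimal sketch of a Graph-of-Thoughts (GoT) style pipeline for
# vulnerability analysis. `query_llm` is a hypothetical stand-in for a
# real LLM API call; the generate -> score -> aggregate steps follow
# the GoT pattern of composing intermediate "thoughts" in a graph.
from dataclasses import dataclass, field

@dataclass
class Thought:
    content: str                          # intermediate analysis text
    score: float = 0.0                    # quality estimate used for pruning
    parents: list = field(default_factory=list)  # graph edges to merged thoughts

def query_llm(prompt: str) -> str:
    # Placeholder: a real system would call an LLM here.
    return f"analysis of: {prompt[:40]}"

def generate(code: str, n: int = 3) -> list[Thought]:
    # Branch: request n independent analyses of the same snippet.
    return [Thought(query_llm(f"Analyse ({i}): {code}")) for i in range(n)]

def score(t: Thought) -> Thought:
    # Placeholder scoring; a real system might ask the LLM to self-rate.
    t.score = len(t.content) / 100.0
    return t

def aggregate(thoughts: list[Thought]) -> Thought:
    # Merge the best-scoring branches into one consolidated verdict,
    # keeping edges to the parent thoughts for explainability.
    best = sorted(thoughts, key=lambda t: t.score, reverse=True)[:2]
    merged = query_llm("Combine: " + " | ".join(t.content for t in best))
    return Thought(merged, parents=best)

def detect(code: str) -> Thought:
    return aggregate([score(t) for t in generate(code)])

verdict = detect("strcpy(buf, user_input);")
print(verdict.content)
```

Keeping `parents` edges is what distinguishes this from simple self-consistency sampling: the final verdict can be traced back through the graph of intermediate analyses, which supports the explainability goal stated above.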
Keywords: Vulnerability Detection, LLM, AI
Files in this record: Miazzo_Nicholas.pdf (Adobe PDF, 896 kB, restricted access)
The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/70914