
Logical Fallacies Detection using Large Language Models

PAPADOPULOS, ELENI
2023/2024

Abstract

A logical fallacy is an argument that appears rational and convincing but is fundamentally flawed: a mistake in the logical structure or premises of an argument makes its conclusion faulty and unreliable. Fallacies can arise unintentionally in conversation or can be used deliberately to manipulate opinions, mislead audiences, or advance specific agendas. Recognizing logical fallacies is therefore essential: understanding them means developing the capacity to construct stronger arguments and acquiring analytical tools to evaluate information critically. This thesis aims to provide tools for automatically recognizing and categorizing common logical fallacies in written arguments across various domains. The work employs five distinct datasets to conduct an extensive analysis, focusing on contemporary debates of public interest, including misinformation about COVID-19 vaccines, conspiracy theories about climate change, and political propaganda. Given the subjective nature of fallacy annotations, a unified taxonomy is provided, defining 22 unique fallacies and a non-fallacious class to account for arguments that contain no illogical reasoning. This study presents two distinct methods using Large Language Models that exploit the standard logical form of fallacies, complemented by a multi-dataset training approach. This comprehensive strategy seeks to provide a unified framework for fallacy detection that generalizes across arguments of different genres. The first method frames fallacy classification as a Natural Language Inference task, employing Electra-StructAware, a structure-aware model based on Electra. The second approach consists of instruction-tuning T5 and BART, exploring different prompting strategies. Together, these methods offer complementary approaches to logical fallacy detection, each with its own advantages and techniques.
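The NLI framing described in the abstract can be illustrated generically: the argument serves as the premise, and each candidate fallacy definition becomes a hypothesis to test for entailment. The sketch below is an assumption-laden illustration, not the thesis's Electra-StructAware implementation — the `entailment_score` callable is a placeholder for any NLI model, and the example hypotheses are hypothetical phrasings of a small subset of the 22-fallacy taxonomy.

```python
# Sketch of fallacy classification framed as Natural Language Inference (NLI):
# the argument is the NLI premise, each candidate fallacy definition is a
# hypothesis, and the predicted class is the most-entailed hypothesis.
# NOTE: `entailment_score` is a placeholder for a real NLI model (e.g. an
# Electra-based classifier); this is a generic illustration only.

from typing import Callable, Dict

# Hypothetical subset of the taxonomy, phrased as NLI hypotheses.
FALLACY_HYPOTHESES: Dict[str, str] = {
    "ad hominem": "This argument attacks the person instead of the claim.",
    "false dilemma": "This argument presents only two options when more exist.",
    "no fallacy": "This argument contains no illogical reasoning.",
}

def classify_fallacy(argument: str,
                     entailment_score: Callable[[str, str], float]) -> str:
    """Return the label whose hypothesis is most entailed by the argument."""
    return max(FALLACY_HYPOTHESES,
               key=lambda label: entailment_score(argument,
                                                  FALLACY_HYPOTHESES[label]))
```

With a real NLI model, `entailment_score` would return the probability of the "entailment" class for the (premise, hypothesis) pair; the reduction of multi-class fallacy labeling to pairwise entailment is what lets a single NLI model cover arbitrary label sets.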
Keywords: logical fallacies, large language models, NLP
Files in this item:
Eleni_PapadopulosPDFA.pdf — Adobe PDF, 4.53 MB (restricted access)

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/71031