Comparative Analysis of Prompt Optimization and Fine-Tuning Techniques for Multiclass Text Classification

Saberi, Alireza
Academic year 2024/2025

Abstract

This thesis explores two advanced methodologies for multiclass text classification using the DBpedia dataset: prompt optimization with Stanford's DSPy framework and fine-tuning with the BERT model. The study aims to evaluate the effectiveness, efficiency, and scalability of each approach, providing insights into their applicability for large-scale natural language processing tasks. By implementing both techniques in separate modules, the research offers a comprehensive comparison of their performance and suitability for real-world applications.
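The abstract names two concrete pipelines, so a rough sketch of each may help orient the reader; neither is taken from the thesis itself. First, a minimal DSPy classification module with few-shot prompt optimization. The model name, the two training examples, and the label strings are illustrative assumptions, and the typed-signature API follows recent DSPy releases:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Assumption: any OpenAI-compatible model; the thesis's model is not stated on this page.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class ClassifyDBpedia(dspy.Signature):
    """Classify a DBpedia abstract into one of its 14 ontology classes."""
    text: str = dspy.InputField()
    label: str = dspy.OutputField(desc="e.g. Company, Artist, Athlete, ...")

classifier = dspy.Predict(ClassifyDBpedia)

def exact_match(example, prediction, trace=None):
    # Metric the optimizer maximizes: exact label agreement.
    return example.label == prediction.label

# Toy training set; the thesis would draw these from DBpedia itself.
trainset = [
    dspy.Example(text="Apple Inc. is an American technology company.",
                 label="Company").with_inputs("text"),
    dspy.Example(text="Lionel Messi is an Argentine footballer.",
                 label="Athlete").with_inputs("text"),
]

# Prompt optimization: bootstrap few-shot demonstrations for the classifier.
optimized_classifier = BootstrapFewShot(metric=exact_match).compile(
    classifier, trainset=trainset)
```

Second, a comparable sketch of BERT fine-tuning with the Hugging Face Trainer. The `dbpedia_14` dataset (14 classes, `content` and `label` fields) and all hyperparameters are assumptions, not the thesis's reported configuration:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumption: the Hugging Face "dbpedia_14" variant of DBpedia.
dataset = load_dataset("dbpedia_14")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize(batch):
    # Fixed-length padding keeps the default data collator happy.
    return tokenizer(batch["content"], truncation=True,
                     padding="max_length", max_length=128)

encoded = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=14)

# Illustrative hyperparameters only.
args = TrainingArguments(output_dir="bert-dbpedia",
                         per_device_train_batch_size=16,
                         num_train_epochs=2)

trainer = Trainer(model=model, args=args,
                  train_dataset=encoded["train"],
                  eval_dataset=encoded["test"])
trainer.train()
```

The contrast the thesis studies is visible even at this scale: the DSPy route leaves the language model frozen and optimizes the prompt around it, while the BERT route updates the model's weights directly.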
Keywords
DSPy
Fine-Tuning
Prompt Optimization
Text Classification
Files in this item:

File: MasterThesis_AlirezaSaberi.pdf (open access)
Size: 758.23 kB
Format: Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are released under a CC0 license.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/102133