A Comparative Study and Analysis of Query Performance Prediction Algorithms to Improve their Reproducibility
SCICCHITANO, WILIAM
2021/2022
Abstract
One of the primary challenges in Information Retrieval evaluation is the cost of carrying out either online or offline evaluation. Therefore, in recent years considerable effort has been devoted to the Query Performance Prediction (QPP) task. QPP aims to estimate the quality of a system when it is used to retrieve documents in response to a given query, relying on different sources of information such as the query, the documents, or the similarity scores provided by the Information Retrieval system. In recent years, several pre- and post-retrieval QPP models have been designed, but they have rarely been tested under the same experimental conditions. The objective of our work is twofold: we develop a unifying framework that includes several state-of-the-art QPP approaches, and we use this framework to assess their reproducibility. Our findings show that we achieve a high degree of reproducibility, with fourteen different methods correctly reproduced and performance results comparable to the original ones.
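As a hedged illustration of the score-based post-retrieval predictors the abstract refers to, the sketch below computes a Normalized Query Commitment (NQC)-style estimate from the similarity scores of the top-ranked documents. The function name and the plain `corpus_score` normalizer are assumptions for this sketch; the original NQC method normalizes by the collection's score for the query.

```python
def nqc(scores, k=100, corpus_score=1.0):
    """NQC-style predictor: standard deviation of the top-k retrieval
    scores, normalized by a corpus-level score (here a plain constant
    standing in for the collection score used in the original method)."""
    top = sorted(scores, reverse=True)[:max(1, k)]
    mean = sum(top) / len(top)
    variance = sum((s - mean) ** 2 for s in top) / len(top)
    return (variance ** 0.5) / corpus_score
```

Intuitively, a larger spread of the top scores around their mean signals that the system sharply separates some documents from the rest, which such predictors take as a sign of a well-performing query.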
File | Size | Format | Access
---|---|---|---
Scicchitano_Wiliam.pdf | 1.67 MB | Adobe PDF | open access
https://hdl.handle.net/20.500.12608/11688