Sentiment Analysis using Spectrograms and Bag-of-Visual-Words for Quality of Experience evaluation in XR asymmetric collaborative applications
PALMARINI, LORENZO
2025/2026
Abstract
Quality of Experience (QoE) evaluation is a crucial aspect in the development of collaborative applications within extended reality (XR). Conventionally, QoE relies on both subjective and objective metrics. This thesis proposes sentiment analysis as an additional contribution to QoE evaluation. Specifically, it explores the relationship between sentiment analysis features based on audio recordings and the type of transmitted information (i.e., audio only, or audio and video) in a collaborative virtual reality (VR) application. After a review of the state of the art on sentiment analysis, an existing approach based on audio spectrograms and Bag-of-Visual-Words (BoVW) was selected. This approach was then adapted to the considered case study, with the aim of differentiating between experiences with and without video support. The method gathers audio files from a dataset, converts them into spectrograms, performs feature extraction, clusters the data, and builds visual vocabularies using BoVW. The processed data is then evaluated using two different classifiers. Finally, the results are compared to determine whether the proposed approach can differentiate between the two experimental conditions. The long-term goal is a simple, reliable, and automated tool that could be integrated into XR systems to provide objective feedback on user engagement for QoE analysis.
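The pipeline the abstract outlines (audio → spectrogram → local feature extraction → clustering → BoVW histogram, which then feeds a classifier) can be sketched in a minimal, NumPy-only form. This is an illustrative reconstruction, not the thesis's actual implementation: the window sizes, patch descriptors, vocabulary size, and the inline k-means are all assumptions chosen for a self-contained example.

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a plain NumPy STFT with a Hann window."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # (frames, freq bins)

def extract_patches(spec, size=8, step=4):
    """Flatten small spectrogram patches into local 'visual' descriptors."""
    patches = []
    for r in range(0, spec.shape[0] - size + 1, step):
        for c in range(0, spec.shape[1] - size + 1, step):
            patches.append(spec[r:r + size, c:c + size].ravel())
    return np.array(patches)

def kmeans(data, k, iters=20, seed=0):
    """Minimal k-means: the resulting centers act as the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centers[j] = data[labels == j].mean(axis=0)
    return centers

def bovw_histogram(patches, codebook):
    """Assign each patch to its nearest visual word; return normalized counts."""
    labels = np.argmin(((patches[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy chirp-like tone standing in for one recording from the dataset.
t = np.linspace(0, 1, 8000)
audio = np.sin(2 * np.pi * (200 + 300 * t) * t)

spec = spectrogram(audio)
patches = extract_patches(spec)
codebook = kmeans(patches, k=16)
features = bovw_histogram(patches, codebook)  # fixed-length vector for a classifier
```

Each recording is reduced to a fixed-length histogram of visual-word occurrences, which is what makes standard classifiers applicable regardless of audio duration; in the thesis the vocabulary would be built from the whole training set rather than from a single clip.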
https://hdl.handle.net/20.500.12608/104219