Explainability in recommender systems: a comparative analysis of two transformer-based models

ORELLANA I RÍOS, JOAN
2023/2024

Abstract

In e-commerce, recommender systems have long been important for improving user experience and boosting sales conversion rates. However, as these systems become increasingly complex, it becomes harder to understand the reasoning behind their recommendations. This project carries out an explainability analysis of two recommender system models: a transformer-based model used by the company Decathlon for personalized recommendations on its website, and a two-tower model (referred to as the challenger model) developed from scratch for the same purpose. The analysis focuses on specific parts of the models, namely the embeddings learned at different stages, the attention layers in the transformer modules of both the production and the challenger model, and the final embedding space. It aims to shed light on the influence that external factors such as price, sales, or seasonality have on the latent representations the models learn. Even though the challenger model does not surpass the production model on accuracy metrics, it shows great strength in its exploration rate and its coverage of the item catalog, likely due to how it addresses the cold-start problem. Moreover, the explainability analysis shows that both models excel at learning information about products and users that is not explicitly fed to them as input.
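For readers unfamiliar with the two-tower architecture mentioned in the abstract, the following is a minimal illustrative sketch (not the thesis code; all dimensions, weights, and names are hypothetical) of how such a model scores a user against an item catalog: each tower projects its raw features into a shared embedding space, and recommendations are ranked by similarity in that space.

```python
import numpy as np

rng = np.random.default_rng(0)

def tower(features, weights):
    # One-layer "tower": project raw features into the shared embedding space,
    # then L2-normalize so a dot product becomes a cosine similarity.
    emb = np.tanh(features @ weights)
    return emb / np.linalg.norm(emb, axis=-1, keepdims=True)

# Hypothetical sizes: 8 raw user features and 12 raw item features,
# both mapped into a shared 4-dimensional embedding space.
W_user = rng.normal(size=(8, 4))
W_item = rng.normal(size=(12, 4))

user = tower(rng.normal(size=(1, 8)), W_user)       # user embedding, shape (1, 4)
items = tower(rng.normal(size=(100, 12)), W_item)   # catalog embeddings, (100, 4)

scores = (user @ items.T).ravel()    # similarity of the user to every item
top5 = np.argsort(scores)[::-1][:5]  # indices of the 5 highest-scoring items
```

In a trained model the projection weights are learned from interaction data rather than sampled at random; the final embedding space produced this way is exactly the object the thesis's explainability analysis inspects.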
Keywords: Recommender Systems; Explainability; Transformer; Two-tower; Embedding
Files in this item:
ORELLANA_JOAN.pdf — restricted access — 2.78 MB — Adobe PDF

The text of this website © Università degli studi di Padova. Full texts are published under a non-exclusive license. Metadata are under a CC0 License.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.12608/71030