Artificial Intelligence and Consumer Protection on Digital Platforms: The Role of the Digital Services Act in the EU Regulatory Framework
SKORICHENKO, LIANA
2024/2025
Abstract
Artificial intelligence (AI) is increasingly being integrated into digital platforms, shaping consumer experiences across a wide variety of fields, including social media, e-commerce and online entertainment. While the implementation of AI on platforms undoubtedly provides a number of benefits, it also generates serious consumer risks, including algorithmic manipulation through targeted advertising, the opacity of recommender systems and the use of dark patterns, all of which threaten autonomy, privacy and trust. Beyond individual harm, algorithmic infrastructures also have broader societal implications, influencing information flows and user perceptions and thereby indirectly reshaping public discourse and collective opinion formation. By exploring the effectiveness of the Digital Services Act (DSA) in addressing these AI-driven consumer risks, this research seeks to answer the question: how effectively does the Digital Services Act protect consumers from manipulative and deceptive practices on AI-powered digital platforms? It argues that meaningful consumer protection in the digital environment is inseparable from protecting user autonomy and the integrity of democratic governance, and that the DSA’s future success depends on evolving from formal compliance to substantive governance. Adopting a theoretical and doctrinal legal approach, complemented by qualitative case studies of TikTok and Temu, the research examines how the DSA’s procedural obligations on transparency, manipulative design and systemic risk assessment operate in practice. The findings show that while the DSA represents a major step toward transparency and accountability for online platforms, its framework remains primarily procedural, focusing on compliance mechanisms rather than on substantive guarantees.
| File | Size | Format |
|---|---|---|
| Liana Skorichenko 2071491.pdf (restricted access) | 729.91 kB | Adobe PDF |
https://hdl.handle.net/20.500.12608/98706