Meta's Processing of EU Personal Data for AI Training: A Legal Analysis of GDPR Compliance and the Principle of Free and Informed Consent
Carmichael, Olivia Anne
2024/2025
Abstract
This thesis critically examines Meta Platforms’ 2024 proposal to use the personal data of European users to train its generative AI systems, assessing its compliance with the General Data Protection Regulation (GDPR) of the European Union. The study investigates whether Meta’s approach—particularly its reliance on legitimate interests under Article 6(1)(f), its privacy notice, and its opt-out mechanism—meets the GDPR’s requirements on transparency, proportionality, and consent. The analysis begins with an overview of the GDPR framework, with a focus on lawful bases for processing (Article 6), information obligations (Articles 13–14), and data subject rights (Articles 17 and 21). It explores the legal and philosophical dimensions of “freely given” and “informed” consent in environments where consent fatigue, interface design, and asymmetries of power may limit meaningful user autonomy. The study then examines Meta’s current privacy notice, evaluating the company’s reliance on legitimate interests and other legal bases for data processing and its compliance with the principle of privacy by design (Article 25). Special attention is devoted to the practice of profiling, which is both explicitly addressed in the GDPR (Article 22) and implicitly intensified by large language models (LLMs). The study argues that LLM-based profiling represents a more pervasive and less visible form of automated processing, capable of generating highly personalized and potentially manipulative outputs, thereby magnifying democratic and ethical risks. The findings suggest that while the GDPR theoretically offers robust safeguards, dominant platforms such as Meta can push legal boundaries through broad interpretations of necessity, delayed compliance, and reliance on opt-out regimes. The thesis concludes by situating Meta’s case within the broader political economy of surveillance capitalism and considering the implications for AI governance and the protection of fundamental rights.
https://hdl.handle.net/20.500.12608/95770