EU data watchdog sets terms for AI models' legitimate use of personal data
The EU's data protection body has clarified the circumstances in which developers of AI models may use personal data, in an opinion that sets out a three-step test for the legitimate interest basis of such use.
The opinion, published this week by the European Data Protection Board (EDPB) - the coordination body for national privacy regulators across the EU - followed a request from the Irish Data Protection Authority in November seeking clarification on whether personal data could be used in AI training without breaching EU law. Ireland's DPA acts as a watchdog for many of the largest US tech companies, whose European headquarters are in Dublin.
Reaffirming model anonymity and ‘legitimate interest’
The opinion states that for an AI model to be considered truly anonymous, the likelihood of identifying the individuals whose data was used must be "insignificant".
The EDPB also established a framework for determining when a company may claim a "legitimate interest" that gives it a valid legal basis for processing personal data to develop and deploy AI models without obtaining explicit consent from individuals.
The three-step test for assessing legitimate interest requires identifying the interest, evaluating whether the processing is necessary to achieve it, and ensuring that the interest is not overridden by the fundamental rights of the individuals concerned. The EDPB also stressed the importance of transparency, so that individuals are informed about how their data is being collected and used.
The EDPB stressed in the opinion that it is ultimately the responsibility of national data protection authorities to assess, on a case-by-case basis, whether the GDPR has been violated when personal data is processed for AI development.
Models developed with data extracted and


