DPO role in AI projects
As any modern, ambitious business develops, the question of implementing AI sooner or later arises. At the same time, today's technology service market remains one of the most innovative and unpredictable in the world. To keep your AI solution competitive, you therefore need to invest serious effort in it, starting with lawfulness and user safety.
Privacy compliance is a significant part of any project, and it is especially important in AI projects. Regulations worldwide, such as the GDPR, CCPA, PIPEDA, and LGPD, mandate numerous privacy measures that are complex and demanding and require in-depth knowledge of privacy law.
Who is a DPO?
One way to ensure the privacy compliance of your AI project is to appoint a Data Protection Officer (DPO). A DPO is a professional who serves as the primary point of contact for all inquiries related to the company's privacy compliance. The DPO's responsibilities are described in detail in Article 39 of the GDPR.
Employee or service provider?
It is up to you whether to designate a DPO as an in-house employee or as a service provider, depending on your needs and requirements. Whichever option you choose, you and your company's staff must be able to reach the DPO easily so that they can receive assistance. The DPO should be fully aware of the company's strategies and processes to ensure compliance responsibly and efficiently. This awareness is especially important in an AI project, whose fast-moving nature demands close attention and ongoing research.
Conflict of interest?
As with any other position, a DPO may hold additional functions. However, as the European Data Protection Board (“EDPB”) states in its “Guidelines on Data Protection Officers”: “the DPO cannot hold a position within the organisation that leads him or her to determine the purposes and the means of the processing of personal data … conflicting positions may include chief executive, chief operating, chief financial, and other head executive positions, or in some cases positions lower down in the organisational structure.”
DPO responsibilities in an AI project
Implementing an AI solution raises many specific privacy concerns, and they can only be predicted and addressed efficiently by involving a privacy specialist: a DPO. These concerns may include:
- choosing an AI model or AI service provider (for training, deploying, maintaining the AI, etc.). Different AI solutions require different service providers or models, some of which may carry more risk than others or even violate privacy law;
- choosing the legal basis and sourcing personal data in a compliant way: ensuring purpose limitation, data minimisation, and other privacy principles, as well as the lawfulness of the chosen processing basis, such as consent, contract, or legitimate interest (a minimal consent-record sketch follows this list);
- conducting a Transfer Impact Assessment (TIA): if data inserted into the LLM must be transferred abroad to process consumers’ prompts, the company must conduct a TIA, because even a simple prompt may contain a data subject’s personal data;
- conducting a Data Protection Impact Assessment (DPIA): according to the EDPB’s criteria, set out in its “Guidelines on Data Protection Impact Assessment”, AI usage is likely to be recognised as an innovative technological solution which, by its nature, requires a DPIA;
- creating new or updating existing internal documents: when implementing AI, the company’s policies and internal documents may need specific adjustments.
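To make the legal-basis point above more concrete, here is a minimal, hypothetical sketch of how active, purpose-specific consent could be recorded and checked before a user's data is included in AI training. The names (ConsentRecord, may_use_for_training) and fields are illustrative assumptions, not a prescribed implementation, and a real system would need audit trails and legal review.

```python
# Hypothetical sketch: recording a legal basis (here, consent) before
# personal data is used for AI training. Names are illustrative only.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                               # e.g. "model_training"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None    # set when the user withdraws consent

    @property
    def active(self) -> bool:
        return self.withdrawn_at is None

def may_use_for_training(records: list[ConsentRecord], user_id: str) -> bool:
    """Return True only if the user holds active consent for the training purpose."""
    return any(
        r.user_id == user_id and r.purpose == "model_training" and r.active
        for r in records
    )

records = [ConsentRecord("u1", "model_training", datetime.now(timezone.utc))]
print(may_use_for_training(records, "u1"))   # True: active, purpose-specific consent
print(may_use_for_training(records, "u2"))   # False: no consent on record
```

The same structure extends naturally to other bases, such as contract or legitimate interest, by recording which basis applies to which processing purpose.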
These tasks can be overwhelming at first, especially when the technology moves fast and changes overnight. An experienced DPO keeps the pace quick and thorough: they will already know the model from industry peers and can draw on previous compliance experience to kick off the privacy impact assessment immediately, while paying the utmost attention to your company’s core asset: customer trust.
Collecting the data and ensuring data subjects’ rights
One of the DPO’s key responsibilities in any project is to ensure that data is collected and processed with respect for data subjects’ rights. That mainly means ensuring consumers’ rights:
- to withdraw consent, if consent is your legal basis for data processing;
- to object to data processing, which also involves conducting Legitimate Interest Assessments (LIAs) if legitimate interest is used as the legal basis;
- to access and rectify their personal data and to be forgotten (right to erasure); you can satisfy these rights by responding to consumers’ requests and by offering a convenient tool for filing data subject access requests (a minimal handler sketch follows this list);
- to be informed (for instance, by publishing a Privacy Policy or via transparency tools such as cookie consent platforms).
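As an illustration of the access and erasure rights above, the sketch below shows one hypothetical way a small service could answer an access request (export a copy of the data) and an erasure request from a simple user-record store. The store, functions, and field names are assumptions for this example, not a standard API.

```python
# Hypothetical sketch of handling two data subject requests:
# access (export a copy of the user's data) and erasure (delete it).
# The in-memory store and field names are illustrative assumptions.
import json

user_store = {
    "u1": {"name": "Alice", "email": "alice@example.com", "prompts": ["hi"]},
}

def handle_access_request(user_id: str) -> str:
    """Right of access: return a machine-readable copy of the user's data."""
    record = user_store.get(user_id)
    return json.dumps({"user_id": user_id, "data": record}, indent=2)

def handle_erasure_request(user_id: str) -> bool:
    """Right to erasure: remove the user's record; True if something was deleted."""
    return user_store.pop(user_id, None) is not None

print(handle_access_request("u1"))   # exports the stored record
print(handle_erasure_request("u1"))  # True: record deleted
print(handle_erasure_request("u1"))  # False: already gone
```

In practice, such requests also need identity verification, statutory response deadlines, and propagation to any processors holding copies of the data.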
How data subject rights requests are handled is the litmus test of the company’s compliance programme: it shows the effort and care invested in customers’ privacy. The DPO knows what data subjects are interested in, what concerns them, and whether they are satisfied with the responses they receive. AI-powered solutions tend to behave in ways users do not expect, so the DPO is the person to approach if a user suspects that their data has been leaked, which is why your DPO will continuously collect user feedback and assist users whenever doubts arise.
Possible risks of violation
Furthermore, irresponsible use of misconfigured AI tools in your business is likely to create privacy risks for your customers and endanger the compliance you are working towards. This is where the DPO comes in, with the responsibility to prevent and mitigate these risks. For example:
- many privacy regulations prohibit specific uses of AI tools. For instance, it is considered a violation if AI makes decisions based solely on automated processing (e.g., profiling) that produce legal effects concerning your customers;
- specific AI solutions might violate data protection laws by their very nature. Your AI tool may collect excessive personal information from customers, thereby violating the GDPR’s data minimisation principle and other applicable laws (a minimal minimisation sketch follows this list);
- the privacy legislation of the country or region where you plan to offer services may differ from the law you currently comply with, so it is essential to review the applicable privacy laws thoroughly for any differences before commencing services in that region.
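To illustrate the minimisation point from the list above, the following hypothetical sketch forwards only an explicit allowlist of fields to an AI service and drops everything else; the field names and the allowlist itself are assumptions for this example.

```python
# Hypothetical sketch of data minimisation: only fields that are strictly
# needed for the AI feature are forwarded; everything else is dropped.
ALLOWED_FIELDS = {"country", "subscription_tier"}   # assumed sufficient for the feature

def minimise(customer_record: dict) -> dict:
    """Return a copy of the record containing only the allowlisted fields."""
    return {k: v for k, v in customer_record.items() if k in ALLOWED_FIELDS}

raw = {
    "name": "Alice",
    "email": "alice@example.com",
    "country": "DE",
    "subscription_tier": "pro",
}
print(minimise(raw))   # {'country': 'DE', 'subscription_tier': 'pro'}
```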
There is no better way to understand the importance of GDPR compliance than to glance at enforcement practice against AI projects:
- OpenAI vs Italy: in March 2023, Italy’s data protection authority temporarily banned OpenAI’s ChatGPT over concerns including the lack of a legal basis for processing personal data to train its algorithms. OpenAI was given 30 days to address these concerns or face fines of up to €20 million or 4% of its global annual revenue.
- Clearview AI: several data protection authorities fined Clearview AI at various points during 2022 and 2023 for holding and processing a database of more than 20 billion facial images collected worldwide. The combined fines from the French, Greek, UK, and Italian authorities exceed €70 million.
Data security
Another point to consider is the security of consumers’ personal data, where the DPO is a valuable team player who can spot and help mitigate possible risks. Ensuring personal data security in an AI project is a complicated task that requires tailored, professional solutions and a strong awareness of the project’s processes, security methods, and privacy law.
For example, prompts that consumers insert into the LLM are regarded as personal data because they are likely to contain information about users. Moreover, any leak of an LLM’s training data may be considered a personal data breach, and those training sets must also be ethically and lawfully sourced. For instance, an incident occurred in South Korea when a company trained its newly introduced AI on conversational logs containing private and sensitive data. Over time, the company’s AI began exposing people’s names, nicknames, and home addresses in its responses, causing serious harm and breaching consumers’ trust.
Disclosure of consumers’ personal data can have various causes, including an LLM malfunction, fraudulent activity, and so on. The introduction of an LLM, like any other significant change in a technology project, therefore requires close supervision by the DPO to ensure that such risks do not materialise.
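Because prompts can carry personal data, one common mitigation (sketched below under simple assumptions) is to redact obvious identifiers before a prompt leaves your systems. The regular expressions and the redact_prompt name are illustrative only; real deployments need far more robust PII detection, agreed with the DPO.

```python
# Hypothetical sketch: redact obvious personal identifiers (emails, phone
# numbers) from a user prompt before it is sent to an external LLM.
# The regex patterns are simplistic and purely illustrative.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_prompt(prompt: str) -> str:
    """Replace detected identifiers with placeholders before the API call."""
    prompt = EMAIL_RE.sub("[REDACTED_EMAIL]", prompt)
    prompt = PHONE_RE.sub("[REDACTED_PHONE]", prompt)
    return prompt

original = "Please email alice@example.com or call +49 30 1234567 about my order."
print(redact_prompt(original))
# Please email [REDACTED_EMAIL] or call [REDACTED_PHONE] about my order.
```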
So, do you actually need a DPO?
Once you step onto the hard road of privacy compliance, there is no better companion for the journey than a DPO. Whether or not your project involves AI, a DPO will be there with you, ensuring that every step complies with privacy law.
Whether to designate a DPO is your choice. However, you can always contact us and schedule a meeting with our team to learn more about the details and use cases, or to discuss your organisation’s unique needs.