AI has become a ‘buzzword’ and the opportunities it offers mean businesses are looking closely at how to take advantage of it. However, along with the opportunities comes risk. Whether designing AI in-house or buying in AI systems, the ICO’s recent Guidance on AI and data protection will be important in helping businesses navigate some of these risks. Although this guidance is directed at organisations in the UK, its pragmatic and detailed approach is likely to mean it resonates globally. This isn’t the ICO’s only guide on AI (see explaining AI), nor is it the only issue you will need to address (see for example IP risks, product liability issues and ethical issues), but this detailed guide is a good place to start.

With good practice in AI being a top priority for the ICO, here are 5 things to think about to get you started.

1. Ensure governance systems are fit for purpose

Adopting AI is likely to involve grappling with difficult novel concepts and with assessing and mitigating new risks; this will require engagement from senior stakeholders. Decisions might include where the balance should lie between competing trade-offs, and whether the AI system is biased. These decisions will often involve difficult judgement calls, so a robust approval process is essential.

Although the GDPR contains helpful tools which will be familiar, eg the data protection impact assessment, these will only be as good as the governance processes that underpin them. If your existing governance and risk management practices are not fit for purpose, you might need to consider upskilling senior management, ensuring a diverse and well-resourced team and/or reviewing internal structures. When buying in AI, you should carefully review how key decisions have been made as you will be taking on responsibility for the seller’s judgement calls.

2. Understand the impact of an organisation’s culture on the AI system 

An organisation’s culture will play a key role in how decisions relating to AI are made, including on fairness and discrimination. Well-integrated legal, compliance and design teams should be embedded into an organisation’s culture and processes. In addition, issues such as diversity, incentives to work collaboratively and a speak-up culture are all important. When buying in AI, assessing the culture of the selling organisation will be a useful indicator of risk.

3. Take time to understand controller/processor relationships 

Typically, AI supply chains are complex, and the controller/processor distinction is not straightforward. Since it is this distinction that dictates your responsibilities under the GDPR, you should assess and document the status of all organisations involved in the AI system. It is worth remembering that it is the factual analysis that matters - not what is written in the agreement.

4. Ensure lawyers and design teams work together from the outset  

Legal/compliance teams will not be able to ensure GDPR compliance alone. Issues like whether the AI system is fair and transparent will need a good level of understanding of the AI system, as well as the law. In particular, legal/compliance teams will need to understand how the AI system makes decisions, what training data is used and how the outputs are going to be used. And the design team will need input from legal/compliance teams to ensure individual rights (eg right of access) are properly managed, and to identify what issues might need to be addressed by privacy-enhancing techniques, like perturbation (a technique where the values of data points belonging to individuals are changed at random whilst preserving some of the statistical properties in the dataset).
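The perturbation technique mentioned above can be sketched in a few lines. This is a minimal illustration, not from the ICO guidance: the ages, the noise scale and the choice of zero-mean Gaussian noise are all assumptions made for the example. Each individual’s value is changed at random, while a statistical property of the dataset (here, the mean) is roughly preserved.

```python
import random
import statistics

# Illustrative sketch of perturbation (values and noise scale are assumptions).
# Each individual's age is shifted by zero-mean Gaussian noise, so no record
# keeps its true value, but the dataset's mean is roughly preserved.
random.seed(42)  # fixed seed so the example is repeatable

ages = [23, 35, 41, 29, 52, 38, 47, 31, 26, 44]
perturbed = [age + random.gauss(0, 3) for age in ages]

print(f"original mean:  {statistics.mean(ages):.1f}")
print(f"perturbed mean: {statistics.mean(perturbed):.1f}")
```

In practice the noise scale is a trade-off of exactly the kind discussed in section 1: more noise gives individuals stronger protection but degrades the statistical usefulness of the training data.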

5. Be aware that AI comes with novel security vulnerabilities 

Key risks with AI systems include loss or misuse of large amounts of training data, and new software vulnerabilities. Security risks will need to be carefully assessed through due diligence or from the design stage by someone with appropriate skills. Organisations should make sure they understand new risks, like model inversion and membership inference.
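To make membership inference concrete, here is a deliberately simplified toy sketch; the model, the data and the confidence threshold are all invented for illustration. An over-fitted model returns maximal confidence on records it has memorised, and an attacker who can query the model uses that signal to infer whether a given record was in the training set.

```python
# Toy sketch of a membership-inference attack (all values are invented).
# A 1-nearest-neighbour "model" memorises its training data, so it is
# perfectly confident on training records and less confident elsewhere.
train = [(1.0, "A"), (2.0, "A"), (8.0, "B"), (9.0, "B")]

def predict_confidence(x):
    # Confidence decays with distance to the nearest training point,
    # so memorised records score exactly 1.0.
    nearest = min(abs(x - tx) for tx, _ in train)
    return 1.0 / (1.0 + nearest)

def infer_membership(x, threshold=0.99):
    # The attacker flags high-confidence records as likely training members.
    return predict_confidence(x) >= threshold

print(infer_membership(2.0))  # → True  (a memorised training record)
print(infer_membership(5.0))  # → False (an unseen record)
```

Real attacks are statistical rather than exact, but the principle is the same: a model that leaks information about its training data can reveal whether a specific individual’s record was used, which is itself a data protection risk.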

For more information on AI and the law, click here.