The ICO's Guidance on AI and Data Protection was updated on 15 March 2023. The updated guidance provides further clarity on how organisations can comply with data protection law when using AI systems.

What topics does the ICO guidance cover?

The guidance covers the following topics:

  • Fairness
  • Accountability
  • Transparency
  • Explainability
  • Bias
  • Data protection impact assessments

The guidance also includes a number of case studies that illustrate how organisations have applied the guidance in practice.

The ICO's guidance is a valuable resource for organisations using AI systems, offering clear and practical advice on how to comply with data protection law when deploying them.

How is the principle of ‘fairness’ impacted?

One of the key principles of data protection law is fairness. This means that organisations must ensure that they do not use AI systems in a way that is unfair or discriminatory.

The ICO's guidance provides several factors that organisations should consider when assessing fairness, including:

  • The purpose for which the AI system is being used
  • The impact of the AI system on individuals
  • The safeguards that are in place to protect individuals

How is accountability affected?

Organisations that use AI systems must be accountable for the way in which they use these systems. This means that they must be able to demonstrate that they are complying with data protection laws.

The ICO's guidance outlines several steps that organisations can take to ensure that they are accountable for the use of AI systems, including:

  • Documenting the use of AI systems
  • Implementing appropriate governance arrangements
  • Conducting data protection impact assessments
  • Training staff on data protection law

Why is transparency important?

Organisations that use AI systems must be transparent about the way in which these systems work. This means that they must provide individuals with information about how their personal data is being used by AI systems.

The ICO's guidance provides several ways that organisations can be transparent about the use of AI systems, including:

  • Providing clear and concise information about the AI system
  • Making it easy for individuals to understand how their personal data is being used
  • Providing individuals with the opportunity to opt out of the use of AI systems

What is the concept of ‘explainability’?

Organisations that use AI systems must be able to explain how these systems work, which means providing individuals with a meaningful explanation of how their personal data is used by AI systems.

The ICO's guidance offers several examples of how organisations can explain their use of AI systems, including:

  • Providing individuals with access to the data that is used by AI systems
  • Providing individuals with information about the algorithms that are used by AI systems
  • Providing individuals with information about how AI systems make decisions

How can AI bias be mitigated?

AI systems can be biased, meaning they may make decisions that are unfair or discriminatory.

To mitigate bias in AI systems, the ICO recommends a number of steps, including:

  • Collecting data from a diverse range of individuals
  • Using algorithms that are designed to be fair
  • Testing AI systems for bias
  • Monitoring AI systems for bias

When are data protection impact assessments required?

Organisations that use AI systems must conduct data protection impact assessments (DPIAs) if the use of these systems is likely to result in a high risk to individuals' rights and freedoms.

A DPIA is a process that organisations can use to identify, assess, and mitigate the risks to individuals' rights and freedoms posed by the use of AI systems.

The ICO recommends actions that organisations can take to conduct a DPIA, including:

  • Identifying the risks posed by the use of AI systems
  • Assessing the impact of these risks on individuals' rights and freedoms
  • Mitigating the risks posed by AI systems
  • Monitoring the effectiveness of the mitigation measures

How 3CS can help

For further guidance on AI and Data Protection or any corporate or commercial legal matter, please get in touch with your usual 3CS contact.

Keith McAlister


Registered in England & Wales | Registered office is 60 Moorgate, London, EC2R 6EJ
3CS Corporate Solicitors Ltd is registered under the number 08198795
3CS Corporate Solicitors Ltd is a Solicitors Practice, authorised and regulated by the Solicitors Regulation Authority with number 597935

