Artificial intelligence is now embedded across many organisations, supporting everything from automation and analytics to customer engagement, decision-making, and content generation. Used well, AI can deliver efficiency and insight. Used without proper oversight, it can introduce legal, ethical, and reputational risk.
At Hope & May, we support organisations in understanding and meeting their data protection and governance obligations. As part of our ongoing research into AI compliance, we want to highlight two key frameworks that organisations should be aware of when using AI systems.
These frameworks help organisations demonstrate accountability and responsible use, particularly where AI tools:
- Process personal data
- Influence decisions affecting individuals
- Operate at scale or with limited transparency
- Are provided by third-party vendors
Why AI risk assessments are increasingly important
AI systems can introduce risks that are not always obvious at the point of adoption, including bias, unfair outcomes, lack of explainability, and unintended data use. Regulators are clear that organisations cannot treat AI as “just another IT tool”.
Instead, organisations are expected to:
- Assess risks before deployment
- Document decision-making
- Put appropriate safeguards in place
- Review AI systems over time
Two regulatory approaches are now shaping best practice in this area:
- The EU’s Fundamental Rights Impact Assessment (FRIA)
- The UK Information Commissioner’s Office (ICO) AI & Data Protection Risk Toolkit
While they originate in different jurisdictions, both aim to ensure AI is used lawfully, fairly, and responsibly.
The EU approach: Fundamental Rights Impact Assessment (FRIA)
The EU’s AI regulatory framework places a strong emphasis on protecting individuals’ fundamental rights, including privacy, equality, and freedom from discrimination.
A Fundamental Rights Impact Assessment is designed to help organisations:
- Identify how an AI system could affect people
- Assess potential harms and their likelihood
- Consider mitigations and human oversight
- Evidence responsible governance
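To make these steps concrete, the sketch below shows one way an organisation might record the output of such an assessment as a simple risk register entry. It is purely illustrative: the field names, the three-level scale, and the likelihood-times-severity score are our own assumptions, not something prescribed by the FRIA or any regulator.

```python
from dataclasses import dataclass, field
from enum import IntEnum

class Level(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class AIRiskEntry:
    """One line in a hypothetical AI risk register (illustrative only)."""
    system: str                  # the AI system or use case being assessed
    affected_group: str          # who could be affected (e.g. customers, staff)
    harm: str                    # the potential harm to individuals' rights
    likelihood: Level            # how likely the harm is to occur
    severity: Level              # how serious the harm would be if it occurred
    mitigations: list[str] = field(default_factory=list)
    human_oversight: str = ""    # who reviews or can override the system

    @property
    def risk_score(self) -> int:
        # Simple likelihood x severity score; real methodologies vary.
        return int(self.likelihood) * int(self.severity)

# Illustrative entry for a hypothetical recruitment screening tool
entry = AIRiskEntry(
    system="CV screening model",
    affected_group="Job applicants",
    harm="Indirect discrimination against protected groups",
    likelihood=Level.MEDIUM,
    severity=Level.HIGH,
    mitigations=["Bias testing before deployment", "Periodic fairness audits"],
    human_oversight="Recruiter reviews every automated rejection",
)
print(entry.risk_score)  # 6 -> prioritise for mitigation and ongoing review
```

Whatever format an organisation chooses, the point is the same: each identified risk, its assessed likelihood and severity, and the chosen mitigations and oversight arrangements are written down, so that responsible governance can be evidenced later.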
Although FRIA requirements stem from EU legislation (the EU AI Act), they are relevant to organisations that:
- Operate in or with the EU
- Use EU-based AI providers
- Want to align with emerging international standards
For some organisations, a FRIA can be a valuable tool even where it is not yet a legal requirement.
The UK approach: ICO AI & Data Protection Risk Toolkit
In the UK, the Information Commissioner’s Office has published detailed guidance to help organisations assess AI systems from a data protection and privacy perspective.
The ICO’s AI & Data Protection Risk Toolkit builds on existing UK GDPR obligations and supports organisations in:
- Assessing lawfulness, fairness, and transparency
- Understanding how AI affects personal data processing
- Identifying risks to individuals’ rights and freedoms
- Demonstrating accountability
For most UK-based organisations, this toolkit will be the most appropriate starting point when evaluating AI use. <link>
How Hope & May can support your organisation
Hope & May helps organisations navigate data protection, risk management, and emerging technologies. We can support you to:
- Understand whether your AI use triggers data protection impact assessment (DPIA) or FRIA-style obligations
- Select the most appropriate assessment framework
- Complete proportionate and practical risk assessments
- Embed AI governance into wider compliance processes
If your organisation is using, or planning to use, AI tools, now is the right time to ensure your approach is robust, compliant, and future-proof.
If you would like guidance or support, please contact the Hope & May team.