
Artificial Intelligence: Security, Privacy and Resilience Risks

  • Writer: Ben de la Salle
  • Jun 27
  • 4 min read

Organisations are embracing Artificial Intelligence (AI) to drive efficiency and innovation, but as adoption grows, so do the risks.

 

AI systems can process vast amounts of sensitive data, make critical decisions, and are often embedded across business processes and within third-party services.

 

Understanding and managing the security, privacy, and resilience risks of AI is now essential for every business.


What Risks Does the Use of Artificial Intelligence Pose?

 

AI introduces a new set of threats that can impact confidentiality, integrity, and availability. Key risks include:

 

Security Risks

  • Data poisoning: Attackers can manipulate training data, causing the AI to make incorrect or biased decisions.

  • Model theft and inversion: Adversaries may extract or reverse-engineer models, exposing intellectual property or sensitive logic.

  • Adversarial attacks: Specially crafted inputs can trick AI into making false or harmful predictions (a minimal illustration follows this list).

  • Automation of attacks: AI can lower the barrier for cybercriminals to launch sophisticated attacks at scale. As the NCSC notes:

“AI-enabled tools will almost certainly enhance threat actors’ capability to exploit known vulnerabilities, increasing the volume of attacks against systems that have not been updated with security fixes.” (NCSC, 2025)
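
To make the adversarial-attack risk above concrete, here is a minimal, hypothetical sketch: a toy linear classifier whose decision is flipped by a small, targeted change to its input (an FGSM-style perturbation). The weights and input values are illustrative stand-ins, not a real model.

```python
import numpy as np

# Toy linear classifier: predict 1 if w.x + b > 0. The weights are
# illustrative stand-ins for a trained model's parameters.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.4, 0.1, 0.3])   # a legitimate input, classified as 1
print("original prediction:   ", predict(x))

# FGSM-style perturbation: nudge each feature against the class score,
# using the sign of the gradient (for a linear model, that is sign(w)).
epsilon = 0.2                    # small, hard-to-notice change per feature
x_adv = x - epsilon * np.sign(w)
print("adversarial prediction:", predict(x_adv))   # flips to 0
```

The same principle scales up: image classifiers, fraud models, and content filters can all be steered by inputs crafted to exploit a model's gradients or blind spots.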

 

Privacy Risks

  • Repurposing of personal data: AI may use personal data for purposes other than those for which it was collected, or to make automated decisions about individuals, potentially breaching data protection laws.

  • Inference attacks: AI systems can inadvertently reveal personal or sensitive information through their outputs or when probed by attackers (see the sketch after this list).

  • Lack of transparency: It can be difficult to explain how AI systems make decisions, making it harder to demonstrate compliance with privacy regulations.
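
The inference risk is easiest to see with membership inference: many models behave more confidently on the records they were trained on, and an attacker can exploit that gap to learn whether a specific individual's data was in the training set. A minimal sketch on synthetic data using scikit-learn (the setup is hypothetical):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a dataset of personal records.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

def confidence(X):
    # The model's probability for its own predicted class.
    return model.predict_proba(X).max(axis=1)

# Training-set members typically attract higher confidence than unseen
# records; that statistical gap is what a membership inference attack uses.
print("mean confidence on training records:", confidence(X_train).mean())
print("mean confidence on unseen records:  ", confidence(X_test).mean())
```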

 

Resilience Risks

  • Over-reliance on AI: Business processes may become dependent on AI, creating single points of failure.

  • Supply chain and third-party risk: Many organisations use third-party AI tools or services. A vulnerability in a supplier’s AI system can impact your own operations.

  • Data drift and model degradation: Over time, AI models can become less accurate, potentially leading to poor decisions or missed threats.
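
Drift can be caught with routine statistical monitoring. A minimal sketch, assuming a numeric feature and using a two-sample Kolmogorov-Smirnov test from SciPy (the data here is simulated to show a shift):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values seen at training time vs. values arriving in production;
# the simulated production data has drifted (its mean has shifted).
training_sample = rng.normal(loc=0.0, scale=1.0, size=5000)
production_sample = rng.normal(loc=0.4, scale=1.0, size=5000)

# A small p-value suggests production inputs no longer match the
# distribution the model was trained on.
stat, p_value = ks_2samp(training_sample, production_sample)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}); review or retrain the model.")
else:
    print("No significant drift detected.")
```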

 

Of course, as adoption accelerates, staff may also use AI tools that have not been reviewed or approved by the business, introducing further risk (a simple allowlist check is sketched after this list):

  • Data privacy: Free tools may store or process data externally, often in less regulated jurisdictions.

  • No contractual safeguards: Data may be retained, used for training, or exposed, without recourse.

  • Security risks: Free tools are rarely subject to rigorous security assessment.

  • Compliance: Using unsanctioned AI services can breach GDPR, DORA, and other regulations.
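
One practical safeguard is to permit traffic only to approved AI services. A hypothetical sketch of the kind of allowlist check a web proxy or browser plug-in might apply (the domain names are invented examples):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of sanctioned AI services; maintain this alongside
# your acceptable use policy.
APPROVED_AI_DOMAINS = {"ai.internal.example.com", "approved-vendor.example.com"}

def is_approved_ai_service(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in APPROVED_AI_DOMAINS

print(is_approved_ai_service("https://ai.internal.example.com/chat"))   # True
print(is_approved_ai_service("https://free-ai-tool.example.net/chat"))  # False
```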

 

Assessing and Managing AI Risks

 

A structured approach is vital. The NCSC and UK Government recommend the following steps (NCSC AI Guidance, Code of Practice for the Cyber Security of AI):

 

  1. Identify and Map AI Use
    • Catalogue where and how AI is used across your organisation and by your suppliers.

    • Include shadow IT and non-obvious uses (e.g., embedded AI features in SaaS).

  2. Assess Impact and Risk
    • Conduct a risk assessment for each AI use case (a minimal register and scoring sketch follows this list).

    • Consider the sensitivity of data processed, the potential impact of errors or compromise, and regulatory requirements.

    • Use established frameworks such as ISO 27001, NIST CSF, and NCSC guidance.

  3. Engage Third Parties
    • Require suppliers to demonstrate how they secure and govern their own AI systems.

    • Include AI-specific clauses in contracts, covering data protection, incident reporting, and audit rights.

    • Request evidence of compliance with recognised standards.

  4. Strengthen Controls
    • Apply security-by-design principles to AI development and deployment.

    • Regularly test models for robustness against adversarial inputs.

    • Encrypt sensitive data in transit and at rest (see the encryption sketch after this list).

    • Monitor for unusual activity and data drift.

  5. Ensure Transparency and Accountability
    • Document decision-making processes and data flows.

    • Maintain records of training data sources and model changes.

    • Provide clear explanations of AI-driven decisions, especially where they impact individuals’ rights (a simple decision-log sketch follows this list).

  6. Plan for Incidents
    • Update incident response plans to cover AI-specific scenarios, such as data poisoning or model compromise.

    • Test response procedures with realistic exercises.
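
To illustrate steps 1 and 2, here is a minimal, hypothetical sketch of an AI-use register with a simple sensitivity-times-impact risk score. The fields, use cases, and scoring scale are assumptions; adapt them to your chosen framework:

```python
from dataclasses import dataclass

@dataclass
class AIUseCase:
    """One entry in a hypothetical AI-use register."""
    name: str
    owner: str
    supplier: str            # "internal" if built in-house
    data_sensitivity: int    # 1 (public) to 5 (special category / regulated)
    failure_impact: int      # 1 (negligible) to 5 (severe)

    def risk_score(self) -> int:
        # Simple sensitivity-times-impact score; tune to your own framework.
        return self.data_sensitivity * self.failure_impact

register = [
    AIUseCase("CV screening", "HR", "SaaS vendor", 4, 4),
    AIUseCase("Meeting summaries", "Operations", "SaaS vendor", 3, 2),
    AIUseCase("Code assistant", "Engineering", "internal", 2, 3),
]

# Review the highest-risk uses first.
for uc in sorted(register, key=lambda u: u.risk_score(), reverse=True):
    print(f"{uc.name:18} risk={uc.risk_score():2}  owner={uc.owner}")
```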

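For the "encrypt sensitive data at rest" control in step 4, a minimal sketch using the widely used Python cryptography library. In practice the key should live in a key management service, and your database or platform may already provide equivalent encryption:

```python
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in production, fetch from a KMS; never hard-code
fernet = Fernet(key)

record = b"candidate: Jane Doe, assessment notes ..."   # illustrative sensitive data
token = fernet.encrypt(record)                          # store this ciphertext at rest
assert fernet.decrypt(token) == record                  # recoverable only with the key
```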
 
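For the transparency and accountability records in step 5, a minimal sketch of an append-only decision log in JSON Lines format. The fields and example values are illustrative; real logs should reflect your regulatory and audit requirements:

```python
import json
import time

def log_ai_decision(path: str, inputs: dict, output: str, model_version: str) -> None:
    """Append one AI-driven decision to a JSON Lines audit log."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_ai_decision(
    "ai_decisions.jsonl",
    inputs={"application_id": "A-123", "features_used": ["income", "tenure"]},
    output="refer to human reviewer",
    model_version="credit-model-2.3",
)
```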

Establish Staff Guidance

 

Set out an acceptable use policy for AI for your staff. Provide clear advice and guidance, including an allowed and disallowed list of AI tools. At a minimum, staff should:

 

  • Only use AI tools for approved business purposes.

  • Never enter confidential, personal, or sensitive information into any AI tool unless it’s officially sanctioned and covered by a contract.

  • Disclose if AI-generated content is used in external communication or reports.

  • Undertake regular training on AI risks, benefits, and acceptable use.

 

Key Takeaways

 

AI brings benefits but introduces complex risks—technical, regulatory, and human.

Proactively identify, assess, and control these risks, including those introduced by third parties and shadow IT.

 

Set clear staff guidance: use only approved AI tools, never input sensitive data into free or unofficial services, and request review of new tools.

 

Follow authoritative guidance (NCSC, UK government) and review controls as AI and threats evolve.

 

Organisations that take a practical, people-focused approach to AI risk will be best placed to thrive.


“Good cyber security is not just about technology – it’s about people, processes and making informed choices.”

If you’d like to discuss how to assess and manage AI risks within your organisation or supply chain, get in touch with ICA Consultancy.

 

 
