AI and Cyber Security: Opportunity, Risk, and What You Should Be Thinking About Now

AI and Cyber Security: Cutting Through the Noise
AI and cyber security has become one of those topics that's genuinely hard to get clarity on. Ask a security vendor and you'll hear about miraculous detection capabilities. Read the headlines and it's all deepfakes and autonomous hacking tools. Neither picture is quite accurate, and for most organisations, neither is particularly useful.
The conversation is most useful when it focuses on three practical questions: how are attackers actually using AI right now? How can AI strengthen your defences? And what new risks does your own use of AI introduce? Focus on only one of those and you're likely missing something important.
How Attackers Are Using AI
The clearest change we're seeing in the threat landscape is in social engineering. Phishing emails that used to give themselves away with clumsy grammar or an obvious template are increasingly hard to spot. AI tools can generate targeted, well-written messages at scale — personalised to the recipient, contextually plausible, and free of the red flags people have been trained to look for.
This isn't anecdotal. According to KnowBe4's Phishing Threat Trends Report, 82.6% of phishing emails analysed between September 2024 and February 2025 contained AI-generated content. The training advice to "look for spelling mistakes" is no longer reliable.
Deepfake impersonation is also well beyond the experimental stage. Voice cloning can reproduce someone's speech from a small amount of source material, and video deepfakes have already featured in real fraud cases. A widely reported 2024 incident saw a finance worker at a Hong Kong firm approve a $25 million transfer after a video call in which every person on screen, including the CFO, was an AI-generated fake. These aren't hypothetical risks.
Beyond social engineering, AI is narrowing the window between a vulnerability being disclosed and it being exploited. According to CyberMindr, the average time-to-exploit dropped from 32 days to just 5 days in 2024. VulnCheck's H1 2025 data found that 32% of exploited vulnerabilities had evidence of active attack on or before the day the CVE was even published. Most organisations are still working through manual patching processes — that gap is where attackers operate.
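The practical response to that shrinking window is to prioritise patching by evidence of active exploitation rather than by severity score alone. Below is a minimal Python sketch of the idea, cross-referencing scanner findings against CISA's public Known Exploited Vulnerabilities (KEV) feed; the feed URL and field names were accurate at the time of writing, and open_findings is a stand-in for your own scanner's output.

```python
# A minimal sketch of exploit-aware patch prioritisation using CISA's public
# Known Exploited Vulnerabilities (KEV) feed. Verify the URL and JSON shape
# before relying on this.
import json
import urllib.request

KEV_FEED = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"

def known_exploited_cves() -> set[str]:
    """Return the set of CVE IDs with confirmed in-the-wild exploitation."""
    with urllib.request.urlopen(KEV_FEED) as resp:
        feed = json.load(resp)
    return {item["cveID"] for item in feed["vulnerabilities"]}

# 'open_findings' is a placeholder for your vulnerability scanner's output;
# the structure here is purely illustrative.
open_findings = [
    {"host": "web-01", "cve": "CVE-2024-3400"},
    {"host": "db-02", "cve": "CVE-2023-12345"},
]

exploited = known_exploited_cves()
urgent = [f for f in open_findings if f["cve"] in exploited]
print(f"{len(urgent)} finding(s) need out-of-band patching:", urgent)
```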
Add to that the growth of infostealer malware, tools designed to harvest credentials and session tokens at scale, and you get a threat environment where both the volume and the sophistication of attacks have risen together. Techniques once associated with nation-state attackers are now accessible to far less skilled actors. That raises the baseline risk for organisations of all sizes.
AI as a Defensive Tool
On the other side of this, AI genuinely does help defenders when it's applied thoughtfully. Behavioural analytics powered by machine learning can catch anomalous activity that rule-based systems would miss: subtle signs of account compromise, unusual access patterns, indicators of insider risk. When it works well, it shortens the time between a compromise happening and someone noticing.
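To make the idea concrete, here's a toy sketch of the underlying technique using an isolation forest from scikit-learn. The features and data are fabricated for illustration; production tools model far richer behavioural signals.

```python
# A toy illustration of behavioural analytics: flag logins that deviate from
# a user's learned baseline. Assumes scikit-learn and fabricated feature data.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per login event: [hour_of_day, bytes_downloaded_mb, distinct_resources]
normal_logins = np.array([
    [9, 12, 4], [10, 8, 3], [11, 15, 5], [14, 10, 4], [16, 9, 3],
] * 20)  # repeated to simulate a history of typical behaviour

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

# A 3 a.m. login pulling 900 MB across 40 resources: a rule-based check might
# pass it (the credentials are valid), but it sits far outside the baseline.
suspect = np.array([[3, 900, 40]])
print(model.predict(suspect))  # -1 means "anomalous"
```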
AI is also taking on some of the more repetitive work that eats into security teams' time — triaging alerts, enriching incidents with context, surfacing response recommendations. For teams that are already stretched, that matters.
There's also meaningful work happening at scale: AI systems identifying previously unknown vulnerabilities in widely used software, and coordinated industry coalitions treating AI-enabled defence as a baseline capability rather than a nice-to-have. That's a genuine shift in how the sector is thinking.
That said, not every tool labelled "AI-powered" lives up to the billing. The effectiveness of these tools depends on data quality, the maturity of your environment, and how well they're implemented. AI can amplify strong security foundations. It doesn't create them.
The Risks of Adopting AI in Your Business
Here's the part of the conversation that often gets overlooked: the risks that come from your own organisation's use of AI tools. Employees are already using generative AI to draft documents, analyse data, write code, and speed up routine tasks, sometimes with IT's knowledge, often without it.
The most immediate concern is data leakage. When someone pastes sensitive information into a third-party AI service, that data may be stored, used for model training, or processed in ways that fall completely outside your control. That's a security issue, a data protection issue, and potentially a contractual one.
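One pragmatic mitigation is a pre-submission check that scans text for obviously sensitive patterns before it leaves your environment. The sketch below is illustrative only; the patterns are examples, not a complete DLP ruleset.

```python
# A minimal sketch of a pre-submission check on prompts bound for third-party
# AI services. These patterns are illustrative and nowhere near exhaustive;
# real data loss prevention needs classification-aware tooling.
import re

SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "api_key": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{20,}\b"),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(text)]

prompt = "Summarise this: customer card 4111 1111 1111 1111 disputed a charge."
hits = check_prompt(prompt)
if hits:
    raise ValueError(f"Prompt blocked, matched sensitive patterns: {hits}")
```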
Shadow AI (employees using tools that haven't been approved or assessed) mirrors the shadow IT problems many organisations dealt with a decade ago. Without visibility into what tools are being used and what's being shared, you can't meaningfully manage the risk.
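Visibility is a tractable first step. As a rough sketch, requests to known AI services can be counted out of web proxy or DNS logs; the domain list and log format below are assumptions you'd adapt to your own environment.

```python
# A rough sketch of shadow-AI discovery from web proxy logs. The domain list
# and log format are assumptions, not a definitive inventory.
from collections import Counter

AI_SERVICE_DOMAINS = {"chat.openai.com", "claude.ai", "gemini.google.com"}

def shadow_ai_usage(log_lines):
    """Count requests to known AI services, keyed by (user, domain)."""
    usage = Counter()
    for line in log_lines:
        user, domain = line.split()[:2]  # assumed format: "<user> <domain> ..."
        if domain in AI_SERVICE_DOMAINS:
            usage[(user, domain)] += 1
    return usage

sample_logs = [
    "alice chat.openai.com GET /",
    "bob intranet.local GET /wiki",
    "alice claude.ai POST /api",
]
print(shadow_ai_usage(sample_logs))
```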
For development teams specifically, the numbers are striking. Veracode's 2025 GenAI Code Security Report tested more than 100 large language models and found that AI-generated code introduced security vulnerabilities in 45% of coding tasks. The productivity gains from AI-assisted development are real — but deploying that code without thorough review shifts risk in ways that aren't always visible until something goes wrong.
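Automated review gates help here. The sketch below shows one deliberately simple example: walking the Python AST of generated code to flag constructs like eval() that commonly signal injection risk. It complements proper static analysis and human review rather than replacing either.

```python
# A deliberately simple sketch of one automated gate for AI-generated Python:
# walk the abstract syntax tree and flag calls that commonly signal injection
# risk. Not a substitute for full static analysis or human review.
import ast

RISKY_CALLS = {"eval", "exec"}

def flag_risky_calls(source: str) -> list[str]:
    """Return a finding for each call to a known-risky builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            if isinstance(func, ast.Name) and func.id in RISKY_CALLS:
                findings.append(f"line {node.lineno}: call to {func.id}()")
    return findings

generated = "result = eval(user_input)  # plausible-looking AI output\n"
print(flag_risky_calls(generated))  # ['line 1: call to eval()']
```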
Building an AI Governance Framework
None of this means you should avoid AI — quite the opposite. But it does need to be governed. A practical framework doesn't have to be complicated: clarity about which tools are approved, what types of data can be shared with them, how outputs should be reviewed before they're used, and who's accountable when AI is part of a business decision.
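Even a lightweight version of that framework can be expressed as data your tooling enforces. The sketch below shows the shape of such a policy check; the tool names and data classifications are illustrative, not recommendations.

```python
# A minimal sketch of "policy as data": which AI tools are approved, and for
# which data classifications. All names here are hypothetical examples.
APPROVED_AI_TOOLS = {
    "copilot_enterprise": {"public", "internal"},
    "internal_llm":       {"public", "internal", "confidential"},
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Permit a request only for an approved tool and an allowed data class."""
    return data_classification in APPROVED_AI_TOOLS.get(tool, set())

print(is_use_permitted("copilot_enterprise", "confidential"))  # False
print(is_use_permitted("internal_llm", "confidential"))        # True
```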
For most organisations, this doesn't require building a new governance structure from scratch. AI controls can usually be woven into existing information security and data protection frameworks. What matters is that AI adoption is a deliberate, managed process, not something that just happens because it's convenient.
The pace of change in this space is genuinely fast. For smaller organisations, it can feel like a lot to keep up with. But the implications are the same regardless of size: take AI seriously as a business and security issue, put proportionate governance in place early, and engage with it thoughtfully rather than reactively.
The organisations that approach this carefully will be better placed to use AI's defensive potential to their advantage. Those that don't risk finding themselves caught between increasingly capable attackers and a rapidly rising bar for what good security looks like.