Your phishing simulation metrics are lying to you.
- Feb 27
- 4 min read
Every month, security teams run phishing simulations and produce similar reports: X% clicked, Y% submitted credentials, Z% reported. Leadership sees the numbers, nods, and moves on.
But that surface-level read of phishing simulation metrics misses the most actionable intelligence sitting inside your campaign data.
Here's what I mean.
These metrics alone tell you almost nothing useful.
A 12% click rate sounds bad. But what happens in the seconds after that click is where the real story begins.
When someone clicks a phishing link, they have a moment of recognition — or they don't. They either pause and think "wait, something's off" and close the tab, or they keep going and hand over their credentials. The gap between those two outcomes isn't random. It's measurable. And it tells you far more about your security culture than the headline number ever could.
In real campaign data, we see credential submission times ranging from 8 seconds to over 10 minutes.
The 8-second submissions are the scary ones — no hesitation, no moment of doubt, credentials entered on autopilot. Those aren't just users who clicked a bad link. Those are users operating with zero threat awareness in the moment. They need different interventions than someone who clicked, paused for two minutes, and then submitted.
Velocity matters. Here's why.
Think about what fast credential submission actually tells you:
- The user didn't read the URL
- The landing page didn't trigger any suspicion
- There was no internal monologue of "should I be doing this?"
- MFA is the only control standing between that click and a full account compromise
Slow submitters (or non-submitters who clicked) tell a different story — the landing page raised suspicion, they hesitated, and in many cases they subsequently reported the email. That's a security culture win hiding inside what looks like a failure metric.
If you're not segmenting your analysis by velocity, you're treating these two groups identically. You shouldn't.
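Segmenting by velocity takes only a few lines once you have per-user click-to-submission times from a campaign export. A minimal sketch — the field names, usernames, and band thresholds below are illustrative assumptions, not values from any particular phishing platform:

```python
# Hypothetical campaign export: seconds between the click and the
# credential submission. Field names and users are illustrative.
submissions = [
    {"user": "a.patel", "submit_seconds": 8},
    {"user": "j.smith", "submit_seconds": 95},
    {"user": "m.jones", "submit_seconds": 640},
]

def velocity_band(seconds: int) -> str:
    """Bucket a submission by how long the user hesitated after clicking."""
    if seconds < 30:
        return "autopilot"   # no visible hesitation
    if seconds < 180:
        return "hesitant"    # paused, then submitted anyway
    return "deliberate"      # long dwell time before submitting

bands: dict[str, list[str]] = {}
for s in submissions:
    bands.setdefault(velocity_band(s["submit_seconds"]), []).append(s["user"])

print(bands)
```

Each band then gets its own intervention: the "autopilot" group is a conversation and a controls review, not another generic training module.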
Reporting velocity matters too — in the other direction.
Most teams celebrate reporters. Fewer teams study how fast those reporters act.
A user who reports a phishing email within 2 minutes of their last engagement with it is categorically different from one who reports it two hours later.
The fast reporters are your human sensors — they're the people who, in a real attack, would be raising the alarm before half the organisation has even opened the email. Identifying them, recognising them, and building on that behaviour is one of the highest-value things a security team can do.
And when you rank reporters by response time, interesting patterns emerge. Often it's not the technical team who leads the leaderboard. It's someone in operations, or finance, who's just genuinely switched on. That's worth knowing. That's a culture conversation, not a training conversation.
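Ranking reporters by response time is equally simple. A sketch, assuming you can export each reporter's last-engagement and report timestamps (the names, timestamps, and format are made up for illustration):

```python
from datetime import datetime

# Illustrative report events: (user, last engagement, report time).
reports = [
    ("c.lee",    "2025-02-27T09:01:00", "2025-02-27T09:02:30"),
    ("d.owens",  "2025-02-27T09:05:00", "2025-02-27T11:10:00"),
    ("f.nguyen", "2025-02-27T09:00:00", "2025-02-27T09:00:45"),
]

FMT = "%Y-%m-%dT%H:%M:%S"

def minutes_to_report(engaged: str, reported: str) -> float:
    """Minutes between last engagement with the email and the report."""
    delta = datetime.strptime(reported, FMT) - datetime.strptime(engaged, FMT)
    return delta.total_seconds() / 60

# Fastest reporters first — these are your human sensors.
leaderboard = sorted(
    ((user, round(minutes_to_report(e, r), 1)) for user, e, r in reports),
    key=lambda row: row[1],
)
for rank, (user, mins) in enumerate(leaderboard, start=1):
    print(f"{rank}. {user}: {mins} min")
```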
The "clicked but didn't submit" group deserves its own analysis.
Every campaign has users who clicked the link but didn't hand over credentials. Most reports lump them into the risk category or ignore them entirely.
But this group often contains something valuable: people whose instincts kicked in at the landing page. The URL looked wrong, or the branding was slightly off, or they just got a gut feeling. Some of them then reported the email. That's almost a perfect security response — clicked under pressure, recognised the threat, escalated.
What about those who clicked, did not submit, but did not report — did they suspect something and just not know what to do next? That's not a click problem. That's a culture problem.
Repeat behaviour is the most under-used signal.
A user who clicked in January, went through training, and clicked again in March is a completely different risk profile to a first-time clicker.
Repeat engagement with phishing simulations — even after training has been completed — flags something that a conversation can often resolve. Sometimes it's a workflow issue (they're time-pressured and clicking fast). Sometimes the training format isn't landing — we can partner with The Cyber Escape Room Co. ® to deliver a unique and engaging alternative to CBT, which sits nicely alongside our Culture360° service for a blended approach to human risk management. Sometimes it's something else entirely.
Without tracking this explicitly, it gets lost in aggregate numbers.
Bringing phishing simulation metrics together.
The analysis above isn't complex. It doesn't require a SIEM or a data science team. It requires an understanding of your campaigns and asking the right questions of the data.
At ICA Consultancy, we built a purpose-designed phishing simulation dashboard that does exactly this — taking raw campaign exports and producing: velocity analysis on credential submissions, reporter leaderboards ranked by response time, repeat offender tracking across campaigns, device and browser breakdowns, training completion overlays, and team-level risk comparisons.


The goal was simple: turn data that usually produces a one-line metric into something that actually informs decisions — about training priorities, about which users need a conversation, about where your controls need strengthening, and about where your security culture is genuinely thriving.
Because the number of people who clicked is just the start of the story.
We are starting to use this with clients who are not on our Culture360° service or platform, helping them understand what is actually happening when they run a campaign — so they can directly influence their staff's behaviours by understanding them.
What does your phishing data tell you beyond the click rate?