There is a lot of noise about AI and cybersecurity. Most of it focuses on hypothetical scenarios or vendor marketing. What we are seeing in real engagements is more straightforward and more urgent: AI is making existing attack techniques faster, cheaper, and harder to detect.
Phishing emails generated by large language models no longer contain the spelling mistakes and awkward phrasing that most people rely on to spot them. They are contextually aware, grammatically clean, and increasingly personalised. We have reviewed phishing campaigns during incident response work where the content was indistinguishable from legitimate internal communications. Traditional email filtering catches some of it, but the hit rate is dropping.
Deepfake audio is another area where the gap between capability and defence is widening quickly. We have seen cases where AI-generated voice calls were used to authorise payments and bypass verification processes that relied on "recognising the caller". If your organisation still uses voice confirmation as a control, that is a vulnerability now, not a safeguard.
AI-powered malware is not science fiction. Polymorphic malware that changes its signature dynamically has existed for years, but AI is making it more effective. We are seeing malware that adjusts its behaviour based on the environment it lands in - detecting whether it is in a sandbox, altering its execution pattern to avoid endpoint detection, and timing its actions to blend with normal system activity.
For organisations still relying primarily on signature-based antivirus, this is a serious problem. Behavioural detection and EDR are no longer optional - they are baseline. And even these tools need tuning and context to be effective against adaptive threats.
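To make the distinction concrete, here is a minimal sketch of a behavioural rule as opposed to a signature: it scores a process on what it does (a document reader spawning a script host, execution context, an early burst of outbound connections) rather than on a file hash. The telemetry fields, indicator lists, and threshold are assumptions for illustration only; real EDR schemas and tuning differ by vendor.

```python
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    # Assumed telemetry fields - real EDR schemas vary by vendor.
    parent: str          # process that spawned this one, e.g. "winword.exe"
    child: str           # the new process image name
    child_path: str      # where the new process's image lives on disk
    outbound_hosts: int  # distinct external hosts contacted in its first minute

# Behavioural indicators, not signatures: each looks at what the process does,
# so a repacked payload with a brand-new hash can still score.
DOCUMENT_HANDLERS = {"winword.exe", "excel.exe", "outlook.exe", "acrord32.exe"}
SCRIPT_HOSTS = {"powershell.exe", "wscript.exe", "cscript.exe", "mshta.exe"}
USER_WRITABLE_HINTS = ("\\temp\\", "\\appdata\\", "\\downloads\\")

def behaviour_score(event: ProcessEvent) -> int:
    """Rough risk score based on behaviour rather than file signatures."""
    score = 0
    if event.parent.lower() in DOCUMENT_HANDLERS and event.child.lower() in SCRIPT_HOSTS:
        score += 3  # document reader spawning a script host
    if any(hint in event.child_path.lower() for hint in USER_WRITABLE_HINTS):
        score += 2  # running out of a user-writable scratch location
    if event.outbound_hosts >= 5:
        score += 2  # early burst of outbound connections
    return score

# Example: Word spawning PowerShell that immediately talks to several hosts.
event = ProcessEvent(
    parent="WINWORD.EXE",
    child="powershell.exe",
    child_path=r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    outbound_hosts=7,
)
# The threshold would need tuning against real telemetry, as noted above.
print("alert" if behaviour_score(event) >= 5 else "ok")
```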
The practical response is not to panic about AI. It is to recognise that the assumptions behind several common controls are now weaker than they were two years ago. Specifically:
Email security needs to move beyond content filtering to behavioural analysis - who is sending what, to whom, and does the pattern make sense? A rough sketch of that kind of baselining follows this list.
Identity verification processes that rely on voice, appearance, or knowledge-based authentication need revisiting. Passwordless and hardware-based authentication are more resilient to AI-powered social engineering; a second sketch after this list shows the underlying idea.
Security awareness training needs updating. Teaching people to "look for spelling mistakes" is no longer sufficient. Training should focus on process verification - does this request follow the normal approval path, regardless of how convincing it looks?
Detection and response capabilities need to account for adaptive adversaries. Static rules and periodic scans are not enough when the threat changes its approach mid-attack.
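On the email point: a minimal sketch, assuming you can pull sender, recipient, and subject out of your mail logs, of what first-contact baselining might look like. The field names, keyword list, and hard-coded history are illustrative, not any particular product's detection logic.

```python
from collections import defaultdict

# Toy sender -> recipients history. In practice this would be built from
# weeks of mail logs, not hard-coded.
observed_pairs: dict[str, set[str]] = defaultdict(set)

def record(sender: str, recipient: str) -> None:
    """Add an observed sender/recipient pairing to the baseline."""
    observed_pairs[sender.lower()].add(recipient.lower())

def is_unusual(sender: str, recipient: str, subject: str) -> bool:
    """Flag messages where the pairing has never been seen before and the
    subject touches payment or credential themes."""
    first_contact = recipient.lower() not in observed_pairs[sender.lower()]
    risky_topic = any(term in subject.lower()
                      for term in ("invoice", "payment", "bank details", "password"))
    return first_contact and risky_topic

# Build a small baseline, then test a message that breaks the pattern.
record("cfo@example.com", "finance@example.com")
record("cfo@example.com", "ceo@example.com")

print(is_unusual("cfo@example.com", "ap-clerk@example.com",
                 "Urgent: updated bank details for supplier payment"))  # True
print(is_unusual("cfo@example.com", "finance@example.com",
                 "Q3 forecast"))                                        # False
```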
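And on the identity point: a stripped-down sketch of hardware-bound, challenge-response authentication using an Ed25519 keypair - the idea behind FIDO2/WebAuthn, not the actual protocol. The private key never leaves the device and there is nothing a convincing caller can talk someone into reading out. The library and key type here are illustrative choices.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrolment: the private key is generated on (and in practice never leaves)
# the user's authenticator; the server stores only the public key.
device_key = Ed25519PrivateKey.generate()
server_registered_public_key = device_key.public_key()

# Login: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...and only the enrolled device can produce a valid signature over it.
signature = device_key.sign(challenge)

try:
    server_registered_public_key.verify(signature, challenge)
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

In a real deployment this is roughly what a FIDO2 security key or platform authenticator does behind the WebAuthn API, with origin binding and attestation layered on top.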
AI has not changed the fundamentals of good security. Identity, access control, detection, and response still matter most. What has changed is the speed and sophistication of the attacks these controls need to withstand. Organisations that treat AI threats as tomorrow's problem are already behind.