The First 72 Hours After Ransomware - What Most Recovery Plans Miss

The plan looks great until you need it

Ransomware response plans are standard fare in most organisations now. There is a document somewhere that describes who to call, what to isolate, how to communicate, and when to engage external support. On paper, it is reassuring. In practice, it almost never survives first contact with an actual incident.

We have supported organisations through ransomware events, and the pattern is remarkably consistent. The first few hours are chaotic. Nobody is sure what has been encrypted. The backup team cannot confirm whether backups are clean or compromised. Communications are improvised. And the people who are supposed to make decisions are either unavailable or unsure of their authority.

The gap between the plan and reality is where damage compounds. Not from the encryption itself, but from slow, uncertain decision-making in the critical window where containment and recovery are still possible.

Hour 0-12: containment is everything

The first priority is stopping the spread. This sounds obvious, but in practice it requires knowing exactly which systems to isolate, having the authority to do it immediately, and accepting the business impact of taking services offline before you fully understand what has happened.

Most organisations hesitate at this stage. They want more information before making disruptive decisions. That instinct is understandable but costly. Every hour of delay in containment is an hour where encryption continues to spread, where data exfiltration may be ongoing, and where the attacker still has a foothold in the environment.

The organisations that handle this well have pre-authorised containment actions. Their incident response team does not need to schedule a call with the CEO to isolate a compromised network segment. The authority to act is built into the plan and exercised in advance through tabletop exercises.
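One way to make pre-authorisation concrete is to encode the approved actions as data the on-call responder can check and log against during an incident. The sketch below is illustrative only, not a real integration: the segment names, the service account, and the approval list are assumptions, and the actual isolation step would call whatever firewall, NAC or EDR tooling your environment uses.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical pre-authorised containment actions, agreed and signed off in
    # advance. Anything not on this list still needs an explicit decision.
    PRE_AUTHORISED = {
        ("isolate_segment", "finance-vlan"),
        ("isolate_segment", "file-servers"),
        ("disable_account", "svc-backup-admin"),
    }

    @dataclass
    class ContainmentAction:
        action: str
        target: str
        requested_by: str

    def is_pre_authorised(req: ContainmentAction) -> bool:
        """True if this action was approved in advance and can be executed now."""
        return (req.action, req.target) in PRE_AUTHORISED

    def execute(req: ContainmentAction) -> None:
        """Log the decision and dispatch the action; the dispatch itself is a stub."""
        timestamp = datetime.now(timezone.utc).isoformat()
        if not is_pre_authorised(req):
            print(f"{timestamp} ESCALATE: {req.action} {req.target} needs sign-off")
            return
        print(f"{timestamp} EXECUTE: {req.action} {req.target} (requested by {req.requested_by})")
        # Real isolation would call your firewall, NAC or EDR tooling here.

    if __name__ == "__main__":
        execute(ContainmentAction("isolate_segment", "file-servers", "on-call-analyst"))
        execute(ContainmentAction("isolate_segment", "guest-wifi", "on-call-analyst"))

The value is less in the code than in the discipline it represents: the analyst can see immediately whether an action is already approved or needs escalation, and every decision leaves a timestamped record for the post-incident review.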

Hour 12-48: the backup question

This is where most recovery plans reveal their weakness. The plan says "restore from backup." But when the team tries to execute, they discover one or more of the following: backups were connected to the same network and may be compromised, the last verified backup is weeks old, restoration has never been tested at scale, or the recovery time for critical systems exceeds what the business can tolerate.

We have seen organisations with enterprise backup solutions costing six figures a year discover during an incident that their recovery point objective is three weeks and their recovery time objective is measured in days, not hours. The tools were there. The testing was not.

The fix is straightforward but requires discipline. Test recovery regularly. Not just the backup software - the actual end-to-end process of restoring a critical system from backup into a clean environment and confirming it works. Time it. Document it. Do it again when infrastructure changes.
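A minimal sketch of what "time it, document it" can look like in practice follows. The restore command, target path and expected checksum are placeholders for whatever your backup tooling and critical system actually are; the point is that the drill produces a measured recovery time and a pass or fail integrity result rather than an assumption.

    import hashlib
    import subprocess
    import time
    from pathlib import Path

    # Hypothetical restore drill: the command, target path and known-good checksum
    # are placeholders for your actual backup tooling and critical system.
    RESTORE_COMMAND = ["restore-tool", "--snapshot", "latest", "--target", "/restore/finance-db"]
    RESTORED_FILE = Path("/restore/finance-db/finance.db")
    EXPECTED_SHA256 = "replace-with-known-good-checksum"

    def run_drill() -> None:
        started = time.monotonic()
        subprocess.run(RESTORE_COMMAND, check=True)  # fail loudly if the restore fails
        elapsed_minutes = (time.monotonic() - started) / 60

        digest = hashlib.sha256(RESTORED_FILE.read_bytes()).hexdigest()
        verified = digest == EXPECTED_SHA256

        # Record the result so the recovery time is documented, not guessed.
        print(f"restore took {elapsed_minutes:.1f} minutes, integrity check passed: {verified}")

    if __name__ == "__main__":
        run_drill()

Run the same drill after every significant infrastructure change, and the output becomes a record of whether your real recovery time is drifting away from what the business can tolerate.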

Hour 48-72: communication and decisions

By this stage, the technical picture is usually clearer. The questions shift from "what happened" to "what do we tell people" and "how do we get back to normal." This is where regulatory obligations, customer notifications, insurance claims, and law enforcement engagement all converge.

The organisations that manage this well have pre-drafted communications templates, a clear escalation path for regulatory notification, and legal counsel already engaged. The ones that have not prepared spend these critical hours drafting emails from scratch while their legal team reads the incident response plan for the first time.
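As a rough illustration of what pre-drafted means in practice, the sketch below fills an agreed template at incident time instead of writing from a blank page. The wording, fields and recipients are hypothetical; the real text would come from your legal and communications teams well before any incident.

    from datetime import date
    from string import Template

    # Hypothetical pre-drafted customer notice, agreed with legal and comms in advance.
    CUSTOMER_NOTICE = Template(
        "On $incident_date we identified a ransomware incident affecting $affected_systems. "
        "We have contained the affected systems and engaged external specialists. "
        "We will provide a further update by $next_update."
    )

    def draft_customer_notice(affected_systems: str, next_update: str) -> str:
        """Fill the pre-agreed template so nobody is writing from scratch under pressure."""
        return CUSTOMER_NOTICE.substitute(
            incident_date=date.today().isoformat(),
            affected_systems=affected_systems,
            next_update=next_update,
        )

    if __name__ == "__main__":
        print(draft_customer_notice("the customer billing platform", "18:00 tomorrow"))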

What to do before it happens

The best time to test your ransomware response is before you need it. Run a realistic simulation. Not a tabletop discussion where everyone agrees the plan looks good - an exercise where the backup team actually tries to restore critical systems, where decision-makers practise authorising containment under pressure, and where communications are drafted against the clock.

If you have not done this recently, assume your plan has gaps. They all do. The question is whether you find them in a test or during an incident.