The severe floods that hit the north of England and parts of Scotland in December 2015 and January 2016 devastated both homes and businesses, and led to questions about whether the UK is sufficiently prepared to cope with such calamities.
On December 28, the Guardian newspaper went so far as to say that the failure to ensure that flood defences could withstand the unprecedented high water levels would cost at least £5bn. Lack of investment was cited as the cause of the flooding.
Even companies such as Vodafone were reported to have been affected, with the IT press saying that the floods had hit the company’s data centre. A spokesperson at Vodafone told Computer Business Review on January 4: “One of our key sites in the Kirkstall Road area of Leeds was affected by severe flooding over the Christmas weekend, which meant that Vodafone customers in the North East experienced intermittent issues with voice and data services, and we had an issue with power at one particular building in Leeds.”
Many reports said that the flooding restricted access to the building, which was needed in order to install generators after the back-up batteries had run down. Once access became possible engineers were able to deploy the generators and other disaster recovery equipment. However, a recent email from Jane Frapwell, corporate communications manager at Vodafone, claimed: “The effects on Vodafone of flooding were misreported recently because we had an isolated problem in Leeds, but this was a mobile exchange not a data centre and there were no problems with any of our data centres.”
While Vodafone claims that its data centres weren’t hit by the flooding, and that the media had misreported the incident, it is a fact that data centres around the world can be severely hit by flooding and other natural disasters. Floods are both disruptive and costly. Hurricane Sandy is a case in point.
In October 2012 Data Center Knowledge reported that at least two data centres located in New York were damaged by flooding. Rich Miller’s article, ‘Massive Flooding Damages Several NYC Data Centres’, said: “Flooding from Hurricane Sandy has hobbled two data centre buildings in Lower Manhattan, taking out diesel fuel pumps used to refuel generators, and a third building at 121 Varick is also reported to be without power…” Outages were also reported by many data centre tenants at a major data hub at 111 8th Avenue.
At this juncture it’s worth noting that a survey by Zenium Technology has found that half of the world’s data centres have been disrupted by natural disasters, and 45% of UK companies have – according to Computer Business Review’s article of June 17 – experienced downtime due to natural causes.
Claire Buchanan, chief commercial officer at Bridgeworks, points out that organisations should invest in at least two to three disaster recovery sites, but, as with most insurance policies, they often just look at the price rather than the total cost of not being insured. This complacency can lead to a disaster, costing organisations their livelihoods, their customers and their hard-fought reputations. “So I don’t care whether it’s Chennai, Texas or Leeds. Most companies make do with what they have or know, and they aren’t looking out of the box at technologies that can help them to do this”, says Buchanan.
Buchanan suggests that rather than accepting that the flood gates will open and drown their data centres, robbing them of the ability to operate, organisations should invest in IT service continuity.
The problem is that, traditionally, most data centres are placed within the same circle of disruption, which could put all of an organisation’s data centres out of service at once. Organisations place their data centres in close proximity to one another largely because of the limitations of most technologies on the market: placing data centres and disaster recovery sites at a distance brings latency issues. Buchanan explains: “Governed by an enterprise’s Recovery Time Objective (RTO), there has been a requirement for organisations to place their DR centre within fairly close proximity due to the inability to move data fast enough over distance.”
She adds: “Until recently, there hasn’t been technology available that can address the effect of latency when transferring data over distance. The compromise has been: how far away can the DR centre be without too much of a compromise on performance?” With the right technology in place to mitigate the effects of latency, however, it should be possible to situate an organisation’s disaster recovery site as far away as needed, for example in green data centres in countries such as Iceland or in Scandinavia, ensuring that no two of an organisation’s data centres sit within the same circle of disruption.
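The throughput ceiling Buchanan describes comes from a simple relationship: a single TCP stream can carry at most one window of data per round trip, so the longer the round-trip time, the lower the achievable throughput, regardless of link capacity. A minimal sketch of that arithmetic (the 64 KiB window and the round-trip times are illustrative assumptions, not figures from this article):

```python
def max_tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on single-stream TCP throughput in Mbit/s.

    A TCP sender can have at most one full window in flight per
    round trip, so throughput <= window / round-trip time.
    """
    window_bits = window_bytes * 8
    rtt_seconds = rtt_ms / 1000
    return window_bits / rtt_seconds / 1e6

# A classic 64 KiB window over increasing round-trip times:
for rtt in (5, 50, 150):
    print(f"{rtt:>3} ms -> {max_tcp_throughput_mbps(64 * 1024, rtt):6.1f} Mbit/s")
```

With a 64 KiB window, moving the DR site from a 5 ms round trip to a 150 ms one cuts the per-stream ceiling from roughly 105 Mbit/s to about 3.5 Mbit/s, which is why distance alone, not bandwidth, is often the limiting factor.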
Green data centres have many plus points in their favour, most specifically the cost as power and land are comparatively inexpensive. The drawback has always been the distance from European hubs and the ability to move the data taking into account distance and bandwidth. With 10Gb bandwidth starting to become the new normal, coupled with the ability to move data unhindered at link speed, there is no reason why enterprises cannot now take this option.
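To put the 10Gb figure in perspective, a back-of-the-envelope calculation shows how long a bulk replication job takes at link speed versus a latency-throttled fraction of it. The 100TB dataset size and the 30% effective-utilisation figure below are illustrative assumptions, not numbers from the article:

```python
def transfer_hours(data_terabytes: float, link_gbps: float,
                   utilisation: float = 1.0) -> float:
    """Hours needed to move a dataset over a link at a given effective utilisation."""
    bits = data_terabytes * 1e12 * 8        # dataset size in bits (decimal terabytes)
    rate = link_gbps * 1e9 * utilisation    # effective bits per second actually achieved
    return bits / rate / 3600

# Replicating a hypothetical 100 TB dataset over a 10 Gbit/s link:
print(round(transfer_hours(100, 10), 1))       # driven at full link speed
print(round(transfer_hours(100, 10, 0.3), 1))  # at 30% utilisation, e.g. when latency throttles TCP
```

At full link speed the job takes a little over 22 hours; at 30% utilisation it stretches past three days, which illustrates why “moving data unhindered at link speed” matters as much as the headline bandwidth.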
Clive Longbottom, client services director at analyst firm Quocirca, explains the traditional approach. “The main way of providing business continuity has been through hot replication,” he explains. “Therefore, you need a full mirror of the whole platform in another data centre, along with active mirroring of data. This is both costly and difficult to achieve.”
But as most companies already have the network infrastructure in place, they should be looking for solutions that won’t cost the earth. For this reason, organisations should look outside the box and consider smaller, more innovative companies to find solutions to the problems they face: solutions that can mitigate latency using the organisation’s existing infrastructure, making it unnecessary to buy new kit in order to have a dramatic impact.
“With products like WANrockIT and PORTrockIT you don’t need dark fibre networks or a low-latency network, because the technology provides the same level of performance whether the latency is 5, 50 or 150ms”, says David Trossell, CEO of Bridgeworks. He claims that the biggest cost is the network infrastructure, but “you can reduce the costs considerably with these solutions, and it widens the scope of being able to choose different network providers as well.”
“CVS Healthcare, for example, wanted electronic transfer DR between two of its sites, but the latency killed performance and so it still had to use a man in a van to meet its required recovery time objective (RTO)”, explains Trossell. He adds: “They had electronic transfer to improve the RTO, but this was still too slow, and yet with WANrockIT in the middle we got the RTO down to the same or better, and we reduced the recovery point objective (RPO) from 72 hours down to four hours.” Before this, CVS Healthcare was doubling up on its costs by paying for both the man in a van and the electronic transfer.
Plan for continuity
While companies need to plan for business continuity first and foremost, they also need a good disaster recovery plan. Buchanan and Trossell have found that many organisations lack adequate planning: they don’t see a need for it until disaster strikes, because, like everyone else, organisations quite often don’t think it will happen to them. For example, what would happen if the Thames Barrier failed to prevent Canary Wharf from being flooded? It is, after all, located on a flood plain, and there are many disaster recovery sites in its vicinity.
Longbottom raises a key challenge. “If flooding such as we have just seen happens – what was meant to be a once-in-a-hundred-years event – then planning for that puts the costs to the data centre owner out of reach,” he says. “Having water levels two or more metres above normal means that attempting to stop the ingress of water becomes exceedingly difficult, and pumping it out is just as hard.”
He therefore advises organisations to have two plans, for disaster recovery and business continuity. It’s also important to remember that IT service continuity is multi-tiered, and these two considerations are a part of it.
To ensure that the two plans work effectively as well as efficiently together, organisations need to understand their business-related risk profile. Longbottom says this will also help them to define how much the business is willing to spend on continuity, and it allows for some forethought about the types of risk that will affect the business. Disaster recovery sites may need to be located in different countries to ensure that the investment in IT service continuity really is the best insurance policy.
After the flood: Why IT service continuity is your best insurance policy