Cisco has admitted to a cloud configuration cockup that erased customer data.
The networking giant explained: “On August 3rd, 2017, our engineering team made a configuration change that applied an erroneous policy to our North American object storage service and caused certain data uploaded prior to 11:20AM Pacific time on August 3 to be deleted.”
“Our engineering team is working over the weekend to investigate what data we can recover,” Cisco’s advisory says, adding that the company is working on “tools we can build to help our customers specifically identify what has been lost from their organization.”
The Register imagines some Meraki users will already know what was lost, because among the data erased was “Enterprise Apps”. Missing interactive voice response menus will also be a bit of a giveaway.
Some of what was lost won’t be a huge hassle for users: voicemail greetings can be re-recorded, hold-music restored, custom logos retrieved from whoever designed them and uploaded anew.
But the incident is a huge mess for Cisco, because Meraki’s sold on the basis that its supporting cloud service removes much of the grunt work required to run networks and voice systems. That Meraki’s team made such a substantial mistake – and seemingly lacked data protection tools to cover such an eventuality – is a very big black mark on its reputation, not least because it leaves Cisco open to comparisons to far-smaller outfit GitLab, which discovered all five of its backup tools were dysfunctional when it lost production data in early 2017.
At least Cisco’s not alone in having its staff muck up its cloud: AWS’ epic S3 outage in February 2017 was also caused by an engineering error, while Google staff once caused an outage by patching the wrong routers.
This new incident will, however, reinforce the arguments of those who advocate cloud-to-cloud backup, failover options that kick in when clouds croak, or staying on-premises and/or in managed services situations where you can see exactly what’s going on, rather than trusting cloud operators’ promises of colossal scale and attention to detail. ®
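For readers weighing that cloud-to-cloud backup advice, here is a minimal, hypothetical sketch of the idea: periodically copy everything from a primary object store to a second, independently operated one, so a bad policy applied to the first cannot also wipe the copy. This is an illustration only, not anything Cisco or Meraki provides; the endpoints, bucket names and credentials are placeholders, and it assumes two S3-compatible stores reachable via boto3.

```python
"""Illustrative cloud-to-cloud backup sketch (placeholder names, not Meraki tooling)."""
import boto3

# Two independent S3-compatible providers: a deletion on the primary
# does not propagate to the backup. Endpoints are hypothetical.
primary = boto3.client("s3", endpoint_url="https://primary.example.com")
backup = boto3.client("s3", endpoint_url="https://backup.example.net")

SOURCE_BUCKET = "customer-assets"       # hypothetical bucket names
BACKUP_BUCKET = "customer-assets-copy"


def replicate_bucket(src, dst, src_bucket, dst_bucket):
    """Copy every object from src_bucket to dst_bucket, streaming each one through memory."""
    paginator = src.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=src_bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            body = src.get_object(Bucket=src_bucket, Key=key)["Body"].read()
            dst.put_object(Bucket=dst_bucket, Key=key, Body=body)
            print(f"backed up {key} ({obj['Size']} bytes)")


if __name__ == "__main__":
    replicate_bucket(primary, backup, SOURCE_BUCKET, BACKUP_BUCKET)
```

Run on a schedule (and paired with versioning or retention on the backup side), a copy job like this is the sort of belt-and-braces measure that would have blunted an erroneous deletion policy such as Meraki’s.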
UPDATE: A Cisco spokesperson has sent us a statement about the affair, as follows.
“On Saturday, August 5, 2017, Cisco notified customers that a configuration error had inadvertently deleted a limited number of Meraki customers’ user-uploaded data. The issue affected files in a North American data center storage service, and the incorrect setting was immediately fixed. We are taking measures to reduce the future risk of this type of configuration error. Network configuration data was not involved, so most affected customers should not experience an issue with network operations. No customer data was compromised. Cisco is working to restore the deleted files and will continue to update customers in real time via a Meraki Support Page. Cisco will continue to actively engage with our customers to provide whatever support is needed to remediate this issue.”
The company also says “Our engineering team has determined that, in some cases, assets may be recoverable through our data cache, and a recovery effort is currently underway.” To help those customers whose data is either gone or has not yet been recovered, the company is cooking up a bulk upload tool to ease the restoration process.