Cloud requires a datacentre revolution open source can deliver, says Rackspace VP

Nigel Beighton, vice president of technology, Rackspace

Interoperability in the cloud is something most vendors talk about and all enterprises desire, but achieving the goal of large, scalable, interoperable clouds will require a revolution in the datacentre. This is not happening fast enough for Nigel Beighton, vice president of technology at Rackspace. But as the open source philosophy begins to take hold in the development not just of software but of physical datacentre assets, that revolution might not be too far off.

Beighton says that when Rackspace first began operating datacentres the plan was to have one datacentre on each US coast, for both domestic and international customers. But it soon became clear to the hoster-turned-cloud provider that cloud services demanded a level of scale and localisation that outstripped Rackspace’s original plans.

“Customers want infinite resources, and to engineers like myself, you simply can’t have infinite resources,” Beighton tells Business Cloud News. “The only way you supply this effectively is to get good at scaling quickly, but I can’t just have 10,000 servers sitting here and there for obvious economic reasons. It’s really about datacentres, and fundamentally changing datacentre design – that’s new, that’s never really been done before.”

That’s where the Open Compute Project comes in. Rackspace is among a growing number of cloud service providers, OEMs and vendors participating in Open Compute (OCP), an open source project led by Facebook that addresses the challenge of scaling infrastructure as efficiently and economically as possible.

Headed by Facebook’s vice president of infrastructure Frank Frankovsky, OCP was launched two years ago in a bid to bring open source principles to the design of datacentre infrastructure, including servers, switches, cooling and rack design. The project began with the work already being done at Facebook, where the company designs its own datacentre infrastructure. The social media giant open sourced its datacentre designs, server designs and power distribution infrastructure, a move mirrored by Rackspace. The project has since expanded to tackle high-I/O storage through a project called Open Vault (codenamed “Knox” for those who have been following the project) and network switches, and is looking into “disaggregated rack design”: a fully modular rack that enables extremely granular configuration of commodity kit.

Beighton says the project is already starting to show promise, barely two years after its inauguration, echoing sentiments expounded by Frankovsky last week in London. Frankovsky said Facebook was able to reduce datacentre failure rates by a factor of three simply by standardising on OCP kit.

“People always jump to the conclusion that OCP is all about cheap hardware – it’s not. This is about allowing people to scale quickly. And it’s about sourcing. To be able to multisource as much as you can is very critical so your supply chain isn’t bottlenecked. To open source that means you have a much better way to multisource. And you’ve got a standard that people will build to which will allow people to connect clouds together,” Beighton says. “What Facebook learned is, when they open up a new piece of functionality on their site, they don’t have a slow ramp up but a massive jump, and so the whole Open Compute Project is interesting to us because it would allow us to scale.”

Rackspace has bet the farm on another open source project, OpenStack, the increasingly popular software that lets enterprises and cloud service providers build public and private clouds by pooling compute, storage and networking resources and controlling them through standardised APIs.
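As an illustration of what those standardised APIs look like in practice, the following is a minimal sketch using the Python openstacksdk client library. The cloud name “my-openstack-cloud”, image “ubuntu-22.04”, flavour “m1.small” and network “private” are hypothetical placeholders for whatever names a given provider actually exposes.

```python
import openstack

# Connect using a clouds.yaml entry; "my-openstack-cloud" is a placeholder name.
conn = openstack.connect(cloud="my-openstack-cloud")

# Resolve an image, flavour and network by name through the standard APIs.
# These names are illustrative; any OpenStack cloud will publish its own.
image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Boot a server from the pooled compute, storage and network resources.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```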

Beighton says that unlike similar open source projects such as CloudStack and OSv, OpenStack is reaching a critical mass that will help the cloud industry realise the golden dream of interoperable standards. “When you have over 200 companies participating – pretty much all the big players – you’ve got all of these companies involved in doing this because there is a standard, and because that standard is not just the software, it’s the API. The APIs are the same; the toolsets are the same; the way the image repository is set up is the same,” Beighton says. “Right now, if you want to federate and connect, OpenStack is the only game in town. That’s our future going forward, and where we see the industry moving. Some really big core cloud services in some countries, but twice that number federating and connecting to other people and services.”

“We still don’t have the maturity of any standards and we need it, because again, at the scale you’re building at as a service provider, you can’t take the risk of completely single-sourcing everything you do from one provider.”
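To make the “same APIs, same toolsets” argument concrete, here is another small sketch using the Python openstacksdk. The clouds.yaml entries “provider-a” and “provider-b” are hypothetical names standing in for two independently operated OpenStack-based clouds; the point is that the calls issued against each are identical.

```python
import openstack

# Hypothetical clouds.yaml entries for two independent OpenStack providers.
CLOUD_NAMES = ["provider-a", "provider-b"]

for cloud_name in CLOUD_NAMES:
    # The same client library and the same Compute API work against either
    # cloud, regardless of who operates it.
    conn = openstack.connect(cloud=cloud_name)
    print(f"Servers on {cloud_name}:")
    for server in conn.compute.servers():
        print(f"  {server.name}: {server.status}")
```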

But Beighton’s assessment of OpenStack as the de facto standard, allowing clouds built on the software framework to interoperate easily, runs into some difficulty when you look at the three largest cloud players – Amazon, Google and Microsoft – none of which use OpenStack as the foundation of their clouds.

He says the big players, including Amazon, should adopt OpenStack and open up their clouds to interconnect with other infrastructure because it would give customers a much better value proposition. But given the improbability of that happening anytime soon, the stance raises serious questions about the industry’s ability to develop and adopt technical standards that everyone can work to.

Nevertheless, Beighton is encouraged by OpenStack, OCP and the flurry of open source initiatives around software defined networking. Rackspace also works with the Open Networking Foundation (of which Facebook is also a founding member) on OpenFlow, with the aim of developing technical standards for an open source, appliance-based network controller. On SDN, Beighton says it is still too early to put the company’s full weight behind any one initiative, so Rackspace is also working with the likes of Nicira, Cisco, VMware and others – including OpenStack and its SDN-focused Quantum project – to help develop standard approaches.
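As a rough illustration of the OpenFlow model the ONF standardises – a central controller programming forwarding behaviour into switches – the sketch below uses the open source Ryu controller framework in Python. It simply installs a table-miss flow entry so that packets with no matching rule are sent up to the controller. This is a minimal example of the controller pattern under those assumptions, not anything Rackspace or the ONF has published.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    """Minimal OpenFlow 1.3 app: send unmatched packets to the controller."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def switch_features_handler(self, ev):
        datapath = ev.msg.datapath
        ofproto = datapath.ofproto
        parser = datapath.ofproto_parser

        # Match everything (empty match) at the lowest priority...
        match = parser.OFPMatch()
        # ...and punt those packets to the controller for a decision.
        actions = [parser.OFPActionOutput(ofproto.OFPP_CONTROLLER,
                                          ofproto.OFPCML_NO_BUFFER)]
        instructions = [parser.OFPInstructionActions(
            ofproto.OFPIT_APPLY_ACTIONS, actions)]
        flow_mod = parser.OFPFlowMod(datapath=datapath, priority=0,
                                     match=match, instructions=instructions)
        datapath.send_msg(flow_mod)
```

Launched with the ryu-manager command, the app attaches to any OpenFlow 1.3 switch that connects to it, reflecting the same decoupling of control plane from hardware that the SDN initiatives Beighton mentions are trying to standardise.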

“It’s not the servers that are creating the bottlenecks, it’s the networks. And you can’t do cloud unless you have software defined networking. But it’s very early. Right now on software defined networking, you have to track more than one because everyone is delivering a different piece of the puzzle – including OpenStack,” Beighton says. “OCP is looking at this aspect too, but a different piece of SDN.”

Beighton, like many others in this industry, believes that open standards will be key to realising interoperable clouds, with OCP, OpenStack and open source development on SDN delivering the kinds of innovations needed to make the datacentre flexible enough to handle the demands of cloud computing. But the failure to get some of the leading cloud service providers contributing to and adopting these standards “poses some challenges” to their becoming genuine standards. Even so, Beighton says that standards like OpenStack and OCP have reached a critical mass, moving forward at a pace so rapid that they cannot really be ignored.

“I think we’re going to see the rise of the small guys in the cloud space, but they’re going to need to be able to federate and interconnect so they can integrate services and share resources. That’s where open standards come in, and why open source is going to play a huge role in the datacentre revolution.”

Beyond standards, Beighton says the next datacentre revolution needs to come at the storage level, where – at least in the cloud computing world – much of the attention has been placed on software-defined storage, perhaps to the detriment of evolution at the hardware level.

“Storage is the only place not keeping pace. Much as SSD is becoming a little more prevalent, it isn’t scaling to keep pace with data, in my view. There needs to be a fundamental game change in storage units going forward. Right now failure rates are simply too high. I think we’ve solved the software piece, but the hardware has to catch up. We’re still spinning disks at the same speeds we were ten years ago,” Beighton says.

“I think we’ll reach a point very soon where the economics of retrofitting a datacentre for cloud will make the whole endeavour prohibitive, which is a massive challenge for service providers moving forward. Ultimately the cloud computing revolution has created a trickle-down effect of mini revolutions – in networking, virtualisation and the like, and it will be interesting to see who comes up with the next big thing in storage,” he added.

Source: Business Cloud News