CI Data Center Trend #8: Opening up Data Center Designs
The design of a company’s data center and the components inside it have long been closely guarded secrets at the major tech giants, many of whom view them as part of their competitive advantage in the market. Google, Microsoft, LinkedIn, eBay, Fidelity and other hyperscale players have historically kept their designs under wraps. It wasn’t until the last six or seven years that these large industry players started to collaborate on best practices for operating a data center – seeking more efficient deployments and cost savings.
However, best practices can only take efficiency so far before custom designs have to be engineered and implemented to achieve further gains. This is where Facebook found itself in 2009 as it looked to construct its very own enterprise data center to meet the growing needs of its service and rein in rising infrastructure costs. Facebook examined every aspect of data center design, seeking to lower initial costs and improve the overall efficiency of the facility and the equipment within. The result was its Prineville, Oregon data center, which delivered a 24% cost reduction, 38% less energy usage and a PUE of 1.08 compared to the company’s colocation data centers.
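To put that 1.08 PUE figure in context, Power Usage Effectiveness is simply the ratio of a facility’s total energy draw to the energy delivered to IT equipment. Here is a minimal sketch of the calculation – the energy figures below are made-up illustrative numbers, not Facebook’s actual measurements:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy divided by
    IT equipment energy. A perfect score is 1.0, meaning every watt
    entering the building goes to the IT gear rather than to cooling,
    power conversion losses, lighting, etc."""
    return total_facility_kwh / it_equipment_kwh

# A hypothetical facility drawing 1,080 kWh overall for 1,000 kWh of IT load
print(round(pue(1080, 1000), 2))  # 1.08
```

A typical enterprise data center of that era ran closer to a PUE of 1.8–2.0, which is what makes 1.08 such a striking result: only 8% overhead on top of the IT load.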
Following the completion of the Prineville data center, Facebook decided to open up its custom designs and specifications to the data center community. Looking to share what it had learned, as well as to learn from other industry experts, Facebook, with help from Goldman Sachs and Intel, formed the Open Compute Project, a non-profit organization that brings together engineers from around the world to design and deliver the most efficient server, storage and data center hardware for scalable computing.
Fast forward to today, and industry giants such as Microsoft, Google, Fidelity and Rackspace have joined the ranks of the Open Compute Project as contributing member companies. They share their designs for custom servers, switches, power distribution and other data center components. These designs, however, differ from the traditional servers and switches you may find in smaller enterprise data centers: there’s generally no vanity faceplate dressing up the front of the server, few screws to be found, and only the necessary components included.
While these companies contribute designs and collaborate with each other, there can still be several versions of a single product – whether a server or a rack. Each participating company tends to develop its own solution and, once it is near completion, opens up the design and specifications to the project community. Facebook and Microsoft have each contributed server designs that seem to compete with one another. However, Microsoft’s design serves its own applications and use cases better than Facebook’s latest designs would, as it ultimately comes down to the application that will run on the server.
Regardless of the design or the application running on the server, each company has been contributing radical new ideas about what a server looks like and the components found inside. While direct contributions and benefits within the Open Compute Project may still be concentrated at the platinum membership level, the industry as a whole is benefiting indirectly. Traditional architectures and designs are being challenged by industry giants and leaders, forcing equipment vendors and manufacturers to rethink their own approaches. We are starting to see a more rapid development cycle for hardware and a renewed focus on increasing efficiency wherever possible. It is no longer enough to sell equipment with a nice-looking front bezel and the promise of high performance – customers deploying at larger scale now expect that same equipment to be efficient not only in its manufacturing process and components but also in its power draw and design.