
“The past is never dead. It’s not even past.”  — William Faulkner

In the first half of this blog, I explained how MSDCs are like mainframes and supercomputers. In this half, I develop my thesis that the networks connecting compute and storage resources within the MSDC are becoming proprietary, just as they are in mainframes, to create competitive advantage.

Continuing from where I left off, the internal architecture of a mainframe parallels MSDC architectures, as shown in the figure below.

[Figure: mainframe internal architecture compared side by side with MSDC architecture]

Mainframe architectures were (and still are) proprietary, in that how the compute and storage resources are connected is an internal engineering design. Each vendor had its own internal design, which served as competitive differentiation. We are seeing the same thing happen in MSDCs.

The networking technologies MSDCs use to connect compute and storage (at a much larger scale than mainframes) are increasingly becoming proprietary to create competitive differentiation. For example, how AWS provides EC2 by connecting its servers and storage is an internal Amazon engineering design. Similarly, how Google connects all of its servers across geographies to deliver its services is an internal design, created by hundreds of computer science PhDs.

It seems that these MSDCs are increasingly creating networking technologies (protocols and devices) suited to their needs: hyper-scale, hyper-agile infrastructure. This is evident from their SDN efforts, which increasingly push for the entire networking device (from Layer 2 through Layer 7) to become programmable. Multiple examples of proprietary designs have surfaced in the press, under headlines like "Google's mystery switch," "Amazon's cold shoulder," "Googlenet," "Amazon builds everything," "secretive WiFi," and "Amazon builds ASICs." Based on these articles and news items, there is good reason to believe that networking protocols are becoming proprietary.

Traditional open (TCP/IP-based) networking infrastructure has been great for client-server computing, but it is increasingly unsuitable for MSDC needs, which are more about connecting (and tearing down) compute and storage resources for enterprise applications in a very dynamic way. Automating network configuration helps, but it does not address the core problems of efficiency (packet overhead with VXLAN, for example) or speed (allocating and reusing compute and storage resources across geographies). MSDCs therefore have very good reason to invent their own protocols, which may build on some open TCP/IP standards, or may not. This is just like mainframes and supercomputers designing their internal memory buses and connecting massively parallel CPU architectures in proprietary ways, optimized for particular applications' CPU and memory footprints.
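To make the VXLAN overhead point concrete, here is a back-of-the-envelope sketch. The 50-byte figure follows from the IPv4 encapsulation defined in RFC 7348; the inner frame sizes are arbitrary examples chosen for illustration.

```python
# Back-of-the-envelope VXLAN encapsulation overhead (IPv4 outer headers,
# no 802.1Q tag). Header sizes follow RFC 7348's encapsulation format;
# the inner frame sizes below are arbitrary examples.
OUTER_ETHERNET = 14   # bytes
OUTER_IPV4 = 20
OUTER_UDP = 8
VXLAN_HEADER = 8
OVERHEAD = OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + VXLAN_HEADER  # 50 bytes

for inner_frame in (64, 256, 1500):
    on_wire = inner_frame + OVERHEAD
    print(f"{inner_frame:>5}-byte inner frame -> {on_wire} bytes on the wire "
          f"({OVERHEAD / on_wire:.1%} overhead)")
```

For small frames (common in RPC-heavy east-west traffic), the encapsulation tax exceeds 40% of the bytes on the wire, which is exactly the kind of inefficiency a purpose-built proprietary protocol could avoid.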

I am not saying that all of networking is going closed. So far, only the internal MSDC networks are going closed. Enterprises connecting to MSDCs over the Internet/WAN are still going to need open networking technologies (TCP/IP-based stacks). But that holds only until MSDCs also provide a fiber connection to enterprises (as Google is doing) and directly connect their networks with a simple CPE device that has proprietary connectivity into a particular MSDC. This fiber connection could come from a service provider, or directly from the MSDC, which would sideline the service providers as well.

As consolidation in MSDCs continues, with the race to build compute and storage capacity faster and capture more and more of the IaaS market, only a handful of MSDCs are going to be handling about 80-90% of enterprise infrastructure needs. And these MSDCs increasingly have to differentiate themselves through scale and cost, which means they are going to reinvent networking for MSDCs in a closed manner. A whole new ecosystem of vendors can capitalize on this trend and disrupt the traditional infrastructure vendors: MSDCs increasingly need hyper-programmable switches and routers, powered by processors that can run customized network protocols at multi-gigabit speeds. The software and protocol stacks will most probably be created by the MSDCs themselves, to preserve their competitive differentiation in hyper-scale IaaS.

Cisco Intercloud helps overcome these proprietary implementations when enterprises move workloads to these MSDC clouds. It offers APIs that translate down to the MSDC APIs, hiding the proprietary implementations underneath. Further, Cisco's own cloud services are built on open networking standards, which lowers the risk of vendor lock-in.
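To illustrate the general shape of such a translation layer (not Cisco Intercloud's actual API; every class and method name below is hypothetical, invented for this sketch), consider a portable interface that adapters map onto each provider's proprietary calls:

```python
# A hypothetical sketch of an API-translation layer: one portable call
# fans out to whatever proprietary API a given MSDC exposes. All names
# here are invented for illustration; none are real provider or Cisco APIs.
from abc import ABC, abstractmethod


class CloudAdapter(ABC):
    """Translates a generic request into one MSDC's proprietary API."""

    @abstractmethod
    def create_network(self, name: str, cidr: str) -> str:
        ...


class ProviderAAdapter(CloudAdapter):
    def create_network(self, name: str, cidr: str) -> str:
        # Would call provider A's proprietary networking API here.
        return f"providerA-net:{name}:{cidr}"


class ProviderBAdapter(CloudAdapter):
    def create_network(self, name: str, cidr: str) -> str:
        # Would call provider B's equivalent, differently shaped API here.
        return f"providerB-vpc:{name}:{cidr}"


def provision(adapter: CloudAdapter, name: str, cidr: str) -> str:
    # Workloads are written against the portable interface only.
    return adapter.create_network(name, cidr)


print(provision(ProviderAAdapter(), "web-tier", "10.0.0.0/24"))
print(provision(ProviderBAdapter(), "web-tier", "10.0.0.0/24"))
```

The design point is that workloads program against the portable interface, so a proprietary change underneath becomes an adapter update rather than an application rewrite.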

Questions to explore further: Is TCP/IP still relevant for internal MSDC networks? What does Layer-3 routing mean for internal MSDC networks, and is BGP, for example, appropriate for connecting virtual machines (VMs) with storage resources (and end users) in a way that keeps them as local as possible?
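As background for the BGP question: large Clos fabrics already run BGP as their interior routing protocol (an approach documented in RFC 7938), assigning each rack its own AS number so that AS-path length roughly tracks topological distance. A toy sketch of that locality property follows; the ASNs and paths are invented for illustration.

```python
# Toy illustration of why AS-path length can serve as a locality signal in
# a BGP-routed Clos fabric where each rack and tier gets its own ASN (an
# approach documented in RFC 7938). The ASNs and paths are invented here.
routes_to_storage = {
    "same-rack":  ["65010"],                    # via the local ToR only
    "same-pod":   ["65010", "65100"],           # via a pod spine
    "remote-pod": ["65010", "65100", "65200"],  # via the super-spine tier
}

# After its higher-priority tiebreakers, BGP prefers the shortest AS path,
# so traffic naturally gravitates to the most local copy of a resource.
best = min(routes_to_storage, key=lambda name: len(routes_to_storage[name]))
print(f"preferred: {best} via AS path {' -> '.join(routes_to_storage[best])}")
```

Whether that convergence behavior is fast and granular enough for per-VM, per-storage-volume placement is exactly the open question.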

We, as an industry, are indeed coming full circle: from closed mainframes, to open client-server computing, and now back to closed networking in MSDCs. As we set out around this circle again, what could the next client-server model look like? Exciting, innovative, and disruptive times ahead!

Thanks for reading, and I look forward to any and all comments.



Author

Satish Katpally

Senior Marketing Manager

Application Centric Infrastructure, SDN, ONE Software Suites