Recently, the conversations I have been having about Software Defined Networks have shifted from supplying agile networking for VM provisioning and live migrations to looking at the problem through the lens of the application team. In the past, I spoke about provisioning VMs and moving VMs as a surrogate for the application. An application and a VM are not always in a one-to-one ratio. This is a convenient simplification for everyone except perhaps the IT operations teams provisioning multi-server, tiered, or distributed server applications.
In this blog post, I want to complement Gary Kinghorn’s blog, The Promise of an Application Centric Infrastructure (ACI), and briefly share insights from talking with many IT operations managers and architects responsible for traditional enterprise applications as well as the new distributed applications for cloud infrastructure. What they are saying has profound implications for cloud infrastructure.
Conventional IT organizations have dedicated teams managing their applications, compute, network, security, and storage infrastructure. These functional organizations must work together much like runners in a relay race to manage the lifecycle of the applications used by an enterprise. These runners need to be agile, but the racecourse is different for every race.
When you look at some categories of applications side by side, the implications for business agility – the speed at which a business can execute on a strategy (especially one dependent on IT) – and the requirements on application, network, and security teams become apparent.
Productivity applications like Microsoft Exchange and Web 2.0 collaboration applications like SharePoint support lots of client-server traffic (North-South traffic) for the hundreds or thousands of end users of these applications within the enterprise. As these server deployments scale up in users, the load is balanced across the edge servers using server load balancers or application delivery controllers. Additionally, since these applications are highly exposed to threats from the external network, they have priority requirements for security devices to prevent Denial of Service attacks and deliver secure access.
To scale I/O-intensive applications such as SQL Server databases, IT organizations use clustered database servers to handle the transactions or queries, with deterministic network performance between servers and storage arrays measured by latency and assured bandwidth.
New distributed cloud and big data applications like Hadoop can employ tens or hundreds of servers with unique I/O patterns between servers and terabytes of collected data, which require guaranteed I/O characteristics for optimal performance between servers, local data, and the big data repositories. The traffic runs between servers and shared storage within the data center and is often characterized as heavy East-West data center traffic.
Every installation has its own fingerprint of application requirements, but the chart below provides a useful comparison and contrast of the requirements for these categories of applications.
Source: Cisco interviews with leading IT DevOps administrators, 2013
IT organizations that want to work faster need to define application requirements along these major dimensions and learn to accelerate the workflow of application deployment across pooled network, security, compute, and storage infrastructure.
Last June, Cisco revealed its vision for Application Centric Infrastructure, an innovative secure architecture that delivers centralized, application-driven policy automation, management, and visibility for physical and virtual networks from a single point of management. It provides a common programmable automation and management framework for the network, application, security, services, compute, and operations teams, making IT more agile while reducing application deployment time.
I’m happy to report that Cisco UCS Director (formerly Cloupia) has been selected as a finalist for the 2013 Storage, Virtualisation & Cloud (SVC) Awards! Please take a moment and vote for UCS Director at http://cs.co/SVCAward.
This finalist nomination recognizes the innovation and differentiation that Cisco UCS Director provides for end-to-end converged infrastructure management — including automation for both virtual and physical resources across compute, network, and storage.
The video below provides a good overview of Cisco UCS Director and its benefits for IT organizations:
The sweet spot for Cisco UCS Director is in managing converged infrastructure based on Cisco’s Unified Computing System (UCS) with Cisco Nexus switches and third-party storage — focusing on our market-leading integrated systems including the FlexPod solution with NetApp, as well as VCE’s Vblock Systems and our VSPEX solutions with EMC storage.
But the beauty of Cisco UCS Director is that it can also manage heterogeneous environments, including non-Cisco infrastructure and multiple hypervisors. Whether you call it your single-pane-of-glass or one ring to rule them all, it’s a highly innovative and comprehensive infrastructure management solution for your data center operations. These capabilities and more are highlighted in the award nomination which you can read here.
Steria is a leading provider of IT-enabled business services with 20,000 employees worldwide. Steria serves private and public sector organizations across the globe – with operations across 16 countries throughout Europe, India, North Africa, and Southeast Asia. With their expertise in IT and business outsourcing, Steria provides innovative solutions to help their clients improve efficiency and profitability.
One of Steria’s recent challenges was how to satisfy its clients’ desire to improve employee productivity and enable employees to work from any device. While IT-as-a-Service is becoming an increasingly competitive market in the Americas, offerings in Europe are still sparse – so this was also an opportunity to provide competitive differentiation for Steria’s services. Steria turned to Cisco to solve three key problems:
1. Providing employees with instant, on-demand provisioning of desktop software and easy access to workplace IT resources;
2. Enabling employees to work from any device, anywhere, thereby optimizing computing Total Cost of Ownership (TCO); and
3. Providing a simple, user-friendly portal and service catalog to make software offerings easily accessible.
At Cisco live! Orlando in June, Cisco unveiled its vision for an Application Centric Infrastructure (ACI), a next-generation, secure data center fabric design. At the time, we were only able to unveil key conceptual aspects of ACI, but as we lead up to more detailed product announcements later this fall, we want to bring a little more clarity to the ACI vision and what it will mean for customers, and to set the context for those announcements.
[Join our ACI Announcement Webcast on November 6, 7:30 AM PT/10:30 ET/15:30 GMT. Register here.]
ACI is designed around an application policy model, allowing the entire data center infrastructure to better align itself with application delivery requirements and the business policies of the organization. The entire objective of ACI is to allow the data center to respond dynamically to the changing needs of applications, rather than having applications conform to constraints imposed by the infrastructure. These policies automatically adapt the infrastructure (network, security, application, compute, and storage) to the needs of the business to drive shorter application deployment cycles.
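To make the idea of an application policy model concrete, here is a minimal, purely illustrative sketch in Python. The tier names, port numbers, and data layout are hypothetical examples invented for this post – they are not ACI syntax or an ACI API – but they capture the key inversion: the application declares its tiers and permitted connections, and the infrastructure enforces that declaration rather than the application conforming to network constraints.

```python
# Hypothetical example only: a declarative application policy that groups
# workloads into tiers and states which tier-to-tier connections the
# fabric should permit. None of these names come from the ACI product.

APP_POLICY = {
    "app": "web-store",
    "tiers": ["web", "app", "db"],
    # Each rule: (consumer tier, provider tier, TCP port the provider exposes)
    "allowed": [
        ("web", "app", 8080),
        ("app", "db", 1433),
    ],
}

def flow_permitted(policy, src_tier, dst_tier, port):
    """Return True if the policy declares this tier-to-tier flow as allowed."""
    return (src_tier, dst_tier, port) in set(policy["allowed"])

# The web tier may reach the app tier on 8080, but never the database directly.
print(flow_permitted(APP_POLICY, "web", "app", 8080))  # True
print(flow_permitted(APP_POLICY, "web", "db", 1433))   # False
```

The design point is that the policy travels with the application: deploy the same declaration on different infrastructure and the fabric adapts, instead of the operations teams re-translating requirements into device-by-device configuration.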
ACI offers a highly optimized, application-aware fabric ideal for both physical and virtual workloads. Innovation in ASIC, hardware, software and orchestration results in greater scale, agility, visibility, optimization and flexibility.
With so much misinformation (dis-information?) about UCS running around in the ether, I thought the straightforward comparison offered here would be valuable. It is important to dispel myths and analyze reality before making the important decisions around server and networking refreshes and upgrades, which by necessity affect long-term data center architecture. I hope you will find this presentation -- Cisco UCS, HP and IBM -- A Blade Architecture Comparison, useful in your decision-making process.
You could, and probably should, ask what is left out? That’s pretty easy. I did not specifically call out Performance and TCO, for a good reason. If you can execute on the three bullets above like Cisco UCS does, Performance and TCO are the natural derivatives. You shouldn’t have to target them separately. It’s kind of a “If you build it, they will come” scenario. That’s why I made the statements in the TCO and Architecture blog that “…Server cost is irrelevant (to OpEx) because: changing its contribution to total TCO has a vanishingly small impact….” and “…It [architecture] is the single most important component of OpEx…” For more on this and how server cost and TCO intersect, please check out this blog -- Blade Server TCO and Architecture – You Cannot Separate Them. It takes a look at the OpEx and CapEx components of TCO, and how altering either of them affects the actual total 3-year TCO. You may be surprised.
Cisco is providing trade-in credits for customers’ old generation servers and blade chassis, helping ease the transition and upgrade to a new UCS blade architecture. The UCS Advantage presentation below has more details on this fantastic program that can further enhance the already compelling TCO benefit of upgrading to Cisco UCS.
Special note: For more on the benefit that Cisco UCS delivers for I/O and throughput, I suggest a great blog by Amit Jain -- How to get more SAN mileage out of UCS FI. Amit does an excellent comparison and contrast of FC and FCoE technologies (“…8 Gb FC yields 6.8 Gb throughput while 10 Gb FCoE yields close to 10 Gb throughput…”).
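The throughput gap Amit quotes comes straight from line encoding: 8G FC runs at an 8.5 GBaud line rate with 8b/10b encoding (20% overhead), while 10G Ethernet/FCoE runs at 10.3125 GBaud with 64b/66b encoding (about 3% overhead). A quick back-of-the-envelope check:

```python
# Verify the quoted FC vs FCoE throughput figures from encoding overhead.
# 8G FC: 8.5 GBaud line rate, 8b/10b encoding (8 payload bits per 10 on the wire).
# 10GE/FCoE: 10.3125 GBaud line rate, 64b/66b encoding.

def effective_throughput(line_rate_gbaud, payload_bits, coded_bits):
    """Usable data rate in Gb/s after subtracting line-encoding overhead."""
    return line_rate_gbaud * payload_bits / coded_bits

fc_8g = effective_throughput(8.5, 8, 10)          # 8.5 * 0.8  = 6.8 Gb/s
fcoe_10g = effective_throughput(10.3125, 64, 66)  # 10.3125 * 64/66 = 10.0 Gb/s

print(f"8G FC:    {fc_8g:.1f} Gb/s")    # 6.8 Gb/s
print(f"10G FCoE: {fcoe_10g:.1f} Gb/s") # 10.0 Gb/s
```

So "10 Gb" FCoE really does carry close to 10 Gb/s of data, while "8 Gb" FC tops out at 6.8 Gb/s – the labels describe different things, and the encoding math is where the extra mileage comes from.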