Welcome back to the latest episode of Engineers Unplugged, featuring the inimitable dynamic duo of storage, Vaughn Stewart (@vStewed) and Chad Sakac (@sakacc). They discuss three key trends in storage today: flash, distributed DAS, and the software control plane. Storage is the new black; let’s learn why:
And of course, it wouldn’t be complete without a unicorn, a flash unicorn.
Flash Unicorn! Thanks to Vaughn Stewart and Chad Sakac for the artwork.
Recently, the conversations I have been having about Software Defined Networks have shifted from supplying agile networking for VM provisioning and live migrations to looking at the problem through the lens of the application team. In the past, I spoke about provisioning VMs and moving VMs as a surrogate for the application. An application and a VM are not always in a one-to-one ratio. This is a convenient simplification for everyone except perhaps the IT operations teams provisioning multi-server, tiered, or distributed server applications.
In this blog post, I want to complement Gary Kinghorn’s blog, The Promise of an Application Centric Infrastructure (ACI), and briefly share insights from conversations with many IT operations managers and architects responsible for traditional enterprise applications as well as the new distributed applications for cloud infrastructure. What they are saying has profound implications for cloud infrastructure.
Conventional IT organizations have dedicated teams managing their applications, compute, network, security, and storage infrastructure. These functional organizations must work together much like runners in a relay race to manage the lifecycle of the applications used by an enterprise. These runners need to be agile, but the racecourse is not the same for every race.
When you look at some categories of applications side by side, the implications for business agility – the speed at which a business can execute on a strategy, especially one dependent on IT – and the requirements on application, network, and security teams become apparent.
Productivity applications like Microsoft Exchange and Web 2.0 collaboration applications like SharePoint generate heavy client-server (North-South) traffic for the hundreds or thousands of end users of these applications within the enterprise. As these deployments scale to more users, the load is balanced across the edge servers using server load balancers or application delivery controllers. Additionally, because these applications are highly exposed to threats from the external network, they have priority requirements for security devices to prevent denial-of-service attacks and deliver secure access.
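To make the edge-tier balancing described above concrete, here is a minimal round-robin distribution sketch in Python. The server names and connection counts are purely illustrative, not from any product described here; real load balancers also weigh health checks and session persistence, which this sketch omits:

```python
from itertools import cycle

# Hypothetical pool of edge servers fronting a collaboration farm.
edge_servers = ["edge-01", "edge-02", "edge-03"]

# The simplest balancing policy: hand each new client connection
# to the next server in the pool, round-robin.
_pool = cycle(edge_servers)

def assign_connection(client_id: str) -> str:
    """Return the edge server that should handle this client connection."""
    return next(_pool)

# Twelve incoming client connections spread evenly: four per server.
assignments = [assign_connection(f"client-{i}") for i in range(12)]
```

An application delivery controller layers health probes, TLS offload, and session affinity on top of a policy like this, but the core distribution idea is the same.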
To scale I/O-intensive applications such as SQL Server databases, IT organizations use clustered database servers to handle transactions or queries, which requires deterministic network performance between servers and storage arrays, measured in terms of latency and assured bandwidth.
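Deterministic-performance claims like these are typically verified by measuring latency percentiles against a target. A minimal sketch of that check follows; the sample values and the 5 ms target are illustrative assumptions, not figures from the source:

```python
def percentile(samples, pct):
    """Return the pct-th percentile of latency samples (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# Illustrative round-trip latencies (ms) between DB servers and a storage array.
latencies_ms = [0.8, 1.1, 0.9, 1.3, 0.7, 4.2, 1.0, 0.9, 1.2, 1.1]

p99 = percentile(latencies_ms, 99)
slo_ms = 5.0  # hypothetical latency target
assert p99 <= slo_ms, f"p99 latency {p99} ms exceeds {slo_ms} ms target"
```

Tail percentiles (p99 rather than the average) matter here because a clustered database is only as fast as its slowest quorum member.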
New distributed cloud and big data applications like Hadoop can employ tens or hundreds of servers with unique server-to-server I/O patterns and terabytes of collected data, which require guaranteed I/O characteristics for optimal performance between servers, local data, and the big data repositories. This traffic flows between servers and shared storage within the data center and is often characterized as heavy East-West traffic.
Every installation has its own unique fingerprint of application requirements, but the chart below provides a useful comparison and contrast of the requirements for these categories of applications.
Source: Cisco interviews with leading IT DevOps administrators, 2013
IT organizations that want to work faster need to define application requirements along these major dimensions and learn to accelerate the workflow of application deployment across pooled network, security, compute, and storage infrastructure.
Last June, Cisco revealed its vision for Application Centric Infrastructure, an innovative secure architecture that delivers centralized, application-driven policy automation, management, and visibility for physical and virtual networks from a single point of management. It provides a common programmable automation and management framework for the network, application, security, services, compute, and operations teams, making IT more agile while reducing application deployment time.
Revolutions are usually led by challengers, not incumbents. But Cisco’s Nov. 6th mega-launch of Application Centric Infrastructure (ACI) sounds revolutionary, as described by some experienced industry watchers. Any revolution must transform the experience of its participants: in this case, the application development, DevOps, and CloudOps teams that are provisioning new applications in many mid-to-large enterprise data centers. As John Chambers said at Interop, “The ability to create an infrastructure that is agile, simplified, automatically programmable and able to scale on demand is critical to enabling the application model.” In this blog, we’ll zoom in on “agility” as an experience.
The growing agility gap
In the last decade, Cisco and other equipment providers have greatly improved the agility of data center infrastructure – the ability to respond quickly to new demands for scale, performance, and security. Technologies such as a unified fabric, virtualization, and infrastructure controllers, augmented by intelligent automation and governance, have greatly simplified the management of the infrastructure.
But there is strong evidence that the demand for agility is increasing even faster – creating a growing agility gap.
Compared to traditional back-office applications, new mobile, social, and big data applications are much more dynamic due to multi-tenancy, higher demand peaks, more distributed users, broader device support, varying performance needs, 24x7 global usage, and changing security vulnerabilities. Furthermore, to run economically at scale with performance and availability, these applications need a mix of virtualized and dedicated, “bare-metal” resources. And the reality is that, in most enterprise data centers, only about 40% of workloads are virtualized anyway.
These factors are driving more distributed workloads and storage across the data center, more frequent changes to ports, LANs, and subnets, more reconfigurations of security and load balancing, more application and flow optimizations, and more monitoring and diagnostics to ensure application metrics are met.
Data center teams are getting overwhelmed. IDC’s 2011 research showed that total data center spend has shifted to these types of management and administration tasks – and that was just for virtualized servers. New bare-metal workloads will increase this spend further as they scale, unless something is done.
At Cisco live! Orlando in June, Cisco unveiled its vision for an Application Centric Infrastructure (ACI), a next-generation, secure data center fabric design. At the time, we were only able to unveil key conceptual aspects of ACI, but as we lead up to more detailed product announcements later this fall, we want to bring a little more clarity to the ACI vision, what it will mean for customers, and set the context for those announcements.
[Join our ACI Announcement Webcast on November 6, 7:30 AM PT/10:30 ET/15:30 GMT. Register here.]
ACI is designed around an application policy model, allowing the entire data center infrastructure to better align itself with application delivery requirements and the business policies of the organization. The objective of ACI is to allow the data center to respond dynamically to the changing needs of applications, rather than having applications conform to constraints imposed by the infrastructure. These policies automatically adapt the infrastructure (network, security, application, compute, and storage) to the needs of the business, driving shorter application deployment cycles.
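To make the idea of an application policy model concrete, here is a deliberately simplified sketch of policy as data: named groups of workloads, plus rules about which group may talk to which, on which ports. The class names, tier names, and port numbers are illustrative assumptions for this sketch, not ACI's actual object model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Contract:
    """Permits traffic from a consumer group to a provider group on given ports."""
    consumer: str
    provider: str
    ports: tuple

# A hypothetical three-tier application policy: web -> app -> db.
policy = [
    Contract(consumer="web", provider="app", ports=(8080,)),
    Contract(consumer="app", provider="db", ports=(1433,)),  # e.g. SQL Server
]

def allowed(src_group: str, dst_group: str, port: int) -> bool:
    """The check a policy-driven fabric would enforce on every flow."""
    return any(
        c.consumer == src_group and c.provider == dst_group and port in c.ports
        for c in policy
    )

# The web tier may reach the app tier, but never the db tier directly.
assert allowed("web", "app", 8080)
assert not allowed("web", "db", 1433)
```

The point of the model is that the policy travels with the application: moving a workload between racks or hypervisors changes nothing in this table, so the infrastructure adapts instead of the application.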
ACI offers a highly optimized, application-aware fabric ideal for both physical and virtual workloads. Innovations in ASICs, hardware, software, and orchestration result in greater scale, agility, visibility, optimization, and flexibility.
It’s the Season 3 Grand Finale of Engineers Unplugged! Today’s guests, Joe Onisick and Nils Swart, take on Application Affinity: how to bridge the network world and the application world. Is it possible to remove the complexity to speed adoption? Watch and see:
Welcome to Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)