At Cisco Live! Orlando in June, Cisco unveiled its vision for an Application Centric Infrastructure (ACI), a next-generation, secure data center fabric design. At the time, we were only able to unveil key conceptual aspects of ACI, but as we lead up to more detailed product announcements later this fall, we want to bring a little more clarity to the ACI vision and what it will mean for customers, and to set the context for those announcements.
[Join our ACI Announcement Webcast on November 6, 7:30 AM PT/10:30 ET/15:30 GMT. Register here.]
ACI is designed around an application policy model, allowing the entire data center infrastructure to better align itself with application delivery requirements and the business policies of the organization. The entire objective of ACI is to allow the data center to respond dynamically to the changing needs of applications, rather than having applications conform to constraints imposed by the infrastructure. These policies automatically adapt the infrastructure (network, security, application, compute, and storage) to the needs of the business to drive shorter application deployment cycles.
ACI offers a highly optimized, application-aware fabric ideal for both physical and virtual workloads. Innovation in ASIC, hardware, software and orchestration results in greater scale, agility, visibility, optimization and flexibility.
At vForum in Sydney today, Cisco’s Justin Cooke addressed an audience on the topic of Fabric Computing, which Cisco believes is the next wave of virtualisation.
The Cisco Unified Computing System (UCS) has been the main driver of the market transition to Fabric Computing, delivering compute, network, storage access and virtualisation as one cohesive system before the term Fabric Computing became commonplace.
I’ve talked before about how getting high performance in MPI is all about offloading to dedicated hardware. You want to get software out of the way as soon as possible and let the underlying hardware progress the message passing at max speed.
But the funny thing about networking hardware: it tends to have limited resources. You might have incredibly awesome NICs in your HPC cluster, but they only have a finite (small) amount of resources such as RAM, queues, queue depth, descriptors (for queue entries), etc.
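To make the resource constraint concrete, here is a toy sketch (in Python, purely illustrative; real NICs expose descriptors through driver or verbs interfaces, not a class like this, and the queue depth of 4 is an assumed value) of what an MPI library has to do when the hardware send queue fills up: offload while descriptors remain, then fall back to queuing in software.

```python
from collections import deque

class ToyNic:
    """Illustrative model of a NIC with a finite send-descriptor queue.

    Hypothetical names and sizes throughout -- this is not a real NIC API.
    """
    QUEUE_DEPTH = 4  # assumed small hardware limit

    def __init__(self):
        self.hw_queue = deque()    # descriptors currently owned by hardware
        self.sw_backlog = deque()  # messages the library must hold in software

    def post_send(self, msg):
        # Offload to hardware only while descriptors remain; otherwise
        # the library queues the message in software and progresses it later.
        if len(self.hw_queue) < self.QUEUE_DEPTH:
            self.hw_queue.append(msg)
            return "hardware"
        self.sw_backlog.append(msg)
        return "software"

    def completion(self):
        # A send completes: free a descriptor and promote one backlogged
        # message into the hardware queue.
        if self.hw_queue:
            done = self.hw_queue.popleft()
            if self.sw_backlog:
                self.hw_queue.append(self.sw_backlog.popleft())
            return done
        return None

nic = ToyNic()
paths = [nic.post_send(i) for i in range(6)]
print(paths)  # first 4 sends are offloaded; the last 2 fall back to software
```

The point of the sketch: once the backlog path is taken, message progress depends on software getting scheduled again, which is exactly the latency the offload model is trying to avoid.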
With so much misinformation (disinformation?) about UCS running around in the ether, I thought the straightforward comparison offered here would be valuable. It is important to dispel myths and analyze reality before making the important decisions around server and networking refreshes / upgrades, which by necessity affect long-term data center architecture. I hope you will find this presentation -- Cisco UCS, HP and IBM -- A Blade Architecture Comparison, useful in your decision-making process.
You could, and probably should, ask what is left out. That’s pretty easy: I did not specifically call out Performance and TCO, for a good reason. If you can execute on the three bullets above the way Cisco UCS does, Performance and TCO are the natural derivatives; you shouldn’t have to target them separately. It’s kind of a “If you build it, they will come” scenario. That’s why I made the statements in the TCO and Architecture blog that “…Server cost is irrelevant (to OpEx) because: changing its contribution to total TCO has a vanishingly small impact….” and “…It [architecture] is the single most important component of OpEx…” For more on this and how server cost and TCO intersect, please check out this blog -- Blade Server TCO and Architecture – You Cannot Separate Them. It takes a look at the OpEx and CapEx components of TCO, and how altering either of them affects the actual total 3-year TCO. You may be surprised.
Cisco is providing trade-in credits for customers’ old generation servers and blade chassis, helping ease the transition and upgrade to a new UCS blade architecture. The UCS Advantage presentation below has more details on this fantastic program that can further enhance the already compelling TCO benefit of upgrading to Cisco UCS.
Special note: For more on the benefit that Cisco UCS delivers for I/O and throughput, I suggest a great blog by Amit Jain -- How to get more SAN mileage out of UCS FI. Amit does an excellent compare / contrast of FC and FCoE technologies (“…8 Gb FC yields 6.8 Gb throughput while 10 Gb FCoE yields close to 10 Gb throughput…”).
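The throughput numbers Amit quotes fall out of the line encodings the two technologies use, and the arithmetic is worth seeing once. A quick check (assuming the standard rates: 8 Gb FC signals at 8.5 GBaud with 8b/10b encoding, while 10 Gb Ethernet carrying FCoE signals at 10.3125 GBaud with the far more efficient 64b/66b encoding):

```python
# 8b/10b encoding spends 10 bits on the wire per 8 bits of data (80% efficient);
# 64b/66b spends 66 bits per 64 bits of data (~97% efficient).
fc_8g = 8.5 * 8 / 10          # 8G Fibre Channel usable rate, Gb/s
fcoe_10g = 10.3125 * 64 / 66  # 10G Ethernet (FCoE) usable rate, Gb/s

print(f"8G FC:    {fc_8g:.1f} Gb/s usable")
print(f"10G FCoE: {fcoe_10g:.1f} Gb/s usable")
```

That is where the 6.8 Gb versus ~10 Gb comparison comes from: the encoding overhead, not the headline speed, determines what the link actually delivers.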
It was not so long ago that people often had to make difficult choices about their work. Your dream job might open up 3,000 miles away. Your new job means leading a team on the other side of the world. Your day is spent on the road meeting with customers, not in an office. Working men and women were forced to choose: Do I uproot my family to take advantage of a new job opportunity that could bring greater financial security? Will I need to travel a majority of the time to effectively lead my team? What will I lose during hours of travel time?

Companies faced similar choices: Are we missing out on talent because they are not local? How do we connect different locations and geographies effectively? Can our dispersed teams be more productive and more connected? If we require an employee to move, do we risk losing the employee? Can we afford the increasing relocation costs?