The problem is that whenever you start talking about extending your storage connectivity over distance, there are many things to consider, including some that storage administrators (or architects) may not always remember to think about. The more I thought about this (and the longer it took to write down the answers), the more I realized that there needed to be a good explanation of how this works.
Generally speaking, the propeller spins the ‘other way’ when it comes to storage distance.
To that end, I began writing down the things that affect the choice for selecting a distance solution, which involves more than just a storage protocol. And so the story grew. And grew. And then grew some more. And if you’ve ever read any blogs I’ve written on the Cisco site you’ll know I’m not known for my brevity to begin with! So, bookmark this article as a reference instead of general “light reading,” and with luck things will be clearer than when we started.
At this year’s Hadoop Summit 2013, I presented “The Data Center and Hadoop,” which built upon the past two years of testing the effects of Hadoop on data center infrastructure. What makes Hadoop an important framework to study in the data center is that it is a distributed system combining a distributed file system (HDFS) with an execution framework (Map/Reduce). Further, it builds upon itself and can provide real-time and key/value stores (HBase), along with many other possibilities. Each comes with its own set of infrastructure requirements, including both throughput-sensitive and latency-sensitive components. In the data center, understanding how all these components work together is key to optimized deployments.
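To make the execution framework concrete, here is a minimal sketch of the Map/Reduce model in plain Python (no Hadoop cluster; the function names and sample data are invented for illustration): map emits key/value pairs, the framework shuffles them by key, and reduce aggregates. In a real Hadoop job these phases run across HDFS blocks on many nodes, and the shuffle crosses the network, which is exactly why the framework has a throughput-sensitive side.

```python
from collections import defaultdict

def map_phase(line):
    # Emit a (key, value) pair for every word seen.
    for word in line.split():
        yield word, 1

def reduce_phase(key, values):
    # Aggregate all values that were shuffled to this key.
    return key, sum(values)

lines = ["the network moves the data", "the data moves back"]

# Shuffle: group mapped pairs by key. Hadoop performs this step across
# the fabric between map and reduce workers.
groups = defaultdict(list)
for line in lines:
    for word, count in map_phase(line):
        groups[word].append(count)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts["the"])  # prints 3
```

The single-process shuffle above is the part that, at cluster scale, becomes east-west traffic between racks.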
After studying many of these components and their effects, the very data we were analyzing became a frequent topic of our discussions. We combined application performance data, application logs, compute data AND network data to build a complete picture of what is happening in the data center.
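The cross-domain correlation described above can be sketched in a few lines. This is a hypothetical illustration only; the data, field names, and thresholds are invented. The idea is simply to join application-level latency samples with network counters by timestamp so that an application symptom can be lined up against a network cause.

```python
# Per-task application latency samples: (epoch_seconds, latency_ms)
app_samples = [(100, 12.0), (160, 45.0), (220, 13.5)]

# Hypothetical switch egress-drop counters, sampled periodically:
# (epoch_seconds, drops_in_interval)
net_samples = [(90, 0), (150, 340), (210, 2)]

def nearest(ts, series):
    """Return the sample in `series` whose timestamp is closest to `ts`."""
    return min(series, key=lambda s: abs(s[0] - ts))

# Join each latency sample with the closest-in-time network reading.
joined = [(lat, nearest(ts, net_samples)[1]) for ts, lat in app_samples]

# A latency spike that coincides with a burst of drops is the kind of
# cross-domain signal this combined view is meant to surface.
spikes = [(lat, drops) for lat, drops in joined if lat > 30 and drops > 100]
print(spikes)  # the 45 ms sample pairs with the 340-drop interval
```

In practice the same join would run over telemetry streams rather than lists, but the principle, aligning application and network time series, is the same.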
With the advent of programmable networks (aka “Software Defined Networking”), it is important not only to make the network more application aware, but also to know where and how to analyze that data and make the right connections between the application and the network.
Both the Nexus 1000V and FlexPod won Best of TechEd 2013 awards. This was the third year in a row for a Cisco product to be so honored.
We’re looking forward to seeing you at WPC. Join the conversation on social media using the hashtag #CiscoWPC. If you won’t be able to join us and would like to learn more about how Cisco is changing the economics of the datacenter, I would encourage you to review this presentation on SlideShare or my previous series of blog posts, Yes, Cisco UCS servers are that good. Or visit the Microsoft Cisco UCS portal.
Source: IDC Worldwide Quarterly Server Tracker, Q1 2013 Revenue Share, May 2013
Cisco today introduced Application-Centric Infrastructure as the vision for next-generation data center architecture, built for today’s physical and virtual workloads as well as tomorrow’s highly dynamic, cloud-based, performance-intensive big data application environments. Please check out Padmasree Warrior’s blog or Cisco Unified Fabric to learn more.
What I would like to share with you is how we are evolving the Cisco Unified Fabric to deliver operational simplicity through superior integration.
Delivering Operational Simplicity through Superior Integration
As organizations accelerate private and public cloud deployments, IT organizations and data center networks must evolve to meet rapidly changing and growing requirements. Virtualized and cloud environments require more agility and simplicity to quickly deploy and migrate virtual machines. IT organizations, meanwhile, are challenged by operational complexity, architectural rigidity, and infrastructure inefficiency: manual processes, disjointed provisioning, deficient software overlays, static resource allocations, and disruptions when growth is needed.
The good news is that Cisco continues to evolve its Unified Fabric to address these needs. The new Cisco Dynamic Fabric Automation delivers unsurpassed operational simplicity through superior integration. It does this by ….
Recent results clearly reinforce the growing understanding that Cisco has unleashed a more highly evolved and effective solution into the computing ecosystem. While the principles outlined by Charles Darwin in On the Origin of Species can stir controversy, I find them to be an accurate model for technology evolution and quite useful for describing how we’ve arrived at this latest watershed in the x86 server market.
Our first observation would be the extremely rapid rate of customer adoption for Cisco’s Unified Computing System (UCS). Darwin would tell us that there must be significant advantages in “fitness to purpose” inherent to UCS that have driven this velocity. This is certainly true. Looking back at where we’ve been and how we’re positioned to go forward, here are the key factors I see at play that create these advantages for UCS adopters:
Primitive incumbents in the server industry attempted converged infrastructure by choosing to combine compute and storage first. Cisco chose to converge compute and fabric first. This is a critical threshold event because it turns out that most optimizations for virtualization and cloud are fabric-oriented. With our Virtual Interface Cards we made server NICs and HBAs part of the fabric, not part of the server -- a significant mutation in computing design. Further, Cisco abstracted every identity and configuration element for servers, network access, and storage into a programmable software model -- inventing fabric computing with stateless servers. Simple. Flexible. Resilient. Advantage: UCS
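The “stateless server” idea above can be illustrated with a small sketch. This is not the actual UCS Manager object model; the class and field names are invented to show the concept: every identity element (UUID, MACs, WWPNs, boot policy) lives in a software profile, so the identity can move to different hardware.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServiceProfile:
    """Hypothetical software-defined server identity (not a real UCS API)."""
    name: str
    uuid: str
    nic_macs: List[str]    # identities the fabric presents for the server's NICs
    hba_wwpns: List[str]   # ... and for its HBAs
    boot_order: List[str]

@dataclass
class Blade:
    """Bare hardware with no identity of its own until a profile is bound."""
    slot: int
    profile: Optional[ServiceProfile] = None

def associate(profile: ServiceProfile, blade: Blade) -> Blade:
    # Bind the profile to hardware; the blade takes on the profile's identity.
    blade.profile = profile
    return blade

web01 = ServiceProfile(
    name="web-01",
    uuid="0000-0001",
    nic_macs=["00:25:b5:00:00:01"],
    hba_wwpns=["20:00:00:25:b5:00:00:01"],
    boot_order=["san", "lan"],
)

# Hardware fails? Re-associate the same profile with a spare blade, and the
# replacement presents identical identities: the state moved, not the server.
old = associate(web01, Blade(slot=1))
spare = associate(web01, Blade(slot=2))
print(spare.profile.nic_macs[0])  # same MAC, now on a different physical blade
```

The design point this models is that because identity is data, re-homing a workload onto spare hardware is an association operation rather than a rebuild.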