After a slow start in 2009, storage area networks (SANs) ended strong with demand from all sectors across the globe. According to a recent industry report by Dell’Oro, the storage networking market grew 21 percent Y/Y.
Storage demand is being fueled not only by the general economic recovery, but also by:
Broader and more complex deployments of server virtualization;
Archiving and backup requirements; and
Disaster recovery and business continuance applications.
Aligning with these trends, Cisco's SAN business (the MDS product family) had a very strong calendar quarter overall, with growth of more than 100% Y/Y. As a data point, Cisco MDS gained 13% market share and tied for the #1 market-share position in Director-class switches. The market share gain was supported by several factors, including:
A shift in customer buying trends as they adopted a DC 3.0 solution and an end-to-end architecture to scale their SAN, LAN, and data center interconnect (DCI); and
NX-OS as a single OS across LAN and SAN, which resonated well with large customers, who now cite it as a key attribute.
It should be clear to the market that Cisco *is* committed to Fibre Channel. As another proof point, we recently introduced the MDS 9148, the highest-density 8-Gbps 1RU fabric switch on the market. Cisco is also investing heavily in other areas of storage networking to provide scalable SANs that enhance cloud services, security, migration services, and data center interconnect solutions. Cisco continues to enable customers to upgrade to the latest technology (e.g., FCoE) without the need for forklift upgrades.
Cisco MDS 9000 switches are deployed in mission-critical data centers at the world's largest financial institutions, automobile companies, service providers, retailers, energy companies, and healthcare organizations. You can read more about these case studies here to see why Cisco MDS is in 90% of Cisco's top Global 3.0 accounts and has over $2.5B in cumulative revenue, with that number still growing.
According to IDC, the only growth rate that hasn't gone negative in this recession is the creation of digital information. For proof, just examine your daily routine: you create documents and send texts and emails, all while IM'ing and capturing your "Kodak moments" throughout the day. The amount of enterprise application data created and stored per second is growing exponentially, with no slowdown forecast any time soon. So the question is not whether there will be less data; the question is how you will manage, access, secure, and intelligently share it.
In a multimedia paper produced jointly with EMC, IDC has defined the rate at which the "Digital Universe" expands, along with the challenges IT organizations face in managing their storage needs. Analyzing the graph below, it becomes evident that it is not just the growth, but growth in specific data types, that requires focused IT attention. It is no longer plain old "one size fits all" data management: IT professionals need to be savvy about life-cycle management of specialized data in a digital universe that is expanding more rapidly than our physical one.
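To make "exponential growth" concrete, here is a minimal sketch of what compound annual growth does to a storage footprint. The starting size (100 TB) and the 60% annual growth rate are assumed figures for illustration only, not numbers from the IDC/EMC paper:

```python
def projected_size_tb(initial_tb: float, annual_growth: float, years: int) -> float:
    """Project a stored-data footprint under compound annual growth."""
    return initial_tb * (1 + annual_growth) ** years

# Hypothetical example: 100 TB today, growing 60% per year.
for year in range(6):
    print(f"year {year}: {projected_size_tb(100, 0.60, year):,.0f} TB")
```

Even at a rate well below what some "Digital Universe" estimates cited, the footprint grows roughly tenfold in five years, which is why life-cycle management by data type matters more than raw capacity planning.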
Many organizations are using server virtualization to consolidate application workloads in their data centers. By using a highly efficient platform like the Cisco Unified Computing System (UCS), organizations find that they can improve asset utilization and dramatically lower IT costs. This enables the data center team to be more responsive to initiatives that produce real value for the business.
The server platform and virtualization address one part of the application delivery challenge for the global enterprise. UCS can easily handle the compute requirements of complex applications, but what about the increased demand placed on the WAN as applications are delivered to a distributed workforce? How do you ensure an acceptable user experience? Accessing information over a WAN is much slower than accessing information over a LAN, due to limited WAN bandwidth, packet loss, and latency. To meet this challenge, a solution needs to both scale the server platform and increase WAN performance.
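The effect of latency and loss on WAN throughput can be estimated with the well-known Mathis approximation for steady-state TCP throughput (rate ≈ MSS / (RTT × √loss)). The RTT and loss figures below are assumed values chosen only to illustrate a typical LAN-versus-WAN gap:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput using the Mathis model:
    throughput <= (MSS / RTT) * (1 / sqrt(loss))."""
    return (mss_bytes * 8 / rtt_s) / math.sqrt(loss_rate)

# Hypothetical numbers for illustration: 1460-byte MSS on both paths.
lan = mathis_throughput_bps(1460, 0.001, 0.0001)  # 1 ms RTT, 0.01% loss
wan = mathis_throughput_bps(1460, 0.080, 0.001)   # 80 ms RTT, 0.1% loss
print(f"LAN estimate: {lan / 1e6:.0f} Mbps")
print(f"WAN estimate: {wan / 1e6:.1f} Mbps")
```

Under these assumed conditions a single TCP flow over the WAN achieves a small fraction of the LAN rate regardless of link capacity, which is the gap that application acceleration techniques aim to close.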
Applications not only need to run fast in the data center; they must also run well for end users wherever they may be. Organizations are finding that they must treat application acceleration as part of their application solution architecture, so that they not only scale application performance on the server but also deliver applications to remote sites with high performance, serving users at every location.
“The IT department had already begun adopting server virtualization, using VMware ESX software on rack-optimized servers. But the new clinical data warehousing applications would require 16 additional VMware ESX servers, and the data center lacked sufficient I/O infrastructure and cabling. In addition, provisioning this quantity of physical servers would be difficult in the few weeks available.” Sound familiar? As I talked with customers at EMC World in Boston and at SAPPHIRE in Orlando over the past two weeks, listening to customers from Chicago, New York, Frankfurt, London, and Paris, and visiting UCS customers such as Pacific Coast Building Products Inc., I heard this concern everywhere. The challenge facing Moses Cone Health System, a large healthcare provider based in North Carolina, is not unique, but it is nevertheless critical to the success of the IT organization's mission.
“We needed a cost-effective computing system that would enable us to expand our use of Electronic Medical Records quickly over the next year, minimize network infrastructure build-out, and reduce time to rack and configure servers,” explained Michael Heil, manager of technology infrastructure at Moses Cone Health System.
Among the numerous benefits provided by the acquisition of Cisco UCS, Michael Heil was keen to highlight:
So, this should be a good Friday post: it's short, and there are cash prizes involved.
Well, actually, it's the first part of a post. I have a question for you readers. I have my own thoughts on the answer, but I wanted to see what other folks think first, before I taint the discussion. I have been spending a fair number of cycles in a working group pulling together our company's perspective on cloud computing and where we fit in. While I was reading something on private clouds, something seemed oddly familiar, which leads me to the question:
Were mainframes the first private clouds?
Bear with me--there are a lot of similarities: both are based on virtualization, pooled resources, and reallocation of resources via policy. And, as someone pointed out to me the other day, they share some of the same potential downsides, like vendor lock-in and lack of portability.
So what do you think? I have an Amazon or iTunes gift card for the best (or most entertaining) argument for and against. While you are making your argument, tell me what you think we can learn from the mainframe days that can help us today as we look at cloud models. For example, the mainframe folks certainly had the billing and accounting thing nailed.
So, what do you think?
PS: For my next post, I am going to explore whether the DEC VT100 was the first instance of VDI.