Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, a smaller footprint, and simplified designs. These rigorous requirements, coupled with major data center trends such as virtualization, data center consolidation, and data growth, are putting a tremendous amount of strain on existing infrastructure and adding complexity. The MDS 9710 is designed to meet these requirements for the decade ahead without a forklift upgrade.
The MDS 9700 provides unprecedented:
Performance -- 24 Tbps switching capacity
Reliability -- redundancy for every critical component in the chassis, including the fabric cards
Flexibility -- speed, protocol, and data center architecture
In addition to these unique capabilities, the MDS 9710 provides a rich feature set and investment protection for customers.
In this series of blog posts I plan to focus on the design requirements of the next-generation data center with the MDS 9710, reviewing one aspect of those requirements in each post. Let us look at performance today. A lot of customers ask how the MDS 9710 delivers the highest performance. The performance that an application delivers depends on the underlying infrastructure.
As a Cloud Architect, I’ve had the privilege to work with CTOs and CIOs across the globe to uncover the key factors driving Business Continuity and Workload Mobility across their cloud infrastructures. We’ve worked with enterprises, large and small, and service providers to answer their top five concerns in our new Business Continuity and Workload Mobility solution for the Private Cloud.
1) Can you provide business continuity, workload mobility, and disaster recovery for my unique mix of applications, with lower infrastructure costs and less complexity for my operations teams? Yes.
2) Can you provide a multi-site design that reduces business outages and costly downtime, allowing my critical applications to be more secure and available? Yes.
3) Can my operations teams perform live migrations of applications across sites while maintaining user connections, security, and stateful services? Yes.
4) Does your multi-site solution allow me to utilize idle standby capacity during “normal” operations, and reclaim that capacity as needed during an outage event? Yes.
5) Can your Cisco Validated Design greatly reduce my deployment risks and simplify my design process, saving my business significant time, money, and resources? Yes.
A Proven Multi-site Design, Built on the Most Widely Deployed Cloud Infrastructure
We addressed each of these pain points as we designed, built, and validated our new multi-site business continuity and workload mobility solution. Our multi-site solution is built upon Cisco’s cloud foundation, the Virtual Multi-service Data Center (VMDC) that’s been deployed at hundreds of the world’s top enterprises and service providers. In our latest VMDC release, we’ve extended our cloud design to support multi-site topologies and critical use cases for private cloud customers. This validated design simply connects regional and long-distance data centers within your private cloud to address some critical IT functions, including:
application business continuity across data center sites;
stateful workload mobility across data center sites, while maintaining user connections and security;
application disaster recovery and avoidance across data center sites; and
application geo-clustering and load balancing across data center sites.
Choose the Cloud Infrastructure that Fits Your Unique Business Needs
The VMDC Business Continuity and Workload Mobility solution (CVD Design Guide) is grounded in the reality of today’s cloud environment, providing different design choices that match your applications’ needs. We realize there is no “one size fits all” cloud design; that’s why we support both physical and virtual resources, multiple hypervisor and storage choices, and security-compliant designs with industry certifications like FISMA, PCI, and HIPAA.
Today the Internet of Things (IoT) is everywhere: you can easily see smart meters on houses, parking sensors in the ground, cameras attached to traffic posts, and people wearing intelligent wristbands and glasses -- all of them connected to the Internet. And this is only the tip of the iceberg: while you are reading this blog post, factories, trains, and trucks around the world are also being connected to the Internet.
Many traditional industries have historically turned to different types of engineers to improve their processes and gain efficiency. Now they are asking us, the Internet engineers, to contribute to solving new industrial-world challenges by connecting billions of new devices.
The more ambitious part of this journey is the integration of both worlds: Information Technology (IT) and Operational Technology (OT). For that, a systems approach is required to scale the existing Internet infrastructure to accommodate IoT use cases, while making IT technology easy to adopt for OT operators. We are facing a historic opportunity to converge massive-scale systems in a way we have never seen before, and such an effort will unlock a multibillion-dollar business.
To be ready to capture this opportunity and scale in a sustainable manner, four requirements are necessary:
If anything is certain about the video business, it’s this: the volume of change is daunting and every change tends to make life more complicated, not less.
This is certainly true at the sharp end of the business -- digital video processing -- where “multiscreen” video, new video formats, and new video technologies are together creating a perfect storm of complexity. Once there was SD over MPEG2 delivered to TVs. Now there is SD, various flavors of HD, and, soon, 4K; MPEG2, AVC, and now HEVC; plus a wealth of encapsulation schemes and DRMs; and ever more screen sizes and resolutions as the number of devices to be supported grows ever larger.
The number of permutations of all these options is truly dizzying. Every permutation is a potential video “workflow” to be implemented -- and the number of permutations is expanding rapidly, apparently endlessly, and exponentially. Today Cisco deals with media companies that have over 80 video workflows for their content. Add one more video format -- 4K, for instance -- and this potentially doubles to 160. Add another compression scheme -- HEVC, perhaps -- and now we have 320. And so on.
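The multiplicative growth above is easy to sketch with a toy calculation. The option lists below are illustrative placeholders, not an actual media company’s catalog: each independent delivery option multiplies the total workflow count, so appending a single entry to any one axis scales the whole product.

```python
from itertools import product

# Hypothetical delivery options; every axis multiplies the workflow count.
formats = ["SD", "720p", "1080p"]
codecs = ["MPEG2", "AVC"]
wrappers = ["HLS", "DASH"]

base = len(list(product(formats, codecs, wrappers)))
print(base)  # 3 * 2 * 2 = 12 distinct workflows

# Add one codec (say, HEVC) and the total grows by a full multiplicative factor:
codecs.append("HEVC")
grown = len(list(product(formats, codecs, wrappers)))
print(grown)  # 3 * 3 * 2 = 18 distinct workflows
```

With realistic axis sizes (a handful of resolutions, codecs, encapsulations, DRMs, and device profiles), the product easily reaches the dozens-to-hundreds range the post describes.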
Keeping track of all these “workflows” is one thing, but
I knew we were on to something good when a customer told me “This is so easy, it’s CTO proof.”
Early in the business, I was talking to a front-line server admin who had found that Cisco UCS made server deployment so reliable, automated and simple that he was convinced even his CTO could pull it off without breaking anything. The enthusiasm was real, and infectious, and it changed the face of the data center market.
Thinking back five years to March of 2009, when Cisco introduced UCS, the economy was still spiraling into the worst recession of our lifetime. IT budgets were being slashed. Many wondered if it was the right time for Cisco to enter a new market with deeply entrenched competitors.
In the decade leading up to 2009, computing innovation had stalled. The incumbents still had tunnel vision on the power and cooling challenges that arose out of multi-core processing in the mid-2000s. Innovation was essentially focused on mechanical packaging: blade servers for mainstream IT and “skinless” boxes for the hyperscale crowd. Overlooked was the real problem for the vast majority of customers: operational complexity.
Remember that server virtualization was rapidly spreading in nearly every data center. Again, this was originally a response to a hardware problem -- processor utilization -- but as everyone recognized the operational benefits, virtualization was taking hold very fast. As was cloud. Combine all this with the disaggregation of data storage from the server, which had already moved out onto the network as NAS and SAN many years before, and you had a perfect storm of complexity threatening to outpace the capacity of many IT organizations. The individual technologies in the data center were not overwhelmingly complex, but tying them all together, into a system where you could land and scale an application in a very secure and available way, became the all-consuming job of the customer. Collectively, the industry had failed. In 2009, more than ever, customers needed something to help them slash OPEX in the data center and free people up to face the challenges of the day. This was the innovation vacuum that UCS had been designed to fill.
Think of UCS as the Turducken of the data center: the sum is much, much greater (and tastier) than the parts. A lot of true innovation has gone into UCS in the areas of server I/O and in fundamental advancements to server management technology. The latter is especially critical, because what is often overlooked in virtualization and cloud discussions is the underlying issue of deploying, managing, and scaling the physical infrastructure itself (details, details…). The advent of UCS completed the total abstraction and automation of hardware in crucial ways that hypervisor and cloud technology still can’t achieve on their own. API-controlled data center hardware is a foundational element of modern IT innovation, and UCS started it all. This may be Cisco’s greatest contribution to the industry, and it charted the course for Cisco ACI in the broader data center.
The team has put together this interactive timeline that commemorates many of the milestones in the first five years of UCS. Looking back over it, I can only feel proud and humbled to be associated with the team here at Cisco, our technology and channel partners, and most importantly with our customers, who have clearly proven that UCS was (and is) the right solution at the right time.