In the same year Cisco was founded, Kate Bush recorded the hypnotic Cloudbusting, one of her most iconic songs and music videos. Conceived by Terry Gilliam and featuring Donald Sutherland, the video contains a strikingly poignant moment in which Bush’s character is ‘cloudbusting’ with her father and first realizes that adults are fallible.
Cloud Myth Busting Read More »
Tags: Cisco, cloud, Cloud Computing, collaboration, data center, innovation, virtualization
We created the Evolved Services Platform (ESP) to help our customers increase service revenue while driving down costs. In doing so, we needed to make it expansive enough to cover the breadth of solutions that apply across many domains (such as access, the Wide Area Network (WAN), and the data center) and technologies (such as cloud, security, and video).
And we addressed the fact that a virtualized network function (VNF) is only as good as the orchestration and automation capabilities used to spin it up and expand it to fit the required job. Given all the VNFs (more than 40, just counting our own) that we could conceivably be orchestrating, we had to ensure that the Cisco ESP was sufficiently broad and inclusive of multivendor technologies.
The following diagram shows the big picture—the applications and network services made possible by an open, elastic, and application-centric architecture. Read More »
Tags: CPE, data center, engine, epn, esp, evolved programmable network, evolved services platform, orchestration, Service Broker, Virtualized Network Function, VNF, WAN
It’s been a couple of weeks since the Cisco data center and partner teams wrapped up a terrific Oracle OpenWorld 2014. We had a great week of conversations with customers and partners on how Cisco UCS provides a superior platform for Oracle Database and applications. We also announced three record-setting benchmarks for Oracle E-Business Suite and Java operations (SPECjbb2013).
This year, we placed a greater emphasis on communicating beyond our booth via video, digital, and social streams. We staged a studio in the Cisco booth that enabled us to stream live video interviews with industry luminaries and Cisco experts hosted by theCUBE, the leading interview-format show in enterprise tech. We were honored to host Intel CIO Kim Stevenson immediately following her main stage keynote presentation. Other featured guests included Jim McHugh, Cisco VP of UCS Marketing; Shannon Poulin, Intel VP; Sherri Liebo, Cisco VP of Global Partner Marketing; and Mike Evans, Red Hat VP. The videos are now available for replay here.
The Cisco booth was a hub of non-stop action in our theater where we hosted a terrific line-up of presentations by Cisco experts, customers and partners. We took advantage of this opportunity to record video summaries of these sessions and are pleased to present this video library from Oracle OpenWorld 2014.
House of Brick Technologies on the Advantages of Cisco UCS for Oracle Workloads
Why is Cisco UCS everywhere? Dave Welch, CTO of House of Brick Technologies, highlights the many advantages of UCS for Oracle workloads in this discussion with Jim McHugh, Cisco VP of UCS Marketing.
Read More »
Tags: Cisco UCS, data center, Oracle, Oracle Database
Perhaps you’ve seen the shirts. Maybe you’ve joined in or listened to an episode of Cisco Champion Radio. Or maybe you cannot resist learning new things and having access to experts in your area of technical expertise.
Join us–submit your Cisco Champion for Data Center nomination today!
No matter the reason, if you are curious about the Cisco Champion program, now is the time to nominate yourself or a colleague for consideration for 2015!
- October 1: Open call for nominations
- October 31: Deadline to submit nominations
- November 25: Cisco Champion Class of 2015 announced
Act now! It’s a great opportunity to participate in everything from blogger briefings to podcasts, and to get to know your industry and your peers better. We need your voice.
Tags: cisco champion, Cisco Champions, cloud, data center
This is the final part of the series on High Performance Data Center Design. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. The MDS 9710’s capabilities are field-proven, with wide adoption and a steep ramp within the first year of its introduction. Some customer use cases for the MDS 9710 are detailed here. Furthermore, over the last 12 years Cisco has not only established itself as a strong player in the SAN space with many industry-first innovations such as VSAN, IVR, FCoE, and Unified Ports, but also holds the leading market share in SAN.
Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:
- The design should be flexible enough to scale up (increase performance) or scale out (add more ports).
- The process should not disrupt the current installation in terms of cabling, performance, or downtime.
- Design principles such as oversubscription ratio, latency, and throughput predictability (for example, from host edge to core) shouldn’t be compromised at either the port level or the fabric level.
Let’s take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8Gbps and 640 hosts running at 16Gbps connected to the 4 edge MDS 9710s, for a total of roughly 16 Tbps of host connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The 2 core systems are connected to the edge and to targets using 128 target ports running at 16Gbps in each direction. The picture below shows the connectivity.
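The arithmetic behind these numbers can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation only (the helper function is hypothetical, not a Cisco tool); the figures come from the example above.

```python
def edge_bandwidth_gbps(port_groups):
    """Total edge bandwidth from (port_count, speed_gbps) pairs."""
    return sum(count * speed for count, speed in port_groups)

hosts = [(768, 8), (640, 16)]            # host ports on the 4 edge directors
total_edge = edge_bandwidth_gbps(hosts)  # 16,384 Gbps, i.e. ~16 Tbps

oversub = 8                              # 8:1 edge-to-core oversubscription
core_bw = total_edge / oversub           # 2,048 Gbps, i.e. ~2 Tbps

target_ports = core_bw / 16              # 128 target ports at 16G cover it
print(total_edge, core_bw, target_ports)
```

Note how the 128 target ports at 16G line up exactly with the 2 Tbps edge-to-core requirement.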
Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core connections. This is repeated with 24 additional 16G target ports. The fact that this scale-out is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. As an example, if a customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.
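The 24-connection figure follows from the same 8:1 oversubscription ratio; a quick sanity check (illustrative arithmetic only, using the figures quoted above):

```python
import math

new_ports = 188     # new 16G host ports being added
port_speed = 16     # Gbps per host port
oversub = 8         # 8:1 edge-to-core oversubscription
isl_speed = 16      # Gbps per edge-to-core connection

added_edge_bw = new_ports * port_speed  # 3,008 Gbps of new edge bandwidth
# Divide by the oversubscription ratio, then round up to whole links.
isls_needed = math.ceil(added_edge_bw / oversub / isl_speed)
print(isls_needed)  # 24 additional edge-to-core connections
```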
Let’s look at another example where a customer wants to scale up (i.e., increase the performance of the connections), this time using an edge-core-edge design. There are 6,144 hosts running at 8Gbps distributed over 10 edge MDS 9710s, for a total of roughly 49 Tbps of edge bandwidth. Let’s assume that this data center uses an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement, the administrator designed the data center with 2 core switches providing 192 ports in total, or 3 Tbps of core bandwidth. Let’s assume that at initial design the customer connected 768 storage ports running at 8G.
A few years down the road, the customer may want to add an additional 6,144 8G ports while keeping the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (in either throughput or latency) and without any constraints regarding protocol, optics, or connectivity. In this scenario the host edge bandwidth doubles to roughly 98 Tbps, and the required edge-to-core bandwidth increases to 6 Tbps. The data center admin has multiple options for providing that core bandwidth: add more 16G ports (192 more, to be precise), or preserve the cabling and use 32G connectivity for the host-edge-to-core and core-to-target-edge connections on the same chassis. The admin could just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.
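The scale-up numbers work out the same way. This is again an illustrative sketch, assuming the port counts and 16:1 ratio quoted above:

```python
existing_hosts = 6144
added_hosts = 6144     # the additional 8G ports
host_speed = 8         # Gbps per host port
oversub = 16           # 16:1 edge-to-core oversubscription

# Doubling the host count doubles the edge bandwidth.
edge_bw = (existing_hosts + added_hosts) * host_speed  # 98,304 Gbps (~98 Tbps)
core_bw = edge_bw / oversub                            # 6,144 Gbps (~6 Tbps)

core_ports_16g = core_bw / 16       # 384 core ports at 16G in total
extra_ports = core_ports_16g - 192  # 192 more than the original design
print(edge_bw, core_bw, extra_ports)
```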
Alternatively, the customer may want to upgrade the hosts to 16G connectivity while following the same oversubscription ratios. With 16G connectivity the host edge bandwidth likewise increases to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.
For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up. In those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.
As these examples show, the Cisco MDS solution gives customers the ability to scale up or scale out in a flexible, non-disruptive way.
“Good design doesn’t date. Bad design does.”
Tags: 16 Gigabit, 16Gb, 16Gb Fibre Channel, 9710, architecture, availability, best practices, Cisco, cloud, Cloud Computing, Consolidation, convergence, data center, Data Mobility Manager, DCNM, design, Director, dmm, FCIP, FCoE, Fibre Channel, Fibre Channel over Ethernet, IO accelerator, it-as-a-service, MDS, MDS design, nexus, NX-OS, reliability, SAN, Storage, storage area networks, switch, switching, Unified Data Center, Unified Fabric, virtualization