Cisco Blogs



Call for #CiscoChampion(s) Nominations 2015!

October 6, 2014 at 2:47 pm PST

Perhaps you’ve seen the shirts. Maybe you’ve joined in or listened to an episode of Cisco Champion Radio. Or maybe you cannot resist learning new things and having access to experts in your area of technical expertise.

Join us--submit your Cisco Champion for Data Center nomination today!


No matter the reason, if you are curious about the Cisco Champion program, now is the time to nominate yourself or a colleague for consideration for 2015!

The Basics:

  • October 1: Open call for nominations
  • October 31: Deadline to submit nominations
  • November 25: Cisco Champion Class of 2015 announced

Act now! It’s a great opportunity to participate in everything from blogger briefings to podcasts, and to get to know your industry and your peers better. We need your voice.

 


MDS 9700 Scale Out and Scale Up

This is the final part of the High Performance Data Center Design series. We will look at how high performance, high availability and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. MDS 9710 capabilities are field proven, given the wide adoption and steep ramp within the first year of its introduction. Some of the customer use cases for the MDS 9710 are detailed here. Furthermore, Cisco has not only established itself as a strong player in the SAN space with the many industry-first innovations, like VSAN, IVR, FCoE and Unified Ports, that we introduced over the last 12 years, but also holds the leading market share in SAN.

Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and meeting future customer needs:

  • The design should be flexible enough to Scale Up (increase performance) or Scale Out (add more ports)
  • The process should not be disruptive to the current installation in terms of cabling, performance impact or downtime
  • Design principles like oversubscription ratio, latency and throughput predictability (for example, from host edge to core) shouldn’t be compromised at the port level or the fabric level

Let’s take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8Gbps and 640 hosts running at 16Gbps connected to the 4 edge MDS 9710s, for a total of roughly 16 Tbps of host connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The 2 core systems are connected to the edge and to the targets using 128 ports running at 16Gbps in each direction. The picture below shows the connectivity.

Edge Core Design Day 1
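If you want to sanity-check the arithmetic behind this day-1 design, here is a small illustrative sketch (plain Python, not a Cisco tool; the port counts, speeds and 8:1 ratio are simply the assumptions quoted above). It computes the host-edge bandwidth, the edge-to-core bandwidth the oversubscription ratio demands, and how many 16G links that works out to.

```python
# Illustrative only: verify the day-1 core-edge numbers quoted in the text.

def edge_and_core_bandwidth_gbps(host_ports, oversubscription):
    """Sum host-facing bandwidth and divide by the edge-to-core ratio."""
    edge_gbps = sum(count * speed for count, speed in host_ports)
    return edge_gbps, edge_gbps / oversubscription

# 768 hosts at 8G plus 640 hosts at 16G across the four edge MDS 9710s
edge_gbps, core_gbps = edge_and_core_bandwidth_gbps(
    host_ports=[(768, 8), (640, 16)], oversubscription=8
)
print(edge_gbps / 1000, "Tbps at the host edge")        # ~16.4 Tbps
print(core_gbps / 1000, "Tbps edge-to-core required")   # ~2 Tbps
print(core_gbps / 16, "x 16G links in each direction")  # 128.0
```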

Down the road the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core connections. This is matched with 24 additional 16G target ports. The fact that this expansion is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput or latency. As an example, if the customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.

Edge Core Design Scale UP
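To see where the 24 extra links come from, the quick illustrative calculation below (Python; the 188 new 16G host ports, the 8:1 ratio and the 16G link speed are the assumptions stated above) rounds the required core bandwidth up to whole links.

```python
import math

# Illustrative only: extra 16G ISLs needed for 188 new 16G host ports
# while preserving the 8:1 edge-to-core oversubscription ratio.
new_edge_gbps = 188 * 16          # 3,008 Gbps of new host bandwidth
core_gbps = new_edge_gbps / 8     # 376 Gbps required toward the core
isls = math.ceil(core_gbps / 16)  # round up to whole 16G links
print(isls)                       # 24 additional ISLs (matched by 24 target ports)
```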

Let’s look at another example, where the customer wants to scale up (i.e. increase the performance of the connections). Let’s use an edge-core-edge design this time. There are 6,144 hosts running at 8Gbps distributed over 10 edge MDS 9710s, for a total of roughly 49 Tbps of host-edge bandwidth. Let’s assume that this data center uses an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement the administrator designed the data center with 2 core switches and 192 16G core ports, providing roughly 3 Tbps of edge-to-core bandwidth. Let’s also assume that in the initial design the customer connected 768 storage ports running at 8G.

Edge Core Design Day1
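A short illustrative sketch (Python; the host count, speeds and 16:1 ratio are taken straight from the paragraph above) ties the 6,144 8G hosts, the oversubscription ratio and the 192 16G core ports together.

```python
# Illustrative only: day-1 numbers for the edge-core-edge design.
hosts, host_speed_gbps = 6144, 8
oversubscription = 16

edge_tbps = hosts * host_speed_gbps / 1000  # ~49 Tbps at the host edge
core_tbps = edge_tbps / oversubscription    # ~3 Tbps into the core
core_ports_16g = core_tbps * 1000 / 16      # 192 x 16G core ports
print(edge_tbps, core_tbps, core_ports_16g) # 49.152 3.072 192.0
```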

 

A few years down the road the customer may want to add an additional 6,144 8G ports and keep the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (either in throughput or in latency) and without any constraints regarding protocol, optics or connectivity. In this scenario the host-edge bandwidth doubles to roughly 98 Tbps, and the required core bandwidth increases to roughly 6 Tbps. The data center admin has multiple options for providing that extra core bandwidth: add more 16G ports (192 more ports, to be precise), or preserve the cabling and use 32G connectivity for the host-edge-to-core and core-to-target-edge links on the same chassis. The admin can just as easily use 40G FCoE at that point to meet the bandwidth needs in the core of the network without any forklift upgrade.

Edge Core Edge Design Scale Out
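Both core options fall out of the same arithmetic. The illustrative sketch below (Python, using only the figures quoted above) shows the 192 additional 16G ports and the 32G-on-existing-cabling path both landing at roughly 6 Tbps.

```python
# Illustrative only: two ways to reach ~6 Tbps of core bandwidth after
# the 8G host edge doubles, keeping the 16:1 oversubscription ratio.
target_core_gbps = (6144 * 2) * 8 / 16   # doubled host edge -> ~6,144 Gbps needed
existing_core_gbps = 192 * 16            # day-1 core: 192 x 16G = 3,072 Gbps

extra_16g_ports = (target_core_gbps - existing_core_gbps) / 16
print(extra_16g_ports)                   # 192 additional 16G ports, or...
print(existing_core_gbps * 2 / 1000)     # ...run the same 192 links at 32G -> ~6.1 Tbps
```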

Alternatively, the customer may want to upgrade the hosts to 16G connectivity while keeping the same oversubscription ratios. With 16G host connectivity the host-edge bandwidth again grows to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling and speeds.

Edge Core Edge Example 1 ScaleUP

For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up. In those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can move to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can Scale Up or Scale Out.

 

Edge Core Edge Design Scale Out Scale Up

 

As these examples show, the Cisco MDS solution gives customers the ability to Scale Up or Scale Out in a flexible, non-disruptive way.

“Good design doesn’t date. Bad design does.”
Paul Rand

 


NetApp and Cisco Deliver Extreme Performance For Oracle Database

September 30, 2014 at 8:00 am PST

Guest post by Aaron Newcomb, Solutions Marketing Manager, NetApp

No one wants a distressed 2:00 am phone call disturbing a good night’s sleep. For IT Managers and Database Administrators that 2:00 am call is typically bad news regarding the systems they support. Users in another region are not able to access an application. Customers are not placing orders because the system is responding too slowly. Nightly reporting is taking too long and impacting performance during peak business hours. When your business-critical applications running on Oracle Database are not performing at the speed of business, that creates barriers to customer satisfaction and to staying competitive. NetApp wants to help break down those barriers and help our customers get a good night’s sleep instead of worrying about the performance of their Oracle Database.

NetApp today unveiled a solution designed to address the need for extreme performance for Oracle Databases: FlexPod Select for High Performance Oracle RAC. This integrated infrastructure solution offers a complete data center infrastructure, including the networking, servers, storage and management software you need to run your business 24x7, 365 days a year. Because NetApp and Cisco validate the architecture, you can deploy your Oracle Databases with confidence and in much less time than with traditional approaches. Built with industry-leading NetApp EF550 flash storage arrays and Cisco UCS B200 M3 Blade Servers, this solution can deliver the highest levels of performance for the most demanding Oracle Database workloads on the planet.

The system will deliver more than one million IOPS of read performance for Oracle Database workloads at sub-millisecond latencies. This means faster response times for end users, improved database application performance, and more headroom to run additional workloads or consolidate databases. Not only that, but this pre-validated and pre-tested solution is based on a balanced configuration, so the infrastructure components you need to run your business work in harmony instead of competing for resources. The solution is built with redundancy in mind to eliminate risk and allow for flexibility in deployment options. The architecture scales linearly, so you can start with a smaller configuration and grow as your business needs change, optimizing return on investment. If something goes wrong, the solution is backed by our collaborative support agreement, so there is no finger pointing, only swift problem resolution.

So what would you do with one million IOPS? Build a new application that will respond to a competitive threat? Deliver faster results for your company? Increase the number of users and transactions your application can support without having to worry about missing critical service level agreements? If nothing else, imagine how great you will sleep knowing that your business is running with the performance needed for success.


Enabling Data Center Services with RISE : Remote Integrated Services Engine

Data centers are becoming increasingly smart, intelligent and elastic. With the advancement of cloud and virtualization technologies, customers demand dynamic workload management and efficient, optimal use of their resources. In addition, the configuration and administration of data center solutions is complex and is only becoming more so.

With these requirements and architectures in mind, we have built an industry-first solution called the Remote Integrated Service Engine (RISE). RISE is a technology that simplifies the provisioning and out-of-box management of service appliances like load balancers, firewalls and network analysis modules. It makes data center and campus networks dynamic, flexible, and easy to configure and maintain.

RISE can dynamically provision network resources for any type of service appliance (in both physical and virtual form factors). External appliances can now operate as integrated service modules with the Nexus Series of switches without burning a slot in the switch. This technology provides robust application delivery capabilities that accelerate application performance manyfold.

RISE is supported on all Nexus Series switches, with services like Citrix NetScaler MPX, VPX, SDX and Cisco Prime NAM, and many more in the pipeline.

Advantages & Features

  1. Simplified out-of-box experience: reduces the administrator’s manual configuration steps from 30 to 8
  2. Supported on Citrix NetScaler MPX, SDX, VPX, and Nexus 1000V with VPX
  3. Supported on the Cisco Prime Network Analysis Module
  4. Automatic Policy-Based Routing: eliminates the need for SNAT or manual PBR
  5. Direct and indirect attach modes of integration
  6. show module support for RISE
  7. attach module support for RISE
  8. Auto Attach: zero-touch configuration of RISE
  9. Health monitoring of the appliance
  10. Appliance HA and vPC supported
  11. Nexus 5K/6K support (EFT available)
  12. IPv6 support (EFT available)
  13. DCNM support
  14. Order-of-magnitude OPEX savings: reduced configuration effort and ease of deployment
  15. Order-of-magnitude CAPEX savings: wiring, power, rack space and cost savings

For more information, or to schedule an EFT or POC, contact us at nxos-rise@cisco.com.

Resources

RISE press release in the Wall Street Journal: http://online.wsj.com/article/PR-CO-20140408-905573.html
RISE At A Glance white paper: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/at-a-glance-c45-731306.pdf
RISE Video at Interop: https://www.youtube.com/watch?v=1HQkew4EE2g
Cisco RISE page: www.cisco.com/go/rise
Gartner blog on RISE: “Cisco and Citrix RISE to the Occasion”: http://blogs.gartner.com/andrew-lerner/2014/03/31/cisco-and-citrix-rise-to-the-adc-occasion/


The Role of Connectivity in Powering the Intercloud: Equinix

Today’s announcement expands the reach of the Intercloud with 250 additional data centers in 50 countries, and advances Cisco’s OpenStack-based cloud strategy to address customer requirements for a globally distributed, highly secure cloud platform capable of meeting the robust demands of the Internet of Everything. Cisco’s open approach to the Intercloud is designed for high-value application workloads, with real-time analytics and “near infinite” scalability, and allows local hosting and local provider options that enable data sovereignty around the world.

Essentially, there are three components to this Intercloud strategy that set us apart from other companies. It starts with Cisco’s cloud architectural solutions, including UCS, our Application Centric Infrastructure (ACI), and network functions virtualization (NFV) driven policy. The second component is network connectivity and providing the user with the right quality of service (QoS) experience for their application workloads. And the third component is our partners, who play a critical role in building out this network of clouds from a data center, network, application acceleration and compliance/data sovereignty perspective. In this blog I’d like to delve further into network connectivity and the role that our newest hosting partner, Equinix, plays in powering our Intercloud vision.

Importance of Network Connectivity in Hybrid Cloud

The role of the CIO has to move from a builder of services for the enterprise to an orchestrator of services across private clouds and various public clouds. This hybrid cloud orchestration has to be secure, hypervisor independent, manageable and compliant with all the enterprise’s IT policies across the full IT stack and across all the clouds. Cisco’s Intercloud capabilities are designed to do exactly this and will be enhanced by enabling the orchestration to be carried out in a private hosted environment where these cloud providers will be virtually located within the same exchange. This will facilitate workload interconnections between cloud providers in true hybrid cloud fashion with the lowest application latency and secure workload management for customers.

Where better to do this than in Equinix’s data centers and through the Equinix Cloud Exchange (ECX)? As the world’s largest IBX data center and colocation provider, the company offers fast application performance and low latency routes across all continents. The company provides a global interconnection platform called Equinix Cloud Exchange that hosts private clouds for enterprise customers and facilitates over 135,000 connections among more than 4,500 customers. Cisco will enable the Equinix Cloud Exchange to deliver secure private cloud access to the rich ecosystem of cloud service providers in Equinix data centers globally and to deploy Cisco Intercloud capabilities in 16 Equinix markets across Europe, Asia and the Americas. Equinix also plans to deploy key Cisco technologies and services across its Cloud Exchange, including the Cisco Nexus 9000 Series switch, Cisco APIC, and the Cisco Evolved Services Platform.

For Equinix, this announcement significantly enhances their value proposition to the CIO. Their Equinix Cloud Exchange solution will now be able to guarantee full bi-directional workload portability across any hypervisor and full, extensible application policy compliance across all services and clouds. This builds on their already unique interconnection capabilities, low-latency routes and extensive global footprint.

Beginning and Ending with Network Connectivity

So it is all about the connectivity, but this is not a new proposition. It’s one that has been proven consistently over the last 30 years. When networks first emerged they were proprietary and did not interoperate, and as a result customers had to choose which one to use. Cisco and our partners played a major role in seamlessly connecting them together to create the Internet. As a result, business processes were transformed, billions of dollars of value were created and a large, successful partner ecosystem emerged. As we look at the cloud landscape today we see several similarities: many independent, closed and proprietary clouds designed to maximize vendor revenue rather than to enable interoperability, security and compliance. The combined value of Cisco and Equinix will provide fast, open, secure connectivity and will unleash the value of hybrid cloud for enterprises globally.

Together with our partners we will connect the clouds to create the Intercloud.

 
