Cisco Blogs


Cisco publishes results of the New SAP Concurrent Benchmark


Cisco is the first organization to publish a result for the new SAP Concurrent Benchmark and have it certified by SAP on behalf of the SAP Benchmark Council. The benchmark allows vendors to demonstrate how well their SAP environments work side by side in a shared environment. Getting a new benchmark running and tuned can be difficult for some vendors, but because the Cisco Unified Computing System™ (Cisco UCS®) is a platform built for virtualization, we were the first to demonstrate results, and we did it all using virtualization with Microsoft software: the operating system, the database management system, and the hypervisor.

Recently, the SAP Benchmark Council created a new category of concurrent benchmarks that allows benchmarking of multiple SAP dialog applications
running concurrently using shared resources—in our scenario, on a single server. The benchmark rules allow the use of any supported partitioning and isolation technologies, including hypervisors, hardware partitioning, and operating system containers. With a benchmark designed by SAP to measure the performance of these environments, we now can make objective comparisons between the same SAP applications running on bare metal or in concurrent environments with results certified by SAP.

Benchmark Results

Not only did Cisco publish the first-ever results on this new concurrent benchmark, but the results are remarkable. Comparing them with results for the same software configuration running on a bare-metal server shows that the penalty for running in a virtualized environment was only 6.6 percent in terms of benchmark users, and only 6.7 percent in terms of SAP Application Performance Standard (SAPS) score.

With the Cisco UCS C240 M4 Rack Server powered by the Intel Xeon processor E5-2600 v3 product family, Cisco supports a total of 14,975 SAP SD users or a total SAPS score of 81,827. This result is excellent for virtualized environments and is further evidence that when you choose Cisco® servers and a complete Microsoft software stack, you have access to outstanding SAP performance.
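As a quick sanity check on the percentages quoted above, the implied bare-metal figures can be recovered from the virtualized results. This is a back-of-envelope sketch, not data from the certified benchmark disclosures:

```python
# Virtualized results quoted in the post.
virt_users = 14975   # SAP SD benchmark users
virt_saps = 81827    # SAPS score

# Stated virtualization penalties relative to bare metal.
user_penalty = 0.066
saps_penalty = 0.067

# Implied bare-metal figures: virtualized = bare_metal * (1 - penalty).
bare_metal_users = virt_users / (1 - user_penalty)
bare_metal_saps = virt_saps / (1 - saps_penalty)

print(round(bare_metal_users), round(bare_metal_saps))
```

The implied bare-metal numbers land in the same ballpark as the virtualized ones, which is the point: the virtualization overhead is small.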


Many organizations and SAP administrators prefer to run their landscapes on Microsoft software stacks. This first-ever SAP Concurrent Benchmark result shows just how easily you can incorporate virtualization software from Microsoft to add more flexibility to SAP application deployments with little performance impact. Now you can use our SAPS score, certified by SAP on behalf of the SAP Benchmark Council, to estimate your capacity on Cisco UCS running Microsoft software, and run all your SAP landscapes in a shared environment with higher utilization rates and less infrastructure.



Seven Ways to Move to the Cloud

While cloud computing is based on a number of technology innovations, I’m going to write for the non-technical person who I think needs to understand this major shift.  In the end, cloud computing will affect every business, every industry.  I’ll start this blog by sharing a story.

A few years ago, I was in a meeting with six CIOs of one of the largest healthcare providers. I asked each a question as they introduced themselves: “What are you working on?”

The first CIO, Bill, replied, “I’m working on a strategy to move to cloud.”

Next, I asked Mary, “What do you do?” Mary also said she was working on a strategy to move to the cloud.

We went around the room, and every one of them had the same answer.

I asked, “So what does that mean, working on a strategy to move to the cloud?”

They collectively said, “We’re really not sure, but we’re working on it.”

I wasn’t actually there to talk to them about cloud computing, but I said, “Give me 10 to 15 minutes to help you think about what it might mean to move to the cloud.”

I’d like to share an abbreviated view of this discussion in this blog, beginning with reviewing my cloud-computing framework.


Run simple, run fast, run lean. Run SAP on Cisco UCS.

The New Year has arrived, and with it, SAP has added new solutions to its portfolio. The latest announcement is SAP Business Suite 4 SAP HANA (S/4HANA).

Over the past several years, SAP’s overall movement to create an in-memory platform has been strengthened by industry-leading and mission-critical solutions such as Tailored Datacenter Integration (TDI), integrated infrastructure, big data, and the Internet of Things. How much do you know about these solutions? Would you like to learn more?

Cisco has developed a series of educational webinars to provide a deep dive into each of these solution areas.

These webinars have been created to educate and provide you the opportunity to ask the hard questions about how these SAP and SAP HANA solutions will fit into your organization.

Register for the individual webinars to learn how you can maximize your company’s competitive edge.


Link 1
Link 2
Link 3
Link 4
Link 5

We look forward to seeing you there!



New Cisco APIC Software allows stretched ACI Fabric across long distances

In the world of Cisco ACI, there is never a shortage of excitement and action. Today, we are pleased to bring to your attention the latest Cisco APIC software release. If you wonder what’s hot off the press in APIC software release 1.0(3f) for the Nexus 9000 Series in ACI mode, there are quite a few new features.

The Stretched Fabric feature captures the headlines. For quite some time now, customers have been asking for an ACI fabric that can stretch across data centers and over long distances. The new software allows the leaf and spine switches that form a fabric to be located up to 30 km apart. It also removes the restriction that every leaf must be connected to all spines. Let us take a closer peek at the stretched fabric feature.

ACI Stretched Fabric Topology

A stretched ACI fabric is a single fabric. It is a partially meshed design that connects ACI leaf and spine switches distributed in multiple locations. Typically, an ACI fabric implementation is a single site where a full mesh design connects each leaf switch to each spine switch in the fabric, which yields the best throughput and convergence. In multi-site scenarios, full mesh connectivity may not be possible or may be too costly: multiple sites, buildings, and rooms can span distances that are not serviceable by enough fiber connections, or it may be too costly to connect each leaf switch to each spine switch across the sites. The diagram below illustrates the stretched fabric architecture.
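To make the cabling trade-off concrete, here is a small back-of-envelope sketch comparing link counts in a full mesh versus a stretched, partially meshed design. The switch counts are made up for illustration and are not from any Cisco reference design:

```python
def full_mesh_links(leaves: int, spines: int) -> int:
    """Single-site fabric: every leaf connects to every spine."""
    return leaves * spines

def stretched_fabric_links(leaves_per_site: int, spines_per_site: int,
                           transit_leaves_per_site: int) -> int:
    """Two symmetric sites: every leaf connects to its local spines,
    and only the transit leaves also connect to the remote site's
    spines (illustrative model of the stretched fabric design)."""
    local = 2 * leaves_per_site * spines_per_site
    cross_site = 2 * transit_leaves_per_site * spines_per_site
    return local + cross_site

# Hypothetical sizing: 10 leaves and 2 spines per site, 2 transit leaves.
full = full_mesh_links(20, 4)              # all switches fully meshed
stretched = stretched_fabric_links(10, 2, 2)
print(full, stretched)
```

Only the cross-site links, which are the expensive long-distance fiber runs, are reduced; local connectivity stays fully meshed.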

Transit Leaf Switch Guidelines

Transit leaf refers to the leaf switches that provide connectivity between the two sites. Transit leaf switches connect to spine switches at both sites. There are no special requirements and no additional configuration required for transit leaf switches.

Provision Transit and Border Leaf Functions on Separate Switches

The key benefits of a stretched fabric include workload portability and VM mobility. The stretched ACI fabric behaves the same way as a regular ACI fabric, supporting full VMM integration. For example, one VMware vCenter operates across the stretched ACI fabric sites. The ESXi hosts from both sites are managed by the same vCenter and Distributed Virtual Switch (DVS), which are stretched between the two sites.

The ACI switch and APIC software recover from various failure scenarios. Check out the failover scenario analysis for details.

Additional resources



MP-BGP EVPN control plane for VXLAN – SDN is growing up

As developers, we are all proud parents of our products. Much like our own children, we see them born, we care for and feed them, and we watch them carefully while they are unstable during the early years (we do not go out much). They become more stable over time, and then something happens: they grow up and need to interact with others. This could describe some of the early customer experiences with first-generation SDN LAN Emulation technologies.

Cisco Systems’ introduction and support of the Multi-Protocol BGP (MP-BGP) EVPN control plane for VXLAN is an indication that the SDN industry is growing up, leveraging standards-track protocols, and enabling SDN to scale and interact with others. This is far more significant to the SDN industry than one can glean from a single press release, and we will expand on its relevance in this blog.

Let’s start with some basic understanding and a bit of SDN history.

SDN encapsulation into overlays or tunnels is not a new technology; it has been supported for many years. RFC 1701, which describes GRE encapsulation, was written in 1994. Anyone who uses a VPN also uses encapsulation such as IPsec, so there is nothing new there. What is new are the SDN controller applications: how they enable logical network functions and support centralized automation of the infrastructure for data center networks. I will not go into all of the use cases for SDN overlays, as you can find those readily by speaking to your vendor or searching the web.

There are multiple controller architectures available for SDN. I will simply characterize them in three buckets and two additional qualifiers: OpenFlow, Integrated, and Decoupled are the three buckets; SDN LAN Emulation and Policy-based are the two qualifiers. Much of the confusion for customers today is that vendors are still debating, and attempting to monetize, “their” method of SDN.

There are key distinctions between the two qualifiers, SDN LAN Emulation and Policy-based. SDN LAN Emulation controllers reproduce properties of the layer 2 and layer 3 networks in the overlay, including address learning and distribution, leveraging x86 servers to emulate LAN functions; the overlay termination end points map logical network destinations to physical next hops in the overlay.

Policy-based controllers use fewer x86 servers by mapping policy at the physical or virtual switch. They benefit from the integration of the overlay into vSwitches and into merchant and custom ASIC switches in an open and cooperative manner, which eliminates the need for LAN Emulation x86 components and provides more scale with far fewer components than the SDN LAN Emulation model.

Five to six years ago, the SDN industry started with controller applications providing a software function described as the ability to reproduce network functions from the physical network into a logical network, and to overlay that logical network on top of the physical infrastructure. I refer to this reproduction as SDN LAN Emulation because it has similarities to ATM LAN Emulation.

Compute virtualization evolved similarly: it started as software only, and then Intel introduced VT and AMD introduced AMD-V, because virtualization worked better when it was cooperative with hardware. In the early days of SDN LAN Emulation controllers, none of the overlay or gateway functions existed in hardware, but today 70 to 90 percent of the SDN LAN Emulation controller use cases are supported in every merchant ASIC from Broadcom.

SDN controllers perform three basic operations: they run the SDN application, often described as a distributed computing application; they expose a northbound API for orchestration; and they expose a southbound API for programming physical and virtual overlay termination end points. The overlay termination end points are referred to here as VTEPs (VXLAN Tunnel End Points), as VXLAN is the most common encapsulation and end point discussed for SDN today.
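The three operations can be sketched in a few lines of Python. This is a hypothetical toy, not any vendor's real controller API; every name here is illustrative:

```python
class ToyController:
    """Hypothetical sketch of the three basic SDN controller
    operations described above; structure and names are made up."""

    def __init__(self):
        # Southbound state: vtep_id -> {overlay destination -> next hop}
        self.vteps = {}

    def run_application(self, intents):
        """1) Run the SDN application (here trivially sequential)."""
        return [self.northbound_request(intent) for intent in intents]

    def northbound_request(self, intent):
        """2) Northbound API: accept orchestration intent, translate it."""
        for vtep_id, mapping in intent["vtep_mappings"].items():
            self.program_vtep(vtep_id, mapping)
        return {"status": "applied"}

    def program_vtep(self, vtep_id, mapping):
        """3) Southbound API: program an overlay termination end point."""
        self.vteps.setdefault(vtep_id, {}).update(mapping)
```

For example, an intent mapping a VNI on one leaf to a remote VTEP address would end up, via the southbound path, as a forwarding entry on that VTEP.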

This is a basic but fair characterization of SDN controllers, irrespective of whether they are integrated, decoupled, LAN Emulation, or Policy-based. Integrated controllers provide the SDN controller running the application, the northbound API, the southbound API, and the VTEP from a single vendor. Decoupled controllers do all of the items mentioned above, but they are meant to support the integration of separate components from third-party vendors in each of the aforementioned categories.

Examples of integrated controllers are VMware NSX and Cisco ACI. In each of these implementations, the SDN controller application, the northbound and southbound APIs, and either a physical or virtual VTEP are provided by the same vendor.

VMware NSX is an SDN LAN Emulation controller that integrates with the NSX vSwitch VTEP provided by VMware for vSphere. Today VMware has a multi-hypervisor product that enables the NSX Multi-Hypervisor controller, with a VMware-supplied version of Open vSwitch, to speak with Xen and KVM hypervisors (you must use VMware’s version of OVS). VMware tightly controls the vSwitch APIs for VTEPs in the vSphere kernel, unlike Red Hat, XenServer, and Microsoft. VMware leverages the informational RFC OVSDB to integrate with some vSwitches and third-party hardware VTEPs.

Cisco Systems Application Centric Infrastructure (ACI) is a policy-based controller architecture with the Application Policy Infrastructure Controller (APIC), northbound and southbound APIs, and physical and virtual VTEPs. Cisco works with open hypervisor vSwitches such as OVS on Xen and KVM, Hyper-V, VMware VDS and VSS, the Cisco Application Virtual Switch (AVS), and the Cisco Nexus 1000V, as well as third-party hardware VTEP vendors and virtual and physical layer 4–7 appliance vendors, each integrating the OpFlex control protocol (outside of VMware-provided vSwitches) as a southbound API and distributed control system leveraging a declarative policy model. The northbound and southbound APIs are fully published by Cisco with ACI. Cisco-provided VTEPs, both physical and virtual, also support or integrate directly with MP-BGP EVPN as a control plane for VXLAN.

MP-BGP EVPN as a control plane for VXLAN is a standards-track, distributed control plane offering a significant shift in customers’ ability to build and interconnect SDN overlay networks, while removing the need to run or configure multicast routing in the physical network.

A little more background is required to understand why Multi-protocol BGP eVPN as a control plane for VXLAN is so significant, so please bear with me a few more paragraphs, as the point is coming.

Various SDN controllers, including VMware NSX, leverage the informational RFC OVSDB. OVSDB is a management protocol supporting programmability between an SDN controller and a vSwitch or hardware VTEP, providing configuration such as termination of tunnels in an overlay network. The OVSDB VTEP.5 schema is shown below:

VTEP.5 Schema
Table — Purpose
Global — Top-level configuration
Manager — OVSDB management connection
Physical_Switch — A physical switch
Physical_Port — A port within a physical switch
Logical_Binding_Stats — Statistics for a VLAN on a physical port bound to a logical network
Logical_Switch — A layer-2 domain
Ucast_Macs_Local — Unicast MACs (local)
Ucast_Macs_Remote — Unicast MACs (remote)
Mcast_Macs_Local — Multicast MACs (local)
Mcast_Macs_Remote — Multicast MACs (remote)
Logical_Router — A logical layer-3 router
Physical_Locator_Set — Physical_Locator_Set configuration
Physical_Locator — Physical_Locator configuration
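For illustration, here is roughly what an OVSDB “transact” RPC (as defined in the informational RFC 7047) looks like when a controller inserts a remote unicast MAC into the schema above. The UUIDs and addresses are placeholders:

```python
import json

# Sketch of an OVSDB JSON-RPC "transact" request inserting a remote
# MAC that the controller learned, into the Ucast_Macs_Remote table
# of the hardware_vtep database. All values are illustrative.
rpc = {
    "method": "transact",
    "id": 1,
    "params": [
        "hardware_vtep",                     # database name
        {
            "op": "insert",
            "table": "Ucast_Macs_Remote",
            "row": {
                "MAC": "00:11:22:33:44:55",
                "ipaddr": "10.0.0.5",
                # References to rows in Logical_Switch and
                # Physical_Locator; the UUIDs are placeholders.
                "logical_switch": ["uuid", "00000000-0000-0000-0000-000000000001"],
                "locator": ["uuid", "00000000-0000-0000-0000-000000000002"],
            },
        },
    ],
}

wire = json.dumps(rpc)  # what actually goes over the management channel
```

Every remote address the overlay needs ends up as a row like this, pushed by the controller, which is exactly why the controller sits in the address-learning data path.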

Looking at the table above, you quickly realize that it represents a limited set of options, and that configuring a VTEP will require more interaction between the SDN controller and the VTEP than what is defined in the spec. There are multiple elements in this table, but the primary one is to carry layer 2 reachability information in the overlay and communicate it between the controllers and the VTEPs. An SDN LAN Emulation controller leveraging OVSDB is involved in address learning and in distributing addresses to the VTEPs. This means that the data path depends on the capacity of the x86 platforms running the controller software and their ability to learn and distribute addresses to the VTEPs, and that the VTEPs must be tightly coupled to the OVSDB spec, leveraging an imperative model. Any feature must be conceptualized in the SDN LAN Emulation environment and then mapped to the data path at the VTEPs doing the forwarding or gateway functions.

This is a major friction point in large SDN installations because the controller dictates the feature velocity and scale, the VTEP features must be tightly aligned with this model, and any feature changes are limited by the development of this specification, which is an informational draft. VTEPs are primarily ToR switches and vSwitches. Any other configuration or innovation must be handled through vendor integration outside of the specification and coordinated across the platforms: features such as VTEP or gateway high availability, link management, and others. This is where the marketing of “open” meets the reality of vendor dependence and integration.

Vendors that exclusively support OVSDB as the management protocol and schema for third-party hardware VTEPs, and for integration with vSwitches, are limited by the scale and openness implications of this model. Remember, however, that the basic function of OVSDB here is to carry layer 2 reachability information in the overlay.

What happens if you want to extend your layer 2 and layer 3 information across a data center interconnect, to WAN routers, or across overlay networks that may have other SDN controllers, leveraging a standards-based protocol?

Enter the MP-BGP EVPN control plane for VXLAN.

The MP-BGP EVPN control plane for VXLAN is an industry standards-track control protocol that enables multi-vendor interoperability and offers the following key benefits:
  • Control plane learning of end-host layer 2 and layer 3 reachability information, to build more robust and scalable VXLAN overlay networks.
  • Leverages a decade of MP-BGP VPN technology to support scalable multi-tenant VXLAN overlay networks.
  • The EVPN address family carries both layer 2 and layer 3 reachability information, providing integrated bridging and routing in VXLAN overlay networks.
  • Minimizes network flooding through protocol-driven host MAC/IP route distribution and ARP suppression on the local VTEPs.
  • Provides optimal forwarding for east-west and north-south traffic with the distributed anycast gateway function.
  • Provides VTEP peer discovery and authentication, which mitigates the risk of rogue VTEPs in the VXLAN overlay network.
Now you no longer have to be limited to one controller, one vSwitch, and one SDN domain. The MP-BGP EVPN control plane for VXLAN can create independent exchanges of layer 2 and layer 3 reachability information across overlays, VXLAN gateways, and data center or WAN devices, and it dramatically improves scale because it is a distributed control plane not limited by the scale implications, or by the lock-in control and development, of one schema. Cisco Nexus 9000 with NX-OS, Cisco ACI, and vSwitches all integrate with or directly support the MP-BGP EVPN control plane for VXLAN, expanding the open choices customers have for SDN from Cisco.
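As a sketch of what this looks like in practice, the following is an abbreviated NX-OS-style configuration enabling the BGP EVPN control plane for a VXLAN VNI on a Nexus 9000 leaf. The AS number, addresses, and VNI are placeholders, so treat it as an outline rather than a validated configuration:

```
nv overlay evpn                        ! enable the EVPN control plane
feature bgp
feature nv overlay
feature vn-segment-vlan-based

router bgp 65000
  neighbor 10.0.0.1 remote-as 65000    ! spine acting as route reflector
    address-family l2vpn evpn
      send-community extended

interface nve1
  source-interface loopback0
  host-reachability protocol bgp       ! EVPN instead of flood-and-learn
  member vni 10100
    suppress-arp                       ! ARP suppression on the VTEP
```

Note that `host-reachability protocol bgp` is the line that replaces multicast-based flood-and-learn with protocol-driven route distribution, which is the shift the preceding paragraphs describe.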

So what should you be asking of your vendors? Every VTEP in your network should have the ability to integrate with or support the MP-BGP EVPN control plane for VXLAN, and it should be in every RFP. You should ensure each API is fully published, without third-party vendors being restricted from accessing or integrating with these APIs; this includes vSwitches inside the hypervisor, top-of-rack switches, and layer 4–7 appliances.

In the transformation of traditional IT models to support DevOps and cloud operations, vendors’ willingness to cooperate varies over time. Leveraging standards-track protocols such as the MP-BGP EVPN control plane for VXLAN, and keeping the APIs fully published, ensures that the customer is no longer trapped by one vendor’s implementation and can drive their own integration or automation by calling the URI objects delivered through open and published RESTful APIs.
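To close with a concrete picture of “calling URI objects through published RESTful APIs,” here is a sketch that builds an authentication request in the style of the APIC REST API. The host, credentials, and exact endpoint paths should all be treated as illustrative:

```python
import json

host = "https://apic.example.com"   # placeholder controller address

# APIC-style login payload, posted to /api/aaaLogin.json.
login_payload = {"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}}
login_url = f"{host}/api/aaaLogin.json"

# After login, objects are addressed as URIs by class or distinguished
# name; for example, reading all tenant objects:
tenants_url = f"{host}/api/class/fvTenant.json"

body = json.dumps(login_payload)    # request body for the login POST
```

Because the object model is exposed as plain URIs and JSON, any HTTP client or automation tool can drive it, which is the practical meaning of a fully published API.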