Announcing the new Data Center and Cloud Community!

It’s finally here: the new Data Center and Cloud community framework has launched! We created new content spaces for Compute and Storage, Software Defined Networks, Data Center and Networking, and OpenStack and Open Source Software.


Cisco Data Center and Cloud Community Infrastructure


An Open Framework for Hosting Multi Data Center Distributed Applications on the Cisco Cloud

Ken Owens (@kenowens12), Keith Chambers (@keithchambers), and Jason Plank (@_japlank_)

Over the last few years, we’ve heard a lot about new ways of designing software applications. In particular, “Microservice Architecture” has emerged as a way to design an application as a set of individual components that together make up the whole. Refer back to this recent blog regarding the impact of microservices and containers on application enablement for the enterprise. Many attempts to define the architecture have been undertaken, but the complexities of the software platforms involved and differing viewpoints on the necessary underlying components have not yet resulted in an agreed-upon solution.


Scaling OpenStack L3 using the Cisco ASR1K platform

Cisco has developed a plug-in that integrates the ASR 1000 Series Router (ASR1K) into OpenStack to offload L3 capabilities onto dedicated routing hardware. The plug-in was demonstrated at Cisco Live in a proof-of-concept environment, and we are planning demos of a cloud solution based on the ASR1K plug-in at the OpenStack Summit in Vancouver. The plug-in is open source and will be submitted upstream into OpenStack. It will also be available from Cisco’s Neutron tech-preview repository for Juno.
OpenStack offers a reference software implementation of Layer 3 functionality. Routing, static NAT (floating IPs), and dynamic NAT/SNAT (VM “Internet” access) are handled by the L3 agent, which runs as part of the Neutron component. The L3 agent relies on Linux iptables to define forwarding rules, and therein lies a critical scalability issue: iptables has inherent scaling shortcomings, so for highly scalable clouds with many route and NAT operations it becomes a serious bottleneck.
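
To make the bottleneck concrete, here is a minimal sketch (not the actual Neutron L3 agent code) of the kind of NAT rules the reference implementation programs inside a router namespace for a single floating IP; the addresses are hypothetical. Every floating IP appends another pair of rules, and because iptables chains are evaluated linearly, per-packet matching cost grows with the number of rules.

```python
# Illustrative only: approximates the static 1:1 NAT rules the reference
# L3 agent installs per floating IP. Addresses are hypothetical.

FLOATING_IP = "203.0.113.10"   # hypothetical floating IP
FIXED_IP = "10.0.0.5"          # hypothetical tenant VM address

def floating_ip_rules(floating_ip, fixed_ip):
    """Return iptables commands approximating static NAT for one floating IP."""
    return [
        # Inbound: rewrite the floating IP to the VM's fixed address.
        "iptables -t nat -A PREROUTING -d %s/32 -j DNAT --to-destination %s"
        % (floating_ip, fixed_ip),
        # Outbound: source-NAT the VM's traffic back to the floating IP.
        "iptables -t nat -A POSTROUTING -s %s/32 -j SNAT --to-source %s"
        % (fixed_ip, floating_ip),
    ]

if __name__ == "__main__":
    for rule in floating_ip_rules(FLOATING_IP, FIXED_IP):
        print(rule)
```
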
Cisco offers the ASR1K routing platform, typically used in data centers for WAN edge operations. It performs NAT and L3 forwarding in hardware and provides L3 high availability (HSRP). The ASR configuration agent builds upon the same technology used for the integration of the Cisco Cloud Services Router (CSR1000v) into OpenStack.
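
For context, tenants consume L3 through the same Neutron API regardless of whether the backend is the reference L3 agent or the ASR1K plug-in. Below is a hedged sketch of that workflow using python-neutronclient (Juno era); the credentials, endpoint, and uppercase IDs are placeholders, not real values.

```python
# A sketch of the tenant-facing Neutron L3 workflow. The API is the same
# whether the backend is the reference L3 agent or the ASR1K plug-in.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="demo", password="secret",          # placeholder credentials
    tenant_name="demo",
    auth_url="http://controller:5000/v2.0",      # placeholder endpoint
)

# Create a router and attach a tenant subnet to it.
router = neutron.create_router({"router": {"name": "demo-router"}})["router"]
neutron.add_interface_router(router["id"], {"subnet_id": "SUBNET_ID"})

# Set the external gateway (enables SNAT for VM "Internet" access).
neutron.add_gateway_router(router["id"], {"network_id": "EXT_NET_ID"})

# Allocate a floating IP and bind it to a VM port (static NAT).
neutron.create_floatingip({"floatingip": {
    "floating_network_id": "EXT_NET_ID",
    "port_id": "VM_PORT_ID",
}})
```
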


Your Design Engineers Need Support and ‘Expertise on Tap’ Too!

If you are involved in designing, supporting, or managing a data center, you will undoubtedly rely on technical support services from one or more vendors. Running a data center, there is always the risk of a hardware failure or of being impacted by a software defect. While relatively rare, hardware does unfortunately fail on occasion, but you undoubtedly have technical support in place to deal with such problems. You may have invested in a few extra switches as backup, and you may also have failover mechanisms in place. Almost certainly you will have a support contract with your Cisco partner or with Cisco, so you have break/fix expertise on tap for when something goes wrong. This is critical support for your business, no debate from me.

Engineer Under Stress!

Now, arguably the most important resource in your data center is not so much the individual switches, routers, or servers. It’s your engineers, the people who design and support your data center. If they have a problem, where and how do they get help? Who helps them when they are stretched, or when business pressures mount? Of course, their colleagues and managers can and will help. Where, however, can they tap into additional sources of expertise so that they can become even more productive for you? This is where Cisco Optimization Services come in, including our award-winning Cisco Network Optimization Service (or “NOS” for short), the Collaboration Optimization Service, and the one I’m involved with, Cisco Data Center Optimization Services.



Investigating OpenStack’s Multicast capabilities (Part 1 of 3)

This is my first blog post within the Data Center and Cloud technology area. I recently joined the OpenStack@Cisco team under Lew Tucker as a Cloud Architect focusing on advanced OpenStack systems research. As part of this role, I performed a gap analysis on the functionality (or the lack thereof) of multicast within an OpenStack-based private cloud. Coming from Advanced Services, I have seen multicast serve as a critical component of many data centers, providing group-based access to data (streaming content, video conferencing, etc.). Within a cloud environment this requirement is at least as critical as it is for enterprise data centers, if not more so.

This blog will be the first in a series highlighting the current state of multicast capabilities within OpenStack. Here, I focused the analysis on OpenStack Icehouse running on top of Red Hat Enterprise Linux 7 with OVS and a VLAN-based network environment. I would like to thank the OpenStack Systems Engineering team for their great work laying the foundation for this effort (preliminary tests on Ubuntu and Havana).
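
Before measuring video quality, it helps to verify that multicast forwarding works at all between instances. Here is a minimal sender/receiver probe using only the Python standard library; it is an assumed helper for illustration, not part of the TeraVM tooling, and the group address and port are arbitrary choices.

```python
# Minimal multicast probe: run "python probe.py recv" in one VM, then
# "python probe.py" in another on the same tenant network.
import socket
import struct
import sys

GROUP = "239.1.1.1"   # arbitrary administratively scoped multicast group
PORT = 5007

def sender():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL > 1 so datagrams can cross a router hop if needed.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
    sock.sendto(b"multicast probe", (GROUP, PORT))

def receiver():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Joining the group triggers an IGMP membership report on the wire.
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print(sock.recvfrom(1024))

if __name__ == "__main__":
    receiver() if sys.argv[1:] == ["recv"] else sender()
```
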

I used a virtual traffic generator called TeraVM to generate multicast-based video traffic, allowing for Mean Opinion Score calculation. The Mean Opinion Score (MOS) is a calculated value showing the quality of video traffic based on latency, jitter, out-of-order packets, and other network statistics. Historically, the MOS value was based on human perception of the quality of voice calls, hence the word “opinion.” Since then it has developed into an industry-standardized way of measuring the quality of video and audio in networks, which makes it a good way to objectively measure the performance of multicast on an OpenStack-based cloud. The MOS value ranges from 1 (very poor) to 5 (excellent); anything above ~4.2 is typically acceptable for service-provider-grade video transmission.
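
To illustrate how such a score can be derived, here is a rough sketch in the spirit of the ITU-T G.107 E-model, which maps network impairments to an R-factor and then to a MOS. This is not TeraVM’s proprietary calculation; the impairment slopes below are simplified assumptions for demonstration only.

```python
# Rough, illustrative MOS estimate (E-model style). Impairment factors are
# simplified assumptions, not a standards-complete implementation.

def r_factor(latency_ms, jitter_ms, loss_pct):
    """Crude R-factor: start from the default base and subtract impairments."""
    r = 93.2                                  # default E-model base value
    effective_delay = latency_ms + 2 * jitter_ms
    if effective_delay > 160:                 # delay impairment (simplified)
        r -= (effective_delay - 160) / 10.0
    r -= 2.5 * loss_pct                       # loss impairment (assumed slope)
    return max(0.0, min(100.0, r))

def mos(r):
    """Standard E-model mapping from R-factor to the MOS scale (tops at 4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

# Example: 40 ms latency, 5 ms jitter, 0.1% loss -> MOS around 4.4.
print(round(mos(r_factor(latency_ms=40, jitter_ms=5, loss_pct=0.1)), 2))
```
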

I performed the multicast testing on a basic controller/compute-node OpenStack environment, with Neutron handling network traffic. In this blog I focus my analysis solely on the open-source components of OpenStack; Cisco products (CSR and N1K) will be discussed in a follow-up blog. The tenant/provider networks are separated using VLANs. A Nexus 3064-X is used as the top-of-rack switch providing physical connectivity between the compute nodes, and the nodes are based on UCS C-Series servers.
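
As a sketch of how the VLAN-separated tenant networks in such an environment can be defined, the following uses python-neutronclient with the Neutron provider network extension (Icehouse era). The credentials, endpoint, physical network label, VLAN ID, and CIDR are placeholders, not the actual lab values.

```python
# A sketch of creating a VLAN provider network for multicast testing.
# All values below are placeholders for illustration.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="admin", password="secret",            # placeholder credentials
    tenant_name="admin",
    auth_url="http://controller:5000/v2.0",         # placeholder endpoint
)

network = neutron.create_network({"network": {
    "name": "mcast-test-net",
    "provider:network_type": "vlan",
    "provider:physical_network": "physnet1",        # assumed bridge mapping
    "provider:segmentation_id": 100,                # assumed VLAN ID
}})

neutron.create_subnet({"subnet": {
    "network_id": network["network"]["id"],
    "ip_version": 4,
    "cidr": "10.10.0.0/24",                         # placeholder CIDR
}})
```
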
