It’s finally here: the new Data Center and Cloud community framework has launched! We created new content spaces for Compute and Storage, Software Defined Networks, Data Center and Networking, and OpenStack and OpenSource Software.
Over the last few years, we’ve heard a lot about new ways of designing software applications. In particular, “microservice architecture” has emerged as a way to build an application out of small, individually deployable components. Refer back to this recent blog regarding the impact of microservices and containers on application enablement for the enterprise. Many attempts have been made to define the architecture, but the complexity of software platforms and differing viewpoints on the necessary underlying components have so far prevented an agreed-upon definition.
If you are involved in designing, supporting, or managing a data center, you will undoubtedly rely on technical support services from one or more vendors. There is always the risk of a hardware failure or of being impacted by a software defect. While relatively rare, hardware does occasionally fail. You may have invested in a few spare switches as backup, and you may have failover mechanisms in place. Almost certainly you will have a support contract with your Cisco partner or with Cisco, so you have break/fix expertise on tap for when something goes wrong. This is critical support for your business; no debate from me.
Now, arguably the most important resource in your data center is not the individual switches, routers, or servers. It’s your engineers, the people who design and support your data center. If they have a problem, where and how do they get help? Who helps them when they are stretched, or when business pressures mount? Of course, their colleagues and managers can and will help. Where, however, can they tap into additional sources of expertise so that they can become even more productive for you? This is where Cisco Optimization Services come in, including our award-winning Cisco Network Optimization Service (“NOS” for short), the Collaboration Optimization Service, and the one I’m involved with, Cisco Data Center Optimization Services.
This is my first blog post within the Data Center and Cloud technology area. I recently joined the OpenStack@Cisco team under Lew Tucker, focusing on advanced OpenStack system research as a Cloud Architect. As part of this role, I performed a gap analysis on the functionality (or the lack thereof) of multicast within an OpenStack-based private Cloud. Coming from Advanced Services, I have seen multicast as a critical component of many data centers, providing group-based access to data (streaming content, video conferencing, etc.). Within a Cloud environment this requirement is at least as critical as it is for enterprise data centers, if not more so.
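To make the workload requirement concrete: a tenant application consumes a multicast stream simply by joining a group, and the network underneath (physical or virtual) is expected to deliver the traffic. Here is a minimal receiver sketch using Python’s standard socket API; the group address 239.1.1.1 and port 5004 are arbitrary illustrative choices, not values from the test setup.

```python
import socket
import struct

def join_multicast_group(group: str, port: int) -> socket.socket:
    """Create a UDP socket subscribed to a multicast group.

    The group/port used below are illustrative; any address in the
    administratively scoped 239.0.0.0/8 range works the same way.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # receive datagrams addressed to this port
    # IGMP membership request: group address + local interface (any)
    mreq = struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# Example: subscribe to a (hypothetical) video stream group
receiver = join_multicast_group("239.1.1.1", 5004)
# data, sender = receiver.recvfrom(2048)  # would block until a packet arrives
receiver.close()
```

The interesting question for the rest of this series is what happens to that IGMP join and the resulting traffic once the receiver is a VM behind neutron, OVS, and VLAN trunks rather than a host on a physical segment.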
This blog will be the first in a series highlighting the current state of multicast capabilities within OpenStack. Here, I focused the analysis on OpenStack Icehouse running on top of Red Hat 7 with OVS and a VLAN-based network environment. I would like to thank the OpenStack Systems Engineering team for their great work on laying the foundation for this effort (preliminary tests on Ubuntu and Havana).
I used a virtual traffic generator called TeraVM to generate multicast-based video traffic, allowing for Mean Opinion Score calculation. The Mean Opinion Score, or MOS, is a calculated value expressing the quality of video traffic based on latency, jitter, out-of-order packets, and other network statistics. Historically, the MOS value was based on human perception of the quality of voice calls, hence the word “opinion.” Since then it has evolved into an industry-standardized way of measuring the quality of video and audio in networks. It is therefore a good way to objectively measure the performance of multicast on an OpenStack-based Cloud. The MOS value ranges from 1 (very poor) to 5 (excellent). Anything above ~4.2 is typically acceptable for Service Provider grade video transmission.
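TeraVM’s exact MOS algorithm is proprietary, but the general idea can be sketched with an E-model-style mapping (ITU-T G.107): degrade a transmission-rating factor R for latency, jitter, and loss, then convert R to a 1–5 MOS. The penalty weights in `video_mos` below are made-up illustrative values, not TeraVM’s; only the R-to-MOS mapping in `r_to_mos` is the standard one.

```python
def r_to_mos(r: float) -> float:
    """Standard E-model mapping from R-factor to MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

def video_mos(latency_ms: float, jitter_ms: float, loss_pct: float) -> float:
    """Toy MOS estimate: start from the maximum narrowband R of 93.2
    and subtract impairment penalties. The weights are assumptions for
    illustration, not TeraVM's proprietary model."""
    r = 93.2
    r -= 0.024 * latency_ms   # delay impairment
    r -= 0.3 * jitter_ms      # jitter impairment (assumed weight)
    r -= 5.0 * loss_pct       # packet-loss impairment (assumed weight)
    return round(r_to_mos(r), 2)

# A clean path scores near the top of the scale...
print(video_mos(latency_ms=5, jitter_ms=1, loss_pct=0.0))
# ...while loss and jitter pull the score under the ~4.2 SP-grade threshold.
print(video_mos(latency_ms=40, jitter_ms=15, loss_pct=1.0))
```

The point of the sketch is the shape of the metric: small impairments barely move the score, but once R drops, MOS falls quickly below the ~4.2 threshold mentioned above.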
I performed the multicast testing on a basic controller/compute node OpenStack environment, with neutron handling network traffic. In this blog, I focus my analysis solely on the open-source components of OpenStack, with Cisco products (CSR and N1K) to be discussed in a follow-up blog. The tenant/provider networks are separated using VLANs. A Nexus 3064-X is used as the top-of-rack switch, providing physical connectivity between the compute nodes. The nodes are based on UCS C-Series servers.
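For reference, a VLAN-based ML2 setup like the one described is typically configured along these lines. This is a sketch, not the lab’s actual configuration: the physical network label `physnet1`, the VLAN range, and the bridge name are assumed values you would adapt to your own environment.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative excerpt)
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
# "physnet1" and the VLAN range are example values
network_vlan_ranges = physnet1:100:199

[ovs]
# Map the physical network label to the OVS bridge that uplinks
# to the top-of-rack switch (here, the Nexus 3064-X)
bridge_mappings = physnet1:br-eth1
```

With this mapping in place, each tenant network gets its own VLAN tag on the uplink, which is exactly the path the multicast test traffic has to traverse between compute nodes.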