Over the last few years, we’ve heard a lot about new ways of designing software applications, and in particular about “Microservice Architecture”: designing an application as a set of individually deployable components that together make up the whole. Refer back to this recent blog regarding the impact of microservices and containers on application enablement for the enterprise. There have been many attempts to define the architecture precisely, but the complexity of software platforms and differing viewpoints on the necessary underlying components have so far prevented an agreed definition.
If you are involved in designing, supporting or managing a data center, you will undoubtedly rely on technical support services from one or more vendors. Running a data center always carries the risk of a hardware failure or of being impacted by a software defect. While relatively rare, hardware does occasionally fail. You may have invested in a few extra switches as backup, and you may have failover mechanisms in place. Almost certainly you will have a support contract with your Cisco partner or with Cisco, so you have break/fix expertise on tap when something goes wrong. This is critical support for your business, no debate from me.
Now, arguably the most important resource in your data center is not the individual switches, routers or servers. It’s your engineers, the people who design and support your data center. If they have a problem, where and how do they get help? Who helps them when they are stretched, when business pressures mount? Of course, their colleagues and managers can and will help. Where, however, can they tap into additional sources of expertise so that they can become even more productive for you? This is where Cisco Optimization Services come in – including our award-winning Cisco Network Optimization Service (or “NOS” for short), Collaboration Optimization Service, and the one I’m involved with, Cisco Data Center Optimization Services.
This is my first blog post in the Data Center and Cloud technology area. I recently joined the OpenStack@Cisco team under Lew Tucker as a Cloud Architect, focusing on advanced OpenStack systems research. As part of this role I performed a gap analysis on the functionality (or the lack thereof) of multicast within an OpenStack-based private cloud. Coming from Advanced Services, I have seen multicast serve as a critical component of many data centers, providing group-based access to data (streaming content, video conferencing, etc.). Within a cloud environment this requirement is at least as critical as it is in an enterprise data center, if not more so.
This blog will be the first in a series highlighting the current state of multicast capabilities within OpenStack. Here, I focused the analysis on OpenStack Icehouse running on top of Red Hat Enterprise Linux 7 with Open vSwitch (OVS) and a VLAN-based network environment. I would like to thank the OpenStack Systems Engineering team for their great work laying the foundation for this effort (preliminary tests on Ubuntu and Havana).
I used a virtual traffic generator called TeraVM to generate multicast-based video traffic, allowing for Mean Opinion Score calculation. The Mean Opinion Score, or MOS, is a calculated value expressing the quality of video traffic based on latency, jitter, out-of-order packets and other network statistics. Historically, the MOS value was based on human perception of the quality of voice calls, hence the word opinion. Since then it has developed into an industry-standardized way of measuring the quality of video and audio in networks. It is therefore a good way to objectively measure the performance of multicast on an OpenStack-based cloud. The MOS value ranges from 1 (very poor) to 5 (excellent); anything above ~4.2 is typically acceptable for service-provider-grade video transmission.
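To make the scoring scale concrete, here is a minimal sketch of how one might bucket per-stream MOS readings against the ~4.2 service-provider threshold mentioned above. The function name, thresholds below 4.2, and the sample data are my own illustrations; in practice TeraVM reports the MOS value for each stream directly.

```python
# Minimal sketch: classifying MOS readings against the ~4.2 SP-grade cutoff.
# The helper name and sample data are hypothetical, for illustration only.

SP_GRADE_THRESHOLD = 4.2  # typical cutoff for service-provider video

def classify_mos(mos):
    """Map a MOS value (1 = very poor, 5 = excellent) to a rough verdict."""
    if not 1.0 <= mos <= 5.0:
        raise ValueError("MOS is defined on the range 1-5")
    if mos >= SP_GRADE_THRESHOLD:
        return "acceptable for service-provider-grade video"
    if mos >= 3.5:
        return "watchable, but below SP grade"
    return "poor"

# Example: per-stream MOS samples collected during a multicast test run.
samples = {"stream-239.1.1.1": 4.6, "stream-239.1.1.2": 3.9}
for stream, mos in sorted(samples.items()):
    print("%s: MOS %.1f -> %s" % (stream, mos, classify_mos(mos)))
```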
I performed the multicast testing on a basic controller/compute-node OpenStack environment, with Neutron handling network traffic. In this blog I focus my analysis solely on the open-source components of OpenStack; Cisco products (CSR and N1K) will be discussed in a follow-up blog. The tenant/provider networks are separated using VLANs. A Nexus 3064-X serves as the top-of-rack switch providing physical connectivity between the compute nodes, and the nodes themselves are UCS C-Series servers.
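For readers who want to reproduce a comparable setup, here is a minimal sketch of creating one of the VLAN-backed networks through python-neutronclient, as it worked in the Icehouse timeframe. The credentials, the physical network label ("physnet1"), the VLAN ID, and the CIDR are placeholders, not values from my test bed.

```python
# Minimal sketch: a VLAN-backed network via Neutron's provider extension.
# Auth details, "physnet1", VLAN 100 and the CIDR are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(
    username="admin",
    password="secret",
    tenant_name="admin",
    auth_url="http://controller:5000/v2.0",
)

# The provider attributes pin the network to a specific VLAN segment,
# trunked through the top-of-rack switch between the compute nodes.
network = neutron.create_network({
    "network": {
        "name": "multicast-test-net",
        "provider:network_type": "vlan",
        "provider:physical_network": "physnet1",
        "provider:segmentation_id": 100,
    }
})["network"]

# A subnet so instances attached to the network get addresses via DHCP.
neutron.create_subnet({
    "subnet": {
        "network_id": network["id"],
        "ip_version": 4,
        "cidr": "10.0.100.0/24",
    }
})
```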
On January 13, 2015, Cisco will celebrate a year of industry adoption of Application Centric Infrastructure (ACI), a groundbreaking SDN architecture. The celebration will include a public webcast with ACI customers and ecosystem partners describing a range of new solutions that dramatically simplify data center and cloud deployments. One of those inaugural partners was Red Hat, the leading provider of open source solutions for enterprise IT. Since the ACI launch, Cisco and Red Hat have been working on extending the application policy model at the heart of Application Centric Infrastructure to OpenStack. Here is a preview of the Red Hat solution.
Cloud deployments of new mobile, social, and big data applications need a dynamic infrastructure to support higher demand peaks, more distributed users, varying performance needs, 24×7 global usage, and changing security vulnerabilities. These applications need a mix of virtualized and dedicated “bare-metal” resources to run economically at scale while maintaining performance and availability.
To meet these needs, Cisco, Red Hat and other companies have jointly developed Group Based Policy, a common open policy language that expresses the intent of business and application teams separately from the language of the infrastructure. Group Based Policy offers continuous policy governance as applications are deployed, scaled, recovered and managed for threats. It is ideal for rapidly deploying elastic, secure applications through OpenStack, such as CRM, eCommerce, big data, financial reporting, and corporate e-mail.
IT organizations can get several benefits:
- Dramatically accelerate deployment of business applications and services through OpenStack.
- Maintain enforcement of business and application policies through frequent changes to scale, tenants, and the infrastructure.
- Simplify DevOps release automation, the process of moving application changes to production.
- Preserve user intent and business policies across different infrastructures, making it ideal for hybrid cloud.
- Prevent shadow IT by empowering internal IT to match the agility of the public cloud while complying with corporate controls.
Network administrators gain additional benefits when Group Based Policy is combined with the full capabilities of Cisco Application Centric Infrastructure, including seamless management of heterogeneous infrastructure, policy-based network automation, real-time troubleshooting, and performance optimization.
Group Based Policy (GBP) is implemented through a new APIC Group Based Policy plug-in for OpenStack Neutron, the networking service. Since networking connects all compute and storage endpoints in the data center, it is possible to define groups of endpoints through Neutron that share the same application requirements, regardless of how they are connected (a short sketch after the list below illustrates the model). In addition, GBP:
- Captures dependencies between applications, tiers and infrastructure so that respective teams can evolve underlying capabilities independently.
- Works with multiple SDN controllers and is extensible to multi-hypervisor infrastructures.
- Brings application policy-based provisioning to existing networking plug-ins.
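To illustrate the model, here is a minimal sketch of expressing a two-tier application’s intent (“web endpoints may reach app endpoints on TCP/8080, nothing else”) through the GBP resource model. It assumes the python-gbpclient library, whose interface mirrors python-neutronclient; the resource names follow the GBP API, but exact method names and attributes may vary by release and should be checked against your installation.

```python
# Minimal sketch (assumption: python-gbpclient, mirroring python-neutronclient;
# verify method names against your release). Intent: web tier may reach the
# app tier on TCP/8080; everything else between the groups stays blocked.
from gbpclient.v2_0 import client

gbp = client.Client(
    username="admin",
    password="secret",
    tenant_name="admin",
    auth_url="http://controller:5000/v2.0",
)

# Classifier + action + rule: "allow inbound TCP port 8080".
classifier = gbp.create_policy_classifier({"policy_classifier": {
    "name": "tcp-8080", "protocol": "tcp",
    "port_range": "8080", "direction": "in",
}})["policy_classifier"]

action = gbp.create_policy_action({"policy_action": {
    "name": "allow", "action_type": "allow",
}})["policy_action"]

rule = gbp.create_policy_rule({"policy_rule": {
    "name": "web-to-app",
    "policy_classifier_id": classifier["id"],
    "policy_actions": [action["id"]],
}})["policy_rule"]

ruleset = gbp.create_policy_rule_set({"policy_rule_set": {
    "name": "app-contract", "policy_rules": [rule["id"]],
}})["policy_rule_set"]

# Groups of endpoints: the app tier provides the rule set, the web tier
# consumes it; membership, not wiring, determines what traffic is allowed.
gbp.create_policy_target_group({"policy_target_group": {
    "name": "app-tier",
    "provided_policy_rule_sets": {ruleset["id"]: None},
}})
gbp.create_policy_target_group({"policy_target_group": {
    "name": "web-tier",
    "consumed_policy_rule_sets": {ruleset["id"]: None},
}})
```

Notice that the intent is captured entirely in terms of groups and contracts; nothing above names a VLAN, subnet, or switch port, which is exactly the separation between application language and infrastructure language described earlier.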
Group Based Policy will be available and supported in the upcoming release of Red Hat Enterprise Linux OpenStack Platform 6. Learn more about Group Based Policy here. And register for Cisco’s webcast on January 13th.