The next stable OpenStack release, codenamed “Juno,” is slated for release on October 16, 2014. From improving live upgrades in Nova to enabling easier migration from Nova Network to Neutron, the Juno release will address operational challenges in addition to providing many new features and enhancements across all projects.
As indicated in the latest Stackalytics contributor statistics, Cisco has contributed to seven different OpenStack projects, including Neutron, Cinder, Nova, Horizon, and Ceilometer, as part of the Juno development cycle, up from five projects in the Icehouse release. Cisco also ranks first in the number of completed blueprints in Neutron.
In this blog post, I’ll focus on Neutron contributions, which make up the major share of Cisco’s contributions in Juno.
Cisco OpenStack team-led Neutron community contributions
An important blueprint that Cisco collaborated on and implemented with the community adds Router Advertisement Daemon (radvd) support for IPv6. With this support, multiple IPv6 configuration modes, including SLAAC and DHCPv6 (both stateful and stateless modes), are now possible in Neutron. The implementation runs a radvd process in the router namespace to handle IPv6 automatic address configuration.
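As a configuration sketch, the new modes map to the `ipv6-ra-mode` and `ipv6-address-mode` subnet attributes in the Juno-era neutron CLI. The network name and prefixes below are hypothetical:

```shell
# SLAAC: radvd in the router namespace advertises the prefix;
# instances derive their own addresses from the RA.
neutron subnet-create ipv6-net 2001:db8:1::/64 --name ipv6-slaac \
  --ip-version 6 --ipv6-ra-mode slaac --ipv6-address-mode slaac

# DHCPv6 stateless: address via SLAAC, extra options (e.g. DNS) via DHCPv6.
neutron subnet-create ipv6-net 2001:db8:2::/64 --name ipv6-stateless \
  --ip-version 6 --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless

# DHCPv6 stateful: both the address and options come from DHCPv6.
neutron subnet-create ipv6-net 2001:db8:3::/64 --name ipv6-stateful \
  --ip-version 6 --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful
```

Each combination controls what the router advertises and how instances obtain addresses, so all three IPv6 deployment styles can coexist on one Neutron network.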
To support the distributed routing model introduced by the Distributed Virtual Router (DVR), this Firewall as a Service (FWaaS) blueprint implementation handles firewalling of North-South traffic with DVR. The fix ensures that firewall rules are installed in the appropriate namespaces across the Network and Compute nodes to support a perimeter (North-South) firewall. Firewalling East-West traffic with DVR will be handled in the next development cycle as a Distributed Firewall use case.
Additional capabilities in the ML2 and services framework were contributed to enable better plugin and vendor driver integration, including several blueprint implementations.
Cisco device-specific contributions in Neutron
Cisco added an Application Policy Infrastructure Controller (APIC) ML2 mechanism driver (MD) and a Layer 3 service plugin in the Juno development cycle. The APIC ML2 MD translates Neutron API calls into APIC data-model-specific requests and achieves tenant Layer 2 isolation through End-Point Groups (EPGs).
The APIC MD supports dynamic topology discovery using LLDP, which reduces the configuration burden in Neutron for the APIC MD and ensures that data stays in sync between Neutron and APIC. Additionally, the Layer 3 APIC service plugin enables configuration of internal and external subnet gateways on routers, using Contracts to enable communication between EPGs as well as to provide external connectivity. The APIC ML2 MD and service plugin have also been made available with the OpenStack Icehouse release. An installation and operation guide for the driver and plugin is available here.
An enterprise-class virtual networking solution using the Cisco Nexus 1000V is enabled in OpenStack with its own core plugin. In addition to providing host-based overlays using VXLAN (in both unicast and multicast modes), it provides Network Profile and Policy Profile extensions for virtual machine policy provisioning.
The Nexus 1000V plugin added support for accepting REST API responses in JSON format from the Virtual Supervisor Module (VSM), as well as control over Policy Profile visibility across tenants. More information on its features and how it integrates with OpenStack is provided here.
As an alternative to the default Layer 3 service implementations in Neutron, a Cisco router service plugin is now available that delivers Layer 3 services using the Cisco Cloud Services Router (CSR) 1000v.
The Cisco router service plugin introduces the notion of a “hosting device” that binds a Neutron router to the device implementing the router configuration. This provides the flexibility to add virtual as well as physical devices seamlessly into the framework for configuring services. Additionally, a Layer 3+ “configuration agent” is available upstream that interacts with the service plugin and is responsible for configuring the device for routing and advanced services. The configuration agent is multi-service capable, supports configuration of hardware- or software-based Layer 3 service devices via device drivers, and also provides device health-monitoring statistics.
The VPN as a Service (VPNaaS) driver using the CSR 1000v has been available since the Icehouse release as a proof-of-concept implementation. The Juno release enhances the CSR 1000v VPN driver so that it can be used in a more dynamic, semi-automated manner to establish IPsec site-to-site connections, and it paves the way for a fully integrated and dynamic implementation with the Layer 3 router plugin planned for the Kilo development cycle.
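For reference, establishing a site-to-site connection through the driver follows the standard Juno-era neutron VPNaaS CLI workflow. The router, subnet, peer addresses, and pre-shared key below are hypothetical placeholders:

```shell
# Define IKE and IPsec policies for the tunnel.
neutron vpn-ikepolicy-create ikepolicy1
neutron vpn-ipsecpolicy-create ipsecpolicy1

# Attach a VPN service to an existing router and private subnet.
neutron vpn-service-create --name vpn1 router1 private-subnet

# Create the IPsec site-to-site connection to the remote peer.
neutron ipsec-site-connection-create --name conn1 \
  --vpnservice-id vpn1 --ikepolicy-id ikepolicy1 --ipsecpolicy-id ipsecpolicy1 \
  --peer-address 203.0.113.10 --peer-id 203.0.113.10 \
  --peer-cidr 10.2.0.0/24 --psk secretkey
```

With the CSR 1000v driver, these API calls are rendered into device configuration on the CSR rather than the reference software implementation.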
The OpenStack team at Cisco has led, implemented, and successfully merged upstream numerous blueprints for the Neutron Juno release. Some have been critical for the community, while others enable customers to better integrate Cisco networking solutions with OpenStack Networking.
Stay tuned for more information on other project contributions in Juno and on Cisco-led sessions at the Kilo Summit in Paris!
You can also download OpenStack Cisco Validated Designs, White papers, and more at www.cisco.com/go/openstack
Tags: ACI, APIC, CSR1000v, IPv6, Juno, Neutron, Nexus1000V, OpenStack
Customers gain great value from server virtualization in the form of virtual machines (VMs) and, more recently, Linux containers and Docker in data centers, clouds, and branches. By some IDC estimates, more than 60% of workloads are virtualized, even though fewer than 16% of physical servers run a hypervisor. From a networking perspective, the hypervisor virtual switch on these virtualized servers plays a critical role in all current and future data center, cloud, and branch designs and solutions.
As we count down to the annual VMworld conference and reflect on the introduction of the Cisco Nexus 1000V in vSphere 4.0 six years ago, we can feel proud of what we have achieved. We congratulate VMware for its partnership and for opening vSphere networking to third-party vendors; it has been beneficial for our joint customers and for both companies. VMware and Cisco could be considered visionaries in this sense, and recognizing this success, the industry has followed.
We similarly praise Microsoft for providing an open environment for third-party virtual switches within Hyper-V, which has continued to gain market share. Cisco and Microsoft (along with other industry players) are leading the industry with the recent collaboration on submitting the OpFlex control protocol to the IETF. Microsoft’s intention to support OpFlex in its native Hyper-V virtual switch enables standards-based interaction with virtual switches. Another win for customers and the industry.
In KVM and Xen environments, many organizations have looked at Open vSwitch (OVS) as an open source alternative, whether out of interest in richer networking than the standard Linux bridge provides or in using OVS as a component of SDN-based solutions such as network virtualization. We think there is an appetite for OVS on other hypervisors as well. Cisco is committed to contributing to and improving these open source efforts: we are active contributors to the Open vSwitch project and are diligently working to open-source our OpFlex control protocol implementation for OVS in the OpenDaylight consortium.
To recap the thoughts above, Table 1 provides a quick glance at the virtual networking options available from multiple vendors today:
Table 1: Hypervisors and Choices in Virtual Switches
| Hypervisor | Native vSwitch | 3rd-party or open source vSwitch |
|---|---|---|
| VMware vSphere | vSphere Standard Switch, Distributed Virtual Switch | Cisco Nexus 1000V, Cisco Application Virtual Switch, IBM DVS 5000V, HP Virtual Switch 5900V |
| Microsoft Hyper-V | Native Hyper-V switching | Cisco Nexus 1000V |
| KVM/Xen | Linux Bridge (some distributions include OVS natively) | Cisco Nexus 1000V, OVS (open source project with multiple contributions from different vendors and individuals) |
As an IT Professional, whether you are running workloads on Red Hat KVM, Microsoft Hyper-V or VMware vSphere, it is difficult to imagine not having a choice of virtual networking. For many customers, this choice still means using the hypervisor’s native vSwitch. For others, it is about having an open source alternative, like OVS. And in many other cases, having the option of selecting an Enterprise-grade virtual switch has been key to increasing deployments of virtualization, since it enables consistent policies and network operations between virtual machines and bare metal workloads.
As can be seen in the table above, the Cisco Nexus 1000V continues to be the industry’s only multi-hypervisor virtual switching solution that delivers enterprise-class functionality and features across vSphere, Hyper-V, and KVM. Currently, over 10,000 customers have selected the Cisco Nexus 1000V on vSphere, Hyper-V, or KVM (or a combination of them).
Cisco is fully committed to the Nexus 1000V for vSphere, Hyper-V and KVM and also the Application Virtual Switch (AVS) for Application Centric Infrastructure (ACI), in addition to our open source contributions to OVS. Cisco has a large R&D investment in virtual switching, with a lot of talented engineers dedicated to this area, inclusive of those working on open-source contributions.
The Nexus 1000V 3.0 release for vSphere is slated for general availability in August 2014. This release addresses the scale requirements of our growing customer base and adds an easy installation tool, the Cisco Virtual Switch Update Manager. The Cisco AVS for vSphere will bring the ACI policy framework to virtual servers. With ACI, customers will for the first time benefit from a true end-to-end virtual-plus-physical infrastructure managed holistically to provide visibility and optimal performance for heterogeneous hypervisors and workloads (virtual or physical). These innovations and choices are enabled by the availability of open choices in virtual switching within hypervisors.
As we look forward to VMworld next month, we are excited to continue the collaborative work with platform vendors VMware, Microsoft, Red Hat, and Canonical, and with the open source community, to continue developing openness and choice for our customers. We are fully committed to this vision at Cisco.
Acknowledgement: Juan Lage (@juanlage) contributed to this blog.
Tags: application centric infrastructure, Application Virtual Switch, AVS, Canonical, KVM, Microsoft Hyper-V, Nexus1000V, open source, opendaylight, OpFlex, opflex protocol, OVS, RedHat, VMware vSphere, vmworld, vmworld 2014
Cisco has been the leader in virtual networking since the introduction of the Nexus 1000V virtual switch more than five years ago. Now it is time to make the virtual network more application-aware. With the introduction of the Application Centric Infrastructure (ACI), we are pleased to introduce the Application Virtual Switch (AVS), the virtual network edge of the Cisco ACI-enabled network that includes the Nexus 9000 series of switches.
In the ACI architecture, applications drive networking behavior, not the other way around. Pre-defined application requirements and descriptions (“policy templates”) automate the provisioning of the network (virtual and physical), application services, security policies, tenant subnets, and workload placement. Automating the provisioning of the complete application network reduces IT costs, reduces errors, accelerates deployment, and makes the business more agile.
The Application Virtual Switch is a purpose-built, hypervisor-resident virtual network edge switch designed for the ACI fabric. It provides consistent virtual networking across multiple hypervisors to simplify network operations and provide consistency with the physical infrastructure.
- AVS is robustly integrated into the ACI architecture and supports Application Network Profile (ANP) enforcement at the virtual host layer consistent with the Nexus 9000 series physical switches.
- AVS is managed centrally, along with the rest of the ACI fabric components, through the Application Policy Infrastructure Controller (APIC), and provides advanced telemetry features that allow end-to-end visibility and troubleshooting across both virtual and physical devices.
- AVS enables optimal traffic steering between the virtual and physical layers of the fabric to maximize performance and resource utilization. For example, if the web and app tiers are located on the same host, AVS can route traffic or apply security policies between these end point groups within the hypervisor itself. On the other hand, if the database is a bare metal workload attached to a physical Nexus 9000, the application policy is consistently applied at the Nexus 9000 top-of-rack switch instead.
Application Centric Infrastructure with Application Virtual Switch
ACI eliminates the operational complexity of managing virtualized environments differently from bare metal or legacy environments, providing a consistent operational model across both AVS and the Nexus 9000. ACI also allows flexible placement of application workloads based on application requirements. Watch this short video.
Tags: application centric infrastructure, Application Policy Infrastructure Controller, Application Virtual Switch, AVS, Nexus 9000, Nexus1000V
vPath, an innovative Cisco technology developed within the Cisco Nexus 1000V, has been shipping for more than two years, enabling customers to seamlessly create policy-based multi-tenant/multi-container data centers across multiple hypervisor environments. Increasingly, customers are implementing network services in their virtualization and cloud networks to meet regulatory, security, and service-level requirements. To this end, we are seeing increased deployments of virtual firewalls, load balancing, routing, WAN optimization, and monitoring tools. Cisco’s vPath technology allows customers to deploy these best-in-class network services seamlessly in their data center and cloud deployments. So, what makes vPath so unique in this industry?
#1 -- vPath-powered service chaining at the tenant level: To create a multi-tenancy architecture today, customers have to configure the different network services and manually “stitch” them together for every unique combination. While this method meets the goals of regulatory compliance, security, and service levels, it often increases application provisioning time and does not easily support application mobility. Additionally, most applications have to follow the same manually stitched network services.
With Cisco Nexus 1000V vPath technology, the customer’s data center becomes very agile through policy-based service chaining at the application or tenant level. Customers can create policies and select the L3-7 virtual services appropriate for the application at the time of VM or tenant creation. These policies are then dynamically instantiated and fulfilled in the Nexus 1000V distributed virtual switch. If a particular application VM moves, the Nexus 1000V network policy moves with it, so the service chain remains intact.
Figure 1: Policy-based dynamic service chaining through vPath
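The policy-based chaining described above can be sketched in a few lines of Python. This is a conceptual model, not Cisco code: the policy names and service names are hypothetical, and the point is only that a policy selects an ordered chain that every matching flow traverses.

```python
# Policy profiles: each maps an application tier (or tenant) to an
# ordered chain of L3-7 services, selected at VM/tenant creation time.
POLICIES = {
    "web-tier": ["firewall", "load-balancer"],
    "db-tier": ["firewall", "waf"],
}

def service_chain(policy_name):
    """Return the ordered service chain for a policy (empty if none)."""
    return POLICIES.get(policy_name, [])

def forward(packet, policy_name):
    """Steer a packet through every service in its policy's chain."""
    path = list(service_chain(policy_name))  # vPath redirects to each node in order
    path.append("destination-vm")            # finally delivered to the workload
    return path

print(forward({"src": "10.0.0.5"}, "web-tier"))
```

Because the chain is looked up by policy name rather than by manual stitching, a VM that moves to another host keeps the same policy, and therefore the same traversal order.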
#2 -- vPath enables distributed Cloud Network Services: As shown in the figure above, vPath controls the packet flow through all services chained for a particular policy. Once the first few packets of a flow have been inspected by each service node, vPath can offload that service’s flow decision to the local host, so that subsequent packets of the same flow are inspected locally. This mechanism improves the performance of the service, since subsequent packets of the flow no longer need to be inspected by the individual service node, giving the service distributed behavior.
Figure 2: Distributed Cloud Network Services through vPath Fast Path Offload
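The fast-path offload idea can be illustrated with a small Python sketch. Again, this is a conceptual model under assumed behavior (the threshold of three inspected packets and the verdict names are illustrative), not the vPath implementation:

```python
class FastPath:
    """Toy model of flow offload: early packets go to the service node;
    once a verdict is cached, later packets are handled on the local host."""

    def __init__(self, inspect_first_n=3):
        self.inspect_first_n = inspect_first_n
        self.seen = {}      # flow -> packets sent to the service node so far
        self.verdicts = {}  # flow -> cached ("offloaded") decision

    def handle(self, flow):
        if flow in self.verdicts:
            return "local-" + self.verdicts[flow]   # fast path on the host
        count = self.seen.get(flow, 0) + 1
        self.seen[flow] = count
        verdict = "permit"                          # service node inspects the packet
        if count >= self.inspect_first_n:
            self.verdicts[flow] = verdict           # offload decision to the local host
        return "service-node-" + verdict

fp = FastPath()
flow = ("10.0.0.5", "10.0.1.9", 443)
print([fp.handle(flow) for _ in range(5)])
# The first few packets traverse the service node; the rest stay local.
```

The performance win comes from the cached verdict: once a flow is offloaded, its packets never leave the host to reach the service node.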
#3 -- vPath offers best-in-class Cloud Network Services across multiple hypervisors: vPath enables customers to use best-in-class Cloud Network Services from Cisco, such as Virtual Security Gateway, ASA 1000V, and virtual WAAS, and from best-in-class ecosystem partners, such as the Citrix NetScaler 1000V and the Imperva SecureSphere Web Application Firewall. This vPath-enabled architecture will be supported across all major hypervisors, including VMware vSphere, Microsoft Hyper-V, KVM, and Xen.
#4 -- vPath to become a standards-based Network Service Header: In traditional fashion, Cisco creates innovative solutions to help solve our customers’ IT challenges. Once proven, we offer these technologies, such as VXLAN, to standards bodies to allow greater interoperability and choice. Recently, the vPath header format was submitted to the IETF as a Network Service Header draft. In the future, customers will be able to leverage dynamic policy-based service chaining across both virtual and hardware-based solutions that support the Network Service Header!
To learn more about the Cisco Nexus 1000V and Cloud Network Services, please visit our community site. Create a Cloud Lab account and check out vPath in action today!
Lastly, if you are at VMworld, make a point to attend our sessions PHC6409 and NET6380, or stop by at the Cisco booth.
Tags: Cloud Network Services, data center, Nexus1000V, SDN, service chaining, virtualization, vPath
Today marks an important milestone for one of our most strategic data center products and the foundation of our virtual networking portfolio. Five years ago, the Nexus 1000V virtual switch pioneered the virtual networking market with its launch at VMworld 2008. Since then it has been adopted by over 8,000 customers and continues to grow on other platforms, such as Microsoft Hyper-V and, soon, Linux/KVM. Today, the Nexus 1000V represents the largest software controller-based networking solution (aka Software Defined Networking, or SDN) in the industry.
We continue to add hundreds of paying customers every quarter, in spite of offering a fully featured, no-cost Essential Edition. Interest in the virtual networking space has also continued to increase since the SDN trend started. There is also plenty of FUD and rumor being spread about Cisco’s virtual networking solution. On this fifth anniversary, let’s do some myth-busting focused on Nexus 1000V-based solutions.
Tags: ACI, application centric infrastructure, Cisco DFA, network virtualization, Nexus1000V, NVGRE, OpenStack, SDN, ucs director, VXLAN, VXLAN-VLAN Gateway