
Cisco Nexus 1000V Now Supports VMware vSphere 6.0

Following up on my blog post announcing our intent to support VMware vSphere 6.0 environments with the Cisco Nexus 1000V, I am happy to announce that the supported release is now available.

Starting with release 5.2(1)SV3(1.4), Cisco Nexus 1000V for vSphere supports VMware vSphere 6.0 environments. Customers can download the release from the Nexus 1000V for vSphere portal on Cisco.com.

Cisco Nexus 1000V for vSphere, Hyper-V, and KVM environments continues to be sold and supported by Cisco. If you have an expiring VMware support contract for the Nexus 1000V, please contact your Cisco account team about continuing product support through the Cisco support organization.

Check out the video below for a more in-depth discussion of Cisco Nexus 1000V support across multiple hypervisors, and attend our webinar on April 21, 2015.


Announcing Cisco Nexus 1000V for VMware vSphere 6 Release

The Cisco Nexus 1000V has been supported in the VMware vSphere hypervisor from the 4.0 release (August 2009) through the current release, vSphere 5.5 Update 2. We are happy to announce that the Nexus 1000V will continue to be supported in the recently announced vSphere 6 release. Customers currently running the Nexus 1000V will be able to upgrade to vSphere 6, and new vSphere 6 customers will have the Nexus 1000V among their choices for virtual networking.

Cisco is fully committed to supporting the Nexus 1000V product for our 10,000+ Advanced Edition customers, and the thousands more using the Essential Edition software, in all future releases of VMware vSphere. Cisco has a significant virtual switching R&D investment, with hundreds of engineers dedicated to the Nexus 1000V platform. The Nexus 1000V has been the industry's leading virtual switching platform, with innovations such as VXLAN (the industry's first shipping VXLAN platform) and a distributed zone firewall (via the Virtual Security Gateway, released in January 2011).

The Nexus 1000V also continues to be the industry's only multi-hypervisor virtual switching solution that delivers enterprise-class functionality and features across vSphere, Hyper-V, and KVM.

In the last major release of the Nexus 1000V for vSphere, version 3.1 (August 2014), we added significant scaling and security features, and we have continued to provide subsequent updates (December 2014), with the next release planned for March 2015. The recently released capabilities include:

  • Increased scale per Nexus 1000V:
    • 250 hosts
    • 10,000 virtual ports
    • 1,000 virtual ports per host
    • 6,000 VXLAN segments, with the ability to scale out via BGP
  • Increased security and visibility:
    • Seamless security policy from campus and WAN to data center with Cisco TrustSec tagging/enforcement capabilities
    • Distributed port security for scalable anti-spoofing deployment
    • Enhanced L2 security and loop prevention with BPDU Guard
    • Protection against broadcast storms and attacks with storm control
    • Scalable flow accounting and statistics with Distributed NetFlow
  • Ease of management via the Virtual Switch Update Manager (VSUM), a vSphere web client plug-in

One of the most common questions from our customers is whether VMware is still reselling the Nexus 1000V and supporting it through the VMware support organization.

As of February 2, 2015, VMware no longer offers the Nexus 1000V through VMware sales or sells support for the Nexus 1000V through the VMware support organization. We want to reiterate that this has NO IMPACT on the availability of the Nexus 1000V in vSphere environments or on the associated support from Cisco. Cisco will continue to sell the Nexus 1000V and offer support contracts. Cisco encourages customers who currently use VMware support for the Nexus 1000V to migrate their support contracts to Cisco by contacting their local Cisco sales team, who can aid in this transition.

For questions or help, please reach out to nexus1000vinfo@cisco.com.


Cisco and OpenStack: Juno Release – Part 1

The next stable OpenStack release, codenamed "Juno," is slated for October 16, 2014. From improving live upgrades in Nova to enabling easier migration from Nova Network to Neutron, the Juno release addresses operational challenges in addition to providing many new features and enhancements across all projects.

As the latest Stackalytics contributor statistics indicate, Cisco has contributed to seven different OpenStack projects, including Neutron, Cinder, Nova, Horizon, and Ceilometer, as part of the Juno development cycle. This is up from five projects in the Icehouse release. Cisco also ranks first in the number of completed blueprints in Neutron.

In this blog post, I'll focus on Neutron contributions, which make up the major share of Cisco's contributions in Juno.

[Charts: completed blueprints in Juno, overall and in Neutron]

Neutron Community Contributions Led by the Cisco OpenStack Team

An important blueprint that Cisco collaborated on and implemented with the community adds Router Advertisement Daemon (radvd) support for IPv6. With this support, multiple IPv6 configuration modes, including SLAAC and DHCPv6 (both stateful and stateless modes), are now possible in Neutron. The implementation runs a radvd process in the router namespace to handle IPv6 automatic address configuration.
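For illustration, here is a minimal sketch of how a tenant could request a SLAAC-mode IPv6 subnet through python-neutronclient; the credentials, endpoint, and network ID are placeholders rather than values from this blueprint:

    from neutronclient.v2_0 import client

    # Placeholder credentials and Keystone endpoint for a Juno-era cloud.
    neutron = client.Client(username='admin', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # Request an IPv6 subnet whose addresses are assigned via SLAAC; the
    # L3 agent then runs radvd in the router namespace to send the RAs.
    neutron.create_subnet({'subnet': {
        'network_id': 'NETWORK_UUID',        # placeholder
        'ip_version': 6,
        'cidr': '2001:db8::/64',
        'ipv6_ra_mode': 'slaac',
        'ipv6_address_mode': 'slaac'}})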

To support the distributed routing model introduced by the Distributed Virtual Router (DVR), a Firewall-as-a-Service (FWaaS) blueprint implementation handles firewalling North-South traffic with DVR. The fix ensures that firewall rules are installed in the appropriate namespaces across the network and compute nodes to support a perimeter (North-South) firewall. Firewalling East-West traffic with DVR will be handled in the next development cycle as a distributed firewall use case.
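To make the workflow concrete, the sketch below shows the FWaaS v1 API calls that create a rule, group it into a policy, and instantiate a perimeter firewall; it reuses the neutron client handle from the earlier sketch, and the names and port are illustrative only:

    # 'neutron' is the python-neutronclient handle from the earlier sketch.
    # Allow inbound web traffic at the perimeter.
    rule = neutron.create_firewall_rule({'firewall_rule': {
        'name': 'allow-http', 'protocol': 'tcp',
        'destination_port': '80', 'action': 'allow'}})

    # Group rules into a policy, then apply it as the tenant firewall; the
    # agents install the resulting rules in the DVR router namespaces.
    policy = neutron.create_firewall_policy({'firewall_policy': {
        'name': 'perimeter-policy',
        'firewall_rules': [rule['firewall_rule']['id']]}})
    neutron.create_firewall({'firewall': {
        'name': 'perimeter-fw',
        'firewall_policy_id': policy['firewall_policy']['id']}})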

Additional capabilities in the ML2 and services framework were contributed to enable better integration of plugins and vendor drivers, including several blueprint implementations.

Cisco device-specific contributions in Neutron

Cisco added an Application Policy Infrastructure Controller (APIC) ML2 mechanism driver (MD) and a Layer 3 service plugin in the Juno development cycle. The APIC ML2 MD translates Neutron API calls into APIC data-model-specific requests and achieves tenant Layer 2 isolation through End Point Groups (EPGs).

The APIC MD supports dynamic topology discovery using LLDP, which reduces the configuration burden in Neutron and ensures that data stays in sync between Neutron and APIC. Additionally, the Layer 3 APIC service plugin enables configuration of internal and external subnet gateways on routers, using contracts to enable communication between EPGs as well as to provide external connectivity. The APIC ML2 MD and service plugin have also been made available with the OpenStack Icehouse release. The installation and operation guide for the driver and plugin is available here.
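For readers unfamiliar with ML2, the sketch below shows the general shape of a mechanism driver such as the APIC MD; the class name, helper calls, and controller endpoint are hypothetical and are not taken from the shipped driver:

    from neutron.plugins.ml2 import driver_api as api

    class SketchApicMechanismDriver(api.MechanismDriver):
        """Hypothetical mechanism driver, shown only to illustrate how
        Neutron events could drive a controller such as APIC."""

        def initialize(self):
            # connect_to_apic() stands in for the real session setup
            # against the controller's REST API (placeholder endpoint).
            self.apic = connect_to_apic('https://apic.example.com')

        def create_network_postcommit(self, context):
            # Map the committed Neutron network onto a tenant EPG;
            # ensure_epg() is likewise a placeholder helper.
            net = context.current
            self.apic.ensure_epg(tenant=net['tenant_id'], epg=net['id'])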

An enterprise-class virtual networking solution using the Cisco Nexus 1000V is enabled in OpenStack with its own core plugin. In addition to providing host-based overlays using VXLAN (in both unicast and multicast mode), it provides Network Profile and Policy Profile extensions for virtual machine policy provisioning.

The Nexus 1000V plugin added support for accepting REST API responses in JSON format from the Virtual Supervisor Module (VSM), as well as control over Policy Profile visibility across tenants. More information on the features and how the plugin integrates with OpenStack is available here.
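To illustrate the JSON support, here is a minimal sketch of this kind of REST exchange using the requests library; the VSM address, resource path, and credentials are placeholders rather than the documented VSM API:

    import requests

    # Placeholder VSM address, resource path, and credentials.
    resp = requests.get(
        'https://vsm.example.com/api/n1k/network-segment',
        auth=('admin', 'secret'),
        headers={'Accept': 'application/json'},  # request JSON instead of XML
        verify=False)                            # lab setup; no CA-signed cert
    segments = resp.json()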

As an alternative to the default Layer 3 service implementations in Neutron, a Cisco router service plugin is now available that delivers Layer 3 services using the Cisco Cloud Services Router (CSR) 1000V.

The Cisco router service plugin introduces the notion of a "hosting device" to bind a Neutron router to the device that implements the router configuration. This provides the flexibility to add virtual as well as physical devices seamlessly into the framework for configuring services. Additionally, a Layer 3+ "configuration agent" is available upstream that interacts with the service plugin and is responsible for configuring the device for routing and advanced services. The configuration agent is multi-service capable, supports configuration of hardware- or software-based L3 service devices via device drivers, and also provides device health-monitoring statistics.

The VPN-as-a-Service (VPNaaS) driver using the CSR 1000V has been available since the Icehouse release as a proof-of-concept implementation. The Juno release enhances the CSR 1000V VPN driver so that it can be used in a more dynamic, semi-automated manner to establish IPsec site-to-site connections, and it paves the way for a fully integrated and dynamic implementation with the Layer 3 router plugin planned for the Kilo development cycle.
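For context, establishing such an IPsec site-to-site connection through the standard VPNaaS API looks roughly like the sketch below; it reuses the neutron client handle from the earlier sketches, and the IDs, peer address, and pre-shared key are placeholders:

    # IKE and IPsec policies; unspecified attributes take Neutron defaults.
    ike = neutron.create_ikepolicy({'ikepolicy': {'name': 'ike-policy'}})
    ipsec = neutron.create_ipsecpolicy({'ipsecpolicy': {'name': 'ipsec-policy'}})

    # Bind a VPN service to an existing router and subnet (placeholder IDs).
    svc = neutron.create_vpnservice({'vpnservice': {
        'name': 'site-vpn',
        'router_id': 'ROUTER_UUID',
        'subnet_id': 'SUBNET_UUID'}})

    # The site-to-site connection that the CSR 1000V driver then realizes.
    neutron.create_ipsec_site_connection({'ipsec_site_connection': {
        'vpnservice_id': svc['vpnservice']['id'],
        'ikepolicy_id': ike['ikepolicy']['id'],
        'ipsecpolicy_id': ipsec['ipsecpolicy']['id'],
        'peer_address': '203.0.113.10', 'peer_id': '203.0.113.10',
        'peer_cidrs': ['10.2.0.0/24'], 'psk': 'placeholder-psk'}})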

Summary

The OpenStack team at Cisco has led, implemented, and successfully merged upstream numerous blueprints for the Neutron Juno release. Some have been critical for the community, and others enable customers to better integrate Cisco networking solutions with OpenStack Networking.

Stay tuned for more information on other project contributions in Juno and on Cisco-led sessions at the Kilo Summit in Paris!

You can also download Cisco Validated Designs for OpenStack, white papers, and more at www.cisco.com/go/openstack.


Power of Open Choice in Hypervisor Virtual Switching

Customers gain great value from server virtualization in the form of virtual machines (VMs) and, more recently, Linux containers/Docker in data centers, clouds, and branches. By some estimates (IDC), more than 60% of workloads are virtualized, even though less than 16% of physical servers run a hypervisor. From a networking perspective, the hypervisor virtual switch on these virtualized servers is a critical component in all current and future data center, cloud, and branch designs and solutions.

As we count down to the annual VMworld conference and reflect on the introduction of the Cisco Nexus 1000V in vSphere 4.0 six years ago, we can feel proud of what we have achieved. We congratulate VMware on their partnership and on the success of opening vSphere networking to third-party vendors. It was beneficial for our joint customers and for both companies; VMware and Cisco could be considered visionaries in this sense. Recognizing this success, the industry has followed.

Similarly, we praise Microsoft for providing an open environment for third-party virtual switches within Hyper-V, which has continued to gain market share recently. Cisco and Microsoft (along with other industry players) are leading the industry with their recent collaboration on submitting the OpFlex control protocol to the IETF. Microsoft's intention to enable OpFlex support in the native Hyper-V virtual switch will allow standards-based interaction with virtual switches. Another win for customers and the industry.

In KVM and Xen environments, many organizations have looked at Open vSwitch (OVS) as an open-source alternative. There is interest in richer networking than the standard Linux bridge provides, and in using OVS as a component for implementing SDN-based solutions such as network virtualization. We think there is an appetite for OVS on other hypervisors as well. Cisco is committed to contributing to and improving these open-source efforts. We are active contributors to the Open vSwitch project and are diligently working to open-source our OpFlex control protocol implementation for OVS in the OpenDaylight consortium.

To recap the thoughts above, Table 1 provides a quick glance at the virtual networking options available from multiple vendors today:

Table 1: Hypervisors and Choices in Virtual Switches

  Hypervisor | Native vSwitch                                          | Third-Party or Open-Source vSwitch
  -----------|---------------------------------------------------------|------------------------------------
  vSphere    | Standard vSwitch; Distributed Virtual Switch            | Cisco Nexus 1000V; Cisco Application Virtual Switch; IBM DVS 5000V; HP Virtual Switch 5900V
  Hyper-V    | Native Hyper-V switch                                   | Cisco Nexus 1000V; NEC; Broadcom
  KVM        | Linux Bridge (some distributions include OVS natively)  | Cisco Nexus 1000V; OVS
  Xen        | OVS (an open-source project with contributions from multiple vendors and individuals) | OVS


As an IT professional, whether you are running workloads on Red Hat KVM, Microsoft Hyper-V, or VMware vSphere, it is difficult to imagine not having a choice in virtual networking. For many customers, this choice still means using the hypervisor's native vSwitch. For others, it is about having an open-source alternative, such as OVS. And in many other cases, the option of selecting an enterprise-grade virtual switch has been key to increasing virtualization deployments, since it enables consistent policies and network operations across virtual machine and bare-metal workloads.

As the table above shows, the Cisco Nexus 1000V continues to be the industry's only multi-hypervisor virtual switching solution that delivers enterprise-class functionality and features across vSphere, Hyper-V, and KVM. Over 10,000 customers have selected the Cisco Nexus 1000V on vSphere, Hyper-V, or KVM (or a combination of them).

Cisco is fully committed to the Nexus 1000V for vSphere, Hyper-V, and KVM, and to the Application Virtual Switch (AVS) for Application Centric Infrastructure (ACI), in addition to our open-source contributions to OVS. Cisco has a large R&D investment in virtual switching, with many talented engineers dedicated to this area, including those working on open-source contributions.

The Nexus 1000V 3.0 release for vSphere is slated for general availability in August 2014. This release addresses the scale requirements of our growing customer base and adds an easy installation tool, the Cisco Virtual Switch Update Manager. The Cisco AVS for vSphere will bring the ACI policy framework to virtual servers. With ACI, customers will for the first time benefit from a true end-to-end virtual-plus-physical infrastructure managed holistically to provide visibility and optimal performance for heterogeneous hypervisors and workloads, virtual or physical. These innovations and choices are enabled by the availability of open choices in virtual switching within hypervisors.

As we look forward to VMworld next month, we are excited to continue our collaborative work with platform vendors VMware, Microsoft, Red Hat, and Canonical, and with the open-source community, to maintain and advance openness and choice for our customers. We are fully committed to this vision at Cisco.

Acknowledgement:  Juan Lage (@juanlage) contributed to this blog.


Introducing Cisco Application Virtual Switch – Extending Virtual Networking to Applications

Cisco has been the leader in virtual networking since the introduction of the Nexus 1000V virtual switch more than five years ago. Now it is time to make the virtual network more application-aware. With the introduction of the Application Centric Infrastructure (ACI), we are pleased to introduce the Application Virtual Switch (AVS), the virtual network edge of the Cisco ACI-enabled network that includes the Nexus 9000 Series switches.

In the ACI architecture, applications drive networking behavior, not the other way around. Pre-defined application requirements and descriptions ("policy templates") automate the provisioning of the network (virtual and physical), application services, security policies, tenant subnets, and workload placement. Automating the provisioning of the complete application network reduces IT costs, reduces errors, accelerates deployment, and makes the business more agile.

The Application Virtual Switch is a purpose-built, hypervisor-resident virtual network edge switch designed for the ACI fabric. It provides consistent virtual networking across multiple hypervisors to simplify network operations and provide consistency with the physical infrastructure.

  • AVS is robustly integrated into the ACI architecture and supports Application Network Profile (ANP) enforcement at the virtual host layer, consistent with the Nexus 9000 Series physical switches.
  • AVS is managed centrally, along with the rest of the ACI fabric components, through the Application Policy Infrastructure Controller (APIC), and provides advanced telemetry features that allow end-to-end visibility and troubleshooting across both virtual and physical devices.
  • AVS enables optimal traffic steering between the virtual and physical layers of the fabric to maximize performance and resource utilization. For example, if the web and app tiers are located on the same host, AVS can route traffic or apply security policies between these endpoint groups within the hypervisor itself. On the other hand, if the database is a bare-metal workload attached to a physical Nexus 9000, the application policy is applied consistently at the Nexus 9000 top-of-rack switch instead.
Application Centric Infrastructure with Application Virtual Switch

ACI eliminates the operational complexity of managing virtualized environments differently from bare-metal or legacy environments, providing a consistent operational model across both AVS and the Nexus 9000. ACI also allows flexible placement of application workloads based on application requirements. Watch this short video.
