VNMC 2.0 is a template-driven policy management tool that is now bundled with Cisco Virtual Security Gateway (VSG) and Cisco ASA 1000V Cloud Firewall. This release expands your capabilities for securing your virtual cloud environment. Because VNMC 2.0 is such a step up from prior releases, and fewer people are familiar with its functionality, this post will run a bit longer than usual (but with lots of screen shots).
Let’s take a look at some of the key VNMC features and how it works with the two virtual firewalls:
Resource Objects for ASA 1000V
Cisco VNMC abstracts the devices it manages. As part of provisioning, devices are configured to point to Cisco VNMC for policy management. Cisco VNMC discovers all devices and lists them under the Resources pane. In addition to the ASA 1000V, the Resources pane has other resources such as Cisco VSGs, VSMs, and VMs.
Nothing sits around and gets stale for long at Cisco (outside the break rooms anyway). On the heels of shipping our Nexus 1000V 1.5.2 release earlier this week (which you can download from here), we are ramping up to show the upcoming generation of the virtual switch next week at VMworld in San Francisco. This new major release 2.1 will be going into beta in October, and will represent a quantum leap in ease of deployment and management, as well as greater security for cloud environments.
vCenter Plug-in – Provides a holistic view of the virtual network to the server administrator from within VMware vCenter. A Nexus 1000V dashboard in vCenter shows virtual supervisor module (VSM) and virtual Ethernet module (VEM) details, such as VSM health status, license information, PNIC information, connected VMs, and more.
Support for Cisco TrustSec – Extends Cisco TrustSec security solutions for network-based segmentation of users and physical workloads to virtual workloads, leveraging Security Group Tags (SGT) for defining security segments. Data center segmentation and consistent security policy enforcement can now be implemented across physical and virtual workloads.
Cross Data Center High Availability – Supports splitting Active and Standby Nexus 1000V Virtual Supervisor Modules (VSMs) across two data centers to implement cross-DC clusters and VM mobility while ensuring high availability. In addition, VSMs in the data center can support VEMs at remote branch offices.
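To make the SGT idea concrete: instead of keying firewall rules on IP addresses, policy is expressed as decisions over (source tag, destination tag) pairs, so the same rules follow a workload wherever it runs. Here is a minimal conceptual sketch in Python; the tag names and the default-deny behavior are my own illustrative assumptions, not Cisco's actual policy format:

```python
# Toy SGT policy matrix: (source tag, destination tag) -> action.
# Tags classify workloads by role, so the same rule applies whether
# the workload is physical, virtual, or freshly migrated to a new host.
policy = {
    ("web", "app"): "permit",
    ("app", "db"): "permit",
    ("web", "db"): "deny",
}

def enforce(src_tag: str, dst_tag: str) -> str:
    # Anything not explicitly permitted falls through to deny.
    return policy.get((src_tag, dst_tag), "deny")
```

Because enforcement keys on tags rather than addresses, the matrix stays valid when a tagged VM moves between hosts or data centers, which is the consistent-policy story the feature describes.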
Today Cisco released a new version of its Nexus 1000V virtual switch. The Nexus 1000V 1.5.2 release is available for a 60-day free trial from here. As most of you know from my blog posts over the last year, the Nexus 1000V is the edge switch for virtual environments: it resides in the hypervisor, bringing the network edge right up to the virtual machine and connecting virtual ports to the physical network and beyond. The Nexus 1000V is the foundation for our entire virtual network overlay portfolio, including all of our virtual L4-7 application and security services, our cloud orchestration software, VXLANs and more.
The new release supports the latest version of VMware’s vSphere hypervisor, and includes vPath 2.0 with service chaining between virtual services. I wrote a blog post a couple of weeks ago about the importance of vPath in inserting virtual services into data center networks, and now we also have a great new white paper available on vPath service insertion technology. The most important enhancement in vPath 2.0 is that you can now insert multiple services in the path between the source and destination addresses in your virtual network.
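The service-chaining idea itself is easy to model: traffic is steered through an ordered list of services, and any service in the chain can drop the flow. A rough conceptual sketch in Python follows; the packet fields and the service names are invented for illustration and say nothing about how vPath is actually implemented:

```python
from typing import Callable, Dict, List, Optional

Packet = Dict[str, object]                       # toy packet representation
Service = Callable[[Packet], Optional[Packet]]   # returns None if dropped

def apply_chain(packet: Packet, chain: List[Service]) -> Optional[Packet]:
    # Steer the packet through each service in order, stopping early
    # if any service in the chain drops it.
    for service in chain:
        result = service(packet)
        if result is None:
            return None
        packet = result
    return packet

def edge_firewall(pkt: Packet) -> Optional[Packet]:
    # Hypothetical first service: only web traffic gets through.
    return pkt if pkt.get("port") in (80, 443) else None

def zone_gateway(pkt: Packet) -> Optional[Packet]:
    # Hypothetical second service: mark the packet as inspected.
    pkt["inspected"] = True
    return pkt
```

With vPath 2.0's multi-service insertion, something like an ASA 1000V and a VSG could both sit on the same path, each applying its own policy in order, e.g. `apply_chain(pkt, [edge_firewall, zone_gateway])`.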
The lack of programmability in existing networking hardware is certainly a problem, but VMware’s acquisition of Nicira does not mean that Cisco and its ilk will be marginalized… It does mean the role and management of the physical network is changing, and I think Cisco is further ahead than most of its competitors in creating a vision for the next phase of networking.
My take here was that the VMware-Nicira acquisition did not portend a strategic break with Cisco, and while there are some obvious overlaps in our product lines, there are still a number of areas of collaboration, cooperation and interoperability. The virtual network infrastructure is just one piece of a larger software stack and the differentiation will likely be decided in the orchestration, management and applications built on top of the newly programmable infrastructures sometime down the road.
Continuing on our theme of virtual network overlays and programmable networks, today we’ll look at how to increase workload mobility over more data center and cloud resources. If server virtualization increases resource utilization and reduces costs, and data center consolidation is a good thing, then it follows that the larger the resource pool that your virtual workloads can migrate over, the more cost effective your IT operation can be. And if your mobility diameter spans multiple sites, you can obviously improve your fault tolerance as well. We call this increasing your mobility diameter, and we’ll complement what we’ve already learned about VXLAN and virtual overlays with some new technologies to seamlessly scale your diameter up. (Sounds like some sort of bizarre reverse Weight Watchers program, doesn’t it?).
As we noted in our VXLAN overview, VXLANs enable private virtual overlays across layer 3 boundaries via their MAC-in-UDP encapsulation and the clever way they filter MAC address broadcasts to only the right subnets. However, when you are doing full-on application migration across a layer 3 boundary, VXLAN alone isn’t going to do it. To extend virtual workload mobility beyond layer 2 boundaries, Cisco came up with Overlay Transport Virtualization (OTV), which works in conjunction with VXLAN to extend application mobility to any point the VXLAN virtual overlay can reach. And not surprisingly, the media wizards over at TechWise TV have a great video that takes all the complexity of OTV and makes it cartoonishly simple.
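For readers curious what "MAC-in-UDP" looks like on the wire: VXLAN wraps the original Ethernet frame in outer IP and UDP headers plus an 8-byte VXLAN header that carries the 24-bit segment ID (the VNI), as specified in RFC 7348. A minimal sketch of just that header in Python:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Byte 0 is the flags field with the I bit (0x08) set, marking the
    VNI as valid; the 24-bit VNI occupies bytes 4-6; every other bit
    is reserved and must be zero.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!B3xI", 0x08, vni << 8)

def vxlan_vni(header: bytes) -> int:
    """Extract the 24-bit VNI from a VXLAN header."""
    return int.from_bytes(header[4:7], "big")
```

That 24-bit VNI is what gives VXLAN roughly 16 million possible segments, versus the 4096 IDs available to traditional VLANs, and it is what lets the overlay keep each tenant's broadcasts confined to the right segment.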