Cisco Application Virtual Switch (AVS), a virtual member of Cisco’s Application Centric Infrastructure (ACI) family, has seen increasing interest from customers who want to enforce application-centric policies all the way to the virtual edge of the data center.
Cisco AVS is a derivative of the Nexus 1000V, the market-leading third-party virtual switch. The Nexus 1000V has accumulated more than 10,000 customers and has been deployed everywhere from large-scale service providers to large enterprises. Recently, Cisco announced Nexus 1000V support for the vSphere 6.0 release, and VMware has announced that it will continue to support the Cisco Nexus 1000V in vSphere 6.0 and later releases.
VMware has supported and resold the Nexus 1000V since we jointly launched the product. Recently (February 2, 2015), VMware announced that it will stop reselling the product and providing post-sales support for the Nexus 1000V. Cisco will continue to sell and support the Nexus 1000V for all customers.
Cisco AVS uses exactly the same vSphere APIs that the Nexus 1000V uses. Cisco AVS has been supported by Cisco since the launch of ACI and will continue to be supported by Cisco. Currently, ACI with Cisco AVS is supported on the vSphere 5.1 and 5.5 releases. With the latest release, 5.2(1)SV3(1.5), Cisco AVS supports the data center micro-segmentation delivered by ACI. We plan to release vSphere 6.0 support for ACI with Cisco AVS in the second half of CY 2015. Cisco is committed to delivering on the strong customer interest in Cisco AVS and has multiple successful production deployments of Cisco AVS with ACI across our customer install base.
Customers who use either the Cisco Nexus 1000V or Cisco AVS are assured that Cisco will continue to innovate on and support these products through the Cisco support channel.
To ease adoption, the Cisco AVS product and its support are included as part of the Cisco ACI product and support agreement. No additional services or products need to be purchased.
Tags: #CiscoACI, ACI, application vi, AVS, Nexus 1000v, VMware vSphere, vsphere 6
It’s finally here: the new Data Center and Cloud community framework has launched! We created new content spaces for Compute and Storage, Software Defined Networks, Data Center and Networking, and OpenStack and OpenSource Software.
Cisco Data Center and Cloud Community Infrastructure
Tags: ACI, CiscoUCS, cloud, compute, intercloud fabric, Invicta, MDS, Nexus 1000v, OpenSource, OpenStack, software defined networks, Storage, Unified Computing Systems
Over the last 12 months I’ve been doing a lot of work involving the Cisco Nexus 1000v, and during this time I came to realise that there wasn’t a huge amount of recent information about it available online.
Because of this, I’m going to put together a short post covering what the 1000v is, along with a few points around its deployment.
What is the Nexus 1000v?
The blurb on the VMware website defines the 1000v as “…a software switch implementation that provides an extensible architectural platform for virtual machines and cloud networking,” while the Cisco website says, “This switch: Extends the network edge to the hypervisor and virtual machines, is built to scale for cloud networks, forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software Defined Networking (SDN).”
So that’s all fine and good, but what does this mean for us? Well, the 1000v is a software-only switch that sits inside the ESXi (or KVM or Hyper-V, if they’re your poison) hypervisor and leverages VMware’s built-in Distributed vSwitch functionality.
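To make that concrete: on the 1000v, connectivity policy is defined on the Virtual Supervisor Module as port profiles, which surface in vCenter as port groups that VMs attach to. A minimal sketch (the profile name and VLAN number here are hypothetical):

```
! Define a port profile on the Nexus 1000V VSM.
! "vmware port-group" publishes it to vCenter as a port group.
port-profile type vethernet Web-Servers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Once the profile is enabled, the server team simply picks the corresponding port group when provisioning a VM, and the network policy follows the VM wherever it moves.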
Tags: #ciscochampion, Cisco Nexus, Nexus 1000v
In the past, we have pointed out that configuring network services and security policies into an application network has traditionally been the most complex, tedious, and time-consuming aspect of deploying new applications. For a data center or cloud provider to stand up applications in minutes rather than days, a clear obstacle must be overcome: easily configuring the right service nodes (e.g., a load balancer or firewall), with the right application and security policies to support the specific workload requirements, independent of location in the network.
Let’s say, for example, you have a world-beating, best-in-class firewall positioned in some rack of your data center. You also have two workloads, running on other servers a few hops away, that need to be separated according to security policies implemented on this firewall. The network and security teams have traditionally had a few challenges to address:
- If traffic from workload1 to workload2 needs to go through a firewall, how do you route traffic properly, considering the workloads themselves have no visibility into the specifics of the firewalls they need to work with? Traffic steering of this nature can be implemented in the network through VLANs and policy-based routing techniques, but this does not scale to hundreds or thousands of applications, is tedious to manage, limits workload mobility, and makes the whole infrastructure more error-prone and brittle.
- The physical location of the firewall or network service largely determines the topology of the network and has historically restricted where workloads could be placed. But modern data center and cloud networks need to provide the required services and policies independent of where the workloads are placed: on this rack or that, on-premises or in the cloud.
Whereas physical firewalls might be incorporated into an application network through VLAN stitching, a number of other protocols and techniques generally have to be used to include other network services in an application deployment, such as Source NAT for application delivery controllers or WCCP for WAN optimization. The complexity of configuring services for even a single application deployment thus increases measurably.
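The policy-based routing approach described above can be sketched in IOS-style configuration. This is a simplified illustration, not a recommended design; the interface, subnets, and next-hop address are hypothetical:

```
! Match traffic from workload1's subnet destined to workload2's subnet
access-list 101 permit ip 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255

! Route map that forces matching traffic to the firewall's
! inside interface instead of the normal routed path
route-map STEER-TO-FW permit 10
  match ip address 101
  set ip next-hop 10.9.9.1

! Apply the policy on the ingress interface facing workload1
interface Vlan10
  ip policy route-map STEER-TO-FW
```

Note that every new workload pair needs its own ACL entries and possibly new route-map sequences on every possible ingress point, which is exactly why this approach becomes brittle at the scale of hundreds or thousands of applications.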
Tags: ACI, ietf, Network Services Header, Nexus 1000v, NSH, SDN, vPath
A Guest Blog by Partner Rick Heiges of Scalability Experts: Rick is a SQL Server Microsoft MVP and Senior Solutions Architect. He primarily works with Enterprise customers on their Data Platform strategies. Rick is also very involved in the SQL Server Community primarily through PASS and events such as the PASS Summit, SQL Saturdays, and 24 Hours of PASS. His tenure on the PASS Board of Directors saw the annual Summit triple in size from 2003 to 2011. You can find his blog at www.sqlblog.com.
So far, it has been another great week here at the PASS Summit 2014, SQL Server’s largest annual user and partner conference. With yesterday’s keynote address, there is still very much a focus on getting to the cloud and on new investments in cloud technology in general. Microsoft seems to be extending its data collection and storage technologies both in the cloud and on-premises. One of the coolest features discussed was the concept of “stretch tables,” where a table that lives on your on-premises SQL Server can be “stretched” onto tables in Azure SQL Database. The data may be split so that the “hot” data stays local and the “cold” data lives in the cloud. There were also some great demos of using the Kinect device to create a heat map of customer activity in a physical store (similar to tracking what people linger on and search for when shopping online). You can watch the PASS Summit 2014 keynote here on PASStv.
As a Senior Solutions Architect with Scalability Experts, I work with large enterprise customers (Fortune 500 type) on a regular basis. There is more and more interest in leveraging the Public Cloud for some workloads and in taking advantage of “on-prem” resources in a cloud-like way. This means deploying your internal resources the same way public cloud resources are deployed – for example via Cisco’s Microsoft Fast Track certified FlexPod or VSPEX integrated infrastructure solutions – with a similar chargeback (or “showback”) model, automated self-service deployment of infrastructure, and monitoring of the entire stack.
One of the things that I really like about Microsoft’s products is the focus on ease of use, tight integration, and low TCO. This is important to a lot of the customers that I interact with, and it is why I have seen a surge in Cisco UCS products across my customer base over the past few years. Cisco has a similar goal of keeping things simple and TCO low – read this Total Economic Impact report from Forrester on UCS ROI/TCO. Cisco also provides management pack plug-ins for Microsoft’s System Center suite for tight integration, so that you can manage the entire stack (hardware, hypervisor, application, and even Public Cloud) with a single tool. It is great to see how this partnership between Microsoft and Cisco can benefit the customers that I work with.
Microsoft’s SQL Server 2014 also brings In-Memory technology to OLTP in a cost-effective manner by not forcing a complete rewrite of the application. In a recent Cisco UCS on Microsoft SQL Server 2014 case study, Progressive Insurance was able to take advantage of this technology to further its competitive advantage: ease of use.
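To illustrate why no application rewrite is required: in SQL Server 2014, In-Memory OLTP is enabled largely through table DDL. A minimal sketch, assuming the database already has a memory-optimized filegroup (the table, columns, and bucket count here are hypothetical):

```sql
-- Memory-optimized table: rows are held in memory, and with
-- DURABILITY = SCHEMA_AND_DATA changes are still logged so the
-- data survives a restart.
CREATE TABLE dbo.OrderQuotes (
    QuoteID    INT   NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerID INT   NOT NULL,
    Premium    MONEY NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Existing T-SQL queries against such a table continue to work largely unchanged, which is the point: the hottest tables can be migrated selectively rather than rewriting the whole application.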
Eventually, I see the Public Cloud taking on a more “primary” role. Similar to the “everything on a VM unless there is a reason not to” mantra, I see an “everything on a Public Cloud VM unless there is a reason not to” mantra on the long-term horizon. Until then, the Hybrid Cloud will be the default stance for many large enterprises.
Tags: Big Data, Cisco, Cisco UCS, FlexPod, Microsoft, Microsoft Hyper-V, Microsoft SQL Server, Microsoft SQL Server2014, Nexus 1000v, PASS Summit 2014, SQL PASS, vspex