Over the last 12 months I’ve been doing a lot of work that has involved the Cisco Nexus 1000v, and during this time I came to realise that there wasn’t a huge amount of recent information available online about it.
Because of this, I’m going to put together a short post covering what the 1000v is and a few points around its deployment.
What is the Nexus 1000v?
The blurb on the VMware website defines the 1000v as “…a software switch implementation that provides an extensible architectural platform for virtual machines and cloud networking”, and the Cisco website says, “This switch: extends the network edge to the hypervisor and virtual machines; is built to scale for cloud networks; forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software Defined Networking (SDN).”
So that’s all fine and good, but what does this mean for us? Well, the 1000v is a software-only switch that sits inside the ESXi hypervisor (or KVM or Hyper-V, if they’re your poison) and leverages VMware’s built-in Distributed vSwitch functionality.
Tags: #ciscochampion, Cisco Nexus, Nexus 1000v
In the past, we have pointed out that configuring network services and security policies into an application network has traditionally been the most complex, tedious and time-consuming aspect of deploying new applications. For a data center or cloud provider to stand up applications in minutes rather than days, a clear obstacle has to be overcome: easily configuring the right service nodes (e.g. a load balancer or firewall), with the right application and security policies, to support the specific workload requirements, independent of location in the network.
Let’s say, for example, you have a world-beating best-in-class firewall positioned in some rack of your data center. You also have two workloads that need to be separated according to security policies implemented on this firewall on other servers a few hops away. The network and security teams have traditionally had a few challenges to address:
- If traffic from workload1 to workload2 needs to go through a firewall, how do you route traffic properly, considering the workloads don’t themselves have visibility into the specifics of the firewalls they need to work with? Traffic routing of this nature can be implemented in the network through the use of VLANs and policy-based routing techniques, but this is not scalable to hundreds or thousands of applications, is tedious to manage, limits workload mobility, and makes the whole infrastructure more error-prone and brittle.
- The physical location of the firewall or network service largely determines the topology of the network, and has historically restricted where workloads could be placed. But modern data center and cloud networks need to be able to provide the required services and policies independent of where the workloads are placed: on this rack or that, on-premises or in the cloud.
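To make the first point concrete, the classic (and brittle) network-side approach looks something like this policy-based-routing sketch, in IOS-style syntax; all addresses, names, and interface numbers here are placeholders, not taken from any real deployment:

```
! Steer traffic from workload1 (10.1.1.10) toward workload2 (10.2.2.20)
! through the firewall's inside interface at 10.9.9.1.
access-list 101 permit ip host 10.1.1.10 host 10.2.2.20
!
route-map TO-FIREWALL permit 10
 match ip address 101
 set ip next-hop 10.9.9.1
!
interface Vlan100
 ip policy route-map TO-FIREWALL
```

Multiply this by every workload pair and every service node, and the scaling and mobility problems just described become obvious.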
Physical firewalls might be incorporated into an application network through VLAN stitching, but a number of other protocols and techniques generally have to be used to include other network services in an application deployment, such as Source NAT for application delivery controllers or WCCP for WAN optimization. The complexity of configuring services for a single application deployment thus increases measurably.
Tags: ACI, ietf, Network Services Header, Nexus 1000v, NSH, SDN, vPath
A Guest Blog by Partner Rick Heiges of Scalability Experts: Rick is a SQL Server Microsoft MVP and Senior Solutions Architect. He primarily works with Enterprise customers on their Data Platform strategies. Rick is also very involved in the SQL Server Community primarily through PASS and events such as the PASS Summit, SQL Saturdays, and 24 Hours of PASS. His tenure on the PASS Board of Directors saw the annual Summit triple in size from 2003 to 2011. You can find his blog at www.sqlblog.com.
So far, it has been another great week here at the PASS Summit 2014, SQL Server’s largest annual user and partner conference. With yesterday’s keynote address, there is still very much a focus on getting to the cloud and on new investments in cloud technology in general. Microsoft seems to be extending its data collection and storage technologies both in the cloud and on-prem. One of the coolest features talked about was the concept of “stretch tables,” where a table that lives on your on-prem SQL Server can be “stretched” onto tables in SQL Azure Databases. The data can be split so that the “hot” data stays local and the “cold” data lives in the cloud. There were some other great demos around using the Kinect device to create a heat map of customer activity in a physical store (similar to tracking what people linger on and search for when shopping online). You can watch the PASS Summit 2014 Keynote here on PASStv.
As a Senior Solutions Architect with Scalability Experts, I work with large enterprise customers (Fortune 500 type) on a regular basis. There is more and more interest in leveraging the Public Cloud for some workloads and in taking advantage of “on-prem” resources in a cloud-like way. This means deploying your internal resources in the same way that public cloud resources are deployed (for example via Cisco’s Microsoft Fast Track certified FlexPod or VSPEX integrated infrastructure solutions), with a similar chargeback (or ‘show back’) model, automated self-service deployment of infrastructure, and monitoring of the entire stack.
One of the things that I really like about Microsoft’s products is a focus on ease of use, tight integration, and low TCO. This is important to a lot of the customers that I interact with. It is also why I have seen a surge in Cisco UCS products in my customer base over the past few years. Cisco has a similar goal to keep things simple and TCO low – read this Total Economic Impact report from Forrester on UCS ROI/TCO. Cisco also provides Management Pack plug-ins to Microsoft’s System Center suite for tight integration, so that you can manage the entire stack (Hardware, Hypervisor, Application, and even Public Cloud) with a single tool. It is great to see how this partnership between Microsoft and Cisco can be beneficial to the customers that I work with.
Microsoft’s SQL Server 2014 also brings “In-Memory” technology to OLTP in a cost-effective manner by not forcing a complete rewrite of the application. In a recent Cisco UCS on Microsoft SQL Server 2014 case study, Progressive Insurance was able to take advantage of this technology to further the strategy behind its competitive advantage: ease of use.
Eventually, I see the Public Cloud taking on a more “primary” role in the future. Similar to the “Everything on a VM unless there is a reason not to” mantra, I see an “Everything on a Public Cloud VM unless there is a reason not to” mantra on the long-term horizon. Until then, the Hybrid Cloud will be the default stance for many large enterprises.
Tags: Big Data, Cisco, Cisco UCS, FlexPod, Microsoft, Microsoft Hyper-V, Microsoft SQL Server, Microsoft SQL Server2014, Nexus 1000v, PASS Summit 2014, SQL PASS, vspex
A Guest Blog by Cisco’s Frank Cicalese: Frank is a Technical Solutions Architect with Cisco, assisting customers with their design of SQL Server solutions on Cisco Unified Compute System. Before joining Cisco, Frank worked at Microsoft Corporation for 10 years, excelling in several positions, including as Database TSP. Frank has in-depth technical knowledge and proficiency with database design, optimization, replication, and clustering and has extensive virtualization, identity and access management and application development skills. He has established himself as an architect who can tie core infrastructure, collaboration, and application development platform solutions together in a way that drives understanding and business value for the companies he services.
Ah yes, it’s that time of year again. It’s time for PASS Summit! I hope all of you are having a great event thus far. During my conversations with customers and peers, I am inevitably asked “Why should we implement SQL on UCS?” In this blog I cover this very common question. First off, for those of you not familiar with Cisco UCS, please visit here when you have a moment to learn more about this great server architecture. So, why would anyone want to consider running their SQL workloads on Cisco UCS? Read on to learn about what I consider to be the top reasons to do so…
High availability is one of the most important factors for companies when considering an architecture for their database implementations. UCS gives companies confidence that their database implementations will be able to recover from a catastrophic datacenter event in minutes, as opposed to the hours, if not days, it would take on a competing architecture. UCS Manager achieves this through its implementation of Service Profiles. A Service Profile contains the identity of a server: the UCS servers themselves are stateless and do not acquire their personality (state) until they are associated with a Service Profile. This stateless architecture allows server hardware to be re-purposed dynamically and can be used to re-introduce failed hardware back into production within five to seven minutes.
Service Profiles can provide considerable relief for SQL Server administrators when re-introducing failed servers back into production. Service Profiles make this a snap! Just un-associate the Service Profile from the downed server, associate it with a spare server, and the workload will be back up and running in five to seven minutes. This is true for both virtualized and bare-metal workloads! Yes, you read that correctly: regardless of whether the workload is virtual or bare-metal, Cisco UCS can move it from one server to another in five to seven minutes (provided the workloads are truly stateless, i.e., booting from SAN).
Since every server in UCS that is serving a workload requires that a Service Profile be associated with it, Cisco UCS Manager provides the ability to create Service Profile Templates which ease the administrative effort involved with the creation of Service Profiles. Server administrators can configure Service Profile Templates specifically for their SQL Servers and foster consistent standardization of their SQL Server implementations throughout the enterprise via these templates. Once the templates are created, Service Profiles can be created from these templates and associated to a server in seconds. Furthermore, these operations can be scripted via Cisco’s Open XML API and/or PowerShell integration (discussed next) simplifying the deployment process even more.
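As a rough illustration of that scripting, the XML API can be driven with nothing more than curl. This is a hedged sketch, not a production script: the UCS Manager address, credentials, and template/org names are placeholders, while aaaLogin and lsInstantiateNTemplate are the standard API methods for logging in and instantiating Service Profiles from a template.

```shell
UCSM="https://ucsm.example.com/nuova"   # placeholder UCS Manager XML API endpoint

# Log in and capture the session cookie from the aaaLogin response.
COOKIE=$(curl -sk -d '<aaaLogin inName="admin" inPassword="password"/>' "$UCSM" \
  | sed -n 's/.*outCookie="\([^"]*\)".*/\1/p')

# Instantiate one Service Profile from a template (names are hypothetical).
curl -sk -d "<lsInstantiateNTemplate cookie=\"$COOKIE\" \
  dn=\"org-root/ls-SQL-Template\" inTargetOrg=\"org-root\" \
  inServerNamePrefixOrEmpty=\"sql-node\" inNumberOf=\"1\" \
  inHierarchical=\"false\"/>" "$UCSM"
```

The same two calls can equally be made from PowerShell via PowerTool cmdlets; the point is that Service Profile creation is a small, scriptable operation rather than a manual procedure.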
To learn more about Service Profile Templates and Service Profiles, please visit here.
Manage Workloads Efficiently:
Cisco UCS has very tight integration with Microsoft System Center. Via Cisco’s Operations Manager Pack, Orchestrator Integration Pack, PowerShell PowerTool and Cisco’s extensions to Microsoft’s Hyper-V switch, administrators are able to monitor, manage and maintain their SQL Server implementations proactively and efficiently on Cisco UCS. Additionally, Cisco’s PowerTool for PowerShell, with its many cmdlets, can help automate any phase of management with System Center, further streamlining the overall management and administration of Cisco UCS. All of this integration comes from Cisco at no extra cost!
Please visit http://communities.cisco.com/ucs to learn more about, download and evaluate Cisco’s Operations Manager Pack, Orchestrator Integration Pack and PowerShell PowerTool.
Tags: business intelligence, Cisco, database, Microsoft, Microsoft SQL Server, Microsoft SQL Server2014, Microsoft Windows Server 2012, Nexus 1000v, OLTP, UCS
The second revolution in server virtualization is here. Virtual machines were the first revolution, giving users the ability to run multiple workloads on a single server through a hypervisor. Now the next wave is here: Linux containers have recently started to gain momentum, with many enterprise customers asking me whether they should consider them and whether Cisco offers Docker support in its enterprise-grade virtual networking products.
I approached my engineers to see whether our recently introduced Nexus 1000v for the Linux KVM hypervisor, which already has 10,000+ customers across various hypervisors, could support Linux containers, or more specifically the popular Linux container technology, Docker.
One of the key advantages of the Nexus 1000V today is that it allows easy management of policies for all of the virtual machines. For example, with a single command or REST API call, a security policy can be deployed or altered across all virtual interfaces connected to a Virtual Extensible LAN (VXLAN). My reasoning was that we should be able to extend that support to Linux containers and Docker.
So I approached Tim Kuik (@tjkuik) and Dave Thompson (@davetho610) and much to my delight, they not only said Nexus 1000V can do it but also showed how to do it so that customers can take advantage of this today in their deployments.
I have included Tim and Dave’s write-up on how to attach Docker containers to the Nexus 1000v and assign policies below, so that you can try this in your setup. Happy reading.
How to use Docker with the Nexus 1000V for KVM hypervisor:
Begin by installing the Nexus 1000v to manage one or more Ubuntu servers. The Nexus 1000v is managed by a Virtual Supervisor Module (VSM); once the package is installed on your servers, they appear on the VSM as Virtual Ethernet Modules (VEMs). Below we can see our VSM is managing a server named Bilbo:
We’ve also pre-configured our server with a port-channel capable of carrying VLANs 100-109, using an Ethernet port-profile to conveniently manage the uplinks for all of our servers:
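Such an uplink profile looks roughly like this on the VSM; the profile name is illustrative, but the statements are standard Nexus 1000v Ethernet port-profile syntax:

```
port-profile type ethernet uplink-all
  switchport mode trunk
  switchport trunk allowed vlan 100-109
  channel-group auto mode on
  no shutdown
  state enabled
```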
A key concept of the Nexus 1000v is that of a Port-profile. The Port-profile allows for a shared set of port attributes to be cohesively managed in a single policy definition. This policy can include an ACL definition, Netflow specification, VLAN or VXLAN designation, and/or other common port configuration attributes. We can, of course, create multiple Port-profiles. Perhaps we would have one per level of service or tenant. The Port-profile provides the mechanism to collect and manage the set of containers that share the same policy definition.
Below we create a Port-profile that could be used by any number of containers on any number of servers.
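For example, a minimal vethernet Port-profile placing attached interfaces on VLAN 100 (the name vlan100 matches what we use later in this walkthrough):

```
port-profile type vethernet vlan100
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```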
Install docker on your server. [https://docs.docker.com/installation/ubuntulinux/]
The purpose of the container is to run our application. Let’s create one for this example which can handle ssh sessions. Here is an example Dockerfile which does that:
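A minimal sshd Dockerfile along those lines, patterned on the Docker documentation’s sshd example from that era; the base image tag and the root password are placeholders:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:changeme' | chpasswd
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
```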
At this point, via Docker, you can build an image specified by this Dockerfile.
All the pieces are now in place. The Nexus 1000v is running. We have a policy definition that will assign the interfaces to vlan 100 (port-profile vlan100). Docker is installed on our server. We have created a useful container image. Let’s create an actual container from this image:
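A sketch of the build and run steps; the image and container names are hypothetical, and --networking=false matches the Docker flag of the time (newer releases use --net=none instead):

```shell
# Build the image from the Dockerfile in the current directory,
# then start a container with no networking (loopback only).
docker build -t sshd-example .
docker run -d --name sshd1 --networking=false sshd-example
```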
The container instance started at this point is running with just a loopback interface since we used the argument “--networking=false”. We can now add an eth0 interface to this container and set it up to be handled by the Nexus 1000v on the host.
Setup a few env variables we will use as part of the procedure. Find the PID of the running container and generate UUIDs to be used as identifiers for the port and container on the Nexus 1000v:
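A sketch of this step, assuming the container name sshd1 from our example; docker inspect yields the PID of the container's init process, and uuidgen supplies the identifiers the Nexus 1000v expects for the port and the container:

```shell
# PID of the running container (container name is our example's).
pid=$(docker inspect --format '{{.State.Pid}}' sshd1)

# UUIDs identifying the port and the container on the Nexus 1000v.
port_uuid=$(uuidgen)
vm_uuid=$(uuidgen)

echo "pid=$pid port_uuid=$port_uuid vm_uuid=$vm_uuid"
```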
In this example the following PID and UUIDs were set:
Create a linux veth pair and assign one end to the Nexus 1000v. We will use the port-profile defined on the VSM named “vlan100” which will configure the port to be an access port on VLAN 100:
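A hedged sketch of this step: the interface names (taken from the veth18924_eth0 naming used later in this post), the bridge name n1kv, and the container name sshd1 are assumptions, and $port_uuid/$vm_uuid are the UUIDs generated earlier:

```shell
# Create the veth pair and bring up the host-side end.
ip link add veth18924_eth0 type veth peer name veth18924_eth1
ip link set veth18924_eth0 up

# MAC of the end that will be moved into the container.
mac=$(ip link show veth18924_eth1 | awk '/link\/ether/ {print $2}')

# Attach the host end to the Nexus 1000v, passing the parameters as keys
# in the external_ids column of the Interface table.
ovs-vsctl add-port n1kv veth18924_eth0 \
  -- set interface veth18924_eth0 \
     external_ids:iface-id="$port_uuid" \
     external_ids:attached-mac="$mac" \
     external_ids:profile=vlan100 \
     external_ids:vm-uuid="$vm_uuid" \
     external_ids:vm-name=sshd1
```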
When an interface is added to the Nexus 1000v, parameters are specified for that interface by adding keys to the external_ids column of the Interface table. In the example above the following keys are defined:
- iface-id: Specifies the UUID for the interface being added. The Nexus 1000v requires a UUID for each interface added to the switch so we generate one for this.
- attached-mac: Provides the MAC of the connected interface. We get this from the ‘ip link show’ command for the interface to be added to the container.
- profile: Provides the name of the port-profile which the Nexus 1000v should use to configure policies on the interface.
- vm-uuid: Specifies the UUID for the entity which owns the interface being added. So in this case that’s the container instance. Since Docker doesn’t create a linux type UUID for the container instance, we generate one for this as well.
- vm-name: Specifies the name of the entity which owns the interface. In this case it’s the container name.
Move the other end of the linux veth pair to the container’s namespace, rename it as eth0, and give it a static IP address of 172.22.64.201 (of course DHCP could be used instead to assign the address):
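A sketch of this step under the same assumptions as above ($pid is the container PID found earlier; the interface names match our example; the /24 mask is an assumption):

```shell
# Expose the container's network namespace to iproute2.
mkdir -p /var/run/netns
ln -sf /proc/$pid/ns/net /var/run/netns/$pid

# Move the peer end into the container, rename it, and configure it.
ip link set veth18924_eth1 netns $pid
ip netns exec $pid ip link set veth18924_eth1 name eth0
ip netns exec $pid ip addr add 172.22.64.201/24 dev eth0
ip netns exec $pid ip link set eth0 up
```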
On the Nexus 1000v’s VSM you will see logs like this indicating the interface has been added as switch port vethernet1 on our virtual switch:
The following VSM commands show that switch port veth1 is up on VLAN 100 and is connected to host interface veth18924_eth0 on host bilbo:
On the host bilbo we can use vemcmd to get information on the port status:
That’s it. We now have a useful Docker container with an interface on the Nexus 1000v using our selected policy. Using another server (and/or container) that is on the same vlan, we can ssh into this container using the IP address we assigned:
When shutting down the Docker container, remove the port before removing the container:
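Under the same assumed names as the rest of this sketch, the teardown would look something like:

```shell
# Detach the port from the Nexus 1000v first, then remove the container.
ovs-vsctl del-port veth18924_eth0
docker stop sshd1
docker rm sshd1
```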
Tags: docker, Nexus 1000v