Here is a great technical session presented by Guy Brunsdon of VMware. It starts with a good foundation on networking in a virtual machine environment, followed by a discussion of the virtual distributed switch and the Cisco Nexus 1000V. For more on VM networking, check out Guy’s blog over at VMware.
Doug and I were having an interesting conversation the other day, which I thought was worth sharing. In 1965 Gordon Moore postulated in a paper that transistor density would double approximately every two years. We’ve heard people question why networking does not follow Moore’s Law, presuming that it is behind the curve. It is easy for those without domain expertise in any particular technology or IT area to force-fit Moore’s Law as a catch-all measuring stick for technology evolution. So, let’s take a look at the evolution of networking contrasted with the predictable transistor densities of Moore’s Law.

We have to pick a starting point, so we’ll start with 1994; it’s fifteen years ago and gives us enough iterations of Moore’s Law to see whether there is a noticeable trend. In 1994 Cisco started shipping the Catalyst 5000 series of modular LAN switches; it had a 1.2 Gb/s backplane based on a shared bus and had modules supporting 12-port 100 Mb Ethernet and 24-port 10 Mb Ethernet. We will baseline all assumptions on a 1994 starting point with a 1.2 Gb/s backplane, double the performance every two years on the Moore’s Law row, and track the historical performance of Cisco’s networking products on the Cisco Switching row.
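The doubling exercise described above is easy to sketch numerically. The function and the sample years below are illustrative; only the 1994 start and the 1.2 Gb/s Catalyst 5000 baseline come from the post.

```python
def moores_law_projection(base_gbps, base_year, target_year, doubling_years=2):
    """Project bandwidth assuming a doubling every `doubling_years` years."""
    doublings = (target_year - base_year) / doubling_years
    return base_gbps * 2 ** doublings

# Starting from the Catalyst 5000's 1.2 Gb/s backplane in 1994:
for year in (1994, 2000, 2008):
    print(year, round(moores_law_projection(1.2, 1994, year), 1), "Gb/s")
```

By 2008 (seven doublings), a strict Moore's Law curve would put the backplane at roughly 153.6 Gb/s; comparing that projection against actual Cisco switching products is exactly the exercise the post sets up.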
As I have noted, virtualization introduces a couple of distinct challenges that directly impact its ultimate scalability as a solution. While virtualization offers a number of compelling benefits, it does introduce increased operational complexity. Because it is simple to add a new virtual machine (VM) and because we can move VMs across physical infrastructure, we see an uptick in the change requests that the network and storage teams must address to support this new flexibility. Additionally, because we see more use of automated live migration of VMs (e.g., VMware DRS), we need the ability to automate the provisioning of the underlying infrastructure in a much more dynamic way. To us, it was simple: the management framework of the Cisco UCS should simplify and automate server provisioning and the network connectivity that the server requires. This allows companies to accelerate their virtualization plans. However, there was also one other key goal–we did not want to create any data center islands, so the Cisco UCS should easily integrate into the existing environment — no forklifts.
So, to get a bit further into the management of the UCS in a heterogeneous environment, I went to the source and did a quick interview with Brian Schwarz, who is a Product Manager focused on UCS Manager in the Server Access and Virtualization BU. Brian came to Cisco via the Nuova acquisition, and prior to that he worked at Symantec/Veritas.
Omar: So, how does Cisco UCS integrate into a customer’s existing environment? There seems to be a perception that UCS will be an island in the data center.

Brian: UCS Manager runs below the operating system. There is a whole ecosystem of tools that run at the operating system level and above, and they use host agents and traditional forms of host interaction–those will all work unmodified on top of the UCS. When we are done associating a service profile with a server and its attached networks, the OS essentially gets a bare metal server exactly like it sees it today. There should be no need to change any of those products in order to work with UCS servers. Having said that, that’s the base level qualification–the UCS and OS level tools won’t break each other. There is still value in ISVs exploiting the UCS XML API. They can deliver new capabilities to our mutual customers that have not been possible with non-unified architectures. This is what BMC has done with their BladeLogic offering, but integration is not a requirement in order for their software to work.
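As a rough illustration of the kind of integration Brian describes, here is a minimal Python sketch that builds request documents for the UCS XML API. The aaaLogin and configResolveClass method names and the computeBlade class come from Cisco’s published API; the helper functions and credentials are hypothetical, and a real client would still need to POST these documents to the UCS Manager endpoint over HTTPS and parse the responses.

```python
import xml.etree.ElementTree as ET

def build_login_request(username, password):
    # aaaLogin is the UCS Manager XML API authentication call; the
    # response carries a session cookie used on all subsequent requests.
    el = ET.Element("aaaLogin", inName=username, inPassword=password)
    return ET.tostring(el, encoding="unicode")

def build_class_query(cookie, class_id):
    # configResolveClass fetches every managed object of a given class,
    # e.g. classId="computeBlade" to enumerate the blade servers.
    el = ET.Element("configResolveClass", cookie=cookie,
                    classId=class_id, inHierarchical="false")
    return ET.tostring(el, encoding="unicode")

login_xml = build_login_request("admin", "password")   # hypothetical creds
query_xml = build_class_query("session-cookie", "computeBlade")
```

The point of Brian’s comment is that none of this is required just to run an OS on UCS; the XML API is an additional surface that tools like BladeLogic can exploit for deeper automation.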
In this installment, our fearless data center architect Doug Alger discusses how Cisco IT handles equipment that needs side-to-side airflow. As for why we need side-to-side airflow in the first place, check out this recent interview with Doug Gourlay. On a related note, here is some useful, practical info from Cisco’s Energy Efficient Data Center (EEDC) program.