There has been quite a lot of buzz recently about Cisco’s Unified Computing strategy and what it might mean in terms of partnerships and competition. The latest (and well-written) report on the subject comes from Nick Lippis, and its title (Are Cisco, HP and IBM on Data Center Collision Course?) says it all.
So, here we are again with another “Wishful Thinking” post because, yet again, someone would rather hurry to TGI Fridays for their appletini fix than take the time to do some due diligence. Or perhaps they can’t come up with anything useful to say about their own products and decided to make stuff up about ours.

Anyway, let’s tackle some of the rumors, whispers, and innuendo our customers are hearing around the Cisco Nexus 1000V. The most popular assertions seem to run along these lines:
- The 1000V only works with Nexus switches, so you have to upgrade your network
- The 1000V requires you to use that new-fangled FCoE
- The 1000V requires you to replace your server’s network adaptors
- The 1000V can only be configured while wearing special slippers (OK, I made this one up, but it’s about as valid as the other things I have heard)
So let’s be clear: if your infrastructure can run ESX+vSwitch, it can run ESX+1000V without changes. The switch will run over GbE or 10GbE. The switch will run with your existing network interface (assuming it is on the ESX HCL--and if it’s not, you have larger problems). The Nexus 1000V will happily work with whatever upstream switch you have in place now--Catalyst, Nexus, whatever. Finally, the Nexus 1000V does not require FCoE, and, of course, your choice of footwear is your own. I’d say the Nexus 1000V is one of the most agnostic products we have ever shipped.

In fact, I’ll take this a step further on the issue of openness and compatibility. The Nexus 1000V will concurrently work across multiple server vendors and multiple form factors (blade, rack, multi-RU), which is something that not everyone can say. In short, if your infrastructure supports the next version of VMware ESX, it will support the Nexus 1000V.

The second assertion is that there really isn’t any problem for the Nexus 1000V to solve, which is kinda funny, since we created the switch in partnership with VMware. In reality, I think anyone who has any kind of sizable VM deployment, or aspirations for one, sees the immediate need for the Nexus 1000V, both in terms of the VN-Link features it offers (VM-level config and troubleshooting, policy portability, etc.) and the streamlining of operations and coordination between server and network teams…or at least that has been my consistent experience talking to dozens of customers over the last year or so.

I’d love to be able to wrap up with some firm details on availability, pricing, etc., but, alas, we are not quite there yet. We are chugging along through the beta and we are currently on target to hit our goal of the first half of this year. Stay tuned for details.
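For those wondering what that policy portability actually looks like, here is a rough sketch of a Nexus 1000V port profile--the mechanism by which a network team defines policy once and has it follow the VM wherever it moves. The profile name, VLAN, and ACL name below are hypothetical, and since the product is still in beta, the exact syntax may differ in the shipping release:

```
! Defined once by the network team on the Nexus 1000V;
! the server team simply picks "WebServers" as the port group in VMware.
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  ip access-group web-acl in    ! security policy travels with the VM on VMotion
  no shutdown
  state enabled
```

The point of the sketch: the policy lives in one place, and VMotion does not require anyone to touch switch-port configuration by hand.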
Michael Morris, one of the first Cisco Certified Design Experts, and trusty engineer Kamal Vyas are blogging about their experience setting up a Cisco Nexus-based Data Center. They are going through an extensive proof-of-concept in Cisco’s CPOC lab right now and covering it in near-real time on the Cisco Subnet blog over at Network World. Their topology includes:

- The Cisco Nexus 7000 in the Aggregation
- The Cisco Nexus 5000 for 10GbE Access
- The Cisco Nexus 2000 for 1GbE Access
- Catalyst 6500 for Network Services
- Cisco ASR 1000 for WAN Edge

They then went and tested the Virtual Port-Channel (vPC) code to take Spanning Tree out of the business of dictating the topology--simplifying the network, reducing convergence times, and enabling, if necessary, a larger and flatter topology to support virtualization more effectively.

Michael--what is working well? Also, where can we improve, and what would make this easier for you and Kamal?

dg
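For anyone following along at home, the basic shape of a vPC setup on a pair of Nexus 7000s looks roughly like this; the domain ID, addresses, and port-channel numbers are placeholders, and details may vary by NX-OS release:

```
! On each Nexus 7000 aggregation switch
feature vpc
vpc domain 10
  peer-keepalive destination 10.1.1.2   ! out-of-band heartbeat to the vPC peer

interface port-channel 20
  vpc peer-link                          ! inter-switch link carrying vPC traffic

interface port-channel 30
  vpc 30                                 ! the downstream access switch sees both
                                         ! 7000s as one logical port-channel peer
```

Because the downstream switch sees a single logical port channel, both uplinks forward traffic and Spanning Tree no longer blocks half the bandwidth.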
What’s in a name? Who judges a book by its cover? OK, I do. I often buy based on the cover, not the content, because I don’t know the content till I read the book. So the cover helps a bit.

“Data Center Networking” is a wonderfully descriptive and accurate name for this blog. But let’s face it--it’s a tad dull. Is there a better name? It certainly needs to encompass the fact that we are mainly discussing Data Centers, Virtualization, and Cloud Computing on this blog. Yes, it’s a bit techier than some of our other ones; I can’t help that, nor do I want to--but I am not naming it in hex or binary (sorry, Dino!). Certainly we love networks, so some linkage or homage to that strong legacy would be nice. Any thoughts? If someone comes up with a good one and we use it, I’ll think of something nice to do for them! (Where did I hide those last few Data Center fleeces…)

dg
I have been posting on this blog for a couple of years now as a member of the Data Center Marketing team. However, I’ve been silent for several months as I transitioned into a new customer-facing role selling to some of our largest customers in Northern California. Stepping outside marketing into the real world, if you will, has given me a unique perspective on our customers’ data center problem set.

I recently attended a data center briefing with a large healthcare provider that is in the process of consolidating and virtualizing its data center assets. After a few hours of presentations that covered our strategies around Virtualization, Unified Fabric, and Unified Computing, I quickly realized that the lightbulb going off in their heads was not about how cool this technology was or how much money they could save, but how much time they would get back in their lives by implementing our Data Center 3.0 products and solutions.

That by enabling hitless ISSU on the Nexus 7000, they could perform software upgrades without dropping a single packet.
This, combined with the built-in out-of-band Connectivity Management Processor (CMP), meant they could perform required maintenance anytime, from anywhere they happened to be.

That by implementing a Unified Fabric with the Nexus 5000, which converges their LAN and SAN architectures, they could reduce the number of switches, adaptors, and cables they have to manage and configure on a day-to-day basis by at least 50 percent.

That by using the Nexus 2000 Fabric Extender, they could consolidate the dozens of top-of-rack switches they currently manage down to a single switch with a single image and a single configuration, while enjoying the performance benefits of an end-of-row architecture.

That by installing the Nexus 1000V in each of their virtualized servers, they could save the time of manually configuring security policy on switch ports every time the server team decided to VMotion a VM from one physical server to another. That the VN-Link architecture automated the process and made sure the proper network configuration followed the VM wherever it happened to wander, night or day.

I started my career in IT as a data center manager, and I remember many sleepless nights and late-hour pages while on call. I’m actually quite jealous of some of the technology available today that allows an IT engineer to live a somewhat normal 9-to-5 life. As my father frequently told me when I was growing up, “You don’t know how good you have it today, son.” Don’t I feel like saying that sometimes to the young IT folks I meet on a daily basis.

Maybe I should go visit my old friends in Data Center Marketing and tell them that their messaging is all wrong. Data Center 3.0 is not about how to enable a CIO to get the most out of his or her data center assets, but how it will get the most out of their overworked IT staff.