Here are a few questions we regularly see about the Nexus 1010.
Why do I need a Nexus 1010?
There are many advantages to running a dedicated piece of hardware for your 1000v implementations. A few that jump off the page are:
- Dedicated hardware, including lights-out and out-of-band management.
- Offloading of up to four VSMs (Virtual Supervisor Modules) and one NAM (Network Analysis Module) for increased scalability. Each VSM can manage up to 64 Virtual Ethernet Modules (VEMs).
- HA at both the software and hardware layers, incorporating virtual-service-blade, chassis-level, and network-level redundancy.
Is it more difficult to set up a Nexus 1010 and deploy the 1000v?
Actually, the Nexus 1010 requires fewer steps for a 1000v deployment. The whole process is basically four steps:
- Create the Virtual Supervisor Module (VSM) as a Virtual Service Blade (VSB) on the 1010
- Register the extension certificate as a plug-in in vCenter
- Define your Ethernet and vEthernet port profiles
- Install VEM using vCenter
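On the 1010 itself, the first of those steps looks roughly like the following. This is a minimal sketch only: the blade name, ISO filename, and VLAN IDs are placeholders, and the exact syntax varies by release, so check the configuration guide for your version.

```
nexus1010(config)# virtual-service-blade VSM-1
nexus1010(config-vsb-config)# virtual-service-blade-type new nexus-1000v.4.2.1.iso
nexus1010(config-vsb-config)# interface control vlan 100
nexus1010(config-vsb-config)# interface packet vlan 101
nexus1010(config-vsb-config)# enable
```

The enable step then walks you through the VSM’s initial setup (hostname, admin password, management IP) much as a standalone switch would.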
What’s the best way to access the CLI?
The Nexus 1010 is positioned just like any other switch and has the same access methods available. The typical options are Telnet or a terminal server. The 1010 runs NX-OS.
What is the best configuration for the control, packet, management and data information?
We get this one a lot. There are currently four published deployment configurations.
- Nexus 1010 uplink type 1: Ports 1 and 2 carry Management, Control and Data traffic. This option is best for simple, 1000v-only deployments.
- Nexus 1010 uplink type 2: Ports 1 and 2 carry Management and Control; Ports 3-6 carry Data traffic. This option is recommended when a NAM deployment is being considered.
- Nexus 1010 uplink type 3: Ports 1 and 2 carry Management; Ports 3-6 carry Control and Data traffic. Best for multiple VSM implementations requiring maximum bandwidth.
- Nexus 1010 uplink type 4: Ports 1 and 2 carry Management; Ports 3 and 4 carry Control; Ports 5 and 6 carry Data traffic. Best for implementations that require traffic separation.
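Whichever option you choose maps to a single setting on the appliance. As a sketch (prompts and hostname are illustrative; the new uplink type takes effect after a reload):

```
nexus1010# configure terminal
nexus1010(config)# network-uplink type 3
nexus1010(config)# exit
nexus1010# copy running-config startup-config
nexus1010# reload
```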
How hard is it to upgrade the Nexus 1010?
The process for upgrading the 1010 is not very different from that of a traditional switch. There are some additional steps to consider in HA implementations. Check the Nexus 1010 install and configuration guide for step-by-step procedures.
For many months now, we’ve talked about the Journey to Cloud Computing and how an evolution within your Data Center is needed to make that a reality. In many cases, we looked at this from an application perspective, focused on the interaction between automation, applications, servers, storage and the edges of the network.
But many of you have asked us to provide you a broader understanding of the role the Network plays in the Journey to Cloud Computing. Specifically you’ve asked us to highlight several areas:
- What is Cisco’s perspective and strategy around the usage of multiple types of Cloud Computing (Private, Public, Hybrid, Community) and what is needed from the network to interconnect all these offerings?
- How does my business manage the network transitions needed between today’s applications (often client-server), the virtualization of those applications, and next-generation web and big data applications?
- What considerations do we need to make within our Data Centers as we try to maximize efficiency and scalability?
- What considerations do we need to make at the edges of our networks when the proliferation of devices is almost out of control?
- Are there ways to protect my network investments while still having the flexibility to deal with the business uncertainties that are around the next corner?
Tags: Big Data, Borderless Networks, Cloud Computing, Data Center Fabric, FabricPath, Layer 2, Layer 3, nexus, UCS, virtualization, Web Applications
One of the things I admire about Cisco marketing, and I think generates a lot of respect for us from our customers, is how we approach competitive marketing. Most importantly, we hardly ever do it. Sure, we arm our sales teams with specific comparison data, but it’s rare we feel the need to compare ourselves publicly or to bash competitors. When you bash a competitor, it really only serves to give them credibility, and highlights that they must be doing something important to occupy your mindshare, or that of your customers. Occasionally though, we are faced with not only having to take the gloves off a little more, but responding to the inevitable FUD that gets thrown our way.
This brings us to a blog post written by HP about Cisco’s Virtual Security Gateway (VSG), which unfortunately contains a number of inaccuracies and misrepresentations of our product that we have to clear up.
Let’s start with this example:
Cisco has a product called the Virtual Security Gateway (VSG) for the Nexus 1000V Series. It is a virtual firewall that lets you enforce policy and segmentation in virtual environments. All associated security profiles are configured to include trust-zone definitions and access control lists (ACLs) or rules. They also support VM mobility when properly configured. If there’s one thing the company is good at, it is those good-old ACLs developed back in the early 90s!
The strength of VSG’s firewall capabilities is its awareness of the virtual machine environment, and specifically the ability to write firewall rules based on the attributes of the virtual machine, attributes such as the NAME of the VM. This gives tremendous power to establish policies in virtual environments, such as logically isolating tenants running on the same machine, or separating VMs based on operating system or application type in virtual desktop environments, a use case I wrote about earlier. To imply that VSG is enforcing good-old ACLs from the ’90s is disingenuous at best.
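To make the attribute-based idea concrete, here is a rough sketch of what a VSG-style zone and rule can look like. The zone, policy, and rule names are invented, and the attribute syntax shown is approximate rather than exact for any particular release:

```
vsg(config)# zone web-servers
vsg(config-zone)# condition 1 vm.name contains web
vsg(config)# policy front-end-policy
vsg(config-policy)# rule allow-http
vsg(config-policy-rule)# condition 1 dst.zone.name eq web-servers
vsg(config-policy-rule)# condition 2 dst.net.port eq 80
vsg(config-policy-rule)# action permit
```

Because the match is on a VM attribute rather than an IP address, a policy written this way can follow the VM as it moves, which is precisely what a ’90s-era ACL cannot do.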
Tags: ASA, data center security, Nexus 1000v, Virtual Security Gateway, vsg
Well, it’s been about a week since we wrapped up Cisco Live. Looking back, it was a great week of insights, dialogue with customers, and strong interaction with our partner ecosystem. This year’s Cisco Live was special for VXI, as it marked the major “coming out” of our end-to-end VXI system at this venue, with lots of proof points established in the nine months since we launched at Cisco’s Collaboration Summit.
I’m especially excited about the traction our joint solution with VMware has gained, as demonstrated by customers embracing VXI and desktop virtualization. I had the opportunity to join James Lomonaco (Senior Manager, Alliance Marketing, VMware) in an interview discussing the journey thus far with our joint solution for desktop virtualization – you can check it out here. In the coming weeks you’re going to hear more about the joint innovation we share, and some great examples of why the Cisco-VMware solution is a truly interlocked, integrated offering, combining the most innovative and widely adopted hypervisor platform (VMware vSphere) with the most rapidly growing x86 blade system (Cisco UCS). This workspace-optimized infrastructure, integrated with VMware View, delivers on the promise of the highest-fidelity, user-centric computing platform for customers on their journey to IT as a service.
In an upcoming post, I’ll share with you some real examples of joint innovation in the Cisco-VMware solution that make this platform unlike any other when it comes to supporting desktop virtualization. As you know, VMworld 2011 is only weeks away, and we’ll have a great suite of content for you to take in, as you make your way back to Vegas, so stay tuned…
Tags: UCS, vdi, VMware, VMware View, vxi
While at Cisco Live I had the pleasure of meeting several people who were curious about Multihop FCoE but had the unfortunate experience of getting too much misinformation from several sources (yes, including some of Cisco’s competition, but even some partners!). Some had already seen my article on FCoE and TRILL and wanted to know if I could help explain the relationship between FCoE and QCN (Quantized Congestion Notification), one of the documents in the IEEE DCB standard revision.
Even though we have a very good, short white paper on the subject, this is one of those topics where, as soon as people ask about it, we break out the whiteboard, or, in the case of being at Cisco Live, the napkins. There are just some things that pictures explain better.
Because of this, I’m going to try something different with this blog. It may work, or I may fall flat on my face; I suppose we shall find out.
Tags: FCoE, MDS 9500, Nexus 5500, Nexus 7000, QCN