Cisco Blogs



Cisco Nexus 1000V: LASIK surgery for the network admin

I finally took a leap of faith and had LASIK surgery done recently, and without a doubt it’s been a life-changing decision.  The daily hassle of glasses and contacts is gone, and my vision is now 20/15…it’s like going from regular TV to HiDef!  Of course these benefits came with a cost, requiring investments both financial and mental.  The financial cost was easy enough thanks to no-interest payments; the mental cost, however, required a careful weighing of risk vs. reward and a bit of blind faith (no pun intended).  In the end, trust in the technology and the doctor, plus the belief that I could find my happy place for 15 minutes to endure the procedure, was enough to take the leap.  Looking back, it was one of my better life decisions.

Shortly after my procedure I was on site at a customer who was implementing a Vblock, and Cisco was engaged for UCS optimization services to follow up the install.  For those new to integrated infrastructure solutions, a Vblock is a pre-integrated and tested infrastructure stack with various components across compute, network, and storage.  My favorite component, hands down, is the Cisco Nexus 1000V.  This product replaces the VMware vSwitch functionality with a feature-rich Cisco switch powered by NX-OS, which this particular customer had no knowledge of.  Well, I’m a huge fan of the product, and I knew they would be too once they came to understand its use cases and capabilities.  I gave their network and server admins a four-hour overview covering everything from architecture to troubleshooting.  The light bulbs went on, and they were exchanging smiles about 10 minutes into the presentation when I started talking about the non-disruptive operational model and VN-Link concepts.  One of the network admins interrupted me and said, “Are you telling me I can get clear vision to the VM level without the hassle of dealing with these guys?” as he pointed at the closest server admin.  I immediately thought of my new eyes and chuckled at the thought that server admins apparently were as annoying as glasses or contacts to deal with on a daily basis.
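For readers who haven’t seen NX-OS on the 1000V, that operational model centers on port profiles: the network admin defines a profile once, and it surfaces in vCenter as a port group the server admin simply consumes. A minimal sketch (the profile name and VLAN number here are hypothetical):

```
port-profile type vethernet WebServers
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled
```

Once the profile is enabled, VM connectivity policy lives with the network team, and changes to it don’t require a ticket to the server team, which is exactly the “clear vision to the VM level” that admin was smiling about.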



Reflections on the Cloud Expo in Silicon Valley and How Do I Know My Apps are Working in the Cloud?

Cloud Expo was a very interesting juxtaposition of people espousing the value of cloud and how their stuff is really cloudy.  You have a group of presenters and expo floor booths talking about their open API and how that is the future of cloud.  Then you have the other camp that tells us how their special mix of functions is so much better than that.  All of this is a very interesting dialog.  APIs are important: if your technology supports a cloud operating model, then you must have an API.  Solutions like Cisco’s Intelligent Automation for Cloud rely on those APIs to orchestrate cloud services.  But APIs are not the be-all and end-all.  The reality is that while the cloud discussions tend to center on the API and the model behind it, the real change enabling the move toward cloud is the operating model of the users who are leveraging the cloud for a completely fresh game plan for their businesses.

James Urquhart’s recent blog (http://gigaom.com/cloud/what-cloud-boils-down-to-for-the-enterprise-2/) highlights that the real change for users of the cloud is modifying how they do development, test, capacity management, production operations, and disaster recovery.  My last blog talked about the world before cloud management and automation, and the move from the old-world model to the new models of dev/test or dev/ops that force application architects, developers, and QA folks to radically alter their approach.  Those that adopt the cloud without changing their “software factory” from a model Henry Ford would recognize may not get the value they are looking for out of the cloud.

At Cloud Expo I saw a lot of very interesting software packages.  Some went really deep into a specific use-case area, while others covered a lot of functional use cases that were only about an inch deep.  As product teams build out software packages for commercial use, they face a critical decision point that will drive the value proposition of the product.  It seems to me that within two years, just about all entrants in the cloud management and automation marathon will converge on a simple, focused, yet broad set of use cases.  Each competitor will either drive their product to that point directly, or be forced there by customers voting with their wallets.  Interestingly enough, this whole process drives competition and will yield great value for the VPs of Operations and Applications at companies moving their applications to the cloud.



Hadoop and the Network

Big Data’s move into the enterprise has generated a lot of buzz around three questions: why big data, what are the components, and how do they integrate?  The “why” was covered in a two-part blog (Part 1 | Part 2) by Sean McKeown last week.  To help answer the remaining questions, I presented Hadoop Network and Architecture Considerations last week at the sold-out Hadoop World event in New York.  The goal was to examine what it takes to integrate Hadoop into enterprise architectures by demystifying what happens on the network and identifying key network characteristics that affect Hadoop clusters.


The presentation includes results from an in-depth testing effort to examine what Hadoop means to the network.  We went through many rounds of testing that spanned several months (special thanks to Cloudera for their guidance).


Intelligent Automation for Cloud Value Calculator

The customers I talk to know that deploying a private or hybrid cloud will both save them money on IT operations and make them more agile in responding to the business.  There is a low-grade euphoria over the cloud opportunity that gets the conversation going.  The conversation drives development of both our solution and our customers’ sophistication in thinking about how and why they will use Cisco Intelligent Automation for Cloud (CIAC).

However, finance guys and IT management don’t get that feel-good feeling over the opportunity or even the coolness of the technology in the absence of dollar numbers to motivate them.

Nor should they.

We are in a part of high-tech that does not do technology for technology’s sake.  We do it because it makes business sense.



Another trend affecting data centers – “Convergence”

Interesting trends are taking root around us, and one of them is convergence. The term conjures up different thoughts depending on our background and experiences. Economists may say convergence is the parity of per-capita income around the world. For telecom, convergence is the combination of voice, data, and entertainment services. So what does it mean for data centers? In one of my recent informal webcast polls of technologists, one opinion was that convergence implied the union of telecom and IT. The reality is that data centers are now the hub and source for voice, video, data, and application services.

So if we look at application workloads running in data centers, there are four infrastructure capacity variables: CPU, memory, storage, and network. One approach is to optimize the utilization of one of these variables. If we decide to optimize on storage, then it must be virtualized and/or provided as a service. Implementation would involve purchasing best-of-breed storage hardware and building highly skilled teams to manage, tweak, and optimize the performance of the storage resources. Similarly, a COE (Center of Excellence) must be formed for servers (CPU and memory) and another for networks. This implies that any project would involve multiple teams, and project management would be a challenge, to put it mildly. This reminds me of my mainframe experience relative to the distributed platform: we could get an entire application developed, tested, and ready to go before getting a RACF ID to even access the mainframe.
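To make the four-variable framing concrete, here is a minimal sketch (the utilization numbers are invented for illustration) of how a capacity team might flag which variable is the current bottleneck for a workload, and therefore the first candidate to virtualize or deliver as a service:

```python
# Hypothetical utilization snapshot for one application workload.
# Values are fractions of provisioned capacity; all numbers are illustrative.
utilization = {
    "cpu": 0.62,
    "memory": 0.81,
    "storage": 0.45,
    "network": 0.30,
}

# The variable running closest to capacity is the bottleneck, and the
# natural first target for an optimization effort (or a dedicated COE).
bottleneck = max(utilization, key=utilization.get)
print(bottleneck)  # prints: memory
```

In practice each team would feed real telemetry into a view like this, but the point stands: whichever variable you choose to optimize on drives where the specialized skills and tooling investment goes.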

