Cloud Expo was an interesting juxtaposition of people espousing the value of cloud and how their offerings are truly "cloudy." One camp of presenters and expo-floor booths talked about their open API and how it is the future of cloud. The other camp told us how their special mix of functions is so much better than that. All of this makes for an interesting dialog. APIs are indeed very important: if your technology supports a cloud operating model, then you must have an API. Solutions like Cisco's Intelligent Automation for Cloud rely on those APIs to orchestrate cloud services. But APIs are not the be-all and end-all. The reality is that while cloud discussions tend to center on the API and the model behind it, the real change enabling the move to cloud is the operating model of the users who are leveraging the cloud for a completely fresh game plan for their businesses.
James Urquhart's recent blog (http://gigaom.com/cloud/what-cloud-boils-down-to-for-the-enterprise-2/) highlights that the real change for cloud users is modifying how they do development, test, capacity management, production operations, and disaster recovery. My last blog talked about the world before cloud management and automation, and the move from that old model to the new dev/test and dev/ops models that force application architects, developers, and QA folks to radically alter how they work. Those who adopt the cloud without changing their "software factory" from a model Henry Ford would recognize to the new models may not get the value they are looking for out of the cloud.
At Cloud Expo I saw a lot of very interesting software packages. Some went really deep into a specific use-case area, while others covered many functional use cases but only about an inch deep. As product teams build out software packages for commercial use, they face a critical decision point that will drive the value proposition of the product. It seems to me that within two years, just about all entrants in the cloud management and automation marathon will begin to converge on a simple, focused, yet broad set of use cases. Each competitor will either drive their product directly to that point or be forced there by customers voting with their wallets. Interestingly enough, this whole process drives competition and will yield great value for the VPs of Operations and VPs of Applications at companies moving their applications to the cloud.
Read More »
Tags: API, application, automated provisioning, cloud, data center provisioning, devops, devtest, intelligent automation, monitoring, private cloud, service assurance
Big Data's move into the enterprise has generated a lot of buzz: why big data, what are the components, and how do they integrate? The "why" was covered in a two-part blog (Part 1 | Part 2) by Sean McKeown last week. To help answer the remaining questions, I presented Hadoop Network and Architecture Considerations last week at the sold-out Hadoop World event in New York. The goal was to examine what it takes to integrate Hadoop into enterprise architectures by demystifying what happens on the network and identifying the key network characteristics that affect Hadoop clusters.
The presentation includes results from an in-depth testing effort to examine what Hadoop means to the network. We went through many rounds of testing that spanned several months (special thanks to Cloudera for their guidance). Read More »
Tags: Big Data, Cisco, Cloudera, data center, Hadoop
The customers I talk to know that deploying a private or hybrid cloud will both save them money on IT operations and make them more agile in responding to the business. There is a low-grade euphoria over the cloud opportunity that gets the conversation going. That conversation drives the development of both our solution and our customers' sophistication in thinking about how and why they will use Cisco Intelligent Automation for Cloud (CIAC).
However, finance folks and IT management don't get that feel-good feeling over the opportunity, or even the coolness of the technology, in the absence of dollar figures to motivate them.
Nor should they.
We are in a part of high-tech that does not do technology for technology’s sake. We do it because it makes business sense.
Read More »
Tags: Cisco, cloud, data center
Interesting trends are taking root around us, and one of them is convergence. The term conjures up different thoughts depending on our background and experiences. Economists may say convergence is the parity of per capita income around the world. Convergence for telecom is the combination of voice, data, and entertainment services. So what does it mean for data centers? In one of my recent informal webcast polls of technologists, one opinion was that convergence implied the union of telecom and IT. The reality is that data centers are now the hub and source for voice, video, data, and application services.
So if we look at application workloads running in data centers, there are four infrastructure capacity variables: CPU, memory, storage, and network. One approach is to optimize the utilization of one of these variables. If we decide to optimize on storage, then it must be virtualized and/or provided as a service. Implementation would involve purchasing best-of-breed storage hardware and building highly skilled teams to manage, tweak, and optimize the performance of the storage resources. Similarly, a COE (Center of Excellence) must be formed for servers (CPU and memory) and another for networks. This implies that any project would involve multiple teams, and project management would be a challenge, to put it lightly. It reminds me of my mainframe experience relative to the distributed platform: we could get an entire application developed, tested, and ready to go before getting a RACF ID to even access the mainframe.
Read More »
Tags: Cisco, Converged Infrastructure, data center, unified computing
Early in my career I moved quite a bit. A new job, a growing family, whatever the reason, it seemed like every two or three years we were packing up, going to a new place, and meeting our new neighbors.
Each new place had its own protocol for getting to know the neighbors: sometimes they came to us, other times we had to walk around the block with the kids in tow to make the connection. The benefits of knowing your neighbors are many: who'll lend you tools, who will help move furniture, and so on.
Knowing the device neighbors in your network is just as important, and fortunately there is a protocol for that: Cisco Discovery Protocol (CDP). This article is a guide to getting to know your UCS Fabric Interconnects' neighbors, both manually and in an automated way.
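The full article walks through the automation itself (the tags hint at Expect/TCL); as a minimal illustrative sketch, and not the article's actual script, the fragment below parses the style of output that `show cdp neighbors detail` produces so each neighbor becomes a structured record. The sample text, device names, and addresses here are invented for illustration.

```python
import re

# Sample output in the style of `show cdp neighbors detail` on a UCS
# Fabric Interconnect (NX-OS). Device names and addresses are made up.
SAMPLE = """
-------------------------
Device ID: n5k-a.example.com
Entry address(es):
  IP address: 192.0.2.10
Platform: cisco N5K-C5548UP, Capabilities: Switch IGMP
Interface: Ethernet1/1, Port ID (outgoing port): Ethernet1/17
Holdtime: 155 sec
-------------------------
Device ID: n5k-b.example.com
Entry address(es):
  IP address: 192.0.2.11
Platform: cisco N5K-C5548UP, Capabilities: Switch IGMP
Interface: Ethernet1/2, Port ID (outgoing port): Ethernet1/17
Holdtime: 142 sec
"""

def parse_cdp_neighbors(text):
    """Turn `show cdp neighbors detail` output into a list of dicts."""
    neighbors = []
    # Neighbor entries are separated by long runs of dashes.
    for block in re.split(r"-{5,}", text):
        device = re.search(r"Device ID:\s*(\S+)", block)
        if not device:
            continue  # separator fragment, not a neighbor entry
        ip = re.search(r"IP address:\s*(\S+)", block)
        platform = re.search(r"Platform:\s*([^,]+)", block)
        iface = re.search(
            r"Interface:\s*([^,]+),\s*Port ID \(outgoing port\):\s*(\S+)", block
        )
        neighbors.append({
            "device_id": device.group(1),
            "ip": ip.group(1) if ip else None,
            "platform": platform.group(1).strip() if platform else None,
            "local_interface": iface.group(1).strip() if iface else None,
            "remote_port": iface.group(2) if iface else None,
        })
    return neighbors

for n in parse_cdp_neighbors(SAMPLE):
    print(f"{n['local_interface']} -> {n['device_id']} ({n['remote_port']})")
```

In practice you would capture the command output over SSH or a console session rather than from a string; the parsing step stays the same either way.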
Read More »
Tags: application, automated provisioning, cloud, devops, devtest, expect, intelligent automation, server provisioning, TCL