Cisco Blogs



Part 1: 10 Things VMware Server Admins Should Know About Self-Service Catalogs and Lifecycle Management

This is part 1 of the series “10 Things VMware Server Admins Should Know About Self-Service Catalogs and Lifecycle Management” that I’ll be publishing over the next few weeks--I hope! (The boy is nothing if not ambitious.)

1. The service catalog is a tool for driving users to standard configurations.

To get the operational efficiencies we hope to achieve from virtualization and/or cloud computing, we need to establish standard configurations. This is tough, for a couple of reasons.

First, the gap between the language of the customer and the detail needed by the operations group typically generates a lot of back and forth during the “server engineering” process. Instead of having “pre-packaged” configurations, everything is bespoke.

Instead of having useful abstraction layers and levels, the customer has to invent their own little bit of the data center. This made sense when a new app meant a whole new hardware stack to which the app would be fused and the concrete poured over it. It doesn’t make sense now.

Second, there’s resistance from customers to adopting standard VM builds. Sometimes the reasons are valid, other times less so. The issue arises because the technical configurations haven’t been abstracted to a level at which the user can understand what they get and what’s available for configuration. Nor can they compare one template to another in ways that are meaningful to them.

The service catalog is the tool that helps deal with these two obstacles. It communicates, in the language of the customer, the different options available from IT for hosting environments.

A service catalog will support multiple views (customer, technical, financial, etc.) so that when the customer selects “small Linux” for testing, that selection generates both a bill of materials and standard configuration options. Once that base is selected, self-service configuration wizards provide both guidance and gutter-rails, so the customer is both helped to the right thing and prevented from making errors.

From this customer configuration, the environment build sheet is generated, which drives provisioning and configuration activities and executes any policy automation in place.
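
To make that concrete, here’s a minimal sketch (in Python, with made-up names, sizes and prices, not any particular catalog product) of how a single catalog entry can carry the customer, technical and financial views and turn a customer selection into a build sheet:

```python
# A made-up catalog entry; every name and number here is illustrative only.
CATALOG = {
    "small-linux-test": {
        "customer_view": "Small Linux server for test/dev",
        "technical_view": {"vcpus": 1, "memory_gb": 2, "disk_gb": 40, "os": "Linux"},
        "financial_view": {"monthly_cost_usd": 45},
        "configurable": {"extra_disk_gb": [0, 20, 40]},  # the only knobs the wizard exposes
    },
}

def build_sheet(item_key: str, extra_disk_gb: int = 0) -> dict:
    """Turn a customer selection into the build sheet that drives provisioning."""
    item = CATALOG[item_key]
    if extra_disk_gb not in item["configurable"]["extra_disk_gb"]:
        raise ValueError("option not offered by this catalog item")  # the gutter-rails
    spec = dict(item["technical_view"])
    spec["disk_gb"] += extra_disk_gb
    return {"bill_of_materials": spec, "cost": item["financial_view"]}

print(build_sheet("small-linux-test", extra_disk_gb=20))
```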

And the catalog allows the VM admins to figure out what their “market” is buying, which is very useful for capacity planning.


The Value of Orchestration: What Did Captain Kirk Know That Scotty Didn’t? & The Roach Motel Infrastructure Issue

Recently, a customer asked me: what is the value of using automation to operate a private cloud? It was a good question. Working in the middle of the reality distortion field of the cloud industry, I take it for granted that everyone knows automation’s benefits.

Fundamentally, automation tools help to reduce labor costs, rationalize consumption, and increase utilization.

Costs are lower because the labor required to configure and deploy is eliminated. This automation is made possible by creating standard infrastructure offerings. Standard infrastructure offerings make possible a new operational model: moving from the artisanal approach to delivering infrastructure, where every system and configuration is unique, to the industrialized approach, which ensures repeatability, quality and agility. It’s the difference between custom tailoring and standardized sizes at The Gap. Both have their place, but one costs more.
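
As a rough illustration (hypothetical tiers and a placeholder provisioning call, not a real product API), the industrialized model boils down to this: the tier fully determines the configuration, so the same automated steps run every time:

```python
# Hypothetical standard offerings; provision_vm() is a stand-in for whatever
# provisioning tool is actually in place (vCenter, UCS Manager, an orchestrator, ...).
STANDARD_OFFERINGS = {
    "bronze": {"vcpus": 1, "memory_gb": 2, "backup": False},
    "silver": {"vcpus": 2, "memory_gb": 8, "backup": True},
    "gold": {"vcpus": 4, "memory_gb": 16, "backup": True},
}

def provision_vm(name: str, spec: dict) -> None:
    # Placeholder for the real provisioning call.
    print(f"provisioning {name}: {spec}")

def deploy(name: str, tier: str) -> None:
    """No per-request engineering: the tier fully determines the configuration."""
    provision_vm(name, STANDARD_OFFERINGS[tier])

deploy("web-01", "silver")
```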



The New Bronze Age: SLAs too high and they prevent innovation, too low and they prevent operation

Where I grew up, you could buy individual cigarettes. While I played ball at the park, I’d see the young men approach the paper kiosk to get a cigarette. Not a pack, just one lonely stick. The customers overpaid on a per-cigarette basis, but it helped them manage their budget. I’d watch them and think nothing of it. It was normal.

People also could buy shampoo in ketchup-sized packages. Unilever still sells them in India. I grew up in the third world; it was the bronze age, but only on good days. We’re back to bronze with cloud computing, and I’m hyper ready.

For me, the biggest invention cloud computing brings about is unreliable service levels. And how important it is to have low-quality service levels available on a metered basis. A metered basis the customer can manage. Hear me out.

Today, Amazon’s block storage is unpredictable for databases. The latency in the network is funky. Machines fail to start. Machines don’t fail to fail. Service levels in the cloud don’t exist.

This is not your typical datacenter. It’s a bronze age datacenter. No great expectations, but diminished expectations. And for a young segment of the market, it’s just right and couldn’t be better.

I sat down with a young startup and asked them why they use cloud computing if it’s so unreliable, if it requires so much more coding.

Answer: They have more time than money. And with the money they have, they have to be parsimonious, avaricious and cautious. They are OK coding more to deal with the cloud’s weirdness. But running out of cash would kill them. The bronze age suits them just fine.

So all the cool kids in Silicon Valley are super excited about writing software for “Designed-to-Fail” infrastructure. We can’t wait for a chaos monkey to spank us. Well… that’s a San Francisco thing.
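
For the curious, here’s a toy sketch of what “coding for the cloud’s weirdness” looks like; launch_instance() is a stand-in of my own, not a real provider SDK call:

```python
import random
import time

def launch_instance() -> str:
    """Stand-in for a cloud API call; assume any single launch can fail."""
    if random.random() < 0.3:  # machines sometimes just fail to start
        raise RuntimeError("instance failed to start")
    return "i-" + format(random.getrandbits(32), "08x")

def launch_with_retries(attempts: int = 5) -> str:
    """Retry with backoff instead of assuming the first launch succeeds."""
    for attempt in range(attempts):
        try:
            return launch_instance()
        except RuntimeError:
            time.sleep(0.5 * 2 ** attempt)  # back off, then try again
    raise RuntimeError("gave up; the app has to tolerate this case too")

print(launch_with_retries())
```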

So what’s the lesson of this meditation? It’s that service levels are important. Too high and they prevent innovation, too low and they prevent operation.



Re-Thinking Pork Bellies. Why There Are No Commodity Clouds, Only Commodity Thinkers.

For a while now, I’ve been bothered by the word commodity. Like legacy and greenfield, it carries implicit value judgments. When we apply these words to technology adoption, they serve as marketing oars to rock the new tech boat, but they are not useful when you need a fish for dinner.

And this article on the NYSE community cloud is a great example of why there are no commodity clouds.

The NYSE’s community cloud platform is designed to ensure that its customers are treated fairly, and it assures them that the maximum latency any user will experience in this data center is 70 microseconds (one millionth of a second) round-trip for any message, O’Sullivan said.

“We guarantee that nobody will have an advantage on the network,” said O’Sullivan. “It’s designed to be a level playing field for trading.”

Basically, this compute service comes with a latency service level and a promise that no one gets better latency, thus ensuring a level playing field for traders.
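
If you wanted to check a latency service level like that yourself, the logic is roughly this (a toy sketch of mine; echo() stands in for sending a real message over the trading network and waiting for the reply):

```python
import time

SLA_SECONDS = 70e-6  # the 70-microsecond round-trip figure quoted above

def echo(message: bytes) -> bytes:
    return message  # stand-in for a real send-and-receive round trip

def round_trip_within_sla(message: bytes = b"order") -> bool:
    """Time one round trip and compare it against the latency service level."""
    start = time.perf_counter()
    echo(message)
    elapsed = time.perf_counter() - start
    return elapsed <= SLA_SECONDS

print(round_trip_within_sla())
```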

So it’s “level-playing-field-as-a-service,” which is both right and ridiculous. Right because that’s the differentiator; ridiculous because I have to pull out the *aaS to describe what I would previously have simply called “service.”

There was a time when coffee was called a commodity; then Howard Schultz of Starbucks came along, and Peet’s came along, and now we are all paying $5 for coffee.

Even frozen pork bellies are not commodities anymore. You might remember this quote:

“Pork bellies! I have a hunch something exciting is going to happen”

from Trading Places, with Dan Aykroyd and Eddie Murphy.

But as you see from the link, even pork bellies are not commodities anymore in the trading markets.

And then again, pork bellies are not commodities according to chef Michael Mina--they’re now branded, locally grown, organic and … sexy. Pork bellies. Sexy.

So you can see why I might think clouds are far, far from being commodities like pork bellies. Which are not commodities anymore.

As for x86 being a commodity? I don’t see Intel suffering. Don’t confuse platform with commodity.


Management of Dynamic Virtualized Data Centers

September 12, 2011 at 12:25 pm PST

As a resident of Austin, TX, I got to experience a record-setting heat spell and drought this summer. Not to mention some of the worst forest fires, which have yet to be contained. I was fortunate to escape the heat in the last week of August and attend VMworld 2011.

One of the themes at the conference was Desktop Virtualization -- desktop access through a range of devices and access to cloud-based virtual machines. This is a timely issue in light of the June Cisco Visual Networking Index report, which predicted that there will be twice as many networked devices as people on earth by 2015. With the proliferation of devices such as the iPhone and iPad, it is not inconceivable that workers will use the same devices in the office as well as at home.

Another theme at the conference was management of servers and desktops, including provisioning, ongoing maintenance and automation to meet service level agreements. VMware’s CTO also mentioned in his keynote that some of their biggest investments are around operations management of the virtualized environment.

Not surprisingly, another theme at the conference was Cloud Computing.  Whether virtualization is required for Cloud Computing can be a topic for heated debate.  Although virtualization is not an integral part of the NIST definition of Cloud Computing, the resource-pooling characteristic of Cloud Services is enabled by virtualization.

These themes prompted me to revisit a Forrester Research study that Cisco sponsored on the basics of management for Cloud Computing. Although it was aimed at Cloud Management, the basic steps and concepts should be valid for any data center on a journey towards a dynamic, connected world. The paper is titled “Elements of Cloud Service Orchestration”. A closer look under the hood is warranted even though the term “Service Orchestration” has taken on a life of its own, with Wikipedia calling it a buzzword. A webcast on the topic is also available. What do you think service orchestration means in the context of data center management? I am very interested in your feedback.
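
For what it’s worth, here is my own minimal sketch of what “service orchestration” tends to mean in practice: a coordinated sequence of provisioning and configuration steps, each feeding the next. The step names below are illustrative, not taken from the Forrester paper:

```python
from typing import Callable, Dict, List

# Each step takes the request so far and returns it enriched; values are made up.
def reserve_capacity(request: Dict) -> Dict:
    return {**request, "host": "cluster-a"}

def deploy_vm(request: Dict) -> Dict:
    return {**request, "vm_id": "vm-001"}

def configure_network(request: Dict) -> Dict:
    return {**request, "vlan": 120}

def register_monitoring(request: Dict) -> Dict:
    return {**request, "monitored": True}

WORKFLOW: List[Callable[[Dict], Dict]] = [
    reserve_capacity, deploy_vm, configure_network, register_monitoring,
]

def orchestrate(request: Dict) -> Dict:
    """Orchestration here just means running the steps in order, as one unit."""
    for step in WORKFLOW:
        request = step(request)
    return request

print(orchestrate({"offering": "small-linux-test"}))
```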
