Within the Data Center marketing organization, we spend quite a bit of time focused on activities that announce new products or solutions, as well as educating customers, partners and analysts. Sometimes this is done via launches and other times it’s at events like EMC World 2011, SAP Sapphire, Citrix Synergy, CiscoLive 2011 or the upcoming VMworld 2011 (Cisco is a Diamond-level Sponsor). But once that information leaves our hands and gets out into the community, it’s important for us to remember that it’s often discussed, dissected, and evaluated by a broad range of IT professionals.
Over the past few weeks and months, Cisco Data Center technologies have been featured in several industry podcasts.
Tell me if this sounds familiar… you are asked to perform a penetration test on a customer’s network to determine the security posture of their assets, and the first thing they do is give you a list of assets that you are NOT allowed to test, because they are critical systems to the business. Ironic, isn’t it? This is exactly the difficulty you can expect when performing penetration testing in the cloud, but multiplied by ten.
There is a lot to think about and plan for when you want to perform a penetration test in a cloud service provider’s (CSP) network. Before we get into the technical details, we need to start with the basics.
Stealing a quote from Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.” Some people would certainly consider security these days as magic. Okay, so much for that reference, but what does Star Trek have to do with government and security, my typical topics? Star Trek, although mostly about exploration, sure seemed to have a bit of a “Space Military” characteristic to it. Isn’t that what Starfleet was all about? (No offense intended, Capt. Kirk.)
Lately, I’ve been doing some research for a paper on the integration of physical and logical security (I did an initial paper that you can see here: click on “The Necessity of Security”), and it dawned on me how very similar the technology of today is to the science fiction of the 1960s, or in Mr. Clarke’s case, magic. So here is a synopsis of some of my observations. I’m sure there are more; please feel free to reply with what I’ve missed or your own favorites.
For many months now, we’ve talked about the Journey to Cloud Computing and how an evolution within your Data Center is needed to make that a reality. In many cases, we looked at this from an application perspective, focused on the interaction between automation, applications, servers, storage and the edges of the network.
But many of you have asked us to provide you a broader understanding of the role the Network plays in the Journey to Cloud Computing. Specifically you’ve asked us to highlight several areas:
What is Cisco’s perspective and strategy around the usage of multiple types of Cloud Computing (Private, Public, Hybrid, Community) and what is needed from the network to interconnect all these offerings?
How does my business manage the network transitions needed between today’s applications (often client-server), the virtualization of those applications, and next-generation web and big data applications?
What considerations do we need to make within my Data Center as we try to maximize efficiency and scalability?
What considerations do we need to make at the edges of our networks when the proliferation of devices is almost out of control?
Are there ways to protect my network investments while still having the flexibility to deal with the business uncertainties that are around the next corner?
After just getting back from a great week at Cisco Live 2011, I wanted to highlight one of the demonstrations that garnered a huge amount of attention from attendees (customers and partners). This is from our CITEIS project, which is Cisco’s internal Private Cloud.
This demonstration highlights a number of unique Cisco Data Center technologies, along with partner technologies: