At CiscoLive! one of the more common questions I was asked was about the future of enterprise IT from an organizational perspective. To me, the shift in the conversation is an indicator that folks are continuing down the path of data center virtualization and are trying to get a better understanding of the long-term impact. Conversations usually centered around the continuing role of IT in the enterprise and what kinds of skill sets will be needed in this new world. A number of folks in IT management roles asked some version of "with IaaS/PaaS/SaaS, will enterprise IT go away?"–I could not really tell if they viewed this as a negative or not. So, are we going to get to the point where someone in purchasing is buying IT services the same way they buy janitorial services?
Simply, no. The reality is that IT has become as core to the business as sales or finance. What all these cloud services will allow you to do is drive greater efficiency with your spend–focus your budget and headcount on high-value activities that are core to the business, and use these other services (IaaS, PaaS, etc.) to flesh out the functionality you need to deliver back to the business. How much you partake in cloud services will really depend on how core a given function is to your business. Let's jump into the wayback machine for a second and visit what may have been the first cloud service: Centrex, what would probably be called VaaS today. For those of you who weren't alive during the disco era, Centrex was essentially a PBX hosted by your friendly local phone company. You got flexible capacity and fixed costs, and were spared the capex and opex of buying and maintaining your own PBX (any of this sounding familiar?). For typical businesses, this might make a lot of sense; however, for certain customers with very demanding voice needs–hospitals, for instance–the potential TCO savings did not outweigh the level of control they needed over their voice infrastructure.

Jumping back to today, you will see the same logic apply–folks that need a high degree of control over certain key systems, or can drive consistently high utilization of them, will continue to keep those in house–everything else is potential fodder for the cloud and the attendant TCO reduction. However, even with the systems that are kept in-house, I think you will see a shift in philosophy. With the elastic capacity that cloud computing can offer, I think you will see folks become much more comfortable with a thinner provisioning model–building infrastructure for typical usage levels with the understanding that you can reliably grab compute capacity to address peak demand, which, again, translates to more efficient spend.
The engineers I talked to were, as you might expect, significantly less circumspect. With the convergence that is inherent in Data Center 3.0, they really wanted to understand whether their skill sets and their value to their companies would stay relevant. Again, the answer is simple: yes. We are talking about infrastructure convergence, not functional convergence, and that is an important distinction. Using history as a guide again, when we moved SNA traffic onto an IP backbone, the mainframe team pretty much continued business as usual. With the deployment of IP telephony, the roles and responsibilities of the voice team were largely unchanged. Likewise, with unified fabric, the storage team will pretty much continue to do the same things they have done in the past. In fact, because unified fabric lets all servers be SAN-attached, we actually expect the role and relevance of the storage team to grow. The net is that we see each function within the data center team remaining intact, maintaining its relevancy, and maintaining its autonomy (more on that in a second). So, what should you be thinking about from an organizational perspective? The guiding principle is that your technical architecture and your IT organizational structure should mirror each other. You cannot move towards a virtualized data center while maintaining a silo-ed IT team. From a practical perspective, this means loosely coupling your teams–each group should maintain its autonomy, but the framework should be in place for the teams to work together and collaborate.
This is where the technology can facilitate the process: if you look at port profiles on the Nexus 1000V or service profiles on the Cisco UCS, they are built along this loosely-coupled model. The various IT teams (server, virtualization, storage, and network) need to work together to initially build the profiles ("this is how we support Oracle servers" or "this is how we support finance department VMs"), but on a day-to-day basis, each team maintains its autonomy. We believe this will give you the right balance between "collaboration" and "management-by-committee." This model is also the reason we have deployed Role-Based Access Control (RBAC) in our data center switches and in the Cisco UCS. For example, the storage team can directly manage the storage-related aspects of a Nexus 5000 or a Cisco UCS without having to go through another team. At the same time, the storage team is walled off from areas of the platform that are beyond the scope of its responsibilities. Now, the technology will only get you so far–IT leadership has to set and enforce a collaborative mindset to make this all work. Along these lines, another area to consider is organizing around service teams for key systems. So, instead of having infrastructure-centric teams (server team, network team, etc.), build operational teams around key systems (SAP team, Exchange team, etc.) with members from the various technical disciplines under a common manager. It's an alternative approach some of our customers have used successfully to support their key business systems. So, what challenges are you seeing as you start virtualizing your data center–and how have you handled them?
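To make the loosely-coupled model concrete, here is a rough sketch of what this looks like in configuration. The names, VLAN numbers, and rule details below are hypothetical, and exact feature names vary by platform and software release; treat this as an illustration of the pattern rather than a reference configuration. The network team defines the port profile once, the virtualization team then attaches VMs to the resulting port group without further network-team involvement, and a custom RBAC role scopes the storage team to its own slice of the box:

```
! Hypothetical Nexus 1000V port profile: network team defines the
! connectivity policy once; it appears in vCenter as a port group
port-profile type vethernet Finance-VMs
  vmware port-group
  switchport mode access
  switchport access vlan 100
  no shutdown
  state enabled

! Hypothetical NX-OS RBAC role: storage team gets read-write access
! to storage features only, and read access to interface state
role name storage-team
  rule 2 permit read-write feature fcoe
  rule 1 permit read feature interface
username san-admin role storage-team
```

The point of the sketch is the division of labor: the policy-building step is collaborative and happens once, while day-to-day operations stay within each team's own scope.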