I knew we were on to something good when a customer told me “This is so easy, it’s CTO proof.”
Early in the business, I was talking to a front-line server admin who had found that Cisco UCS made server deployment so reliable, automated and simple that he was convinced even his CTO could pull it off without breaking anything. The enthusiasm was real, and infectious, and it changed the face of the data center market.
Thinking back five years to March of 2009, when Cisco introduced UCS, the economy was still spiraling into the worst recession of our lifetime. IT budgets were being slashed. Many wondered if it was the right time for Cisco to enter a new market with deeply entrenched competitors.
As it turns out, it was the perfect time. Because change occurs fastest when times are hard.
In the decade leading up to 2009, computing innovation had stalled. The incumbents still had tunnel vision on the power and cooling challenges that arose out of multi-core processing in the mid-2000s. Innovation was essentially focused on mechanical packaging: blade servers for mainstream IT and “skinless” boxes for the hyperscale crowd. Overlooked was the real problem for the vast majority of customers: operational complexity. Remember that server virtualization was rapidly spreading to nearly every data center. It, too, was originally a response to a hardware problem (processor utilization), but as everyone recognized the operational benefits, virtualization took hold very fast. As did cloud. Combine all this with the disaggregation of data storage from the server, which had already moved out onto the network as NAS and SAN many years before, and you had a perfect storm of complexity threatening to outpace the capacity of many IT organizations. The individual technologies in the data center were not overwhelmingly complex, but tying them all together into a system where you could land and scale an application in a secure and highly available way became the all-consuming job of the customer. Collectively, the industry had failed. In 2009, more than ever, customers needed something to help them slash OPEX in the data center and free people up to face the challenges of the day. This was the innovation vacuum that UCS had been designed to fill.
Think of UCS as the Turducken of the data center: the sum is much, much greater (and tastier) than the parts. A lot of true innovation has gone into UCS in the areas of server I/O and in fundamental advancements to server management technology. The latter is especially critical, because what is often overlooked in virtualization and cloud discussions is the underlying issue of deploying, managing and scaling the physical infrastructure itself (details, details…). The advent of UCS completed the total abstraction and automation of hardware in crucial ways that hypervisor and cloud technology still can’t achieve on their own. API-controlled data center hardware is a foundational element of modern IT innovation, and UCS started it all. This may be Cisco’s greatest contribution to the industry, and it charted the course for Cisco ACI in the broader data center.
Cisco’s not stopping. In the intervening five years, new innovation opportunities have appeared. The most recent is the addition of flash systems to Unified Computing in the form of UCS Invicta, which opens up a whole new chapter for what customers will be able to achieve with the System. UCS Director is taking on a pivotal role for automation across Cisco solutions and the integrated infrastructures we construct with our storage partners. The future is so bright, our partners need sunglasses.
The team has put together this interactive timeline that commemorates many of the milestones in the first five years of UCS. Looking back over it, I can only feel proud and humbled to be associated with the team here at Cisco, our technology and channel partners, and most importantly with our customers, who have clearly proven that UCS was (and is) the right solution at the right time.
Tags: Cisco UCS, Cloud Computing, data center, UCS, virtualization
Selecting the right cloud service provider for your company requires more than just browsing through prospective cloud vendors’ websites and reading about them online.
How do you decide which vendor to trust for the performance, reliability, and security you need?
Whether you are in the process of migrating to the cloud or are already a cloud adopter, a recent Business 2 Community article offered the acronym “PERFECTION” as a way to remember 10 important technological and business considerations when choosing a cloud service provider.
Finding this perfect cloud service provider can seem like a daunting feat, right?
In this post, I’ll discuss how organizations can have confidence in their cloud vendor decisions. They need to be assured the technology powering their services leads the industry in performance and scalability. And most importantly, the vendor they choose should not only act as a cloud provider, but also as a cloud partner.
Here’s a deeper look at the top 10 considerations for selecting a cloud partner and how Cisco, through Cisco Powered, is able to help you with your cloud strategy.
Tags: Cisco, cloud, Cloud Computing, Cloud Management, data center
You have probably already heard that during Cisco Live Milan we unveiled new additions to our Data Center and Cloud networking portfolio:
- The new Nexus 7706 and a high-density F3 Series 1/10G module for the Nexus 7700 provide increased deployment options for data center interconnect, core, or aggregation.
- The next-generation Nexus 5600 family offers VXLAN bridging and routing, line-rate L2/L3, and 40G uplinks to deliver high performance in a compact form factor for 10G top-of-rack and 1/10G FEX aggregation deployments.
- The new Nexus 6004 Unified Port LEM module brings the industry’s highest unified-port density in a four-RU form factor, simplifying LAN and SAN convergence.
- The new Nexus 3172TQ top-of-rack 1-RU switch delivers industry-first 1/10GBASE-T copper server access and superb performance combined with robust NX-OS features.
- The new Nexus 1000V on the Kernel-based Virtual Machine (KVM) hypervisor brings OpenStack clouds a fully integrated network virtualization solution that can be deployed consistently across VMware, Microsoft, and Linux-based software platforms.
And there has been broad customer adoption across the data center!
From Nexus 1000V to the Nexus 9000, Cisco’s holistic approach resonates with customers because it provides increased business agility, operational efficiency, and empowers IT to rapidly evolve as business requirements change.
Here are the latest examples of why our customers chose Nexus:
Tags: Cisco, Cisco DFA, Cisco Dynamic Fabric Automation, cloud, Cloud Computing, data center, DCNM, F3 Modules, FabricPath, KVM, LISP, nexus, Nexus 1000v, Nexus 3000, Nexus 3100, Nexus 5000, Nexus 5600, Nexus 6000, Nexus 7000, Nexus 7700, NX-OS, OTV, private cloud, switch, Unified Fabric, Unified Ports, virtualization, VXLAN
If you are reading this blog hoping to get a universal recipe for your cloud strategy, I believe you will be disappointed. But then, you already know: there are no ‘universal’ cloud strategies. You have to formulate a cloud strategy that best fits your business objectives and IT priorities (among a number of other factors). Our Cisco services team for Cloud Strategy, Management and Operations has various tools, including our Cisco DomainTen™ framework, that will help you formulate the right cloud strategy for your organization. Parag’s blog is a great source of information in this regard.
This blog series will instead offer a set of perspectives on how I view the evolution of the World of Many Clouds™ and what steps we are taking to align our cloud strategy to capitalize on it. This first blog puts our strategy in context, outlining our point of view in light of some important market dynamics.
The primary market research study that we conducted in collaboration with Intel, along with additional secondary market research studies, clearly indicates that Line of Business (LoB) leaders have been playing a more important role in driving requirements for IT solutions and services. The reasons behind this trend are many, including but not limited to increasing market and competitive pressures, an uncertain business climate, variability in macroeconomic factors, and a relentless need to innovate at a faster pace to stay ahead of the competition. What’s more, LoBs now have greater ability to access IT solutions – such as Software as a Service – outside the traditional enterprise IT value chain, creating “shadow IT” initiatives. In response, IT organizations are looking for new ways to retain their leadership, control, and at times, even relevancy. Furthermore, IT organizations are now expected to support strategic business objectives and enable business growth while also harnessing new technology trends, leading to innovation and new customer experiences. To remain relevant to the business, IT must become a “change agent” and be perceived as a true strategic enabler. The question is how?
We envision IT organizations transitioning to new roles as trusted ‘brokers of IT services’. This model enables IT to add value to one or more public or private cloud services on behalf of its users. IT does this by dynamically bringing together, integrating, and tailoring the delivery of cloud services to best meet the needs of the business.
In a wide-ranging study, Cisco, in partnership with Intel®, sought to pinpoint just how these powerful trends are impacting IT. The “Impact of Cloud on IT Consumption Models” study surveyed 4,226 IT leaders in 18 industries across nine key economies, developed as well as emerging: Brazil, Canada, China, Germany, India, Mexico, Russia, United Kingdom, and the United States. The study supports our point of view. Up to 76% of the survey respondents signaled that IT will act as a “broker” of cloud services across internal and external clouds for LoBs.
In other words, when formulating their sourcing strategies, IT organizations repeatedly face service-by-service, “build-versus-buy” decisions. Therefore, IT needs a plan and a set of governance criteria that support the consistent evaluation of their IT services sourcing options (e.g., time to market, value, sustainable differentiation that the service can provide, SLAs, cost, risk profile, and the experience the IT department intrinsically has with that particular service).
This “IT services sourcing flexibility” enables greater levels of business agility, transparency, and speed of deployment to help LoB leaders unlock innovation and achieve core business objectives.
However, let’s step back and see how this all fits together. If we rewind, we introduced the concept of the World of Many Clouds™ a couple of years ago. You can view the evolution of this world as the outcome of the intersection and progressive integration of traditional IT environments and IT services offered by public cloud providers. The roads (in our metaphor) are converging. Lines are blurring. In theory, nothing prevents a company that consumes IT services from becoming a cloud provider itself (public or private).
I also believe that the debate regarding private versus public cloud is over. It is about having both at the same time, and about being able to bridge and take advantage of both; hybrid cloud is the new ‘normal.’
In turn, the ability to combine and dynamically aggregate cloud services from private and public clouds can truly occur only if IT organizations can rely on an open and secure hybrid cloud environment. And for that to take place, you need the ability to move your cloud workloads (and, more broadly, your IT services) around: both data and applications.
You can easily envision a scenario in which a workload, based on a set of specifications, ‘automatically discovers’ the best infrastructure to run on. An exchange could facilitate the allocation process. An XML-based standard could emerge, along with a set of processes used by exchanges to match demand and supply of IT services based on SLAs, costs, data locality requirements, and so on. On the supply side, you can also envision a scenario in which federation or capacity aggregation among suppliers of cloud services would enable increased economies of scale, consistency, and a broader set of choices.
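To make the scenario concrete, here is a minimal sketch of how such an exchange might match a workload’s specifications against provider offers. All class names, fields, and the scoring rule (cheapest eligible offer wins) are illustrative assumptions, not part of any real standard or product:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Optional


@dataclass
class WorkloadSpec:
    """Hypothetical demand-side specification published to the exchange."""
    min_uptime_sla: float      # e.g. 0.999 (three nines)
    max_cost_per_hour: float   # budget ceiling, in dollars
    required_region: str       # data-locality constraint, e.g. "EU"


@dataclass
class ProviderOffer:
    """Hypothetical supply-side offer from a cloud provider."""
    name: str
    uptime_sla: float
    cost_per_hour: float
    region: str


def match(spec: WorkloadSpec, offers: list[ProviderOffer]) -> Optional[ProviderOffer]:
    """Return the cheapest offer satisfying SLA, cost, and locality, or None."""
    eligible = [
        o for o in offers
        if o.uptime_sla >= spec.min_uptime_sla
        and o.cost_per_hour <= spec.max_cost_per_hour
        and o.region == spec.required_region
    ]
    # Among eligible offers, pick the lowest hourly cost.
    return min(eligible, key=lambda o: o.cost_per_hour, default=None)
```

A real exchange would of course negotiate far richer criteria (compliance regimes, performance profiles, federated capacity), but the core mechanic is the same: filter offers against hard constraints, then rank what remains.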
Ok … coming back to earth … our Cloud strategy intends to capitalize on some of these market dynamics and enable IT to retain control, relevance and increase its strategic profile by leveraging the evolution of the World of Many Clouds. In my next blog I will provide an overview of the actual strategy and begin focusing on it in more detail. But first I wanted to share the context.
And as always, to learn more you can begin here.
Tags: Cisco, cloud, Cloud Computing, Hybrid Cloud, private cloud, Public Cloud, SaaS, Service Provider, strategy
During my years at Cisco, I’ve been able to witness IT become incredibly pervasive. While traditional IT has been thought of as just helping run a business (and making things work!), today’s expectation of IT also includes how it can help change and grow the business.
In my conversations with CEOs across the globe, one major theme keeps coming up: CEOs want IT leaders to figure out how technology can help their businesses transform and expand, as much as make it operate.
Recently, I had the opportunity to participate in a new Cloud Insights video series to discuss IT’s role in driving business outcomes with cloud and collaboration technology. It’s an interesting time as the pace of change is at an all-time high. Communication, collaboration and cloud are front and center, helping drive the transformation CEOs want.
Here are a couple of insights from the series that discuss how IT leaders can embed the role of technology within a business to drive greater efficiency and differentiation from competitors. One thing is certain: the time for IT leaders to lean forward with their business partners and think about how the right technology can solve their problems is now.
What is collaboration and is there proof it’s actually working? Read more.
Tags: Cisco, CiscoCloud, cloud, Cloud Computing, collaboration, next-generation IT