In this highly engaging episode of Engineers Unplugged, Andy Sholomon (@asholomon) and Damian Karlson (@sixfootdad) break down the hidden costs of cloud in the enterprise space. You don’t want to miss this one.
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
Episodes will publish weekly (or as close to it as we can manage)
The role of IT in the enterprise is transforming. Cisco is creating the next-generation data center and cloud deployments with Application Centric Infrastructure (ACI) to simplify and optimize the entire application deployment lifecycle.
In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT specific service provider model, or IoT SP for short.
I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore and messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
Big data is very difficult to move around: it takes energy, time, and bandwidth, and is therefore expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
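To make the cost concrete, here is a back-of-the-envelope sketch (all figures are illustrative assumptions, not numbers from the talk): moving a petabyte of edge data over a dedicated 1 Gb/s link takes months, while shipping a typical application image over the same link takes seconds.

```python
def transfer_seconds(size_bytes: float, link_bps: float) -> float:
    """Seconds needed to push size_bytes over a link of link_bps bits/second."""
    return (size_bytes * 8) / link_bps

# Illustrative, assumed figures -- not measurements.
PETABYTE = 1e15       # 1 PB of sensor data accumulated at the edge
APP_IMAGE = 500e6     # a 500 MB application image ("little data")
GIGABIT = 1e9         # a dedicated 1 Gb/s WAN link

data_days = transfer_seconds(PETABYTE, GIGABIT) / 86_400
app_secs = transfer_seconds(APP_IMAGE, GIGABIT)

print(f"Moving the data to the app: {data_days:.0f} days")    # ~93 days
print(f"Moving the app to the data: {app_secs:.0f} seconds")  # 4 seconds
```

The three-orders-of-magnitude gap is the whole point: under any reasonable set of assumed sizes and link speeds, shipping the code is vastly cheaper than shipping the data.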
As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data accumulates at the edges faster than the network can push it into the core.
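A minimal sketch of that mismatch, under stylized growth-rate assumptions (a Moore-like ~60% yearly growth in edge data versus a Nielsen-like ~50% yearly growth in bandwidth; both rates are assumptions for illustration): even a modest gap compounds until the network can move only about half of the data the edge produces.

```python
# Compound the assumed growth rates of edge data and network bandwidth.
DATA_GROWTH = 1.60       # assumed: edge data grows ~60%/year (Moore-like)
BANDWIDTH_GROWTH = 1.50  # assumed: bandwidth grows ~50%/year (Nielsen's law)

data, bandwidth = 1.0, 1.0   # normalized: in year 0 the network keeps up
for year in range(10):
    data *= DATA_GROWTH
    bandwidth *= BANDWIDTH_GROWTH

movable = bandwidth / data   # fraction of edge data the core can receive
print(f"After 10 years, only {movable:.0%} of edge data can reach the core")
```

Because both quantities compound, the direction of the gap matters more than its exact size: any sustained data-growth rate above the bandwidth-growth rate drives the movable fraction toward zero.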
Therefore, big data accumulated at the edge will attract applications (little data, or procedural code): apps will move to the data, not the other way around, behaving as if data has “gravity”.
Therefore, the notion of a very large centralized cloud that would control the massive rise of data spewing from tens of billions of connected devices is pitted against the laws of physics and Open Source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted: we have entered the third big wave (after the mainframe's decentralization to client-server, which in turn centralized into cloud) — the move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion; we have to do this well, so vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early '90s and the beginning of the ISP era, isn't it? This time much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an Open and Interconnected model, made easy by dramatically lower compute costs and the ubiquity of open source, to overcome all barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted: the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will it be first?
If you are like the many IT managers we talk to every day, you prefer to have options whenever you tackle a project or formulate your IT strategy. Perhaps you do not like the idea of feeling limited, constrained, or unable to leverage a viable contingency plan. Architecting your cloud strategy should be no exception … and Cisco Intercloud Fabric can help!
So what does Cisco Intercloud Fabric do?
No time to read? This short video will provide you with an overview of the solution and perhaps entertain you for a couple of minutes. And if you are at VMworld this week, you can stop by at the Cisco booth to learn more about Cisco Intercloud Fabric.
In essence, Cisco Intercloud Fabric provides open and highly secure portability of workloads (aka applications) among heterogeneous cloud environments, with consistent network and security policies. You can move your workloads from your traditional IT environment or your private cloud to a public cloud provider of your choice. We have discussed in the past how hybrid cloud is becoming the ‘new normal’. Cisco Intercloud Fabric lets you deploy a hybrid cloud that operates as one unified environment—straddling your data center boundaries—with you in control.
And what are the benefits?
Choice -- Can you really put in place a sound strategy if you do not have options, if you do not have choice? Are you limited in your choice of hypervisors, public cloud providers, or IT infrastructure? How easy would it be to change cloud providers if you wanted to do so in the future? Cisco Intercloud Fabric gives you the freedom to place workloads across clouds and across heterogeneous environments … ‘any’ network … ‘any’ hardware platform … with multi-hypervisor support … from VMware vSphere to Microsoft Azure … and back!
Consistency -- Can you seamlessly extend your private cloud environment to the public cloud? What about your network and security policies? How will they change? Cisco Intercloud Fabric will make your life easier in this regard: you get consistent network and security policies across your data and applications, wherever they reside, which reduces the time required to deploy your applications to the cloud.
Control -- Managing multiple cloud frameworks is challenging! Control is also about selecting the best cloud for your specific application and data. Cisco Intercloud Fabric gives you unified workload management across clouds … you are back in control!
Cisco Intercloud Fabric is a powerful enabler of that transition. You, like most IT decision makers, want to retain control over your hybrid cloud environment, and you may need the ability to repatriate your workloads back to your data centers. Avoid a ‘one-way’ trip to the public cloud … retain choice, consistency, and control without compromising your compliance requirements with Cisco Intercloud Fabric!
Do you want to see a demo?
Well … if you are going to be at VMworld in San Francisco this week, you can stop by the Cisco booth (#1217). You will see how you can unleash your hybrid cloud with Intercloud Fabric. You can also attend one of our sessions on Tuesday to learn more about this solution and its associated use cases.
In particular, we’re bringing Cisco UCS Director to VMworld and it will be featured in our demos, theater presentations, and breakout sessions at the show. If you’re not already familiar with UCS Director, it’s our flagship infrastructure automation software – for provisioning not only VMs but also bare metal servers, storage, networking, and layer 4-7 services. It’s a key component of many of our solutions that you’ll see at VMworld.
This past week, we also announced our new Cisco UCS Performance Manager software for performance monitoring of UCS and UCS-based integrated infrastructure – leveraging technology from our partner Zenoss. Stop by the Cisco or Zenoss booths at VMworld and be one of the first to see a live demonstration!
We’re also showcasing our software solutions for hybrid cloud, virtual network services automation, integrated infrastructure management, cloud automation, and more.