In my Internet of Things keynote at LinuxCon 2014 in Chicago last week, I touched upon a new trend: the rise of a new kind of utility or service model, the so-called IoT-specific service provider model, or IoT SP for short.
I had a recent conversation with a team of physicists at the Large Hadron Collider at CERN. I told them they would be surprised to hear how computer scientists talk these days about Data Gravity. Programmers are notorious for overloading common words, adding connotations galore, and messing with meanings entrenched in our natural language.
We all laughed and then the conversation grew deeper:
Big data is very difficult to move around: it takes energy, time, and bandwidth, which makes it expensive. And it is growing exponentially larger at the outer edge, with tens of billions of devices producing it at an ever-faster rate, from an ever-increasing set of places on our planet and beyond.
As a consequence of the laws of physics, we know we have an impedance mismatch between the core and the edge. I coined this the Moore-Nielsen paradigm (described in my talk as well): data gets accumulated at the edges faster than the network can push it into the core.
Therefore, big data accumulated at the edge will attract applications (little data, or procedural code), so apps will move to data, not the other way around, behaving as if data has “gravity.”
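As a back-of-the-envelope sketch of this mismatch (every figure below is an illustrative assumption, not a measurement): if edge data grows exponentially while uplink capacity grows only at a slower rate, the volume that can never be shipped to the core grows without bound, and it must be handled at the edge.

```python
# Toy model of the Moore-Nielsen mismatch between edge data growth
# and network capacity. All figures are illustrative assumptions.

EDGE_DATA_TB = 10.0   # data produced at the edge in year 0 (TB/day, assumed)
DATA_GROWTH = 1.4     # assumed 40% annual growth in edge data
UPLINK_TB = 8.0       # data the uplink can push to the core (TB/day, assumed)
LINK_GROWTH = 1.2     # assumed 20% annual growth in link capacity

produced, capacity = EDGE_DATA_TB, UPLINK_TB
for year in range(6):
    # Whatever the link cannot carry is stranded at the edge that year.
    backlog = max(0.0, produced - capacity)
    print(f"year {year}: produced {produced:6.1f} TB/day, "
          f"shipped {min(produced, capacity):5.1f}, stranded {backlog:5.1f}")
    produced *= DATA_GROWTH
    capacity *= LINK_GROWTH
```

Under these assumed rates the stranded volume widens every year, which is the arithmetic behind "apps will move to data."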
Therefore, the notion of a very large centralized cloud that would control the massive rise of data spewing from tens of billions of connected devices is pitted against both the laws of physics and open source, not to mention the thirst for freedom (no vendor lock-in) and privacy (no data lock-in). The paradigm has shifted; we have entered the third big wave (after the mainframe's decentralization to client-server, which in turn centralized to cloud): the move to a highly decentralized compute model, where intelligence shifts to the edge as apps come to the data, at much larger scale, machine to machine, with little or no human interface or intervention.
The age-old dilemma pops up again: do we go vertical (domain-specific) or horizontal (application development or management platform)? The answer has to be based on necessity, not fashion, and we have to do this well; hence vertical domain knowledge is overriding. With the declining cost of computing, we finally have the technology to move to a much more scalable and empowering model: the new opportunity in our industry, the mega trend.
Very reminiscent of the early '90s and the beginning of the ISP era, isn't it? This time it is much more vertical, with deep domain knowledge: connected energy, connected manufacturing, connected cities, connected cars, connected home, safety and security. These innovation hubs all share something in common: an open and interconnected model, made easy by dramatically lower compute cost and the ubiquity of open source, to overcome all barriers to adoption, including the previously weak security and privacy models predicated on a central core. We can divide and conquer, dealing with data in motion differently than we deal with data at rest.
The so-called “wheel of computer science” has completed one revolution, just as its socio-economic observation predicted: the next generation has arrived, ready to help evolve or replace its aging predecessor. Which one, or which vertical, will it be first?
Imagine that you see a Tweet today inviting you to apply for a part-time networking job, something you can do in addition to your normal job. You appear to be qualified for the job, and the work looks interesting as well. However, it requires enough of your time so that you would have to set aside your current professional development plans, including study for that next Cisco certification. The job lasts one year.
Would you take the job, setting aside your certification plans for a year? How much money would you need to make in that job before it would entice you to abandon your learning and certification plans for a year?
This post works through a couple of ideas (like the one above) about how to quantify the value of a certification. Many people expect that more skills and certifications will give them more …
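One way to frame the thought experiment above is as a simple break-even calculation. The sketch below is a hypothetical illustration, not the post's own method, and every dollar figure in it is an assumption.

```python
# Illustrative break-even sketch for the "take the one-year job or keep
# studying" question above. Every figure is a hypothetical assumption.

cert_raise = 8_000    # assumed annual raise the certification would unlock
horizon_years = 5     # career horizon over which we credit that raise
delay_years = 1       # the side job postpones certification by one year

# Certifying on time collects the raise for the full horizon; delaying
# collects it for one year fewer, so the delay costs one year of raise.
earnings_on_time = cert_raise * horizon_years
earnings_delayed = cert_raise * (horizon_years - delay_years)
break_even = earnings_on_time - earnings_delayed

print(f"Under these assumptions, the part-time job must pay at least "
      f"${break_even:,} extra over the year to offset the delay.")
```

The point of the exercise is the structure, not the numbers: whatever raise you believe the certification unlocks, deferring it by a year costs roughly one year of that raise, and the side job's pay has to clear that bar.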
Have threat-centric security questions and don’t know where to turn? Wish you could engage with Cisco Security experts and your peers? Good news! … (drumroll please)…. introducing the Cisco Security Community!
The Cisco Security Community is expressly designed to connect you with Cisco Security experts and your peers for all your security questions. Further, the Community is focused on helping you discover what’s new in threat-centric security alongside other leading security professionals. Plus, you can browse the latest videos, product information, on-demand webinars, and blog posts in a single location! There are subsections that let you subscribe to just the content you want to see – Cisco products and services, and security-discipline-focused “sub-communities,” are just a few of the options. Cisco Communities are set up to let you personalize your experience.
Take a moment to cruise around and get to know your new community better, bookmark the site, turn on those RSS feeds, and start engaging!
We look forward to working with you to build a great community of members, from experts to newer practitioners, with high-quality content (including some community-only exclusives). Make sure to connect with me in the community and message me with any questions you may have!
If you are like the many IT managers we talk to every day, you prefer to have options whenever you tackle a project or formulate your IT strategy. Perhaps you do not like the idea of feeling limited, constrained, or unable to leverage a viable contingency plan. Architecting your cloud strategy should be no exception … and Cisco Intercloud Fabric can help!
So what does Cisco Intercloud Fabric do?
No time to read? This short video will provide you with an overview of the solution and perhaps entertain you for a couple of minutes. And if you are at VMworld this week, you can stop by at the Cisco booth to learn more about Cisco Intercloud Fabric.
In essence, Cisco Intercloud Fabric provides open and highly secure portability of workloads (aka applications) among heterogeneous cloud environments, with consistent network and security policies. You can move your workloads from your traditional IT environment or your private cloud to a public cloud provider of your choice. We have discussed in the past how hybrid cloud is becoming the ‘new normal’. Cisco Intercloud Fabric lets you deploy a hybrid cloud that operates as one unified environment—straddling your data center boundaries—with you in control.
And what are the benefits?
Choice -- Can you really put in place a sound strategy if you do not have options, if you do not have choice? Are you limited in your choice of hypervisors, public cloud providers, or IT infrastructure? How easy would it be to change cloud providers if you wanted to in the future? Cisco Intercloud Fabric gives you the freedom to place workloads across clouds and across heterogeneous environments … ‘any’ network … ‘any’ hardware platform … with multi-hypervisor support … from VMware vSphere to Microsoft Azure … and back!
Consistency -- Can you seamlessly extend your private cloud environment to the public cloud? What about your network and security policies? How will they change? Cisco Intercloud Fabric will make your life easier in this regard. You will be able to get consistent network and security policies across your data and applications, wherever they reside, which will let you accelerate deployment of your applications to the cloud.
Control -- Managing multiple cloud frameworks is challenging! More important, it is about selecting the best cloud for your specific application and data. Cisco Intercloud Fabric gives you unified workload management across clouds … you are back in control!
Cisco Intercloud Fabric is a powerful enabler of that transition. You, like most IT decision makers, want to retain control over your hybrid cloud environment, and you may need the ability to repatriate your workloads back to your data centers. Avoid a ‘one-way’ trip to the public cloud … retain choice, consistency, and control without compromising your compliance requirements with Cisco Intercloud Fabric!
Do you want to see a demo?
Well … if you are going to be at VMworld in San Francisco this week, you can stop by the Cisco booth (#1217). You will be able to see how you can unleash your hybrid cloud with Intercloud Fabric. You can also attend one of our sessions on Tuesday to learn more about this solution and its associated use cases.
As a Cloud Architect, I’ve had the privilege to work with CTOs and CIOs across the globe to uncover the key factors driving Business Continuity and Workload Mobility across their cloud infrastructures. We’ve worked with enterprises, large and small, and service providers to answer their top five concerns in our new Business Continuity and Workload Mobility solution for the Private Cloud.
1) Can you provide business continuity, workload mobility, and disaster recovery for my unique mix of applications, with lower infrastructure costs and less complexity for my operations teams? Yes.
2) Can you provide a multi-site design that reduces business outages and costly downtime, allowing my critical applications to be more secure and available? Yes.
3) Can my operations teams perform live migrations of applications across sites while maintaining user connections, security, and stateful services? Yes.
4) Does your multi-site solution allow me to utilize idle standby capacity during “normal” operations, and reclaim that capacity as needed during an outage event? Yes.
5) Can your Cisco Validated Design greatly reduce my deployment risks and simplify my design process, saving my business significant time, money, and resources? Yes.
A Proven Multi-site Design, Built on the Most Widely Deployed Cloud Infrastructure
We addressed each of these pain points as we designed, built, and validated our new multi-site business continuity and workload mobility solution. Our multi-site solution is built upon Cisco’s cloud foundation, the Virtual Multi-service Data Center (VMDC) that’s been deployed at hundreds of the world’s top enterprises and service providers. In our latest VMDC release, we’ve extended our cloud design to support multi-site topologies and critical use cases for private cloud customers. This validated design simply connects regional and long-distance data centers within your private cloud to address some critical IT functions, including:
application business continuity across data center sites;
stateful workload mobility across data center sites, while maintaining user connections and security;
application disaster recovery and avoidance across data center sites; and
application geo-clustering and load balancing across data center sites.
Choose the Cloud Infrastructure that Fits Your Unique Business Needs
The VMDC Business Continuity and Workload Mobility solution (CVD Design Guide) is grounded in the reality of today’s cloud environment, providing different design choices that match your applications’ needs. We realize there is no “one size fits all” cloud design; that’s why we support both physical and virtual resources, multiple hypervisor and storage choices, and security-compliant designs with industry certifications like FISMA, PCI, and HIPAA.