Cisco Blogs

Digital Transformation and “Cloud Ready”

- October 5, 2017

We hear a lot about digital transformation, hybrid cloud, and “lift and shift (to cloud)” from various marketing sources. Let’s examine that.

About two years ago I blogged and presented “Is Your Network Cloud Ready?” (See also the presentation, which covers more of the basics.) See also some of my more recent blogs.

This blog updates that content from a more business-centric digital transformation perspective. This reflects some experiences and thoughts spanning between network/datacenter topics and the larger business strategic picture.

The marketing of digital transformation, hybrid cloud, and lift and shift lately has started sounding a bit like “it’s cool stuff, why aren’t you doing it?” Or business consultants saying, in effect, “most others are doing it, why aren’t you?”

You likely see past that. This really should be about the end game: having clarity about what you are hoping to accomplish. What are you really trying to do?

Why Lift and Shift?

Lift and shift is mostly about operational costs or service levels, or should be. If you are looking for a better-managed, more secure, more available datacenter, then moving server instances to a CoLo or the cloud might well work for you, especially if your datacenter is rather small. Providers should have economies of scale, with lower costs but also better processes and skills. One would hope they can do things better and at lower cost than you can yourself, unless your organization is fairly large. I say “hope” because back in the ASP era, I saw a lot of expertise touted by companies with little to no prior experience or processes to speak of.

Lift and shift is not a no-brainer. Costs matter: cloud or a super CoLo might look great, but be beyond your budget. There are also other factors to consider.

The cloud might provide sites with higher-speed connectivity. Some CoLo facilities tout their inexpensive high-speed connections to the Internet, to other CoLo facilities, to other companies, and to cloud sites. Creating a cloud- or CoLo-based datacenter/WAN might be relevant; see my recent blog about that. That’s part of the Equinix story, for example.

You should know and have in mind some of the technical things to be careful about: I’ve written before (and recently!) about cloud latency surprises. Adequate security is of course the other big consideration.

All that’s fine, but that’s about costs/quality. You might end up running your legacy app from the cloud, but it is still the same old thing. It’s likely not web-scale or highly available.

Note that VMware “active/active” refers to datacenters, not applications – your application is generally active in one datacenter, not both, at the same time. Internet-scale applications are active out of multiple sites simultaneously. That requires some real thought, coding, and work on the database backend, including the consistency model.
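To make the consistency-model point concrete, here’s a toy sketch (the record fields and timestamps are invented for illustration): two sites each accept a write to the same record, and a naive last-write-wins merge silently loses one site’s update. This is exactly the kind of backend work a real multi-site-active application has to get right.

```python
# Illustrative only: why active/active applications need a real consistency
# model. Two sites accept writes to the same record; naive last-write-wins
# merge silently drops one site's update.

def last_write_wins(replica_a, replica_b):
    """Merge two replicas of a record by keeping the newer timestamp."""
    return replica_a if replica_a["ts"] >= replica_b["ts"] else replica_b

# Both sites start from balance 100 and apply a different deposit.
site_a = {"balance": 100 + 50, "ts": 1001}   # +50 deposit applied at site A
site_b = {"balance": 100 + 25, "ts": 1002}   # +25 deposit applied at site B

merged = last_write_wins(site_a, site_b)
print(merged["balance"])  # 125 -- site A's +50 deposit was lost; should be 175
```

A proper design would merge the operations (deposits) rather than the states, or route conflicting writes through a single system of record.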

Consequently, pure lift and shift of apps leaves you with apps not particularly well-written to run from the cloud. In short, it’s not digitally transformed. It’s taking small server-centric steps when much bigger, faster progress may be needed. Tactical, not strategic. That may or may not be what you’re looking for.

Hybrid Cloud

Cisco and others are pushing tools to manage hybrid cloud.

My question: is your hybrid cloud part of a considered strategy, or is it what you get when various DevOps teams run at full speed, fail to plan ahead, or fail to coordinate with other teams?

I suspect the latter — been there, seen that. Out-of-control costs or latency challenges might be the result. This is perhaps another case of “fast/cheap/good: you get at most two.” I’ve seen from the sidelines a couple of instances of DevOps teams not looking ahead and charging right into a wall, so to speak. In this particular case, it might be a wall (cloud) with lock-in, e.g. data gravity or cloud-specific tools that prevent you from moving or consolidating. Cisco CloudCenter might be something to consider in this context (capturing app flow essentials, portable instantiation).

Suggestion: “let’s be agile and fast, but let’s not charge blindly off as fast as we can; let’s do some planning as well.”

Digital Transformation

Digital transformation means changing business processes and activities to leverage new technologies. That is rather broad, and encompasses a multitude of things such as increased network usage, vastly more data, higher degrees of data-driven automation, external connections, Internet of Things, and others. It also should be strategic, whereas lift and shift and hybrid cloud are more tactical or operational.

There are waves of change, overlapping, coming faster and faster. We need to prioritize around the business becoming faster, more competitive, driving out cost, etc., while anticipating how the technology changes will impact the business. Which data needs to be accessible, by which apps? Which front-ends/apps need web-scale capabilities? That’s digital transformation according to Pete.

Getting to Cloud Ready and Accelerating Digital Transformation

A couple of my recent blogs discussed the New WAN Model and the Network of the Future.

Because of that, I’m going to skip over network, security, office spaces, and datacenter technologies here, to focus on items closer to the business side of things. Namely, applications and data. If you’re a network or security or datacenter person, you need to be aware of this in your designs and implementation.

Concerning data, organizations are discovering that it has to be actively managed. That starts with sharing the data — not having private islands of data, and not having multiple copies of very similar data. Then there’s data gravity — the costs of moving data around, cloud vendor lock-in, and so on. You pay a lot to export data from most clouds, or even to externally access that data, let alone move it elsewhere. Having multiple copies of data adds cost. Moving large amounts of data takes time and has costs. To me, this is a hidden cost of hybrid cloud: thought needs to be given to where the apps are in relation to the data. Apps that access a lot of data likely need to be “near” the data, for both latency and cloud external data transfer cost reasons. How to architect things is not — and should not be — a purely coding or technical decision!
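A quick back-of-the-envelope calculation shows why data gravity bites. The egress rate and link speed below are made-up assumptions — substitute your provider’s actual pricing and your actual bandwidth:

```python
# Back-of-the-envelope sketch of "data gravity": the time and cost to move a
# dataset out of a cloud. The $/GB egress rate and link speed are assumptions
# for illustration, not any provider's real pricing.

def transfer_estimate(dataset_gb, link_gbps, egress_per_gb):
    """Return (hours, dollars) to move dataset_gb out over link_gbps."""
    seconds = dataset_gb * 8 / link_gbps   # GB -> gigabits, divide by Gbps
    cost = dataset_gb * egress_per_gb
    return seconds / 3600, cost

hours, dollars = transfer_estimate(dataset_gb=50_000,   # 50 TB
                                   link_gbps=1.0,       # assumed link
                                   egress_per_gb=0.09)  # assumed rate
print(f"{hours:.1f} hours, ${dollars:,.0f}")  # roughly 111 hours, $4,500
```

Nearly five days of transfer and a four-figure bill for a 50 TB dataset — which is why app placement relative to the data matters.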

IoT is going to generate a lot more data. How are you going to manage storage, backup, archival, and access to all of your digital assets? OK, that’s perhaps some of what “data lakes” are about: raw data, not cleaned and structured data. That name makes me wonder how the data warehouses got all wet, melted, and lost structure. Humor intended, but perhaps also a somewhat appropriate way to think about it.

There’s also security for data. What we’re trying to secure is … the sensitive data.

Our present network security is rather focused on controlling and monitoring access to the servers and apps that gateway access to the data, at least where the data is in a database/SAN and not directly network accessible. Providing secure applications or micro-services as the only way to access the data helps with security, allows for ID- and role-based access controls, and allows for an API to make data access more uniform. Said differently, standardized and uniform access methods are easier to secure.
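As a minimal sketch of that idea — one uniform, auditable gate in front of the data, with identity- and role-based controls — consider the following. The roles, fields, and records are invented for illustration:

```python
# Sketch: a single data-access function that enforces role-based field
# filtering, so every application path to the data goes through one uniform
# gate. Roles, fields, and records here are hypothetical examples.

RECORDS = {"acct-42": {"owner": "alice", "ssn": "xxx-xx-1234"}}

ROLE_FIELDS = {
    "support": {"owner"},           # support staff never see the SSN
    "auditor": {"owner", "ssn"},    # auditors see everything
}

def read_record(record_id, role):
    """Return only the fields this role is allowed to see."""
    allowed = ROLE_FIELDS.get(role)
    if allowed is None:
        raise PermissionError(f"unknown role: {role}")
    record = RECORDS[record_id]
    return {k: v for k, v in record.items() if k in allowed}

print(read_record("acct-42", "support"))  # {'owner': 'alice'} -- no SSN
```

The point isn’t this particular code; it’s that when access is funneled through one API, the security and audit story gets much simpler than N apps each rolling their own.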

Yet how does security verify that all applications, let alone all micro-services, are properly coded with proper authentication and other security measures? (Which sort of takes us back to why we have firewalls in the first place: we know our apps aren’t all properly and consistently secured.)

For what it’s worth, Google has said security is about applications, not firewalls. What’s your dev team’s perspective? Baked-in security, or code first and secure later?

It’s About the Applications

One key item moving forward, under any name, is the application inventory and strategy. We’re seeing this in increasing numbers of organizations. Does your organization have an application inventory and strategy?

This is where network and security people come in. The business side of the organization may be able to provide the top-down list of applications and services, and business priorities. The technical side can (maybe, with effort) provide the bottom-up details: application flows, flow mapping, “app pods” (groupings of servers with heavy flows between them), etc. — what’s under the hood.
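The bottom-up “app pod” mapping can be sketched simply: group servers whose mutual traffic exceeds a threshold, using union-find over observed flows. The server names and flow volumes below are invented; in practice they would come from flow telemetry such as NetFlow/IPFIX:

```python
# Sketch of bottom-up "app pod" discovery: cluster servers connected by
# heavy flows using union-find. Names and Mbps figures are hypothetical.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path compression
        x = parent[x]
    return x

def app_pods(flows, threshold_mbps):
    """flows: list of (server_a, server_b, avg_mbps). Returns list of pods."""
    servers = {s for a, b, _ in flows for s in (a, b)}
    parent = {s: s for s in servers}
    for a, b, mbps in flows:
        if mbps >= threshold_mbps:                 # only heavy flows bind a pod
            parent[find(parent, a)] = find(parent, b)
    pods = {}
    for s in servers:
        pods.setdefault(find(parent, s), set()).add(s)
    return list(pods.values())

flows = [("web1", "app1", 400), ("app1", "db1", 900),
         ("web2", "app2", 350), ("app2", "db1", 5)]   # app2->db1 is negligible
print(app_pods(flows, threshold_mbps=100))
# two pods: {web1, app1, db1} and {web2, app2}
```

Pods like these tell you which servers should migrate together — splitting a pod across a datacenter and a cloud is how you buy latency and egress-cost surprises.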

If you have a fairly complete list of applications, then the strategic thinking can begin. There are several “buckets” you might lump applications into:

  • Which ones are ripe for offloading as SaaS?
  • Which are limited-use in-house apps, maybe single-server apps, self-contained and not using any exotic or vendor-specific technologies, where migration to a VM, CoLo-based VM, or cloud instance should be fairly straightforward? (Many organizations have done the virtualization part of this, but maybe not the cloud side of the analysis.)
  • Which are important to the business, and would strongly benefit from a web-scale/micro-services rewrite?
  • The others, not worth trying to change. What’s your strategy for fixing or retiring them?
  • What data do we need to open up controlled access to?

I view heavy virtualization as the precursor here. If you haven’t moved from physical to virtual machines, in a controlled internal environment, then is cloud really a good first step?

Re web-scale, I consider scale-out-ready applications to be a key aspect. Scale-out, as in anycast or Layer 3 Global Server Load Balancing. I’ve been reading a couple of books about scalability, and it’s about people and process as well as technology.

Have you been doing your homework reading? (If you have good books to recommend, please post a comment!)
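The anycast/GSLB scale-out idea above boils down to a steering decision: send each client to the best healthy site. Here’s a toy sketch of that decision; the site names, latencies, and health states are invented for illustration:

```python
# Toy sketch of L3 GSLB-style steering: pick the lowest-latency site that
# passes health checks. Sites, latencies, and health states are hypothetical.

def pick_site(sites):
    """sites: list of (name, latency_ms, healthy). Return best site's name."""
    healthy = [(latency, name) for name, latency, ok in sites if ok]
    if not healthy:
        raise RuntimeError("no healthy sites")
    return min(healthy)[1]   # lowest latency among healthy sites

sites = [("us-east", 20, False),   # lowest latency, but failing health checks
         ("us-west", 70, True),
         ("eu-west", 95, True)]
print(pick_site(sites))  # us-west
```

A real GSLB also weighs capacity, geography, and persistence — but the application has to be written so that any site can serve any client, which is the hard part.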

There’s also some degree of scalable access to data, perhaps a migration plan to get to having well thought-out micro-services. Clear thinking about static data, eventually consistent data, and single systems of record matters.

Security also needs to be factored in. If an application is being heavily changed, where are you more likely to spot security issues like leaking sensitive data — in your physical datacenter, or in the cloud? Who is responsible for sanitizing sensitive data for DevOps coding using the cloud? Etc.

What needs to happen to make all this work?

  • Some detailed research into the application and data inventories
  • Some creative thinking about how the data and the functionality might be used in the future
  • Some prioritization
  • Some planning around evolving key applications, as a migration or a rewrite might take years (I’ve included this last item since I’ve seen some unrealistic thinking about timelines for app re-writes)

There are a couple of conclusions I draw from all this:

  • The right degree of cross-function conversation and skills is needed for application transformation to succeed.
  • Best results may require balance between DevOps speed and some big picture planning and coordination. If you don’t look ahead, you won’t see the pitfalls until you encounter them painfully. But avoid analysis paralysis.
  • Understanding flows, latency, and throughput — and translating that into application performance — is essential. The time and costs for data transfers/access matter.

