
As government agencies begin deploying cloud solutions and strategizing to meet cloud IT modernization mandates, a question arises: what will an agency look like once it updates its systems and starts implementing cloud solutions? And, perhaps most important, will its cybersecurity protocols hold up?

There are many cybersecurity solutions available for organizations and companies moving to a digital, cloud-based architecture, but most of them aren’t suitable for a government environment. Government agencies have unique needs and requirements, and because of the sensitive nature of their work, they operate under a more stringent set of rules. The agency CIO is therefore going to look at IT and cybersecurity strategy through a different, and possibly more conservative, lens – making it a constant battle between staying in compliance with federal mandates and staying competitive in digital transformation.

That’s why the federal government put into place the Federal Risk and Authorization Management Program (FedRAMP). This is a government-wide program that “provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services,” according to the U.S. General Services Administration (GSA) website. The program determines which cloud solutions are viable for government agencies from a security perspective, helping agencies keep sensitive and confidential information secure while still taking advantage of the latest cloud solutions.

So, why is the government going through all this trouble to approve cloud solutions for its agencies? According to the FedRAMP website, “cloud solutions allow for faster processing and more elasticity in computing in an on-demand, more efficient platform.” In other words, cloud solutions will enable the government to function more seamlessly, and to access people, resources, and information like never before – all on a secure and efficient network. Programs like FedRAMP save agencies an estimated 30-40 percent in costs, as well as the time and staff needed to perform typical security assessments. Plus, FedRAMP is a collective effort among major government agencies, including the General Services Administration (GSA), National Institute of Standards and Technology (NIST), Department of Homeland Security (DHS), Department of Defense (DOD), National Security Agency (NSA), Office of Management and Budget (OMB), and the Federal Chief Information Officer (CIO) Council. But while they are working in concert, can the government move fast enough to get these cloud solutions approved?

Part of the problem is the approval process itself, which can take months and cost real money, deterring many vendors from doing business with the federal government. So FedRAMP is now offering FedRAMP Tailored, a faster approval process for cloud service providers with low-impact software-as-a-service offerings. This program cuts the number of security controls required in the approval process from 125 to 36 – lowering the up-front costs and opening up more possibilities for vendors who currently do business with individual agencies.

As FedRAMP continues to evolve and more vendors become certified, government officials will have a greater opportunity to keep up with the latest digital trends while keeping cybersecurity core to their IT strategy – giving agencies peace of mind that information is safe, employees the tools to do their jobs better and more easily, and the public the confidence that they will experience the best service and protection possible.

October is Cyber Security Awareness Month, and Cisco is a Champion Sponsor of this annual campaign to help people recognize the importance of cybersecurity. For the latest resources and events, visit cisco.com/go/cybersecuritymonth.

Authors

Larry Payne

Vice President, Sales

US Public Sector


The Cisco Spark Innovation Fund is a $150 million initiative to enable exceptional startups, developers, and ISVs to create new solutions to improve the Cisco Spark experience. We distribute funding in a variety of ways, including:

  • Direct investments
  • Partnerships with startup accelerators
  • Micro-grants to accelerate adoption of products that integrate with Cisco Spark
  • Funding events and activities where we showcase products within the Cisco Spark ecosystem

Many of our 100+ investments to date are with startups and independent software developers that believe in the power of collaboration. These companies aren’t building your average communications solutions. A few examples:

  • Transforming student-to-teacher interaction with Involvio to improve student retention and bolster engagement
  • Improving customer experiences with ingenious integrations from Altocloud
  • Providing unique customer insights with brilliant bots such as LocalMeasure

Find these and hundreds more in the Cisco Spark Depot, the hub for all Cisco Spark integrations and bots.

More than Dollars

There’s more to the Fund than money. We’re working to combine Cisco Spark more closely with best-in-class tools from other vendors. Whether it’s technical and business mentoring, co-marketing with startups, or connecting our developers with customers, the Innovation Fund helps companies build their businesses.

In some cases, we’re bundling some of these products with Cisco Spark and reselling them through Cisco. Our sales team is already selling products from four of our portfolio companies — and four more will be on our resale list by the end of this year. This level of cooperation enhances the overall effectiveness of Cisco Spark while helping boost awareness of our pioneering partners.

Connecting with Developers

Part of transforming the way we work together means working more closely with the developer community. We’re meeting people in person, attending leading industry events and hackathons, including TechCrunch Disrupt, SXSW, and TADHack. This gives developers easy hands-on access to our products (such as Cisco Spark Board), and lets them experiment with the powerful Cisco Spark APIs. By bringing products from throughout our developer ecosystem to these events, we’re helping developers reach markets they might not otherwise be able to access.
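To give a concrete flavor of that hands-on experimentation, here is a minimal sketch of calling the Cisco Spark REST API to post a message to a room. The access token and room ID are placeholders you would obtain from the Spark developer portal, and error handling is kept to a bare minimum:

```python
# A minimal sketch of driving the Cisco Spark REST API: post a message to
# a room. The access token and room ID below are placeholders you would
# get from the Spark developer portal and your own rooms.
import requests

SPARK_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder
ROOM_ID = "YOUR_ROOM_ID"           # placeholder

def post_message(text):
    """Post a plain-text message to a Spark room and return the API response."""
    resp = requests.post(
        "https://api.ciscospark.com/v1/messages",
        headers={"Authorization": "Bearer " + SPARK_TOKEN},
        json={"roomId": ROOM_ID, "text": text},
    )
    resp.raise_for_status()
    return resp.json()  # includes the new message's id and metadata

post_message("Hello from a Spark Depot integration!")
```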

Whether you are a one-person shop, a small start-up, or a large company, our goal is to provide the support and dev tools you need to add collaboration capabilities to your services. By teaming up with developers like you, we’re creating new ways to work smarter together.

Want to get involved? Contact us to apply to be part of the Cisco Spark Innovation Fund.

 

Authors

Jason Goecke

Vice President & General Manager

Cognitive Collaboration & Cisco Spark Platform


Digitization, enabled by new and emerging technologies, has the power to change education just as it has disrupted other industries: travel and transportation, retail, healthcare, and financial services all offer consumers completely different capabilities and experiences today than in the past. We find ourselves solidly at the center of the service and information-sharing movement.

Digitization enables all this. Content, workflow processes, reservations, customer support, and a multitude of other activities are possible today because of digitization. We explored digitization in detail in our new book published in partnership with the Public Technology Institute, The Digital Journey in K-12: Overcoming Roadblocks and Embracing Innovation.

We saw that education is not immune. Our students (consumers of learning) are demanding new ways of learning, in new places, with new ways to engage.

Students no longer want to be bound to a classroom or set schedule. They want to learn anytime, anywhere, and on any device. They want to consume information and knowledge in their own way. And, they are demanding immersive, engaging learning experiences. They want to hear from outside experts, connect with students and subject matter experts from beyond the traditional walls of the classroom, and obtain tutoring and support regardless of the time of day.

One of my favorite quotes, from James Hurst – “Pride is a wonderful, terrible thing, a seed that bears two vines, life and death” – reminds me of technology, which is truly a wonderful, terrible thing. The opportunities it provides to us, including our students, are wonderful. Privacy, the protection of students and student information, and the risk of cybersecurity attacks are the terrible side, and they must be top of mind.

An additional consideration is how schools will find the funding to pay for these new technologies, which, when properly applied, will dramatically improve engagement, the student experience, and student learning outcomes. The Cisco Digital Education Platform sets the stage for your digital future; at its foundation are a solid core network, wired and wireless technology, and security everywhere, many of them fundable through the E-rate program. Learn more about how Cisco enables you to take a platform approach and supports E-rate.

Authors

Renee Patton

No Longer at Cisco


Remote PHY, the industry’s best-bet technology for boosting performance, increasing capacity, reducing costs, and simplifying network operations, is the subject of a must-see seminar at this year’s SCTE-ISBE Cable-Tec Expo.

Featuring Cisco Fellow and Cable CTO John Chapman, who played a central role in the creation and ongoing development of the Remote PHY DOCSIS standard, the seminar will include an in-depth review of the Remote PHY architecture. Industry trailblazers like Comcast, Liberty Global, Midcontinent Communications, and others will also share their roadmaps for implementing Remote PHY and their timelines for deployment.

The Remote PHY Seminar is scheduled for Tuesday, October 17th and runs from 7:30 AM until Noon in Rooms 401/402 of the Colorado Convention Center.

The session will kick off with a discussion of Remote PHY technology by some of the most interesting technologists in the industry, including John Chapman, Cisco Fellow and Cable CTO; Jorge Salinger, vice president, access architecture for Comcast; John Pederson, CTO of Midco; and Phil Oakley, director, access platform engineering for Liberty Global. Their discussion will address topics such as the business drivers for Remote PHY adoption, implementing the Remote PHY architecture, and how Remote PHY and Full Duplex DOCSIS together can transform your business.

See Cisco Technology and Solutions at Booth #987
When the 2017 SCTE-ISBE Cable-Tec Expo opens on Wednesday, October 18th, make sure you stop by and see us at booth #987. We think you’ll find a visit to see our step-by-step evolution path across infrastructure, virtualization, management, and automation well worth your time.

For these technology areas we will show you how:

  • Next-generation converged cable access platforms (CCAP), DOCSIS 3.1, and Remote PHY can be used to deliver Gigabit service tiers and drive down operating costs
  • Virtualized, Cloud Native cable modem termination systems (CMTS) and other functions can enable you to elastically scale and reposition resources to meet changing demand
  • Management and automation tools can be used to monitor network health proactively and automate end-to-end provisioning
  • Revolutionary cable technologies of the future, including Full Duplex DOCSIS 3.1 and mobile backhaul enablement, can be integrated into your business

Live Demonstrations
We have a great lineup of demos this year. Be sure to stop by and see:

  • RPHY Node – See the new GS7000 RPHY Optical Node (iNode) Platform in action. Part of the Cisco Infinite Broadband solution, the iNode can help you to reduce TCO, as well as simplify operations and deployments.
  • OpenRPD – Using RPHY, we’ll show you our OpenRPD interoperability with RPDs from multiple vendors.
  • RPHY Compact Shelf – See how the industry’s first standards-based RPHY Compact Shelf provides CCAP and DOCSIS 3.1 capabilities in small hubs, enabling hub site consolidation and reducing TCO.
  • SP Automation – Using our recently launched Smart PHY Automation application, we’ll demonstrate the automated provisioning of an RPD and the cBR8. The Smart PHY Automation application lets you significantly reduce the operational expenses and complexities of an RPHY deployment, including reducing the field staff training needed to support RPHY and improving time to service enablement.
  • DOCSIS FDX – An industry first, we’ll have a live demonstration of Full Duplex DOCSIS 3.1 using RPHY. Using a working HFC network, we’ll demonstrate new capabilities that make FDX DOCSIS 3.1 function in a network with multiple cable modems, including demonstrating backwards compatibility with DOCSIS 3.0 cable modems.
  • Cloud CMTS – See the industry’s first demonstration of a Cloud Native CMTS. Merging the latest NFV developments with DOCSIS, we’ll show a virtualized DOCSIS control and data plane using RPHY to enable a scalable, elastic and distributed CMTS software architecture.
  • Video Aware Network – Learn how to build video capabilities into the network using programmable networking and network function virtualization to make the IP network more video aware and make video more network aware.
  • IVP & Analytics – See our Infinite Video Platform and Analytics demonstration and learn how Cisco can help you deliver a best in class video experience, while leveraging data and business insights to identify new opportunities.
  • Optical Solutions for MSO/Cable – We’ll show you how Cisco’s optical networking solutions can simplify operating and maintaining Cable/MSO networks. At the SCTE Cable-Tec Expo, we’ll demonstrate how to maximize existing DWDM systems with the Cisco NCS 2000; how to cost-effectively migrate aging TDM based platforms to IP based transport using High Density Circuit Emulation with the Cisco NCS 4200; and how transport networks can be automated and optimized using Cisco NCS 1000.
  • Transforming Cable to 5G – We’ll show you how you can transform your cable business to support 5G by using your existing DOCSIS infrastructure to densify mobile access with small cells and Citizens Broadband Radio Service (CBRS).

We look forward to seeing you at the 2017 SCTE-ISBE Cable-Tec Expo in Denver, Colorado. Have questions or comments? Tweet us @CiscoSP360.

Authors

Alison Izard

Marketing Manager


Daren Fulwell is a Cisco Champion, a member of an elite group of technical experts who are passionate about IT and enjoy sharing their knowledge, expertise, and thoughts across the social web and with Cisco. The program has been running for over four years and has earned two awards as an industry best practice. Learn more about the program at http://cs.co/ciscochampion.

==========================================

Recently I was faced with the question – new network technologies like SDN are supposed to make networking simpler, but do they really?  There’s only one answer to that in reality – “it depends”!  Let me expand …

As in most things, accepted good network design principles can be summed up as “KISS” (Keep it Simple, Stupid) – only introduce complexity into the design where it is necessary.  We build modular repeatable designs, we standardise where possible, and within our implementation we template and use naming conventions to quickly recognise what we are doing at any point.  But the network, out of necessity, is a reflection of the complexity of the business it serves: it acts as a map of potential data flows around an organisation and as we become more and more reliant on data to transact business, so those flows become many and varied.

The underlying network topology then needs to be built to efficiently facilitate those flows, and we need to ensure there is sufficient resilience in it to keep the data flowing during failure or attack.  The level to which we protect the infrastructure against these events is obviously dependent on the criticality of those data flows. Conceivably, certain parts of the network that carry traffic relating to a particularly critical application may be required to be more resilient than others.  In these days of IoT, application flows don’t simply mean PC-to-PC either.  Building infrastructure such as physical security or environmental controls, manufacturing process equipment such as machinery or control stations, or other capabilities such as inventory tracking, can all require connectivity – and depending on the business they support, some or all of these things may be fundamental to the ability to transact.  So we can see that with the proliferation of requirements for a ubiquitous network, the complexity of the underlying infrastructure inevitably increases.

SDN techniques are helping us battle with this complexity.  While people struggle to agree on a definition of SDN, most would agree that centralising the control plane using a controller of some sort, and introducing programmability of the environment through APIs, are fundamental.  These features enable automation and orchestration, and provide us with a means to abstract complexity away.  Alternative – simpler – configuration constructs are defined to represent the overall technical capabilities of the entire network, and configuration is applied box-by-box across the infrastructure by the controller as necessary to implement those capabilities.  These constructs can then be built into workflows and custom scripts, consumed through simple dialogs with the network operator, as sketched below.
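As a toy illustration of that abstraction (tied to no particular vendor's controller), here is a sketch in Python where one simple, network-wide construct is expanded into per-device configuration. The device names and CLI syntax are invented for the example:

```python
# A toy illustration of the abstraction: one simple construct, defined
# once, is expanded into box-by-box configuration. Device names and CLI
# syntax are invented and tied to no particular product.

def expand_vlan_service(vlan_id, name, devices):
    """Expand one high-level 'VLAN service' into per-device config snippets."""
    return {device: f"vlan {vlan_id}\n name {name}" for device in devices}

# One dialog-level request from the operator...
service = expand_vlan_service(110, "IOT-SENSORS", ["access-sw1", "access-sw2"])

# ...becomes repeatable, identical configuration wherever it is needed.
for device, config in service.items():
    print(f"--- {device} ---\n{config}")
```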

At this point, I would add another necessary feature to my SDN definition – the ability to define a centralised set of policies that express the desired behaviour of the network (“intent”).  Networks exist fundamentally to connect endpoints with data stores which can then be shared, and all the business really needs to care about is which endpoints are allowed to connect to the network, which endpoints are allowed to converse, and how the network prioritises and treats those flows of conversation.  This can be boiled down into “user policy” – authentication, authorisation and access rights for endpoints – and “application policy” – desired treatment of traffic flows associated with a particular application through the network, such as prioritisation, performance guarantees, preferred traffic path and so on.  Once those policies are defined, the network devices themselves are given a standardised configuration and their specific behaviour is modified by updating the central policy engine.  This, in turn, can pull data through from the customer’s Active Directory, making policy changes (and thus configuration of network behaviour) a very simple administrative task.
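To make “intent as policy” concrete, here is a hypothetical sketch of user policy and application policy expressed as data and handed to a controller over a REST API. The endpoint URL and payload schema are invented for illustration; every real controller defines its own:

```python
# A hypothetical sketch of "intent" as data: user policy and application
# policy defined once, centrally, and handed to a controller over a REST
# API. The endpoint URL and payload schema are invented for illustration.
import requests

intent = {
    "userPolicy": {
        "group": "contractors",          # who may connect...
        "authentication": "dot1x",       # ...and how they are identified
        "allowedSegments": ["guest", "internet"],
    },
    "applicationPolicy": {
        "application": "voice",
        "treatment": {"priority": "high", "maxLatencyMs": 150},
    },
}

resp = requests.post(
    "https://controller.example.com/api/v1/policy/intent",  # hypothetical endpoint
    json=intent,
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab-only; validate certificates in production
)
resp.raise_for_status()
print("Intent accepted; the controller now renders device configuration.")
```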

Does that make the network simpler?  Hmm, I’ve got a couple of thoughts here.

A network is the sum of a number of parts – there is always a need to connect users and endpoints (an access network); there is a need to connect those users to services off site (private or public WAN); and the services they consume have to be hosted somewhere (traditional DC or Cloud environment).  There is no viable single SDN solution that solves connectivity issues across all of those networks as they each have different configuration requirements and characteristics.  Each part of the network may have its own controller, and then these must be orchestrated either through a standard tool built for the purpose or through custom scripts and workflows – so adding another layer of abstraction in order to define the end-to-end behaviour at a single point of control.

And what happens when things go wrong?  Another feature usually expected of an SDN solution is visibility: the controller gives the operator an end-to-end view of how the environment is functioning and its behaviour, good or bad.  In theory, the controller can spot when things are not functioning as expected and take corrective action itself based on the parameters it has been set in the policies that it has been configured to implement.  The assumption here is that the network has been built and is operating under optimal conditions.

A typical complaint over recent years is that the large networking vendors use customers as beta testers for their software – that bugs are often found not in pre-release testing but in operation of production networks.  In an SDN environment, a new layer of software is introduced – not only the network devices themselves, but the controller now needs to be running perfect software that always functions correctly as designed in order to provide the level of availability that businesses require to transact.  In the real world, as we know, this is not likely, and so we need to understand the complexity this introduces to our operational state.  While the controller may take away the need to carry out complex configuration activities across the network, it doesn’t remove the need to understand how the network achieves its capabilities as we will always need to be able to troubleshoot the infrastructure.

With all that said and done, is network complexity necessarily a bad thing?  We’ve already seen that in order to achieve the end-to-end view it is necessary to do different things in different parts of the network – so it could be argued that in order to create a network that provides the foundation for a complex business, some level of complexity is necessary, even desirable.

So, to answer the question then – in my view simplicity is a matter of perspective.  In order to build a foundation for the IT systems for a complex business, we need to create complex connectivity patterns to allow devices to talk, with complex features at the edges to protect the systems from malicious intent and failure.  The operators and maintainers of that network need to be exposed to the full complexity in order to fully support and troubleshoot end-to-end should issues arise.  However – with a not-insignificant development effort, and more than half an eye on managing questionable software quality – the interface to the wider business can be drastically simplified through orchestration of SDN controllers to enable a single set of policies to determine network configuration end-to-end.  Automated change, self-healing and configuration that reacts to network events are all completely feasible in today’s networks.

So, is the network simpler?  No.  And yes.  It depends!

Authors

Daren Fulwell

Technical Architect

Cisco Champion


There are many, many challenges we all face throughout life. It’s important to note though, we all have the ability to make someone’s day a little easier, a little brighter – and leave a positive impact – simply through our actions. At Cisco, this is something we are empowered to exemplify as much as possible.

This year, I participated in China’s Egg Walkathon. The Egg Walkathon is a traditional fundraising event organized by the Shanghai United Foundation every year, where volunteers walk 50 km (30 miles) within 12 hours. The original concept of this fundraiser was to provide an egg a day for underprivileged children in rural China, but today the goal has grown and the efforts now support all kinds of services to children – even education.

As a volunteer for the Cisco Egg Walkathon team this year, I was moved to join my fellow Cisco co-workers and walk the 50km to honor and serve those in need throughout this journey.

Along the way, I was lucky to experience so many touching moments that inspired me to keep going no matter what. Here are just three of those moments:

  1. The Cisco Speed: 50km is absolutely a challenge for every single human body, especially on such a hot day – 31°C (88°F). But our members from the CRDC (China Research and Development Center) Running Club ran the entire way, reaching the finish line in five to six hours!

That fantastic pace is on par with professional runners. As I look back on the day, I remember this amazing achievement by all of our runners, and the hard work they put not only into their daily goals at the office, but also into reaching this level of athleticism.

  2. The Cisco Determination: Our CRDC site leader Nan Chang and most of our team members walked the entire way, which takes a good deal of determination – it took us seven to eight hours! By encouraging and supporting each other, we got through this challenge together. Some of our teammates were nursing bad foot aches and pains after walking such a long distance, but they wouldn’t seek medical support until their feet crossed the finish line.

I was truly empowered watching them persevere, especially Engineer Du Peng, who spent almost 12 hours completing the challenge on wounded legs! Together with her wife, they showed me what Cisco’s spirit is truly all about. When I asked them if it was worth the pain they endured, they said, “Of course! It’s all for the kids who are looking forward to a meal!”

  3. Be You, With Us – We Are Cisco: Last but not least, I must mention the amazing teamwork we experienced during this day of giving back. The Egg Walkathon is not just a charity activity; it is also a big celebration within the community. Best of all, every Cisco employee was able to show, in our own unique ways and on our own time, that when we come together, we can achieve great things!

Nothing can stop our path to success, especially when we are all working together.

Will I be back to walk another 50km next year? I’m already counting down the days to the 2018 Egg Walkathon! 😀

Are you looking for a company that is just as passionate about giving back as you are? We’re hiring!

Authors

Wega Li

Component Engineer

Supply Chain T&Q


This post was authored by Paul Rascagneres.

Introduction

In the CCleaner 64-bit stage 2 previously described in our blog, we explained that the attacker modified a legitimate executable that is part of “Symantec Endpoint”. This file is named EFACli64.dll. The modification was performed in the runtime code included by the compiler, more precisely in the __security_init_cookie() function. The attacker modified the last instruction to jump to the malicious code. The well-known IDA Pro disassembler has trouble displaying the modification, as we will show later in this post. Finally, we will present a way to identify this kind of modification, along with the limitations of this approach.
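As a rough illustration of how one might hunt for this class of tampering (a toy heuristic of my own, not the method Talos presents), the sketch below uses the Capstone disassembler to flag a function whose final instruction is a jump landing outside the function's own byte range, where compiler-generated runtime code would normally end in a ret:

```python
# A toy heuristic for spotting a patched epilogue: compiler-generated
# runtime functions such as __security_init_cookie() normally end in a
# ret, so a final jmp that lands outside the function's own byte range
# is worth flagging. This is an illustration, not Talos's tooling.
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

def ends_in_suspicious_jmp(func_bytes, func_start):
    """Return True if the function's last instruction is a jmp leaving it."""
    md = Cs(CS_ARCH_X86, CS_MODE_64)
    insns = list(md.disasm(func_bytes, func_start))
    if not insns:
        return False  # nothing decoded; handle separately in real tooling
    last = insns[-1]
    if last.mnemonic != "jmp":
        return False
    func_end = func_start + len(func_bytes)
    try:
        target = int(last.op_str, 16)  # direct jumps disassemble as "0x..."
    except ValueError:
        return True  # an indirect jmp ending the function is also unusual
    return not (func_start <= target < func_end)

# Example: nop; jmp to 0x140002001, outside a 6-byte function at 0x140001000
patched = bytes.fromhex("90e9fb0f0000")
print(ends_in_suspicious_jmp(patched, 0x140001000))  # True
```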

Read More >>

Authors

Talos Group

Talos Security Intelligence & Research Group


Welcome back! In Part 1: Embrace NetDevOps, Say Goodbye to a “Culture of Fear”, I introduced my definition of NetDevOps and talked about how we need to dispel the “Culture of Fear” as we move to NetDevOps. We also considered the two stakeholders of NetDevOps, the builders and consumers of the network. In this post I’ll be picking up where I left off, discussing the core principles of NetDevOps. Let’s dive in!

The NetDevOps Pipeline

A “pipeline” simply defines the process by which an activity is completed. The concept of a “software delivery pipeline” is well understood in IT today, but network configurations also follow a pipeline. Today’s network configuration pipeline is a complex maze of forks, bends, offshoots, dead ends, and paths that require special timing, keys, and phases of the moon. The current network configuration pipeline needs to go, and be completely replaced in NetDevOps.

It is in this aspect of NetDevOps where Infrastructure as Code is relevant, and it must be driven by DevOps principles of automation, testing, and verification.  In NetDevOps, it is standard to have a “Continuous Development” approach to network changes.  Proposed network changes are picked up by build servers which manage the progression from “Development” to “Test” and into “Production”.  NetDevOps will mirror what is becoming commonplace in software development teams.
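As a minimal sketch of what one automated gate in such a pipeline might look like (the rules and sample configuration below are invented examples, not a real Cisco tool), consider a build server check that vets a proposed configuration before promoting it from Development toward Test:

```python
# A minimal sketch of one automated gate in a NetDevOps pipeline: before a
# proposed change is promoted from "Development" to "Test", a build server
# runs checks against it. The rules and sample config here are invented
# examples; a real pipeline would encode your organization's standards.

FORBIDDEN_LINES = [
    "transport input telnet",          # cleartext management access
    "no service password-encryption",  # weakens credential storage
]
REQUIRED_LINES = ["ip ssh version 2"]

def validate_config(config_text):
    """Return a list of violations; an empty list lets the change advance."""
    violations = [f"forbidden line present: {line!r}"
                  for line in FORBIDDEN_LINES if line in config_text]
    violations += [f"missing required line: {line!r}"
                   for line in REQUIRED_LINES if line not in config_text]
    return violations

proposed = """\
hostname access-sw1
ip ssh version 2
line vty 0 4
 transport input ssh
"""

problems = validate_config(proposed)
if problems:
    raise SystemExit("Change rejected:\n" + "\n".join(problems))
print("Checks passed - promoting change to the Test environment.")
```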

The NetDevOps pipeline

There is great work being done in network automation and network automation tooling to help make a proper NetDevOps pipeline come to life. In fact, I am spending much of my own time within DevNet in this space, and you can look for new blog posts from me very soon!

Rethinking Network Monitoring for NetDevOps

Active monitoring of software performance and user experience is core to DevOps principles and culture. And NetDevOps needs to bring with it a strategy and technique for network monitoring. It isn’t that we aren’t monitoring the network today; however, for many networks it’s a haphazard combination of SNMP and syslog, used more as a forensic research tool than as an active part of the day-to-day strategy of gauging the health of the network.

The networking industry is already adopting and moving to new technologies and strategies for monitoring. “Streaming Telemetry” solutions that provide near real-time access to structured data based on standard data models are becoming quite common. Further, these new solutions fit into the same monitoring frameworks and systems being used by software DevOps teams.

However, replacing older protocols with newer ones isn’t a full solution.  The more critical question that we need to answer today is what to monitor.  What are the key performance indicators (KPIs) for the network?  A strategy of collecting everything available isn’t possible or practical today.  There is just too much available data, coming at too high a rate to reasonably transport, store and process it all.  As an industry, we must figure out what to gather.  And further… we can’t sit around and wait for an engineer to take a look at the data.  We must develop strategies and plans to process the data as it comes in and take action immediately.
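As a toy sketch of that “act as the data arrives” mindset (the sample format and KPI threshold below are invented for illustration), consider watching an interface error-rate KPI computed from successive telemetry samples:

```python
# A toy sketch of acting on telemetry as it arrives rather than reading it
# forensically later: compute per-interface error rates from successive
# counter samples and call a handler the moment a KPI threshold is crossed.
# The sample format and the threshold value are invented for illustration.

ERROR_RATE_THRESHOLD = 0.01  # errors per packet; an example KPI target

def kpi_watch(samples, on_breach):
    """samples: iterable of dicts with 'interface', 'packets', 'errors' counters."""
    last = {}
    for sample in samples:
        intf = sample["interface"]
        if intf in last:
            delta_pkts = sample["packets"] - last[intf]["packets"]
            delta_errs = sample["errors"] - last[intf]["errors"]
            if delta_pkts > 0 and delta_errs / delta_pkts > ERROR_RATE_THRESHOLD:
                on_breach(intf, delta_errs / delta_pkts)  # act immediately
        last[intf] = sample

stream = [  # two samples from a telemetry subscription, simplified
    {"interface": "Gi1/0/1", "packets": 1000, "errors": 0},
    {"interface": "Gi1/0/1", "packets": 2000, "errors": 40},
]
kpi_watch(stream, lambda intf, rate: print(f"ALERT {intf}: error rate {rate:.2%}"))
```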

In NetDevOps, monitoring is about continuous health and improvement, not forensics.

The NetDevOps Engineer

Networking engineers must adopt new skills for NetDevOps, but this isn’t new to us as network engineers.  We’ve had to learn new skills for IPv6, MPLS, 802.1x, and so much more.  Not to mention the list of new technologies that have flooded us in the “Software Defined Networking” era.  First and foremost, network mastery is still a critical skill.  I scoff a little bit at all the doomsday talk I see around these days about “the death of the network engineer”.  Network engineering is as strong as ever, but it is changing.  Just look at what has happened to our cousins in software development as DevOps has flourished.

NetDevOps Engineers are skilled in programmability as well as networking. Many of us are already well on our way down the programmability path, picking up familiarity with API interfaces and new scripting languages like Python. To these we’ll need to add familiarity and fluency with DevOps tooling in areas like configuration management, build servers, and testing suites and tools. For me the biggest challenge in this space is the sheer variety and velocity of the tools that are available today and being developed for tomorrow. Don’t let this discourage you; embrace it as an opportunity, but realize you’ll need to become comfortable with not knowing them all… there are just way too many out there.

And lastly I turn back to an old friend of us all, the OSI Model (or Open Systems Interconnection Model). I do not doubt that you all look back nostalgically on learning the 7 layers of the OSI model, but let’s be honest with ourselves… many of us are really only comfortable with layers 2-4. Well, in order to successfully understand, troubleshoot, and test network health for applications running today and going forward, we must embrace the upper 3 layers of the OSI Model. Session, Presentation, and Application skills and understanding are going to become more and more critical. With REST APIs becoming pervasive, you really need to know how HTTP works in detail…
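As a small example of why HTTP fluency matters, here is a hedged sketch of a RESTCONF call for interface data, where the URL path, Accept header, and status code each carry meaning you’ll want to be able to read. The device address and credentials are placeholders for a lab box with RESTCONF enabled:

```python
# A hedged sketch of an upper-layer skill in action: a RESTCONF (RFC 8040)
# GET for interface data. The host and credentials are placeholders for a
# lab device with RESTCONF enabled; only disable verify in a lab.
import requests

resp = requests.get(
    "https://10.0.0.1/restconf/data/ietf-interfaces:interfaces",  # placeholder host
    headers={"Accept": "application/yang-data+json"},  # ask for YANG-modeled JSON
    auth=("admin", "password"),  # placeholder credentials
    verify=False,
)
print(resp.status_code)  # 200 OK, 401 Unauthorized, 404 Not Found... each tells a story
print(resp.json())       # structured, model-based data instead of screen-scraped CLI
```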

For more info on the evolution of the network engineer, take a look at Carl’s journey in my post over on Learning at Cisco!

Conclusion

As a reminder of the key elements we’ve discussed about NetDevOps, here they are again:

  • Organizations practicing “NetDevOps” see network changes as routine and expected.
  • NetDevOps builds and manages a network that enables network services to be consumed in a DevOps approach.
  • In NetDevOps, it is standard to have a “Continuous Development” approach to network changes.
  • In NetDevOps, monitoring is about continuous health and improvement, not forensics.
  • NetDevOps Engineers are skilled in programmability as well as networking.

This transition has me so excited that I’m re-branding myself as a “NetDevOps Evangelist” and am spending as much time as I can exploring all the topics and elements I’ve outlined in this article.  Check back often for more blogs as I dive into each area and test out new theory, technology, and ideas.  And you can be sure I’ll be building new Learning Labs, Sample Code and DevNet Sandboxes for you all to explore along with me.

And we’ve made it to the end, but the discussion on NetDevOps is far from over.  In these two posts I’ve just started exploring my own thoughts on the topic, and framing up a discussion I look forward to having with all of you.  Leave me a comment here on the post, or drop me a note over on Twitter (@hfpreston) or on LinkedIn (hpreston) and let me know your thoughts.  And as always be sure to follow #DevNet on Twitter and Instagram for all the latest adventures in coding!

Until next time!

Hank, NetDevOps Evangelist!


We’d love to hear what you think. Ask a question or leave a comment below.
And stay connected with Cisco DevNet on social!

Twitter @CiscoDevNet | Facebook | LinkedIn

Visit the new Developer Video Channel

Authors

Hank Preston

Distinguished Architect

Learn with Cisco


I recently spoke with a select group of Cox Communications’ customers. The audience consisted of CXOs in gaming, healthcare, education, and the public sector. I was impressed with the dedication and commitment they showed to urgently solving the very difficult challenges of thriving in a world of digital disruption. Here are the top three insights I would like to share from this event.

  1. Digital Disruption Runs Wide and Deep

The CXOs in attendance were from a wide variety of industries. They were all very aware that disruption is here, yet the extent of how it was impacting their industries and companies varied. This is in line with the research members of my team have done with the Global Center for Digital Business Transformation.

Digital disruption is like a “vortex,” a rotational force that draws everything nearby to its center. In the digital vortex (below), the center is where everything that can be digitized – offerings, business models, value chains – is digitized. Industries closest to the center of the vortex face the most substantial competition and disruption, while those around the edges feel less immediate impact. Over the past two years, there has been a high level of industry convergence, and that is a major source of the disruption we are seeing.

DBT Center’s 2017 Digital Vortex Industry Ranking (Orange: dramatic disruption, Blue: significant disruption, Green: modest disruption)

  2. Cisco’s Digitization: Start with Customer Value Creation

Many CXOs were intrigued by Cisco’s transformation approach and saw it as a model for digitizing their own companies. At Cisco, we focus on the “twin imperatives” of digital transformation. First, we are innovating our business model. This means how we make money, and especially how we create value for customers in new ways. The second imperative is creating greater agility in Cisco’s operating model, which includes how our people, processes, and technology work together to capitalize on new business models. Focusing on how to create new value for customers is the right point of departure for your transformation journey. For more information about Cisco’s own transformation, please read my previous blog.

  3. Digitization Requires Timely Application of Technology and Talent

Given the speed and power of digital disruption, it is tempting to simply apply new technologies to old ways of doing business. The CXOs all rightly recognized that simply automating existing processes and practices would only compound the difficulty of revamping their business and operating models. Instead, companies should first do what is needed to reinvent how they work and deliver customer value. Only then should they automate their redesigned businesses with new technologies.

This discussion also brought up the issue of finding the right talent. Surviving in a world of disruption means operating at speeds faster than ever before. While many companies are willing to transform, they are often hampered by the lack of the right talent to do so. It is important to remember that digital transformation requires an “all hands on deck” approach where every organization, especially HR, must understand and be fully committed to achieving the company’s digitization goals.

How do these insights align with your transformation experiences? Is there an opportunity for your company to reinvent and digitize your business and operating models? I’d love to hear your thoughts.

 

Authors

Kevin Bandy

No Longer with Cisco