
Written by David Ward, CTO of Engineering and Chief Architect, and Maciek Konstantynowicz, Distinguished Engineer; Chief Technology & Architecture Office

For those who don’t want all the gory details, this is a short version of the longer blog conversation, which can be found here. The longer blog goes into quite a bit of detail on the technology; please check it out.

“Holy Sh*t, that’s fast and feature rich” is the most common response we’ve heard from folks who have looked at some new code made available in open source. A few weeks back, the Linux Foundation launched a Collaborative Project called FD.io (“fido”). One of the foundational contributions to FD.io is code that our engineering team wrote, called Vector Packet Processing, or VPP for short. It was the brainchild of a great friend and perhaps the best high-performance network forwarding code designer and author, Dave Barach. He and the team have been working on it since 2002, and it’s on its third complete rewrite (that’s a good thing!).

The second most common thing we’ve heard is “why did you Open Source this?” There are a couple of main reasons. First, the goal is to move the industry forward with regard to virtual network forwarding. The industry is missing a feature-rich, high-performance, high-scale virtual switch/router that runs in user space, takes advantage of all the modern goodies from hardware accelerators, and is built on a modular architecture. VPP can run either as a VNF or as a piece of virtual network infrastructure in OpenStack, OPNFV, OpenDaylight or any of your other fav *open*s. The real target is container and micro-services networking. In that emerging technology space, the networking piece is really, really early, and before it goes fubar we’d like to help and avoid getting “neutroned” again.

Why Forwarding & VPP are Important

So, why is the forwarding plane so important, and why is it where the cool kids are hanging out? Today’s MSDCs, content delivery networks and fintech firms operate some of the largest and most efficient data centers in the world. In their journey to get there they have demonstrated the value of four things: 1) using speed of execution as a competitive weapon, 2) taking an industrial approach to HW & SW infrastructure, 3) automation as a tool for speed and efficiency and 4) full incorporation of a devops deployment model. Service providers have been paying attention and are looking to apply these lessons to their own service delivery strategies. VPP enables not only all the features of Ethernet L2, IP4&6, MPLS, Segment Routing, service chaining, all sorts of L2 and IP4&6 tunneling, etc., but it does it out of the box: unbelievably fast on commodity compute hardware, and in full compliance with IETF RFC networking specs.
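The “vector” in Vector Packet Processing means batches of packets flow through each processing node together, so per-node overhead is paid once per batch instead of once per packet. Here is a toy sketch of that idea; the function names and packet fields are invented for illustration and are not VPP’s actual graph-node API:

```python
# Toy illustration of vector (batch) packet processing, the idea behind VPP.
# Node names and packet fields are illustrative only, not VPP's real API.

def ethernet_input(packets):
    # Process the whole vector in one pass: per-node overhead
    # (dispatch, warming the instruction cache) is paid once per
    # batch rather than once per packet.
    return [p for p in packets if p.get("ethertype") == 0x0800]

def ip4_lookup(packets, fib):
    # Resolve every packet in the vector against the FIB, then drop
    # the ones with no route.
    for p in packets:
        p["next_hop"] = fib.get(p["dst"], "drop")
    return [p for p in packets if p["next_hop"] != "drop"]

def process_vector(packets, fib):
    # A packet vector flows through the graph node by node.
    return ip4_lookup(ethernet_input(packets), fib)

fib = {"10.0.0.1": "eth1", "10.0.0.2": "eth2"}
batch = [
    {"ethertype": 0x0800, "dst": "10.0.0.1"},   # IPv4, routed
    {"ethertype": 0x0806, "dst": "10.0.0.1"},   # ARP: filtered out
    {"ethertype": 0x0800, "dst": "192.0.2.9"},  # no route: dropped
]
out = process_vector(batch, fib)
```

In the real thing the win comes from cache behavior: while a vector of packets transits one node, that node’s instructions stay hot in the i-cache.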

Most of the efforts toward SDN, NFV, MANO, etc. have been on control, management and orchestration. FD.io aims to focus where the rubber hits the road: the data plane. VPP is straight-up bit-banging forwarding of packets, in real time, at real line rates, with zero packet loss. It’s enabled with high-performance APIs northbound but not designed for any specific SDN protocol; it loves them all. It’s not designed for any one controller; it loves them all, too. To sum up, VPP fits here:

Image1_VPP

 

What we mean is a “network” running on virtual functions and virtual devices: a network running as software on computers. Think VPP-based virtualized devices, multi-functional virtualized routers, virtualized firewalls, virtualized switches, host stacks, and function-specific NAT or DPI virt-devices acting as software bumps in a wire, running in computers and in virtualized compute systems, building a bigger and better Internet. The questions service designers, developers, engineers and operators have been asking are: how functional and deterministic are they? How scalable? How performant, and is it enough to be cost effective?

Further, how much does their network behavior and performance depend on the underlying compute hardware, whether it’s x86 (x86_64), ARM (AArch64) or another processor architecture?

If you’re still following my leading statements, you already know the answer to that; lots of us would be guessing, and both would be right! Of course it depends on hardware. Internet services are all about processing packets in real time, and we love a ubiquitous and infinite Internet. More Internet. So this SW-based network functionality must be fast, and for that the underpinning HW must be fast too. Duh.

Now, here is where reality strikes back (Doh)! Clearly, today there is no single answer:

  • Some VNFs are feature-rich but have non-native compute data planes, as they’re just retro-fittings or reincarnations of their physical implementation counterparts: the old lift-and-shift from bare metal to hypervisor.
  • Some perform better with native compute data planes, but lack required functionality, and still have a long way to go to implement the network function coverage needed to realize the levels of network service richness that network service consumers are used to and demand.
  • VPP tries to answer what can be done: what technology, or rather what set of technologies and techniques, can be used to progress virtual networking toward the fit-for-purpose functional, deterministic and performant service platform it needs to be to realize the promise of fully automated network service creation and delivery.

 

Deterministic Virtual Networking

A quick tangent on what I mean by a reliable and deterministic network. Our combined Internet experience has taught us a few simple things: fast is good and never fast enough, high scale is good and never high enough, and delay and losing packets are bad. We also know that bigger-buffer arguments are a tiresome floorwax-and-dessert-topping answer to everything. Translating this into the best-practice measurements of network service behavior and network response metrics:

– packet throughput,

– delay, delay variation,

– packet loss,

– all at scale befitting Internet and today’s DC networks,

– all per definitions in [RFC 2544] [RFC 1242] [RFC 5481].

So here is what we need and should expect by these simple metrics:

Repeatable linerate performance, deterministic behavior, no (aka 0, null) packet loss, realizing required scale, and no compromise.
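The metrics above are easy to compute once you have per-packet measurements. A minimal sketch, with invented sample data and delay variation measured against the minimum observed delay in the spirit of RFC 5481:

```python
# Compute the basic service metrics named above from per-packet samples.
# Each sample is (sequence, delivered, delay_ms); the data is invented.
# Delay variation (PDV) is taken relative to the minimum delay, in the
# spirit of RFC 5481.

def metrics(samples):
    delays = [d for _, delivered, d in samples if delivered]
    sent = len(samples)
    received = len(delays)
    loss_ratio = (sent - received) / sent
    min_delay = min(delays)
    pdv = [d - min_delay for d in delays]  # variation vs. the best case
    return {
        "loss": loss_ratio,
        "min_delay_ms": min_delay,
        "max_pdv_ms": max(pdv),
    }

samples = [
    (1, True, 0.20),
    (2, True, 0.25),
    (3, False, 0.0),   # lost packet
    (4, True, 0.21),
]
print(metrics(samples))
```

In a real RFC 2544 benchmark these numbers are gathered per offered load to find the highest rate with zero loss, i.e. the throughput figure the text demands.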

If this can be delivered by virtual networking technologies, then we’re in business as an industry. Now, that’s easy to say, and possible for implementations on physical devices (forwarding ASICs have an excellent reason to exist and to continue into the future) built for the sole purpose of implementing those network functions, but can it be done on COTS general-purpose computers? The answer is: it depends on the network speed. For 10Mbps, 100Mbps or 1Gbps, today’s computers work… ho, hum. Thankfully, COTS computing now has low enough cost and enough clock cycles, cores and fast enough NICs for 10GE. Still a yawn for building real networks. For virtualized networking to be economically viable, we need to make these concepts work for Nx10 | 25 | 40GE, …, Nx100, 400, 1T and faster.
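The “is it fast enough” question comes down to simple arithmetic: at minimum frame size, even 10GE demands nearly 15 million forwarding decisions per second. A back-of-envelope calculation using only standard Ethernet framing overheads:

```python
# How many packets per second a forwarder must sustain at line rate with
# minimum-size (64-byte) Ethernet frames. On the wire each frame also
# carries 20 bytes of overhead: 7B preamble + 1B SFD + 12B inter-frame gap.

def max_pps(link_bps, frame_bytes=64, overhead_bytes=20):
    bits_per_frame = (frame_bytes + overhead_bytes) * 8  # 672 bits at 64B
    return link_bps / bits_per_frame

for gbps in (10, 40, 100):
    print(f"{gbps}GE: {max_pps(gbps * 10**9) / 1e6:.2f} Mpps")
# 10GE works out to about 14.88 Mpps; 100GE to about 148.81 Mpps.
```

Divide those packet rates by a CPU's clock to see the real constraint: at 14.88 Mpps a 3 GHz core has roughly 200 cycles per packet, which is why per-packet overhead (and techniques like vector processing) matter so much.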

FD.io & VPP Resources

The best part of open source projects is the opportunity to work with the code. In the open. You can get instructions on setting up a dev environment here and look at the code here. Finally, check out the developer wiki here. There are a ton of getting-started guides, tutorials, videos and docs on not only how to get started but how everything works. Even videos of Dave Barach. Please don’t mind the choice of language; we don’t let him out of his cave very much.

The code is solid, completely public and under the Apache license. We’re already using an “upstream first” model at Cisco and continuing to add more features and functionality to the project. I encourage you to check it out and to consider participating in the community as well. There’s room for newbies and greybeards. We have a great set of partners and individuals contributing to and utilizing FD.io already in the open source community, and a good-sized ecosystem is already emerging. Clearly from this conversation you can tell I think VPP is a great step forward for the industry. It has great potential for fixing a number of architectural flaws in current SDN, orchestration, virtual networking and NFV stacks and systems. Most importantly, it enables a developer community around the data plane to emerge and move the industry forward by making a modular, scalable, high-performance data plane with all the goodies readily available.

Realizing a Vision

Five or so years ago we began evangelizing this diagram as a critical target for SDN and a trajectory for the industry.

Image2_VPP

 

We have been making progress toward that target in the open source community. Services orchestration now includes SDN controllers (check). The network can now be built around strong, programmable forwarders == VPP (check). Providing a solid analytics framework is immediately next, and work is already underway. This year, through work with Cloud Foundry, we hope to realize our goal of making the network relevant in the PaaS layer. These latter endeavors will be the subject of future blogs.

Authors

Lauren Cooney

Sr. Director, Strategic Programs

Chief Technology & Architecture Office



One of the most daunting aspects of updating your wireless network to the 802.11ac Wave 2 standard is that much of your infrastructure won’t be wired to handle the increased speed of your new network. It’s sort of like having a bullet train, but not having the tracks.

Where does this leave you? One option would be to remove the old cables and replace them with wiring that will allow you to take advantage of this speed. But that’s an expensive solution that’s incredibly time-consuming—it’s not easy to fish out those wires and then patch up and paint the inevitable holes in the wall.

The easiest and most cost-responsible thing to do is to install NBASE-T technology, the foundation of IEEE standardization (via the 802.3bz task force). NBASE-T technology is a type of Ethernet signaling that ups the speed of already-installed cabling. That means you can put the sledgehammers down; you’re not going to need them. Cabling that now reaches top speeds of 1Gbps (for up to 100 meters) will be augmented to reach up to 2.5Gbps or 5Gbps, provided the cables are Cat5e or Cat6.

The Cisco Aironet 3800 Access Point (AP) is the only AP currently on the market with speeds that can take advantage of this cabling. With dual radios providing a theoretical connection rate of up to 5.2Gbps in total (2.6Gbps per radio), this is roughly double the rate that today’s high-end 802.11ac access points provide. All of these speeds are supported on Category 5e cabling and on 10GBASE-T cabling.
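To see why the cabling matters, compare the AP's theoretical peak against common cable rates. This is just a sanity check on the figures above (PHY rates, not measured throughput):

```python
# Which cable rates become the bottleneck for an access point whose radios
# can peak at 5.2Gbps (the Aironet 3800 figure from the text). These are
# theoretical PHY rates; real-world throughput is lower on both sides.

def cable_is_bottleneck(cable_gbps, ap_peak_gbps=5.2):
    # True when the AP's peak air rate exceeds what the cable can carry.
    return ap_peak_gbps > cable_gbps

for name, gbps in [("1000BASE-T", 1.0), ("2.5GBASE-T", 2.5),
                   ("5GBASE-T", 5.0), ("10GBASE-T", 10.0)]:
    print(name, "bottleneck" if cable_is_bottleneck(gbps) else "OK")
```

A legacy 1Gbps uplink caps the AP at under a fifth of its peak air rate; NBASE-T's 2.5G/5G rates recover most of that headroom over the cables already in the wall.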

You’ll be getting a much faster network without spending a lot of time and money to do so. That’s a win-win for everyone.

You might think that you can get away with the network that you have now, and you may be able to, for a time. But eventually more and more devices are going to be brought into the network, and they’re going to be using data-heavy applications when they connect. And that’s before counting products like security cameras and other smart devices that make up the IoT. The addition of these new devices is going to chew up your bandwidth real quick.

You don’t have to overhaul your entire network today, but it’s inevitable that you will soon. Equipping your network with NBASE-T technology and the new Cisco Aironet 3800 access points is a great head start on this project—before it becomes a major problem later.

To learn more about NBASE-T, click here, and to learn more about the Cisco Aironet 3800 Access Point, click here.

Authors

Mark Denny

Senior PLM

Mobility


After the early caucuses and primaries, it’s a mistake to pretend that voters aren’t angry with conventional, middle-of-the-road solutions. The next President isn’t going to limp to the finish line with a warmed-over, spiced-up version of the past. He or she is going to have to make fundamental changes, and that includes to our country’s technology policy.

A group of senior IT experts, including many who served in executive roles in government, are attempting to direct the presidential candidates’ attention to the urgent needs of technology policy in the Federal government. We released a report on February 11, titled “Tech Iconoclasts — Voting for America’s Success in a Network World”, that outlines five key needs in technology and innovation and recommends policies to address those needs. We define the five central areas in need of addressing as Advancing America’s Competitiveness, Rebuilding Trust in Government and Institutions, Using Technology to Simplify and Enhance People’s Lives, Reinventing Government Technology, and Evolving the Workforce.

This group of Iconoclasts was tasked with thinking creatively – even boldly – about how to address these five areas. The word “iconoclast” is from the Greek word that literally means “image destroyer”; it refers to a person who attacks settled beliefs, institutions or widely accepted practices. And that certainly is appropriate for our times. As one analyst observed recently, there’s a reason that Donald Trump and Bernie Sanders are the only two candidates who regularly fill arenas with passionate, standing-room crowds. Both are calling for change that is fundamental, not cosmetic or incremental.

We believe in fundamental change for our country’s technology policy, and our report reflects that. Our recommendations are ambitious in both goals and scope. We call for a complete remodeling of Federal IT infrastructure across agencies; a reformation of educational and hiring practices for science, technology, engineering, and math (STEM) students; and a restructuring of both the patent and immigration systems, among many other suggestions.

The report justifies the aggressive plan with a grim outlook on the future if the next Administration does nothing: “Today’s thinking will not solve tomorrow’s challenges. The next president ignores the changing nature of the global network at his or her own peril. Indeed, the very ability of governments to govern rests on their understanding of the networked world and their willingness to change at network speed,” the report states.

It goes on to note that U.S. investments may soon be outstripped by China’s, that U.S. happiness and employment marketability still lag behind other countries’, and that overall trust in the government’s ability to protect data is at an all-time low.

Though each of the five areas lays out a specific plan for enacting change, they all stress that these changes are of vital importance to the nation’s future. This IT call-to-action sends a powerful statement to the presidential candidates: drastically change IT policy or be responsible for government failure on nearly all levels.

Authors

Alan Balutis

Distinguished Fellow and Senior Director

North American Public Sector for Business Solutions Group


With the International Association of Privacy Professionals gathering this week to discuss evolving regulatory requirements and rising customer expectations, there’s no better time to talk about privacy.

Privacy is an integral part of the digital transformation wave. As more countries, companies and organizations take advantage of cloud, mobile and data analytics, privacy plays an increasingly important role in critical decision-making, product design and service offerings. Innovators who are emphasizing privacy as an integral part of the product life cycle are on the right track.

As a Chief Privacy Officer (CPO) myself, I’m very excited about the momentum regarding data privacy among international governments and regulatory agencies. They are now poised to build on important work that has been done by private sector companies. A great example of this trend comes from the U.S. federal government’s recent move toward greater data privacy management, with the signing of a presidential Executive Order that is shaking up the federal government in the best way possible. The U.S. Executive Order comes on the heels of the EU-U.S. Privacy Shield agreement negotiated by EU and U.S. officials earlier this year, which recognizes the importance of cross-Atlantic data flows to ongoing economic development.

I was heartened to see that in speaking about the Executive Order’s directive regarding “identifying and sharing lessons learned and best practices,” the Office of Management and Budget’s new privacy lead, Marc Groman, highlighted the importance of dialogue between the public and private sectors. With that in mind, I look forward to working with the agency CPOs and their new Council. We at Cisco have also made this transition, and we have always been committed to delivering high-grade engineering and holding ourselves to extremely high requirements for quality and security around the world.

Here’s an excerpt of what the U.S. President had to say:

Protecting privacy in the collection and handling of this information is fundamental to the successful accomplishment of the government’s mission.  The proper functioning of government requires the public’s trust, and to maintain that trust the government must strive to uphold the highest standards for collecting, maintaining and using personal data.  Privacy has been at the heart of our democracy from its inception, and we need it now more than ever. 

The focus on privacy is gaining momentum across the globe, with government and regulatory agencies in Europe, for example, taking bold steps to ensure that citizen data is protected and safeguarded wherever it is stored. The EU is currently in the process of a comprehensive update of its data protection rules, known as the General Data Protection Regulation (GDPR). As CPOs work at various levels of government, it’s critical, now more than ever, that they understand the technology: this isn’t just a bureaucratic form-filling exercise, and they must truly understand the elements of privacy engineering.

The EU-U.S. Privacy Shield agreement acknowledges that privacy and information about consumers and employees are elements worthy of protection and that citizens are entitled to fully and securely benefit from the digital economy. In this context, the notion of privacy is the right to control your information’s destiny. The feeling of security empowers personal and professional growth.

Personally, I want these people to be creative, transparent and collaborative. I want them to help set the requirements for the cybersecurity refresh that is now underway.  I want them to be privacy engineers who deeply understand the context of how the government designs, builds, contracts for and maintains the systems that process and store all the information we need to share with the government in the course of our interactions with it.

If you’re interested in reading further, I wrote recently about data privacy’s critical partner: data security. You can read that blog post here.

Authors

Michelle Dennedy

No Longer with Cisco


Digital disruption is resulting in smarter and smarter devices, an explosion of business services, and increased complexity for data center professionals. The initial response of many organizations was to increase virtualization and headcount, only to discover that this was not the way to increase organizational speed and agility. Meeting the challenges of today’s digital world requires organizations to fundamentally change the way they deliver services to business and application teams — and this only happens by adopting automation.

Humans are brilliant but inconsistent. Machines are fast but dumb. At its most basic, automation combines human intelligence with the speed of machines to deliver consistent, error-free services without human intervention. Automation can transform a mundane, tedious task, such as setting up a new server, into one that delivers that new server faster and with greater consistency, freeing your operations staff to focus on more strategic projects.

Where do you start? Begin by converting a couple of your most popular repetitive processes into automated workflows. After they are released into production for self-service ordering, watch how your organization flocks to service delivery within minutes. Next, move to incident- and problem-management processes to further increase data center staff productivity. Productivity increases because automation executes actions and tasks on each specific platform while allowing your staff to manage the whole environment from a single, unified interface.
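The "convert a repetitive process into a workflow" idea can be sketched in a few lines. The step names and values below are invented for illustration; real automation tools add retries, rollback and auditing on top of the same shape:

```python
# Minimal sketch of a repeatable process captured as an automated workflow:
# each manual step becomes a small function, and the workflow runs them in
# a fixed order against a shared context. Step names are illustrative only.

def allocate_ip(ctx):
    ctx["ip"] = "192.0.2.10"          # e.g. from an IPAM system

def provision_vm(ctx):
    ctx["vm"] = f"vm-{ctx['ip']}"     # e.g. via a hypervisor API

def register_dns(ctx):
    ctx["dns"] = f"{ctx['vm']}.example.net"

def run_workflow(steps, ctx):
    # The same steps run the same way every time -- the consistency
    # that manual server setup lacks. An exception in any step halts
    # the workflow before later steps run on a bad state.
    for step in steps:
        step(ctx)
    return ctx

result = run_workflow([allocate_ip, provision_vm, register_dns], {})
print(result)
```

Exposing `run_workflow` behind a self-service catalog entry is what turns a multi-day ticket queue into the minutes-long delivery described above.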

If your organization has not adopted automation, now is the perfect time. Budgets are tight, technology and business service complexity is steadily increasing, and qualified talent is scarce. Research has shown that organizations that utilize automation as part of their business transformation experience one or more of the following business outcomes: Continue reading “New is the Perfect Time for Private Cloud”

Authors

Joann Starke

No Longer with Cisco


Safety and Security in the Digital Era

With reduced workforces and constrained budgets, today’s public safety and private security organizations need cost-effective solutions to keep citizens and public spaces safe.

Please join Cisco and our ecosystem partners at ISC West.  Stop by booth 8053 to see how Cisco and our partners are changing safety and security with the Internet of Things (IoT) and enter to win an Amazon Echo!

ISC West ecosystem

Continue reading “Safety and Security in the Digital Era: Please visit Cisco and Ecosystem Partners at ISC West”

Authors

Kacey Carpenter

Senior Manager

Global Government and Public Sector Marketing


Cisco and Cavium, Inc., a leading provider of semiconductor products that enable secure and intelligent processing for enterprise, data center, cloud, wired and wireless networking, are today announcing that Cisco is integrating Cavium’s LiquidSecurity™ family into the Cisco Cloud Services Platform (CSP) 2100. The CSP 2100 is a turn-key, open Network Functions Virtualization (NFV) software and hardware platform, based on x86 and the Linux Kernel-based Virtual Machine (KVM), that runs both Cisco and third-party virtual network services. The CSP 2100 bridges network, server, and security teams by offering several ways to manage and operate the platform: via a GUI, CLI, REST API, and/or NETCONF, leveraging Cisco’s Network Services Orchestrator (NSO).

For virtual network services running on the CSP 2100 that require crypto processing and centralized key management, this joint solution enables security features and performance similar to dedicated hardware appliances while providing the benefits of reduced costs and complexity, flexibility, and speed of delivery. With availability in both FIPS and non-FIPS modes, the solution is targeted at both Enterprise and Service Provider markets for a variety of applications in the Cloud, Data Center, Point-of-Presence (POP), Central Office (CO), COLOs, Carrier Neutral Facilities (CNF), WAN Aggregation, DMZ, Core Network, and Server Farms. Example applications include Load Balancers, WAN Accelerators, Web Application Firewalls (WAFs), Routers, Security Gateways and IDS/IPS.

Market Dynamics for Virtual Network Services

Most applications have been virtualized over the past decade, and now the same trend is occurring for network services. With this trend, network services can be deployed and managed much more flexibly in a virtualized environment using x86 computing resources instead of purpose-built dedicated hardware appliances. However, there are challenges that need to be addressed in order to speed up this deployment.

The challenges in deploying virtual network services pertain to the complexity of the software required to enable the virtualized environment, the team’s ability to deploy and bring up services, and the lack of hardware performance and features for security. The platform needs easy-to-use software and development/deployment tools. The network team needs to be able to deploy virtual network services easily and quickly, at the pace that the DevOps and server teams need (within minutes). The platform needs the performance required for crypto applications (i.e., the performance of hardware with the agility of software). Several of the customer applications highlighted above that run on virtualized infrastructure require high asymmetric cryptographic performance to match that of dedicated hardware appliances. Today most SSL transactions use 2048-bit RSA key operations, which significantly tax x86 CPUs. There is a real need for centralized key-operation offload and centralized key management to generate, store and manage keys in a highly secure manner for crypto applications running in a multi-domain cloud data center.

Cisco Cloud Services Platform 2100

Cisco Cloud Services Platform (CSP) 2100 is a turn-key, open NFV software and hardware platform, based on x86 and the Linux Kernel-based Virtual Machine (KVM), for both Enterprise and Service Provider environments with 100 or fewer nodes per site. The platform enables users to quickly deploy any Cisco or third-party network virtual service through a simple built-in native web user interface (WebUI), command-line interface (CLI), or representational state transfer (REST) API. Users can also use the standardized NETCONF interface with software such as Cisco Network Services Orchestrator (NSO) or even OpenDaylight (ODL). Any or all management interfaces can be used. The Cloud Services Platform 2100 is shipping today as a network appliance.

Cisco Cloud Services Platform 2100 Native WebUI Dashboard


 

Cisco Cloud Services Platform 2100 v1.0 Demo 

 

Cavium LiquidSecurity™ Family

The LiquidSecurity™ family provides a partitioned, centralized and elastic key management solution with the highest symmetric/bulk and asymmetric/transaction per sec performance. It addresses the high performance and security requirements for private key management and administration while also addressing elastic performance per virtual / network domain for the virtualized cloud environment. This product family is available as a PCI Express adapter with complete software and also as an appliance. Product options include FIPS 140-2 level 2 and 3 certified as well as non-FIPS. Feature details are as follows:

FIPS

• LiquidSecurity™ FIPS family provides performance that is at least 10 times higher than any other solution on the market today. This product family supports 35K 2048 bit RSA Ops/sec and 10 Gbps bulk encryption. In addition, multiple LiquidSecurity™ products can be pooled together to offer higher performance for large deployments.
• SSL handshake offloads for 32 domains – LiquidSecurity™ HSM product family supports 32 FIPS 140-2 Level 3 Partitions per appliance. Each partition functions as an independent and fully secure HSM.
• Hardware support for 2048 bit RSA key pair generation – robust key generation within the FIPS boundary is a critical component of the overall security this product family provides.

Non-FIPS

• LiquidSecurity™ non-FIPS family supports 130K 2048 bit RSA Ops/sec, 300K ECC ops/sec and 10 Gbps bulk encryption. In addition, multiple LiquidSecurity™ products can be pooled together to offer higher performance for large deployments.
• SSL handshake offloads for 64 domains – LiquidSecurity™ solution provides 64 Partitions where each partition functions as an independent key store and key operation partition
• 2048 bit RSA key pair generation – many thousands of 2048-bit key generations per second.
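To put the ops/sec figures above in context, here is a rough capacity-planning sketch. It assumes one RSA-2048 private-key operation per full TLS handshake and linear scaling when appliances are pooled; the target rate is invented for illustration:

```python
# Rough capacity planning from the figures above: how many pooled
# appliances are needed to sustain a target rate of RSA-2048 handshakes.
# Assumes one private-key operation per full handshake and linear pooling;
# the 200K/sec target is an invented example.

import math

def appliances_needed(target_handshakes_per_sec, ops_per_appliance):
    return math.ceil(target_handshakes_per_sec / ops_per_appliance)

target = 200_000  # site-wide handshakes/sec (illustrative)
print("FIPS (35K ops/sec):", appliances_needed(target, 35_000))
print("non-FIPS (130K ops/sec):", appliances_needed(target, 130_000))
# The same target needs 6 FIPS-mode units but only 2 non-FIPS units.
```

The same arithmetic explains why offload matters at all: a general-purpose x86 core manages only on the order of a thousand RSA-2048 signs per second, so serving that target in software would consume hundreds of cores.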

Architecture - LiquidSecurity

 

Multiple Load Balancing vendors such as F5, A10, Kemp and traffic monitoring services such as ExtraHop have already announced support for the LiquidSecurity™ solution. Cisco and Cavium are actively working together to add several additional virtual network service vendors to this list.

“With the integration of Cavium’s LiquidSecurity™ into the CSP 2100 platform, Cisco customers will be able to flexibly and efficiently scale critical crypto performance and secure valuable crypto keys,” said Jim French, Distinguished Systems Engineer at Cisco.  “Because of their broad industry support, we have been partnering with Cavium to enhance SSL processing using the Nitrox® III adapter for over 3 years.  LiquidSecurity™ is the next logical step in that partnership.”

“This partnership enables the availability and support of the LiquidSecurity™ product family through Cisco’s global sales and support channels thus accelerating the adoption of LiquidSecurity™ for the target markets,” said Tejinder Singh, Marketing Director of Crypto Solutions at Cavium. “We are delighted to jointly bring this solution to the market.”

Availability

The LiquidSecurity™ solution on the CSP 2100 will be orderable from Cisco starting in late Q2CY16. Field trials will start before then. It will be fully supported by the Cisco Technical Assistance Center (TAC).

For More Information

For additional information about Cisco CSP 2100, visit http://www.cisco.com/go/csp

For additional information about Cavium LiquidSecurity™, visit http://www.cavium.com/LiquidSecurity-HSM.html

Authors

Gunnar Anderson

Product Manager

CNSG Product Management


This is a guest post written by Emmeline Wong, Marketing Specialist @ Cisco.

The most outstanding developers and programmers I have met always stay up to date with the latest technology and foster innovative thinking. They often gather to talk about ideas, problems, failures, solutions, and what works well and what doesn’t. However, it takes time to really sit down to read and do research outside of our day jobs. Who really has time for that, right? Hence, if we want to be time-efficient and stay at the front of technological innovation, nothing beats attending events.

Why You Should Go to These Events

Being at these events will give you a complete outlook on the newest and greatest in the industry. You will have a chance to connect with peers who work on similar projects and issues and exchange tips. And lastly, you will be inspired by new people and places to think outside the box. Who knows?! Maybe your next “A-ha!” moment will come while you’re lying on the beach in San Diego.

Continue reading “Top 5 Events in 2016 that DevOps Shouldn’t Miss”

Authors

Rami Rammaha

Sr. Marketing Manager

IDS


Hello, everyone! My name is Ed Jimenez, and I am the new lead for Retail & Hospitality for Cisco’s Business Transformation Team. My job is to help retailers and hoteliers make better use of technologies to deliver customer experiences that align with their brand promise.

I worked my way through college as a retail store manager before going into IT after graduation. Later, I joined a major book retailer at the dawn of the first eCommerce boom in the mid-‘90s, and that’s where I began to realize how exciting the technology driving this industry could be.

For years, many retailers were considered to be conservative, hesitant to invest in innovative technologies. But nothing could be further from the truth these days.

I’ve been fortunate to witness this transformation up close, and what I have seen hasn’t been a change in retail technology so much as one of retail philosophy. I was involved with some of the first implementations of cybercafés and order-online, pick-up-in-store back in the mid-1990s. I’ve also helped retailers ranging from Metro AG (Europe) to the Gap with Store of the Future concepts.

Retailers have always been solution-seekers looking for ways to address specific business problems – “I need a new inventory system,” or “I need better security.” Today, the conversations I’m having are much broader in scope: “What experience do I need to create in my stores?” and “How can I better represent my brand across channels?” Ultimately, the technology conversation has shifted to understanding what capabilities are needed to help support these long-term strategies.

Today, this transformation is accelerating as mobile customer engagement and sophisticated analytics are allowing retailers to redefine their entire business models. In this blog, I’ll be highlighting some of these examples and discussing the learnings from this journey we are all on.

Today’s retailers are no longer just product and distribution businesses – they are now becoming technology businesses. And I don’t think this is limited to the largest retail chains. Mobility and cloud capabilities are also giving smaller stores an exciting opportunity to create new experiences for customers at a very manageable cost – we are seeing some of the most innovative work being done in these businesses!

If you didn’t catch it at NRF, learn more about where retail stands in the overall move toward digitization based on Cisco’s most recent research.

I look forward to talking with you on a regular basis to discuss digitization and how leading retailers are breaking new ground in selling across all their channels. Stay tuned! Please feel free to contact me with your questions or comments at any time at edjimene@cisco.com.

 

Authors

Ed Jimenez

Business Transformation Lead

Retail & Hospitality