Delany College is a small school in the heart of western Sydney and a prime example of educational innovation and student engagement. To improve teaching and learning at the school, Delany College partnered with Cisco to implement technologies and smart facilities that foster innovative learning and teaching. Their goal is to enable students to become critical thinkers, creative individuals and, above all, collaborators.
According to Ivanka Rancic, the Director of Religious Education at Delany, “The access, the presentation, the apps are absolutely phenomenal. [The students] become so engaged, they get so lost in their learning.”
Watch this video to see how Delany College is enabling their students to learn in new and innovative ways with Cisco technology.
https://www.youtube.com/watch?v=c2xNLM6NODM&feature=youtu.be
It’s nearly IBC time, which means that some 55,000 of us are about to pack up for the massive RAI Amsterdam exhibit hall(s!) for five days of media and entertainment convention bliss.
This also means it’s that time when people start asking about what we’ll be focused on, demonstrating, and announcing at IBC 2016. Let’s start with focus. Ours is to help customers across the broadcast, studio and service provider ecosystem to scale with security, differentiate with innovation, and improve profitability.

As for announcements, we will have a range of news: new product launches, momentum in security for video, the continuing shift to IP for broadcast production and our role in moving that ecosystem forward, our cloud platform for video services that transforms today’s video operations, and video quality, including 4K and the virtualization of video network processes. So “stay tuned,” as they say!
At our stand (Hall 1, Stand A71) in Amsterdam we will showcase our security for video solutions, highlight our cloud-based Infinite Video solutions, and present our transformative work to better serve the broadcast production and media and entertainment community with the Cisco Media Blueprint, which supports media and video processing applications in a scalable and fully virtualized cloud environment.
One of my favorite demos at our stand this year shows how we take advantage of the latest innovations in programmable networking (SDN), and uniquely blend network and video application intelligence along with video client data to improve video quality and bandwidth efficiency. Doesn’t get better than that!
But enough about demos and intentions. Let’s talk about food! Breakfast is on us from 8-8:30 on Friday the 9th, in the area outside the Emerald Room. It precedes a CTO session on technology insights in the Emerald Room from 8:30-9 with Ken Morse, CTO, Service Provider Video Software. From 9:45-11, Dave Ward, Chief Architect and CTO, Service Provider Business, will join a ‘CTO Roadmap Session’ in the RAI Forum.
On Saturday 9/10, from noon to 1:30, Thomas Kernen, Technical Leader for Cisco Switzerland, will participate in a session about technological advances. He’ll share his views on deploying accurate time information using Precision Time Protocol within an all-IP studio. Later that day, from 3:30-5, Sam Rastogi, Senior Product and Solutions Marketing Manager, will participate in a media cybersecurity session in the Emerald Room, along with other leaders from the studio and broadcast technology community.
Dave Ward is back in action on the panel circuit Saturday to speak about the ‘Rising Stars Program: New Skills for the Robot dominated future,’ 12:00-1 pm in room G102/103.
On the ‘softer-ware’ side of things, we have several speakers planned to support ‘Power Sessions’ at the Imagine Communications booth Sunday 9/11, including Dave Ward and Steve Epstein, Distinguished Engineer, tackling ‘Cybersecurity for Media Systems’ at 12:00-12:45 pm, and Bryan Bedford, business development for the Worldwide Partner Organization, discussing ‘Live Production in an IP World’ at 5:00-6 pm. Steve Epstein will also speak about ‘Cybersecurity’ at the BT stand on Monday 9/12, at 12:00-12:45 pm.
There’s tons more — it’s a five-day show — so we’ll be playing it out to you in pieces. Follow us @CiscoSPVideo for all the latest, and #IBCShow. If you’re going, please come by and kick the tires! See you in Amsterdam!
I fly quite a bit for my job as a Security Services consultant for Cisco. I’m one of billions of passengers traveling annually: according to the International Air Transport Association (IATA), passenger numbers are expected to reach 3.8 billion in 2016. The number of unique city pairs connected by airline networks will reach 18,243.
With numbers like these, you can understand why the job of air traffic controller is considered one of the most stressful there is.
Now consider the job of the aviation chief information security officer (CISO): they are charged with safeguarding the air traffic control systems in an era of hyper-connectivity in the Internet of Things (IoT) and constantly evolving cybersecurity risk.
Air traffic control (ATC) is collectively a set of regional, interconnected systems that perform numerous functions such as in-air flight separation and routing, on-ground traffic control, radar control, and runway lighting control. Working together, the network of regional ATC systems provides comprehensive coverage for the nation’s airspace and allows travelers to safely enjoy commercial and private air travel.
The interconnected network of ATC systems is part of the evolving IoT landscape. Various aspects of ATC interface with physical processes such as radar control and airport runway lighting. Without reliable radar and lighting, safe air travel wouldn’t actually be possible. Industrial control systems—and their components and networks—provide the ability to control these kinds of physical processes. If industrial control systems are disrupted or taken out of normal operation, ATC functions could be severely impacted.
Fortunately, aviation CISOs can implement an effective security program strategy by incorporating best practices from industrial security environments. A few to consider, with a brief illustrative sketch after the list:
- Obtain executive-level visibility and support for the security program
- Implement robust internal network segmentation
- Implement real-time cyber threat detection and response capability
- Implement robust remote access controls
- Regularly evaluate third-party and supply chain security risks
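As one concrete (and deliberately simplified) illustration of the segmentation and detection points above, here is a minimal Python sketch that flags traffic violating a zone-to-zone whitelist. The zones, protocols, and flow records are hypothetical placeholders, not a real ATC data feed:

```python
# Minimal sketch of whitelist-based flow checking for an industrial
# control network. All zone names and protocols below are invented
# for illustration.

ALLOWED_FLOWS = {
    # (source zone, destination zone): permitted protocols
    ("radar", "atc_core"): {"asterix"},
    ("atc_core", "runway_lighting"): {"modbus"},
    ("ops_lan", "atc_core"): {"https"},
}

def flow_permitted(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Return True if a flow matches the segmentation policy."""
    return protocol in ALLOWED_FLOWS.get((src_zone, dst_zone), set())

# A Modbus command originating from the office LAN should raise an alert.
if not flow_permitted("ops_lan", "runway_lighting", "modbus"):
    print("ALERT: flow violates segmentation policy; investigate.")
```

In practice this logic lives in firewalls and monitoring systems rather than application code, but the principle is the same: define which zones may talk, over which protocols, and alert on everything else in real time.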
Aviation CISOs can build on this foundation to achieve higher levels of cybersecurity program maturity, and ultimately keep the 3.8 billion passengers in our skies safer.
For a more detailed review of my recommendations and other resources, read this blog I recently wrote for Homeland Security Today and visit Cisco’s Security Services for the Internet of Things web page.
On August 2, 2016, Cisco launched the Veteran Talent Incubation Program (VTIP) at its Research Triangle Park (RTP) campus in North Carolina. Twelve veterans from area military bases are participating in the program, which grew out of a discussion between Erika Plant and TAC managers Chris Phillips, Charles Armstrong and Chris Myers when they volunteered at a Cisco IT Awareness Day at Fort Bragg in February 2016.
With Erika Plant’s executive sponsorship, as well as support from Scott Lawrence and other Services leaders, a cross-functional team was formed to create a Cisco Customer Service Engineer (CSE) job pipeline for transitioning service members and veterans in North Carolina. This past spring, volunteers traveled to Camp Lejeune to run the new selection process, which included a video as part of the application and an interview assessment.
Upon successful completion of the program, VTIP participants will be accepted into the Services Academy CSE Program and complete the three-month Services Academy curriculum, preparing them for a Global TAC or other Services role. Those who do not receive a job offer from Cisco will still have gained CCNA training, ICND 1 and 2 certification vouchers, mentoring, job shadowing and corporate culture training.
The self-study aspect of the program launched on August 2, 2016 with an inspiring welcome by Mark Holloman, a detailed training program outline by Jonathan Nichols, tours of Cisco labs and advice from TAC Managers and CSEs. One participant has already passed their CCNA certification exam!
VTIP is the successful culmination of work by multiple Cisco teams to formulate an innovative idea, tap into a diverse talent pool, bring that idea to fruition in a matter of months and provide our veterans with valuable IT skills training.
Thank you to the cross-functional core team driving this initiative: Angie Coolidge (TA Services), Carly Enarson (University), Nicole Learn (Services Academy), Jonathan Nichols (Services Academy), Gena Pirtle (Corp. Affairs), Griselle Paz (TA I&C), Dianna Teague-Lanier (Program Manager) and Dara West (Corp. Affairs).
Visit Cisco Corporate Social Responsibility and learn more about Cisco’s Veterans Program today!
Co-Authors: Frank Palumbo, SVP, Global Data Center Sales and Kristine A. Snow, President, Cisco Capital
Digital transformation is no future state. It’s happening now. In fact, according to Gartner, 125,000 enterprises in the U.S. alone are now launching digital transformation projects, at companies of all sizes from nimble startups to global conglomerates. For the largest enterprises, the challenges will be profound, impacting virtually all aspects of their technology practices, first and foremost their mission-critical core: the data center.
With the immense proliferation and increasingly distributed nature of data, devices and applications, coupled with the need for real-time decisions and security, the data center has never been more important, nor faced a more daunting challenge. How do you manage this complexity?
The answer is the network. The network is the single source of truth. And it’s everywhere the applications and devices are: from the data center to the private and public cloud, all the way to the edge. Far from being mere plumbing, the network is a strategic asset and competitive differentiator.
The advent of flexible financial structures
As data center professionals tackle the formidable challenges they face, the last thing they want to worry about is financial limitations. That’s where Cisco Capital comes in.
Up-front cash and capital requirements can limit the flexibility organizations need when facing pressure to scale at a moment’s notice. Supporting seasonal spikes or rapid shifts is a huge obstacle from a technological perspective, but new, adaptive financial structures provide solutions by creating agility within organizations.
With customers repeatedly returning to us with stories about scaling challenges and internal flexibility limitations, we began to ask ourselves how we could improve financial structures to help organizations stay more competitive, nimble and innovative.
Ultimately, the only answer that made sense was utilization-based models. Subscription models provide the flexibility that consumers want and the liquid agility that organizations need to stay competitive.
Open Pay – a unique financial structure tailored to individual customer needs
By working with engineers and product teams and partnering closely with customers, our team developed Open Pay, a new type of financial structure that can be tailored to individual customer needs and meet peak network demands. Open Pay provides a more responsive, agile way for organizations to meet the demands of a rapidly changing industry by paying for capacity as it is needed.
Open Pay is unique in that it takes a metered approach to monitoring usage, a technical feat that only Cisco has been able to deliver for converged infrastructure, storage, routing and switching solutions.
Now, organizations can better align infrastructure costs to actual usage – saving time and money while increasing operating efficiency. With Open Pay, organizations can prepare for both anticipated and unexpected demand spikes with less risk and increased flexibility.
A welcome change in technology financing
Open Pay represents a much bigger shift in the way companies and customers are interacting with technology. Rather than investing large amounts of capital into data center technology that requires updating and prevents flexibility, organizations can now access the technology as needed.
There are other variants of this structure, but Open Pay is the most robust model of its kind, charging based on metered usage of both compute and storage. We’re excited to help the next generation of technology-based organizations flourish. We’re just getting started: we will be adding more product lines to the Open Pay program, and companies are increasingly adopting this flexible payment model to free up resources and drive agility from within.
As technological innovation accelerates at an exponential rate, organizations can now leverage this financial strategy as a means of not only exceeding their business objectives, but also propelling themselves ahead of the competition.
Originally published in the book: Gray, Ken & Nadeau, Thomas. Network Function Virtualization, 1st Edition. (2016). Morgan Kaufmann, also available here.
This book by Ken and Tom on NFV is perhaps the first time they’ve laid out both a fantastic review of the vast landscape of technologies related to virtualized networking and woven in a subtle argument and allusion to what the future of this technology holds. I may be drunk on my own Koolaid, but I certainly read the book with a specific lens, asking the question “Where are we on the maturity continuum of the technology, and how close is it to what operators and customers need or want to deploy?” The answer I believe I read from Tom and Ken is that “We’ve only just begun.” (Now try and get Karen Carpenter out of your head while reading the rest of this :)) What I mean is that over the last, say, 6 years of trade-rag-eye-ball-seeking articles, the industry has lived through huge hype cycles and ‘presumptive close’ arguments of download-install-transition-your-telco/mso to a whole new business.
What K/T lay out is not only an industry in transition but one that is completely dependent on the pace being set by the integration of huge open source projects and proprietary products, proprietary implementations of newly virtualized appliances, and huge holes in the stack of software required to make the business resilient, anti-fragile, predictable, supportable, able to migrate across DCs, flexibly re-arrangeable, a whole mess of additional x-ilities and, most importantly… operate-able and billable. Thing is, this transition MUST happen for the industry, and it MUST happen in many networking-dependent industries. Why? Because the dominant architecture of many SPs is one in which flexible data centers were not considered part of the service delivery network. On-demand user choice was not the norm. Therefore, trucking in a bunch of compute-storage-switching (aka data centers) does not by itself deliver a service. SPs build services with SLAs (many required by regulation, in addition to service guarantees) and guarantees of experience, give access to any and all new devices that arrive on the market, and focus as a goal on building more and more uses of their network. The key thing this book lays out is that it’s complex: there are a ton of moving parts, there are layers upon layers, and there are very few people, if in fact any, who can hold the entire architecture in their head and keep track of it. And most importantly: we are closer to the starting line than the finish line.
Thankfully, as K/T laid out in another book, we have made it through the SDN hype cycle and the technology is being rolled out across the industry. We watched the movie “Rise of the Controllers,” and the industry has settled down and is deploying the technology widely. Now we are at the point where virtualizing everything is the dominant topic. Technology progresses. Thing is, it requires a lot of “stack,” and all that’s necessary in that stack doesn’t exist today; it is in various stages of completeness, but not yet integrated into something consistent or coherent. Also, when someone says VNF (virtualized network function) or NFV (network function virtualization): A) it applies to more than networking-related stuff, see video production and playout services, and B) the terms were defined when hypervisors were the only choice for virtualization. Now that containers are available, we have to clearly set the industry goal that “cloud native” (which is basically a synonym for containerized applications) is the ultimate endpoint; until the next wave comes in. The real point is that lifting and shifting physical appliances to hypervisor-based VNFs (a necessary first step, but not sufficient) led to one particular stack of technology, but cloud-native leads to a different variant: one in which use of DC resources is dramatically lower, time to boot/active service is faster and parallelism is the central premise. For the love of all the G*ds, I truly hope that the industry doesn’t stall prematurely. Without knowing what K/T are up to next, it’s an easy prediction that this book has the potential for 42 versions of publication, as today it documents the first steps toward this industry revolution.
The implicit argument that I read throughout the book is that we are seeing the end of the feudal reign of siloed organizations and the technical structure of long-lasting beliefs about the architecture of the Internet. The next conclusion I came to is that we are at the point where the OSI model should only be taught in history books. It’s close to irrelevant, resting on assumptions that don’t hold water today. What K/T laid out (although they don’t spell it out this way, so I’ll take the liberty) is that there are now so many more explicit first-class citizens in the Internet architecture that our old notions of layering have to be wholesale rejected. What are these new first-class citizens? Identity, encapsulation/tunneling, specific application treatment, service chains, security, privacy, content as endpoint, dependent applications performing as workflows, access agnosticism, multi-homing, location independence, flat addressing schemes, unique treatment/augmentation per person, and a desire that the network and the service someone or some business requested repairs itself and adapts to the experience that is desired… to name a few.
Ok, let me dial it back a little and try and explain how I got this worked up. I reached the same point K/T discuss in the book that the stack to virtualize networks goes way beyond the concepts of SDN. As the reader probably understands, SDN is a key ingredient to enable NFV but alone it only does part of the job. I’m going to switch terms from what K/T use and describe the high-level goal as “Reactive Networking.”

The industry has been making progress towards this target in the open source community and somewhat in standards bodies (e.g., MEF, IETF, ETSI). Services orchestration now includes SDN controllers. Many are working toward an implementation of MANO, the orchestration framework that can create virtualized services per tenant or customer. There are service orchestration products on the market; I lumped them all together in the greenish bubble on the left. The network can now be built around strong, programmable forwarders. A solid analytics platform is needed immediately, and work is already underway. This is key because in this day and age we can’t actually correlate a network failure with video quality analytics. Meaning that a country full of video viewers just watched their content tile because a link went down somewhere, and no one has any idea what caused it. Yep, right now we can’t correlate any networking or NFV event with service quality analytics. It’s all siloed. Everything today is compartmentalized to its fiefdom or rigidly stuck in its OSI layer. One of the magical cloud’s most important contributions to the industry is the notion of a PaaS. Request a service and voilà, it’s working. We need to get NFV to the PaaS and cloud designs for relevancy. But, at the same time, we have to make networking relevant in the PaaS layer. Today you can dial up/request CPU, RAM and storage, but not networking (not even bandwidth), in any cloud.
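To make the missing correlation concrete, here is a toy Python sketch of the kind of join the stack should be doing automatically: matching service quality alarms against network events within a time window. The record shapes and field names are invented for illustration, not any real telemetry schema:

```python
# Toy sketch: correlating video-quality alarms with network events by
# time proximity. Record shapes are invented for illustration.

from datetime import datetime, timedelta

network_events = [
    {"time": datetime(2016, 9, 9, 12, 0, 5), "kind": "link_down", "link": "core-7"},
]
video_alarms = [
    {"time": datetime(2016, 9, 9, 12, 0, 9), "kind": "tiling_spike", "region": "NL"},
]

WINDOW = timedelta(seconds=30)

def correlate(events, alarms, window=WINDOW):
    """Pair each quality alarm with network events inside the window."""
    for alarm in alarms:
        suspects = [e for e in events if abs(e["time"] - alarm["time"]) <= window]
        yield alarm, suspects

for alarm, suspects in correlate(network_events, video_alarms):
    print(alarm["kind"], "possibly caused by", [s["kind"] for s in suspects])
```

In a real deployment the time join would of course be augmented with topology: which links carry which service chains.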
That last fact is horribly depressing to me, but I understand some of the reasons why. In the cloudy DCs available today, the Internet is considered ubiquitous and infinite. The services put in the cloud are built on that assumption and are most often not IO constrained (note that there are just about no VNFs available either); they are CPU-, RAM- and storage-constrained. They are built for web services type workloads. When they are IO bound, you pay for it (aka $) in a big way via massive egress bandwidth charges. As mentioned, you can pay for it but you can’t provision it. Sucks. But read what I wrote way above: in SP cloudy DCs, the Internet is also ubiquitous, but SPs have a big business around SLAs and best-in-class experiences, and are stuck dealing with regulations. Therefore, in an SP network the Internet is also infinite, but it is purposefully engineered to meet those goals of best in class, on-all-the-time and guaranteed. VNFs in an SP cloud are sources and sinks of mostly IO, and the VNFs are chained together and orchestrated as a complete service. For example, a virtual router, load balancer, firewall, anti-DDOS, anti-malware, IPS, cloud storage, etc. are all one bundle for the end-customer. For the SP, it’s a service chain per tenant (yes, it could also be deployed as a multi-tenanted service chain) for a customer who also fundamentally bought a 1 or 10Gig end-to-end access and security service++. Therefore, the virtualized services are directly tied to a residential or business customer who is buying a complete turnkey service. And they want something new tomorrow. Not only more and more bandwidth, but services that are cool, valuable and going to solve their problems. The fact the SP uses VNFs to reduce cost and add more flexibility to service delivery (aka new stuff all the time) is great for the end customer, but the SP has to build an end-to-end service from access through to VNFs and to the Internet seamlessly. The DC has to be orchestrated as a part of the complete network and not as an island. The stack to do this spans domains and, as K/T explain, is very complex.
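As a sketch of what “a service chain per tenant” might look like as data, here is a minimal Python model; the structure and names are illustrative, not any particular orchestrator’s API:

```python
# Minimal illustrative model of a per-tenant service chain.
tenant_service = {
    "tenant": "example-business-customer",
    "access_gbps": 10,  # the end-to-end access the customer bought
    "chain": ["virtual-router", "load-balancer", "firewall",
              "anti-ddos", "anti-malware", "ips", "cloud-storage"],
}

def describe(service):
    """Render the ordered path a tenant's traffic conceptually follows."""
    return " -> ".join(["access"] + service["chain"] + ["internet"])

print(describe(tenant_service))
# access -> virtual-router -> load-balancer -> ... -> internet
```

The point of the model is the ordering: the SP has to orchestrate that whole path end to end, not the DC as an island.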
That last note on complexity is the key to the point of the picture above. The stack for NFV has to be fully reactive and do the right thing to manage itself. Elastic services (expanding and contracting on demand), migrating workloads (aka VNFs), life-cycle management, low-cost compute and storage failures, constantly changing network load, and load engineering in the DC all require that the bus between the analytics platform and orchestration function well, or we are going to end up with the old-school event->alarm->notify->trouble-ticketing systems of the last decades all over again.
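Here is a minimal sketch of that analytics-to-orchestration bus, with a queue standing in for a high-performance message bus; the thresholds and actions are invented for illustration:

```python
# Sketch of a reactive loop: analytics publishes load events onto a bus,
# orchestration consumes them and scales a VNF. Names and thresholds
# are invented for illustration.

import queue

bus = queue.Queue()  # stand-in for a high-performance message bus

def analytics_publish(vnf: str, load: float, threshold: float = 0.8):
    """Analytics side: emit a scale-out event when load crosses a threshold."""
    if load > threshold:
        bus.put({"action": "scale_out", "vnf": vnf, "load": load})

def orchestrator_consume():
    """Orchestration side: react by spinning up another VNF instance."""
    while not bus.empty():
        event = bus.get()
        if event["action"] == "scale_out":
            print(f"scaling out {event['vnf']} (load={event['load']:.0%})")

analytics_publish("virtual-firewall", 0.93)
orchestrator_consume()
```

The same loop generalizes to contraction, migration and repair; the key is that the reaction is automated rather than a trouble ticket.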
As K/T explain throughout the book and really do a great job of simplifying: the stack is complex. On that point, there’s another concept in the industry, the “whole stack” developer. The idea is that for an app developer or service designer in the NFV space to do their job well, they need to understand the “whole stack” of orchestration++ infrastructure below their app or service. The goal for the NFV industry hopefully isn’t going to be towards this end. <This tangent is a moderately interesting argument in the intro of a book that attempts to describe the whole stack and how important it is to understand.> Where I differ, and I’d bet that K/T might agree, is that the industry target has to be the “No Stack” developer. That’s the point of having a fully reactive, do-what-I-need, do-what-I-want set of orchestration and analytics software. A service designer for NFV probably doesn’t want to know the goop and infinite layers below and, IMHO, shouldn’t have to care. It’s all being swapped out and upgraded to the latest and greatest new set of tools and services tomorrow anyway. I posited above that there are very few people, if any, who could fit the entire architecture at any point in time into their heads, let alone all the APIs, function call flows, events, triggers and fubars happening underneath at any point in deployment.
So, I offer this diagram derived from the chapters of the book and my own hallucinations:

Everything below that blue line represents all the networking layers that the NFV stack has to configure/provision, spewing telemetry data to collect and correlate in the analytics platform, reacting to changes in the network, reprogramming the network, spinning up elastic VNFs, life-cycle managing them, creating service chains, applying tenants’ changing service requests, etc, etc, etc. Please see the book attached to the intro for details. 🙂
All the service designer, app developer, or SP’s OSS/BSS should see as a result is way up at the top, programmed and managed via the Service Platform. In industry terms, it’s a PaaS layer (which today doesn’t exist for NFV, but it is exactly the target the industry needs to hit). The history of SDN, controllers, changes to embedded OSes that enable generic apps to run on them, orchestration platforms and big data platforms is one of chasing the developer. The industry chased each new building block during the creation of the stack believing that developers would be attracted to its API set and developer platform. Apps on the router/switch, apps on the SDN controller, analytics apps… in fact they all were right in the end, and all wrong. To date, none have emerged as more than pieces of infrastructure in the stack and as application platforms for very specific outcomes. This point is often really hard to explain. An SDN controller is both infrastructure and an app platform? Yes. In an overall NFV or orchestration stack, it’s infrastructure: the thing that speaks SDN protocols to program the virtual and physical topologies and the ultimate resources. As an app platform, it’s unbelievably awesome for an IT pro or engineer trying to debug or program something very specific. But it may not be the right tool for an overall NFV lifecycle manager and service orchestrator. Rinse and repeat the argument for on-box apps or the analytics platform. Great for certain types of apps in specific scenarios, but awesome as infra in an overall infrastructure orchestration stack. And perfect if linked together by high-performance message busses to enable reactive networking.
Here’s a pic of the situation:

Assume the dashed line in this picture is in the same location as the solid blue line in the previous picture. The more technology-centric you get, the more the apps become very specific to a task or job. All the action for an SP or enterprise to deliver a business outcome or service is at the PaaS layer. The reason I say this is that dragging all the APIs and data models from the lower layers of the stack all the way up to the NFV service designer or app developer == dragging every single feature, event, trigger and state to the top. No one builds a cloud service that way; it would be a modification to the old saying, now “Luke, use the 1 Million APIs.”
Projecting forward to future versions of K/T’s book, the argument I read in this book and conclusion I came to could be a world that looks something like this:

The “No Stack” developer, designer and deployer works in an industry where there’s a PaaS with a rich catalogue of VNFs and VxFs (I hate to admit it, but not everything is networking-related; e.g. transcoders for JIT, CDN caches, etc). The stack takes care of itself. The entire network (including DCs) is load, delay and jitter engineered as a whole; services are bought and sold on demand, and the entire system reacts and can recover from any change in conditions.
Note this 100% does NOT mean that there has to be a grand unification theory of controllers, orchestration systems and analytics platforms to achieve this. K/T go to great lengths to describe access != WAN != DC != peering policy != content caching policy. This does mean there’s room for NNI interfaces between specific controllers for access, campus, WAN and DC, as each of the resources, physical and virtual devices, etc. is unique. But it does mean that a cross-domain orchestration system that can place and manage VNFs on-prem, in a CO or head end, or in a centralized DC is a reality. As much as I dislike grand-unification theories, I dislike centralized vs distributed and on-prem vs cloud debates. As K/T describe, they are pointless arguments and really boring conversations at a dinner party. All of the scenarios are going to emerge. The sooner we face up to it, the sooner we can get building it and not get wrapped around the axle of use-case jousting.
This being said, there are some fundamentals that make this challenging, and a ton of work remains to round out the different mechanisms to orchestrate VNFs wherever they may be placed. Let me give a couple of examples. How much resource does a VNF need? What I mean is in terms of CPU cores/cycles, RAM and storage, and what are the capabilities and attributes of the VNF? Can it fling 10Gbps? 1? 100? How/where can we describe this? That “manifest” format doesn’t exist yet, though there is work being done specifically for VNFs. Note that it doesn’t really exist for anything virtualized via hypervisors or containers in general either. Virtualized forwarding (switching or routing) was until recently a trouble area, but that status has changed with the introduction of fd.io. The platform for network data analytics is just about to emerge as I write this. Most VNFs are not cloud native yet, stuck in the lift-and-shift model and in heavyweight hypervisors. Yep, the industry is making it work, but for how long before economic and complexity collapse? Can I reduce the overlay/underlay operational headache to a single flat layer with a flat addressing scheme like IPv6 could provide? Probably yes. Linkages to OSS/BSS and other management systems haven’t been defined. Is the PaaS I describe above really the new (re)formulation of an SP’s OSS/BSS? Some operators are alluding to it. As a customer, am I going to have a cloud-based experience where I can see the state of my entire network service? I ask because most often a customer has no way of verifying anything except whether packets are flowing or not. What about the user experience of different roles within the enterprise or SP? Thankfully, the notion of bringing this to a PaaS layer means these different questions appear to have some answers, as the orchestration and analytics platforms can both be utilized by the “outcome” services of the PaaS. This enables someone to re-render the data into an operational experience that meets the different roles of an operator and provides the necessary information to an end-customer. The earlier diagrams Jedi-mind-trick us (“these are not the layers you are looking for”) into an outcome like this:
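Since the text notes that no VNF “manifest” format exists yet, here is a purely hypothetical sketch of the fields such a descriptor might carry, written as a Python dataclass; every field name is invented for illustration:

```python
# Hypothetical VNF resource/capability descriptor; no such standard
# format exists yet, so every field here is invented for illustration.

from dataclasses import dataclass

@dataclass
class VNFManifest:
    name: str
    cpu_cores: int          # dedicated cores/cycles required
    ram_gb: int             # working-set memory
    storage_gb: int         # persistent storage (often small for VNFs)
    throughput_gbps: float  # the IO question: 1, 10 or 100 Gbps?
    cloud_native: bool      # container-ready, or hypervisor lift-and-shift?

vfw = VNFManifest("virtual-firewall", cpu_cores=4, ram_gb=8,
                  storage_gb=20, throughput_gbps=10.0, cloud_native=False)
print(vfw)
```

Whatever form a real standard eventually takes, placement questions like “what do I need to install to deploy X Tbps of Y service?” can’t be answered without something like it.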

This is all the same data available at the PaaS to enable:
- an on demand customer marketplace for the end-user
- service catalogue app for the service designer
- service order app for the product manager (what’s the status of my customer’s deployment)
- service assurance app for the operator (in which the layers and services have been identified and tracked to render the data from resource to policy and every way of viewing in between)
Ok, I’ve been riffing and ranting for a while and you really want to get to the book, so let’s start bringing this to a close. The overall goal, drawn slightly differently again, is to deploy VNFs as part of an engineered end-to-end service. There are physical resources of compute, switching and storage (hopefully orchestrated in a “hyper-converged” manner), enabled by a cloud platform using hypervisors and containers, both programmed by SDN controllers, with telemetry data spewing out to the model-driven orchestration and analytics platforms.

Topped off by a service-rich PaaS that may emerge in future revisions of the book as segment-specific because of service differences. Also, we already know some of the immediate next scenes in this screenplay. One of them is an answer to the question “What do I need to install to deploy X Tbps of Y NFV service?” Today, there are no academic or engineering models that can answer that question, particularly given the lack of IO-optimized compute platforms on the market. As mentioned, NFV services are IO-bound and not necessarily cycle- or RAM-bound (and certainly not storage-bound). A lift-and-shift architecture to get a physical device into a hypervisor isn’t particularly hard, but it’s also not particularly efficient. So another scene is going to be how fast VNFs become cloud native. Another is going to be around the common data models that have to emerge in the standards bodies to orchestrate a rich ecosystem of suppliers.
As mentioned right up front, this is a fast-moving space with a lot of details and coordination that has to occur, and it MUST be simplified to “No Stack” delivery. BUT the whole purpose of continuing on with the book is to get the details of why, how, where and what-were-they-thinking with respect to the big NFV story from Ken and Tom. I certainly hope that K/T have great success, and I fully predict they have a great career ahead writing the next 41 revisions of this book, as this area of technology is moving FAST.
Dave Ward, Cisco
2016.04
Rick Rubin is one of the most prolific producers in the history of music. If you haven’t heard of Rubin, he has worked with some of the best-known artists in the world, such as Johnny Cash, the Red Hot Chili Peppers, Tom Petty, the Dixie Chicks, the Beastie Boys, and Jay Z, to name a few. As a producer, Rick’s job is to pull together and shape the many pieces of the musical creation process, such as gathering ideas for songs, coaching artists, and rearranging the music into the final production.

The music producer’s ultimate goal is to pull a lot of different moving parts together into harmony and create a song or album that aligns to an artist’s vision. Rubin has done this for more than thirty years in an industry that rapidly changed from selling physical albums to becoming almost completely digital with products like iTunes or service-oriented offerings like Apple Music and Spotify.
So what makes Rick Rubin still so prolific in an industry that has evolved so dramatically, and how does he work across so many genres of music? The answer lies in his ability to ask questions and get to the root of what the artist is trying to achieve:
“I’ll ask a lot of questions and we’ll probably listen to some of the riffs that they’ve been writing. Usually, I’ll hear something that will sort of indicate the direction and then we’ll talk about it from there” – 2011 interview.
Asking questions and finding the right direction in a rapidly changing industry – it’s a very similar scenario to what manufacturers are looking at right now as the industry becomes more competitive and digitized.
It’s a journey to find the right elements and bring them together into harmony. The modern manufacturer is gathering ideas, coaching, and re-arranging their processes – much like a music producer.
While Rick Rubin is a “super producer” who has done this job for decades, his formula for getting the best out of artists has remained consistent. Rubin approaches each project by asking a series of questions to work through the discovery process with his artists. Similar to music, there are artists in every part of the manufacturing process – from engineering artists, to supply chain artists, to production artists – who work together to output a product, just like a band making an album or song.
So what are some of the questions that these new “producers” should ask these “artists” as they begin their journey together into digital manufacturing?
While I can’t list them all in a blog post, here are some good ones to start with:
- Do the current teams have the right skill set and are they trained on the latest technology?
- How can we align teams to drive change?
- How can we adopt new processes to get the most out of investments?
- Do I know what resources I need?
- Have we done an assessment of our potential security risks?
https://youtu.be/PwyIGS5Fb9M
As you begin to lay the foundation for your digital journey, you don’t have to start from scratch and you can’t become a Rick Rubin overnight. Rubin has noted that his path began with working with mentors and he is always open to coaching and learning. There are resources available to help you identify the right questions to ask and help you on your path to digitization.
Additionally, at Cisco we have a services group that helps bring out the best performances across many organizations. Cisco Services can lay the foundation for orchestrating the integration of technologies, coordinating and aligning IT and manufacturing line-of-business teams, and helping organizations adopt change.
To start your journey, all you have to do is ASK us the most basic question – “How can Cisco help me?” and you can be on your way to producing the next big manufacturing hit. I look forward to hearing your thoughts below. Happy Harmonizing!
Check out our website for more resources.
Digital transformation is happening NOW. Your smartphone is your bank branch. Your mouse and PC are quickly replacing brick-and-mortar stores. As consumers, we have a tendency to seek alternatives when we don’t get what we want, when we want it.
If you’re in IT, you are already feeling the pain. The modern enterprise has become user-centric instead of IT-centric. Lines of business, application developers and DevOps teams expect you to define, offer and deliver services where, when and how they want them. When you don’t, they go around you.
The benefits of transforming IT to an as-a-service model are well known, and yet the pace has been slow. The reasons are complex: change is swirling around business models, customers and application development, as well as your data center. Add the fact that all these changes are happening at once, along with the reality that every business is different, and you have the formula for slow transformation.
Years of managing data and accelerating organizations have taught Cisco precisely what is needed to transform your business. There are four pillars to transformation, but I am focusing on one element: automation. Automation allows your data center, and ultimately your business, to respond faster by replacing manual, trouble-ticket processes with consistent, automated service delivery. The cornerstone of Cisco’s enterprise automation is Cisco ONE Enterprise Cloud Suite. Watch this video to learn more.
