
Cisco is bringing together networking and programming

January 16, 2014 at 11:03 am PST

Well, Cisco has done it.

I have worked in IT since 1995 and never learned programming. Sure, I can do a little HTML, and years ago, I learned just enough Perl to configure MRTG, but I have never written a program. The good old CLI has kept me very busy and brought home the bacon.

With the announcements of NX-OS APIs, Application Centric Infrastructure APIs, Python scripting support, SDN, and open source projects such as OpenStack, OpenDaylight, and Puppet, I cannot hold back anymore.

Therefore, I have opened an account at codecademy.com. I will start with Python and Java. I see many late nights in my future.

I have thought about learning to code before, but I could never think of an app I wanted to write. Now Cisco is bringing together networking and programming. Cisco is not only making APIs available; it is also contributing code to the open source community. In fact, Cisco has created a Data Center repository, a Nexus 9000 community, and a general Cisco Systems repository on GitHub.
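If you are curious what programmable networking actually looks like, here is a minimal sketch of running a show command on a Nexus switch through NX-API from Python. The switch address and credentials are placeholders, and it assumes NX-API has been enabled on the device.

```python
# Minimal sketch: run a show command on a Nexus switch via NX-API.
# Assumes NX-API is enabled; the host and credentials are placeholders.
import json
import requests

url = "http://192.0.2.1/ins"           # NX-API endpoint on the switch
payload = {
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}

response = requests.post(
    url,
    data=json.dumps(payload),
    headers={"content-type": "application/json-rpc"},
    auth=("admin", "password"),         # placeholder credentials
)
print(response.json()["result"]["body"])
```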

DevNet

Cisco has recently overhauled its developer program and content. The new DevNet website is filled with developer information on products and technologies such as AVC, Collaboration, UCS, CTI, EnergyWise, FlexPod, UCS Microsoft Manager, Jabber, onePK, XNC, and TelePresence.

Cisco is bringing the networking and programming worlds together, and this stubborn old networker is finally on board.

Happy Coding!

Bill Carter is a Senior Network Engineer with more than 18 years of experience. He works for Sentinel Technologies and specializes in next-generation data center, campus, and WAN network services.

Follow Bill on Twitter @billyc5022 and read his blog at http://billyc5022.blogspot.com/.
Bill is a Cisco Champion. Check here to learn more about the Cisco Champion program.

 



What’s on Cisco’s Technology Radar? Predictions for 2014 and Beyond

Will an ‘Internet of Everything’ shorten your commute in the morning? Are we at the beginning or the end of the SDN hype cycle? What exactly is ‘context aware’ computing? How will large format HDTV technology transform the way global teams work together? 

Just before the holidays, I had the pleasure of posing these and other questions to a distinguished panel of Cisco engineers, innovators and business leaders.

Susie Wee, VP and CTO, Networked Experiences; Lauren Cooney, Senior Director, Software Strategy, CTO and Architecture Office; David Ward, CTO of Engineering and Chief Architect; and Maciej Kranz, VP of the Corporate Technology Group, led a discussion inspired by the work of Cisco’s Technology Radar team.

Cisco’s Tech Radar brings together a network of 80+ scouts to identify emerging technology trends and forecast their impact on business, governments, and everyday society over five-, ten-, and twenty-five-year time frames. The findings inform Cisco’s engineering and corporate development strategy.

During the course of 90 minutes, our panel dissected as many of those trends as they could, from augmented collaboration to WebRTC; mega data centers to SDN; security and privacy to the Internet of Everything. You can view some highlights of the discussion in the video below, or, if your New Year isn’t too busy yet, you can watch the entire Technology Radar 2014 program here.

Join the conversation #CiscoTechRadar


Data Driven Platforms to Support IoT, SDN, and Cloud

More and more enterprises are managing distributed infrastructures and applications that need to share data. This sharing can be viewed as data flows that connect (and flow through) multiple applications. Applications are managed partly on-premise and partly in (multiple) off-premise clouds, cloud infrastructures need to scale elastically over multiple data centers, and software defined networking (SDN) is providing more network flexibility and dynamism. With the advent of the Internet of Things (IoT), the need to share data between applications, sensors, infrastructure, and people (specifically at the edge) will only increase.

This raises fundamental questions about how we develop scalable distributed systems: How do we manage the flow of events (data flows)? How do we facilitate frictionless integration of new components into the distributed systems and the various data flows in a scalable manner? What primitives do we need to support the variety of protocols? A term often mentioned in this context is reactive programming, a programming paradigm focused on data flows and the automated propagation of change. The reactive programming trend is partly fueled by event-driven architectures and standards such as XMPP, RabbitMQ, MQTT, and DDS.
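To make the reactive idea concrete, here is a minimal, self-contained Python sketch of the core pattern: values that automatically notify their subscribers when they change. It is illustrative only and not tied to any of the standards above.

```python
# Minimal sketch of reactive propagation of change:
# an Observable value pushes updates to every subscriber.
class Observable:
    def __init__(self, value):
        self._value = value
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)
        callback(self._value)          # deliver the current value immediately

    def set(self, value):
        self._value = value
        for callback in self._subscribers:
            callback(value)            # change propagates automatically

temperature = Observable(20)
temperature.subscribe(lambda t: print(f"dashboard: {t} C"))
temperature.subscribe(lambda t: t > 30 and print("alert: too hot"))
temperature.set(35)                    # both subscribers react
```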

One way to think about distributed systems (complementary to the reactive programming paradigm) is through the concept of a shared (distributed) data fabric, akin to the shared memory model. An example of such a shared data fabric is tuple spaces, developed in the 1980s. You can view the data fabric as a collection of (distributed) nodes that provides a uniform data layer to the applications. The data fabric would be a basic building block on which you can build, for example, a messaging service by having producer applications put data into the fabric and subscriber applications get data out of it. Similarly, such a data fabric can function as a cache, where a producer (for example, a database) puts data into the fabric but associates it with a certain policy (e.g., remove after 1 hour, or remove if certain storage conditions are exceeded). The concept of a data fabric enables applications to be developed and deployed independently of each other (zero-knowledge), as they communicate only via the data fabric, publishing and subscribing to messages in an asynchronous and data-driven way.
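A toy, in-memory version of the tuple-space idea (hypothetical, not a real distributed fabric) shows the pattern: producers put tuples into the space, and consumers take (or non-destructively read) tuples that match a template.

```python
# Toy tuple space: put/read/take with template matching.
# None in a template matches any value in that position.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def put(self, tup):
        self._tuples.append(tup)

    def _matches(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup)
        )

    def read(self, template):          # non-destructive lookup
        return next((t for t in self._tuples if self._matches(template, t)), None)

    def take(self, template):          # destructive, like consuming a message
        tup = self.read(template)
        if tup is not None:
            self._tuples.remove(tup)
        return tup

space = TupleSpace()
space.put(("sensor-7", "temperature", 21.5))          # producer
print(space.take(("sensor-7", "temperature", None)))  # consumer
```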

The goal of the fabric is to offer an infrastructure platform for developing and connecting applications without each application having to independently implement basic primitives like security, guaranteed delivery, message routing, data consistency, and availability, freeing developers to focus on the core functionality of the application. This implies that the distributed data fabric is not simply a data store or messaging bus, but has a set of primitives to support easier and more agile application development.

Such a fabric should be deployable on servers and on other devices such as routers and switches (potentially building on top of a fog infrastructure). The fabric should be distributed and scalable: adding new nodes should re-balance the fabric. The fabric can span multiple storage media (in-memory, flash, SSD, HDD, and so on). Storage is transparent to the application (developer), and applications should be able to specify, as a policy, what level of storage they require for certain data. Policies are a fundamental aspect of the data fabric. Other examples of policies include: (1) how long data should remain in the fabric; (2) what types of applications can access particular data in the fabric (security); (3) data locality: the fabric is distributed, but sometimes we know in advance that data produced by one application will be consumed by another that is relatively close to the producer.
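As an illustration, the first policy above (a time-to-live) might look like the following sketch; the names and the lazy eviction strategy are illustrative assumptions, not a real fabric API.

```python
# Illustrative TTL policy: entries expire a fixed number of seconds after put().
import time

class PolicyStore:
    def __init__(self):
        self._data = {}                # key -> (value, expiry timestamp or None)

    def put(self, key, value, ttl_seconds=None):
        expiry = time.time() + ttl_seconds if ttl_seconds else None
        self._data[key] = (value, expiry)

    def get(self, key):
        value, expiry = self._data.get(key, (None, None))
        if expiry is not None and time.time() > expiry:
            del self._data[key]        # lazily evict expired data on access
            return None
        return value

store = PolicyStore()
store.put("session", {"user": "alice"}, ttl_seconds=3600)  # remove after 1 hour
print(store.get("session"))
```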

It is unlikely that there will be one protocol or transport layer for all applications and infrastructures. The data fabric should therefore be capable of supporting multiple protocols and transport layers, and of supporting mappings to well-known data store standards (such as object-relational mapping).
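One common way to keep a fabric protocol-agnostic is an adapter layer that normalizes messages from different transports into a single fabric operation. The sketch below is purely illustrative; the adapter and fabric interfaces are assumptions, not a real API.

```python
# Illustrative adapter layer: different wire protocols normalize
# into a single put() on the fabric.
class Fabric:
    def __init__(self):
        self.entries = []

    def put(self, topic, payload):
        self.entries.append((topic, payload))

class MqttAdapter:
    def __init__(self, fabric):
        self.fabric = fabric

    def on_message(self, topic, payload):          # MQTT-style callback
        self.fabric.put(topic, payload)

class AmqpAdapter:
    def __init__(self, fabric):
        self.fabric = fabric

    def on_delivery(self, routing_key, body):      # AMQP-style callback
        self.fabric.put(routing_key.replace(".", "/"), body)

fabric = Fabric()
MqttAdapter(fabric).on_message("sensors/7/temp", b"21.5")
AmqpAdapter(fabric).on_delivery("sensors.7.temp", b"21.5")
print(fabric.entries)                              # both land in one namespace
```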

The data fabric can be queried, enabling applications to discover and correlate data, and it should support widely used processing paradigms such as map-reduce, enabling applications to bring processing to the data nodes.
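Here is a minimal map-reduce sketch over a handful of simulated fabric nodes; in a real fabric, each map step would execute locally on the node holding the data rather than in one process.

```python
# Minimal map-reduce sketch over simulated fabric nodes.
# In a real fabric, each map_phase() would run on the node holding the shard.
from functools import reduce

nodes = [                              # each node holds a shard of readings
    [("temp", 21.0), ("temp", 22.5)],
    [("temp", 19.5)],
    [("temp", 23.0), ("temp", 20.0)],
]

def map_phase(shard):                  # runs "at" each node
    return [value for key, value in shard if key == "temp"]

def reduce_phase(a, b):                # combines the per-node results
    return a + b

mapped = [map_phase(shard) for shard in nodes]
all_temps = reduce(reduce_phase, mapped)
print(sum(all_temps) / len(all_temps))  # average temperature across the fabric
```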

It is unrealistic to assume that there will be one data fabric. Instead, there will be multiple data fabrics managed by multiple companies and entities (similar to the network). Data fabrics should therefore be connected with each other through gateways, creating a “fabric of fabrics” where needed.

This distributed data fabric can be viewed as a set of interconnected nodes. For large data fabrics (many nodes), it will not be possible to connect each node with every other node without sacrificing performance or scalability; instead, a connection overlay and smart routing algorithms (for example, distributed hash tables) are needed to ensure the scalability and performance of the distributed data fabric. The data fabric can be further optimized by coupling it (and its logical connection overlay) to the underlying (virtual) network infrastructure and exploiting that knowledge to power IoT, cloud, and SDN infrastructures.
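A standard building block for such an overlay is consistent hashing, the technique underneath many distributed hash tables. The sketch below maps keys onto a ring of nodes so that adding or removing a node re-balances only a fraction of the keys; the node names are placeholders.

```python
# Consistent-hashing sketch: a key maps to the first node clockwise on a ring,
# so adding or removing a node only remaps a fraction of the keys.
import bisect
import hashlib

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((self._hash(n), n) for n in nodes)

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        points = [point for point, _ in self._ring]
        i = bisect.bisect(points, h) % len(self._ring)  # wrap around the ring
        return self._ring[i][1]

ring = HashRing(["node-a", "node-b", "node-c"])
print(ring.node_for("sensor-7/temperature"))   # deterministic node placement
```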

Special thanks to Gary Berger and Roque Gagliano for their discussions and insights on this subject.


SDN Adoption Challenges: My Wrap Up For 2013

December 23, 2013 at 11:34 am PST

2013 was the year I started working on SDN, specifically in the area of devising professional services for Cisco ONE and Application Centric Infrastructure (ACI). A few months ago, I used a compendium to summarize my Cisco Domain Ten℠ blogs. That was well received, so I thought it would be a good idea to wrap up the year with a summary of my 2013 journey into the SDN world, and in particular the adoption challenges I learned about along the way, some of which are illustrated in the diagram below.

SDN Adoption Challenges



The Year Ahead in Networking

Throughout 2013, I’ve had the opportunity to meet with service provider leaders from around the globe. Whether they are large or small, focused on consumer or business services, or engaged in video or mobility, their ambitions are very much in line with our strategy: to help them monetize and optimize their networks while accelerating their ability to deliver their services.

  • Monetize: From innovative new managed security services to video, cloud, and new machine-to-machine (M2M) services enabling the Internet of Everything (IoE), there are a number of new incremental revenue opportunities for service providers, which sit at the very center of these trends and are estimated at over $2.9 trillion over the next 10 years.
  • Optimize: The cost to deploy and operate these new services has to be less than the revenue they generate. At the end of the day, the SP is a business, and, like all businesses, it needs to be profitable. New ways to deliver these services as economically as possible are key to success.
  • Accelerate: In this dynamic marketplace, service providers need to move quickly to seize these new opportunities. Gone are the days when service rollouts could take months or quarters. Instead, providers need to operate at “web speed,” shortening the time to provision new services from months to minutes, and do it in a cost-effective way.
