Cisco Blogs

Cisco Blog > Data Center

Private Cloud Increases Business Velocity

The world is experiencing a digital transformation as everything – customers and technology alike – becomes connected, making technology pervasive in all of our lives.  The tools we use have been evolving at a fast pace, turning us all into tech-savvy individuals.  If you doubt this, just watch a six-year-old with an iPad.

We have all become accustomed to a user experience that empowers us to receive information, products or services immediately.  The problem is that once we enter the business world, the user experience changes dramatically.

Does business need to transform?  Yes!  It needs to change the pace at which it delivers services, both internally and externally, and address customer expectations for a self-service ordering experience.  The good news, based on what I am hearing in customer conversations, is that businesses acknowledge changes are required.


#CiscoACI #CiscoChat – Cisco SDN in the Data Center

Digital transformation is changing the world as technology continues to play a central role in today’s business strategy. By helping businesses reach more customers, offer differentiated services, and grow, IT is more relevant than ever. Customers want to deploy services fast, at scale, at the lowest cost possible.


Cisco’s SDN strategy and portfolio enable a policy-driven infrastructure built around ACI and a programmable fabric and network. Cisco offers freedom of choice to IT teams looking to add automation to their network infrastructure, leveraging programmability to accelerate application deployment and management, automate network operations, and create a more responsive IT model.

Cisco Application Centric Infrastructure delivers an agile, open, and secure solution for deploying applications across any physical, virtualized, or cloud technology used for data center infrastructure. Cisco ACI provides consistent policy for multi-hypervisor, container, and bare-metal server workloads. It is a true SDN solution with built-in secure multi-tenancy. So much so, that our next #CiscoChat, on Thursday, December 3rd at 10:00 a.m. PST, will bring together a team of experts to discuss the data center challenges Cisco ACI addresses and the solutions leaders can use to meet them.

In the article “The New Need for Speed in the Datacenter Network”, IDC confirms that “Today’s datacenter networks must better adapt to and accommodate business-critical application workloads. Datacenters will have to increasingly adapt to virtualized workloads and to the ongoing enterprise transition to private and hybrid clouds”.

This #CiscoACI #CiscoChat, led by Cisco Data Center (@CiscoDC) with co-hosts Mike Cohen (@mscohen), Principal Engineer at Cisco Systems, and Zeus Kerravala (@zkerravala), Principal of ZK Research, will assess how Cisco ACI can help businesses stay competitive with an agile, programmable network through a fast, open, and secure approach. You don’t want to miss this #CiscoChat on #CiscoACI, Thursday, December 3rd at 10:00 a.m. PST, as we reveal our latest innovations.


Simple. Fast. Open. Cisco ACI shakes up SDN.


If you come to Cisco’s corporate headquarters, chances are good (especially if you’re traveling internationally) that you will fly into SFO, which is the airport code for San Francisco International Airport. This point has virtually nothing to do with the rest of what you’re about to read…other than the fact that those same 3 letters – SFO – represent 3 key takeaways from an outstanding InfoWorld product review of Application Centric Infrastructure (ACI). When you think about ACI, think about SFO:

Simple. Fast. Open.

I won’t spend much space on this, as I’d much rather you go and read Paul Venezia’s comprehensive and detailed look at ACI. But I do want to highlight a few brief comments on how ACI is Simple, Fast and Open.


“Implementing ACI is surprisingly simple, even in the case of large-scale buildouts.”


“Assuming the cabling is complete, the entire process of standing up an ACI fabric might take only a few minutes from start to finish.”


“Not only is ACI an extremely open architecture…”

“Cisco is actively supporting a community gathering around ACI, and the community is already reaping the rewards of Cisco’s open stance.”

“This is only one example of ACI’s openness and easy scriptability. The upshot is it will be straightforward to integrate ACI into custom automation and management solutions, such as centralized admin tools and self-service portals.”

“This should be made abundantly clear: This isn’t an API bolted onto the supplied administration tools, or running alongside the solution. The API is the administration tool.”
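To make the “the API is the administration tool” point concrete, here is a rough sketch of driving an APIC (the ACI controller) through its REST API from Python. The hostname and credentials are placeholders, and this is a simplified illustration of the login-then-query flow rather than a production client (which would also handle TLS verification and token refresh).

```python
import json
import urllib.request

# Hypothetical APIC endpoint and credentials -- replace with your own.
APIC = "https://apic.example.com"

def login_payload(user, pwd):
    """Build the JSON body for the APIC aaaLogin call."""
    return {"aaaUser": {"attributes": {"name": user, "pwd": pwd}}}

def login(user, pwd):
    """Authenticate and return the session token from the aaaLogin response."""
    req = urllib.request.Request(
        APIC + "/api/aaaLogin.json",
        data=json.dumps(login_payload(user, pwd)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["imdata"][0]["aaaLogin"]["attributes"]["token"]

def list_tenants(token):
    """Query every fvTenant object via the class-level REST endpoint."""
    req = urllib.request.Request(
        APIC + "/api/class/fvTenant.json",
        headers={"Cookie": "APIC-cookie=" + token},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["imdata"]
```

Anything the GUI or CLI can do goes through these same endpoints, which is exactly why scripting ACI into self-service portals is straightforward.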

Simple. Fast. Open.

Whether you’re traveling to Northern California or not, if you’re considering a better way to do networking, think about SFO and ACI.



Project Contiv – Infrastructure Operational Policy Specification for Containerized Application Deployment

One of the biggest disruptions in the IT world is upon us.  Ten years ago it was server virtualization; more recently, the adoption of cloud, both private and public.  One could argue that cloud adoption is still ongoing, but I think a more fundamental disruption is happening in the way applications will be built, deployed, and operated in the future.

By now, almost everyone is familiar with industry buzzwords such as containers/Docker, microservices, and DevOps.  We are in some ways skeptical of buzzwords, having seen many fizzle over the longer term. But these technologies and architectures enable enterprises to build cloud-native applications and run them at scale. They will help organizations make the most of public and private cloud deployments and will accelerate cloud adoption dramatically.

Many still believe that the primary benefits of containers come from the technology optimizations they bring compared to virtual machines (VMs): the significant increase in scale (more than 10x per-host density), the smaller footprint (memory, CPU, disk), and the faster creation and destruction cycle (milliseconds versus minutes). While these are indeed very relevant, the real benefits are broader than infrastructure advantages. The two main benefits are, first, that container technology is ideally suited to newer ways of developing applications (continuous integration and delivery), and second, that it lets you scale applications (through a microservices architecture) and port them between different infrastructure environments (public or private).

Microservices architectures are transforming the way applications are architected and built.  I can remember never wanting to wait for IT to roll out an update to my favorite application, because the timelines were always measured in months if not years.  Hopefully, those days will become a thing of the past now that applications can be constructed in a microservices framework that is easier to develop, update, and scale.

Although there are numerous projects and tools in the marketplace for IT to set up the infrastructure, admins still need a way to specify the infrastructure operational policies (network, storage, security, compute) for containerized applications in an automated fashion, and to have those policies implemented consistently across the infrastructure. Without such a mechanism, we could see resource contention between production and development applications, security violations between different applications or tenants, and unpredictable application performance overall.  We believe there has to be a better way for containerized applications to run on shared infrastructure.

Introducing Project Contiv

Project Contiv is an open source project defining infrastructure operational policies for container-based application deployment.  Application intent, such as a Docker Compose file, is a declarative specification of an application’s microservices composition. Project Contiv complements application intent with the ability to specify infrastructure operational policies for the network, storage, and compute elements of the physical and virtual infrastructure, directly mapping the application intent to the infrastructure policy it requires.

Project Contiv Architecture


So what are some of the infrastructure operational policies that most IT organizations expect to specify for containerized applications?

  • Security policies for applications for inbound/outbound as well as within application tiers
  • Network services policies: integration of L4-L7 services (load balancers, firewalls, encryption, etc.)
  • Analytics and diagnostics policies
  • Physical infrastructure policies around bandwidth limit/guarantee per container, latency requirements, etc.
  • IP address management (IPAM) policies
  • Storage policies around persistent storage, volume allocation, snapshotting, etc.
  • Compute policies around performance requirements, offload (to NIC or network), SLAs, etc.
  • Corporate and government compliance policies
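Such policies are naturally expressed declaratively. As a rough illustration only (the field names below are hypothetical, not Contiv’s actual schema), a policy spec covering a few of the categories above might look like:

```python
import json

# Hypothetical policy spec -- field names are illustrative, not Contiv's schema.
policy = {
    "tenant": "engineering",
    "app": "web-store",
    "network": {
        "isolation": "multi-tenant",                    # security between tenants
        "inbound": [{"port": 443, "protocol": "tcp"}],  # allowed ingress
        "bandwidth": {"limit_mbps": 100},               # per-container bandwidth cap
    },
    "storage": {
        "volume": {"size_gb": 20, "snapshots": True},   # persistent volume policy
    },
    "ipam": {"subnet": "10.1.1.0/24"},                  # address allocation pool
}

print(json.dumps(policy, indent=2))
```

The point of a declarative spec like this is that the same document can be applied consistently across hosts, rather than each admin configuring switches, volumes, and firewalls by hand.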

So with Project Contiv, we hope to help you optimize and achieve a saner shared infrastructure for your various containerized applications.

We believe the best way to achieve this objective is to build a community of like-minded people who join Project Contiv and contribute, enabling enterprise-grade applications to be adopted more rapidly.

Currently there are two projects that enable networking and storage for Docker-based container deployments.

Contiv Networking is a container network plugin that provides infrastructure and security policies for multi-tenant microservices deployments, while integrating with the physical network to communicate with non-container workloads. Contiv Networking implements the remote driver and IPAM APIs available in Docker 1.9 onwards. For more information, visit
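For context on what implementing that remote driver API involves: Docker discovers a plugin and then POSTs JSON to well-known HTTP endpoints on it. The sketch below shows the shape of that protocol in Python’s standard library; it is a bare-bones illustration (only the activation handshake and capabilities call; a real plugin such as Contiv also implements CreateNetwork, CreateEndpoint, Join, Leave, and the IPAM endpoints).

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of the Docker remote network driver protocol (Docker 1.9+).
# Docker POSTs JSON to well-known endpoints on the plugin's HTTP socket.

def handshake():
    """Response to /Plugin.Activate: advertise which driver APIs we implement."""
    return {"Implements": ["NetworkDriver"]}

def capabilities():
    """Response to /NetworkDriver.GetCapabilities: a cluster-wide (global) driver."""
    return {"Scope": "global"}

ROUTES = {
    "/Plugin.Activate": lambda body: handshake(),
    "/NetworkDriver.GetCapabilities": lambda body: capabilities(),
    # A real plugin also routes CreateNetwork, CreateEndpoint, Join, Leave, ...
}

class PluginHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        reply = ROUTES.get(self.path, lambda b: {"Err": "unhandled"})(body)
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/vnd.docker.plugins.v1+json")
        self.end_headers()
        self.wfile.write(data)

# To run the plugin (Docker normally finds it via a .spec/.sock file):
# HTTPServer(("127.0.0.1", 9999), PluginHandler).serve_forever()
```

The value of a policy-aware driver like Contiv Networking is in what it does behind these endpoints: mapping each CreateNetwork/Join call onto the tenant isolation and security policies described earlier.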

The Contiv volume plugin is a Docker volume plugin that provides multi-tenant, persistent, distributed storage with intent-based consumption, using Ceph underneath. For more information, visit

Our introduction talk by Vipin Jain (@jainvipin_), core developer of Project Contiv, got off to a very encouraging start at the Docker Meetup in Palo Alto last month, with 250 registered attendees (and about 100 on the waitlist). If you are visiting DockerCon Europe 2015 in Barcelona next week, make sure you visit the Project Contiv booth for a demo and connect with us in person. We look forward to your contributions in the container community and on the Project Contiv GitHub.

Project Contiv at Docker Palo Alto Meetup


I also encourage you to visit Mantl, Cisco’s open source project for microservices infrastructure.  Project Contiv will soon be part of Project Mantl, bringing better infrastructure to your microservices applications.


Cisco UCS Delivers First-ever 100-terabyte and Best 3-TB and 30-TB Big Data Benchmark Results on the TPCx-HS Benchmark

Cisco UCS® Integrated Infrastructure for Big Data delivered the industry’s first-ever 100-terabyte (TB) result, and the best 3-TB and 30-TB results, on the TPC Express Benchmark HS (TPCx-HS).

These results demonstrate Cisco’s leadership, with the best performance at scale factors of 3 TB and 30 TB, and Cisco is the first vendor to publish results for a scale factor of 100 TB. The results were achieved with Cisco UCS Integrated Infrastructure for Big Data, an industry-leading platform widely adopted across vertical markets that provides a fast and simple way to deploy big data environments.

These world-record results were achieved using Cisco UCS Integrated Infrastructure for Big Data powered by Cisco UCS C240 M4 Rack Servers interconnected using two Cisco UCS 6296 96-Port Fabric Interconnects with embedded management using Cisco UCS Manager and a Cisco Nexus® 9372PX Switch. Check out the Performance Brief and UCS Industry Benchmarks Summary for additional information on the benchmark configuration. The detailed official benchmark disclosure report is available at the TPC Website.

TPCx-HS Benchmark Results with Cisco UCS Integrated Infrastructure for Big Data Summary:

Scale     Cisco UCS C240 M4 Rack Servers     Performance          Availability Date
3 TB      16                                 11.76 HSph@3TB       September 24, 2015
30 TB     32                                 23.42 HSph@30TB      October 26, 2015
100 TB    32                                 21.99 HSph@100TB     October 26, 2015


The industry and technology landscapes have changed. IT is being extended far beyond traditional transaction processing and data warehousing to big data and analytics. Foreseeing this transition, the TPC developed TPC Express Benchmark HS (TPCx-HS), the industry’s first (and so far only) standard for benchmarking big data systems, to provide the industry with verifiable performance, price/performance, and availability metrics for hardware and software systems dealing with big data. TPCx-HS provides an objective measure of hardware, operating systems, and commercial software distributions compatible with the Apache Hadoop Distributed File System (HDFS) API. The benchmark can be used to assess a broad range of system topologies and Hadoop implementations in a technically rigorous, directly comparable, vendor-neutral manner.
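For a feel of what the HSph@SF numbers in the table mean: the TPCx-HS performance metric is, roughly, the scale factor divided by the elapsed hours of the performance run (HSph@SF ≈ SF / (T/3600), with T in seconds). Treat this as a simplification and consult the official TPCx-HS specification for the exact rules; the quick inversion below is just a back-of-the-envelope estimate.

```python
# Back-of-the-envelope reading of the TPCx-HS metric: HSph@SF ~= SF / (T/3600),
# where SF is the scale factor in TB and T is the performance run's elapsed
# seconds. This is a simplification of the official metric definition.

def hsph(scale_factor_tb, elapsed_seconds):
    """Composite throughput: scale-factor terabytes processed per hour."""
    return scale_factor_tb / (elapsed_seconds / 3600.0)

def elapsed_for(scale_factor_tb, reported_hsph):
    """Invert the metric to estimate run time from a published result."""
    return scale_factor_tb / reported_hsph * 3600.0

# Estimated run time behind the published 3-TB result (11.76 HSph@3TB):
print(round(elapsed_for(3, 11.76)))   # roughly 918 seconds, about 15 minutes
```

Read this way, the published numbers say a 3-TB run completed in roughly a quarter of an hour, which is what makes the metric an intuitive throughput comparison across systems.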

Although all vendors have access to the same Intel processors, only Cisco UCS unleashes their power to deliver high performance to applications through the power of unification. The unique, fabric-centric architecture of Cisco UCS integrates the Intel Xeon processors into a system with a better balance of resources that brings processor power to life. For additional information on Cisco UCS and Cisco UCS Integrated Infrastructure Solutions, please visit the Cisco Unified Computing & Servers web page.


The Transaction Processing Performance Council (TPC) is a nonprofit corporation founded to define transaction processing and database benchmarks, and to disseminate objective and verifiable performance data to the industry. TPC membership includes major hardware and software companies. The performance results described in this document are derived from detailed benchmark results available as of October 23, 2015, at http:// results.asp
