Announcing FlexPod Select with Hadoop

Speed is everything. Continuing our commitment to make data center infrastructures more responsive to enterprise application demands, today we announced FlexPod Select with Hadoop, formerly known as the NetApp Open Solution for Hadoop, broadening our FlexPod portfolio. Developed in collaboration between Cisco and NetApp, the solution offers an enterprise-class infrastructure that accelerates time to value from your data. It is pre-validated for Hadoop deployments built using Cisco UCS 6200 Series Fabric Interconnects (connectivity and management), Cisco UCS C220 M3 servers (compute), NetApp FAS2220 (NameNode metadata storage), and NetApp E5400 series storage arrays (data storage). Following the highly successful FlexPod model of pre-sized, rack-level configurations, this solution will be made available through the well-established FlexPod sales engagement and channel.

The FlexPod Select with Hadoop architecture is an extension of our popular Cisco UCS Common Platform Architecture (CPA) for Big Data, designed for applications requiring enterprise-class external storage array features such as RAID protection with data replication, hot-swappable spares, proactive drive-health monitoring, faster recovery from disk failures, and automated I/O path failover. The architecture consists of a master rack and, optionally, up to nine expansion racks in a single management domain, creating a complete, self-contained Hadoop cluster. The master rack provides all of the components required to run a 12-node Hadoop cluster supporting 540 TB of storage capacity. Each expansion rack adds 16 Hadoop cluster nodes and 720 TB of storage capacity. Unique to this architecture are seamless management and data integration capabilities with existing FlexPod deployments, which can significantly lower infrastructure and management costs.
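The sizing figures above lend themselves to a quick back-of-the-envelope calculation. The sketch below (a hypothetical helper, not part of any product tooling; the constants are taken from this post) computes node count and raw capacity for a master rack plus a given number of expansion racks:

```python
# Rack-level sizing from the announcement (raw capacity, in TB).
MASTER_NODES, MASTER_TB = 12, 540        # master rack: 12 nodes, 540 TB
EXPANSION_NODES, EXPANSION_TB = 16, 720  # each expansion rack: 16 nodes, 720 TB
MAX_EXPANSION_RACKS = 9                  # up to nine per management domain

def cluster_size(expansion_racks: int) -> tuple[int, int]:
    """Return (node count, raw TB) for a master rack plus N expansion racks."""
    if not 0 <= expansion_racks <= MAX_EXPANSION_RACKS:
        raise ValueError("a single management domain supports 0-9 expansion racks")
    nodes = MASTER_NODES + expansion_racks * EXPANSION_NODES
    capacity_tb = MASTER_TB + expansion_racks * EXPANSION_TB
    return nodes, capacity_tb

# A fully built-out domain: master rack + 9 expansion racks.
print(cluster_size(9))  # (156, 7020)
```

So a maximally configured management domain tops out at 156 Hadoop nodes and roughly 7 PB of raw capacity.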

FlexPod Select has been pretested and jointly validated with leading Hadoop vendors, including Cloudera and Hortonworks.




  1. Hello juhasz

    What is the minimum number of FAS2220 nodes, as well as the E-Series hardware, and what types of disks are available for this solution?

    • Just one FAS2220 is needed per cluster (a cluster can scale from small to hundreds of nodes). We typically recommend starting with a master rack (12 data nodes and 3 E-5460 storage arrays). At present we support 60 x 3TB HDDs per E-5460. Thanks

  2. Can a FlexPod certified partner also be a FlexPod Select partner?

  3. What’s unique about this architecture is that it’s a completely “open” solution. Users can run any distribution of Hadoop in any configuration. The platform offers full flexibility in adapting to rapidly maturing Hadoop distributions and the various ecosystem tools and utilities. It has been validated with the Cloudera and Hortonworks distributions of Hadoop, but there is no reason not to use the same platform with IBM, MapR, Intel, Apache, or any other flavor of Hadoop if the user decides to switch. This architecture has successfully decoupled storage and compute from a traditional Hadoop design, which allows independent scaling of compute and storage in a cluster. The NetApp E5400 series storage arrays provide enterprise-grade reliability and performance for all the HDFS data served in a cluster. The data is protected by RAID-configured disks, which means a disk failure has zero effect on the DataNodes in the HDFS cluster. The result is predictable job performance that can meet strict SLAs.

  4. Good to know that Cisco, in collaboration with NetApp, is getting involved in such an initiative.
    Looking forward to seeing more success on this for both Cisco and NetApp, and also a good Cisco-on-Cisco story, which can add extra fuel to the success.