
Announcing Cisco UCS Common Platform Architecture v2 (CPA v2) for Big Data

Also launching:

Industry's first reference architecture for Hadoop with advanced access control and encryption with IDH, the first flash-enhanced reference architecture for Hadoop demonstrated using YCSB with MapR, the industry's first validated and certified solution for real-time Big Data analytics with SAP HANA, and the Unleashing IT Big Data special edition.

Built on our vision of shared infrastructure and unified management for enterprise applications, the Cisco UCS Common Platform Architecture (CPA) for Big Data has become a popular choice for enterprise Big Data deployments. It has been widely adopted in the finance, healthcare, service provider, entertainment, insurance, and public sectors. The new Cisco UCS CPA v2 improves both performance and capacity, featuring the Intel Xeon E5-2600 v2 processor family, industry-leading storage density, and the industry's first transparent cache acceleration for Big Data.

The Cisco UCS CPA v2 offers a choice of infrastructure options, including “Performance Optimized”, “Balanced”, “Capacity Optimized”, and “Capacity Optimized with Flash” to support a range of workload needs.

[Figure: Cisco UCS CPA v2 infrastructure options]

Up to 160 servers (3,200 cores, 7.6 PB of storage) are supported in a single switching/UCS domain. Scaling beyond 160 servers can be achieved by interconnecting multiple UCS domains using Cisco Nexus 6000/7000 Series switches, reaching thousands of servers and hundreds of petabytes of storage, all managed from a single pane using UCS Central, whether in one data center or distributed globally.
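As a rough sanity check on where those totals come from, the short Python sketch below recomputes them from per-server figures. The 20 cores and twelve 4 TB drives per server are assumptions chosen to match the published totals, not official Cisco specifications.

    # Back-of-the-envelope capacity of one UCS switching domain (illustrative only).
    # Assumed per-server configuration: two 10-core Intel Xeon E5-2600 v2 CPUs
    # and twelve 4 TB LFF drives (roughly the capacity-optimized option).
    SERVERS_PER_DOMAIN = 160
    CORES_PER_SERVER = 2 * 10
    TB_PER_SERVER = 12 * 4

    total_cores = SERVERS_PER_DOMAIN * CORES_PER_SERVER      # 3,200 cores
    total_pb = SERVERS_PER_DOMAIN * TB_PER_SERVER / 1000.0   # ~7.7 PB raw

    print("per domain: %d cores, %.1f PB raw storage" % (total_cores, total_pb))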

The Cisco UCS CPA v2 solutions are available through the Cisco UCS Solution Accelerator Paks program, designed for rapid deployment, tested and validated for performance, and optimized for cost of ownership:

  • Performance Optimized half rack (UCS-SL-CPA2-P): ideal for MPP databases and scale-out data analytics
  • Performance and Capacity Balanced rack (UCS-SL-CPA2-PC): ideal for high-performance Hadoop and NoSQL deployments
  • Capacity Optimized rack (UCS-SL-CPA2-C): for when capacity matters most
  • Capacity Optimized with Flash rack (UCS-SL-CPA2-CF): offers the industry's first transparent caching option for Hadoop and NoSQL

Start with any configuration and scale as your workload demands.

Cisco supports leading Hadoop and NoSQL distributions, including Cloudera, Hortonworks, Intel, MapR, Oracle, Pivotal, and others. For more information, visit the Cisco Big Data Portal and the Big Data Design Zone, which offers Cisco Validated Designs (CVDs): pretested and validated architectures that accelerate time to value for customers while reducing risks and deployment challenges.

Additional Information

Cisco UCS Common Platform Architecture Version 2 for Big Data
Cisco Launches the First Flash-Enhanced Solution for Hadoop
Simplifying the Deployment of Real-time Big Data Analytics — UCS + SAP HANA

Also see Maximizing Big Data Benefits with MapR and Informatica on Cisco UCS


Get More out of your Data with Cisco at Strata Hadoop World October 28 – 30, 2013

Super Bowl Hype

 

With enough hype to rival even the most popular of Super Bowls, Big Data experts will converge on New York City in just a couple of weeks!  But big data has good reason for all the hype: businesses continue to find new ways to leverage insights derived from vast data pools that are growing at an exponential rate.  A big reason for this is the ability to use Hadoop, with its Hadoop Distributed File System (HDFS) and MapReduce functionality, to analyze data quickly, completing queries that were previously impractical in minutes or less.  We have only begun to scratch the surface of the financial returns to be made around Hadoop and the infrastructure that supports Hadoop deployments, but one thing we do know: it's going to be big, and it will continue to get bigger!
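For readers new to the programming model, here is a minimal sketch of what MapReduce looks like in practice: a word-count job written for Hadoop Streaming in Python. The file names, input and output paths, and streaming jar location are illustrative assumptions that vary by distribution; this is a generic example, not a Cisco-specific one.

    #!/usr/bin/env python
    # mapper.py -- emits one "word<TAB>1" pair for every word read from stdin
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print("%s\t1" % word)

    #!/usr/bin/env python
    # reducer.py -- sums the counts for each word (Hadoop delivers input sorted by key)
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print("%s\t%d" % (current_word, current_count))
            current_word, current_count = word, int(count)
    if current_word is not None:
        print("%s\t%d" % (current_word, current_count))

Submitted with something like hadoop jar hadoop-streaming.jar -input /data/logs -output /data/wordcounts -mapper mapper.py -reducer reducer.py -file mapper.py -file reducer.py, the mapper and reducer run in parallel on every node that holds a block of the input, which is where the speed described above comes from.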

 


 

So how does Cisco fit into this picture?

Scalable

Cisco is partnering with leading software providers to offer a comprehensive infrastructure and management solution to support customer big data initiatives, including Hadoop, NoSQL, and Massively Parallel Processing (MPP) analytics.  Leveraging the advantages of fabric computing, the Cisco UCS Common Platform Architecture (CPA) delivers exceptional performance, capacity, management simplicity, and scale to help customers derive value more quickly and with less management overhead for the most challenging big data deployments.

Competitive Advantage

 

Cisco UCS Common Platform Architecture for big data enables rapid deployment, predictable performance, and massive scale without the need for complex layers of switching infrastructure.  In addition, the architecture offers unique data and management integration with enterprise applications hosted on Cisco UCS.  This allows big data and enterprise applications to co-exist within a single management domain that simplifies data movement between applications and eliminates the need for unique technology silos in the data center.  You can also check out my previous blog, Top Three Reasons Why Cisco UCS is a Better Platform for Big Data, to get an idea of what we’ll be sharing at the show.

 


Have you considered Cisco UCS for your Big Data projects?  I'd like to invite you to come and hear more in a couple of weeks at Strata Hadoop World in New York City.  We'll have a number of demos and experts on hand to answer all of your questions.

 

In addition, Cisco and Cloudera are teaming up to offer you a chance to win some exciting prizes by joining our demo crawl program.  Stop by either the Cisco booth (#3) or the Cloudera booth (#403) to learn more.

 


 

Stop by and say hello, and let me know if you have any comments or questions, or reach me on Twitter at @CicconeScott.


Announcing FlexPod Select with Hadoop

Speed is everything. Continuing our commitment to make data center infrastructure more responsive to enterprise application demands, today we announced FlexPod Select with Hadoop, formerly known as the NetApp Open Solution for Hadoop, broadening our FlexPod portfolio.  Developed in collaboration between Cisco and NetApp, it offers an enterprise-class infrastructure that accelerates time to value from your data. The solution is pre-validated for Hadoop deployments and built using Cisco UCS 6200 Series Fabric Interconnects (connectivity and management), Cisco UCS C220 M3 Servers (compute), NetApp FAS2220 (NameNode metadata storage), and NetApp E5400 series storage arrays (data storage). Following the highly successful FlexPod model of pre-sized, rack-level configurations, this solution will be made available through the well-established FlexPod sales engagement and channel.

The FlexPod Select with Hadoop architecture is an extension of our popular Cisco UCS Common Platform Architecture (CPA) for Big Data, designed for applications requiring enterprise-class external storage array features such as RAID protection with data replication, hot-swappable spares, proactive drive health monitoring, faster recovery from disk failures, and automated I/O path failover. The architecture consists of a master rack and, optionally, up to nine expansion racks in a single management domain, creating a complete, self-contained Hadoop cluster. The master rack provides all of the components required to run a 12-node Hadoop cluster with 540 TB of storage capacity. Each expansion rack adds 16 Hadoop cluster nodes and 720 TB of storage capacity. Unique to this architecture are seamless management integration and data integration capabilities with existing FlexPod deployments, which can significantly lower infrastructure and management costs.
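To put those rack-level numbers in perspective, the short Python sketch below works out the maximum single-domain cluster implied by the figures above. It counts raw capacity only and ignores Hadoop replication and formatting overhead.

    # Maximum FlexPod Select with Hadoop cluster in one management domain,
    # using the rack-level figures quoted above (raw capacity, illustrative only).
    MASTER_NODES, MASTER_TB = 12, 540
    EXPANSION_NODES, EXPANSION_TB = 16, 720
    MAX_EXPANSION_RACKS = 9

    total_nodes = MASTER_NODES + MAX_EXPANSION_RACKS * EXPANSION_NODES   # 156 nodes
    total_tb = MASTER_TB + MAX_EXPANSION_RACKS * EXPANSION_TB            # 7,020 TB, roughly 7 PB raw

    print("max cluster: %d nodes, %d TB raw storage" % (total_nodes, total_tb))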

FlexPod Select has been pretested and jointly validated with leading Hadoop vendors, including Cloudera and Hortonworks.

Resources:


Introducing Cisco UCS Common Platform Architecture (CPA) for Big Data

Updated: 10/01/2013

You may have heard that the digital universe is measured in petabytes and that global IP traffic is measured in hundreds of exabytes. These are mind-bogglingly large metrics. Big data analytics can play a crucial role in making datasets at this scale usable, improving everything from operational efficiency to customer experience to prediction accuracy. While Cisco is the global leader in networking -- did you know that 85% of the estimated 500 exabytes of global IP traffic in 2012 will pass through Cisco devices? -- the company also builds an innovative family of unified computing products. This enables Cisco to provide a complete infrastructure solution, including compute, storage, connectivity, and unified management for big data applications, that reduces complexity, improves agility, and radically improves cost of ownership.

To meet a variety of big data platform demands (Hadoop, NoSQL databases, massively parallel processing (MPP) databases, and so on), Cisco offers a comprehensive solution stack: the Cisco UCS Common Platform Architecture (CPA) for Big Data includes compute, storage, connectivity, and unified management. Unique to this architecture are the seamless data integration and management integration capabilities with the enterprise application ecosystem, including Oracle RDBMS/RAC, Microsoft SQL Server, SAP, and others. See Figure 1.

Figure 1: The Cisco UCS Common Platform Architecture (CPA) for Big Data

The Cisco UCS CPA for Big Data is built using the following components:

  • Cisco UCS 6200 Series Fabric Interconnects provide high-speed, low-latency connectivity for servers and centralized management for all connected devices through UCS Manager. Deployed in redundant pairs, they offer full redundancy, active-active performance, and exceptional scalability for the large number of nodes typical in big data clusters. UCS Manager enables rapid and consistent server integration using service profiles, ongoing system maintenance activities such as firmware updates across the entire cluster as a single operation, advanced monitoring, and the option to raise alarms and send notifications about the health of the entire cluster.
  • Cisco UCS 2200 Series Fabric Extenders act as remote line cards for the Fabric Interconnects, providing highly scalable and extremely cost-effective connectivity for a large number of nodes.
  • Cisco UCS C240 M3 Rack-Mount Servers are 2RU servers designed for a wide range of compute, I/O, and storage capacity demands. Each is powered by two Intel Xeon E5-2600 series processors and supports up to 768 GB of main memory (typically 128 GB or 256 GB for big data applications) and up to 24 SFF disk drives in the performance-optimized option or 12 LFF disk drives in the capacity-optimized option. Each server also features a Cisco UCS virtual interface card optimized for high-bandwidth, low-latency cluster connectivity, with support for up to 256 virtual devices.


Cisco at Hadoop Summit

Last week we participated in the annual Hadoop Summit, held in San Jose, CA. When we first met with Hortonworks about the Summit many months back, they mentioned that this year's Hadoop Summit would be promoting reference architectures from many companies in the Hadoop ecosystem. This was great to hear, as we had previously presented results from a large round of testing on network and compute considerations for Hadoop at Hadoop World 2011 last November, and we were looking to do a second round of testing to take our original findings and develop and test a set of best practices around them, including failure scenarios and connectivity options. Further, this validation addresses a key enterprise question: "Can we use the same architecture and components for Hadoop deployments?"  Since much of the value of Hadoop is realized once it is integrated into existing enterprise data models, the goal of the testing was not only to define a reference architecture but also to define a set of best practices so that Hadoop can be integrated into current enterprise architectures.

Below are the results of this new testing effort, presented at Hadoop Summit 2012. Thanks to Hortonworks for their collaboration throughout the testing.