When customers look to deploy their Hadoop solutions, one of the first questions they ask is: which distribution should we run it on? For many enterprise customers, the answer has been MapR. For those of you not familiar with MapR, it offers an enterprise-grade Hadoop software solution that gives customers a robust set of tools for running Big Data workloads. A few months ago, Cisco announced the release of Tidal Enterprise Scheduler (TES) 6.1 and, with it, integrations for Hadoop software distributions such as Cloudera and MapR, as well as adapters to support Sqoop, Data Mover (HDFS), Hive, and MapReduce jobs, all managed through the same TES interface as their other enterprise workloads.
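To make the adapter roles concrete, here is a minimal sketch of the kind of nightly pipeline a scheduler like TES might orchestrate: a Sqoop import from a relational database into HDFS followed by a Hive aggregation. The connection string, table names, and paths are hypothetical placeholders, and this is not the TES adapter API itself.

```python
import subprocess

# Hypothetical example: the kind of Sqoop + Hive pipeline a scheduler's
# Hadoop adapters would run. All names and paths are placeholders.

def run(cmd):
    """Run a CLI command and fail loudly if it returns non-zero."""
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Step 1: land the source table in HDFS with Sqoop (JDBC details are made up).
run([
    "sqoop", "import",
    "--connect", "jdbc:mysql://db.example.com/sales",
    "--username", "etl_user",
    "--table", "orders",
    "--target-dir", "/data/staging/orders",
    "--num-mappers", "4",
])

# Step 2: aggregate the staged data with a Hive query.
run([
    "hive", "-e",
    "INSERT OVERWRITE TABLE daily_order_totals "
    "SELECT order_date, SUM(amount) FROM staging_orders GROUP BY order_date",
])
```

A scheduler's value is in wrapping steps like these with dependencies, calendars, and alerting rather than leaving them in hand-rolled scripts.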
Today, I’m pleased to announce that with the upcoming 6.1.1 release of Cisco’s Tidal Enterprise Scheduler, Cisco’s MapR integration will deepen further. Leveraging Big Data for competitive advantage and the rise of innovative product offerings are changing how enterprises store, manage, and analyze their most critical asset -- data. Enterprises need platforms like Hadoop to process these large volumes of data, but the difficulty of managing Hadoop clusters will continue to grow. Cisco Tidal Enterprise Scheduler enables more efficient management of those environments because it is an intelligent solution for integrating Big Data jobs into an existing data center infrastructure. TES has adapters for a range of enterprise applications, including SAP, Informatica, Oracle, PeopleSoft, MSSQL, JDEdwards, and many others.
Stay tuned for additional blog posts on Cisco’s Tidal Enterprise Scheduler version 6.
Tags: Big Data, Cloudera, enterprise scheduler, Hadoop, MapR, mapreduce, sqoop, tes, Tidal
A little over a month ago we had the chance to present a session, in conjunction with Eric Sammer of Cloudera, on Designing Hadoop for the Enterprise Data Center and our findings at Strata + Hadoop World 2012.
Taking a look back, we started this initiative in early 2011 as demand for Hadoop was on the rise and we began to notice a lot of confusion among our customers about what Hadoop would mean for their Data Center Infrastructure. This led us to our first presentation at Hadoop World 2011, where we shared an extensive testing effort with the goal of characterizing what happens when you run a Hadoop MapReduce job. Further, we illustrated how different network and compute considerations would change these characteristics. As Hadoop deployments gained traction in the enterprise, we saw the need to develop a network reference architecture for Hadoop. This led us to another round of testing, concluded earlier this year and presented at Hadoop Summit, which examined design considerations such as architecture, availability, capacity, scale, and management.
Finally, this brings us to last month and our presentation at Strata + Hadoop World 2012. We met with Cloudera in the months leading up to the event and discussed what we could share with the Hadoop community. We reviewed all the previous rounds of testing and concluded that, combined with customer experiences and another round of testing that examined multi-tenant environments, we could put together a talk that really addressed the fundamental design considerations of Hadoop in the Enterprise Data Center.
We went into depth to examine the network traffic considerations with Hadoop in the Data Center.
Tags: Big Data, Cloudera, data center, Eric Sammer, Hadoop, Hadoop World, Strata
You may have heard that the digital universe is measured in petabytes and that global IP traffic is in the hundreds of exabytes. These are mind-bogglingly large metrics. Big data analytics can play a crucial role in making datasets of this scale usable -- improving everything from operational efficiency to customer experience to prediction accuracy. While Cisco is the global leader in networking -- did you know that 85% of the estimated 500 exabytes of global IP traffic in 2012 will pass through Cisco devices? -- the company also builds an innovative family of unified computing products. This enables the company to provide a complete infrastructure solution for big data applications, including compute, storage, connectivity, and unified management, that reduces complexity, improves agility, and radically lowers cost of ownership.
To meet a variety of big data platform demands (Hadoop, NoSQL databases, massively parallel processing databases, etc.), Cisco offers a comprehensive solution stack: the Common Platform Architecture (CPA) for Big Data, which includes compute, storage, connectivity, and unified management. Unique to this architecture are its seamless data integration and management integration capabilities with the enterprise application ecosystem, including Oracle RDBMS/RAC, Microsoft SQL Server, SAP, and others. See Figure 1.
The CPA is built using the following components:
- Cisco UCS 6200 Series Fabric Interconnects provide high-speed, low-latency connectivity for servers and centralized management for all connected devices through UCS Manager. Deployed in redundant pairs, they offer full redundancy, active-active performance, and exceptional scalability for the large number of nodes typical in big data clusters. UCS Manager enables rapid and consistent server integration using service profiles, ongoing system maintenance activities such as firmware updates across the entire cluster as a single operation, advanced monitoring, and the option to raise alarms and send notifications about the health of the entire cluster.
- Cisco UCS 2200 Series Fabric Extenders act as remote line cards for the Fabric Interconnects, providing highly scalable and extremely cost-effective connectivity for a large number of nodes.
- Cisco UCS C240 M3 Rack-Mount Servers are 2-RU servers designed for a wide range of compute, I/O, and storage capacity demands. Each server is powered by two Intel Xeon E5-2600 series processors and supports up to 768 GB of main memory (typically 128 GB or 256 GB for big data applications) and up to 24 SFF disk drives in the performance-optimized option or 12 LFF disk drives in the capacity-optimized option (a rough capacity sketch follows this list). Each server also features the Cisco UCS virtual interface card (VIC), optimized for high-bandwidth, low-latency cluster connectivity with support for up to 256 virtual devices.
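As a rough illustration of the trade-off between the two drive options, the sketch below estimates usable HDFS capacity for a hypothetical rack of C240 M3 servers. The node count, drive sizes, replication factor, and overhead figure are assumptions for illustration, not Cisco sizing guidance.

```python
# Back-of-the-envelope HDFS capacity estimate for a rack of UCS C240 M3 nodes.
# All inputs below are illustrative assumptions, not official sizing numbers.

NODES = 16                 # assumed servers per rack
REPLICATION = 3            # default HDFS replication factor
HDFS_USABLE_FRACTION = 0.75  # assume ~25% reserved for OS, temp, and shuffle space

configs = {
    "performance (24 x SFF)": {"drives": 24, "drive_tb": 1.0},  # e.g. 1 TB SFF drives
    "capacity (12 x LFF)":    {"drives": 12, "drive_tb": 4.0},  # e.g. 4 TB LFF drives
}

for name, cfg in configs.items():
    raw_tb = NODES * cfg["drives"] * cfg["drive_tb"]
    usable_tb = raw_tb * HDFS_USABLE_FRACTION / REPLICATION
    print(f"{name}: raw {raw_tb:.0f} TB, ~{usable_tb:.0f} TB usable after replication")
```

The point of the exercise is that replication and working space shrink raw disk totals considerably, which is why the choice between the performance and capacity options depends on the workload rather than the headline drive count.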
Tags: Big Data, Cloudera, Common Platform Architecture, CPA, Greenplum MR, Hadoop, MapR, MarkLogic, MPP Database, NoSQL, Oracle NoSQL Database, ParAccel
Big Data’s move into the enterprise has generated a lot of buzz: why big data, what are its components, and how do you integrate it? The “why” was covered in a two-part blog (Part 1 | Part 2) by Sean McKeown last week. To help answer the remaining questions, I presented Hadoop Network and Architecture Considerations last week at the sold-out Hadoop World event in New York. The goal was to examine what it takes to integrate Hadoop into enterprise architectures by demystifying what happens on the network and identifying the key network characteristics that affect Hadoop clusters.
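One network characteristic worth calling out is the shuffle phase, where intermediate map output crosses the network to the reducers. The sketch below is a simplified, assumed model of that traffic; the input size, map-output ratio, compression factor, and node count are hypothetical and do not reproduce the test results discussed in the presentation.

```python
# Simplified, assumed model of MapReduce shuffle traffic on the cluster network.
# All numbers are illustrative; real jobs vary widely with data and codec choices.

input_tb = 10.0               # assumed job input size in TB
map_output_ratio = 0.5        # assumed intermediate bytes per byte of input
compression = 0.4             # assumed effective compression on map output
nodes = 64                    # assumed worker nodes
local_fraction = 1.0 / nodes  # rough share of reduce input that stays node-local

intermediate_tb = input_tb * map_output_ratio * compression
network_tb = intermediate_tb * (1.0 - local_fraction)

print(f"Intermediate data produced by the map phase: {intermediate_tb:.2f} TB")
print(f"Data crossing the network during shuffle:   {network_tb:.2f} TB")
```

Even this crude model shows why east-west bandwidth and oversubscription ratios matter more to Hadoop than the north-south traffic patterns most enterprise networks were designed around.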
The presentation includes results from an in-depth testing effort to examine what Hadoop means to the network. We went through many rounds of testing that spanned several months (special thanks to Cloudera for their guidance).
Tags: Big Data, Cisco, Cloudera, data center, Hadoop