Industry’s first reference architecture for Hadoop with advanced access control and encryption, with IDH; the first flash-enhanced reference architecture for Hadoop, demonstrated using YCSB with MapR; the industry’s first validated and certified solution for real-time Big Data analytics with SAP HANA; and the Unleashing IT Big Data special edition
Building on our vision of shared infrastructure and unified management for enterprise applications, the Cisco UCS Common Platform Architecture (CPA) for Big Data has become a popular choice for enterprise Big Data deployments, widely adopted in the finance, healthcare, service provider, entertainment, insurance, and public sectors. The new Cisco UCS CPA v2 improves both performance and capacity, featuring the Intel Xeon E5-2600 v2 family of processors, industry-leading storage density, and the industry’s first transparent cache acceleration for Big Data.
The Cisco UCS CPA v2 offers a choice of infrastructure options, including “Performance Optimized”, “Balanced”, “Capacity Optimized”, and “Capacity Optimized with Flash” to support a range of workload needs.
Up to 160 servers (3,200 cores, 7.6 PB of storage) are supported in a single switching/UCS domain. Scaling beyond 160 servers can be achieved by interconnecting multiple UCS domains using Nexus 6000/7000 Series switches, scaling to thousands of servers and hundreds of petabytes of storage, managed from a single pane using UCS Central, whether in one data center or distributed globally.
The Cisco UCS CPA v2 solutions are available through the Cisco UCS Solution Accelerator Paks program, designed for rapid deployment, tested and validated for performance, and optimized for cost of ownership: the Performance Optimized half rack (UCS-SL-CPA2-P), ideal for MPP databases and scale-out data analytics; the Performance and Capacity Balanced rack (UCS-SL-CPA2-PC), ideal for high-performance Hadoop and NoSQL deployments; the Capacity Optimized rack (UCS-SL-CPA2-C), for when capacity matters most; and the Capacity Optimized with Flash rack (UCS-SL-CPA2-CF), which offers the industry’s first transparent caching option for Hadoop and NoSQL. Start with any configuration and scale as your workload demands.
Cisco supports leading Hadoop and NoSQL distributions, including Cloudera, Hortonworks, Intel, MapR, Oracle, Pivotal, and others. For more information, visit the Cisco Big Data Portal and the Big Data Design Zone, which offers Cisco Validated Designs (CVDs): pretested and validated architectures that accelerate time to value for customers while reducing risks and deployment challenges.
Cisco UCS Common Platform Architecture Version 2 for Big Data
Cisco Launches the First Flash-Enhanced Solution for Hadoop
Simplifying the Deployment of Real-time Big Data Analytics — UCS + SAP HANA
Also see Maximizing Big Data Benefits with MapR and Informatica on Cisco UCS
Tags: Cisco UCS CPA, Cisco UCS Solution Accelerator Paks, Cloudera, Hortonworks, Intel Hadoop, MapR, Pivotal HD, SAP HANA
With enough hype to rival even the most popular of Super Bowls, Big Data experts will converge on New York City in just a couple of weeks! But Big Data has good reason for all the hype, as businesses continue to find new ways to leverage the insights derived from vast data pools that are growing at an exponential rate. A big reason for this is the ability to use Hadoop, with the Hadoop Distributed File System (HDFS) and MapReduce, to analyze data very quickly, completing queries in minutes or less that previously were not possible at all. We’ve only begun to scratch the surface of the financial returns around Hadoop and the infrastructure to support Hadoop deployments, but one thing we do know: it’s going to be big, and it will continue to get bigger!
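To give a flavor of the MapReduce model mentioned above, here is a minimal word-count sketch in the Hadoop Streaming style. This is an illustrative stand-in, not Cisco or MapR code: on a real cluster the map and reduce steps would run as separate processes launched by the streaming jar, with Hadoop handling the sort between them.

```python
import sys
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    # Map step: emit a (word, 1) pair for every word in the input
    for line in lines:
        for word in line.strip().split():
            yield word.lower(), 1

def reducer(pairs):
    # Reduce step: Hadoop delivers mapper output grouped by key;
    # here we sort locally to simulate that, then sum per word
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    # Local stand-in for a streaming job: pipe text through map, then reduce
    for word, n in sorted(reducer(mapper(sys.stdin))):
        print(word, n)
```

The same two functions, unchanged in spirit, scale from this single-process toy to thousands of nodes, which is the core appeal of the model.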
So how does Cisco fit into this picture?
Cisco is partnering with leading software providers to offer a comprehensive infrastructure and management solution to support customer big data initiatives, including Hadoop, NoSQL, and Massively Parallel Processing (MPP) analytics. Leveraging the advantages of fabric computing, the Cisco UCS Common Platform Architecture (CPA) delivers exceptional performance, capacity, management simplicity, and scale to help customers derive value more quickly and with less management overhead for the most challenging big data deployments.
Cisco UCS Common Platform Architecture for big data enables rapid deployment, predictable performance, and massive scale without the need for complex layers of switching infrastructure. In addition, the architecture offers unique data and management integration with enterprise applications hosted on Cisco UCS. This allows big data and enterprise applications to co-exist within a single management domain that simplifies data movement between applications and eliminates the need for unique technology silos in the data center. You can also check out my previous blog, Top Three Reasons Why Cisco UCS is a Better Platform for Big Data, to get an idea of what we’ll be sharing at the show.
Have you considered Cisco UCS for your Big Data projects? I’d like to invite you to come and hear more in a couple weeks at Strata Hadoop World in New York City. We’ll have a number of demos and experts on hand to answer all of your questions.
In addition, Cisco and Cloudera are teaming up to offer you a chance to win some exciting prizes by joining our demo crawl program. Stop by either the Cisco booth (#3) or the Cloudera booth (#403) to learn more.
Stop by and say hello, and let me know if you have any comments or questions, either in person or on Twitter at @CicconeScott.
Tags: Big Data, blade server, Blade Servers, Cisco UCS, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Cisco Unified Management, Cloudera, Hadoop, Hortonworks, Intel, MapR, rack server, UCS Manager, UCS service profiles
On June 20th, Cisco and MapR will join with Forrester Research Big Data analyst Mike Gualtieri to discuss “productionizing” Hadoop. But what does that mean?
Mike has developed a list of 7 architectural best practices that will help your enterprise quickly and easily develop or move your Hadoop environment into standard data center processes. Following his guidelines, you can get your Hadoop environment up and running in no time, avoiding the headaches and pitfalls that are unique to Big Data environments.
Joining Mike will be MapR CMO Jack Norris, who will discuss MapR’s best practices and how they line up with the Big 7 from Forrester.
Finally, Cisco IT will showcase a MapR production environment and explain how they have streamlined complex Big Data workloads, automatically moving data into their Hadoop environment and running analytics out of it.
Keeping the Hadoop production environment up and running smoothly is the name of the game here, and in the face of resource constraints, Cisco IT has standardized on Cisco Tidal Enterprise Scheduler—with its seamless integrations into MapR, Hive, and Sqoop—giving your enterprise the ability to “productionize” complex workloads from any data source.
Join us as we walk you through the 7 architectural best practices for Big Data, MapR and Cisco Tidal Enterprise Scheduler.
Read More »
Tags: Big Data, cisco live, forrester, Hadoop, MapR, Tidal Enterprise Scheduler, unified management, workload automation
Guest Blog by Jack Norris
Jack is responsible for worldwide marketing at MapR Technologies, the leading provider of an enterprise-grade Hadoop platform. He has over 20 years of enterprise software marketing experience and has demonstrated success ranging from defining new markets for small companies to increasing sales of new products for large public companies. Jack’s broad experience includes launching and establishing analytic, virtualization, and storage companies, and leading marketing and business development for an early-stage cloud storage software provider.
Big Data is changing the competitive dynamics for organizations across a range of operational use cases. Operational intelligence refers to applications that combine real-time, dynamic analytics with business operations to deliver actionable insights. Operational intelligence requires high performance. “Performance” is a word used quite liberally, and it means different things to different people. Everyone wants something faster. When was the last time you said, “No, give me the slow one”?
When it comes to operations, performance is about the ability to take advantage of market opportunities as they arise. Doing so requires the ability to quickly monitor what is happening: both real-time data feeds and the ability to react quickly. The beauty of Apache Hadoop, and specifically MapR’s platform, is that data can be ingested as a real-time stream, analysis can be performed directly on the data, and automated responses can be executed. This is true for a range of applications across organizations, from advertising platforms, to online retail recommendation engines, to fraud and security detection.
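The ingest-analyze-respond loop described above can be sketched in a few lines of plain Python. Everything here is a hypothetical stand-in — the event shape, the rolling-mean rule, and the threshold are illustrative only — but it shows the pattern a real-time fraud or security pipeline follows: watch a stream, compare each event against recent history, and trigger an automated response when something looks anomalous.

```python
from collections import deque

def detect_anomalies(events, window=5, threshold=3.0):
    """Flag events whose value exceeds `threshold` times the rolling
    mean of the previous `window` events (a toy stand-in for the
    real-time analytics the post describes)."""
    recent = deque(maxlen=window)   # sliding window of recent values
    flagged = []
    for event in events:
        value = event["value"]
        if recent and value > threshold * (sum(recent) / len(recent)):
            flagged.append(event)   # automated response hook goes here
        recent.append(value)
    return flagged

# Simulated real-time feed: steady traffic, then a sudden spike
stream = [{"id": i, "value": v}
          for i, v in enumerate([10, 11, 9, 10, 95, 10])]
print(detect_anomalies(stream))     # flags the event with value 95
```

In production the `events` iterable would be a live stream feeding the cluster, and the response hook would do something concrete — block a transaction, raise an alert — rather than collect a list.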
When looking at harnessing Big Data, organizations need to realize that multiple applications will need to be supported. Regardless of which application you introduce first, more will quickly follow. Not all Hadoop distributions are created equal. Or more precisely, most Hadoop distributions are very similar with only minor value-added services separating them. The exception is MapR. With the best of the Hadoop community updates coupled with MapR’s innovations, the broadest set of applications can be supported including mission-critical applications that require a depth and breadth of enterprise-grade Hadoop features.
Read More »
Tags: Big Data, enterprise scheduler, Hadoop, informatica, job scheduling, MapR, Tidal Enterprise Scheduler, UCS, workload automation
When customers look to deploy their Hadoop solutions, one of the first questions they ask is: which distribution should we run it on? For many enterprise customers, the answer has been MapR. For those of you not familiar with MapR, they offer an enterprise-grade Hadoop software solution that provides customers with a robust set of tools for running Big Data workloads. A few months ago, Cisco announced the release of Tidal Enterprise Scheduler (TES) 6.1 and, with it, integrations for Hadoop software distributions such as Cloudera and MapR, as well as adapters to support Sqoop, Data Mover (HDFS), Hive, and MapReduce jobs, all performed through the same TES interface as their other enterprise workloads.
Today, I’m pleased to announce that with the upcoming 6.1.1 release of Cisco’s Tidal Enterprise Scheduler, Cisco’s MapR integration will deepen further. Leveraging Big Data for competitive advantage and the rise of innovative product offerings are changing how enterprises store, manage, and analyze their most critical asset: data. Enterprises need solutions like Hadoop to process large amounts of data, but the difficulty of managing Hadoop clusters will continue to grow. Cisco Tidal Enterprise Scheduler enables more efficient management of those environments: it is an intelligent solution for integrating Big Data jobs into an existing data center infrastructure. TES has adapters for a range of enterprise applications, including SAP, Informatica, Oracle, PeopleSoft, MSSQL, JD Edwards, and many others.
Stay tuned for additional blog posts on Cisco’s Tidal Enterprise Scheduler version 6.
Tags: Big Data, Cloudera, enterprise scheduler, Hadoop, MapR, mapreduce, sqoop, tes, Tidal