Big Data is better than a sharp stick in the eye. I can say this with great authority, since I missed the first half of Strata+Hadoop World 2015 in San Jose because of the latter. But eye injuries have never kept me offline for long, and I was able to follow online what I didn't see in person. I was very happy to make it to the show on Friday, and even got a seat at about row 6 in the main hall for the keynotes.
Big Data is not just about gathering tons of data: the digital exhaust from the Internet, social media, and customer records. The real value lies in being able to analyze that data to achieve a desired business outcome.
Those of us who follow the Big Data market closely never lack for something new to talk about. There is always a story about how a business is using Big Data in a different way, or about some new breakthrough in the expansive big data ecosystem. The good news for all of us is that we have clearly only scratched the surface of the Big Data opportunity!
With the increasing momentum of the Internet of Everything (IoE) market transition, there will be 50 billion devices connected to the Internet by 2020, just five years from now. As billions of new people, processes, and things become connected, each connection will become a source of potentially powerful data for businesses and the public sector. Organizations that can unlock the intelligence in this data can create new sources of competitive advantage, not just from more data but from better access to better data.
What we haven't heard about yet are examples of enterprises that are applying the power of this data pervasively across their organizations, giving them a competitive edge in marketing, supply chain, manufacturing, human resources, customer support, and many other departments. An enterprise that can apply the power of Big Data throughout its organization can create multiple, simultaneous sources of ongoing innovation, each one a perpetually renewable competitive edge. Looking forward, the companies that accomplish this will be the ones setting the pace for the competition to follow.
Cisco has been working to make this vision of pervasive use of Big Data within enterprises a reality. We'd like to share this vision with you in an upcoming blog series and executive Webcast entitled 'Unlock Your Competitive Edge with Cisco Big Data Solutions,' which will air on October 21st at 9:00 AM PT.
I have the honor of kicking off the multi-part blog series today. Each blog will focus on a specific Cisco solution our customers can use to unlock the power of their big data, enterprise-wide, and gain a competitive edge. I'm going to start the discussion by highlighting the infrastructure implications of Big Data in the Internet of Everything (IoE) era, focusing initially on the Cisco Unified Computing System.
Enterprises that want to make strategic use of data throughout their organizations will need to take advantage of all types of data. As IoE increasingly takes root, organizations will be able to access data from virtually anywhere in their value chain. No longer restricted to small sets of structured, historical data, they'll have more comprehensive and even real-time data, including video surveillance information, social media output, and sensor data, that allows them to monitor behavior, performance, and preferences. These are just a few examples, but they underscore the fact that not all data is created equal. Real-time data coming in from a sensor may be valuable for only minutes, or even seconds, so it is critical to be able to act on that intelligence as quickly as possible. From an infrastructure standpoint, that means enterprises must be able to place computing resources as close as possible to the many sources and users of data. At the same time, historical data will continue to be critical to Big Data analytics.
Cisco encourages our customers to take a long-term view and select a Big Data infrastructure that is distributed and designed for high scalability, management automation, outstanding performance, low TCO, and the comprehensive security approach needed for the IoE era. That infrastructure must also be open, because there is tremendous innovation going on in this industry, and enterprises will want to take full advantage of it.
One of the foundational elements of our Big Data infrastructure is the Cisco Unified Computing System (UCS). UCS integrated infrastructure uniquely combines server, network, and storage access, and has recently claimed the #1 x86 blade server market share position in the Americas. The same innovation that propelled us to that leading position is now being applied directly to Big Data workloads. With its highly efficient infrastructure, UCS lets enterprises manage up to 10,000 UCS servers as a single pool of resources, so they can support the largest data clusters.
Because enterprises will ultimately need to capture intelligence from both data at rest in the data center and data at the edge of the network, Cisco's broad portfolio of UCS systems gives our customers the flexibility to process data where it makes the most sense. For instance, our UCS C240 rack server has been extremely popular for Hadoop-based Big Data deployments at the data center core, while Cisco's recently introduced UCS Mini is designed to process data at the edge of the network.
Because the entire UCS portfolio utilizes the same unified architecture, enterprises can choose the right compute configuration for the workload, with the advantage of using the same powerful management and orchestration tools to speed deployment, maximize availability, and significantly lower operating expenses. Being able to leverage UCS Manager and Service Profiles, Unified Fabric and SingleConnect technology, our virtual interface card (VIC) technology, and industry-leading performance really sets Cisco apart from the competition.
So, please consider this just an introduction to the first component of Cisco's bigger Big Data story. To hear more, please plan to attend our upcoming webcast, 'Unlock Your Competitive Edge With Cisco Big Data Solutions,' on October 21st.
Every Tuesday and Thursday from now until October 21st, we’ll post another blog in the series to provide you with additional details of Cisco’s full line of products, solutions and services.
View additional blogs in the series:
10/14: Analytics for an IoE World
Please let me know if you have any comments or questions here, or reach me via Twitter at @CicconeScott.
Tags: ACI, analytics, Big Data, blade server, Blade Servers, Cisco UCS, Cisco UCS C240 M3 Rack Server, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Cloudera, data virtualization, Hadoop, Hortonworks, Internet of Everything, IoE, MapR, rack server, security, UCS Central, UCS service profiles
Big Data remains one of the hottest topics in the industry due to the actual dollar value that businesses are deriving from making sense of tons of structured and unstructured data. Virtually every field is leveraging a data-driven strategy as people, process, data, and things are increasingly being connected (the Internet of Everything). New tools and techniques are being developed that can mine vast stores of data to inform decision making in ways that were previously unimagined. The fact that we can derive more knowledge by joining related information and recognizing correlations can inform and enrich numerous aspects of everyday life. There's a good reason why Big Data is so hot!
This year at Hadoop Summit, Cisco invites you to learn how to unlock the value of Big Data. Unprecedented data creation opens the door to responsive applications and emerging analytics techniques, and businesses need a better way to analyze data. Cisco will be showcasing infrastructure innovations from both the Cisco Unified Computing System (UCS) and Cisco Application Centric Infrastructure (ACI). Cisco's solution for deploying big data applications can help customers make informed decisions, act quickly, and achieve better business outcomes.
Cisco is partnering with leading software providers to offer a comprehensive infrastructure and management solution, based on Cisco UCS, to support our customers' big data initiatives. Taking advantage of Cisco UCS's fabric-based infrastructure, Cisco can deliver significant advantages for big data workloads.
Tags: ACI, Big Data, blade server, Blade Servers, Cisco UCS, Cisco UCS C240 M3 Rack Server, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Cloudera, Hadoop, Hortonworks, MapR, rack server, UCS Central, UCS service profiles
By now it is clear that big data analytics opens the door to unprecedented analytic opportunities for business innovation, customer retention and profit growth. However, a shortage of data scientists is creating a bottleneck as organizations move from early big data experiments into larger scale adoption. This constraint limits big data analytics and the positive business outcomes that could be achieved.
It’s All About the Data
As every data scientist will tell you, the key to analytics is data. The more data the better, including big data as well as the myriad other data sources both in the enterprise and across the cloud. But accessing and massaging this data, in advance of data modeling and statistical analysis, typically consumes 50% or more of any new analytic development effort.
• What would happen if we could simplify the data aspect of the work?
• Would that free up data scientists to spend more time on analysis?
• Would it open the door for non-data scientists to contribute to analytic projects?
SQL is the key. Because of its ease and power, it has been the predominant method for accessing and massaging data for the past 30 years. Nearly all non-data scientists in IT can use SQL to access and massage data, but very few know MapReduce, the programming model traditionally used to access data in Hadoop.
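To see why the skills gap matters, compare the two approaches on the simplest possible aggregation, a word count. The declarative SQL below is what an analyst would hand to a SQL-on-Hadoop engine such as Hive; the map and reduce phases are a toy pure-Python simulation of the MapReduce model (not real Hadoop API code), included only to show how much procedural plumbing the model demands for the same result:

```python
from itertools import groupby
from operator import itemgetter

# Declarative: the one-line query a SQL user would write.
HIVE_QUERY = "SELECT word, COUNT(*) FROM words GROUP BY word"

# Procedural: the same aggregation in the MapReduce model,
# simulated in pure Python for illustration.
def map_phase(records):
    # Emit a (key, 1) pair per word, as a Hadoop mapper would.
    for word in records:
        yield (word, 1)

def reduce_phase(pairs):
    # Shuffle/sort by key, then sum the counts for each key,
    # as the reduce side of a MapReduce job would.
    for key, group in groupby(sorted(pairs, key=itemgetter(0)),
                              key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

records = ["cisco", "hadoop", "cisco", "ucs", "hadoop", "cisco"]
counts = dict(reduce_phase(map_phase(records)))
print(counts)  # {'cisco': 3, 'hadoop': 2, 'ucs': 1}
```

Even in this tiny sketch, the procedural version requires the developer to think about emitting key/value pairs and grouping by key; the SQL version states only the desired result, which is why SQL access to Hadoop opens the door to so many more users.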
How Data Virtualization Helps
“We have a multitude of users…from BI to operational reporting, they are constantly coming to us requesting access to one server or another…we now have that one central place to say ‘you already have access to it’ and they immediately have access rather than having to grant access outside of the tool” -Jason Hull, Comcast
Data virtualization offerings, like Cisco’s, can help organizations bridge this gap and accelerate their big data analytics efforts. Cisco was the first data virtualization vendor to support Hadoop integration with its June 2011 release. This standardized SQL approach augments specialized MapReduce coding of Hadoop queries. By simplifying access to Hadoop data, organizations could for the first time use SQL to include big data sources, as well as enterprise, cloud and other data sources, in their analytics.
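The federation idea at the heart of data virtualization can be sketched in miniature: rows that originated in a Hadoop cluster and rows from an enterprise system are exposed behind a single SQL surface, so one standard query spans both. The sketch below simulates this with an in-memory SQLite database; the table names, customer data, and click counts are invented for illustration, and a real virtualization layer would federate live sources rather than copy them:

```python
import sqlite3

# Pretend sources: aggregates computed in Hadoop, and a CRM table
# from an enterprise database (all values are made up for the sketch).
hadoop_clicks = [("c1001", 42), ("c1002", 7)]
crm_customers = [("c1001", "Acme Corp"), ("c1002", "Globex")]

# Stand-in for the virtualization layer: both sources appear as
# tables behind one SQL interface.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (cust_id TEXT, clicks INT)")
conn.execute("CREATE TABLE customers (cust_id TEXT, name TEXT)")
conn.executemany("INSERT INTO clicks VALUES (?, ?)", hadoop_clicks)
conn.executemany("INSERT INTO customers VALUES (?, ?)", crm_customers)

# One standard SQL statement joins big data output with enterprise data.
rows = conn.execute(
    "SELECT c.name, k.clicks FROM customers c "
    "JOIN clicks k ON c.cust_id = k.cust_id "
    "ORDER BY k.clicks DESC"
).fetchall()
print(rows)  # [('Acme Corp', 42), ('Globex', 7)]
```

The point of the sketch is the single query surface: the analyst writes ordinary SQL and never needs to know which rows came from Hadoop and which came from the CRM system.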
In February 2012, Cisco became the first data virtualization vendor to enable MapReduce programs to easily query virtualized data sources on demand, with high performance. This allowed enterprises to extend MapReduce analyses beyond Hadoop stores to include diverse enterprise data previously integrated by the Cisco Information Server.
In 2013, Cisco maintained its big data integration leadership by updating its support for Hive access to the leading Hadoop distributions, including Apache Hadoop, Cloudera Distribution (CDH), and the Hortonworks Data Platform (HDP). In addition, Cisco now also supports access to Hadoop through HiveServer2 and to Cloudera CDH through Impala.
Others, beyond Cisco, recognize this beneficial trend. In fact, Rick van der Lans, noted Data Virtualization expert and author, recently blogged on future developments in this area in Convergence of Data Virtualization and SQL-on-Hadoop Engines.
So if your organization’s big data efforts are slowed by a shortage of data scientists, consider data virtualization as a way to break the bottleneck.
Tags: apache, Big Data, Cisco Data Center, Cisco Data virtualization, Cloudera, Composite Software, data integration, data virtualization, Hadoop, HiveServer2, Hortonworks, mapreduce, query, SQL, video
The industry's first reference architecture for Hadoop with advanced access control and encryption with IDH; the first flash-enhanced reference architecture for Hadoop, demonstrated using YCSB with MapR; the industry's first validated and certified solution for real-time Big Data analytics with SAP HANA; and the Unleashing IT big data special edition.
Built on our vision of shared infrastructure and unified management for enterprise applications, the Cisco UCS Common Platform Architecture (CPA) for Big Data has become a popular choice for enterprise Big Data deployments, widely adopted in the finance, healthcare, service provider, entertainment, insurance, and public sectors. The new Cisco UCS CPA v2 improves both performance and capacity, featuring the Intel Xeon E5-2600 v2 family of processors, industry-leading storage density, and the industry's first transparent cache acceleration for Big Data.
The Cisco UCS CPA v2 offers a choice of infrastructure options, including “Performance Optimized”, “Balanced”, “Capacity Optimized”, and “Capacity Optimized with Flash” to support a range of workload needs.
Up to 160 servers (3,200 cores, 7.6 PB of storage) are supported in a single switching/UCS domain. Scaling beyond 160 servers can be achieved by interconnecting multiple UCS domains using Nexus 6000/7000 Series switches, scaling to thousands of servers and hundreds of petabytes of storage, all managed from a single pane using UCS Central, whether in one data center or distributed globally.
The Cisco UCS CPA v2 solutions are available through the Cisco UCS Solution Accelerator Paks program: designed for rapid deployment, tested and validated for performance, and optimized for cost of ownership. Start with any configuration and scale as your workload demands:
• Performance Optimized half-rack (UCS-SL-CPA2-P): ideal for MPP databases and scale-out data analytics
• Performance and Capacity Balanced rack (UCS-SL-CPA2-PC): ideal for high-performance Hadoop and NoSQL deployments
• Capacity Optimized rack (UCS-SL-CPA2-C): for when capacity matters most
• Capacity Optimized with Flash rack (UCS-SL-CPA2-CF): the industry's first transparent caching option for Hadoop and NoSQL
Cisco supports leading Hadoop and NoSQL distributions, including Cloudera, Hortonworks, Intel, MapR, Oracle, Pivotal, and others. For more information, visit the Cisco Big Data Portal and the Big Data Design Zone, which offers Cisco Validated Designs (CVDs): pretested and validated architectures that accelerate time to value for customers while reducing risks and deployment challenges.