At the June Hadoop Summit in San Jose, Hadoop was re-affirmed as the data center “killer app,” riding an avalanche of Enterprise Data, which is growing 50x annually through 2020. According to IDC, the Big Data market itself is growing six times faster than the rest of IT. Every major tech company, old and new, is now driving Hadoop innovation, including Google, Yahoo, Facebook, Microsoft, IBM, Intel and EMC – building value-added solutions on open source contributions by Hortonworks, Cloudera and MapR. Cisco’s surprisingly broad portfolio will be showcased at Strataconf in New York on Oct. 15 and at our October 21st executive webcast. In this third installment of a blog series, we preview the power of Application Centric Infrastructure for the emerging Hadoop ecosystem.
Why Big Data?
Organizations of all sizes are finding insightful and creative use cases that leverage their own business data.
The use cases grow quickly as businesses realize their “ability to integrate all of the different sources of data and shape it in a way that allows business leaders to make informed decisions.” Hadoop enables customers to gain insight from both structured and unstructured data. Data types and sources can include: 1) business applications (OLTP, ERP, CRM systems), 2) documents and emails, 3) web logs, 4) social networks, 5) machine/sensor-generated data, and 6) geolocation data.
IT operational challenges
Even modest-sized jobs require clusters of 100 server nodes or more for seasonal business needs. While Hadoop is designed to scale out on commodity hardware, most IT organizations face the challenge of extreme demand variations in bare-metal workloads (which cannot be virtualized). Furthermore, these workloads are requested by multiple Lines of Business (LOB), with increasing urgency and frequency. Ultimately, 80% of the cost of managing Big Data workloads will be OpEx. How do IT organizations finish jobs quickly and re-deploy resources? How do they improve utilization? How do they maintain security and isolation of data in a shared production infrastructure?
And with the release of Hadoop 2.0 almost a year ago, cluster sizes are growing due to:
- Expanding data sources and use-cases
- A mixture of different workload types on the same infrastructure
- A variety of analytics processes
In Hadoop 1.x, compute performance was paramount. But in Hadoop 2.x, network capabilities will be the focus, due to larger clusters, more data types, more processes and mixed workloads. (see Fig. 1)
ACI powers Hadoop 2.x
Cisco’s Application Centric Infrastructure is a new operational model enabling Fast IT. ACI provides a common policy-based programming approach across the entire ACI-ready infrastructure, beginning with the network and extending to all its connected end points. This drastically reduces cost and complexity for Hadoop 2.0. ACI uses Application Policy to:
- Dynamically optimize cluster performance in the network
- Redeploy resources automatically to new workloads, improving utilization
- Ensure isolation of users and data as resource deployments change
Let’s review each of these in order:
Cluster Network Performance: It’s crucial to improve traffic latency and throughput across the network, not just within each server.
- Hadoop copies and distributes data across servers to maximize reliability on commodity hardware.
- The large collection of processes in Hadoop 2.0 is usually spread across different racks.
- Mixed workloads in Hadoop 2.0, support interactive and real-time jobs, resulting in the use of more on-board memory and different payload sizes.
As a result, server I/O bandwidth is increasing, which will place heavy loads on 10 Gigabit networks. ACI policy works with deep telemetry embedded in each Nexus 9000 leaf switch to monitor and adapt to network conditions.
Using policy, ACI can dynamically 1) load-balance Big Data flows across racks on alternate paths and 2) prioritize small data flows ahead of large flows (which occur much less frequently but consume bandwidth and buffers). Both of these can dramatically reduce network congestion. In lab tests, we are seeing flow completion nearly an order of magnitude faster (for some mixed workloads) than without these policies enabled. ACI can also estimate and prioritize job completion. This will become important as Big Data workloads grow pervasive across the enterprise. For a complete discussion of ACI’s performance impact, please see the detailed presentation by Samuel Kommu, chief engineer at Cisco, on optimizing Big Data workloads.
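The small-flow-first idea above can be sketched in a few lines. This is purely an illustrative model, not Cisco ACI code: the flow names, sizes, and the "mice vs. elephant" threshold are all hypothetical, and a real switch applies this logic in hardware per packet, not per job.

```python
import heapq

MICE_THRESHOLD = 1_000_000  # bytes; flows smaller than this jump the queue (illustrative value)

def schedule(flows):
    """Return flow names in transmission order: small flows first,
    then large flows, ties broken by arrival order."""
    queue = []
    for arrival, (name, size) in enumerate(flows):
        # Small flows get priority class 0, large flows class 1.
        priority = 0 if size < MICE_THRESHOLD else 1
        heapq.heappush(queue, (priority, arrival, name))
    return [name for _, _, name in (heapq.heappop(queue) for _ in range(len(queue)))]

order = schedule([
    ("shuffle-block", 128_000_000),  # large HDFS shuffle transfer
    ("heartbeat", 2_000),            # small control message
    ("query-result", 450_000),       # small interactive result
])
print(order)  # ['heartbeat', 'query-result', 'shuffle-block']
```

Because the latency-sensitive control and interactive flows no longer wait behind the bulk transfer, their completion times improve sharply while the large flow loses almost nothing.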
Resource Utilization: In general, the bigger the cluster, the faster the completion time. But since Big Data jobs are initially infrequent, CIOs must balance responsiveness against utilization. It is simply impractical for many mid-sized companies to dedicate large clusters for the occasional surge in Big Data demand. ACI enables organizations to quickly redeploy cluster resources from Hadoop to other sporadic workloads (such as CRM, Ecommerce, ERP and Inventory) and back. For example, the same resources could run Hadoop jobs nightly or weekly when other demands are lighter. Resources can be bare-metal or virtual depending on workload needs. (see Figure 2)
How does this work? ACI uses application policy profiles to programmatically re-provision the infrastructure. IT can use a different profile to describe each application’s needs, including those of the Hadoop ecosystem. The profile contains the application’s network policies, which the Application Policy Infrastructure Controller translates into a complete network topology. The same profile contains compute and storage policies used by other tools, such as Cisco UCS Director, to provision compute and storage.
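To make the profile idea concrete, here is a minimal sketch of what an application policy profile for a Hadoop cluster might contain. The field names and values are hypothetical, invented for illustration; they do not reflect the actual APIC object model or schema.

```python
import json

# Hypothetical application policy profile for a Hadoop deployment.
# A controller would translate the "network" section into topology and
# contracts, while tools like UCS Director would consume "compute"/"storage".
hadoop_profile = {
    "application": "hadoop-analytics",
    "network": {
        "tiers": ["namenode", "datanode", "client"],
        "contracts": [
            {"from": "client", "to": "namenode", "ports": [8020]},
            {"from": "datanode", "to": "namenode", "ports": [8020]},
            {"from": "datanode", "to": "datanode", "ports": [50010]},
        ],
        "qos": {"small_flow_priority": "high"},
    },
    "compute": {"nodes": 100, "bare_metal": True},
    "storage": {"hdfs_replication": 3},
}

print(json.dumps(hadoop_profile, indent=2))
```

The point of the profile is that the same declarative document drives network, compute, and storage provisioning, so redeploying the cluster to another workload means applying a different profile rather than reconfiguring each layer by hand.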
Data Isolation and Security: In a mature Big Data environment, Hadoop processing can occur between many data sources and clients. Data is most vulnerable during job transitions or re-deployment to other applications. Multiple corporate databases and users need to be correctly isolated to ensure compliance. A patchwork of security software, such as perimeter security, is error-prone, static and consumes administrative resources.
In contrast, ACI can automatically isolate the entire data path through a programmable fabric according to pre-defined policies. Access policies for data vaults can be preserved throughout the network when the data is in motion. This can be accomplished even in a shared production infrastructure across physical and virtual end points.
As organizations of all sizes discover ways to use Big Data for business insights, their infrastructure must become far more performant, adaptable and secure. Investments in fabric, compute and storage must be leveraged across multiple Big Data processes and other business applications with agility and operational simplicity.
Leading the growth of Big Data, the Hadoop 2.x eco-system will place particular stresses on data center fabrics. New mixed workloads are already using 10 Gigabit capacity in larger clusters and will soon demand 40 Gigabit fabrics. Network traffic needs continuous optimization to improve completion times. End to end data paths must use consistent security policies between multiple data sources and clients. And the sharp surges in bare-metal workloads will demand much more agile ways to swap workloads and improve utilization.
Cisco’s Application Centric Infrastructure leverages a new operational and consumption model for Big Data resources. It dynamically translates existing policies for applications, data and clients into fully provisioned networks, compute and storage. Working with Nexus 9000 telemetry, ACI can continuously optimize traffic paths and enforce policies consistently as workloads change. The solution provides a seamless transition to the new demands of Big Data.
To hear about Cisco’s broader solution portfolio, be sure to register for the October 21st executive webcast ‘Unlock Your Competitive Edge with Cisco Big Data Solutions.’ And stay tuned for the next blog in the series, from Andrew Blaisdell, which showcases the ability to predictably deliver intelligence-driven insights and actions.
Tags: ACI, analytics, Big Data, Cisco Application Centric Infrastructure, Nexus 9000, UCS, UnlockBigData
As we think of Healthcare and Big Data Analytics, some of the topics that come to the forefront are personalized medicine, managing readmissions, identifying health risk indexes and many more. While each of these is an important area that benefits from the power of Big Data Analytics, one area that is table stakes in Healthcare is protecting critical care systems. Can the power of Big Data Analytics provide us a protective shield?
Before we dive in, two questions come up: why is Healthcare security any different, and why use Big Data Analytics instead of the traditional approaches to protection we have today?
This was the topic of my presentation at the recently concluded COM.BigData 2014 conference in Washington DC: ‘Dynamic Protection for Critical Care Systems using Cisco Cloud web security (CWS): Unleashing the power of Big Data Analytics’.
While the Health IT transitions are opening up healthcare access in newer ways that have significant security implications, additional trends are making Healthcare a prime target.
Targeting Healthcare Industry
According to the World Privacy Forum, the street value of a stolen healthcare record is roughly $50, compared to $1 for a stolen Social Security number. The Ponemon Institute, in its third annual report on medical identity theft (2012), estimates the economic impact of medical identity theft at $41.3 billion per year, a significant increase from $30.9 billion in 2011. In addition, new attack models such as ransomware can capitalize on the sensitivity of the situation, where the question is not about losing your data, but your life. Taken together, these factors make the healthcare industry an attractive target.
The expanded boundaries
Tags: analytics, Big Data, Cloud web security, critical systems, Dynamic Protection, healthcare IT, security
Finding a molecule with the potential to become a new drug is complicated. It’s time-consuming. Fewer than 10 percent of molecules or compounds discovered are promising enough to enter the development pipeline. And fewer still ever come to market. At Pfizer, if it were not for data virtualization, it would be even more challenging.
Years of Data, Thousands of Decisions
The pipeline from discovery to licensing occurs in phases over 15-20 years, and few compounds complete the journey. The initial study phase represents a multimillion-dollar investment decision. Each succeeding phase – proof-of-concept study, dose range study, and large-scale population study – represents a magnitude-larger investment and risk than the one before.
Senior management and portfolio managers need to know:
- Which projects should the company fund?
- Which compounds are meeting Pfizer’s high standards for efficacy and safety?
- What are scientists discovering in clinical trials?
Portfolio and project managers routinely make complex tactical decisions such as:
- How to allocate scarce R&D resources across different projects?
- How to prioritize multiple development scenarios?
- What is the impact of a clinical trial result on downstream manufacturing?
Before Pfizer adopted Cisco Data Virtualization, getting useful data to answer these questions took weeks or months. Why so long? The problem has several dimensions. First, each phase of development generates massive amounts of data and requires extensive analysis to provide an accurate picture. Second, data comes from Pfizer research scientists all over the world; from physicians; clinical trials; product owners and managers; marketing teams; and hundreds of different back-end systems. Third, the scientific method is based on trial and error, with unpredictable results. Thus no two decisions are alike and therefore the specific data required for each decision is unique.
Data Virtualization Provides the Solution
To support their decision-making needs, Pfizer needed a solution that would allow them to pull all this diverse information together in an agile, ad hoc way. Cisco Data Virtualization – agile data integration software that makes it easy to access and gather relevant data, no matter where data sources reside – provided the solution.
With Cisco Data Virtualization, Pfizer’s research and portfolio data resides in one virtual place and provides “one version of the truth” that is available for everyone to use to address the myriad decisions that arise. Further, by applying virtualization instead of consolidation, infrastructure costs are also reduced.
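The idea of a single virtual layer over scattered sources can be illustrated with a toy example. This is not Cisco's product, just a sketch of the underlying concept using SQLite's ability to attach a second database: one "virtual" query joins data that lives in two separate sources without first copying it into a central data mart. All table names and data below are made up.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS trials")  # a second, separate "source"

# Source 1: the compound portfolio system.
conn.execute("CREATE TABLE compounds (id INTEGER, name TEXT, phase TEXT)")
conn.executemany("INSERT INTO compounds VALUES (?, ?, ?)",
                 [(1, "PF-001", "proof-of-concept"), (2, "PF-002", "dose-range")])

# Source 2: the clinical trial results system.
conn.execute("CREATE TABLE trials.results (compound_id INTEGER, efficacy REAL)")
conn.executemany("INSERT INTO trials.results VALUES (?, ?)",
                 [(1, 0.62), (2, 0.48)])

# The "virtual view": one query spanning both sources, no data mart built.
rows = conn.execute("""
    SELECT c.name, c.phase, r.efficacy
    FROM compounds c JOIN trials.results r ON c.id = r.compound_id
""").fetchall()
print(rows)
```

A real data virtualization layer does this across heterogeneous back-end systems (relational databases, files, applications) rather than two SQLite files, but the economics are the same: the integration is defined as a query, so answering a new question means writing a new view, not building a new mart.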
According to Pfizer, “data virtualization is far less expensive than building specialized data marts to answer questions. With Cisco Data Virtualization, our portfolio teams get answers in hours or days for about one-tenth the cost.”
This data virtualization progress has not gone unnoticed. At Data Virtualization Day 2012, Pfizer was awarded the “Data Virtualization Champion” award for consistently achieving and promoting data virtualization value within the organization and across the industry.
Learn from other leaders in the industry and see who wins this year’s Data Virtualization Leadership Awards at Data Virtualization Day 2014 on October 1. Register now!
To read more about this Pfizer case study click here.
To learn more about Cisco Data Virtualization, check out our page.
Join the Conversation
Follow us @CiscoDataVirt #DVDNYC
Tags: analytics, Big Data, cloud, data, data virtualization, Internet of Everything
Responses in a recent Cisco-sponsored Cloud Security Alliance survey illustrate that many data privacy challenges previously cast in the “too hard” basket can be more readily navigated through focusing on universal principles across Cloud, IoT and Big Data. Survey responses showed a surprisingly strong level of interest in a global consumer bill of rights, and were overwhelmingly in favor of the OECD data privacy principles facilitating the trends of Cloud, IoT and Big Data.
Following are the most significant findings:
Data Residency and Sovereignty
Data residency and sovereignty challenges continue to emerge. However, there was a common theme of respondents identifying “personal data” and Personally Identifiable Information (PII) as the data that is required to remain resident in most countries.
73 percent of respondents indicated that there should be a call for a global consumer bill of rights and saw the United Nations as fostering it. This is of great significance given the harmonization efforts taking place in Europe, with a single EU Data Privacy Directive to represent 28 European member states, as well as the renewed calls for a Consumer Bill of Privacy Rights in the United States and cross-border privacy arrangements in Australia and Asia.
Finally, we explored whether the OECD privacy principles, which have been very influential in the development of many data privacy regulations, also facilitate popular trends in Cloud, IoT and Big Data initiatives or instead create tension. The responses were very much in favor of facilitation.
The survey report includes an executive summary from Dr. Ann Cavoukian, Former Information and Privacy Commissioner of Ontario, Canada and commentary from other industry experts on the positive role that privacy can play in developing new and innovative cloud, IoT and Big Data Solutions. Read the Data Protection Heat Index survey report:
Tags: Big Data, cloud, IoE, privacy, report, security, survey
Big Data is not just about gathering tons of data, the digital exhaust from the internet, social media, and customer records. The real value is in being able to analyze the data to gain a desired business outcome.
Those of us who follow the Big Data market closely never lack for something new to talk about. There is always a story about how a business is using Big Data in a different way or about some new breakthrough that has been achieved in the expansive big data ecosystem. The good news for all of us is, we have clearly only scratched the surface of the Big Data opportunity!
With the increasing momentum of the Internet of Everything (IoE) market transition, there will be 50 billion devices connected to the Internet by 2020—just five years from now. As billions of new people, processes, and things become connected, each connection will become a source of potentially powerful data to businesses and the public sector. Organizations that can unlock the intelligence in this data can create new sources of competitive advantage, not just from more data but from better access to better data.
What we haven’t heard about yet are examples of enterprises that are applying the power of this data pervasively throughout their organizations, giving them a competitive edge in marketing, supply chain, manufacturing, human resources, customer support, and many more departments. The enterprise that can apply the power of Big Data throughout its organization can create multiple and simultaneous sources of ongoing innovation—each one a constantly renewable or perpetual competitive edge. Looking forward, the companies that can accomplish this will be the ones setting the pace for the competition to follow.
Cisco has been working on making this vision of pervasive use of Big Data within enterprises a reality. We’d like to share this vision with you in an upcoming blog series and executive webcast entitled ‘Unlock Your Competitive Edge with Cisco Big Data Solutions’, which will air on October 21st at 9:00 AM PT.
I have the honor of kicking off the multi-part blog series today. Each blog will focus on a specific Cisco solution our customers can utilize to unlock the power of their big data, enterprise-wide, to deliver a competitive edge. I’m going to start the discussion by highlighting the infrastructure implications of Big Data in the Internet of Everything (IoE) era, focusing initially on the Cisco Unified Computing System.
Enterprises that want to make strategic use of data throughout their organizations will need to take advantage of the power of all types of data. As IoE increasingly takes root, organizations will be able to access data from virtually anywhere in their value chain. No longer restricted to small sets of structured, historical data, they’ll have more comprehensive and even real-time data including video surveillance information, social media output, and sensor data that allow them to monitor behavior, performance, and preferences. These are just a few examples, but they underscore the fact that not all data is created equal. Real-time data coming in from a sensor may only be valuable for minutes, or even seconds, so it is critical to be able to act on that intelligence as quickly as possible. From an infrastructure standpoint, that means enterprises must be able to connect the computing resources as closely as possible to the many sources and users of data. At the same time, historical data will also continue to be critical to Big Data analytics.
Cisco encourages our customers to take a long-term view—and select a Big Data infrastructure that is distributed, and designed for high scalability, management automation, outstanding performance, low TCO, and the comprehensive security approach needed for the IoE era. And that infrastructure must be open—because there is tremendous innovation going on in this industry, and enterprises will want to be able to take full advantage of it.
One of the foundational elements of our Big Data infrastructure is the Cisco Unified Computing System (UCS). UCS integrated infrastructure uniquely combines server, network and storage access, and has recently claimed the #1 x86 blade server market share position in the Americas. The same innovation that propelled us to the leading blade market share position is now being applied directly to Big Data workloads. With its highly efficient infrastructure, UCS lets enterprises manage up to 10,000 UCS servers as if they were a single pool of resources, so they can support the largest data clusters.
Because enterprises will ultimately need to be able to capture intelligence from both data at rest in the data center and data at the edge of the network, Cisco’s broad portfolio of UCS systems gives our customers the flexibility to process data where it makes the most sense. For instance, our UCS C240 rack server has been extremely popular for Hadoop-based Big Data deployments at the data center core. And Cisco’s recently introduced UCS Mini is designed to process data at the edge of the network.
Because the entire UCS portfolio utilizes the same unified architecture, enterprises can choose the right compute configuration for the workload, with the advantage of being able to use the same powerful management and orchestration tools to speed deployment, maximize availability, and significantly lower their operating expenses. Being able to leverage UCS Manager and Service Profiles, Unified Fabric and SingleConnect technology, our virtual interface card technology, and industry-leading performance really sets Cisco apart from our competition.
So, please consider this just an introduction to the first component of Cisco’s bigger Big Data story. To hear more, please make plans to attend our upcoming webcast, ‘Unlock Your Competitive Edge With Cisco Big Data Solutions’, on October 21st.
Every Tuesday and Thursday from now until October 21st, we’ll post another blog in the series to provide you with additional details of Cisco’s full line of products, solutions and services.
View additional blogs in the series:
9/25: Unlock Big Data with Breakthroughs in Management Automation
9/30: Turbocharging New Hadoop Workloads with Application Centric Infrastructure
10/2: Enable Automated Big Data Workloads with Cisco Tidal Enterprise Scheduler
10/7: To Succeed with Big Data, Enterprises Must Drop an IT-Centric Mindset: Securing IoT Networks Requires New Thinking
10/9: Aligning Solutions to meet our Customers’ Data Challenges
10/14: Analytics for an IoE World
Please let me know if you have any comments or questions here, or reach me on Twitter at @CicconeScott.
Tags: ACI, analytics, Big Data, blade server, Blade Servers, Cisco UCS, Cisco UCS C240 M3 Rack Server, Cisco Unified Computing System, Cisco Unified Data Center, Cisco Unified Fabric, Cloudera, data virtualization, Hadoop, Hortonworks, Internet of Everything, IoE, MapR, rack server, security, UCS Central, UCS service profiles