At the June Hadoop Summit in San Jose, Hadoop was re-affirmed as the data center “killer app,” riding an avalanche of Enterprise Data that is growing 50x annually through 2020. According to IDC, the Big Data market itself is growing six times faster than the rest of IT. Every major tech company, old and new, is now driving Hadoop innovation, including Google, Yahoo, Facebook, Microsoft, IBM, Intel and EMC – building value-added solutions on open source contributions by Hortonworks, Cloudera and MapR. Cisco’s surprisingly broad portfolio will be showcased at Strataconf in New York on Oct. 15 and at our October 21st executive webcast. In this third post of a blog series, we preview the power of Application Centric Infrastructure for the emerging Hadoop ecosystem.
Why Big Data?
Organizations of all sizes are gaining insight, and getting creative, with use cases that leverage their own business data.
The use cases grow quickly as businesses realize their “ability to integrate all of the different sources of data and shape it in a way that allows business leaders to make informed decisions.” Hadoop enables customers to gain insight from both structured and unstructured data. Data types and sources can include 1) business applications (OLTP, ERP, CRM systems), 2) documents and email, 3) web logs, 4) social networks, 5) machine/sensor-generated data, and 6) geolocation data.
IT operational challenges
Even modest-sized jobs require clusters of 100 server nodes or more for seasonal business needs. While Hadoop is designed to scale out on commodity hardware, most IT organizations face the challenge of extreme demand variations in bare-metal (non-virtualizable) workloads. Furthermore, these workloads are requested by multiple Lines of Business (LOB), with increasing urgency and frequency. Ultimately, 80% of the costs of managing Big Data workloads will be OpEx. How do IT organizations quickly finish jobs and re-deploy resources? How do they improve utilization? How do they maintain security and isolation of data in a shared production infrastructure?
And with the release of Hadoop 2.0 almost a year ago, cluster sizes are growing due to:
- Expanding data sources and use-cases
- A mixture of different workload types on the same infrastructure
- A variety of analytics processes
In Hadoop 1.x, compute performance was paramount. But in Hadoop 2.x, network capabilities will be the focus, due to larger clusters, more data types, more processes and mixed workloads. (see Fig. 1)
ACI powers Hadoop 2.x
Cisco’s Application Centric Infrastructure (ACI) is a new operational model enabling Fast IT. ACI provides a common policy-based programming approach across the entire ACI-ready infrastructure, beginning with the network and extending to all of its connected end points. This drastically reduces cost and complexity for Hadoop 2.0. ACI uses Application Policy to:
- Dynamically optimize cluster performance in the network
- Redeploy resources automatically to new workloads for improved utilization
- Ensure isolation of users and data as resource deployments change
Let’s review each of these in order:
Cluster Network Performance: It’s crucial to improve traffic latency and throughput across the network, not just within each server.
- Hadoop copies and distributes data across servers to maximize reliability on commodity hardware.
- The large collection of processes in Hadoop 2.0 is usually spread across different racks.
- Mixed workloads in Hadoop 2.0 support interactive and real-time jobs, resulting in the use of more on-board memory and different payload sizes.
As a result, server I/O bandwidth is increasing, which will place heavier loads on 10 Gigabit networks. ACI policy works with deep telemetry embedded in each Nexus 9000 leaf switch to monitor and adapt to network conditions.
Using policy, ACI can dynamically 1) load-balance Big Data flows across racks on alternate paths and 2) prioritize small data flows ahead of large flows (which use the network much less frequently but consume disproportionate bandwidth and buffer space). Both of these can dramatically reduce network congestion. In lab tests, we are seeing flow completion nearly an order of magnitude faster (for some mixed workloads) than without these policies enabled. ACI can also estimate and prioritize job completion. This will be important as Big Data workloads become pervasive across the Enterprise. For a complete discussion of ACI’s performance impact, please see a detailed presentation on optimizing Big Data workloads by Samuel Kommu, chief engineer at Cisco.
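The intuition behind prioritizing small flows can be shown with a minimal scheduling sketch. This is an illustrative toy model, not Cisco's implementation: flows share one 10 Gb/s bottleneck link and are served one at a time, so each flow's completion time includes the time spent waiting behind flows served earlier.

```python
# Toy model (not Cisco's implementation) of why prioritizing small flows
# ahead of large ones cuts completion times on a shared link.

LINK_GBPS = 10  # assumed bottleneck link speed

def completion_times(flow_sizes_gbits):
    """Serve flows in the given order; return each flow's completion time (s)."""
    elapsed, times = 0.0, []
    for size in flow_sizes_gbits:
        elapsed += size / LINK_GBPS
        times.append(elapsed)
    return times

# One large (shuffle-heavy) flow mixed with many small interactive flows.
flows = [400] + [1] * 20                       # sizes in gigabits (illustrative)

fifo = completion_times(flows)                 # large flow happens to go first
small_first = completion_times(sorted(flows))  # small flows prioritized

print(f"mean completion, FIFO:        {sum(fifo)/len(fifo):.1f} s")
print(f"mean completion, small-first: {sum(small_first)/len(small_first):.1f} s")
```

With these numbers the mean completion time drops from about 41 s to about 3 s, roughly the order-of-magnitude improvement described above, because the many small flows no longer queue behind the single large one.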
Resource Utilization: In general, the bigger the cluster, the faster the completion time. But since Big Data jobs are initially infrequent, CIOs must balance responsiveness against utilization. It is simply impractical for many mid-sized companies to dedicate large clusters to the occasional surge in Big Data demand. ACI enables organizations to quickly redeploy cluster resources from Hadoop to other sporadic workloads (such as CRM, e-commerce, ERP and inventory) and back. For example, the same resources could run Hadoop jobs nightly or weekly when other demands are lighter. Resources can be bare-metal or virtual depending on workload needs. (see Figure 2)
How does this work? ACI uses application policy profiles to programmatically re-provision the infrastructure. IT can use a different profile to describe each application’s needs, including those of the Hadoop ecosystem. The profile contains the application’s network policies, which the Application Policy Infrastructure Controller translates into a complete network topology. The same profile contains compute and storage policies used by other tools, such as Cisco UCS Director, to provision compute and storage.
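The profile-driven flow described above can be sketched as follows. All object names and fields here are hypothetical illustrations, not the actual APIC or UCS Director object model: one application profile carries network, compute and storage intent, and each controller consumes only the slice of policy it owns.

```python
# Hypothetical sketch of a profile-driven provisioning flow; the field
# names are illustrative, not the real APIC/UCS Director object model.

hadoop_profile = {
    "app": "hadoop-2.x",
    "network": {                      # consumed by the network controller
        "tiers": ["name-node", "data-node", "client"],
        "contracts": [("client", "name-node", "tcp/8020"),
                      ("name-node", "data-node", "tcp/50010")],
    },
    "compute": {"nodes": 100, "bare_metal": True},   # consumed by a compute tool
    "storage": {"per_node_tb": 12, "replication": 3},
}

def render_network(profile):
    """Expand the network slice of a profile into allow-rules."""
    rules = []
    for src, dst, port in profile["network"]["contracts"]:
        rules.append(f"permit {src} -> {dst} on {port}")
    return rules

for rule in render_network(hadoop_profile):
    print(rule)
```

The point of the design is that the same declarative profile can be handed to different renderers, so swapping Hadoop out for another application means swapping profiles, not re-cabling or hand-editing device configurations.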
Data Isolation and Security: In a mature Big Data environment, Hadoop processing can occur between many data sources and clients. Data is most vulnerable during job transitions or re-deployment to other applications. Multiple corporate databases and users need to be correctly isolated to ensure compliance. A patchwork of security software, such as perimeter security, is error-prone, static and consumes administrative resources.
In contrast, ACI can automatically isolate the entire data path through a programmable fabric according to pre-defined policies. Access policies for data vaults can be preserved throughout the network when the data is in motion. This can be accomplished even in a shared production infrastructure across physical and virtual end points.
As organizations of all sizes discover ways to use Big Data for business insights, their infrastructure must become far more performant, adaptable and secure. Investments in fabric, compute and storage must be leveraged across multiple Big Data processes and other business applications with agility and operational simplicity.
Leading the growth of Big Data, the Hadoop 2.x ecosystem will place particular stresses on data center fabrics. New mixed workloads are already using 10 Gigabit capacity in larger clusters and will soon demand 40 Gigabit fabrics. Network traffic needs continuous optimization to improve completion times. End-to-end data paths must use consistent security policies between multiple data sources and clients. And the sharp surges in bare-metal workloads will demand much more agile ways to swap workloads and improve utilization.
Cisco’s Application Centric Infrastructure leverages a new operational and consumption model for Big Data resources. It dynamically translates existing policies for applications, data and clients into fully provisioned networks, compute and storage. Working with Nexus 9000 telemetry, ACI can continuously optimize traffic paths and enforce policies consistently as workloads change. The solution provides a seamless transition to the new demands of Big Data.
To hear about Cisco’s broader solution portfolio, be sure to register for the October 21st executive webcast ‘Unlock Your Competitive Edge with Cisco Big Data Solutions.’ And stay tuned for the next blog in the series, from Andrew Blaisdell, which showcases the ability to predictably deliver intelligence-driven insights and actions.
Tags: ACI, analytics, Big Data, Cisco Application Centric Infrastructure, Nexus 9000, UCS, UnlockBigData
Guest post by Aaron Newcomb, Solutions Marketing Manager, NetApp
No one wants the 2:00 am distressed phone call disturbing a good night’s sleep. For IT Managers and Database Administrators, that 2:00 am call is typically bad news regarding the systems they support. Users in another region are not able to access an application. Customers are not placing orders because the system is responding too slowly. Nightly reporting is taking too long and impacting performance during peak business hours. When your business-critical applications running on Oracle Database are not performing at the speed of business, that creates barriers to customer satisfaction and to remaining competitive. NetApp wants to help break down those barriers and help our customers get a good night’s sleep instead of worrying about the performance of their Oracle Database.
NetApp today unveiled a solution designed to address the need for extreme performance for Oracle Databases with FlexPod Select for High Performance Oracle RAC. This integrated infrastructure solution offers a complete data center infrastructure including the networking, servers, storage, and management software you need to run your business 24x7, 365 days a year. Since NetApp and Cisco validate the architecture, you can deploy your Oracle Databases with confidence and in much less time than with traditional approaches. Built with industry-leading NetApp EF-550 flash storage arrays and Cisco UCS B200 M3 Blade Servers, this solution can deliver the highest levels of performance for the most demanding Oracle Database workloads on the planet.
The system will deliver more than one million IOPS of read performance for Oracle Database workloads at sub-millisecond latencies. This means faster response times for end users, improved database application performance, and more headroom to run additional workloads or consolidate databases. Not only that, but this pre-validated and pre-tested solution is based on a balanced configuration, so the infrastructure components you need to run your business work in harmony instead of competing for resources. The solution is built with redundancy in mind to eliminate risk and allow for flexibility in deployment options. The architecture scales linearly, so you can start with a smaller configuration and grow as your business needs change, optimizing return on investment. If something goes wrong, the solution is backed by our collaborative support agreement, so there is no finger-pointing, only swift problem resolution.
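A quick back-of-the-envelope check puts those two headline numbers together. Using Little's law (in-flight work equals arrival rate times latency), with an assumed 0.8 ms average latency (our illustrative figure, not a published spec), sustaining one million IOPS implies on the order of hundreds of concurrent I/Os:

```python
# Little's law sanity check on the headline claim; the 0.8 ms average
# latency is an assumption for illustration, not a published figure.
iops = 1_000_000
latency_s = 0.0008             # assumed average I/O latency (0.8 ms)
in_flight = iops * latency_s   # Little's law: L = lambda * W
print(f"~{in_flight:.0f} concurrent I/Os in flight")
```

In other words, the workload must keep roughly 800 outstanding I/Os queued across the cluster to sustain that rate, which is why a balanced configuration across servers, fabric and flash arrays matters.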
So what would you do with one million IOPS? Build a new application that will respond to a competitive threat? Deliver faster results for your company? Increase the number of users and transactions your application can support without having to worry about missing critical service level agreements? If nothing else, imagine how great you will sleep knowing that your business is running with the performance needed for success.
Tags: Cisco UCS, data center, FlexPod, netapp, Oracle, Oracle OpenWorld, UCS
Guest Blog Post by Kristine A. Snow, President, Cisco Capital
The pressure for businesses to quickly adapt and innovate—to capitalize on new market opportunities and stay ahead of the competition—is increasing. And it is being felt not only by IT organizations, but by entire companies, as businesses rely more and more on technology to achieve their business goals. Cloud computing in particular has had a profound impact on businesses today, emerging as a key technology requirement to foster innovation and growth.
In March 2014, Cisco announced that it would invest $1 billion to expand its cloud business over the next two years. Today, in addition to the expansion of Cisco’s Intercloud product offerings and partner ecosystem, Cisco Capital has earmarked $1 billion in financing for Cisco customers and partners to help them adopt the Cisco technologies they’ll need to transition to the cloud.
As the financing arm of Cisco, Cisco Capital has developed a number of programs within this investment that focus on financing Cisco Application Centric Infrastructure, facilitating technology migrations and providing flexible payment structures. As these types of transitions can require sizeable investments, financing provides a cost-effective way for organizations to invest in their business.
By leveraging financing, organizations can align technology investments to the ever-evolving priorities of the business. Financing allows businesses to:
- Preserve cash that can then be reinvested into the business—spreading the cost of an IT investment over time conserves funds, enabling organizations to invest more heavily in departments such as R&D and ultimately speeding the pace of innovation.
- Accelerate the return on investment— aligning cash outlay to solution implementation and revenue stream generation.
- Adopt new technologies faster—with the ability to implement new technologies more quickly, businesses remain agile and ahead of the competition.
- “Green” the business—financing provides a vehicle to dispose of retired or under-utilized assets in an environmentally conscious manner through end-of-life strategies, migration programs and recycling programs.
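The cash-preservation point above is simple arithmetic. As a rough illustration (the rate and term here are hypothetical examples, not Cisco Capital terms), spreading a large purchase over a financing term converts a single outlay into a predictable monthly payment:

```python
# Back-of-the-envelope illustration of spreading an IT investment over
# time; the 5% APR and 36-month term are hypothetical, not Cisco
# Capital terms.

def monthly_payment(principal, annual_rate, months):
    """Standard amortized-loan payment formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# A $1,000,000 infrastructure purchase financed over 36 months at 5% APR.
pay = monthly_payment(1_000_000, 0.05, 36)
print(f"monthly payment: ${pay:,.2f}")   # vs. a $1M cash outlay up front
```

The difference between roughly $30,000 a month and $1,000,000 up front is working capital that can stay invested in the business in the meantime.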
Cisco Capital Financing the Cloud Suite
Cisco Capital creates tailored financial solutions and offerings for customers and partners that complement Cisco’s products and technologies, and are designed to support how customers and partners buy and deploy them. As a part of the $1 billion commitment, Cisco Capital is providing four programs specifically designed to address cloud adoption and migration.
Designed for both end-user customers and cloud service providers (Cisco partners), Cisco Capital’s flexible payment structures offer payment deferral options of up to 12 months, affordable monthly rates and structured payment streams. These structured loans and leases finance complete solutions including hardware, software and services from both Cisco and complementary non-Cisco solution providers.
Also geared toward end-user customers and cloud service providers are low total cost of ownership (TCO) offers aimed at customers looking to adopt Cisco Application Centric Infrastructure, a foundation for Intercloud infrastructure. Developed with below-market payment terms, this program enables customers to keep technology up to date and refresh when needed, ultimately lowering TCO and the long-term cost of maintenance.
Specifically for qualified cloud service providers, Cisco Capital has developed two tailored programs: Accelerate Loans and Monetization of Managed Services. With an Accelerate Loan, no payments are required during the first 12 months while the cloud data center is being built, allowing the service provider to align payments to solution deployment and revenue generation.
The Monetization of Managed Services offering allows qualified cloud service providers to acquire the technology needed to deliver managed services to customers without incurring up-front cost or debt, through an asset-light approach. Key benefits include alignment of expenses to revenue for optimized cash flow and potential relief from asset-disposition obligations at the end of the term.
While there are a number of strategies businesses can employ when planning for such a large-scale technology investment, Cisco Capital is uniquely positioned to help Cisco customers and partners embrace the transition to the cloud. Because Cisco Capital has such a deep understanding of the products, services and overall solutions being offered by Cisco, we are able to create customized financing solutions that will help our customers and partners adopt and deploy technologies like Intercloud in the most efficient and cost-effective way possible.
For more information, visit: Financing the Cloud
Disclaimer: Eligibility for financing is subject to standard underwriting procedures.
Tags: cisco capital, Cisco cloud, cloud, financing, InterCloud
Data Centers are becoming increasingly smart, intelligent and elastic. With the advancement of cloud and virtualization technologies, customers demand dynamic workload management and efficient, optimal use of their resources. In addition, the configuration and administration of Data Center solutions is complex and is going to become increasingly so.
With these requirements and architectures in mind, we have built an industry-first solution called Remote Integrated Service Engine (RISE). RISE is a technology that simplifies provisioning and out-of-box management of service appliances such as load balancers, firewalls and network analysis modules. It makes data center and campus networks dynamic, flexible, and easy to configure and maintain.
RISE can dynamically provision network resources for any type of service appliance (physical and virtual form factors). External appliances can now operate as integrated service modules with the Nexus Series of switches without burning a slot in the switch. This technology provides robust application delivery capabilities that significantly accelerate application performance.
RISE is supported on all Nexus Series switches with services like Citrix NetScaler MPX, VPX, SDX and Cisco Prime NAM with many more in the pipeline.
Advantages & Features
- Simplified out-of-box experience: reduces the administrator’s manual configuration from 30 steps to 8!
- Supported on Citrix NetScaler MPX, SDX, VPX, and Nexus 1KV with VPX
- Supported on Cisco Prime Network Analyzer Module
- Automatic Policy-Based Routing: eliminates the need for SNAT or manual PBR
- Direct and Indirect Attach mode integration
- Show module for RISE
- Attach module for RISE
- Auto Attach – Zero touch configuration of RISE
- Health Monitoring of appliance
- Appliance HA and VPC supported
- Nexus 5K/6K support (EFT available)
- IPV6 support (EFT available)
- DCNM support
- Order-of-magnitude OpEx savings: reduced configuration and ease of deployment
- Order-of-magnitude CapEx savings: wiring, power, rack space and cost savings
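The Automatic Policy-Based Routing feature listed above can be sketched conceptually. In this toy model (all names and addresses are illustrative, not RISE configuration), SNAT forces return traffic back through the appliance by rewriting the client source address, which hides the real client IP from the server; a policy route instead steers the server's return traffic to the appliance, so the client address can be preserved end to end:

```python
# Conceptual toy model of PBR vs. SNAT traffic steering; all names and
# addresses are illustrative, not actual RISE or NetScaler configuration.

def snat_forward(pkt, lb_ip):
    """Source NAT: the server sees the load balancer's address, not the client's."""
    return {**pkt, "src": lb_ip}

def pbr_next_hop(pkt, routes):
    """Policy route: match on source prefix, steer traffic to the appliance."""
    for prefix, next_hop in routes:
        if pkt["src"].startswith(prefix):
            return next_hop
    return "default-gw"

client_pkt = {"src": "203.0.113.7", "dst": "10.0.0.5"}

# With SNAT, the client identity is lost before the packet reaches the server.
seen_by_server = snat_forward(client_pkt, lb_ip="10.0.0.100")
print(seen_by_server["src"])

# With automatic PBR, the forward packet keeps its real source, and the
# server's reply is steered back through the appliance by a policy route.
reply = {"src": "10.0.0.5", "dst": client_pkt["src"]}
print(pbr_next_hop(reply, routes=[("10.0.0.", "netscaler-appliance")]))
```

Automating these policy routes is what removes the manual-PBR burden: the switch installs and removes the steering rules as appliances attach and detach.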
For more information, or to schedule an EFT or POC, contact us at email@example.com
RISE press release in the Wall Street Journal: http://online.wsj.com/article/PR-CO-20140408-905573.html
RISE At A Glance white paper: http://www.cisco.com/c/dam/en/us/products/collateral/switches/nexus-7000-series-switches/at-a-glance-c45-731306.pdf
RISE Video at Interop: https://www.youtube.com/watch?v=1HQkew4EE2g
Cisco RISE page: www.cisco.com/go/rise
Gartner blog on RISE: “Cisco and Citrix RISE to the Occasion”: http://blogs.gartner.com/andrew-lerner/2014/03/31/cisco-and-citrix-rise-to-the-adc-occasion/
Tags: 7000, Cisco, Cisco Nexus Switches, Cisco Prime NAM, Citrix NetScaler, Citrix NetScaler VPX, cloud, data center, innovation, nexus, Nexus 7000, partner, RISE, virtualization
Guest Blog Post by Stephen Nola, Group Executive, IT-as-a-Service, Dimension Data
At Dimension Data we are all about accelerating ambition and this includes enabling Cisco’s ambition to build the world’s largest global Intercloud, a network of clouds to address customer requirements for a globally distributed, highly secure cloud platform. Dimension Data is partnering with Cisco to provide our cloud technology in Cisco-branded managed service offerings and the public cloud – a core component of the enterprise hybrid IT solution.
As cloud adoption is maturing, companies are taking a more holistic approach to incorporating cloud into the modern IT landscape. This transformation is increasingly application centric which means that organisations will be sourcing multiple delivery models that are best aligned to applications that are fit for cloud, born for the cloud, and not for the cloud, ever.
The formula for hybrid IT will include public and private cloud, managed hosting and managed services on and off premises. Cisco realizes that Dimension Data’s highly connected and integrated solutions leverage new and flexible consumption models for enterprises all over the world. In fact, Dimension Data offers our Managed Cloud Platform on five continents with datacenters strategically located to address regulatory compliance and data sovereignty for multi-national companies.
Dimension Data has been accelerating adoption of Cisco technology for over 23 years. Inclusion in the Intercloud ecosystem is good for our clients and will enable greater reach for Dimension Data’s Cloud. As Cisco Intercloud subscribers, enterprises around the world will leverage Dimension Data’s global network of inter-connected cloud datacenters as a core component of their hybrid cloud strategy. Dimension Data is proud of our Cisco heritage and our strategic partnership that will take our companies and our clients into the future.
Stephen Nola, Group Executive – ITaaS
Steve was appointed Group Executive – ITaaS in 2013. Since he joined Dimension Data in 1989, Steve has held a number of key roles in the Group. In 2011, he was appointed Chief Executive Officer, Cloud Business Unit. Prior to this, Steve was Chief Executive Officer of Dimension Data Australia from 2001. Before being appointed Chief Executive Officer for the Australia region, Steve was Chief Executive Officer for Dimension Data Integration, and Joint Managing Director of Dimension Data Australia. Before joining Dimension Data Australia (formerly known as Com Tech Communications), Steve worked for Telstra from 1987 to 1989. Steve is a member of the advisory board at RMIT for the Bachelor of Information Technology course and a member of the Starlight Foundation Victoria Board. He holds a Bachelor of Electrical Engineering (Honours) from the Royal Melbourne Institute of Technology (RMIT), majoring in robotics.
Tags: Cisco Partner, CiscoCloud, cloud, dimension data, InterCloud