In this week’s episode of Engineers Unplugged, Roving Reporter Lauren Malhoit (@malhoit) talks to Adam Eckerle (@eck79) about #UCSGrandSlam and UCS as a platform. Great view of the tech from the admin perspective: “UCS is built from the ground up with virtualization in mind.” Great practical episode for anyone exploring UCS!
Much like UCS, the Unicorn Challenge is a platform for creativity.
**Want to be Internet Famous? Join us for our next shoot: VMworld Barcelona. Tweet me @CommsNinja!**
This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:
- Episodes will publish weekly (or as close to it as we can manage)
- Subscribe to the podcast here: engineersunplugged.com
- Follow the #engineersunplugged conversation on Twitter
- Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
- Practice drawing unicorns
Go behind the scenes by liking Engineers Unplugged on Facebook.
Tags: ACI, UCS, UCSGrandSlam, virtualization
At the June Hadoop Summit in San Jose, Hadoop was re-affirmed as the data center “killer app,” riding an avalanche of Enterprise Data, which is growing 50x annually through 2020. According to IDC, the Big Data market itself is growing six times faster than the rest of IT. Every major tech company, old and new, is now driving Hadoop innovation, including Google, Yahoo, Facebook, Microsoft, IBM, Intel and EMC – building value-added solutions on open source contributions by Hortonworks, Cloudera and MapR. Cisco’s surprisingly broad portfolio will be showcased at Strataconf in New York on Oct. 15 and at our October 21st executive webcast. In this third post of the blog series, we preview the power of Application Centric Infrastructure for the emerging Hadoop ecosystem.
Why Big Data?
Organizations of all sizes are gaining insight from their own business data and getting creative with the use cases that leverage it.
The use cases grow quickly as businesses realize their “ability to integrate all of the different sources of data and shape it in a way that allows business leaders to make informed decisions.” Hadoop enables customers to gain insight from both structured and unstructured data. Data types and sources can include: 1) business applications – OLTP, ERP and CRM systems, 2) documents and emails, 3) web logs, 4) social networks, 5) machine/sensor-generated data, and 6) geolocation data.
IT operational challenges
Even modest-sized jobs require clusters of 100 server nodes or more for seasonal business needs. While Hadoop is designed to scale out on commodity hardware, most IT organizations face the challenge of extreme demand variations in bare-metal (non-virtualizable) workloads. Furthermore, these workloads are requested by multiple Lines of Business (LOB) with increasing urgency and frequency. Ultimately, 80% of the cost of managing Big Data workloads will be OpEx. How do IT organizations finish jobs quickly and re-deploy resources? How do they improve utilization? How do they maintain security and isolation of data in a shared production infrastructure?
And with the release of Hadoop 2.0 almost a year ago, cluster sizes are growing due to:
- Expanding data sources and use-cases
- A mixture of different workload types on the same infrastructure
- A variety of analytics processes
In Hadoop 1.x, compute performance was paramount. But in Hadoop 2.x, network capabilities will be the focus, due to larger clusters, more data types, more processes and mixed workloads. (see Fig. 1)
ACI powers Hadoop 2.x
Cisco’s Application Centric Infrastructure is a new operational model enabling Fast IT. ACI provides a common policy-based programming approach across the entire ACI-ready infrastructure, beginning with the network and extending to all its connected end points. This drastically reduces cost and complexity for Hadoop 2.0. ACI uses Application Policy to:
- Dynamically optimize cluster performance in the network
- Redeploy resources automatically to new workloads for improved utilization
- Ensure isolation of users and data as resource deployments change
Let’s review each of these in order:
Cluster Network Performance: It’s crucial to improve traffic latency and throughput across the network, not just within each server.
- Hadoop copies and distributes data across servers to maximize reliability on commodity hardware.
- The large collection of processes in Hadoop 2.0 is usually spread across different racks.
- Mixed workloads in Hadoop 2.0 support interactive and real-time jobs, resulting in the use of more on-board memory and different payload sizes.
As a result, server I/O bandwidth is increasing, which will place heavier loads on 10 Gigabit networks. ACI policy works with deep telemetry embedded in each Nexus 9000 leaf switch to monitor and adapt to network conditions.
Using policy, ACI can dynamically 1) load-balance Big Data flows across racks on alternate paths and 2) prioritize small data flows ahead of large flows (which occur much less frequently but consume far more bandwidth and buffer). Both of these can dramatically reduce network congestion. In lab tests, we are seeing flow completion nearly an order of magnitude faster (for some mixed workloads) than without these policies enabled. ACI can also estimate and prioritize job completion. This will be important as Big Data workloads become pervasive across the enterprise. For a complete discussion of ACI’s performance impact, please see a detailed presentation by Samuel Kommu, Cisco’s chief engineer for optimizing Big Data workloads.
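The effect of letting small flows jump ahead of a large flow can be illustrated with a toy, single-link model (this is only a sketch of the queuing behavior, not ACI’s actual scheduler; the flow sizes and link speed below are assumed purely for illustration):

```python
# Toy model of one congested 10 Gb/s link: compare mean flow-completion time
# when flows are served in arrival order (large shuffle flow first) versus
# when small flows are prioritized ahead of it. Numbers are illustrative only.

def completion_times(flows_mb, link_mb_per_s=1250):  # ~10 Gb/s expressed in MB/s
    """Serve flows back to back and return each flow's completion time in seconds."""
    t, times = 0.0, []
    for size in flows_mb:
        t += size / link_mb_per_s
        times.append(t)
    return times

flows = [4000] + [1] * 50                       # one 4 GB shuffle + fifty 1 MB small flows

fifo = completion_times(flows)                  # large flow hogs the link first
small_first = completion_times(sorted(flows))   # small flows scheduled ahead

print(f"Mean completion, FIFO:        {sum(fifo) / len(fifo):.2f} s")
print(f"Mean completion, small-first: {sum(small_first) / len(small_first):.2f} s")
```

In this made-up mix, scheduling the small flows first improves mean completion time by well over an order of magnitude while delaying the large flow only slightly, which is the intuition behind the policy described above.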
Resource Utilization: In general, the bigger the cluster, the faster the completion time. But since Big Data jobs are initially infrequent, CIOs must balance responsiveness against utilization. It is simply impractical for many mid-sized companies to dedicate large clusters to the occasional surge in Big Data demand. ACI enables organizations to quickly redeploy cluster resources from Hadoop to other sporadic workloads (such as CRM, e-commerce, ERP and inventory) and back. For example, the same resources could run Hadoop jobs nightly or weekly when other demands are lighter. Resources can be bare-metal or virtual depending on workload needs. (see Figure 2)
How does this work? ACI uses application policy profiles to programmatically re-provision the infrastructure. IT can use a different profile to describe each application’s needs, including those of the Hadoop ecosystem. The profile contains the application’s network policies, which the Application Policy Infrastructure Controller translates into a complete network topology. The same profile contains compute and storage policies used by other tools, such as Cisco UCS Director, to provision compute and storage.
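As a rough illustration of what “profile in, topology out” looks like in practice, here is a minimal sketch of pushing a Hadoop application profile through the APIC REST API. The tenant, EPG, and bridge-domain names are hypothetical, and a real profile would also carry contracts, QoS classes, and domain bindings; treat the attribute details as simplified rather than authoritative:

```python
# Hedged sketch: describe a Hadoop cluster to APIC as a tenant with an
# application profile and one endpoint group (EPG) per cluster role.
# Names are hypothetical; error handling and TLS verification are omitted.
import requests

APIC = "https://apic.example.com"   # assumed controller address
session = requests.Session()

# 1. Authenticate; APIC returns a session cookie that later calls reuse.
session.post(f"{APIC}/api/aaaLogin.json", verify=False, json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
})

# 2. Post the application profile. APIC translates this declarative intent
#    into the concrete fabric configuration on the Nexus 9000 leaves.
hadoop_profile = {
    "fvTenant": {"attributes": {"name": "hadoop"}, "children": [{
        "fvAp": {"attributes": {"name": "hadoop-cluster"}, "children": [
            {"fvAEPg": {"attributes": {"name": role}, "children": [
                {"fvRsBd": {"attributes": {"tnFvBDName": "hadoop-bd"}}}
            ]}} for role in ("namenodes", "datanodes", "clients")
        ]}
    }]}
}
session.post(f"{APIC}/api/mo/uni.json", verify=False, json=hadoop_profile)
```

The point is not the specific payload but the model: the same declarative profile can be torn down and re-applied when the cluster’s resources are redeployed to other workloads.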
Data Isolation and Security: In a mature Big Data environment, Hadoop processing can occur between many data sources and clients. Data is most vulnerable during job transitions or re-deployment to other applications. Multiple corporate databases and users need to be correctly isolated to ensure compliance. A patchwork of security software, such as perimeter security, is error prone, static and consumes administrative resources.
In contrast, ACI can automatically isolate the entire data path through a programmable fabric according to pre-defined policies. Access policies for data vaults can be preserved throughout the network when the data is in motion. This can be accomplished even in a shared production infrastructure across physical and virtual end points.
As organizations of all sizes discover ways to use Big Data for business insights, their infrastructure must become far more performant, adaptable and secure. Investments in fabric, compute and storage must be leveraged across multiple Big Data processes and other business applications with agility and operational simplicity.
Leading the growth of Big Data, the Hadoop 2.x ecosystem will place particular stresses on data center fabrics. New mixed workloads are already using up 10 Gigabit capacity in larger clusters and will soon demand 40 Gigabit fabrics. Network traffic needs continuous optimization to improve completion times. End-to-end data paths must use consistent security policies between multiple data sources and clients. And the sharp surges in bare-metal workloads will demand much more agile ways to swap workloads and improve utilization.
Cisco’s Application Centric Infrastructure leverages a new operational and consumption model for Big Data resources. It dynamically translates existing policies for applications, data and clients into fully provisioned networks, compute and storage. Working with Nexus 9000 telemetry, ACI can continuously optimize traffic paths and enforce policies consistently as workloads change. The solution provides a seamless transition to the new demands of Big Data.
To hear about Cisco’s broader solution portfolio, be sure to register for the October 21st executive webcast, ‘Unlock Your Competitive Edge with Cisco Big Data Solutions.’ And stay tuned for the next blog in the series, from Andrew Blaisdell, which showcases the ability to predictably deliver intelligence-driven insights and actions.
Tags: ACI, analytics, Big Data, Cisco Application Centric Infrastructure, Nexus 9000, UCS, UnlockBigData
Guest post by Aaron Newcomb, Solutions Marketing Manager, NetApp
No one wants the 2:00 am distressed phone call disturbing a good night’s sleep. For IT Managers and Database Administrators, that 2:00 am call is typically bad news about the systems they support. Users in another region are not able to access an application. Customers are not placing orders because the system is responding too slowly. Nightly reporting is taking too long and impacting performance during peak business hours. When your business-critical applications running on Oracle Database are not performing at the speed of business, that creates barriers to customer satisfaction and to remaining competitive. NetApp wants to help break down those barriers and help our customers get a good night’s sleep instead of worrying about the performance of their Oracle Database.
NetApp today unveiled a solution designed to address the need for extreme performance for Oracle Databases: FlexPod Select for High Performance Oracle RAC. This integrated infrastructure solution offers a complete data center infrastructure, including the networking, servers, storage, and management software you need to run your business 24×7, 365 days a year. Since NetApp and Cisco validate the architecture, you can deploy your Oracle Databases with confidence and in much less time than with traditional approaches. Built with industry-leading NetApp EF-550 flash storage arrays and Cisco UCS B200 M3 Blade Servers, this solution can deliver the highest levels of performance for the most demanding Oracle Database workloads on the planet.
The system will deliver more than one million IOPS of read performance for Oracle Database workloads at sub-millisecond latencies. This means faster response times for end users, improved database application performance, and more headroom to run additional workloads or consolidate databases. Not only that, but this pre-validated and pre-tested solution is based on a balanced configuration, so the infrastructure components you need to run your business work in harmony instead of competing for resources. The solution is built with redundancy in mind to eliminate risk and allow for flexibility in deployment options. The architecture scales linearly, so you can start with a smaller configuration and grow as your business needs change, optimizing return on investment. If something goes wrong, the solution is backed by our collaborative support agreement, so there is no finger-pointing, only swift problem resolution.
So what would you do with one million IOPS? Build a new application that will respond to a competitive threat? Deliver faster results for your company? Increase the number of users and transactions your application can support without having to worry about missing critical service level agreements? If nothing else, imagine how great you will sleep knowing that your business is running with the performance needed for success.
Tags: Cisco UCS, data center, FlexPod, netapp, Oracle, Oracle OpenWorld, UCS
Cisco UCS M-Series servers have been purpose-built to fit a specific need in the data center. The core design principles center on sizing the compute node to meet the needs of cloud-scale applications.
When I was growing up, I used to watch a program on PBS called 3-2-1 Contact most afternoons when I came home from school (yes, I’ve pretty much always been a nerd). There was an episode about size and efficiency that, for some reason, I have always remembered. That episode included a short film to demonstrate the relationship between size and efficiency.
The plot goes something like this. Kid #1 says that his uncle’s economy car, which gets a whopping 15 miles to the gallon (this was the 1980s), is more efficient than a school bus that gets 6 miles to the gallon. Kid #2 disagrees and challenges Kid #1 to a contest. But here’s the rub: the challenge is to transport 24 children from the bus stop to school, about 3 miles away, on a single gallon of fuel. Long story short, the school bus completes the task in one trip, but the car has to make 8 trips and runs out of fuel before it completes the task. So Kid #2 proves the school bus is more efficient.
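The arithmetic holds up (using the numbers from the film, and assuming the car drives back empty between trips): the bus covers the 3 miles once, burning 3 ÷ 6 = 0.5 gallons. Eight car trips imply about 3 kids per carload, so the car drives roughly 7 round trips plus a final one-way leg (about 7 × 6 + 3 = 45 miles), which at 15 mpg needs 3 gallons, three times the one-gallon budget.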
The only problem with this logic is that we know that the school bus is not more efficient in all cases.
For transporting 50 people, a bus is very efficient, but if you need to transport 2 people 100 miles to a concert, the bus would be a bad choice. Efficiency depends on the task at hand. In the compute world, a task equates to the workload. Using a 1RU 2-socket E5 server for the distributed cloud-scale workloads that Arnab Basu has been describing would be equivalent to using a school bus to transport a single student. It is not cost effective.
Thanks to hypervisors, we can run multiple workloads on a single server and achieve economies of scale. However, there is a penalty to building that type of infrastructure: you add licensing costs, administrative overhead, and performance penalties.
Customers deploying cloud-scale applications are looking for ways to increase compute capacity without increasing cost and complexity. They need all-terrain vehicles, not school buses: small, cost-effective, easy-to-maintain resources that serve a specific purpose.
Many vendors entering this space are just making the servers smaller. Per the analogy above, smaller helps. But one thing we have learned from server virtualization is that there is real value in the ability to share infrastructure. With physical servers, the challenge becomes: how do you share components of the compute infrastructure without a hypervisor? Power and cooling are easy, but what about network, storage and management? This is where M-Series expands on the core foundations of unified compute to provide a compute platform that meets the needs of these applications.
There are 2 key design principles in Unified Compute:
1.) Unified Fabric
2.) Unified Management
Over the next couple of weeks, Mahesh Natarajan and I will be describing how and why these two design principles became the cornerstone for building the M-Series modular servers.
Tags: Cisco Data Center, Cisco UCS, Cisco UCS Manager, Cloud Computing, UCS, UCS m-series, UCSGrandSlam