
Big Returns on Big Data through Operational Intelligence

Guest Blog by Jack Norris

Jack is responsible for worldwide marketing for MapR Technologies, the leading provider of an enterprise-grade Hadoop platform. He has over 20 years of enterprise software marketing experience, with demonstrated success ranging from defining new markets at small companies to growing sales of new products at large public companies. Jack’s broad experience includes launching and establishing analytic, virtualization, and storage companies and leading marketing and business development for an early-stage cloud storage software provider.

Big Data is changing the competitive dynamics for organizations across a range of operational use cases. Operational intelligence refers to applications that combine real-time, dynamic analytics with business operations, delivering insights as events happen. Operational intelligence requires high performance. “Performance” is a word that is used quite liberally and means different things to different people. Everyone wants something faster. When was the last time you said, “No, give me the slow one”?

When it comes to operations, performance is about the ability to take advantage of market opportunities as they arise. Doing so requires the ability to quickly monitor what is happening; it requires both real-time data feeds and the ability to react quickly. The beauty of Apache Hadoop, and specifically MapR’s platform, is that data can be ingested as a real-time stream, analysis can be performed directly on that data, and automated responses can be executed. This is true for a range of applications across organizations, from advertising platforms, to online retail recommendation engines, to fraud and security detection.
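To make that ingest-analyze-respond loop concrete, here is a minimal Python sketch. The event source, the toy risk score, and the alert threshold are all invented for illustration; a real deployment would read the stream from the Hadoop/MapR ingest layer rather than an in-memory generator.

```python
# Sketch of the operational-intelligence loop: ingest a stream, analyze each
# event as it arrives, and trigger an automated response. All names and
# thresholds are illustrative, not MapR or Hadoop APIs.
import random
import time
from itertools import islice

def event_stream():
    """Simulate a real-time feed of transaction events."""
    while True:
        yield {"user": random.randint(1, 100), "amount": random.uniform(1.0, 5000.0)}
        time.sleep(0.01)

def score(event):
    """Toy risk score: larger amounts look riskier."""
    return event["amount"] / 5000.0

def respond(event):
    """Automated response executed directly on the incoming data."""
    print(f"ALERT: flag user {event['user']} for review (amount={event['amount']:.2f})")

if __name__ == "__main__":
    for event in islice(event_stream(), 200):   # look at 200 events, then stop
        if score(event) > 0.9:                  # illustrative threshold
            respond(event)
```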

When looking at harnessing Big Data, organizations need to realize that multiple applications will need to be supported. Regardless of which application you introduce first, more will quickly follow. Not all Hadoop distributions are created equal. Or more precisely, most Hadoop distributions are very similar, with only minor value-added services separating them. The exception is MapR. By coupling the best of the Hadoop community updates with MapR’s innovations, it can support the broadest set of applications, including mission-critical applications that require a depth and breadth of enterprise-grade Hadoop features.



Self-Service Arrives for Workload Automation – and Saves the Day

It’s close to 11 p.m. on the last day of the quarter in a large corporation. IT gets an urgent request to postpone the closing-of-the-books process because there’s a large order stuck in the CRM system.

This means that the order won’t hit the books and be recorded as a booking.  The customer won’t get her order, the salesperson won’t get paid, and finance will show a missing number.

This generates an urgent call to the team that manages the workload automation platform: Hold the closing workflow!  Stop the presses!

The admins have to get to their console to find the job and pause it.  Not a huge deal, except there are thousands of jobs to be run and hundreds of business people calling on a regular basis, at all kinds of hours.

Some customers have created help desks for their workload automation teams, or have even off-shored a call center to serve these kinds of requests.

No more.  Introducing self-service for workload automation.
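As a rough sketch of what self-service could look like from the business user’s side, the Python snippet below puts a workflow on hold through a web request. The endpoint, payload, and credentials are hypothetical, invented for illustration; they are not the actual Tidal Enterprise Scheduler self-service API.

```python
# Hypothetical self-service "hold this workflow" request. Endpoint and
# payload shape are invented for this sketch.
import json
import urllib.request

def hold_workflow(base_url, workflow_name, reason, token):
    """Ask the scheduler to pause a job group before it starts."""
    payload = json.dumps(
        {"workflow": workflow_name, "action": "hold", "reason": reason}
    ).encode()
    req = urllib.request.Request(
        f"{base_url}/api/workflows/hold",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example: a finance user holds the quarter-close workflow without calling IT.
# hold_workflow("https://scheduler.example.com", "quarter-close",
#               "Large order stuck in CRM", token="...")
```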



Tale of Three Cities: Report on Cisco’s Tidal Enterprise Scheduler User Groups

The Intelligent Automation Solutions Business Unit hosts user groups for our Workload Automation software customers.  Our Tidal Enterprise Scheduler is used by many enterprises to manage the execution of business processes and to move data around the data center.  We recently met many customers during our user groups in Chicago, Boston and New York City, and we see some very interesting differences between these cities in who our users are and how they use our product.  For example, in our Chicago user group during the winter we had some key large customer implementations and many customers who were deploying job scheduling at the department level and looking to drive usage throughout their enterprise.  It is very common to start using Workload Automation in one key area and then expand into other areas as the success multiplies.  It was good to see old friends who have used our scheduler for almost a decade as well as new users learning how to use our software to accomplish cool new technical use cases.



Critical Path and “What if?” Analytics for Enterprise Job Scheduling – get your Big Data in the right place before you make a resume-impacting decision

They say that data about your data is more important than the data itself.  Having the right data in the data warehouse at the right time, or loaded up for Hadoop analysis, is critical.  I have heard stories where the wrong product was sent to the wrong store because the reports, and the decisions based on them, were built on the wrong data about what was selling best.  That can be a resume-impacting mistake in this modern world of data-driven product placement around the globe.  In a previous blog about Enterprise Job Scheduling (aka Workload Automation) http://blogs.cisco.com/datacenter/workload-automation-job-scheduling-applications-and-the-move-to-cloud/ I discussed the basic uses of automating and scheduling batch workloads.  Business intelligence, data warehousing and Big Data initiatives need to aggregate data from different sources and load it into very large data warehouses.

Let’s look into the life of the administrators and operators of a workload automation tool.  The typical enterprise may have thousands, if not tens of thousands, of job definitions.  Those are the individual jobs that get run: look for this file in a drop box, FTP data from that location, extract this specific set of data from an Oracle database, connect to that Windows server and launch this process, load this data into a data warehouse using Informatica PowerCenter, run this process chain in SAP BW and take that information to this location.  All this occurs to get the right data in the right place at the right time.  These jobs are then strung together in sequences we in the Intelligent Automation Solutions Business Unit at Cisco call Job Groups.  These groups can represent automated business processes and may have tens to hundreds of steps.  Each job may depend on other jobs completing, and jobs may be waiting for resources to become available.  This all leads to a very complex execution sequence.  These job groups run every day; some run multiple times a day, some only run at the end of the quarter.
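As a rough illustration of what a job group looks like under the hood, the Python sketch below models one as a dependency graph and computes an order in which a scheduler could run it. The job names and dependencies are invented for this example; a real Tidal job group is defined in the scheduler itself, not in code like this.

```python
# A job group as a dependency graph: each job lists the jobs it depends on.
# Names and dependencies are illustrative only.
from graphlib import TopologicalSorter  # Python 3.9+

job_group = {
    "check_dropbox_file": set(),
    "ftp_transfer":       {"check_dropbox_file"},
    "extract_oracle":     {"check_dropbox_file"},
    "load_warehouse":     {"ftp_transfer", "extract_oracle"},  # e.g. a PowerCenter load
    "run_sap_bw_chain":   {"load_warehouse"},
}

# A scheduler must run each job only after everything it depends on completes.
for job in TopologicalSorter(job_group).static_order():
    print("run:", job)
```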

The typical IT operations team has a group of people who design, test and implement these job groups by working with the people in business IT who design and implement business processes.  Often these job groups need to finish by a certain time to meet the needs of the business.  If you are a stock exchange, some job groups have to finish within so many hours after the market closes.  If you have to get your data to a downstream business partner (or customer) by a certain time, you become very attached to watching those jobs execute.  No pun intended, your job may be on the line.

A new technology has hit the scene for customers of the Cisco Tidal Enterprise Scheduler: JAWS Historical and Predictive Analytics (http://www.termalabs.com/products/cisco-tidal-enterprise-scheduler.html).  These modules take all of the historical and real-time performance data from the Scheduler and, through a set of algorithms, produce historical, real-time, predictive, and business analytics.  This is the data about the data I mentioned previously.  Our customers can run what-if analyses and get early indication that a particular job group will not be able to finish in time, so administrators can take action before it is too late.  This is critical to getting the data in the right place so that analytics can be performed correctly, and therefore not sending 1,000 units of the wrong product to the wrong store location.  Thanks to our partners at Terma Software Labs (http://info.termalabs.com/cisco-systems-and-terma-software-labs-to-join-forces-for-more-sla-aware-workload-processing/).
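The kind of early warning described above can be sketched with a simple heuristic: estimate the remaining runtime of a job group from historical durations and compare it to the time left before the SLA. The numbers and the percentile rule below are illustrative only; they are not the actual JAWS algorithms.

```python
# Toy SLA check: flag a job group that is predicted to miss its deadline,
# based on the 90th percentile of past run times. All figures are invented.
from statistics import quantiles

def estimated_remaining_minutes(history_by_job, remaining_jobs):
    """Estimate remaining runtime from historical durations per job."""
    total = 0.0
    for job in remaining_jobs:
        durations = history_by_job[job]
        total += quantiles(durations, n=10)[-1]  # ~90th percentile
    return total

history = {
    "extract_oracle":   [12, 15, 14, 20, 13],
    "load_warehouse":   [45, 50, 48, 60, 47],
    "run_sap_bw_chain": [30, 28, 35, 33, 31],
}

remaining = ["load_warehouse", "run_sap_bw_chain"]
minutes_to_sla = 70

if estimated_remaining_minutes(history, remaining) > minutes_to_sla:
    print("WARNING: job group is predicted to miss its SLA -- intervene now")
```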


What provisioning the Cloud infrastructure and cooking have in common…


I like to cook. Sometimes, I’ll grab whatever ingredients I have on hand, put them in a Dutch oven, throw in a few spices, and make a delicious casserole that can never be repeated. At other times, I’ll follow a recipe to the letter, measure and weigh everything that goes in, and produce a great meal that I can repeat consistently each time.

When provisioning servers and blades for a Cloud infrastructure, the same two choices exist: follow your instinct and build a working (but not repeatable) system, or follow a recipe that will ensure that systems are built in an exacting fashion, every time. Without a doubt, the latter method is the only way to proceed.

Enter the Cisco Tidal Server Provisioner (an OEM from www.linmin.com), an integral component of Cisco Intelligent Automation for Cloud and Cisco Intelligent Automation for Compute. TSP lets you create “recipes” that can be deployed onto physical systems and virtual machines with repeatability and quality, every time. These recipes can range from the simple, e.g., install a hypervisor or an operating system, to the very complex: install an operating system, then install applications, run startup scripts, configure the system, access remote data, register services, etc.

Once you have a recipe (we call it a Provisioning Template), you can apply it to any number of physical systems or virtual machines without having to change the recipe. Some data centers use virtualization for sandbox development and prototyping, and use physical servers and blades for production. Some data centers do the opposite: prototype on physical systems, then run the production environment in a virtualized environment. And of course, some shops are “all physical” or “all virtual”. Being able to deploy a recipe-based payload consistently on both physical and virtual systems provides the ultimate flexibility. Yes, once you’ve created a virtual machine, you’ll likely use VMware vSphere services to deploy, clone and move VMs, but as long as you’re using TSP to create that “first VM”, you have the assurance that you have a known-good, repeatable way of generating the golden image. When the time comes to update the golden image, don’t touch the VM: instead, change the recipe, provision a new VM, and proceed from there.
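To illustrate the recipe idea, the short Python sketch below treats a provisioning template as plain data and applies it, unchanged, to several physical and virtual targets. The template fields and the provision() stand-in are invented for this example; the real Tidal Server Provisioner drives OS installers and hypervisors rather than printing steps.

```python
# One "recipe" (provisioning template) applied identically to many targets.
# Field names and targets are illustrative only.
provisioning_template = {
    "name": "web-tier-golden-image",
    "os": "rhel-6.4-x86_64",
    "packages": ["httpd", "php"],
    "post_install": ["configure_firewall.sh", "register_monitoring.sh"],
}

def provision(target, template):
    """Apply the same recipe to a physical blade or a virtual machine."""
    print(f"[{target}] installing {template['os']}")
    for pkg in template["packages"]:
        print(f"[{target}] installing package {pkg}")
    for script in template["post_install"]:
        print(f"[{target}] running {script}")

for target in ["blade-07", "blade-08", "vm-golden-01"]:
    provision(target, provisioning_template)
```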

