
Critical Path and “What if?” Analytics for Enterprise Job Scheduling – get your Big Data in the right place before you make a resume-impacting decision

They say that data about your data is more important than the data itself. Having the right data in the data warehouse at the right time, or loaded and ready for Hadoop analysis, is critical. I have heard stories where the wrong product was sent to the wrong store because reports, and the decisions based on them, were built on the wrong data, leading to incorrect conclusions about what was selling best. That can be a resume-impacting mistake in this modern world of data-driven product placement around the globe. In a previous blog about Enterprise Job Scheduling (aka Workload Automation), http://blogs.cisco.com/datacenter/workload-automation-job-scheduling-applications-and-the-move-to-cloud/ , I discussed the basic uses of automating and scheduling batch workloads. Business intelligence, data warehousing, and Big Data initiatives all need to aggregate data from different sources and load it into very large data warehouses.

Let’s look into the life of the administrators and operators of a workload automation tool. A typical enterprise may have thousands, if not tens of thousands, of job definitions. These are individual jobs that get run: look for this file in a drop box, FTP data from that location, extract this specific set of data from an Oracle database, connect to that Windows server and launch this process, load this data into a data warehouse using Informatica PowerCenter, run this process chain in SAP BW and deliver that information to this location. All of this happens to get the right data in the right place at the right time. These jobs are then strung together into sequences that we in the Intelligent Automation Solutions Business Unit at Cisco call Job Groups. These groups can represent automated business processes and may have tens to hundreds of steps. Each job may depend on other jobs completing, and jobs may be waiting for resources to become available, all of which leads to a very complex execution sequence. These job groups run every day; some run multiple times a day, while others run only at the end of the quarter.
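To make that structure concrete, here is a minimal sketch, in Python, of a job group modeled as a dependency graph and sequenced with a topological sort. The job names and classes are purely illustrative assumptions on my part, not the Cisco Tidal Enterprise Scheduler API.

```python
from collections import defaultdict, deque

class Job:
    """One schedulable unit of work, e.g. an FTP transfer or a DB extract."""
    def __init__(self, name, depends_on=()):
        self.name = name
        self.depends_on = list(depends_on)  # jobs that must finish first

def run_order(jobs):
    """Return one valid execution order for a job group (topological sort)."""
    indegree = {job.name: len(job.depends_on) for job in jobs}
    dependents = defaultdict(list)
    for job in jobs:
        for dep in job.depends_on:
            dependents[dep].append(job.name)
    ready = deque(name for name, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        name = ready.popleft()
        order.append(name)
        for child in dependents[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(jobs):
        raise ValueError("cycle detected: this job group can never complete")
    return order

# A tiny hypothetical "load the warehouse" group, mirroring the steps above.
group = [
    Job("watch_dropbox"),
    Job("ftp_sales_data", depends_on=["watch_dropbox"]),
    Job("extract_oracle"),
    Job("load_warehouse", depends_on=["ftp_sales_data", "extract_oracle"]),
    Job("run_sap_bw_chain", depends_on=["load_warehouse"]),
]
print(run_order(group))
```

A real scheduler layers calendars, resource gates, retries, and event triggers on top of this, but the dependency graph is the backbone that makes the execution sequence so complex.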

The typical IT operations team has a group of people who design, test, and implement these job groups, working with people in business IT who design and implement the business processes themselves. Often these job groups need to finish by a certain time to meet the needs of the business. If you are a stock exchange, some job groups have to finish within a set number of hours after the market closes. If you have to deliver your data to a downstream business partner (or customer) by a certain time, you become very attached to watching those jobs execute. No pun intended: your job may be on the line.

A new technology has hit the scene for customers of the Cisco Tidal Enterprise Scheduler: JAWS Historical and Predictive Analytics, http://www.termalabs.com/products/cisco-tidal-enterprise-scheduler.html . These modules take all of the historical and real-time performance data from the Scheduler and, through a set of algorithms, produce historical, real-time, predictive, and business analytics. This is the data about the data I mentioned previously. Our customers can perform “what if” analyses and get an early indication that a particular job group will not be able to finish in time, so administrators can take action before it is too late. That is critical to getting the data in the right place so that analytics can be performed correctly, and 1,000 units of the wrong product are not sent to the wrong store location. Thanks to our partners at Terma Software Labs: http://info.termalabs.com/cisco-systems-and-terma-software-labs-to-join-forces-for-more-sla-aware-workload-processing/ .
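Conceptually, the predictive piece reduces to critical-path arithmetic: given historical average run times for each job and the dependency graph, you can project when the group will finish and compare that against the deadline. The Python sketch below is my own simplification of that idea, not Terma’s actual algorithms; the job names, durations, and the three-hour SLA are assumptions for illustration.

```python
def projected_finish(prereqs, avg_minutes):
    """Critical-path (longest-path) estimate of job group completion.

    prereqs: job name -> list of jobs that must finish first
    avg_minutes: job name -> historical average duration in minutes
    Returns projected minutes from group start to the last completion.
    """
    memo = {}
    def finish_time(name):
        if name not in memo:
            start = max((finish_time(d) for d in prereqs[name]), default=0.0)
            memo[name] = start + avg_minutes[name]
        return memo[name]
    return max(finish_time(name) for name in prereqs)

prereqs = {
    "extract": [],
    "transform": ["extract"],
    "load": ["transform"],
    "report": ["load"],
}
avg_minutes = {"extract": 45, "transform": 90, "load": 60, "report": 15}

eta = projected_finish(prereqs, avg_minutes)
deadline = 180  # e.g. must finish within 3 hours of market close
if eta > deadline:
    print(f"Early warning: projected {eta:.0f} min exceeds the {deadline} min SLA")
```

The real modules work from live and historical Scheduler data and can re-project as jobs actually complete, but a deadline comparison like the one above is the essence of the early warning that lets administrators act before it is too late.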

Meet Stephen Speirs: Data Center and Cloud Blogger Expert

March 2, 2012 at 2:39 pm PST

Based in Cisco’s Glasgow, Scotland office, Stephen is a distinguished blogger on the Data Center and Cloud team in Cisco Services. He joined Cisco in 2000 via the Atlantech Technologies acquisition and was a Senior Manager in Product Management within Cisco’s Network Management R&D team, where he focused on IP/MPLS service provider network management.

During this time, he brought to market the unique Cisco MPLS Diagnostics Expert product, taking it from (literally) a corridor conversation through definition to launch, and on to multiple industry awards. He has over 20 years of industry experience in IT, Data Center, and Service Provider network management, which he shares through his writing. By keeping customers’ new-technology adoption challenges at the forefront of his mind and weaving novelty into his blogging, Stephen has won a loyal readership and established himself as a role model for other Cisco bloggers.

Stephen’s Customer-Centric Vision

Blogging is no one-way conversation for Stephen. He has the customer in mind at all times and is always conscious of what they care about. Before writing, he interviews customers and partners to better understand their viewpoints and present a more well-rounded perspective.

Prediction for 2nd half of 2012: Infrastructure as a Service deployments expand to include IT as a Service

IT shops deploying clouds over the past year have focused on Infrastructure as a Service (http://en.wikipedia.org/wiki/Infrastructure_as_a_service#Infrastructure) as a way to drive speed in virtual and physical server provisioning, cost savings in operations, proactive service level agreements, and increased control and governance. In an earlier blog, http://blogs.cisco.com/datacenter/the-secret-is-now-out-you-can-simplify-cloud-deployments-with-cisco-unified-management/ , I introduced Cisco Intelligent Automation for Cloud and how it addresses IaaS for private, hybrid, and public clouds. Key to this are the service catalog and the self-service portal. Moving to cloud is NOT about taking hundreds of server configuration templates and making them immediately self-service; in that model, all you are doing is automating VM sprawl. The key is defining a limited set of services and options that end users, such as application owners and technical staff, can order through a self-service portal and manage throughout their life cycle.
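To illustrate that last point, here is a hedged Python sketch of what a deliberately small service catalog can look like. The service names, options, and validation logic are hypothetical, not the Cisco Intelligent Automation for Cloud catalog format.

```python
# A deliberately small catalog: a few governed offerings with bounded
# options, instead of hundreds of raw server configuration templates.
CATALOG = {
    "web-server": {"os": ["rhel6", "win2008"], "size": ["small", "medium"]},
    "app-server": {"os": ["rhel6"], "size": ["medium", "large"]},
    "database": {"os": ["rhel6"], "size": ["large"]},
}

def validate_request(service, **options):
    """Reject anything outside the catalog; this is what curbs VM sprawl."""
    if service not in CATALOG:
        raise ValueError(f"{service!r} is not an offered service")
    for key, value in options.items():
        allowed = CATALOG[service].get(key, [])
        if value not in allowed:
            raise ValueError(f"{key}={value!r} not allowed; choose from {allowed}")
    return {"service": service, **options}

print("provisioning:", validate_request("web-server", os="rhel6", size="small"))
```

Every request that clears validation maps to a known, supportable configuration with a defined life cycle, which is what keeps self-service from turning into automated sprawl.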

This Old Data Center

February 22, 2012 at 10:00 am PST

For this week’s Data Center Deconstructed we’re setting the Wayback machine to 1998, when Cisco opened a new engineering Data Center at its headquarters in San Jose, California.

Cisco at RSA 2012: Putting Things In Context

It’s that time of year again. The annual RSA security show brings together all the major security vendors under one roof for a week of training, announcements, and vendors hawking their latest wares. This year we can expect the usual cadre of legacy security vendors with their stand-alone, siloed products pretending that they now support clouds, mobile workers, and BYOD. Booth babes, jugglers, magicians, and flashy giveaways will fill the exhibit halls while vendors play shell games with their customers’ security, all adding to the cacophony of an already confusing situation.

Amidst all the hoopla and fanfare, however, Cisco Systems, the largest security vendor in the world, will be there with perhaps the only reasonable strategy for securing the networks organizations are creating today.
