Unified Computing was born on March 16th, 2009, and bold predictions were made that day about what UCS would do for customers and the industry. If we take a trip in the way-back machine and unearth some of the actual slides from that event (thank you, Mr. Peabody), here is what we find:
Three years later we know that the vision of what was needed was spot on and the predictions of the impact were actually too conservative. Customers using UCS are telling us they’re experiencing:
80 percent increase in administrator productivity
90 percent reduction in deployment times
40 percent improvement in application performance
30 percent lower infrastructure costs
60 percent reduction in power and cooling costs
And now it gets even better. Today brings new innovation across the UCS platform: a third generation of technology that delivers the power of unification and continues to lead the transition to fabric-based data center infrastructure. Most of all, in this announcement we're celebrating how the innovation in UCS is paying off for our customers. It's one thing to have a vision and another to deliver on it: this week Gartner updated its Magic Quadrant for Blade Servers, and Cisco moved from Visionary to Leader.
Witness the world-record application performance benchmark results posted by Intel in this launch. UCS certainly isn't the new kid on the server block anymore. This system more than holds its own.
So enough of the rhetoric: where’s the beef in the new news? It turns out that there is so much new technology here that I need to break it into another post…
Tags: blades, data center, Servers, UCS, unified computing, Unified Data Center
In part 1 of this posting, I related a real-life experience of mine, where I learned that customer problems were often a better source for product and service definition than formally stated customer requirements. I’d like to take this discussion further, via a concept in product and project management called the “tyre swing”. Read More »
Tags: cisco_services, cloud, Cloud Computing, data center, data_center
I was sitting in a room with a client the other day. Normally in these conference rooms with the mahogany tables and high-back leather chairs*, you have Cisco on one side of the table and the client on the other. That wasn't the case here: the table was formica and the chairs were folding. Also in the room were two groups that had never spoken before except in rare cases: "The network is down!" or "Our hosts can't see their storage!" Yes, my friends, it was the LAN and SAN folks in the room. The topic of FCoE was in front of us, and the question was around their soon-to-be-deployed Nexus 5000 switching infrastructure. The discussion between the two parties over who would manage the Nexus 5000 reminded me of a scene from Ghostbusters… Read More »
Tags: data center, FCoE, Fibre Channel, MDS, Nexus 5000, SAN, storage area network
Very soon Intel is going to announce a new generation of processors. The Cisco-Intel partnership has grown significantly over the past few years with the astonishing success of the Unified Computing System, based on Intel processors and the unique technology provided by Cisco.
So guess what? We are, of course, ready to announce a third generation of the Unified Computing System, which takes advantage of the new features delivered by Intel, combined with the latest innovations from Cisco.
So please join us on March 8th at 9:00 am PST (12:00 pm EST) to understand how Cisco is delivering on the vision of Gartner, which identified Fabric Computing as the preferred infrastructure for virtualization and cloud to make your data center architecture more agile, scalable, and adaptable.
What can you expect from this 60-minute webcast? Cisco CEO John Chambers, VP of Server Access and Virtualization Soni Jiandani, senior Intel executives, and CEOs from large organizations (manufacturers, service providers, and more) will detail the financial and organizational benefits you will gain by deploying these new systems.
To register for this live broadcast and learn how you can significantly improve your infrastructure now, click here.
Tags: Cisco, data center, Fabric computing, Intel, Servers, UCS, unified computing system
They say that data about your data is more important than the data itself. Having the right data in the data warehouse at the right time, or loaded up for Hadoop analysis, is critical. I have heard stories where the wrong product was sent to the wrong store because of incorrect conclusions about what was selling best; the reports and decisions behind it were based on the wrong data. In this modern world of data-driven product placement around the globe, that can be a resume-impacting decision. In a previous blog about Enterprise Job Scheduling (aka Workload Automation) http://blogs.cisco.com/datacenter/workload-automation-job-scheduling-applications-and-the-move-to-cloud/ I discussed the basic uses of automating and scheduling batch workloads. Business intelligence, data warehousing, and Big Data initiatives need to aggregate data from different sources and load it into very large data warehouses.
Let's look into the life of the administrator and operator of a workload automation tool. A typical enterprise may have thousands, if not tens of thousands, of job definitions. These are the individual jobs that get run: look for this file in a drop box, FTP data from that location, extract this specific set of data from an Oracle database, connect to that Windows server and launch this process, load this data into a data warehouse using Informatica PowerCenter, run this process chain in SAP BW and take that information to this location. All of this happens to get the right data to the right place at the right time. These jobs are then strung together into sequences that we in the Intelligent Automation Solutions Business Unit at Cisco call Job Groups. These groups can represent automated business processes and may have tens to hundreds of steps. Each job may depend on other jobs completing, or may be waiting for resources to become available, all of which leads to a very complex execution sequence. These job groups run every day; some run multiple times a day, and some run only at the end of the quarter.
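A job group like this is, in effect, a dependency graph that the scheduler must walk in a valid order. As a minimal sketch (the job names and the `execution_order` helper are hypothetical illustrations, not the Tidal Scheduler's actual API), here is how such a group could be modeled and ordered with a topological sort:

```python
from collections import defaultdict, deque

def execution_order(jobs):
    """Return a valid run order for a job group, honoring dependencies.

    `jobs` maps each job name to the list of jobs it depends on.
    Raises ValueError if the dependencies contain a cycle.
    """
    # Kahn's algorithm: repeatedly run jobs whose dependencies are all done.
    indegree = {job: len(deps) for job, deps in jobs.items()}
    dependents = defaultdict(list)
    for job, deps in jobs.items():
        for dep in deps:
            dependents[dep].append(job)

    ready = deque(job for job, n in indegree.items() if n == 0)
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for nxt in dependents[job]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(jobs):
        raise ValueError("cycle detected in job dependencies")
    return order

# A hypothetical nightly-load job group mirroring the steps above.
nightly_load = {
    "watch_dropbox":   [],
    "ftp_pull":        [],
    "extract_oracle":  ["watch_dropbox"],
    "launch_win_proc": ["ftp_pull"],
    "load_warehouse":  ["extract_oracle", "launch_win_proc"],
    "run_sap_chain":   ["load_warehouse"],
}
print(execution_order(nightly_load))
```

A real scheduler adds resource gating, retries, and calendars on top of this, but the core constraint is the same: a downstream job cannot start until every job it depends on has finished.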
The typical IT operations team has a group of people who design, test, and implement these job groups by working with the business IT people who design and implement business processes. Often these job groups need to finish by a certain time to meet the needs of the business. If you are a stock exchange, some job groups have to finish within so many hours after the market closes. If you have to get your data to a downstream business partner (or customer) by a certain time, you become very attached to watching those jobs execute. No pun intended: your job may be on the line.
A new technology has hit the scene for customers of the Cisco Tidal Enterprise Scheduler: JAWS Historical and Predictive Analytics (http://www.termalabs.com/products/cisco-tidal-enterprise-scheduler.html). These modules take all of the historical and real-time performance data from the Scheduler and, through a set of algorithms, produce historical, real-time, and predictive business analytics. This is the data about the data I mentioned previously. Our customers can run what-if analyses and get early warning that a particular job group will not finish on time, so administrators can take action before it is too late. This is critical to getting the data to the right place so that analytics can be performed correctly, rather than sending 1,000 units of the wrong product to the wrong store location. Thanks to our partners at Terma Software Labs (http://info.termalabs.com/cisco-systems-and-terma-software-labs-to-join-forces-for-more-sla-aware-workload-processing/).
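To make the "early warning" idea concrete, here is a minimal sketch of SLA-risk prediction from historical run times. This is an assumption about the general approach, not JAWS's actual algorithm; the function names, job history, and times are all illustrative:

```python
from datetime import datetime, timedelta
from statistics import mean, pstdev

def predict_finish(start, history_minutes, margin_sd=2.0):
    """Predict a conservative finish time from past run durations.

    Uses mean plus `margin_sd` standard deviations of historical
    runtimes as a simple stand-in for a real predictive model.
    """
    est = mean(history_minutes) + margin_sd * pstdev(history_minutes)
    return start + timedelta(minutes=est)

def sla_at_risk(start, history_minutes, deadline):
    """True if the job group is predicted to miss its deadline."""
    return predict_finish(start, history_minutes) > deadline

# Hypothetical nightly job group: the last five runs took 95-130 minutes.
history = [95, 110, 102, 130, 118]
start = datetime(2012, 3, 1, 1, 0)       # group kicks off at 01:00
deadline = datetime(2012, 3, 1, 3, 0)    # data must land by 03:00
print(sla_at_risk(start, history, deadline))  # → True: raise the alert early
```

With a two-sigma margin, this history predicts a finish around 03:15, so the operator gets flagged at the 01:00 kickoff rather than discovering the miss at 03:00.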
Tags: data center, intelligent automation, job scheduling, workload automation