For the most part, my last post was concerned with what Cisco ONE is, so here we will explore a little more of the why. I am going to assume you read that post, so let's dig in. One of the fundamental concepts behind ONE is illustrated below: the idea of exposing the network in a highly granular way, emphasizing not only the ability to exert programmatic control over switch behavior, but also the ability of the network to present interesting and useful information back up to the applications.
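As a rough sketch of that two-way idea, consider the following toy model: an application pushes policy down to a switch and reads state back up from it. All class and method names here are illustrative inventions for this post, not a real Cisco ONE or onePK API.

```python
# Toy model of the two-way concept: programmatic control flowing down,
# useful network state flowing back up to the application.
# Names are hypothetical; this is not a real onePK API.

class Switch:
    """A toy programmable switch exposing both control and state."""

    def __init__(self, name):
        self.name = name
        self.policies = {}          # flow -> action, set by applications
        self.port_utilization = {}  # port -> percent, reported upward

    # "Control" direction: the application programs switch behavior.
    def install_policy(self, flow, action):
        self.policies[flow] = action

    # "Visibility" direction: the network presents information back up.
    def report_utilization(self):
        return dict(self.port_utilization)


sw = Switch("edge-1")
sw.port_utilization = {"eth1": 72, "eth2": 8}
sw.install_policy("video-traffic", "prioritize")

# The application can now react to what the network reports back.
hot_ports = [p for p, u in sw.report_utilization().items() if u > 50]
```

The point of the sketch is simply the symmetry: the same granular interface that lets an application change switch behavior also lets the switch feed information upward.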
Great news for SAP users! Cisco's industry-leading IT process automation software, based upon Cisco Process Orchestrator and our knowledge-based automation solution, will be sold by SAP.
This solution addresses market demand for IT process automation, helping IT staff standardize and unify operational processes across the enterprise to support maximum uptime and optimal resource usage. This is a big deal for SAP customers, who will now get the advantages of this automation in achieving their business goals. We have well over 300 out-of-the-box automation workflows that drive operational excellence. Imagine being the person who used to do all of this manually. Sign me up for:
- automated health checks
- unified incident response
- predefined corrective actions
- advanced reporting
- the ability to capture your best practices in reusable workflows through a drag-and-drop editor
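To make the "reusable workflow" idea concrete, here is a minimal sketch of the pattern: a health check, a corrective action, and a report composed into a named, re-runnable sequence. The step names and structure are hypothetical illustrations, not Process Orchestrator's actual API.

```python
# Hypothetical sketch of a reusable automation workflow: ordered steps
# that thread a shared context, so the same workflow can be re-run
# against any target. Not a real Process Orchestrator interface.

def check_disk(ctx):
    # Automated health check: flag disks over 90% full.
    ctx["disk_ok"] = ctx.get("disk_used_pct", 0) < 90
    return ctx

def corrective_action(ctx):
    # Predefined corrective action, taken only when the check fails.
    if not ctx["disk_ok"]:
        ctx["actions"] = ctx.get("actions", []) + ["purge-temp-files"]
    return ctx

def report(ctx):
    # Reporting step: summarize what happened.
    ctx["report"] = "healthy" if ctx["disk_ok"] else "remediated"
    return ctx

def run_workflow(steps, ctx):
    """Run each step in order, passing the shared context along."""
    for step in steps:
        ctx = step(ctx)
    return ctx


health_check = [check_disk, corrective_action, report]  # reusable workflow
result = run_workflow(health_check, {"disk_used_pct": 95})
```

The same `health_check` list can be run unchanged against every server in the estate, which is the essence of replacing manual, one-off operations with standardized workflows.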
In fact, the core software that drives this is the same software that orchestrates our Intelligent Automation for Cloud solution. It is very cool that we can solve such different problems, SAP IT process automation and cloud orchestration (provisioning of resources for physical and virtual servers), with the same software product.
This automation product will be co-marketed and sold with the SAP Landscape Virtualization Management (LVM) solution as part of SAP's virtualization and cloud solution offering.
I want to extend a shout-out to Eric Robertson, our product manager for SAP solutions. Stay tuned for even more exciting solutions based upon Cisco Intelligent Automation.
Last month, James Sharp wrote about Cisco Advanced Services, and how together with partners, Cisco can offer Canadian businesses the services they need, when they need them.
James goes on to say that there are specific ways advanced services can help depending on which stage of the lifecycle you are currently in. This makes me think of food, and how meals during the day keep me running efficiently and effectively, just like the services Cisco offers for your data center at different stages of the lifecycle.
Didn't your parents always tell you that breakfast is the most important meal of the day? Well, without proper planning, you will certainly be hard-pressed to begin building. Planning services assess your current data center and provide in-depth recommendations for building and managing it.
Lunch involves design and implementation. No, there’s no nap after.
If you don't have a hearty dinner, you may not be on top of your game the next day. Managing is all about being proactive: anticipating issues before they arise and optimizing performance.
And as for dessert? Read what James has to say about his personal experiences with advanced services and Cisco customers:
Last week we participated in the annual Hadoop Summit, held in San Jose, CA. When we first met with Hortonworks about the Summit many months back, they mentioned that this year's event would be promoting reference architectures from many companies in the Hadoop ecosystem. This was great to hear: we had previously presented results from a large round of testing on network and compute considerations for Hadoop at Hadoop World 2011 last November, and we were looking to do a second round of testing that would take our original findings and develop a set of best practices around them, including failure scenarios and connectivity options. This round of validation also addresses a key enterprise question: "Can we use the same architecture and components for Hadoop deployments?" Since much of the value of Hadoop is realized once it is integrated into current enterprise data models, the goal of the testing was not only to define a reference architecture, but also to define a set of best practices so Hadoop can be integrated into current enterprise architectures.
Below are the results of this new testing effort, presented at Hadoop Summit 2012. Thanks to Hortonworks for their collaboration throughout the testing.
Back in March we announced the third generation of UCS, with significant expansions to the I/O and systems management capabilities of the platform as well as a new lineup of servers. This month we're continuing to expand the UCS server lineup with the addition of four new models. The latest batch of M3 systems comprises three Intel Xeon "EN" class machines (E5-2400 series processors) as well as a four-socket "EP" (E5-2600 series) blade server. Specifically: the UCS B22 and B420 M3 blades and the C22 and C24 M3 rack servers. These new servers round out the UCS portfolio with an even stronger set of products optimized for scale-out and light general-purpose computing, as well as a new price/performance 4S category in the mid-range.
If you prefer watching to reading, here is a nice conversation between Intel's Boyd Davis, VP & GM of the Data Center Infrastructure Group; Cisco's Jim McHugh, VP of UCS Marketing; and Scott Ciccone, Sr. Product Marketing Manager, highlighting the key benefits of these new models.
To figure out how these fit in, let’s step back and consider the broader evolution of server technology in play here:
1) Cisco has made server I/O more powerful and much simpler.
One of the key differentiators of UCS is the way high-capacity server network access has been aggregated through Cisco Virtual Interface Cards (VICs) and infused with built-in, high-performance virtual networking capabilities. In "pre-UCS" server system architectures, one of the main design considerations was the type and quantity of physical network adapters required. Networking, along with compute (sockets/cores/frequency/cache), system memory, and local disk, has historically been among the primary resources weighed in the balancing act of cost, physical space, and power consumption, all of which are manifested in the various permutations of server designs required to cover the myriad workloads most efficiently. Think of these as your four server subsystem food groups. Architecture purists will remind us that everything outside the processors and their cache falls into the category of "I/O," but let's not get pedantic, because that would mess up my food group analogy.

In UCS, I/O is effectively taken off the table as a design worry, because every server gets its full U.S. RDA of networking through the VIC: generous portions of bandwidth, rich with Fabric Extender technology vitamins, yielding hundreds of Ethernet and FC adapters through one physical device. Gone are the days of hemming and hawing over how many mezzanine card slots your blade has, or how many cards you're going to need to feed that hungry stack of VMs on your rack server. This simplification changes things for the better by taking a lot of complication out of the equation.
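A toy model may help make the "one physical device, many adapters" idea concrete: a single physical adapter carved into multiple virtual Ethernet and Fibre Channel adapters, each with a bandwidth share, without oversubscribing the physical link. This is purely illustrative; the class and method names are invented for this sketch and are not a UCS Manager API.

```python
# Toy model of a VIC presenting many virtual adapters through one
# physical device. Hypothetical names; not a real UCS interface.

class VirtualInterfaceCard:
    def __init__(self, total_bandwidth_gbps):
        self.total = total_bandwidth_gbps
        self.vnics = {}  # name -> (kind, bandwidth in Gbps)

    def allocated(self):
        # Sum of bandwidth already handed out to virtual adapters.
        return sum(bw for _, bw in self.vnics.values())

    def create_vnic(self, name, kind, bandwidth_gbps):
        """Carve out a virtual adapter; kind is 'eth' or 'fc'.
        Refuse to oversubscribe the physical link."""
        if self.allocated() + bandwidth_gbps > self.total:
            raise ValueError("insufficient bandwidth on physical adapter")
        self.vnics[name] = (kind, bandwidth_gbps)


vic = VirtualInterfaceCard(total_bandwidth_gbps=40)
vic.create_vnic("eth0", "eth", 10)  # virtual Ethernet adapter
vic.create_vnic("fc0", "fc", 8)     # virtual Fibre Channel adapter
remaining = vic.total - vic.allocated()
```

The design point the sketch captures is that adapter count becomes a software decision against one pool of physical bandwidth, rather than a hardware decision about how many mezzanine cards to buy.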