Cisco’s Intelligent Automation software helps our customers achieve benefits such as lower TCO, greater efficiency, and greater business agility, without compromising stability, manageability, or security. Our automation software can be used for a broad range of IT processes, but we have a unique and differentiated solution for automating SAP-related tasks. Last week, we announced a global reseller agreement that allows SAP to sell our Cisco Intelligent Automation for SAP software solution.
SAP views Cisco’s software as the preferred data center automation tool for SAP customers, and it is branding and selling our solution as SAP IT Process Automation by Cisco. With nearly 42,000 SAP customers worldwide, this is a great new channel opportunity for our automation software, and a great complement to SAP Solution Manager and the Run SAP methodology. It’s also a powerful example of the strong partnership between Cisco and SAP to meet our customers’ needs.
This video with my colleague Flint Brenton (senior vice president, Cisco Intelligent Automation Solutions) and Stephen Spears (senior vice president, SAP Application Lifecycle Management Software) provides an overview of the importance of this agreement and the value to SAP customers:
Lately I’ve been seeing some industry people trying to apply the principles of data center network fabric models to their Wide Area Networks (WANs), and implying that such fabrics can be extended through service provider WANs. Data center fabrics and WANs are horses of very different colors, with far too many differences for these perspectives to hold up.
Fundamentally they are different beasts with one more easily tamed than the other. Data center networks generally have well known end points and well-ordered designs.
Multi-tenant Data Center Designs
Bandwidth within data centers is virtually unlimited relative to WAN bandwidth. Data center networks are also far more stable and bounded in characteristics such as latency, loss, jitter, capacity, and restoration capability, all of which significantly influence WAN service delivery. The same network assumptions hold between every pair of end points, which makes fabric modeling generally a good approximation for data centers and thus practical to use.
Did you know that 93 percent of B2B buyers use search to begin the buying process (source: Marketo)? What’s more, most buying cycles are 70 to 80 percent complete before companies are even willing to engage with salespeople (source: SiriusDecisions).
With statistics like that, wouldn’t you want to ensure your online assets provide the most compelling information about your products and services? After all, the goal is to get prospective customers to the finish line, preferably with you.
Enventis, a Cisco Gold and Unified Communications Master Specialized Partner, did just that.
They know prospective customers have a lot of options when it comes to selecting a data center for their network, server, and storage needs. Enventis also knows those prospective customers would have to travel great distances to visit their facilities located in Edina, Minnesota. So they created a virtual tour to help save on travel time and money, while providing an accurate view of what their data center looks like.
Want to see their tour? Watch the video below.
I chatted with Enventis Senior Marketing Specialist Elke Zimmermann about the making of the Enventis Data Center Virtual Tour video, the results that came out of it, and tips for creating your own virtual tour.
Last week we participated in the annual Hadoop Summit held in San Jose, CA. When we first met with Hortonworks about the Summit many months back, they mentioned that this year’s event would promote reference architectures from many companies in the Hadoop ecosystem. This was great to hear: we had previously presented results from a large round of testing on Network and Compute Considerations for Hadoop at Hadoop World 2011 last November, and we were looking to do a second round of testing to develop our original findings into a set of best practices, including failure and connectivity options. This validation also answers a key enterprise question: “Can we use the same architecture and components for Hadoop deployments?” Since much of Hadoop’s value is realized once it is integrated into existing enterprise data models, the goal of the testing was not only to define a reference architecture, but also to define a set of best practices so Hadoop can be integrated into current enterprise architectures.
Below are the results of this new testing effort, presented at Hadoop Summit 2012. Thanks to Hortonworks for their collaboration throughout the testing.
Back in March we announced the third generation of UCS, with significant expansions to the I/O and systems management capabilities of the platform as well as a new lineup of servers. This month we’re continuing to expand the UCS server lineup with the addition of four new models. The latest batch of M3 systems comprises three Intel Xeon “EN” class machines (E5-2400 series processors) as well as a four-socket “EP” class (E5-4600 series) blade server. Specifically: the UCS B22 and B420 M3 blades and the C22 and C24 M3 rack servers. These new servers round out the UCS portfolio with an even stronger set of products optimized for scale-out and light general-purpose computing, as well as a new price/performance 4S category in the mid-range.
If you prefer watching to reading, here is a nice conversation between Intel’s Boyd Davis, VP & GM, Data Center Infrastructure Group; Cisco’s Jim McHugh, VP, UCS Marketing; and Scott Ciccone, Sr. Product Marketing Manager, highlighting the key benefits of these new models.
To figure out how these fit in, let’s step back and consider the broader evolution of server technology in play here:
1) Cisco has made server I/O more powerful and much simpler.
One of the key differentiators of UCS is the way high-capacity server network access has been aggregated through Cisco Virtual Interface Cards (VICs) and infused with built-in, high-performance virtual networking capabilities. In “pre-UCS” server system architectures, one of the main design considerations was the type and quantity of physical network adapters required.
Networking, along with computing (sockets/cores/frequency/cache), system memory, and local disk, has historically been among the primary resources weighed in the balancing act of cost, physical space, and power consumption, all of which are manifested in the various permutations of server designs required to cover the myriad of workloads most efficiently. Think of these as your four server subsystem food groups. Architecture purists will remind us that everything outside the processors and their cache falls into the category of “I/O,” but let’s not get pedantic, because that would mess up my food group analogy.
In UCS, I/O is effectively taken off the table as a design worry, because every server gets its full USRDA of networking through the VIC: generous helpings of bandwidth, rich with Fabric Extender technology vitamins that yield hundreds of Ethernet and FC adapters through one physical device. Gone are the days of hemming and hawing over how many mezzanine card slots your blade has, or how many cards you’ll need to feed that hungry stack of VMs on your rack server. This simplification changes things for the better because it takes a lot of complication out of the equation.
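The core idea, carving many virtual adapters out of a single physical device so that I/O sizing stops being a per-server hardware decision, can be sketched in a toy model. This is purely an illustration of the concept, not Cisco’s implementation; the class names, the 256-adapter limit, and the adapter names are all assumptions for the example:

```python
# Toy model (not Cisco's implementation) of a VIC-style adapter:
# one physical card presents many virtual Ethernet (vNIC) and
# Fibre Channel (vHBA) adapters to the server.
from dataclasses import dataclass, field


@dataclass
class VirtualAdapter:
    name: str
    kind: str  # "eth" for a vNIC, "fc" for a vHBA


@dataclass
class VirtualInterfaceCard:
    max_adapters: int = 256  # illustrative capacity, not a real spec
    adapters: list = field(default_factory=list)

    def carve(self, name: str, kind: str) -> VirtualAdapter:
        """Carve one virtual adapter out of the physical card."""
        if kind not in ("eth", "fc"):
            raise ValueError("kind must be 'eth' or 'fc'")
        if len(self.adapters) >= self.max_adapters:
            raise RuntimeError("no virtual adapter capacity left")
        vif = VirtualAdapter(name, kind)
        self.adapters.append(vif)
        return vif


vic = VirtualInterfaceCard()
for i in range(4):
    vic.carve(f"vnic{i}", "eth")  # virtual NICs for VM traffic
vic.carve("vhba0", "fc")          # virtual HBA for FC storage

print(len(vic.adapters))  # five virtual adapters, one physical card
```

The point the sketch makes is the one in the paragraph above: the number of adapters a server sees becomes a software decision made after deployment, rather than a hardware decision baked into the server design.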