
Together We Can: Cisco at NetApp Insight 2014

October 23, 2014 at 2:24 pm PST


Next week in Las Vegas, NetApp hosts its annual conference devoted to the technology-curious storage and data management professional, including systems engineers, professional services consultants, channel partners, and, for the first time ever, NetApp customers. Cisco is a diamond-level sponsor of the event and will have a very strong presence, highlighting FlexPod (based on Cisco UCS integrated infrastructure and NetApp storage systems), Cisco ACI, and cloud to more than 5,000 attendees. We have several activities planned throughout the conference week, including keynotes, boot camps, breakout sessions, booth demos, and more. The event agenda is posted here.

The highlight of the event will be the Cisco keynote given by Rob Lloyd, Cisco's President of Development and Sales, on Tuesday, October 28th. Arrive early -- I am anticipating a full house for this presentation. As I mentioned earlier, we have a lot of good information to share: boot camps, breakout sessions, and theater presentations that highlight Cisco and NetApp's technology alignment, leadership, and partnership, and that raise awareness of FlexPod solutions and momentum.

A list of our boot camps, breakout sessions, and theater presentations is below:

Boot Camps:
• Evolution of Cisco UCS Portfolio
• FlexPod with ACI Infrastructure: Current and Future Directions

Breakout Sessions:
• An Application-Centric Approach to Managing Your Infrastructure
• Build the Next Generation Automated Data Center with Cisco 9000
• MetroCluster on FlexPod with the Nexus 7000
• FlexPod Solutions with Cisco Application-Centric Infrastructure (ACI)
• Security on FlexPod Datacenter
• FlexPod Deep Dive

Expo Theater Presentation:
• Maximize Your Purchasing Power with Cisco Capital

Take advantage of this great opportunity to network with peers and stop by the Cisco booth to learn about our products and solutions. I hope to see you in Las Vegas.

Do Your Own Self-Audit to Get the Most from the Hybrid Cloud

The hybrid cloud offers a key opportunity to businesses and other organizations.  Specifically, a hybrid cloud merges public cloud and private cloud resources.  Private clouds can be either premises-based or managed by a service provider.  By taking a hybrid approach, a company can dynamically extend the capabilities of its private cloud using public cloud resources.

Hybrid clouds offer many advantages over using just public or private cloud resources.  One of the most important is the ability to expand day-to-day operations in a cost-effective manner.  One method for using hybrid cloud in this way is described in the blog, “Do Your Homework Before Shopping for Hybrid Cloud Services” from our partner SungardAS.

Businesses begin by performing a self-audit of applications.  This includes identifying mission-critical applications.  Mission-critical applications are those that, if not available, could prevent an organization from functioning.  These applications are kept within the private cloud.

Less critical applications are those such as infrastructure services, messaging, collaboration, and database applications.  These may be candidates for moving to the public cloud.  In many cases, they can be maintained at a lower operating cost than an on-premises deployment.  In addition, applications in the public cloud can be easily and quickly scaled.  This gives organizations much needed flexibility and agility.  In turn, this enables organizations to act on market opportunities more quickly, giving them a powerful competitive edge.

Cloud applications can also be tightly integrated with network resources under a common management framework, such as those offered by SungardAS in partnership with Sigma Solutions.  This provides even greater flexibility as users move between virtual and physical environments.

With the right service provider, applications in the public cloud can be as reliable as, or even more reliable than, applications in a private cloud.  For example, the public cloud uses resource pools to ensure greater business continuity.  Consider what happens if the server hosting your applications goes down.  In a private cloud, you may experience an interruption in service while your IT team addresses the problem.  With a public cloud, your service provider can move your applications and data to another server.  In many cases, users won't even notice that anything out of the ordinary has happened.

Downtime is never convenient, which is why enterprise-class service is the standard for our partners who provide Cisco Powered services.  Even when an application itself isn't mission-critical, the people using it may be performing mission-critical tasks.  Such tasks could include team collaboration to meet a crucial deadline or closing a sale with an important customer.

Hybrid cloud is already transforming the way we do business.  Want to learn more about how your business can take full advantage of the hybrid cloud from market leaders like Cisco, SungardAS, and Sigma Solutions?  Then click here for access to tools to help you, including the white paper, “The Compelling Business Case for Hybrid Cloud Services.”  You can also learn more about why Cisco Powered is the industry standard for cloud and managed services.


OpenStack Juno: The Basics

Guest Blog by Mark Voelker, Technical Lead, Cisco http://blogs.cisco.com/author/MarkVoelker/

Today, the OpenStack@Cisco team is in a celebratory mood: OpenStack 2014.2 ("Juno") has been released!  The 10th release of OpenStack contains hundreds of new features and thousands of bug fixes, and is the result of contributions from over 1,400 developers.  You can find out more about Cisco's contributions to Juno here.  What's more, in just a few short weeks we'll be joining the rest of the OpenStack community in Paris for the OpenStack Summit, where plans for the next release ("Kilo") will be laid.  We think that OpenStack's appeal has never been higher, and we are excited to see continued growth forecast for the OpenStack market.  Since OpenStack continues to see new growth, we thought this would be a good time to step back and review a few basics for those of you who are just getting acquainted with today's dominant open source cloud platform.

First, a bit of history.  OpenStack was founded in the summer of 2010 as an open source project driven primarily by Rackspace Hosting (who contributed a scalable object storage system that is today known as OpenStack Swift) and NASA (who contributed a compute controller that is today known as OpenStack Nova).  The announcement quickly attracted attention, and in September of 2012 the OpenStack Foundation was created as an independent body to promote the development, distribution, and adoption of the OpenStack platform.  Since then, the Foundation has grown to over 18,800 members spanning over 140 countries and representing over 400 supporting companies.

Simply put, OpenStack is "Open source software for creating private and public clouds."  Not only is it developed by a wide variety of corporate and individual contributors, it is also used by hundreds of companies (including Cisco!) for a variety of purposes.  You can find a sampling at the OpenStack User Stories and OpenStack SuperUser websites.  The software itself is a set of loosely coupled distributed systems: several discrete pieces of software with a focus on supporting multi-tenancy and scalability for on-demand resources.  Whereas OpenStack originally contained just two major components, today's integrated Juno release contains 11: Nova (compute), Swift (object storage), Glance (images), Keystone (identity), Horizon (dashboard), Neutron (networking), Cinder (block storage), Ceilometer (telemetry), Heat (orchestration), Trove (databases), and Sahara (data processing).

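To make the "loosely coupled" point concrete, here is a minimal sketch (mine, not from the original post) that authenticates against a Juno-era Keystone v2.0 endpoint and prints the service catalog where each of those discrete components registers itself; the URL, tenant, and credentials are all placeholders.

import requests

# Hypothetical Keystone v2.0 endpoint and demo credentials (Juno-era identity API).
AUTH_URL = "http://keystone.example.com:5000/v2.0/tokens"
body = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}

# Keystone answers with a token plus the catalog of registered services.
catalog = requests.post(AUTH_URL, json=body).json()["access"]["serviceCatalog"]
for service in catalog:
    # Prints pairs such as "compute -> nova" and "network -> neutron".
    print(service["type"], "->", service["name"])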


Hard Choices!

Sorry, I did not mean to steal the title of Hillary Clinton's book. It just so happened that we had to deal with "hard choices" of our own when we had to decide on the management approach for our new M-Series platform. In the first blog of the UCS M-Series Modular Servers journey series, Arnab briefly alluded to the value our customers place on UCS Manager. As we had more customer conversations, we recognized a clear demarcation when it came to infrastructure management. One group of customers would not take any offering from us that is not managed by UCS Manager. On the other hand, a few customers who had built their own management frameworks were more enamored of the disaggregated server offering we intended to build. For this second set of customers, there was a strong perception that UCS Manager did not add much value to their operations.

We were faced with a very difficult choice: release the platform with UCS Manager, or provide standalone management. After multiple rounds of discussions, we made a conscious decision to launch M-Series as a UCS Manager-managed platform only. Ironically enough, it was one such customer discussion that vindicated our decision. This particular customer was deploying large cloud-scale applications and did not care much for UCS Manager. During the conversation, they talked about some BIOS issues in their very large web farm that had surfaced a couple of years earlier. Almost two years later, they were still rolling out the BIOS updates!

UCS Manager is the industry's first tool to elegantly break down operational silos in the datacenter by introducing policy-based management of disparate infrastructure elements. This was made possible by the concept of Service Profiles, which eased the rapid adoption of converged infrastructure. Service Profiles abstract all of the elements associated with a server's identity, rendering the underlying servers essentially stateless. This enables rapid server re-purposing and workload mobility, and makes it easy to enforce operational policies such as firmware updates. The whole offering is built on a foundation of XML APIs, which makes it extremely easy to integrate with other datacenter management, automation, and orchestration tools. You can learn more about UCS Manager by clicking here.
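
As a concrete illustration of that XML API, here is a minimal sketch (my own, with a placeholder hostname and demo credentials): every session begins with an aaaLogin call that returns a cookie, which then authenticates subsequent requests such as a configResolveDn query.

import xml.etree.ElementTree as ET

import requests

UCSM_URL = "https://ucsm.example.com/nuova"  # the XML API endpoint on UCS Manager

# aaaLogin returns a session cookie in the outCookie attribute.
# verify=False tolerates the self-signed certificate typical of a lab UCSM.
resp = requests.post(UCSM_URL, data='<aaaLogin inName="admin" inPassword="password"/>', verify=False)
cookie = ET.fromstring(resp.text).attrib["outCookie"]

# Resolve a distinguished name (the top-level "sys" object) to verify the session.
query = '<configResolveDn cookie="%s" dn="sys" inHierarchical="false"/>' % cookie
print(requests.post(UCSM_URL, data=query, verify=False).text)

# Sessions on the fabric interconnect are finite, so log out when done.
requests.post(UCSM_URL, data='<aaaLogout inCookie="%s"/>' % cookie, verify=False)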

UCS M-Series Modular Servers are the latest addition to the infrastructure that can be managed by UCS Manager. M-Series is targeted at cloud-scale applications, which are deployed on thousands, if not tens of thousands, of nodes. At that scale, automated policy enforcement matters even more than in traditional datacenter deployments. Managing groups of compute elements as a single entity, fault aggregation, BIOS updates, and firmware upgrades are a few key UCS Manager features that surfaced repeatedly during customer conversations. That was one of the primary drivers in our decision to release this platform with UCS Manager.

In the cloud-scale space, the ability to deploy many servers almost instantaneously is a critical requirement. All of the nodes are also deployed as essentially identical compute elements, so standardization of configurations across all of the servers is a must. UCS Manager makes it extremely easy to create service profile templates ahead of time (using the UCS Manager emulator) and to create any number of service profile clones literally at the push of a button; a sketch of what that looks like through the API follows below. Associating the service profiles with the underlying infrastructure takes a couple of clicks. Net-net: you rack, stack, and cable once, then re-provision and re-deploy to meet your workload needs without making any physical changes to your infrastructure.
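
For the curious, here is a hedged sketch of that push-button cloning through the XML API's lsInstantiateNNamedTemplate method; the template name and server names are hypothetical, and the login follows the same pattern as the sketch above.

import xml.etree.ElementTree as ET

import requests

UCSM_URL = "https://ucsm.example.com/nuova"  # placeholder UCS Manager address

# Log in exactly as in the earlier sketch; credentials are placeholders.
resp = requests.post(UCSM_URL, data='<aaaLogin inName="admin" inPassword="password"/>', verify=False)
cookie = ET.fromstring(resp.text).attrib["outCookie"]

# Instantiate three service profiles from one (hypothetical) template in org-root;
# each <dn> child names a new service profile cloned from the template.
payload = (
    '<lsInstantiateNNamedTemplate cookie="%s" dn="org-root/ls-web-template" '
    'inTargetOrg="org-root" inHierarchical="false">'
    '<inNameSet><dn value="web-01"/><dn value="web-02"/><dn value="web-03"/></inNameSet>'
    '</lsInstantiateNNamedTemplate>' % cookie
)
print(requests.post(UCSM_URL, data=payload, verify=False).text)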

Storage Profiles are the most notable enhancement made to UCS Manager to support M-Series. This feature allows our customers to slice and dice the SSDs in the M-Series chassis into smaller virtual disks. Each virtual disk is then served up as if it were a local PCIe device to the server nodes within the compute cartridges plugged into the chassis. Steve explained that concept in detail in the previous blog. In the next edition, we will go into more detail about Storage Profiles and other UCS Manager features pertinent to the M-Series.


SAP HANA Tailored Data Center Integration (TDI) expanded for Networking

Cisco embraces SAP HANA TDI for Networking

SAP recently announced that it has expanded SAP HANA Tailored Data Center Integration (TDI) to include networking.  So what does this mean?  It means that if a SAP customer installs SAP HANA, and that customer has enough capacity on their existing networking equipment to satisfy the SAP HANA certification requirements for networking, then the customer can use their existing network architecture for SAP HANA without having to purchase additional equipment to meet those requirements.
