
Investigating OpenStack's Multicast Capabilities (Part 1 of 3)

This is my first blog post within the Data Center and Cloud technology area. I recently joined the OpenStack@Cisco team under Lew Tucker, focusing on advanced OpenStack systems research as a Cloud Architect. As part of this role, I performed a gap analysis of the multicast functionality (or the lack thereof) within an OpenStack-based private cloud. Coming from Advanced Services, I have seen multicast serve as a critical component of many data centers, providing group-based access to data (streaming content, video conferencing, etc.). Within a cloud environment this requirement is at least as critical as it is for enterprise data centers, if not more so.

This blog will be the first in a series highlighting the current state of multicast capabilities within OpenStack. Here, I focused the analysis on OpenStack Icehouse running on top of Red Hat Enterprise Linux 7 with Open vSwitch (OVS) and a VLAN-based network environment. I would like to thank the OpenStack Systems Engineering team for their great work laying the foundation for this effort (preliminary tests on Ubuntu and Havana).

I used a virtual traffic generator called TeraVM to generate multicast-based video traffic, allowing for Mean Opinion Score calculation. The Mean Opinion Score, or MOS, is a calculated value expressing the quality of video traffic based on latency, jitter, out-of-order packets, and other network statistics. Historically, the MOS value was based on human perception of the quality of voice calls, hence the word "opinion." Since then it has developed into an industry-standardized way of measuring the quality of video and audio in networks, which makes it a good way to objectively measure the performance of multicast on an OpenStack-based cloud. The MOS value ranges from 1 (very poor) to 5 (excellent); anything above ~4.2 is typically acceptable for service-provider-grade video transmission.
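TeraVM computes the MOS internally from measured network statistics. To give a feel for how impairments map onto the 1-to-5 scale, here is a small sketch of the standard ITU-T G.107 E-model conversion, which maps a transmission-quality rating factor R (degraded by loss, delay, and jitter) to an estimated MOS; note this mapping was defined for voice and is shown only as an illustration, not as TeraVM's actual algorithm.

```python
def r_to_mos(r: float) -> float:
    """Convert an E-model R-factor (ITU-T G.107) to an estimated MOS.

    R starts at ~93 for an unimpaired narrowband call and is reduced
    by impairment factors for loss, delay, and jitter; the polynomial
    below is the standard R-to-MOS mapping.
    """
    if r < 0:
        return 1.0          # unusable quality
    if r > 100:
        return 4.5          # MOS saturates at 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# A nearly unimpaired stream (R ~ 93) lands above the ~4.2 threshold
# mentioned for service-provider-grade transmission.
print(round(r_to_mos(93.2), 2))
```

As R drops below roughly 80 (e.g., due to packet loss on a congested overlay), the resulting MOS falls under 4.0 and the degradation becomes noticeable to viewers.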

I performed the multicast testing on a basic controller/compute-node OpenStack environment, with Neutron handling network traffic. In this blog I focus my analysis solely on the open-source components of OpenStack; Cisco products (CSR and N1K) will be discussed in a follow-up blog. The tenant/provider networks are separated using VLANs. A Nexus 3064-X serves as the top-of-rack switch providing physical connectivity between the compute nodes, which are based on UCS C-Series servers.



Industry's First-Ever Standards-Based Benchmark Result for Big Data

Over the past quarter century, industry standards bodies like the TPC and SPEC have developed several standards for performance benchmarking, which have been a significant driving force behind the development of faster, less expensive, and more energy-efficient systems. The two most influential database benchmark standards have been TPC-C (the industry standard for benchmarking transaction processing systems) and TPC-D and its successor TPC-H (the industry standards for benchmarking decision support systems). The first TPC-C result¹ was published in 1992 and the first TPC-D result² in 1997, both by IBM. The 1,000-plus combined publications and hundreds of research papers since then have driven several innovations in relational database management systems.

The industry and technology landscapes have changed. IT is being extended far beyond traditional transaction processing and data warehousing to big data and analytics. Foreseeing this transition, the TPC developed TPC Express Benchmark HS (TPCx-HS), the industry's first (and so far only) standard for benchmarking big data systems, to provide the industry with verifiable performance, price-performance, and availability metrics of hardware and software systems dealing with big data. The benchmark can be used to assess a broad range of system topologies and Hadoop implementations in a technically rigorous, directly comparable, and vendor-neutral manner.

It's my great pleasure to announce the industry's first-ever TPC Express Benchmark HS results. We published not one but three results today, at 1 TB, 3 TB, and 10 TB Scale Factors, demonstrating the performance and scaling of Cisco UCS Integrated Infrastructure for Big Data:

[HSph is a composite metric representing processing power; $/HSph is price-performance.] The results were audited by a TPC-certified auditor.
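Roughly speaking, the TPCx-HS performance metric is the Scale Factor (data size in TB) divided by the elapsed run time in hours, and price-performance divides the total priced-configuration cost by that rate; the sketch below illustrates the arithmetic with made-up numbers, not the published results.

```python
def hsph(scale_factor_tb: float, elapsed_seconds: float) -> float:
    """Approximate the TPCx-HS performance metric HSph@SF:
    Scale Factor in TB divided by elapsed run time in hours."""
    return scale_factor_tb / (elapsed_seconds / 3600.0)

def price_performance(total_system_cost: float, hsph_value: float) -> float:
    """$/HSph: total priced-configuration cost divided by HSph."""
    return total_system_cost / hsph_value

# Illustrative only: a 10 TB run finishing in exactly one hour
# yields 10.0 HSph; a $500,000 configuration would then score
# $50,000/HSph.
print(hsph(10, 3600), price_performance(500_000, hsph(10, 3600)))
```

The actual specification adds further rules (two runs, with the lower-performing run reported), so published numbers come from the full audited procedure rather than this simplified formula.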

The benchmark configuration consists of Cisco UCS Integrated Infrastructure for Big Data (Cisco UCS CPA v2) with two redundant active-active Cisco UCS 6296 Fabric Interconnects running Cisco UCS Manager version 2.2, 16 Cisco UCS C240 M3 servers running Red Hat Enterprise Linux Server 6.4, and the MapR Distribution including Apache Hadoop.

Additional Information
Cisco UCS Performance Brief
TPC Express Benchmark HS Official Site
TPC Press Release on TPC Express Benchmark HS
Partner Blog

¹ 154 tpmC, $188,562/tpmC, IBM, 12/1995
² 284 QthD, $52,170/QphD, IBM, 09/1992

Software Defined Networks with L4-L7 ADC Policy Automation

It seems like only a short time ago that we introduced Cisco ACI to the market, yet we are already at its one-year anniversary. In that year we have seen tremendous momentum in customer adoption and the partner ecosystem for both the Nexus 9k hardware platform and the ACI software. To date there are more than 1,000 Nexus 9k hardware customers and more than 200 ACI software customers. And don't forget the growing ecosystem of partners, which now stands at an impressive 34.

To commemorate the one-year anniversary of ACI and its success, we have planned a grand Data Center Webcast to be broadcast on Jan 13 at 9 AM PST. Click here to register for the webcast. Attendees will have the opportunity to hear from our ACI ecosystem partners about how their solutions integrate to help customize and extend ACI deployments. The audience will also hear from Cisco customers around the world about the benefits they've discovered with our ACI architecture. Check out Cisco exec Shashi Kiran's blog for more details on the webcast.

For the remainder of this blog I am going to focus on the ACI L4-L7 partner ecosystem momentum. Since August 2014, major L4-L7 Application Delivery Controller (ADC) vendors have collaborated with our Insieme Business Unit to build, test, and certify joint integrated solutions and to introduce publicly downloadable device packages that let customers seamlessly deploy ACI in existing ADC environments.

[Figure 1: ACI L4-L7 service chaining]

What makes the ACI integration with L4-L7 ADC vendors' devices so seamless and easy? The answer lies in the flexible and open service policy management inherent in ACI: the highly open and programmable nature of Cisco APIC, the ability to selectively associate service chains with specific applications and data flows, and the flexibility of applying application delivery policies to different applications (Figure 1). This far exceeds what a traditional network-based ADC can offer. To date, F5, Citrix, and A10 Networks have built FCS versions of device packages for Cisco ACI. I want to take you on a quick tour of each of these joint solutions and the benefits they uniquely bring to existing customer deployments.

The exciting L4-L7 ecosystem ramp began in August 2014, when ADC market leader F5 announced the availability of its device package for ACI. Since then, our partnership has clicked into high gear. We had very successful F5 Agility events in Copenhagen (June) and New York (early August), showcasing the Cisco ACI-F5 BIG-IP joint solution in breakout sessions, the World of Solutions Expo, and keynote panels. Cisco also published a jointly written technical whitepaper, a solution brief, and a design guide with F5. In the webcast planned for Jan 13, we have an exclusive partner panel session featuring F5 exec Calvin Rowland and Cisco exec Soni Jiandani. I urge you to tune in to get the low-down on customer traction and how customers are benefiting from the policy-based automation and application-centric approach of our joint solution.

The Citrix and Cisco strategic partnership dates back to early 2010, with a strategic alliance on the UCS-Citrix desktop virtualization front. Since then, our alliance has expanded into other technology areas, and in August we brought the ACI-Citrix NetScaler joint solution to market with the availability of the Citrix device package for Cisco ACI. Citrix and Cisco ACI engineering teams are also actively working in IETF and OpenDaylight (ODL) standards efforts around the NSH and OpFlex protocols. I can vouch that it will be a rewarding experience to listen to Steve Shah of Citrix at the Jan 13 webcast and get insights into how customers are benefiting from our joint solution, featuring an open policy model and a programmable infrastructure. Check out the solution brief and whitepaper on our joint website for more details.

A10 Networks is the new kid on the ACI ecosystem block. ACI's SDN paradigm is a natural fit for A10 Networks' vision and strategy of exposing L4-L7 networking features programmatically. As a first step, A10 Networks has successfully certified its device package for ACI, which is now available for download. The A10 device package is open source and can easily be enhanced by customers to create custom value with near-ubiquitous programmability. Exciting near-term joint engagements include potential collaboration on the OpFlex and NSH standards efforts, as well as advanced ADC features such as WAF, SSL offload, GSLB, and device partitions, among others. I do not want to steal all of the webcast's thunder, so tune in on Jan 13 to get a 360-degree view from A10 CTO Raj Jalan.

As I write this blog there is more exciting news: Radware is also now testing its ACI device package with the Insieme Business Unit. Stay tuned to hear more about this engagement. The L4-L7 ACI ecosystem momentum is truly on a fast track. In closing, I want to reiterate: do not forget to register for Cisco's ACI webcast on Jan 13.

Related Links

http://blogs.cisco.com/datacenter/citrix-netscaler-device-package-for-cisco-aci-goes-fcs

http://blogs.cisco.com/datacenter/f5-device-package-for-cisco-apic-goes-fcs

http://blogs.cisco.com/datacenter/aci_webcast



A New Generation of Cisco UCS Power Calculator

We are proud to announce the new Cisco UCS Power Calculator and Estimation Tool. It features an all-new user interface (UI) and is live now at http://ucspowercalc.cisco.com.

The tool contains many new features, including the ability to create templates and projects in which configuration data is stored. Templates and projects improve agility and enable collaboration among users through exporting and importing user-specific configuration data.


Additionally, the new power calculator offers a powerful RESTful API, which allows third-party applications to connect and generate power estimates by simply passing in actual configuration data. This architecture provides a single source for all power estimates.


In common with the rest of the Cisco UCS management tool portfolio, the API-driven architecture of the new power calculator enables integration with a number of Cisco tools. One example is tighter integration with the Cisco Commerce Workspace (CCW) power calculator widget for real-time estimation of solution power while building out configurations. Third-party, non-Cisco tools (e.g., DCIM) can also now connect directly to the power calculator and assist users with data center infrastructure planning. For questions about integrating your application with the new power calculator and estimation tool's REST API, please contact Roy Zeighami or Jeffrey Metcalf at ucs-power-calc-dev@cisco.com.
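To make the integration pattern concrete, a third-party tool would typically POST a JSON description of a configuration and read back the estimate. The sketch below is purely illustrative: the endpoint path, payload fields, and response schema are assumptions for this example, not the published API, so consult the team at the address above for the real contract.

```python
import json
import urllib.request

def build_estimate_request(config: dict) -> urllib.request.Request:
    """Package a server configuration as a JSON POST for a
    power-estimation REST endpoint.

    NOTE: the URL path and the payload keys used by callers are
    hypothetical placeholders, not the documented API.
    """
    body = json.dumps(config).encode("utf-8")
    return urllib.request.Request(
        "https://ucspowercalc.cisco.com/api/estimate",  # assumed path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_estimate_request(
        {"platform": "UCS C240 M3", "cpus": 2, "dimms": 16, "drives": 24}
    )
    # urllib.request.urlopen(req) would then return the estimate as JSON.
```

The value of the design is that a DCIM tool, CCW widget, and the web UI can all call the same endpoint, so every consumer sees the same power numbers for a given configuration.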

Previous versions of the Cisco UCS Power Calculator will be retired with redirects to the new Cisco UCS Power Calculator.

Cheers, and thanks to Intel for the collaboration!

UCS Power Calculator: http://ucspowercalc.cisco.com
UCS Communities: http://communities.cisco.com/ucs
UCS Platform Emulator:  http://communities.cisco.com/ucspe
UCS Developed Integrations:  http://communities.cisco.com/ucsintegrations


Disaster Recovery Oversights

It can be challenging and expensive to design an efficient network and data center that minimizes downtime.  Yet, even if you’ve put together a bulletproof solution, there’s always the possibility of disaster to consider.

Developing a robust disaster recovery plan involves much more than just installing redundant resources.  There are many factors to consider, and many are easily overlooked.  For example, a comprehensive disaster recovery plan includes not only redundant electrical systems; it ensures the electricity sources themselves are redundant as well.

Disaster recovery is an example of an application that is well-suited for the cloud.  Certainly you can take on the challenge – and expense – of putting together a complex, in-house solution.  Alternatively, you can leverage the expertise and up-to-date solutions available from cloud providers.  Cloud-based disaster recovery services can also be put in place much faster and at substantially lower cost.

Partnering with a cloud provider can greatly simplify implementing a comprehensive disaster recovery plan.  However, not every cloud provider offers enterprise-class service, nor do they all back their promises with written service-level agreements.  Choosing the wrong service or the wrong provider can put the reliability of your recovery strategy at risk.

In Shopping List for Cloud Recovery Services, cloud provider Sungard AS reviews key factors to consider when evaluating disaster recovery cloud services.  Providers offer many service levels, differing, for example, in the speed with which different infrastructure and applications are restored.  Properly balancing your plan against your business requirements leads to the best price.  The right provider can also help you understand your vulnerabilities and the different approaches to addressing them.

Cloud-based disaster recovery services provide a cost-effective approach to enable you to ensure the safety of your organization’s data and continuity of operations.  Learn more about how industry leaders like Cisco, Sungard AS, and Allstream are working together to manage risk in the cloud.
