
Could Big Data and Cloud go together?

In today’s era of increasing connectivity, data is being generated in vast quantities, and it is increasingly important to generate insights from it quickly and act on them. Gone are the days when one would move data into a data warehouse and extract insights from it to act on at a later date. Here are four scenarios that illustrate why.

Scenario 1: Cloud and Social

If a discussion around a brand is trending positively or negatively, that organization needs to take action immediately and cannot wait to do so. It might want to capitalize on positive sentiment and amplify it, or intervene to remedy a trending negative sentiment. Both Twitter and Facebook provide several real-time analytics capabilities that leverage big data technologies they pioneered themselves. These analytics run within their cloud environments and provide users with real-time insights.
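To make this concrete, here is a minimal Python sketch of the kind of rolling-window sentiment tally such a real-time pipeline performs. The keyword lists, window size, and sample posts are illustrative assumptions, not the analytics Twitter or Facebook actually run.

from collections import deque
from time import time

# Illustrative keyword lists -- production systems use trained models.
POSITIVE = {"love", "great", "awesome"}
NEGATIVE = {"broken", "terrible", "refund"}

class RollingSentiment:
    """Keep a sliding time window of scored posts and report net sentiment."""
    def __init__(self, window_seconds=300):
        self.window = window_seconds
        self.events = deque()  # (timestamp, score) pairs

    def score(self, text):
        words = set(text.lower().split())
        return len(words & POSITIVE) - len(words & NEGATIVE)

    def add(self, text, now=None):
        now = now if now is not None else time()
        self.events.append((now, self.score(text)))
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()  # drop posts older than the window

    def net_sentiment(self):
        return sum(score for _, score in self.events)

stream = RollingSentiment(window_seconds=300)
stream.add("Love the new release, great job")
stream.add("The app is broken again, terrible")
print(stream.net_sentiment())  # net score over the last 5 minutes

A positive trend would be a cue to amplify the conversation; a negative one, a cue to intervene before it spreads.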



The New IT: How Cloud Changes Everything

The move to cloud can be daunting.  In their blog, Overcoming the Organizational Challenges of Moving to Cloud, Presidio describes one of the barriers organizations face when transitioning to cloud.  Building a hybrid cloud – one based on private on-premises resources extended through the public cloud network – requires many shifts in thinking for IT.

In short, IT is becoming the cloud services broker within its own company.  This means IT is less about building out infrastructure and more about brokering cloud-based services and applications.  To do this, IT needs to be able to provide services tailored to its user base.  In turn, IT needs access to flexible services designed to meet its specific requirements.

Another important shift in thinking for IT is to realize that the cloud is a doorway to more than just virtual servers.  It is a portal to new applications and new ways of doing business so you can act upon emerging opportunities quickly.  It is an assurance that you can have the performance you need to follow through on these opportunities.  And it is a direct connection to ongoing innovation, enabling your organization to seamlessly access leading technology without extensive capital investment.

Cloud providers like Presidio understand this dual role of cloud: helping manage costs today while enabling unrestricted expansion into the future.  And with their Cisco Powered services, Presidio offers a cloud that is also built for reliability, security, and scalability.

Learn more about how Presidio’s Hybrid Cloud and Cisco Powered cloud and managed services can transform your business.

Tags: , ,

ACI Delivers Operational Simplicity

January 12, 2015 at 6:45 am PST

Let’s start this blog with a hypothetical scenario. Suppose you’re the CIO and you’ve committed to your CEO and the Board of Directors that your company will execute an innovative new strategy to delight your customers directly through their mobile devices and leverage the cloud to serve them at scale.  Your development is 90% complete and you only need to finish your final beta and production readiness testing before rolling out to production, when you learn that your leading competitor will beat you to market with their own application next month.

Is your cloud infrastructure really as agile as you believe? Will the physical infrastructure scale with the load?  Will you be able to secure your application and data from threats?  Can you rapidly deploy a production application across servers, networks, and storage infrastructure securely?

During Cisco’s Data Center webcast on Jan 13, we’ll walk through the hypothetical use case above and then see how Day 1 operations for configuration and deployment can be addressed in real life by customers and enabling vendors.

Episode 1: An Impossible Deadline

day1 (click to play)

We’ll also look at Day 2 operations and learn how important application visibility across physical and virtual infrastructure is to meeting the most stringent uptime requirements.

Episode 2: The Needle in the Haystack

day2 (click to play)

Finally, we examine the challenges of decommissioning applications securely while maintaining compliance.

Episode  3: Operation Clean Up

day3 (click to play)

Join us during the webcast to hear from ACI customers who will share their production experiences with ACI and how it impacts their Day 1 and Day 2 operations.  You will also hear from ACI ecosystem partners who will share how they collaborate through ACI’s open policy model to simplify application delivery and security and to orchestrate open clouds.

Make a note on your calendar for January 13th at 9 AM PST / 12 PM EST and see Is your Data Center Ready for the Application Economy? (Register Here!).

The video on demand will be accessible through this same link.

 

If you are traveling to Cisco Live in Milan, Italy, please come to my session PSODCT-2455, “Simplify Day 0, 1, and 2 Operations in Application Centric Data Centers,” on Jan 29th from 1:15 PM to 2:15 PM to learn how operations such as tenant on-boarding, creating application containers and self-service catalogs, and application monitoring and troubleshooting can all be simplified with application policy driven automation.


Investigating OpenStack’s Multicast Capabilities (Part 1 of 3)

This is my first blog post within the Data Center and Cloud technology area. I recently joined the Openstack@Cisco team under Lew Tucker as a Cloud Architect, focusing on advanced OpenStack systems research. As part of this role, I performed a gap analysis on the functionality (or the lack thereof) of multicast within an OpenStack-based private cloud. Coming from Advanced Services, I have seen multicast as a critical component of many data centers, providing group-based access to data (streaming content, video conferencing, etc.). Within a cloud environment this requirement is at least as critical as it is for enterprise data centers, if not more so.

This blog will be the first in a series highlighting the current state of multicast capabilities within OpenStack. Here, I focused the analysis on OpenStack Icehouse running on Red Hat Enterprise Linux 7 with Open vSwitch (OVS) and a VLAN-based network environment. I would like to thank the OpenStack Systems Engineering team for their great work laying the foundation for this effort (preliminary tests on Ubuntu and Havana).

I used a virtual traffic generator called TeraVM to generate multicast-based video traffic, allowing for Mean Opinion Score calculation. The Mean Opinion Score, or MOS, is a calculated value representing the quality of video traffic based on latency, jitter, out-of-order packets, and other network statistics. Historically, the MOS value was based on human perception of the quality of voice calls, hence the word “opinion.” Since then it has evolved into an industry-standard way of measuring the quality of video and audio in networks, which makes it a good way to objectively measure the performance of multicast on an OpenStack-based cloud. The MOS value ranges from 1 (very poor) to 5 (excellent); anything above roughly 4.2 is typically acceptable for service-provider-grade video transmission.
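As a rough illustration of how latency, jitter, and loss feed into a MOS figure, here is a simplified E-model-style estimate in Python. This is only a sketch under common rule-of-thumb assumptions; TeraVM’s video MOS calculation is more sophisticated than this.

def estimate_mos(latency_ms, jitter_ms, loss_pct):
    """Rough E-model-style MOS estimate from basic network statistics."""
    # Weight jitter more heavily than fixed latency, plus a small fixed overhead.
    effective_latency = latency_ms + 2 * jitter_ms + 10.0

    # Start from the default R-factor and deduct impairments.
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40.0
    else:
        r = 93.2 - (effective_latency - 120) / 10.0
    r = max(0.0, min(100.0, r - 2.5 * loss_pct))  # packet-loss penalty

    # ITU-T G.107 mapping from R-factor to a 1..5 MOS scale.
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

print(round(estimate_mos(latency_ms=20, jitter_ms=2, loss_pct=0.1), 2))  # about 4.4

The point of the sketch is simply that small increases in jitter or loss pull the score down quickly, which is why MOS is a sensitive indicator for multicast video delivery.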

I performed the multicast testing on a basic controller/compute-node OpenStack environment, with Neutron handling network traffic. In this blog, I focus my analysis solely on the open-source components of OpenStack; Cisco products (CSR and N1K) will be discussed in a follow-up blog. The tenant/provider networks are separated using VLANs. A Nexus 3064-X serves as the top-of-rack switch providing physical connectivity between the compute nodes, and the nodes themselves are Cisco UCS C-Series servers.
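For readers who want to reproduce a similar VLAN-backed setup, the sketch below shows how such a provider network could be defined with python-neutronclient on an Icehouse-era deployment. The credentials, network name, VLAN ID, physical network label, and CIDR are all placeholder assumptions, not the values used in my lab.

# Placeholder credentials and names -- adjust to your own environment.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

# VLAN-backed provider network; the segmentation ID must be trunked
# on the top-of-rack switch (the Nexus 3064-X in this setup).
net = neutron.create_network({'network': {
    'name': 'multicast-test-net',
    'provider:network_type': 'vlan',
    'provider:physical_network': 'physnet1',
    'provider:segmentation_id': 100,
}})['network']

neutron.create_subnet({'subnet': {
    'network_id': net['id'],
    'ip_version': 4,
    'cidr': '192.0.2.0/24',
}})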



Industry’s First-Ever Standards-Based Benchmark Result for Big Data

Over the past quarter century, industry standards bodies like the TPC and SPEC have developed several performance benchmarking standards, which have been a significant driving force behind the development of faster, less expensive, and more energy-efficient systems. The two most influential database benchmark standards have been TPC-C (the industry standard for benchmarking transaction processing systems) and TPC-D and its successor TPC-H (the industry standards for benchmarking decision support systems). The first TPC-C result [1] was published in 1992 and the first TPC-D result [2] in 1997, both by IBM. The 1,000+ combined publications and hundreds of research papers since then have driven several innovations in relational database management systems.

The industry and technology landscapes have changed. IT is being extended far beyond traditional transaction processing and data warehousing to big data and analytics. Foreseeing this transition, the TPC developed the TPC Express Benchmark HS (TPCx-HS), the industry’s first (and so far only) standard for benchmarking big data systems, to provide the industry with verifiable performance, price-performance, and availability metrics for hardware and software systems dealing with big data. The benchmark can be used to assess a broad range of system topologies and Hadoop implementations in a technically rigorous, directly comparable, and vendor-neutral manner.

It’s my great pleasure to announce the industry’s first-ever TPC Express Benchmark HS results. We published not one but three results today, at the 1 TB, 3 TB, and 10 TB Scale Factors, demonstrating the performance and scaling of Cisco UCS Integrated Infrastructure for Big Data:

[HSph is a composite metric representing processing power; $/HSph is the price-performance metric.] The results were audited by a TPC-certified auditor.
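For readers unfamiliar with the metric, HSph@SF is essentially the scale factor divided by the elapsed time of the performance run expressed in hours, and $/HSph divides the total system cost by that figure. The Python sketch below illustrates the arithmetic with made-up numbers; consult the TPCx-HS specification for the exact run rules and the full disclosure reports for the published results.

def hsph(scale_factor_tb, elapsed_seconds):
    """Approximate TPCx-HS primary metric: scale factor per elapsed hour."""
    return scale_factor_tb / (elapsed_seconds / 3600.0)

def price_per_hsph(total_system_cost, scale_factor_tb, elapsed_seconds):
    """Approximate price-performance metric ($/HSph)."""
    return total_system_cost / hsph(scale_factor_tb, elapsed_seconds)

# Hypothetical numbers for illustration only -- not the published results.
print(round(hsph(10, 5400), 2))                 # 10 TB in 1.5 hours -> 6.67 HSph
print(round(price_per_hsph(250000, 10, 5400)))  # -> 37500 ($/HSph)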

The benchmark configuration consists of Cisco UCS Integrated Infrastructure for Big Data (Cisco UCS CPA v2): two redundant, active-active Cisco UCS 6296 Fabric Interconnects running Cisco UCS Manager version 2.2 and 16 Cisco UCS C240 M3 servers running Red Hat Enterprise Linux Server 6.4 and the MapR Distribution including Apache Hadoop.

Additional Information
Cisco UCS Performance Brief
TPC Express Benchmark HS Official Site
TPC Press Release on TPC Express Benchmark HS
Partner Blog

[1] 154 tpmC, $188,562/tpmC, 12/1995, IBM
[2] 284 QthD, $52,170/QphD, 09/1992, IBM