Cisco Blogs


Cisco Blog > Data Center

Changing the Economics of Desktop and App Virtualization at the Enterprise Edge

In my last post I discussed how Cisco UCS Mini is helping us expand beyond the traditional confines of the data center, to deliver desktop and app virtualization with exceptional user experience, manageability and TCO savings – at the enterprise edge.

We’re only going to see more investment and focus in this space, thanks to the general trend towards making VDI and app virtualization more tenable in a wider array of use cases across the enterprise, pushing from data center to enterprise edge.  This week, I want to offer a proof point I alluded to in my last post, enabled by our partners Nimble Storage and VMware.

Let’s take a look at the Nimble Storage SmartStack for ROBO (Remote Office / Branch Office) Desktop Virtualization.  As you know, we’ve seen incredible traction with our friends at Nimble.  Their CS array has won wide market acceptance and is now offered as an Integrated Infrastructure solution in the form of SmartStack, delivering the modularity, scale and manageability that IT demands.

I’m pleased to highlight that SmartStack now offers a solution optimized for ROBO.  In case you missed it, check out Nimble Storage’s announcement.   Bringing together best-of-breed components including VMware Horizon 6, Nimble’s CS300 array, and UCS Mini, this offering delivers on the key attributes critical to the enterprise edge:

  • Greater Consolidation: the dense storage, compute and expansive I/O capacity of the Nimble + Cisco UCS solution supports hundreds of users in a small footprint, roughly 50% of what traditional solutions might occupy.
  • Simplified Management: anchored on UCS Manager, this platform lets centralized IT remotely spin up desktop and application virtualization capacity at remote/branch offices, without the error-prone manual intervention required by traditional compute platforms (see the sketch after this list).
  • High Performance: the combination of Nimble’s large IOPS footprint and low latency with Cisco UCS processing power delivers adaptive, exceptional performance in a compact form factor. This, along with VMware Horizon, allows IT to maintain a consistently strong user experience through heavy application use and events such as boot/login storms, patch operations and upgrades.
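To make the remote spin-up point concrete, here is a minimal sketch using ucsmsdk, Cisco’s open-source Python SDK for UCS Manager. The branch-office hostname and credentials are hypothetical placeholders, and a real rollout would go on to associate pre-built service profiles rather than just take inventory:

```python
# Minimal sketch: centralized IT inspecting a remote UCS Mini domain
# before rolling out new VDI capacity. Hostname and credentials below
# are hypothetical placeholders.
from ucsmsdk.ucshandle import UcsHandle

handle = UcsHandle("branch-ucs-mini.example.com", "admin", "password")
handle.login()

# List every blade in the remote chassis with its CPU count, memory and
# state, to see what is free to host additional virtualization hosts.
for blade in handle.query_classid("ComputeBlade"):
    print(blade.dn, blade.num_of_cpus, blade.total_memory, blade.oper_state)

handle.logout()
```

Because the same UCS Manager service-profile model drives the branch hardware, the subsequent provisioning steps are templated rather than hand-configured, which is where the error-prone manual work drops out.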

For more information on the Nimble SmartStack for ROBO Desktop Virtualization, please check out Sheldon D’Paiva’s blog on SmartStack ROBO.


Cisco at OpenStack Summit, Paris

Cisco is again a Premier Sponsor of the OpenStack Summit, November 3-7 at Le Palais des Congrès in Paris.  Here’s a summary of Cisco-sponsored activities for your schedule.


Premier Breakout Session:  “A World of Many (OpenStack) Clouds”
Wed. 05 Nov; 13:50 – 14:30
Cisco VP and Cloud CTO, Lew Tucker, will talk about how Cisco is working with leading service providers and enterprise customers to enable a world of interconnected clouds.  Find out how Cisco is delivering greater automation, programmability, and openness for IT infrastructure, to support the next generation of virtualization and cloud.

Cisco Expo Booth, Location #C3
Stop by and pick up a special OpenStack@Cisco gift while supplies last.  Cisco specialists in services, sales and product development will be available to chat and answer any questions.

Mon. 03 Nov:  8:15 – 9:30 and 11:15 – 19:30
Tues. 04 Nov:  10:45 – 18:00
Wed. 05 Nov:  9:00 – 16:30

See demonstrations of:
-OpenStack Networking Using Cisco CSR and Nexus
-Cisco UCS Integrated Infrastructure with Red Hat OpenStack Platform
-Group-Based Policy for Cloud Deployment
-Cisco UCS Bare-Metal-as-a-Service Cloud

Metacloud Acquisition
Find out more about Metacloud, which officially became a part of Cisco on 17 SEP.  Metacloud offers OpenStack clouds as a service, giving customers a choice of hosted or hybrid architectures that operate like a public cloud from inside an organization’s own data center.

Breakout: Group Based Policy Extension for Networking
Mon. 03 Nov; 16:20 – 17:00
Sumit Naiksatam, Principal Engineer, Cisco
https://openstacksummitnovember2014paris.sched.org/speaker/sumitnaiksatam#.VCXaM8Yac0s

Breakout: Deploying and Auto-Scaling Applications on OpenStack with Heat
Tues. 04 Nov; 11:15 – 11:55
Daneyon Hansen, Software Engineer, Cisco
https://openstacksummitnovember2014paris.sched.org/speaker/daneyonhansen#.VCXY7cYac0s

Panel Discussion: OpenStack Design Guide
Tues. 04 Nov; 14:00 – 14:40
Featuring: Maish Saidel-Keesing, Platform Architect, Cisco Video Technologies
https://openstacksummitnovember2014paris.sched.org/event/2345c8d9cfe52ebb104e860338dc2d7a#.VCXiAcYac0s

Panel Discussion: Tips and Tools for Building a Successful OpenStack Group
Tues. 04 Nov; 14:50-15:30
Featuring Shannon McFarland, Principal Engineer and Mark T. Voelker, Technical Lead; Cisco
https://openstacksummitnovember2014paris.sched.org/event/d1f8591a8436a656196478278fa83593#.VCXhGsYac0s

Breakout: Using Ceilometer Data to Detect Fraud in the OpenStack Cluster
Wed. 05 Nov; 9:50 – 10:30
Debojyoti Dutta, with Marc Solanas Tarre, Principal Engineers, Cisco
https://openstacksummitnovember2014paris.sched.org/speaker/dedutta#.VCXYSsYac0s

Breakout: Under the Hood with Nova, Libvirt and KVM (Part Two)
Wed. 05 Nov; 9:50 – 10:30
Rafi Khardalian, CTO, Metacloud/Cisco
https://openstacksummitnovember2014paris.sched.org/speaker/rkhardalian#.VCXZcMYac0s

Breakout: Scaling OpenStack Services: The Pre-TripleO Service Cloud
Wed. 05 Nov; 16:30 – 17:10
Kevin Bringard, with Richard Maynard
Technical Leads, Cisco
https://openstacksummitnovember2014paris.sched.org/speaker/kevinbringard1#.VCXXv8Yac0s

Evening Reception with Red Hat
Wed. 05 Nov; 20:00 – 2:00
Each attendee who completes the Red Hat and Cisco Booth Rally Challenge (instructions onsite) will receive a ticket for the Evening Reception held at Faust, an entertainment venue at the foot of the Invalides Esplanade, underneath the Alexandre III Bridge.  Shuttle transportation will be available.  Food and drinks will be served.  This is an awesome location and might very well be the highlight of the week.


Enable Automated Big Data Workloads with Cisco Tidal Enterprise Scheduler

In our previous big data blogs, a number of my Cisco associates have talked about the right infrastructure, the right sizing, the right integrated infrastructure management and the right provisioning and orchestration for your clusters. But to gain the benefits of pervasive big data use, you’ll need to accelerate your deployments, pivoting that “back of the data center” science experiment seamlessly into standard data center operational processes so you can deliver the value of these new analytics workloads faster.

If you are using a “free” (hint: nothing’s free) or open source workload scheduler, or even a solution that can manage day-to-day batch jobs, you may run into problems right off the bat. Limitations show up in dependency management, calendaring, error recovery, role-based access control and SLA management.
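To see why dependency management matters, here is a toy Python sketch (explicitly not Tidal’s interface; the job names and commands are hypothetical) of a dependency-aware runner. A plain crontab can only fire jobs at fixed times; it cannot express “run the ETL only after both ingests succeed”:

```python
# Toy dependency-aware job runner -- the capability basic cron-style
# schedulers lack. Job names and commands are hypothetical placeholders.
import subprocess
from graphlib import TopologicalSorter  # Python 3.9+

# Each job maps to the set of jobs that must succeed before it runs.
jobs = {
    "ingest_weblogs": set(),
    "ingest_crm": set(),
    "hadoop_etl": {"ingest_weblogs", "ingest_crm"},
    "nightly_report": {"hadoop_etl"},
}

commands = {
    "ingest_weblogs": ["echo", "ingesting web logs"],
    "ingest_crm": ["echo", "ingesting CRM extracts"],
    "hadoop_etl": ["echo", "running Hadoop ETL"],
    "nightly_report": ["echo", "building the nightly report"],
}

# static_order() yields jobs so every dependency runs first; on failure
# we stop rather than let downstream jobs run against missing data.
for job in TopologicalSorter(jobs).static_order():
    if subprocess.run(commands[job]).returncode != 0:
        raise SystemExit(f"{job} failed; halting the pipeline")
```

Layer on calendaring, retries, role-based access control and SLA alerting, and the scope of a true enterprise scheduler becomes clear.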

And really, this is just the start of your needs for full-scale, enterprise-grade workload automation for Big Data environments! As the number of your mission-critical big data workloads increases, predictable execution and performance will become essential.

Lucky for you, Cisco has exactly what you need!


#EngineersUnplugged S6|Ep11: #UCSGrandSlam from Admin View

In this week’s episode of Engineers Unplugged, Roving Reporter Lauren Malhoit (@malhoit) talks to Adam Eckerle (@eck79) about #UCSGrandSlam and UCS as a platform. Great view of the tech from the admin perspective: “UCS is built from the ground up with virtualization in mind.” Great practical episode for anyone exploring UCS!

Much like UCS, the Unicorn Challenge is a platform for creativity.

**Want to be Internet Famous? Join us for our next shoot: VMworld Barcelona. Tweet me @CommsNinja!**

This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


Turbocharging new Hadoop workloads with Application Centric Infrastructure

At the June Hadoop Summit in San Jose, Hadoop was re-affirmed as the data center “killer app,” riding an avalanche of enterprise data that is growing 50x annually through 2020.  According to IDC, the Big Data market itself is growing six times faster than the rest of IT. Every major tech company, old and new, is now driving Hadoop innovation, including Google, Yahoo, Facebook, Microsoft, IBM, Intel and EMC – building value-added solutions on open source contributions by Hortonworks, Cloudera and MapR.  Cisco’s surprisingly broad portfolio will be showcased at Strataconf in New York on Oct. 15 and at our October 21st executive webcast.  In this third blog of the series, we preview the power of Application Centric Infrastructure for the emerging Hadoop ecosystem.

Why Big Data?

Organizations of all sizes are gaining insight from, and getting creative with, use cases that leverage their own business data.

[Table: Big Data use cases]

The use cases grow quickly as businesses realize their “ability to integrate all of the different sources of data and shape it in a way that allows business leaders to make informed decisions.” Hadoop enables customers to gain insight from both structured and unstructured data.  Data types and sources can include: 1) business applications such as OLTP, ERP and CRM systems, 2) documents and email, 3) web logs, 4) social networks, 5) machine/sensor-generated data, and 6) geolocation data.

IT operational challenges

Even modest-sized jobs require clusters of 100 server nodes or more for seasonal business needs.  While Hadoop is designed to scale out on commodity hardware, most IT organizations face extreme demand variation in these bare-metal (non-virtualizable) workloads. Furthermore, the requests come from multiple Lines of Business (LOB), with increasing urgency and frequency. Ultimately, 80% of the cost of managing Big Data workloads will be OpEx. How do IT organizations finish jobs quickly and re-deploy resources?  How do they improve utilization? How do they maintain security and isolation of data in a shared production infrastructure?

And with the release of Hadoop 2.0 almost a year ago, cluster sizes are growing due to:

  • Expanding data sources and use-cases
  • A mixture of different workload types on the same infrastructure
  • A variety of analytics processes

In Hadoop 1.x, compute performance was paramount.  But in Hadoop 2.x, network capabilities will be the focus, due to larger clusters, more data types, more processes and mixed workloads.  (see Fig. 1)

[Figure 1: Hadoop cluster growth]

ACI powers Hadoop 2.x

Cisco’s Application Centric Infrastructure is a new operational model enabling Fast IT.  ACI provides a common policy-based programming approach across the entire ACI-ready infrastructure, beginning with the network and extending to all its connected end points.  This drastically reduces cost and complexity for Hadoop 2.0.  ACI uses Application Policy to:

  • Dynamically optimize cluster performance in the network
  • Automatically redeploy resources to new workloads for improved utilization
  • Ensure isolation of users and data as resource deployments change

Let’s review each of these in order:

Cluster Network Performance: It’s crucial to improve traffic latency and throughput across the network, not just within each server.

  • Hadoop copies and distributes data across servers to maximize reliability on commodity hardware.
  • The large collection of processes in Hadoop 2.0 is usually spread across different racks.
  • Mixed workloads in Hadoop 2.0 support interactive and real-time jobs, resulting in the use of more on-board memory and different payload sizes.

As a result, server I/O bandwidth is increasing, which will place heavy loads on 10 Gigabit networks.  ACI policy works with deep telemetry embedded in each Nexus 9000 leaf switch to monitor and adapt to network conditions.
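A back-of-the-envelope sketch (with a hypothetical ingest volume, assuming HDFS’s default 3x replication and rack-aware placement) shows how quickly cross-rack traffic accumulates:

```python
# Rough arithmetic: with HDFS 3x replication, each block is written locally,
# then pipelined to a node on a remote rack, then to a second node on that
# same rack -- so every ingested byte is re-sent twice, once across racks.
ingest_tb = 1.0                    # hypothetical nightly ingest volume
inter_node_tb = ingest_tb * 2      # replication pipeline: A -> B -> C
cross_rack_tb = ingest_tb * 1      # only the A -> B hop crosses racks
print(f"{ingest_tb:.1f} TB ingested -> {inter_node_tb:.1f} TB copied "
      f"between nodes, {cross_rack_tb:.1f} TB of it across racks")
```

And that is ingest alone, before the shuffle traffic generated by the jobs themselves.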

[Figure: Hadoop network conditions]

Using policy, ACI can dynamically 1) load-balance Big Data flows across racks on alternate paths and 2) prioritize small data flows ahead of large flows (which use the network much less frequently but consume bandwidth and buffer space). Both of these can dramatically reduce network congestion.  In lab tests, we are seeing flow completion nearly an order of magnitude faster (for some mixed workloads) than without these policies enabled.  ACI can also estimate and prioritize job completion.  This will be important as Big Data workloads become pervasive across the enterprise. For a complete discussion of ACI’s performance impact, please see a detailed presentation by Samuel Kommu, chief engineer at Cisco, on optimizing Big Data workloads.

Resource Utilization: In general, the bigger the cluster, the faster the completion time. But since Big Data jobs are initially infrequent, CIOs must balance responsiveness against utilization.  It is simply impractical for many mid-sized companies to dedicate large clusters to the occasional surge in Big Data demand.   ACI enables organizations to quickly redeploy cluster resources from Hadoop to other sporadic workloads (such as CRM, e-commerce, ERP and inventory) and back.  For example, the same resources could run Hadoop jobs nightly or weekly when other demands are lighter. Resources can be bare-metal or virtual depending on workload needs. (see Figure 2)

[Figure 2: Hadoop resource redeployment]

How does this work? ACI uses application policy profiles to programmatically re-provision the infrastructure.  IT can use a different profile to describe each application’s needs, including those of the Hadoop ecosystem. The profile contains the application’s network policies, which the Application Policy Infrastructure Controller translates into a complete network topology.  The same profile contains compute and storage policies used by other tools, such as Cisco UCS Director, to provision compute and storage.
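As a rough illustration of the profile idea, the sketch below declares a tenant and application profile through the APIC REST API. The fvTenant/fvAp/fvAEPg object model and the aaaLogin/mo endpoints are standard ACI; the controller address, credentials and all names are hypothetical:

```python
# Sketch: pushing a declarative application profile to the APIC controller.
# Controller address, credentials, and tenant/EPG names are hypothetical.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False  # lab sketch only; verify certificates in production

# Authenticate; the APIC session cookie is kept by the Session object.
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# One tenant, one application profile, two endpoint groups (EPGs):
# the Hadoop datanodes and the clients that submit jobs to them.
payload = {
    "fvTenant": {
        "attributes": {"name": "bigdata"},
        "children": [{
            "fvAp": {
                "attributes": {"name": "hadoop-cluster"},
                "children": [
                    {"fvAEPg": {"attributes": {"name": "datanodes"}}},
                    {"fvAEPg": {"attributes": {"name": "clients"}}},
                ],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni.json", json=payload)
```

Swapping the cluster over to another workload is then a matter of posting a different profile, not re-cabling or hand-configuring switches.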


Data Isolation and Security: In a mature Big Data environment, Hadoop processing can occur between many data sources and clients.  Data is most vulnerable during job transitions or re-deployment to other applications.  Multiple corporate databases and users need to be correctly isolated to ensure compliance.   A patchwork of security software, such as perimeter security, is error-prone, static and consumes administrative resources.

[Figure: ACI security imperatives]


In contrast, ACI can automatically isolate the entire data path through a programmable fabric according to pre-defined policies.  Access policies for data vaults can be preserved throughout the network while the data is in motion.  This can be accomplished even in a shared production infrastructure, across physical and virtual end points.
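Because ACI is a whitelist model, EPGs cannot exchange traffic until a contract permits it, so isolation is the default rather than something bolted on. Continuing the hypothetical names from the earlier APIC sketch, a contract between the Hadoop EPGs might be declared like this:

```python
# Sketch: whitelisting the Hadoop data path with an ACI contract.
# Addresses and names are the same hypothetical ones used earlier.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False  # lab sketch only
session.post(f"{APIC}/api/aaaLogin.json", json={
    "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}})

# Define the contract (and one subject for HDFS traffic) under the tenant.
session.post(f"{APIC}/api/mo/uni/tn-bigdata.json", json={
    "vzBrCP": {
        "attributes": {"name": "hadoop-data-access"},
        "children": [{"vzSubj": {"attributes": {"name": "hdfs-traffic"}}}],
    }})

# Datanodes provide the contract; clients consume it. Any EPG without
# such a relationship stays cut off from the Hadoop data path.
session.post(f"{APIC}/api/mo/uni/tn-bigdata/ap-hadoop-cluster/epg-datanodes.json",
             json={"fvRsProv": {"attributes": {"tnVzBrCPName": "hadoop-data-access"}}})
session.post(f"{APIC}/api/mo/uni/tn-bigdata/ap-hadoop-cluster/epg-clients.json",
             json={"fvRsCons": {"attributes": {"tnVzBrCPName": "hadoop-data-access"}}})
```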


Conclusion

As organizations of all sizes discover ways to use Big Data for business insights, their infrastructure must become far more performant, adaptable and secure.  Investments in fabric, compute and storage must be leveraged across multiple Big Data processes and other business applications with agility and operational simplicity.

Leading the growth of Big Data, the Hadoop 2.x ecosystem will place particular stresses on data center fabrics. New mixed workloads are already consuming 10 Gigabit capacity in larger clusters and will soon demand 40 Gigabit fabrics.  Network traffic needs continuous optimization to improve completion times.  End-to-end data paths must use consistent security policies between multiple data sources and clients.  And sharp surges in bare-metal workloads will demand much more agile ways to swap workloads and improve utilization.

Cisco’s Application Centric Infrastructure leverages a new operational and consumption model for Big Data resources.  It dynamically translates existing policies for applications, data and clients into fully provisioned networks, compute and storage.  Working with Nexus 9000 telemetry, ACI can continuously optimize traffic paths and enforce policies consistently as workloads change.  The solution provides a seamless transition to the new demands of Big Data.

To hear about Cisco’s broader solution portfolio, be sure to register for the October 21st executive webcast ‘Unlock Your Competitive Edge with Cisco Big Data Solutions.’  And stay tuned for the next blog in the series, from Andrew Blaisdell, which showcases the ability to predictably deliver intelligence-driven insights and actions.
