Cisco Blogs



Cisco and OpenStack: Juno Release – Part 1

The next stable OpenStack release, codenamed “Juno,” is slated for release on October 16, 2014. From improving live upgrades in Nova to enabling easier migration from Nova Network to Neutron, the OpenStack Juno release will address operational challenges in addition to providing many new features and enhancements across all projects.

As indicated in the latest Stackalytics contributor statistics, Cisco has contributed to seven different OpenStack projects, including Neutron, Cinder, Nova, Horizon, and Ceilometer, as part of the Juno development cycle, up from five projects in the Icehouse release. Cisco also ranks first in the number of completed blueprints in Neutron.

In this blog post, I’ll focus on Neutron contributions, which are the major share of contributions in Juno from Cisco.


Cisco OpenStack Team-Led Neutron Community Contributions

An important blueprint that Cisco collaborated on and implemented with the community added support for the Router Advertisement Daemon (radvd) for IPv6. With this support, multiple IPv6 configuration modes, including SLAAC and DHCPv6 (both Stateful and Stateless modes), are now possible in Neutron. The implementation runs a radvd process in the router namespace to handle IPv6 auto address configuration.
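For illustration, the Juno-era neutron CLI exposes these modes through two subnet attributes, `ipv6_ra_mode` and `ipv6_address_mode`. A minimal sketch follows; the network name `demo-net` and the prefixes are placeholders, and the commands assume a cloud with the Juno Neutron service running:

```shell
# SLAAC: radvd in the router namespace advertises the prefix and hosts
# autoconfigure their own addresses from it.
neutron subnet-create --ip-version 6 \
  --ipv6-ra-mode slaac --ipv6-address-mode slaac \
  demo-net 2001:db8:1::/64

# Stateless DHCPv6: addresses via SLAAC, extra options (DNS, etc.) via DHCPv6.
neutron subnet-create --ip-version 6 \
  --ipv6-ra-mode dhcpv6-stateless --ipv6-address-mode dhcpv6-stateless \
  demo-net 2001:db8:2::/64

# Stateful DHCPv6: both addresses and options come from a DHCPv6 server.
neutron subnet-create --ip-version 6 \
  --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful \
  demo-net 2001:db8:3::/64
```

Note that `ipv6_ra_mode` governs who sends Router Advertisements (Neutron's radvd or an external router), while `ipv6_address_mode` governs how instances obtain addresses; the two are usually set to the same value.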

To support the distributed routing model introduced by the Distributed Virtual Router (DVR), a Firewall as a Service (FWaaS) blueprint implementation handles firewalling of North-South traffic with DVR. The fix ensures that firewall rules are installed in the appropriate namespaces across the Network and Compute nodes to support a perimeter (North-South) firewall. Firewalling of East-West traffic with DVR will be handled in the next development cycle as a Distributed Firewall use case.
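From the API side, a perimeter policy of the kind described above might be built with the FWaaS v1 CLI as sketched below; the rule, policy, and firewall names are placeholders, and the commands assume a Juno cloud with the FWaaS extension enabled:

```shell
# Define a rule admitting inbound HTTPS; traffic not matched by an
# allow rule is dropped by the policy.
neutron firewall-rule-create --name allow-https \
  --protocol tcp --destination-port 443 --action allow

# Group rules into a policy, then instantiate a firewall from it.
neutron firewall-policy-create --firewall-rules allow-https perimeter-policy
neutron firewall-create perimeter-policy --name perimeter-fw

# With DVR, the agent installs the resulting rules in the namespaces that
# carry North-South traffic (e.g. the SNAT namespace on the Network node)
# rather than only in a single router namespace.
```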

Additional capabilities in the ML2 and services framework were contributed to enable better plugin and vendor driver integration, including several blueprint implementations.

Cisco device specific contributions in Neutron

Cisco added an Application Policy Infrastructure Controller (APIC) ML2 mechanism driver (MD) and a Layer 3 service plugin in the Juno development cycle. The ML2 APIC MD translates Neutron API calls into APIC data-model-specific requests and achieves tenant Layer 2 isolation through End Point Groups (EPGs).

The APIC MD supports dynamic topology discovery using LLDP, reducing the configuration burden in Neutron and ensuring data stays in sync between Neutron and APIC. Additionally, the Layer 3 APIC service plugin enables configuration of internal and external subnet gateways on routers, using Contracts to enable communication between EPGs as well as to provide external connectivity. The APIC ML2 MD and service plugin have also been made available with the OpenStack Icehouse release. An installation and operation guide for the driver and plugin is available here.
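Enabling the driver amounts to a few lines of ML2 configuration. The fragment below is a sketch only; the exact driver alias, section, and option names (`cisco_apic`, `[ml2_cisco_apic]`, `apic_hosts`, and so on) are assumptions and should be taken from the installation guide mentioned above:

```ini
; /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative fragment)
[ml2]
mechanism_drivers = openvswitch,cisco_apic

; Controller reachability and credentials for the APIC MD
; (addresses and credentials below are placeholders).
[ml2_cisco_apic]
apic_hosts = 10.1.1.10:80
apic_username = admin
apic_password = secret
```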

An enterprise-class virtual networking solution using the Cisco Nexus 1000v is enabled in OpenStack with its own core plugin. In addition to providing host-based overlays using VXLAN (in both unicast and multicast modes), it provides Network and Policy Profile extensions for virtual machine policy provisioning.

The Nexus 1000v plugin added support for accepting REST API responses in JSON format from Virtual Supervisor Module (VSM) as well as control for enabling Policy Profile visibility across tenants. More information on features and how it integrates with OpenStack is provided here.

As an alternative to the default Layer 3 service implementations in Neutron, a Cisco router service plugin is now available that delivers Layer 3 services using the Cisco Cloud Services Router (CSR) 1000v.

The Cisco router service plugin introduces the notion of a “hosting device” to bind a Neutron router to the device that implements the router configuration. This allows virtual as well as physical devices to be added seamlessly into the framework for configuring services. Additionally, a Layer 3+ “configuration agent” is available upstream that interacts with the service plugin and is responsible for configuring the device for routing and advanced services. The configuration agent is multi-service capable, supports configuration of hardware- or software-based Layer 3 service devices via device drivers, and provides device health-monitoring statistics.

The VPN as a Service (VPNaaS) driver using the CSR 1000v has been available since the Icehouse release as a proof-of-concept implementation. The Juno release enhances the CSR 1000v VPN driver so that it can be used in a more dynamic, semi-automated manner to establish IPsec site-to-site connections, and paves the way for a fully integrated, dynamic implementation with the Layer 3 router plugin planned for the Kilo development cycle.

Summary

The OpenStack team at Cisco has led, implemented, and successfully merged upstream numerous blueprints for the Neutron Juno release. Some have been critical for the community, and others enable customers to better integrate Cisco networking solutions with OpenStack Networking.

Stay tuned for more information on other project contributions in Juno and on Cisco-led sessions at the Kilo Summit in Paris!

You can also download OpenStack Cisco Validated Designs, white papers, and more at www.cisco.com/go/openstack.


#EngineersUnplugged S6|Ep11: #UCSGrandSlam from Admin View

October 1, 2014 at 10:35 am PST

In this week’s episode of Engineers Unplugged, Roving Reporter Lauren Malhoit (@malhoit) talks to Adam Eckerle (@eck79) about #UCSGrandSlam and UCS as a platform. Great view of the tech from the admin perspective: “UCS is built from the ground up with virtualization in mind.” Great practical episode for anyone exploring UCS!

Much like UCS, the Unicorn Challenge is a platform for creativity.


**Want to be Internet Famous? Join us for our next shoot: VMworld Barcelona. Tweet me @CommsNinja!**

This is Engineers Unplugged, where technologists talk to each other the way they know best, with a whiteboard. The rules are simple:

  1. Episodes will publish weekly (or as close to it as we can manage)
  2. Subscribe to the podcast here: engineersunplugged.com
  3. Follow the #engineersunplugged conversation on Twitter
  4. Submit ideas for episodes or volunteer to appear by Tweeting to @CommsNinja
  5. Practice drawing unicorns

Go behind the scenes by liking Engineers Unplugged on Facebook.


Security for an Application-Centric World

October 1, 2014 at 5:00 am PST

Organizations are migrating to the cloud because it dramatically reduces IT costs as we make much more efficient use of resources (either ours or by leveraging some cloud provider’s resources at optimal times). When done right, cloud also increases business agility because applications and new capacity can be spun up quickly on demand (on-premises or off), network and services configurations can be updated automatically to suit the changing needs of the applications, and, with enough bacon, unicorns can fly and the IT staff can get home at a reasonable hour.

Whenever you ask a CIO-type at any of these organizations what’s holding them back from all this cloud goodness, though, more often than not the answer has something to do with security: “Don’t trust the cloud…”, “Don’t trust the other guy in the cloud…”, “Cloud’s not compliant…”.  You have to be something of a control freak to be a CIO/CISO these days, and, well, isn’t “cloud” all about giving up some control, after all (in return for efficiency and agility)?

Even if you overcome your control issues and you find a cloud you can trust (even if it’s your own private cloud – we can take baby steps here…), if we are going to achieve our instant on-demand application deployment, network provisioning and cost-efficient workload placement process, it turns out all the security stuff can throw another obstacle in our way. Cloud security isn’t like old-fashioned data center security where you could just put a huge firewall in front of the data center and call it good. For secure multi-tenancy and a secure cloud overall, virtually every workload (or “every virtual workload”?) needs to be secured from every other (except for the exceptions we want to make). Some folks call this “microsegmentation”, a fancy word for an old concept, but, a fundamental requirement that cloud deployments need to address. (Spoiler alert: ACI does this very well.)


New Nexus 9300 Switches join the Nexus 9000 Series

It’s an exciting time to be in our industry, especially as we witness how technology continues to reshape how we connect and communicate through a myriad of applications and devices, not only within our own companies but also with our customers and partners.

At the epicenter of this technological transformation, we continue to find that the network is what ultimately enables these applications and their users to connect. We also quickly find that if this same network is not ready to deal with the ever-increasing influx of devices, new applications with varying traffic patterns, and 24x7 access from pretty much anywhere, it can quickly turn into an IT department’s nightmare.

It is exactly to deal with these new requirements that the award-winning Nexus 9000 Series (made up of the Nexus 9500 and Nexus 9300 portfolios) was introduced to the market almost 11 months ago. Since then, over 600 customers have purchased this new switching family and are experiencing the positive impact that a high-performing, scalable, programmable, and resilient data center network has on application performance and overall user quality of experience in both traditional and Application Centric Infrastructure (ACI) architectures.

Today we are happy to announce the addition of three new switches to the Nexus 9300 Series, as well as a 6-port 40-Gbps module, delivering more flexibility and form-factor options to meet different architectural needs. The new products are:

  • Cisco Nexus 9372TX: 1-rack-unit switch supporting 1.44 Tbps of bandwidth across 48 fixed 1/10-Gbps BASE-T ports and 6 fixed 40-Gbps QSFP+ ports
  • Cisco Nexus 9372PX: 1-rack-unit switch supporting 1.44 Tbps of bandwidth across 48 fixed 1/10-Gbps SFP+ ports and 6 fixed 40-Gbps QSFP+ ports
  • Cisco Nexus 9332PQ: 1-rack-unit switch supporting 2.56 Tbps of bandwidth across 32 x 40Gbps QSFP+ ports
  • 6-port 40 Gigabit Ethernet module for the Nexus 93128TX, 9396TX, and 9396PX for additional connectivity options

These new switches deliver high performance and additional buffering, as well as support for VXLAN routing, in a compact form factor. In addition, support for Cisco Nexus 2000 Fabric Extenders has been added to the Nexus 9300 portfolio. So if you already have Fabric Extenders in your data center, or are looking for a scalable and operationally simplified architecture, you can now have the best of both worlds.

But it doesn’t end there. In case you missed it, Cisco recently announced the availability of the Application Policy Infrastructure Controller (APIC), making a more simplified, robust, application-centric infrastructure a reality with the Nexus 9000 Series as the network foundation. You can read more about it in Craig Huitema’s blog, which outlines not only new products in the Nexus 9000 Series, including 100-Gbps support on the Nexus 9500, but also how we have simplified the introduction of the Nexus 9000 and ACI into data centers through ACI starter kits and bundles. In addition, for those of you who want to deploy the Nexus 7000 in combination with the Nexus 9300, new bundles that bring the two together are also available.

As you can see, we continue to deliver the products and architectural options that allow data centers of all sizes to address increasing and changing application requirements. Between the Nexus 9300 and Nexus 9500 portfolios and their ability to be deployed in 3-tier, spine/leaf, or ACI architectures, customers benefit from more connectivity options and a diverse set of form factors to meet varying data center needs. I invite you to learn more about the Nexus 9000 Series at www.cisco.com/go/nexus9000.


Turbocharging new Hadoop workloads with Application Centric Infrastructure

At the June Hadoop Summit in San Jose, Hadoop was re-affirmed as the data center “killer app,” riding an avalanche of enterprise data that is growing 50x annually through 2020. According to IDC, the Big Data market itself is growing six times faster than the rest of IT. Every major tech company, old and new, is now driving Hadoop innovation, including Google, Yahoo, Facebook, Microsoft, IBM, Intel, and EMC, building value-added solutions on open source contributions by Hortonworks, Cloudera, and MapR. Cisco’s surprisingly broad portfolio will be showcased at Strataconf in New York on Oct. 15 and at our October 21st executive webcast. In this third blog of the series, we preview the power of Application Centric Infrastructure for the emerging Hadoop ecosystem.

Why Big Data?

Organizations of all sizes are gaining insight from creative use cases that leverage their own business data.

[Table: Big Data use cases]

The use cases grow quickly as businesses realize their “ability to integrate all of the different sources of data and shape it in a way that allows business leaders to make informed decisions.” Hadoop enables customers to gain insight from both structured and unstructured data. Data types and sources include: 1) business applications (OLTP, ERP, CRM systems), 2) documents and emails, 3) web logs, 4) social networks, 5) machine/sensor-generated data, and 6) geolocation data.

IT operational challenges

Even modest-sized jobs require clusters of 100 server nodes or more for seasonal business needs. While Hadoop is designed to scale out on commodity hardware, most IT organizations face extreme demand variations in bare-metal (non-virtualizable) workloads. Furthermore, these workloads are requested by multiple Lines of Business (LOBs) with increasing urgency and frequency. Ultimately, 80% of the cost of managing Big Data workloads will be OpEx. How do IT organizations quickly finish jobs and re-deploy resources? How do they improve utilization? How do they maintain security and isolation of data in a shared production infrastructure?

And with the release of Hadoop 2.0 almost a year ago, cluster sizes are growing due to:

  • Expanding data sources and use-cases
  • A mixture of different workload types on the same infrastructure
  • A variety of analytics processes

In Hadoop 1.x, compute performance was paramount. In Hadoop 2.x, network capabilities become the focus, due to larger clusters, more data types, more processes, and mixed workloads (see Fig. 1).

[Figure 1: Hadoop cluster growth]

ACI powers Hadoop 2.x

Cisco’s Application Centric Infrastructure is a new operational model enabling Fast IT. ACI provides a common policy-based programming approach across the entire ACI-ready infrastructure, beginning with the network and extending to all of its connected endpoints. This drastically reduces cost and complexity for Hadoop 2.0. ACI uses application policy to:

  • Dynamically optimize cluster performance in the network
  • Automatically redeploy resources for new workloads to improve utilization
  • Ensure isolation of users and data as resource deployments change

Let’s review each of these in order:

Cluster Network Performance: It’s crucial to improve traffic latency and throughput across the network, not just within each server.

  • Hadoop copies and distributes data across servers to maximize reliability on commodity hardware.
  • The large collection of processes in Hadoop 2.0 is usually spread across different racks.
  • Mixed workloads in Hadoop 2.0 support interactive and real-time jobs, resulting in the use of more on-board memory and different payload sizes.

As a result, server I/O bandwidth is increasing, which will place heavy loads on 10-Gigabit networks. ACI policy works with deep telemetry embedded in each Nexus 9000 leaf switch to monitor and adapt to network conditions.


Using policy, ACI can dynamically 1) load-balance Big Data flows across racks on alternate paths and 2) prioritize small data flows ahead of large flows (which use the network much less frequently but consume bandwidth and buffers). Both can dramatically reduce network congestion. In lab tests, we are seeing flow completion nearly an order of magnitude faster (for some mixed workloads) than without these policies enabled. ACI can also estimate and prioritize job completion, which will be important as Big Data workloads become pervasive across the enterprise. For a complete discussion of ACI’s performance impact, please see the detailed presentation by Samuel Kommu, chief engineer at Cisco, on optimizing Big Data workloads.


Resource Utilization: In general, the bigger the cluster, the faster the completion time. But since Big Data jobs are initially infrequent, CIOs must balance responsiveness against utilization. It is simply impractical for many mid-sized companies to dedicate large clusters to the occasional surge in Big Data demand. ACI enables organizations to quickly redeploy cluster resources from Hadoop to other sporadic workloads (such as CRM, e-commerce, ERP, and inventory) and back. For example, the same resources could run Hadoop jobs nightly or weekly when other demands are lighter. Resources can be bare-metal or virtual depending on workload needs. (See Figure 2.)

[Figure 2: Hadoop resource redeployment]

How does this work? ACI uses application policy profiles to programmatically re-provision the infrastructure. IT can use a different profile to describe each application’s needs, including those of the Hadoop ecosystem. The profile contains the application’s network policies, which the Application Policy Infrastructure Controller renders into a complete network topology. The same profile contains compute and storage policies used by other tools, such as Cisco UCS Director, to provision compute and storage.


Data Isolation and Security: In a mature Big Data environment, Hadoop processing can occur between many data sources and clients. Data is most vulnerable during job transitions or re-deployment to other applications. Multiple corporate databases and users need to be correctly isolated to ensure compliance. A patchwork of security software, such as perimeter security, is error-prone, static, and consumes administrative resources.



In contrast, ACI can automatically isolate the entire data path through a programmable fabric according to pre-defined policies.  Access policies for data vaults can be preserved throughout the network when the data is in motion.  This can be accomplished even in a shared production infrastructure across physical and virtual end points.


Conclusion

As organizations of all sizes discover ways to use Big Data for business insights, their infrastructure must become far more performant, adaptable, and secure. Investments in fabric, compute, and storage must be leveraged across multiple Big Data processes and other business applications with agility and operational simplicity.

Leading the growth of Big Data, the Hadoop 2.x ecosystem will place particular stresses on data center fabrics. New mixed workloads are already consuming 10-Gigabit capacity in larger clusters and will soon demand 40-Gigabit fabrics. Network traffic needs continuous optimization to improve completion times. End-to-end data paths must use consistent security policies between multiple data sources and clients. And sharp surges in bare-metal workloads will demand much more agile ways to swap workloads and improve utilization.

Cisco’s Application Centric Infrastructure brings a new operational and consumption model to Big Data resources. It dynamically translates existing policies for applications, data, and clients into fully provisioned networks, compute, and storage. Working with Nexus 9000 telemetry, ACI can continuously optimize traffic paths and enforce policies consistently as workloads change. The solution provides a seamless transition to the new demands of Big Data.

To hear about Cisco’s broader solution portfolio, be sure to register for the October 21st executive webcast, ‘Unlock Your Competitive Edge with Cisco Big Data Solutions.’ And stay tuned for the next blog in the series, from Andrew Blaisdell, which showcases the ability to predictably deliver intelligence-driven insights and actions.
