
Thoughts on #OpenStack and Software-Defined Storage

May 14, 2014 at 6:18 am PST

This week has been the semi-annual OpenStack Summit in Atlanta, GA. For once I’ve been able to attend in person, which has given me broad insight into a world of open source development I rarely get to see outside of occasional conversations with DevOps people. (If you’re not sure what OpenStack is, or how it differs from OpenFlow, OpenDaylight, etc., you may want to read an earlier blog I wrote that explains it in plain English.)

On the first day of the conference there was an “Ask the Experts” session focused on storage. Since I’ve been trying to work my way into this world of programmability via my experience with storage and storage networking, I figured it would be an excellent place to start. Also, it was the first session of the conference.

During the course of the Q&A, John Griffith, the Project Technical Lead (PTL) of the Cinder project (Cinder is the core OpenStack project that deals with block storage), happened to mention that he believes Cinder represents software-defined storage as a practical application of the concept.
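
For readers who haven’t touched Cinder, a minimal sketch of what it does may help frame the debate. This assumes the python-cinderclient library; the credentials and Keystone endpoint are placeholders:

```python
# Minimal sketch of Cinder's role: expose block storage as an API.
# Assumes python-cinderclient; credentials/endpoint are placeholders.
from cinderclient import client

cinder = client.Client(
    '2',                                      # Cinder API version
    'demo_user', 'demo_password',             # placeholder credentials
    'demo_project',
    'http://keystone.example.com:5000/v2.0',  # placeholder Keystone endpoint
)

# Ask for a 10 GB volume. The configured backend driver (LVM, Ceph,
# a vendor array, ...) decides what actually provides the storage.
volume = cinder.volumes.create(size=10, name='demo-volume')
print(volume.id, volume.status)
```

The caller never names the backend, which is exactly why the “software-defined storage” label gets attached to Cinder.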

I’m afraid I have to respectfully disagree. At least, I would hesitate to give it that kind of association yet.


Network Design for Automation

There has been a lot of recent online discussion about automation of the datacenter network, how we all may (or may not) need to learn programming, the value of a CCIE, and similar topics. This blog tries to look beyond all that. Assume network configuration has been automated. How does that affect network design?

Automation can greatly change the network landscape, or it may change little. It depends on what you’re presently doing for design. Why? Because the programmers probably assumed you’ve built your network in a certain way. As an example, Cisco DFA (Dynamic Fabric Automation) and ACI (Application Centric Infrastructure) are based on a spine-leaf Clos tree topology.

Yes, some OpenFlow vendors have claimed to support arbitrary topologies. Arbitrary topologies are just not a great idea. Supporting them makes the programmers work harder to anticipate all the arbitrary things you might do. I want the programmers to focus on key functionality. Building the network in a well-defined way is a price I’m quite willing to pay. Yes, some backwards or migration compatibility is also desirable.

The programmers probably assumed you bought the right equipment and put it together in some rational way. The automated tool will have to tell you how to cable it up, or it might check your compliance with the recommended design. Plan on this when you look to automation for sites, a datacenter, or a WAN network.
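
As a sketch of what such a compliance check might look like, here is a hypothetical validator for a spine-leaf fabric: every leaf must be cabled to every spine. The input format and device names are invented for illustration, not taken from any actual tool:

```python
# Hypothetical compliance check for a spine-leaf (Clos) fabric:
# every leaf should have an uplink to every spine. The cabling
# data structure is invented for illustration.
from typing import Dict, List, Set

def check_spine_leaf(cabling: Dict[str, Set[str]], spines: Set[str]) -> List[str]:
    """Return human-readable compliance problems (empty list = compliant)."""
    problems = []
    for leaf, uplinks in sorted(cabling.items()):
        missing = spines - uplinks
        if missing:
            problems.append(f"{leaf}: no uplink to {', '.join(sorted(missing))}")
        stray = uplinks - spines
        if stray:
            problems.append(f"{leaf}: cabled to non-spine {', '.join(sorted(stray))}")
    return problems

spines = {'spine1', 'spine2'}
cabling = {
    'leaf1': {'spine1', 'spine2'},
    'leaf2': {'spine1'},             # forgot the uplink to spine2
}
for problem in check_spine_leaf(cabling, spines):
    print(problem)                   # -> leaf2: no uplink to spine2
```

A real tool would pull the cabling from LLDP or the devices themselves, but the principle stands: a well-defined topology is easy to verify mechanically; an arbitrary one is not.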

The good news here is that the Cisco automated tools are likely to align with Cisco Validated Designs. The CVDs provide a great starting point for any network design, and they have recently been displaying some great graphics. They’re a useful resource if you don’t want to re-invent the wheel — especially a square wheel. While I disagree with a few aspects of some of them, over the years most of them have been great guidelines.

The more problematic part is that right now, many of us are (still!) operating in the era of hand-crafted networks. What do the machine era and the assembly line bring with them? We will have to give up one-off designs and some degree of customization. The focus will shift to repeated design elements and components: namely, the type of design the automated tool can work with.

Some network designers are already operating in such a fashion. Their networks may not be automated, but they follow repeatable standards, like an early factory working with interchangeable parts. Such sites have likely created a small number of design templates and then used them repeatedly. Examples: “small remote office”, “medium remote office”, “MPLS-only office”, or “MPLS with DMVPN backup office”.

However you carve things up, there should only be a few standard models, including “datacenter” and perhaps “HQ” or “campus”. If you know the number of users (or size range) in each such site, you can then pre-size WAN links, approximate AP counts, licenses, and so on. You can also pre-plan your addressing, with, say, a large block of /25s for very small offices, /23s for medium ones, etc.
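
Python’s standard ipaddress module makes that kind of addressing plan easy to script. A sketch, where the 10.64.0.0/16 summary block and the office counts are made up for illustration:

```python
# Sketch: carve per-site prefixes out of one summary block.
# The 10.64.0.0/16 block and the office counts are made up.
import ipaddress

block = ipaddress.ip_network('10.64.0.0/16')

mediums = list(block.subnets(new_prefix=23))   # 128 possible /23s
medium_sites = mediums[:20]                    # reserve 20 for medium offices

# Use the next four /23s as a pool of /25s for very small offices.
small_sites = [s for m in mediums[20:24]
               for s in m.subnets(new_prefix=25)]   # 4 x 4 = 16 /25s

print(medium_sites[0])   # 10.64.0.0/23
print(small_sites[0])    # 10.64.40.0/25
```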

On the equipment side, a small office might have one router with both MPLS and DMVPN links, one core switch, and some small number of access switches. A larger office might have one router for MPLS and another for DMVPN, two core switches, and more access switches. Add APs, WAAS, and other finishing touches as appropriate. Degree of criticality is another dimension you can add to the mix: critical sites would have more redundancy, or be more self-contained. Whatever you do, standardize the equipment models as much as possible, updating every year or two (to keep the spares inventory simple).
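
One way to make such standards stick is to record them as data rather than prose, so both tools and people can consume them. A hypothetical catalog; the roles, counts, and subnet sizes are invented, not recommendations:

```python
# Hypothetical site-template catalog. Roles, counts, and subnet
# sizes are invented for illustration, not recommendations.
SITE_TEMPLATES = {
    'small-office': {
        'routers':         [('MPLS+DMVPN', 1)],    # one router, both links
        'core_switches':   1,
        'access_switches': (1, 3),                 # (min, max)
        'user_subnet':     '/25',
    },
    'medium-office': {
        'routers':         [('MPLS', 1), ('DMVPN', 1)],
        'core_switches':   2,
        'access_switches': (3, 8),
        'user_subnet':     '/23',
    },
}

def template_for(site_type: str) -> dict:
    """Look up the standard design instead of hand-crafting one."""
    return SITE_TEMPLATES[site_type]

print(template_for('medium-office')['core_switches'])   # -> 2
```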

It takes some time to think through and document such internal standards. But probably not as much as you think! And then you win when you go to deploy, because everything becomes repeatable.



OpenStack Expectations for the Summit

As I was flying to Atlanta for the OpenStack Summit, I was thinking about how my expectations for this summit differ from those for last year’s summit in Portland.

In Portland, Havana had just been released and was starting to become interesting to service providers, as the project was maturing and gaining interest among some enterprises. The Havana release was not ready for enterprises, but Icehouse, the next release, was bringing features of great interest. I wanted to get involved in Icehouse, so I attended with my R&D team and networked. There was not much excitement at the event, and the attendance was not that great. Walking into the exhibit hall was depressing: there were only a small number of exhibits, mostly tables with brochures.

One year later, the excitement around OpenStack and Icehouse is high. OpenStack has finally hit the feature capability and scale requirements needed to be accepted by the enterprise. Over the last year, numerous enterprises performed proofs of concept (PoCs) on Havana, and 2014 is quickly becoming OpenStack’s coming-out year! The Icehouse features of greatest interest are:

  • Ceilometer support in Horizon, so administrators can view daily usage reports per project across services.
  • Keystone now enables federated authentication via Shibboleth for multiple Identity Providers, and maps federated attributes into OpenStack group-based role assignments.
  • The Keystone assignment backend is now completely separate from the identity backend, allowing much greater flexibility in where each kind of data comes from. An enterprise can back its deployment’s identity data with LDAP and its authorization data with RSA, for instance (see the configuration sketch after this list).
  • The token KVS driver is now capable of writing to persistent key-value stores such as Redis, Cassandra, or MongoDB. In combination with the above, this means tokens can live in Redis or Cassandra while users, passwords, domains, etc. stay in LDAP.
  • Notifications are now emitted in response to create, update, and delete events on roles, groups, and trusts.
  • The LDAP driver for the assignment backend now supports group-based role assignment operations.
  • The Ceilometer API now gives direct access to samples, decoupled from a specific meter, and adds an events API in the style of StackTach.
  • New metric sources, including the Neutron north-bound API on SDN controllers, the VMware vCenter Server API, SNMP daemons on bare-metal hosts, and OpenDaylight REST APIs. [Check also Mike Cohen’s blog, Delivering Policy in the Age of Open Source.]
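
To make the identity/assignment split and the persistent token store concrete, here is a hedged keystone.conf sketch. Exact driver paths and option names varied across Icehouse deployments, so treat it as illustrative rather than copy-and-paste:

```ini
# Illustrative Icehouse-era keystone.conf fragment. Driver paths and
# option names varied by deployment; verify against the release notes.
[identity]
driver = keystone.identity.backends.ldap.Identity      # users and groups from LDAP

[assignment]
driver = keystone.assignment.backends.sql.Assignment   # roles and projects in SQL

[token]
driver = keystone.token.backends.kvs.Token             # persistent KVS (e.g. Redis)
```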

For the full set of features, please refer to: https://wiki.openstack.org/wiki/ReleaseNotes/Icehouse

I’m really looking forward to the Summit in Atlanta and will be spending most of my time in the Juno Design Summit contributing to Heat, Ceilometer, and Solum.

You can also follow me on Twitter @kenowens12

 


Delivering Policy in the Age of Open Source

This is an exciting time in the history of datacenter infrastructure.  We are witnessing the collision of two major trends: the maturation of open source software and the redefinition of infrastructure policy.

The trend towards open source is self-evident.  Platforms such as OpenStack and OpenDaylight are gaining huge developer mindshare as well as support and investment from major vendors.  Even some newer technologies like Docker, which employs Linux kernel containers, and Ceph, a software-based storage solution, offer promising paths in open source.  Given the fundamental requirements of interoperability in architecturally diverse infrastructure environments, it’s no surprise that open source is gaining momentum.

The second trend, around policy, is a bit earlier in its evolution but equally disruptive.  Today, there is a huge disconnect between how application developers think about their requirements and the languages and tools through which those requirements are communicated to the infrastructure itself.  For example, just to handle networking, a simple three-tier app must be deconstructed into an array of VLANs, ACLs, and routes spread across a number of devices.  Storage and compute present similar challenges.  To simplify this interaction and create more scalable systems, we need to rethink how resources are requested and distributed between different components.  This really boils down to shifting the abstraction model away from configuring individual devices and toward separately capturing user intent and operational, infrastructure, and compliance requirements.

At Cisco, we’ve embraced both of these trends.  We are active contributors to over 100 open source projects and were founding members of OpenStack Neutron and OpenDaylight.  We’ve also made open source a successful business practice by incorporating and integrating popular projects with our products.  In parallel, Cisco has accumulated a lot of experience in describing policy through the work we’ve done with the Cisco Unified Computing System (UCS) and, most recently, with Cisco Application Centric Infrastructure (ACI).

Building on this foundation, we see a unique opportunity to collaborate with the open source community to deliver a vision for policy-driven infrastructure.  This will enhance the usability, scale, and interoperability of open source software and benefit the entire infrastructure ecosystem.

This vision includes two initiatives in the open source community:


  1. Group-Based Policy: An information model designed to express applications’ resource requirements of the network through a hardware-independent, declarative language, while leaving a simple control and data plane in place.  This approach replaces traditional networking constructs like VLANs with new primitives such as “groups”, which model tiers or components of an application, and “contracts”, which describe the relationships between them (see the sketch after this list).  Group-Based Policy will be available in OpenStack Neutron as well as OpenDaylight through a plug-in model that can support any software or hardware infrastructure.
  2. OpFlex: A distributed framework of intelligent agents within each networking device designed to resolve policies.  These agents would translate an abstract, hardware-independent policy taken from a logically central repository into device-specific features and capabilities.
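
To make the group/contract vocabulary concrete, here is a purely illustrative sketch of a three-tier app modeled this way. The data structures are invented to mirror the vocabulary; they are not the actual Neutron or OpenDaylight GBP API:

```python
# Purely illustrative: a three-tier app as groups and contracts
# instead of VLANs and ACLs. Not the actual GBP API.
groups = ['web', 'app', 'db']    # tiers of the application

contracts = [
    # (consumer group, provider group, service the provider offers)
    ('web', 'app', 'http:8080'),
    ('app', 'db',  'tcp:3306'),
]

# A renderer (a Neutron or OpenDaylight plug-in, or hardware such as
# ACI) would translate these relationships into whatever the
# infrastructure actually understands: ACLs, VLANs, flows, ...
for consumer, provider, service in contracts:
    print(f'{consumer} may consume {service} from {provider}')
```

Nothing above names a VLAN or a device; that separation is the point of the abstraction.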

 

Let’s look a bit more closely at each of these initiatives.



How Anyone in Any Industry Can Get Started in Cloud

In today’s business landscape, cloud adoption and deployment are more than just a technical discussion. They’re really a choice about how to operate your business, regardless of what industry or vertical your organization belongs to.

However, as a former CIO, I understand that many CIOs are more concerned with the challenges they face when moving to the cloud than the benefits they can achieve.

For example, in the past, all of your company information and applications were locked up behind a firewall. As such, none of your customers or remote employees could gain access to your network. Now, through the cloud, you can put your business out in the world – where your customers, employees, partners and more can gain access. It’s truly making business more accessible, in an incredibly flexible way – but it can be a daunting process.

Recently, I had the chance to participate in a new Cloud Insights Video Podcast and share how all verticals face similar challenges when it comes to cloud. It probably comes as no surprise that the key areas of concern are security and privacy.

So, how can CIOs address these challenges?  

Go find the right partner.

Security and privacy are very real challenges, and it’s the CIO’s job to address them, but he or she doesn’t have to go it alone. Businesses should look for a cloud service provider that can become a trusted business partner. When a business is looking for a cloud service provider to host its applications or data, the main questions that arise are:

    • How are we going to ensure security?
    • How will I maintain control over the data and applications that I put in the cloud?
    • How do I maintain visibility?

When these questions about control and visibility are answered, it inevitably leads to trust. And when a CIO feels there is a level of trust for information and application security within the cloud, it ripples down through the organization, ultimately empowering customer relationships.

It’s transformational when a CEO can say to customers, “We do have that level of control and visibility and you can look to us to take care of your information.”

As organizations in various verticals look to move past security concerns, CIOs need to find a partner they trust and start a conversation; they may be surprised at how quickly some of their concerns can be mitigated.

Visit Cloud Executive Perspectives to get additional cloud insights for IT leaders and subscribe to the Cisco Cloud Insights video podcast channel on iTunes or via RSS.  Additional Cisco Cloud Insights videos can also be found here.

Follow @CiscoCloud and use #CiscoCloud to join the conversation!

Cloud Insights: How Anyone in Any Vertical Can Get Started in Cloud from Cisco Business Insights

 

Additional Resources:

In the same video podcast series:  How Cisco IT Solved Its Internal Cloud Dilemma by Didier Rombaut via #CiscoBlog

Cisco Solutions for Open and Secure Intercloud Workload Migration. Join our webcast to learn how the Cisco InterCloud solution helps ensure the same network security, quality of service (QoS), and access control policies previously enforced in the data center are implemented in the public cloud. The webcast is available on demand.

Watch the Cisco Intercloud Workload Migration Webcast  (available on demand)

Register for @CiscoLive May 18-22 2014 — San Francisco #CLUS:

Register today for the Cisco Powered Cloud Day at Cisco Live on Monday, May 19. The day will focus on opportunities and challenges that can be addressed with cloud.

Watch Cisco Live’s Technology Business Vision keynote by Rob Lloyd on Tuesday, May 20 at 10:00 a.m. PDT.

Watch Cisco Live’s Cloud Technology Trend keynote, Aligning Your Strategy and Business for Cloud Success, by Dr. Gee Rittenhouse and Faiyaz Shahpurwala on Tuesday, May 20 at 1:30 p.m. PDT.
