
Docker Networking Going Enterprise?

October 14, 2014 at 4:18 am PST

The second revolution in server virtualization is here. Virtual machines were the first revolution, giving users the ability to run multiple workloads on a single server through a hypervisor. Now the next wave is here: Linux containers have recently started to gain momentum, with many enterprise customers asking me whether they should consider them and whether Cisco offers Docker support in its enterprise-grade virtual networking products.

I approached my engineers to see whether our recently introduced Nexus 1000V for the Linux KVM hypervisor, a product family that already has 10,000+ customers across various hypervisors, could support Linux containers, or more specifically the popular Linux container technology, Docker.

One of the key advantages of the Nexus 1000V today is that it allows easy management of policies for all of the virtual machines. For example, with a single command or REST API call, a security policy can be deployed or altered across all virtual interfaces connected to a Virtual Extensible LAN (VXLAN). My reasoning was that we should be able to extend that support to Linux containers and Docker.

So I approached Tim Kuik (@tjkuik) and Dave Thompson (@davetho610) and, much to my delight, they not only said the Nexus 1000V can do it but also showed how, so that customers can take advantage of this today in their deployments.

I have included Tim and Dave's write-up below on how to attach Docker containers to the Nexus 1000v and assign policies, so that you can try this in your own setup. Happy reading.

How to Use Docker with the Nexus 1000V for the KVM Hypervisor:


Begin by installing the Nexus 1000v to manage one or more Ubuntu servers. The Nexus 1000v is managed by a Virtual Supervisor Module (VSM). Once the package is installed on your servers, the servers will appear on the VSM as Virtual Ethernet Modules (VEMs). Below we can see our VSM is managing a server named Bilbo:

[Screenshot: VSM output showing server Bilbo attached as a Virtual Ethernet Module]

We’ve also pre-configured our server with a port-channel capable of carrying traffic for VLANs 100-109. We’ve used an Ethernet port-profile to conveniently manage the uplinks for all of our servers:

[Screenshot: Ethernet port-profile configuration for the server uplinks]
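As a rough sketch (the profile name and channel-group mode are our assumptions, not the exact configuration from the screenshot), an Ethernet port-profile of this kind would look something like this on the VSM:

    port-profile type ethernet uplink-trunk
      switchport mode trunk
      switchport trunk allowed vlan 100-109
      channel-group auto mode on
      no shutdown
      state enabled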

A key concept of the Nexus 1000v is that of a Port-profile.  The Port-profile allows for a shared set of port attributes to be cohesively managed in a single policy definition.  This policy can include an ACL definition, Netflow specification, VLAN or VXLAN designation, and/or other common port configuration attributes.  We can, of course, create multiple Port-profiles.  Perhaps we would have one per level of service or tenant.  The Port-profile provides the mechanism to collect and manage the set of containers that share the same policy definition.

Below we create a Port-profile that could be used by any number of containers on any number of servers.

[Screenshot: port-profile configuration on the VSM]
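For reference, a vethernet port-profile named vlan100, as used in the rest of this walkthrough, would look roughly like this (a sketch, not the verbatim screenshot contents):

    port-profile type vethernet vlan100
      switchport mode access
      switchport access vlan 100
      no shutdown
      state enabled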

Install Docker on your server. [https://docs.docker.com/installation/ubuntulinux/]

[Screenshot: Docker installation on the server]

The purpose of the container is to run our application. For this example, let's create one that can handle ssh sessions. Here is an example Dockerfile that does that:

[Screenshot: example Dockerfile]
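The screenshot has the original; a minimal Dockerfile along these lines does the job (the base image and the throwaway root password are our illustrative choices):

    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y openssh-server
    RUN mkdir /var/run/sshd
    # Set a throwaway root password so we can log in over ssh (example only)
    RUN echo 'root:docker-demo' | chpasswd
    # Ubuntu 14.04 disables root password login by default; re-enable it
    RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
    EXPOSE 22
    CMD ["/usr/sbin/sshd", "-D"]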

At this point you can use Docker to build an image from this Dockerfile.

[Screenshot: building the image with docker build]
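Assuming the Dockerfile sits in the current directory, the build is a one-liner (the image tag sshd-image is our choice):

    docker build -t sshd-image .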

All the pieces are now in place. The Nexus 1000v is running. We have a policy definition that will assign interfaces to VLAN 100 (port-profile vlan100). Docker is installed on our server. We have created a useful container image. Let's create an actual container from this image:

[Screenshot: starting the container with docker run]
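With the image and container names assumed above, the run step would look something like this (the post uses Docker's --networking=false flag from that era; newer Docker releases express the same thing as --net=none):

    docker run -d --name sshd-cont --networking=false sshd-image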

The container instance started at this point is running with just a loopback interface, since we used the argument "--networking=false". We can now add an eth0 interface to this container and set it up to be handled by the Nexus 1000v on the host.

Set up a few environment variables we will use as part of the procedure. Find the PID of the running container and generate UUIDs to be used as identifiers for the port and container on the Nexus 1000v:

[Screenshot: commands that capture the container PID and generate the UUIDs]
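A sketch of how those values could be gathered, assuming the container name sshd-cont from the run step above (docker inspect and uuidgen are standard tooling; the variable names are our own):

    # PID of the container's init process, used later to reach its network namespace
    PID=$(docker inspect --format '{{.State.Pid}}' sshd-cont)
    # UUIDs identifying the port and the container to the Nexus 1000v
    PORT_UUID=$(uuidgen)
    VM_UUID=$(uuidgen)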

In this example the following PID and UUIDs were set:

[Screenshot: the resulting PID and UUID values]

Create a Linux veth pair and assign one end to the Nexus 1000v. We will use the port-profile defined on the VSM named "vlan100", which will configure the port as an access port on VLAN 100:

[Screenshot: creating the veth pair and adding the host end to the Nexus 1000v]
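The screenshot has the authoritative commands. As a rough sketch, and assuming the VEM is managed through the Open vSwitch-style vsctl/database interface implied by the Interface-table description below (the bridge name br-int and the interface naming are assumptions), the sequence looks like:

    # Create the veth pair: the _eth0 end attaches to the switch,
    # the _cont end will later be moved into the container
    ip link add veth${PID}_eth0 type veth peer name veth${PID}_cont
    ip link set veth${PID}_eth0 up
    # MAC of the end that will live inside the container (from 'ip link show')
    MAC=$(ip link show veth${PID}_cont | awk '/ether/ {print $2}')
    # Attach the host end and set the keys described below;
    # port-profile vlan100 applies the VLAN 100 access policy
    ovs-vsctl add-port br-int veth${PID}_eth0 -- set interface veth${PID}_eth0 \
        external_ids:iface-id=$PORT_UUID \
        external_ids:attached-mac=$MAC \
        external_ids:profile=vlan100 \
        external_ids:vm-uuid=$VM_UUID \
        external_ids:vm-name=sshd-cont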

When an interface is added to the Nexus 1000v, parameters are specified for that interface by adding keys to the external_ids column of the Interface table.  In the example above the following keys are defined:

  • iface-id: Specifies the UUID for the interface being added. The Nexus 1000v requires a UUID for each interface added to the switch so we generate one for this.
  • attached-mac: Provides the MAC of the connected interface. We get this from the ‘ip link show’ command for the interface to be added to the container.
  • profile:  Provides the name of the port-profile which the Nexus 1000v should use to configure policies on the interface.
  • vm-uuid: Specifies the UUID for the entity that owns the interface being added, in this case the container instance.  Since Docker doesn’t assign a standard Linux UUID to the container instance, we generate one for this as well.
  • vm-name: Specifies the name of the entity which owns the interface.  In this case it’s the container name.

Move the other end of the veth pair into the container’s namespace, rename it eth0, and give it a static IP address of 172.22.64.201 (of course, DHCP could be used instead to assign the address):

[Screenshot: moving the veth endpoint into the container’s namespace and assigning the address]
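A sketch of that step using the standard network-namespace recipe of the time (the /24 prefix length is an assumption; adjust it to your subnet):

    # Expose the container's network namespace to the ip-netns tooling
    mkdir -p /var/run/netns
    ln -sf /proc/$PID/ns/net /var/run/netns/$PID
    # Move the container end of the veth pair in, rename it, and address it
    ip link set veth${PID}_cont netns $PID
    ip netns exec $PID ip link set dev veth${PID}_cont name eth0
    ip netns exec $PID ip addr add 172.22.64.201/24 dev eth0
    ip netns exec $PID ip link set eth0 up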

On the Nexus 1000v’s VSM you will see logs like this indicating the interface has been added as switch port vethernet1 on our virtual switch:

[Screenshot: VSM log messages]

The following VSM commands show that switch port veth1 is up on VLAN 100 and is connected to host interface veth18924_eth0 on host bilbo:

[Screenshot: VSM show-command output for veth1]
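The exact commands are in the screenshot; on a Nexus 1000v they would typically be along these lines (a sketch, not verbatim output):

    show interface vethernet1
    show interface virtual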

On the host bilbo we can use vemcmd to get information on the port status:

[Screenshot: vemcmd output showing the port status]

That’s it. We now have a useful Docker container with an interface on the Nexus 1000v using our selected policy. From another server (or container) on the same VLAN, we can ssh into this container using the IP address we assigned:

[Screenshot: ssh session into the container]
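For example, from another machine on VLAN 100 (root login works here only because the example Dockerfile above set a root password):

    ssh root@172.22.64.201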

When shutting down the Docker container, remove the port before removing the container:

[Screenshot: removing the port and the container]
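A teardown sketch matching the assumed names used above (again, br-int and the vsctl-style interface are assumptions):

    # Detach the port from the Nexus 1000v first...
    ovs-vsctl del-port br-int veth${PID}_eth0
    # ...then delete the veth pair (removing one end removes its peer)
    ip link delete veth${PID}_eth0
    # ...and finally stop and remove the container
    docker stop sshd-cont && docker rm sshd-cont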


Announcing Cisco UCS Integrated Infrastructure for Big Data

Big Data is expected to fuel the next industrial revolution; an early sign is its wide adoption across major sectors including agriculture, education, entertainment, finance, healthcare, manufacturing, and government. The Big Data technology and services market is growing at about six times the rate of the overall information and communications technology market: 27 percent compound annual growth, reaching $26 billion. Big Data is expected to drive $240 billion of worldwide IT spending directly and indirectly in 2016, moving toward a trillion dollars in 2020.

Through disruptive innovation, Cisco UCS has demonstrated the highest growth in the worldwide server market and has been validated by analysts as a leading platform for business applications, including big data and analytics. The Cisco solution for Big Data, the Cisco UCS Common Platform Architecture (CPA) for Big Data, has already become a popular choice for enterprise Big Data deployments across major industry verticals. Today we are announcing the third generation of our solution, Cisco UCS Common Platform Architecture for Big Data v3, extending our vision of integrated infrastructure to help organizations deploy and scale applications faster, driving the revenue side of the business while reducing risk and TCO.

The new solution, Cisco UCS Integrated Infrastructure for Big Data, offers a set of reference architectures and solution bundles designed and optimized with leading big data and analytics software partners including Actian, Cloudera, DataStax, Elastic Search, HortonWorks, MapR, MarkLogic, MongoDB, Pivotal, Platfora, SAP, SAS, Splunk, and others. These architectures can be used as is or customized to meet specific business requirements.

In addition, the following are available directly from Cisco: Cloudera Enterprise Basic, Flex, and Data Hub editions; Hortonworks Data Platform Enterprise and Enterprise Plus subscriptions; MapR M5 Enterprise Hadoop Platform and M7 Enterprise MapReduce with HBase Platform; SUSE Linux Enterprise Server; and Red Hat Enterprise Linux. Cisco also offers Cisco UCS Director Express for Big Data, a software stack integrated with Hadoop distributions from Cloudera, MapR, and Hortonworks that automates deployment of Hadoop on C240-based Cisco UCS CPA v2 and v3 for Big Data, providing a single management pane across both physical infrastructure and Hadoop software.

Cisco UCS Integrated Infrastructure for Big Data: Cisco UCS CPA v3 Reference Architectures and Single SKU bundles:

Starter (UCS-SL-CPA3-S, 8 servers)
  • Designed for: performance and density for analytics engines, NoSQL databases, and entry-level Hadoop deployments
  • Server: UCS C220 M4
  • CPU: 2 x Intel Xeon E5-2620 v3 (15M cache, 2.40 GHz)
  • Memory: 256 GB
  • Storage controller: Cisco 12-Gbps SAS Modular RAID Controller with 2-GB FBWC
  • Storage: 8 x 1.2-TB 10K SAS SFF HDD
  • Network: Cisco UCS VIC 1227 (2 x 10GE SFP+)
  • Network and cluster scaling: 2 x Cisco UCS 6248UP fabric interconnects; scale up to 32 servers with no additional switching infrastructure

High Performance (UCS-SL-CPA3-H, 8 servers)
  • Designed for: extreme performance and density for analytics engines
  • Server: UCS C220 M4
  • CPU: 2 x Intel Xeon E5-2680 v3 (30M cache, 2.50 GHz)
  • Memory: 256 GB
  • Storage controller: Cisco 12-Gbps SAS Modular RAID Controller with 2-GB FBWC
  • Storage: 2 x 1.2-TB 10K SAS SFF HDD, 6 x 400-GB SAS SSD
  • Network: Cisco UCS VIC 1227 (2 x 10GE SFP+)
  • Network and cluster scaling: 2 x Cisco UCS 6248UP fabric interconnects; scale up to 32 servers with no additional switching infrastructure

Performance Optimized (UCS-SL-CPA3-P, 16 servers)
  • Designed for: balance of compute and storage for scale-out applications including Hadoop, NoSQL, and MPP databases
  • Server: UCS C240 M4
  • CPU: 2 x Intel Xeon E5-2680 v3 (30M cache, 2.50 GHz)
  • Memory: 256 GB
  • Storage controller: Cisco 12-Gbps SAS Modular RAID Controller with 2-GB FBWC
  • Storage: 2 x 120-GB SATA SSD, 24 x 1.2-TB 10K SAS SFF HDD
  • Network: Cisco UCS VIC 1227 (2 x 10GE SFP+)
  • Network and cluster scaling: 2 x Cisco UCS 6296UP fabric interconnects; scale up to 80 servers per domain, and to thousands of servers with Cisco Nexus 7000 or 9000 Series Switches

Capacity Optimized (UCS-SL-CPA3-C, 16 servers)
  • Designed for: storage-intensive Hadoop and scale-out storage deployments
  • Server: UCS C240 M4
  • CPU: 2 x Intel Xeon E5-2620 v3 (15M cache, 2.40 GHz)
  • Memory: 128 GB
  • Storage controller: Cisco 12-Gbps SAS Modular RAID Controller with 2-GB FBWC
  • Storage: 2 x 120-GB SATA SSD, 12 x 4-TB 7.2K SAS SFF HDD
  • Network: Cisco UCS VIC 1227 (2 x 10GE SFP+)
  • Network and cluster scaling: 2 x Cisco UCS 6296UP fabric interconnects; scale up to 80 servers per domain, and to thousands of servers with Cisco Nexus 7000 or 9000 Series Switches

Extreme Capacity (UCS-SL-CPA3-D, 2 servers)
  • Designed for: industry-leading storage density with low cost per terabyte
  • Server: UCS C3160
  • CPU: 2 x Intel Xeon E5-2695 v2 (30M cache, 2.40 GHz)
  • Memory: 256 GB
  • Storage controller: Cisco 12-Gbps SAS Modular RAID Controller with 4-GB FBWC
  • Storage: 2 x 120-GB SATA SSD, 60 x 4-TB 7.2K SAS SFF HDD
  • Network: 2 x Cisco UCS VIC 1227 (each 2 x 10GE SFP+)
  • Network and cluster scaling: integrates into existing or new Cisco UCS and Nexus infrastructure

Available 12/2014

Let’s Play Ball – Cisco at SAP TechEd && d-code October 20 – 24 in Las Vegas

[Photo: SAP TechEd show floor]

It’s October, which means only one thing…and it’s not the MLB playoffs. The Cisco & SAP teams are covering all the bases for the upcoming SAP TechEd && d-code, October 20 – 24 in Las Vegas.

Leading off, we have a full line-up of demos, sessions and events that will highlight how we deliver a complete compute platform for SAP applications, including SAP HANA.

Connect with Cisco in Booth 1000

Learn about Cisco Data Center products and talk to Cisco solution heavy hitters in booth 1000. We’ll be conducting live solution demonstrations on:
•    SAP HANA
•    SAP Solutions on Vblock
•    SAP Solutions on FlexPod
•    Internet of Things, jointly with SAP
•    IT Process Automation by Cisco
•    Application Centric Infrastructure



Cisco Empowered Women’s Network Sponsors San Jose State University STEM Challenge

Last month the Cisco Empowered Women’s Network (CiscoEWN) sponsored a San Jose State University STEM Challenge together with CloudNOW.  The goal was to promote technology career paths for college women and to recognize students’ innovative efforts at San Jose State University.

Through my involvement with CloudNOW, a non-profit organization whose mission is to drive the professional development of women from school age through their working careers, I got to meet Debra Caires, the Director of Internship Programs at San Jose State University. Debra has relentless energy and passion for fostering STEM careers for college students and for promoting gender diversity. Debra organized this STEM Challenge event, where we also announced CloudNOW’s Top College Women in Cloud awards. We are currently accepting submissions, and all college women (and men too) are encouraged to apply.

[Photo: STEM Challenge event]

CiscoEWN generously supported this event in a number of ways. The guest speaker at the event was Tami Newcombe, Vice President of Sales at Cisco Systems. CiscoEWN also offered students ways to connect with the network and reap the benefits of the many seminars and events it runs.

Cisco’s Tami Newcombe opened the event with reflections on her own career trajectory, spanning a mix of engineering and sales executive leadership. Tami encouraged students “to go break glass” and be bold in their career paths, but also gave practical advice on staying attuned to the dynamics of the organizations they work in. Tami talked about the value of internships, describing them as the “new interview” in the job-search process for students and employers alike.

I shared further details on the Top College Women in Cloud awards and was pleased to see great interest. We are looking forward to strong representation from San Jose State University.

The event concluded with the judging of 40 student STEM posters by judges from Cisco Systems, CloudNOW, and Adobe Systems. Students created these posters based on their contributions to industry during their internships. The technology areas ranged from Big Data, cybersecurity, and the Internet of Things to emerging digital technologies.

We awarded first and second place prizes to Jordan Jennings and Sindusha Doddapeni respectively.

[Photo: STEM Challenge event]

Photos by San Jose State University Student Eileen Wai

The Benefits of an Application Policy Language in Cisco ACI: Part 1 – Enabling Automation

October 10, 2014 at 5:00 am PST

[Note: This is the first of a four-part series on the OpFlex protocol in Cisco ACI, how it enables an application-centric policy model, and why other SDN protocols do not.  Part 2 | Part 3 | Part 4]

IT departments and lines of business are looking at cloud automation tools and software-defined networking (SDN) architectures to accelerate application delivery, reduce operating costs, and increase business agility. The success of an IT or cloud automation solution depends largely on the business policies that can be carried out by the infrastructure through the SDN architecture.

Through a detailed comparison of critical architectural components, this blog series shows how the Cisco Application Centric Infrastructure (ACI) architecture supports a more business-relevant application policy language, greater scalability through a distributed enforcement system rather than centralized control, and greater network visibility than alternative software overlay solutions or traditional SDN designs.

Historically, IT departments have sought greater automation as device proliferation accelerated, to overcome the challenges of applying manual processes to critical tasks. About 20 years ago the automation of desktop and PC management was the imperative; about 10 years ago server automation became important as applications migrated to larger numbers of modular x86 and RISC-based systems. Today, with the consolidation of data centers, IT must address not only application and data proliferation but also the emergence of large-scale application virtualization and cloud deployments, requiring a focus on cloud and network automation.

The emergence of SDN promised a new era of centrally managed, software-based automation tools that could accelerate network management, optimization, and remediation. Gartner has defined SDN as “a new approach to designing, building and operating networks that focuses on delivering business agility while lowering capital and operational costs.” (Source: “Ending the Confusion About Software-Defined Networking: A Taxonomy”, Gartner, March 2013)

Furthermore, in an early 2014 report (“Mainstream Organizations Should Prepare for SDN Now”, Gartner, March 2014), Gartner notes that “SDN is a radical new way of networking and requires senior infrastructure leaders to rethink traditional networking practices and paradigms.” In the same report, Gartner makes an initial comparison of the mainstream SDN solutions that are emerging, including VMware NSX and Cisco ACI. There has been some discussion about whether Cisco ACI is an SDN solution or something more, but most agree that, in a broad sense, the IT automation objectives of SDN and Cisco ACI are basically the same, and some of the shared baseline architectural features, including a central policy controller, programmable devices, and the use of overlay networks, make for a useful comparison.

This blog series focuses on the way that Cisco ACI expands traditional SDN methodology with a new application-centric policy model. It specifically compares critical protocols and components in Cisco ACI with VMware NSX to show the advantages of Cisco ACI over software overlay networks and the advantages of the ACI application policy model over what has been offered by prior SDN solutions. It also discusses what the Cisco solution means for customers, the industry, and the larger SDN community.

