I can’t believe it has been a year since I wrote the blog series (part-I, part-II, part-III and part-IV – last one with help from Vincent Esposito) to share some ideas about how to bring theory to practice when it comes to ACI and Micro Segmentation.
In the last 12 months we have added quite a lot of new functionality to ACI and in this post I begin another small series to share the latest about APIC related to Micro Segmentation. Now we are getting to the point where the architectural advantages of ACI and APIC will begin to show and shine compared to alternatives.
To begin with, the APIC declarative approach to network and policy allows it to interact with different data plane implementations. APIC does not need low-level knowledge of data plane specifics, since each data plane is programmed in its own particular way via a local OpFlex agent. This approach has scaling advantages, but it also allows us to adapt to changing environments and potentially work with third-party data plane elements. As an example, APIC can program L2, L3 and stateful security policies to Open vSwitch instances. We use that approach as part of our OpenStack KVM integration as well as in the APIC CNI-plugin integration with Kubernetes.
A consequence of this architectural advantage of APIC is that it does not depend 100% on the virtual switch. In other vendor SDN implementations, you have to install (and license) the vendor’s virtual switch and in absence of it, you get nothing. Not the case with APIC.
For instance, in the case of the VMware native VDS we can’t program policies on it, but we can program it using open northbound APIs with simple features in order to steer all traffic to the ACI leaf, where we can apply policies. In a way, we program the VDS to act like a FEX: all traffic goes to the leaf, where we can do more intelligent things. So sometimes we apply policy on an ACI leaf, sometimes we apply policy on a virtual switch, and sometimes we will do it in other data planes.
The other architectural advantage is that our model expresses policy intent, and policy is not restricted to security. For example, QoS settings can be part of policy. I will elaborate on this a bit more in upcoming parts of this series.
Now … what is new with APIC as it comes to Micro Segmentation?
With ACI 2.3 and now 3.0 we have added a lot of new features in multiple domains. From resource quota management for users and tenants to QinQ, VEPA, enhancements in CoPP and various routing protocols, Multi-Site and more. It is always good to check the Release Notes for details.
Specific to Micro Segmentation, I will focus on five things in this post:
- Support for additional VM-attributes: vSphere Tags and Custom Properties
- Logical Operators for VM-attribute combinations
- EPG Contract Inheritance
- DNS-based uEPGs
- IntraEPG Contracts
It is very important to remark that Micro Segmentation does not necessarily mandate the use of Micro EPGs. Regular EPGs allow you to segment subnets into smaller chunks, as small as you want. However, regular EPGs select endpoints based on path and encapsulation, whereas Micro EPGs (uEPGs) allow for more dynamic endpoint classification.
It is also important to highlight a change in the APIC GUI for ACI Micro EPG configurations. Prior to APIC 2.3, the attributes were specified on the main policy screen of the uEPG configuration. Starting with APIC 2.3 we have added a specific section for this configuration. Notice below, for a uEPG called ‘apache-servers-gold’, there is a new folder called ‘uSeg Attributes’ where you will now specify the classification.
ACI 3.0 also introduces significant changes to the APIC GUI. In the screenshots below I am using ACI 2.3 for features that were added in that release, and the new GUI for features added with 3.0.
Support for vSphere Tags and Microsoft Custom Properties to classify endpoints in uEPGs
Matching on the VM name, Guest OS, or Data Center is no doubt useful, but many customers prefer to use pure metadata to classify their Virtual Machines. The challenge with VM metadata is that it is not standard.
[Note: When you think about that, one thing that strikes me is that customers have never pushed for standards related to x86 virtualisation management. Nobody imagines that you could vMotion between different hypervisors – although … why not? – but it’s not even that, it’s that even VM meta data is not standard! Nobody complains that there’s no coherence or consistency for meta-data between virtualisation platforms. Food for thought!]
In vSphere, customers have traditionally used Custom Attributes to add meta-data to virtual machines. This is useful, for instance, to record who the VM owner is, what application team is responsible for it, or when the last snapshot was taken. Custom Attributes also work great to classify VMs into uEPGs. But Custom Attributes are in the process of being deprecated in favour of vSphere Tags.
Tags were introduced with vSphere 5.1. A tag is a label that you can apply to objects in the vSphere inventory. In our particular case, we are interested in tags assigned to Virtual Machines. Similar to Custom Attributes, Tags allow you to identify that a VM belongs to a particular environment (i.e. PCI, HIPAA …) or organisation (HR, Finance, …), etc. And now, since ACI 2.3, you can use those tags to identify the VMs, classify them into the right uEPG and therefore apply the relevant policy.
As with other attributes, you can use Tags within uEPG configurations and configure this from the GUI, NX-OS APIC CLI, vCenter Plugin or direct REST calls. Below you can see an example of the APIC GUI and another of the vCenter Plugin:
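As an illustration of the REST option, a uEPG matching on a vSphere Tag could be created with an XML body along these lines. This is a hedged sketch based on the APIC object model (fvAEPg/fvCrtrn/fvVmAttr); the tenant, application profile and domain names are hypothetical, and exact attribute values may differ by release:

```xml
<!-- Hypothetical names; posted to /api/mo/uni/tn-Tenant1.xml -->
<fvAp name="MyApp">
  <fvAEPg name="apache-servers-gold" isAttrBasedEPg="yes">
    <fvRsBd tnFvBDName="BD1"/>
    <fvRsDomAtt tDn="uni/vmmp-VMware/dom-myVMM"/>
    <!-- uSeg attribute block: classify VMs carrying the vSphere Tag 'gold' -->
    <fvCrtrn name="default">
      <fvVmAttr name="tag-gold" type="tag" operator="equals" value="gold"/>
    </fvCrtrn>
  </fvAEPg>
</fvAp>
```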
Hyper-V readers may be looking for a similar feature. Despair not! … in Hyper-V there is something similar called “Custom Properties”. They are similar to vSphere Custom Attributes, and they allow you to add arbitrary meta-data to a VM.
With ACI 3.0 we are adding Custom Properties to the list of attributes supported for uEPGs. Small clarification: with 3.0 the feature is in beta support only; full support will come in the following release. Microsoft Custom Properties are mapped in APIC as “Custom Attributes” for consistency.
[Note: as a side note, we also added IntraEPG isolation for Hyper-V with ACI 3.0]
Support for Logical Operators when using VM attributes to classify endpoints in uEPGs
Prior to ACI 2.3, you could combine multiple VM-attributes to select which Virtual Machines belong to a uEPG, but only one attribute would eventually determine the classification and we had a pre-defined attribute precedence for this. This does not allow for a lot of flexibility.
With the new functionality the idea is very simple: you can now define multiple blocks of statements of attributes, and you can express whether you want to match on all attributes (logical AND) or any attribute (logical OR).
uEPGs can be configured in this way from the APIC GUI, the NX-OS APIC CLI, the vCenter Plugin or, of course, direct REST calls. The APIC GUI configuration is outlined in the graphics below:
Of course you can get to a situation where you have two uEPGs configured with different logic and a VM that matches the criteria for both of them. The tie-breaker for that situation is a new uEPG attribute: the uEPG Precedence. Below you see an XML snippet where we have a uEPG with a precedence of 25. The default is zero.
The precedence can also, of course, be set on the GUI, the vCenter Plugin or the NX-OS CLI. I am adding GUI screenshots and XML samples just to illustrate the options.
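To illustrate both ideas at once, here is a hedged sketch of a uEPG that combines two attributes with a logical AND and sets a precedence of 25. It follows the fvCrtrn model, where `match` expresses the operator (all = AND, any = OR) and `prec` the precedence; the EPG name and attribute values are hypothetical:

```xml
<fvAEPg name="web-gold" isAttrBasedEPg="yes">
  <!-- prec breaks ties when a VM matches more than one uEPG; default is 0 -->
  <fvCrtrn name="default" match="all" prec="25">
    <fvVmAttr name="os"   type="guest-os" operator="contains"   value="CentOS"/>
    <fvVmAttr name="name" type="vm-name"  operator="startsWith" value="web"/>
  </fvCrtrn>
</fvAEPg>
```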
Use of Logical Operators with APIC 2.3 is supported only with vSphere VMM domains, whether using AVS or VDS. With APIC 3.0 we are adding support for Logical Operators with Microsoft Hyper-V.
EPG Contract Inheritance
In ACI, if you use policy enforcement between EPGs, you control policy using relations to contracts. One benefit of using uEPGs is that you can have a base EPG where you define endpoint encapsulation (i.e. the base VLAN for attaching bare metals, or the dvPortGroup or Network in a vSphere or Hyper-V environment). The endpoints on the base EPG will have certain access privileges. These may be limited to shared services, tooling and provisioning services, or may be application specific.
Then you may want to further refine your segmentation and define that specific endpoints require specific policy. You do that by configuring uEPGs. The uEPG will have a relation to a set of contracts specific to what you want to represent as “specific policy”, but oftentimes you want it to also have the same policy that you had already configured for the base EPG.
Another example is where you have “many of the same”: you have an EPG that has access to a set of resources, and then you configure many other EPGs that should be isolated from one another but have access to the same set of resources.
Prior to ACI 2.3, in all those cases you had to explicitly configure all contracts for each new EPG or uEPG. This is not by mistake; it is because we built the ACI policy model thinking of composable policies, whereby for each object (EPG) you pick the relations that you need and only those that you need, which is very safe and works great in fully automated environments.
Now with ACI 2.3 we added a feature that enables one EPG or uEPG to inherit policy from another. So you can configure a new uEPG and set its “master” to be the base EPG from my example in the paragraphs above. In this manner, you only have to configure the new things you want for the uEPG, but all the policies you had selected for the base are automatically inherited.
The graphic below shows the concept. EPG_B and EPG_C both inherit from EPG_A (their master EPG). Each of them can then have specific contract relations; for instance, EPG_B provides ‘Contract_TomCat’.
This model does simplify a lot of configurations, especially for those doing operations manually. For instance, if you now want to add a new contract to all EPGs, you only add it to EPG_A:
And you can inherit from more than one Master. For instance below, EPG_B now inherits also from EPG_A1 and as such provides ‘Contract_Syslog’:
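In XML terms, inheriting from a master is a single relation on the child EPG. The following is a hedged sketch using the fvRsSecInherited class from the APIC model (tenant, application profile and contract names here are hypothetical and reuse the example above):

```xml
<fvAEPg name="EPG_B">
  <!-- inherit all contract relations from the master EPG_A -->
  <fvRsSecInherited tDn="uni/tn-Tenant1/ap-MyApp/epg-EPG_A"/>
  <!-- plus an EPG-specific relation on top of what is inherited -->
  <fvRsProv tnVzBrCPName="Contract_TomCat"/>
</fvAEPg>
```

Adding a second fvRsSecInherited relation pointing at another master (EPG_A1 in the example) is how an EPG inherits from more than one master.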
DNS-based Micro EPGs
This feature has been introduced in preview or beta mode in both ACI 2.3 and 3.0. This means it is not yet TAC supported. The goal is to classify endpoints onto a uEPG based on a FQDN and/or a domain name.
The configuration requires two steps, both done at the tenant level. First you specify a DNS Server Group. Here is where you define the DNS server that will be used for a particular tenant for DNS-based uEPGs. When you add a new DNS Server Group you will see a warning that the feature is in beta, and upon acknowledgement, you can specify the addresses of the DNS servers.
Once this is done, you can configure DNS attributes as the classification criteria for your uEPGs:
This is a feature you have to use carefully. You can add a full domain name, for instance mydomain.net, or a FQDN like app1.mydomain.net. APIC will query the DNS server to find the IP address(es) for the domain you specify. This could be one address, or many. Those IP addresses must be known to the fabric on a base EPG, and if they are, then APIC will configure the fabric to classify those endpoints on the right uEPG.
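As a sketch of the second step, the DNS classification criterion sits alongside the other uSeg attributes in the uEPG. This is hedged: the fvDnsAttr class and its filter attribute are my reading of the APIC uSeg model for this beta feature, and the names reuse the example domain above:

```xml
<fvAEPg name="app1-web" isAttrBasedEPg="yes">
  <fvCrtrn name="default">
    <!-- classify any endpoint whose IP the DNS server returns for this FQDN -->
    <fvDnsAttr name="app1" filter="app1.mydomain.net"/>
  </fvCrtrn>
</fvAEPg>
```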
One use case is to have a farm of Virtual Machines on a default EPG with a default policy configured. When you define a new application URL, you create a corresponding uEPG: for example app1-web.mydomain.net, with the specific contracts for that app. Then, as your app is registered on your DNS server, the VMs that deliver that application will be placed in the uEPG. That approach can also be useful to expose services from CloudFoundry or OpenShift.
IntraEPG Contracts
Before ACI 3.0, an EPG could have two types of relations to contracts: provider (RsProv) or consumer (RsCons). Internal to the EPG you had two options: the default, which allows all endpoints in an EPG to communicate without restriction, or the IntraEPG Isolation setting, which blocks all communication.
With ACI 3.0 we are introducing a new type of relation: IntraEPG (RsIntraEPG). This will allow you to apply filters that restrict communication between endpoints on the same EPG. This works for EPGs and Micro EPGs as well. As of ACI 3.0 this feature is supported with vSphere VDS VMM domains and with Bare Metal (PhysDoms). Support for other VMM domains will be added in the future.
One common use case is when you have applications that require a heart-beat. For instance, imagine you run a distributed web application that uses Zookeeper to implement distributed services. The endpoints for this application will be all on the same EPG, and you will use whatever contract you require towards other EPGs to expose your application.
But the endpoints in the EPG need to communicate for Zookeeper to work, so you cannot use Isolated EPGs. Now you can define a contract with the filters required for Zookeeper and associate it with the EPG (or uEPG). So you have a full white-list model.
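A sketch of what the new relation type looks like on the EPG, using the fvRsIntraEpg class mentioned above. The contract name is hypothetical; its filters would carry the Zookeeper ports (2181 for clients, 2888/3888 for the ensemble):

```xml
<fvAEPg name="zk-ensemble">
  <!-- endpoints inside this EPG may only talk to each other
       on the ports permitted by this contract's filters -->
  <fvRsIntraEpg tnVzBrCPName="Contract_Zookeeper"/>
  <!-- normal provider/consumer relations to other EPGs are unaffected -->
  <fvRsProv tnVzBrCPName="Contract_Web"/>
</fvAEPg>
```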
What is next?
In the next few posts I will walk through some basic demos that use these features in combination and illustrate some potential use cases. We will also see that all those features can extend to more than one DC and more than one vCenter by using Multi-POD, which is an added benefit.
It is also very important to remember that Micro Segmentation is but one more tool to help with DC security. Cisco has a very broad portfolio to provide for a comprehensive security strategy that can leverage ACI Micro Segmentation. Cisco ACI looks at network security as a whole – beyond just segmentation and L2-L4 distributed policy – and offers several key network security features. It can work with a broad security ecosystem and will soon also add encryption capabilities.