As the Meraki Auto-VPN network becomes widely adopted for on-premises environments, the natural next step for customers will be to extend their automated SD-WAN network into their public cloud infrastructure.

Most organizations have different levels of domain expertise among engineers—those skilled in on-premises technologies may not be as proficient in public cloud environments, and vice versa. This blog aims to help bridge that gap by explaining how to set up a working Auto-VPN architecture in a multi-cloud environment (AWS and Google Cloud). Whether you’re an on-premises network engineer looking to explore cloud networking or a cloud engineer interested in Cisco’s routing capabilities, this guide will provide actionable steps and techniques. While this blog focuses on multi-cloud connectivity, learning how to set up vMX Auto-VPN in the public cloud will prepare you to do the same for on-premises MX devices.

Multi-Cloud Auto-VPN Objectives

The goal for this Proof-of-Concept (POC) is to conduct a successful Internet Control Message Protocol (ICMP) reachability test between the Amazon EC2 test instance on the AWS private subnet and the Compute Engine test instance on Google Cloud, using only their internal IP addresses. You can use this foundational knowledge as a springboard to build a full-fledged design for your customers or organization.

Using a public cloud is a great way to conduct an Auto-VPN POC. Traditionally, preparing for Auto-VPN POCs requires at least two physical MX appliances and two IP addresses that are not CGNAT-ed by the carrier, which can be difficult to acquire unless your organization has IPs readily available. However, in the public cloud, we can readily provision an IP address obtained from the public cloud provider’s pool of external IP addresses.

For this POC, we will use ephemeral public IPv4 addresses for the WAN interface of the vMX. This means that if the vMX is shut down, the public IPv4 address will be released, and a new one will be assigned. While this is acceptable for POCs, reserved public IP addresses are preferred for production environments. In AWS, the reserved external IP address is called Elastic IP, and in Google Cloud, this is called an external static IP address.

Prepare the AWS Environment

First, we will prepare the AWS environment to deploy the vMX, connect it to the Meraki dashboard, and set up Auto-VPN to expose internal subnets.

1. Create the VPC, Subnets, and Internet Gateways

In the AWS Cloud, private resources are always hosted in a Virtual Private Cloud (VPC). Each VPC contains subnets, a concept similar to what many of us are familiar with in the on-premises world. Each VPC must be created with an IP address range (e.g., 192.168.0.0/16), and the subnets that live inside the VPC must come from this range. For example, subnet A can be 192.168.1.0/24 and subnet B can be 192.168.2.0/24. An Internet Gateway (IGW) is the AWS component that provides internet connectivity to a VPC. Attaching an IGW to the VPC only makes internet connectivity available to it; it does not yet give the resources inside the VPC internet reachability.

As shown below, we will create a VPC (VPC-A) in the US-East-1 region with a Classless Interdomain Routing (CIDR) range of 192.168.0.0/16.

Next, we will create two subnets in VPC-A, both having IP addresses from VPC-A’s 192.168.0.0/16 range. A-VMX (subnet) will host the vMX and A-Local-1 (subnet) will host the EC2 test instance to perform the ICMP reachability test with Google Cloud’s Compute Engine over Auto-VPN.

We will now create an IGW and attach it to VPC-A. The IGW is needed so the vMX (to be deployed in a later step) can communicate to Meraki dashboard over the internet. The vMX will also need the IGW to establish Auto-VPN connectivity over the internet with the vMX on Google Cloud.
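If you want to capture this first step as code (in the spirit of the Terraform section at the end of this post), here is a minimal Terraform sketch using the AWS provider. The 192.168.0.0/16 VPC range and the 192.168.20.0/24 A-Local-1 range follow this walkthrough; the resource names and the 192.168.10.0/24 CIDR for the A-VMX subnet are illustrative assumptions.

# Minimal sketch (Terraform AWS provider); names and the A-VMX CIDR are assumptions.
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "vpc_a" {
  cidr_block = "192.168.0.0/16"
  tags       = { Name = "VPC-A" }
}

resource "aws_subnet" "a_vmx" {
  vpc_id     = aws_vpc.vpc_a.id
  cidr_block = "192.168.10.0/24" # assumed CIDR for the vMX subnet
  tags       = { Name = "A-VMX" }
}

resource "aws_subnet" "a_local_1" {
  vpc_id     = aws_vpc.vpc_a.id
  cidr_block = "192.168.20.0/24"
  tags       = { Name = "A-Local-1" }
}

resource "aws_internet_gateway" "igw_a" {
  vpc_id = aws_vpc.vpc_a.id
  tags   = { Name = "IGW-A" }
}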

2. Create Subnet-Specific Route Tables

In AWS, each subnet is associated with a route table. When traffic leaves the subnet, it consults its associated route table to look for the next-hop address for the destination. By default, each newly created subnet will share the VPC’s default route table. In our Auto-VPN example, the two subnets cannot share the same default route table because we need granular control of individual subnet traffic. Therefore, we will create individual subnet-specific route tables.

The two route tables shown below are each associated with a corresponding subnet. This allows traffic originating from each subnet to be routed based on its individual route table.
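Continuing the Terraform sketch above, the subnet-specific route tables and their associations could look like this (resource names are assumptions):

# One route table per subnet, each explicitly associated with its subnet.
resource "aws_route_table" "rt_a_vmx" {
  vpc_id = aws_vpc.vpc_a.id
  tags   = { Name = "RT-A-VMX" }
}

resource "aws_route_table" "rt_a_local_1" {
  vpc_id = aws_vpc.vpc_a.id
  tags   = { Name = "RT-A-Local-1" }
}

resource "aws_route_table_association" "a_vmx" {
  subnet_id      = aws_subnet.a_vmx.id
  route_table_id = aws_route_table.rt_a_vmx.id
}

resource "aws_route_table_association" "a_local_1" {
  subnet_id      = aws_subnet.a_local_1.id
  route_table_id = aws_route_table.rt_a_local_1.id
}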

3. Configure the Default Route on Route Tables

In AWS, we must explicitly configure the route tables to direct traffic heading toward 0.0.0.0/0 to be sent to the IGW. Subnets with EC2 test instances that require an internet connection will need their route tables to also have a default route to the internet via the IGW.

The route table for A-VMX (subnet) is configured with a default route to the internet. This configuration is necessary for the vMX router to establish an internet connection with the Meraki dashboard. It also enables the vMX to establish an Auto-VPN connection over the internet with Google Cloud’s vMX in a later stage.

For this POC, we also configured a default route on the route table for the A-Local-1 (subnet). During the ICMP reachability test, our local workstation will first need to SSH into the EC2 test instance. This requires the EC2 test instance to have an internet connection; therefore, the subnet it resides in needs a default route to the internet via the IGW.
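In the same Terraform sketch, the two default routes would be expressed roughly as follows:

# Default routes to the internet via the IGW, one per route table.
resource "aws_route" "a_vmx_default" {
  route_table_id         = aws_route_table.rt_a_vmx.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw_a.id
}

resource "aws_route" "a_local_1_default" {
  route_table_id         = aws_route_table.rt_a_local_1.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.igw_a.id
}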

4. Create Security Groups for vMX and EC2 Test Instances

In AWS, a security group is similar to the concept of distributed stateful firewalls. Every resource (i.e., EC2 and vMX) hosted in the subnet must be associated with a security group. The security group will define the inbound and outbound firewall rules to apply to the resource.

We created two security groups in preparation for the vMX and the EC2 test instances.

In the security group for the EC2 test instance, we need to allow inbound SSH from our workstation to establish a connection, and inbound ICMP from Google Cloud's Compute Engine test instance for the reachability test.

On the security group for vMX, we only need to allow inbound ICMP to the vMX instance.

The Meraki dashboard maintains a list of firewall rules required for vMX (or MX) devices to operate as intended. However, because those rules describe outbound connections, we generally do not need to modify the security groups. By default, security groups allow all outbound traffic, and because they are stateful, return traffic for those outbound connections is allowed back in even if the inbound rules do not explicitly permit it. The only exception is ICMP, which requires an inbound security group rule that explicitly allows ICMP traffic from the indicated sources.
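A hedged Terraform sketch of the two security groups is shown below. The workstation IP is a placeholder, and the inbound ICMP source for the vMX is left open here; tighten it to known sources for production.

# Security group for the EC2 test instance.
resource "aws_security_group" "sg_a_local_subnet_1" {
  name   = "SG-A-Local-Subnet-1"
  vpc_id = aws_vpc.vpc_a.id

  ingress {                            # SSH from the admin workstation
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"]  # placeholder workstation IP
  }

  ingress {                            # ICMP from the Google Cloud test subnet
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["10.10.20.0/24"]
  }

  egress {                             # allow all outbound (stateful return traffic is permitted)
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# Security group for the vMX.
resource "aws_security_group" "sg_a_vmx" {
  name   = "SG-A-VMX"
  vpc_id = aws_vpc.vpc_a.id

  ingress {                            # inbound ICMP to the vMX
    from_port   = -1
    to_port     = -1
    protocol    = "icmp"
    cidr_blocks = ["0.0.0.0/0"]        # restrict to known sources in production
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}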

Deploy vMX and Onboard to Meraki Dashboard

On your Meraki dashboard, ensure that you have sufficient vMX licenses and create a new security appliance network.

Navigate to the Appliance Status page under the Security & SD-WAN section and click Add vMX. This action informs the Meraki cloud that we intend to deploy a vMX and will require an authentication token.

The Meraki dashboard will provide an authentication token, which will be used when provisioning the vMX on AWS. The token informs the Meraki dashboard that the vMX belongs to our Meraki organization. Save this token somewhere safe; we will need it in a later step.

We can now deploy the vMX via the AWS Marketplace. Deploy the vMX using the EC2 deployment process.

As part of this demonstration, this vMX will be deployed in A-VPC (VPC), in the A-VMX (subnet), and will be automatically assigned a public IP address. The instance will also be associated to the SG-A-VMX security group created earlier.

In the user data section, we will paste the authentication token (which was copied earlier) into this field. We can now deploy the vMX.
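For reference, a Terraform sketch of the vMX instance could look like the following. The AMI ID is a placeholder for the Meraki vMX marketplace AMI, the instance size is an assumption (check Meraki's sizing guidance), and the token variable is assumed to hold the value copied from the dashboard. Note that the sketch also disables the source/destination check, which is covered in the next section.

variable "meraki_vmx_token" {
  type      = string
  sensitive = true # authentication token copied from the Meraki dashboard
}

resource "aws_instance" "vmx_a" {
  ami                         = "ami-xxxxxxxxxxxxxxxxx"  # placeholder: Meraki vMX marketplace AMI
  instance_type               = "c5.large"               # assumed size; verify against Meraki guidance
  subnet_id                   = aws_subnet.a_vmx.id
  vpc_security_group_ids      = [aws_security_group.sg_a_vmx.id]
  associate_public_ip_address = true
  source_dest_check           = false                    # required so the vMX can route Auto-VPN traffic
  user_data                   = var.meraki_vmx_token     # token pasted into the user data field

  tags = { Name = "A-VMX-Instance" }
}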

After waiting a few minutes, we should see that the vMX instance is up on AWS and the Meraki dashboard is registering that the vMX is online. Note that the WAN IP address of the vMX corresponds to the public IP address on the A-VMX instance.

Ensure that the vMX is configured in VPN passthrough/concentrator mode.

Disable Source and Destination Check on the vMX Instance

By default, AWS does not allow an EC2 instance to send or receive traffic unless the source or destination IP address is the instance itself. However, because the vMX is performing the Auto-VPN function, it will handle traffic whose source and destination IP addresses are not the instance itself.

Selecting this check box will allow the vMX’s EC2 instance to route traffic even if the source/destination is not itself.

Understand How Traffic Received from Auto-VPN is Routed to Local Subnets

After the vMX is configured in VPN concentrator mode, the Meraki dashboard no longer restricts the vMX to advertising only the subnets its LAN interfaces are directly connected to. When deployed in the public cloud, vMXs do not behave the same as MX hardware appliances.

The following examples show the Meraki Auto-VPN GUI when the MX is configured in routed mode.

For an MX appliance operating in routed mode, the Auto-VPN will detect the LAN-facing subnets and only offer these subnets as options to advertise in Auto-VPN. In most cases, this is because the default gateway of the subnets is hosted on the Meraki MX itself, and the LAN ports are directly connected to the relevant subnets.

However, in the public cloud, vMXs do not have multiple NICs. The vMX has only one private NIC, and it is associated with the A-VMX (subnet) where the vMX is hosted. The default gateway of the subnet is on the AWS router itself rather than on the vMX. VPN concentrator mode is preferable on the vMX because we can advertise subnets across Auto-VPN even if the vMX itself is not directly connected to them.

As shown in the network diagram below, the vMX is not directly connected to the local subnets, and the vMX does not have additional NICs extended into the other subnets. However, we can still make Auto-VPN work using the AWS route table, specifically the route table associated with the A-VMX (subnet).

Assuming Auto-VPN is established and traffic sourced from Google Cloud's Compute Engine instance is attempting to reach AWS's EC2 instance, the traffic has now landed on the AWS vMX. The vMX will send the traffic out of its only LAN interface even though the A-VMX (subnet) is not the destination. The vMX trusts that traffic leaving its LAN interface onto the A-VMX subnet will be delivered to its destination once the A-VMX (subnet) route table is consulted.

The A-VMX’s route table has only two entries. One matches the VPC’s CIDR range, 192.168.0.0/16, with a target of local. The other is the default route, sending traffic for the internet via the IGW. The first entry is relevant for this discussion.

The packet sourcing from Google Cloud via Auto-VPN is likely to be destined for A-Local-1 (subnet), which falls within the IP range 192.168.0.0/16.

(Illustrated solely for the purpose of understanding the concept of the VPC router)

All subnets in AWS created under the same VPC can route to each other natively, without additional route table configuration. Every subnet we create has a default gateway, which is hosted on a virtual router known as the VPC router; this router hosts the default gateways of all subnets in the VPC. This allows a packet sourced from Google Cloud via Auto-VPN and destined for the A-Local-1 (subnet) to be routed natively from the A-VMX (subnet). The 192.168.0.0/16 entry with a target of "local" means that inter-subnet routing is handled by the VPC router, which routes the traffic to the correct subnet, in this case A-Local-1.

Prepare the Google Cloud Environment

1. Create the VPC and Subnets

In Google Cloud, private resources are always hosted in a VPC, and each VPC contains subnets. The concepts of VPCs and subnets are similar to what we discussed for AWS.

The first difference is that in Google Cloud, we do not need to explicitly create an internet gateway to provide internet connectivity. The VPC natively supports internet connectivity; we only need to configure the default route in a later step.

The second difference is that in Google Cloud, we do not need to define a CIDR range for the VPC. The subnets are free to use any CIDR ranges as long as they do not conflict with each other.

As shown below, we created a VPC named "vpc-c." In Google Cloud, we do not need to specify a region when creating a VPC because, in contrast to AWS, a VPC spans all regions. However, because subnets are regional resources, we will need to indicate the region when creating them.

As shown below, we created two subnets in vpc-c (VPC), both with addresses in a similar range (although this is not required). For Auto-VPN, the IP ranges of these subnets must also not overlap with the IP ranges used on the AWS side.

c-vmx (subnet) will host the vMX, and c-local-subnet-1 (subnet) will host the Compute Engine test instance used to perform the ICMP reachability test with AWS's EC2 instance over Auto-VPN.
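As a companion to the AWS sketch, here is a minimal Terraform sketch of the Google Cloud VPC and subnets. The project, region, and the 10.10.10.0/24 CIDR for c-vmx are illustrative assumptions; 10.10.20.0/24 follows this walkthrough.

# Minimal sketch (Terraform Google provider); project, region, and the c-vmx CIDR are placeholders.
provider "google" {
  project = "my-poc-project"   # placeholder project ID
  region  = "us-east1"         # placeholder region
}

resource "google_compute_network" "vpc_c" {
  name                    = "vpc-c"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "c_vmx" {
  name          = "c-vmx"
  network       = google_compute_network.vpc_c.id
  ip_cidr_range = "10.10.10.0/24" # assumed CIDR for the vMX subnet
  region        = "us-east1"
}

resource "google_compute_subnetwork" "c_local_subnet_1" {
  name          = "c-local-subnet-1"
  network       = google_compute_network.vpc_c.id
  ip_cidr_range = "10.10.20.0/24"
  region        = "us-east1"
}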

2. Review the Route Table

The route table for vpc-c (VPC) is currently unpopulated, as shown below.

In Google Cloud, all routes are managed from a single Routes page for the project. It offers capabilities similar to AWS route tables, except that the routing configuration for all subnets is managed on the same page, and each route entry, with its sources and destinations, must also reference the relevant VPC.

3. Configure the Default Route on Route Tables

In Google Cloud, we need to explicitly configure the route table to direct traffic heading to 0.0.0.0/0 to the default internet gateway. Subnets with Compute Engine instances that require an internet connection will need a default route to the internet via the default internet gateway.

In the image below, we configured a default route entry. The vMX instance that we create in a later step will need outbound internet connectivity to reach the Meraki dashboard. This is also required so that the vMX can establish Auto-VPN over the internet with the AWS vMX.

For this POC, the default route will also be useful during the ICMP reachability test. Our local workstation will first need to SSH into the Compute Engine test instance. This will require the Compute Engine test instance to have an internet connection; therefore, the subnet where it resides must have a default route to the internet via the default internet gateway.
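In Terraform, an explicit default route for vpc-c could be sketched as follows:

# Default route to the internet for vpc-c.
resource "google_compute_route" "vpc_c_default" {
  name             = "vpc-c-default-internet"
  network          = google_compute_network.vpc_c.id
  dest_range       = "0.0.0.0/0"
  next_hop_gateway = "default-internet-gateway"
  priority         = 1000
}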

4. Create Firewall Rules for vMX and Compute Engine Test Instances

In Google Cloud, VPC firewalls are used to perform stateful firewall services specific to each VPC. In AWS, security groups are used to achieve similar outcomes.

The following image shows two firewall rules that we created in preparation for the Compute Engine test instance. The first rule allows ICMP traffic sourced from 192.168.20.0/24 (AWS) to Compute Engine instances with the "test-instance" tag. The second rule allows SSH traffic sourced from my workstation's IP to Compute Engine instances with the "test-instance" tag.

We will use network tags in Google Cloud to apply VPC firewall rules to selected resources.
In the following image, we have added an additional rule for the vMX. This is to allow the vMX to perform its uplink connection monitoring using ICMP. Although the Meraki dashboard specifies other outbound IPs and ports to be allowed for other purposes, we do not need to explicitly configure them in the VPC firewall. Traffic outbound will be allowed by default and being a stateful firewall, return traffic will be allowed as well.

As shown below, we added an additional rule for the vMX. This is to allow the vMX to perform its uplink connection monitoring using ICMP. Although the Meraki dashboard specifies other outbound IPs and ports to be allowed for other purposes, we do not need to explicitly configure them in the VPC firewall. Traffic outbound will be allowed by default and being a stateful firewall, return traffic will be allowed as well.
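A Terraform sketch of these VPC firewall rules might look like the following. The workstation IP is a placeholder, and the "vmx" network tag on the vMX rule is an assumption used for illustration.

# ICMP from the AWS test subnet to instances tagged "test-instance".
resource "google_compute_firewall" "allow_icmp_from_aws" {
  name          = "allow-icmp-from-aws"
  network       = google_compute_network.vpc_c.id
  direction     = "INGRESS"
  source_ranges = ["192.168.20.0/24"]
  target_tags   = ["test-instance"]

  allow {
    protocol = "icmp"
  }
}

# SSH from the admin workstation to instances tagged "test-instance".
resource "google_compute_firewall" "allow_ssh_from_workstation" {
  name          = "allow-ssh-from-workstation"
  network       = google_compute_network.vpc_c.id
  direction     = "INGRESS"
  source_ranges = ["203.0.113.10/32"]  # placeholder workstation IP
  target_tags   = ["test-instance"]

  allow {
    protocol = "tcp"
    ports    = ["22"]
  }
}

# ICMP to the vMX for uplink connection monitoring.
resource "google_compute_firewall" "allow_icmp_to_vmx" {
  name          = "allow-icmp-to-vmx"
  network       = google_compute_network.vpc_c.id
  direction     = "INGRESS"
  source_ranges = ["0.0.0.0/0"]        # tighten to known sources in production
  target_tags   = ["vmx"]              # assumed network tag on the vMX instance

  allow {
    protocol = "icmp"
  }
}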

Deploy the vMX and Onboard to Meraki Dashboard

On your Meraki dashboard, follow the same steps as described in the previous section to create a vMX security appliance network and obtain the authentication token.

Over at Google Cloud, we can proceed to deploy the vMX via Google Cloud Marketplace. Deploy the vMX using the Compute Engine deployment process.

As shown below, we entered the authentication token retrieved from the Meraki Dashboard into the “vMX Authentication Token” field. This vMX will also be configured in the vpc-c (VPC), c-vmx (subnet), and will obtain an ephemeral external IP address. We can now deploy the vMX.

After a few minutes, we should see the vMX instance is up on Google Cloud and the Meraki dashboard is registering that the vMX is online. Note that the WAN IP address of the vMX corresponds to the public IP address on the c-vmx instance.

Unlike AWS, there is no need to disable source/destination checks on Google Cloud’s Compute Engine vMX instance.

Ensure that the vMX is configured in VPN passthrough/concentrator mode.

Route Traffic from Auto-VPN vMX to Local Subnets

We previously discussed why vMX needs to be configured in VPN passthrough or concentrator mode, instead of routed mode. The reasoning holds true even if the environment is on Google Cloud instead of AWS.

Like the vMX on AWS, the vMX on Google Cloud has only one private NIC, which is associated with the c-vmx (subnet) where the vMX is hosted. The same concept applies on Google Cloud: the vMX does not need to be directly connected to the local subnets for Auto-VPN to work. The solution relies on Google Cloud's route table to make routing decisions when traffic exits the vMX after the Auto-VPN is terminated.

Assuming the Auto-VPN is established and traffic sourced from AWS's EC2 instance is attempting to reach the Google Cloud Compute Engine test instance, the traffic has now landed on the Google Cloud vMX. The vMX will send the traffic out of its only LAN interface even though the c-vmx (subnet) is not the destination. The vMX trusts that traffic leaving its LAN interface onto the c-vmx subnet will be delivered to its destination once the VPC route table is consulted.

Unlike the AWS route table, there is no entry in the Google Cloud route table to suggest that traffic within the VPC can be routed accordingly. This is an implicit behavior on Google Cloud and does not require a route entry. The VPC routing construct on Google Cloud will handle all inter-subnet communications if they are part of the same VPC.

Configure vMX to Use Auto-VPN and Advertise AWS and Google Cloud Subnets

Now we will head back to the Meraki dashboard and configure the Auto-VPN between the vMX on both AWS and Google Cloud.

At this point, we have already built an environment like the network diagram below.

On the Meraki dashboard, enable Auto-VPN by configuring the vMX as a hub. You can also enable the vMX as a spoke if your design specifies it. If your network will benefit from your sites having full mesh connectivity with your cloud environment, configuring the vMX as a hub is preferred.

Next, we will advertise the subnets that sit behind the vMXs. For the vMX on AWS, we have advertised 192.168.20.0/24, and for the vMX on Google Cloud, we have advertised 10.10.20.0/24. While the vMX does not directly own (or connect to) these subnets, traffic exiting the vMX will be handled by the AWS/Google Cloud routing table.

After a few minutes, the Auto-VPN connectivity between the vMXs will be established. The following image shows the status for the vMX hosted on Google Cloud. You will see a similar status for the vMX hosted on AWS.

The Meraki route table below shows that from the perspective of the vMX on Google Cloud, the next-hop address to 192.168.20.0/24 is via the Auto-VPN toward vMX on AWS.

Modify the AWS and Google Cloud Route Table to Redirect Traffic to Auto-VPN

Now that the Auto-VPN configuration is complete, we will need to inform AWS and Google Cloud that traffic destined to each other will need to be directed to the vMX. This configuration is necessary because the route tables in each public cloud do not know how to route the traffic destined for the other public cloud.

The following image shows that the route table for the A-Local-1 (subnet) on AWS has been modified. For the highlighted route entry, traffic heading toward Google Cloud’s subnet will be routed to the vMX. Specifically, the traffic is routed to the elastic network interface (ENI), which is essentially the vMX’s NIC.

As shown below, we modified the Google Cloud route table. Unlike AWS, where each subnet can have its own route table, in Google Cloud we use attributes such as network tags to identify the traffic of interest. For the highlighted entry, traffic heading toward AWS's subnet and sourced from Compute Engine instances with the "test-instance" tag will be routed toward the vMX.
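A Terraform sketch of these two cross-cloud routes is shown below. The AWS route points at the vMX's ENI via the instance defined earlier, while the Google Cloud vMX instance name and zone are placeholders because that vMX was deployed from the marketplace listing.

# AWS: send traffic for the Google Cloud subnet to the vMX's ENI.
resource "aws_route" "a_local_1_to_gcp" {
  route_table_id         = aws_route_table.rt_a_local_1.id
  destination_cidr_block = "10.10.20.0/24"
  network_interface_id   = aws_instance.vmx_a.primary_network_interface_id
}

# Google Cloud: send tagged traffic for the AWS subnet to the vMX instance.
resource "google_compute_route" "to_aws_via_vmx" {
  name                   = "to-aws-via-vmx"
  network                = google_compute_network.vpc_c.id
  dest_range             = "192.168.20.0/24"
  tags                   = ["test-instance"]  # applies only to instances with this tag
  next_hop_instance      = "c-vmx-instance"   # placeholder vMX instance name
  next_hop_instance_zone = "us-east1-b"       # placeholder zone
  priority               = 900
}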

Deploy Test Instances in AWS and Google Cloud

Next, we will deploy the EC2 and Compute Engine test instances on AWS and Google Cloud. This is not required from the perspective of setting up the Auto-VPN. However, this step is useful to validate whether the Auto-VPN and the various cloud constructs are set up properly.

As shown below, we deployed an EC2 instance in the A-Local-1 (subnet). The assigned security group “SG-A-Local-Subnet-1” has been pre-configured to allow SSH from my workstation’s IP address, and ICMP from Google Cloud’s 10.10.20.0/24 subnet.

We also deployed a basic Compute Engine instance in the c-local-subnet-1 (subnet). We need to add the network tag "test-instance" to ensure the VPC firewall applies the relevant rules. Based on the firewall rules configured earlier, the test instance will allow SSH from my workstation's IP address and ICMP from AWS's 192.168.20.0/24 subnet.
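For completeness, the two test instances could be sketched in Terraform as follows. The AMI, machine type, and zone values are placeholders.

# AWS test instance in A-Local-1.
resource "aws_instance" "a_test" {
  ami                         = "ami-xxxxxxxxxxxxxxxxx"  # placeholder, e.g. an Amazon Linux AMI
  instance_type               = "t3.micro"
  subnet_id                   = aws_subnet.a_local_1.id
  vpc_security_group_ids      = [aws_security_group.sg_a_local_subnet_1.id]
  associate_public_ip_address = true
  tags                        = { Name = "A-Test-Instance" }
}

# Google Cloud test instance in c-local-subnet-1.
resource "google_compute_instance" "c_test" {
  name         = "c-test-instance"
  machine_type = "e2-micro"
  zone         = "us-east1-b"              # placeholder zone
  tags         = ["test-instance"]         # matches the firewall rules and route above

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    subnetwork = google_compute_subnetwork.c_local_subnet_1.id
    access_config {}                       # ephemeral external IP for SSH access
  }
}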

At this stage, we have achieved a network architecture as shown below. vMX and test instances are deployed on both AWS and Google Cloud. The Auto-VPN connection has also been established between the two vMXs.

Verify Auto-VPN Connectivity Between AWS and Google Cloud

We will now conduct a simple ICMP reachability test between the test instance in AWS and Google Cloud. A successful ICMP test will show that all components, including the Meraki vMX, AWS, and Google Cloud have been properly configured to allow end-to-end reachability between the two public clouds over Auto-VPN.

As shown below, the ICMP reachability test from the AWS test instance to the Google Cloud test instance was successful. This confirms that the two cloud environments are correctly connected and can communicate with each other as intended.

I hope that this blog post provided you with guidance for designing and deploying Meraki vMX in a multi-cloud environment.

Simplify Meraki Deployment with Terraform

Before you go, I recommend checking out Meraki's support for Terraform. Because cloud operations often rely heavily on Infrastructure-as-Code (IaC), tools like Terraform play a pivotal role in a multi-cloud environment. By using Terraform with Meraki's native API capabilities, you can integrate the Meraki vMX more deeply into your cloud operations and build deployment and configuration into your Terraform workflows.

Refer to the links below for more information:



Authors

Sing Yuen Tang

Solutions Engineer

Cisco Singapore