If you haven’t already, please read my prior two blogs on VMware Cloud on AWS: VMware SDDC with NSX Expands to AWS and VMware Cloud on AWS with NSX – Connecting SDDCs Across Different AWS Regions; both are also posted on my personal blog at humairahmed.com. The prior blogs provide a good introduction to some of the functionality and advantages of the service. In this blog post I expand the discussion to the advantages of VMware Cloud on AWS being able to communicate with native AWS resources. This is desirable if you have native AWS EC2 instances you want VMware Cloud on AWS workloads to communicate with, or if you want to leverage other native AWS services like an S3 VPC Endpoint or RDS.
From my prior blogs you know that with VMware Cloud on AWS customers get the best of both worlds for their move to a Software Defined Data Center (SDDC) – the leading compute, storage, and network virtualization stack for enterprises deployed on dedicated, elastic, bare-metal, and highly available AWS infrastructure. And yes, as discussed in my last post, customers can easily have a global footprint by deploying multiple SDDCs in different regions. But what if customers need access to native resources on AWS? VMware Cloud on AWS provides a benefit here as well.
VMware Cloud on AWS is born from a strategic partnership between VMware and Amazon; as such, both have worked together to develop functionality that allows for native access to AWS resources. When customers log in and click the Create SDDC button as shown below, the first step is linking to an AWS account. This linking process is important to understand because it enables the permissions and access needed for internal communication between VMware Cloud on AWS and native AWS resources.
AWS has a service called CloudFormation which simplifies provisioning and management on AWS. It allows you to create templates describing the service/application architectures you want; AWS CloudFormation then uses these templates for quick and reliable provisioning of the services or applications, which are deployed as units called stacks.
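To make the template/stack model concrete, below is a minimal illustrative CloudFormation template in YAML. This is not the VMware-provided template; it simply shows the anatomy of a template (format version, description, resources), here declaring a single placeholder S3 bucket, whereas the VMware template sets up the roles and policies needed for account linking.

```yaml
# Illustrative only -- NOT the actual VMware-provided template.
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack showing CloudFormation template structure
Resources:
  ExampleBucket:          # placeholder resource name
    Type: AWS::S3::Bucket # one declared resource; a stack is created from this
```

Deploying this template through the CloudFormation console or CLI would create a stack containing that one bucket; deleting the stack deletes the resources it created.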
VMware Cloud on AWS leverages a CloudFormation template for this account linking. To connect/link VMware Cloud on AWS to an existing or new AWS account you own, simply log into your AWS account and click the OPEN AWS CONSOLE WITH CLOUDFORMATION TEMPLATE button within VMware Cloud on AWS as shown below.
In AWS, you will need to acknowledge the changes to your AWS account and grant the requested permissions as shown below.
What’s happening here is that you’re giving VMware Cloud on AWS permission to discover resources such as VPCs and their respective subnets. Appropriate policies and roles are also applied so the VMware Cloud on AWS account instance can connect into your VPC. This is a one-time operation executed as part of the provisioning process.
Another important thing that occurs here during the linking process is that AWS Elastic Network Interfaces (ENIs) are created within the AWS customer VPC and used by VMware Cloud on AWS. A screenshot of these ENIs created in the AWS customer VPC is shown below.
An ENI is used to communicate with the NSX logical network subnets in VMware Cloud on AWS. The ENI is also listed in the respective VPC subnet’s route table to direct traffic directly to VMware Cloud on AWS; no traffic is sent over the AWS Internet Gateway (IGW), providing more efficient, high-bandwidth access between the AWS customer VPC and VMware Cloud on AWS. Customers also realize savings compared to VPC Peering or utilizing the IGW, where transit charges can be incurred. This is an incredibly cool and useful capability that highlights the joint collaboration of VMware and Amazon. It’s important to ensure you never delete these ENIs.
In the below screenshot of the respective subnet route table in AWS where my native EC2 workloads reside, you can see the ENI (eni-e753b5d) used to reach my NSX logical network subnets (10.16.4.X/28) in VMware Cloud on AWS. A default route to the AWS IGW also exists. An AWS IGW and respective route exist by default in the default VPC and route table; in a non-default VPC they are not present by default, but can be added if desired. In this example the IGW will not be utilized.
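AWS evaluates a route table by longest-prefix match, which is why traffic to the NSX subnets goes out the ENI while everything else falls through to the default route. The sketch below models the route table described above in Python; the ENI ID comes from the post, while the VPC CIDR and IGW ID are illustrative placeholders.

```python
import ipaddress

# Hypothetical model of the subnet route table described above.
# eni-e753b5d is from the post; the VPC CIDR and IGW ID are placeholders.
ROUTES = {
    "172.31.0.0/16": "local",        # AWS customer VPC CIDR (assumed default VPC)
    "10.16.4.0/28": "eni-e753b5d",   # NSX logical network subnet via the ENI
    "0.0.0.0/0": "igw-EXAMPLE",      # default route to the Internet Gateway
}

def route_lookup(dst_ip: str) -> str:
    """Return the route target for dst_ip using longest-prefix match,
    the same rule AWS applies when evaluating a VPC route table."""
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for cidr, target in ROUTES.items():
        net = ipaddress.ip_network(cidr)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, target)  # more specific prefix wins
    return best[1]

print(route_lookup("10.16.4.5"))     # an NSX subnet address -> the ENI
print(route_lookup("172.31.25.154")) # an EC2 instance -> stays local
print(route_lookup("198.51.100.7"))  # anything else -> the default route
```

Because the /28 route to the ENI is more specific than both the VPC-local /16 and the 0.0.0.0/0 default, traffic destined for VMware Cloud on AWS never touches the IGW.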
Below is the diagram of my setup. You can see VMware Cloud on AWS has direct connectivity to AWS resources through the AWS network via ENI.
A few important things to note about the above diagram and AWS/VMware Cloud on AWS environment:
- the security policies in VMware Cloud on AWS are enforced on the CGW Edge firewall and must allow traffic for the respective native AWS workloads in the AWS customer VPC you want to communicate with
- the security group and ACL security policies in the AWS customer VPC must allow traffic for respective workloads you want to be able to communicate with in VMware Cloud on AWS
- every AWS VPC has a default security group and network ACL
- the network ACL default policy in a VPC allows all traffic inbound and outbound
- the default security group policy in a VPC allows all inbound traffic from other members of the default security group; all outbound traffic on the default security group is allowed
- in the AWS customer VPC, if there is no IGW/route to the IGW or public IP, there can be no communication via the Internet; an IGW is not needed for AWS VPC connectivity using the ENI
- the ENIs used for AWS VPC connectivity between the AWS customer VPC and VMware Cloud on AWS are members of the default security group; it’s important not to change the default rules in a way that blocks AWS VPC connectivity over these ENIs
Below is the diagram from my VMware Cloud on AWS environment. You can see at the bottom right of the diagram that connectivity is established to my AWS VPC where my native EC2 workloads reside. The Amazon VPC icon in this diagram represents the left part of the lab diagram above labeled Native AWS.
Below, you can see I have deployed two EC2 instances in AWS; this is also reflected in the lab diagram above. Both instances have only private IP addresses. The EC2 instance on top has an IP address of 172.31.25.164; the EC2 instance below it has an IP address of 172.31.25.154.
You can see from the above that both EC2 instances have been placed in a custom security group titled Web Servers.
I will ping the EC2 instances in my Web Servers security group from my App VM on my App NSX logical switch in VMware Cloud on AWS. I also have a web server running on my EC2 instance with IP address 172.31.25.154. Accordingly, I allow HTTP, SSH, and ICMP traffic to my EC2 instances in the Web Servers security group as shown below. Note, I only edit the inbound rules here and leave the outbound rules as default.
In AWS, network ACLs are stateless while security groups are stateful. When you define an ACL rule allowing specific traffic in, you also have to define an ACL rule allowing the return traffic out. Security groups behave differently because they are stateful: when you define a rule allowing specific traffic in, the respective return traffic is allowed out by default.
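The stateless/stateful distinction can be sketched with a toy model. The rule and field names below are illustrative, not the AWS API; the point is only that a stateless ACL checks each direction against its own rule list, while a stateful security group tracks connections and allows return traffic automatically.

```python
# Toy contrast of a stateless network ACL vs. a stateful security group.
# Names and structures are illustrative, not AWS API objects.

def nacl_allows(rules, direction, port):
    """Stateless: each direction must have its own explicit rule."""
    return (direction, port) in rules

def sg_allows(inbound_ports, conn_tracker, direction, port):
    """Stateful: inbound rules are checked on the way in; return
    traffic of a tracked connection is allowed out automatically."""
    if direction == "in":
        allowed = port in inbound_ports
        if allowed:
            conn_tracker.add(port)  # remember the connection
        return allowed
    return port in conn_tracker      # outbound: only tracked return traffic

nacl_rules = {("in", 80)}            # inbound HTTP only; no outbound rule defined
tracker = set()
print(nacl_allows(nacl_rules, "in", 80))    # True
print(nacl_allows(nacl_rules, "out", 80))   # False: stateless, needs its own rule
print(sg_allows({80}, tracker, "in", 80))   # True
print(sg_allows({80}, tracker, "out", 80))  # True: stateful, return traffic allowed
```

This is why, in the examples in this post, only inbound security group rules need editing while the network ACL can stay at its permissive defaults.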
Since I will also show an EC2 instance pinging the App VM in VMware Cloud on AWS, I also have to edit the inbound policy of the security group my ENIs are in to allow traffic from my Web Servers security group. The reason, as mentioned prior and shown in the respective AWS customer VPC subnet route table, is that the created ENIs are used for communicating with the networks in VMware Cloud on AWS. Similar to the prior example, I leave the outbound rules as default.
I also ensure the respective traffic is allowed through the VMware Cloud on AWS CGW firewall as shown below. The first four rules allow HTTP, ICMP, and SSH traffic; the policies are consistent with those I created in AWS to allow successful communication between native AWS resources and VMware Cloud on AWS workloads. There is a default Deny All as the last firewall rule (not shown below).
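The CGW firewall behavior described above (explicit allows evaluated in order, with an implicit deny at the end) follows the classic first-match firewall model, sketched below. The rule names and fields are illustrative, not NSX objects.

```python
# Toy first-match firewall evaluation with an implicit default deny,
# mirroring the CGW rule table described above (names are illustrative).
RULES = [
    {"name": "allow-http", "proto": "tcp",  "port": 80,   "action": "allow"},
    {"name": "allow-ssh",  "proto": "tcp",  "port": 22,   "action": "allow"},
    {"name": "allow-icmp", "proto": "icmp", "port": None, "action": "allow"},
]

def evaluate(proto, port=None):
    """Walk the rules top-down; first matching rule decides the action."""
    for rule in RULES:
        if rule["proto"] == proto and rule["port"] in (None, port):
            return rule["action"]
    return "deny"  # implicit Default Deny All as the last rule

print(evaluate("tcp", 80))  # allow: matches the HTTP rule
print(evaluate("icmp"))     # allow: matches the ICMP rule
print(evaluate("udp", 53))  # deny: no rule matches, default deny applies
```

Anything not explicitly allowed (for instance, UDP above) falls through to the default deny, which is why each protocol used in this lab needs its own allow rule.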
You can see from the below screenshots that from my App VM (10.61.4.17) on my App NSX logical network in VMware Cloud on AWS, I can ping both EC2 instances (172.31.25.154 and 172.31.25.164) in the AWS VPC. I can also ping from the EC2 instances to my App VM in VMware Cloud on AWS.
From VM in VMware Cloud on AWS to EC2 Instances in AWS VPC:
From EC2 Instance in AWS VPC to VM in VMware Cloud on AWS:
Below, a traceroute from the App VM (10.61.4.17) in VMware Cloud on AWS to the Web EC2 instance in AWS (172.31.25.154) shows the path from the App VM to the NSX DLR (10.61.4.30), to the CGW Edge (169.254.3.1), and directly out the CGW host to the destination EC2 instance (172.31.25.154).
As expected, and based on my security policies, I can also access the web server on the EC2 host from my App VM via HTTP as shown below.
Similar to communicating with AWS EC2 instances, using the same communication channel via direct high bandwidth connectivity, VMware Cloud on AWS can natively access and utilize additional services like AWS S3 VPC Endpoint and RDS. The ability to directly access these native services from VMware Cloud on AWS opens the door for many additional use cases and benefits; these additional examples and use cases will be covered in a follow-up post.
For more information on VMware Cloud on AWS and how to get started, check out the below links.