Written by Leon Liefting, Full Stack Developer at DPG Media
My new employer suggests getting the Developer Associate certification for AWS. It’s something I wanted to do earlier, but my previous employer chose to go with Azure. At my new job at DPG Media IT, most of the services I work with run in AWS, so it seems like the right moment to dive deeper into AWS. Especially into building networks.
In this blog post, I’ll describe my previous AWS experience and how I watched a good video from the AWS official YouTube channel and put the concepts it explains into practice (from a Linux terminal).
I’ll also explain how to clean up your resources so you won’t get a huge bill.
My earlier experiences with AWS
I have some experience with AWS from another former employer. At the time my team worked on a bunch of microservices made with Java and Spring. Because that was familiar territory, I explored Elastic Beanstalk to get one of our applications running, using AWS CodeStar. CodeStar gives you a wizard that creates web applications running in Elastic Beanstalk with a few button clicks.
This also sets up continuous integration and continuous delivery in a few minutes. These things usually took us weeks to set up.
While AWS CodeStar makes it really easy to get into AWS, it generates a bunch of infrastructure components you usually have no clue about. Especially when you try to change things, you often run into permission errors on components you weren’t even aware existed. I decided to start from scratch and build everything with the AWS CLI (command line interface) to have full control.
Simplifying applications with AWS Lambda
The application I migrated to Elastic Beanstalk had three REST calls. Once the bills started coming in, I figured out that running Beanstalk instances is expensive. You could configure Beanstalk to only boot up an instance when requests come in, but starting up instances with JVMs is slow and uses a lot of memory. Some colleagues in the company were using AWS Lambdas with Python, so I decided to take their approach (nowadays I would use Node.js with TypeScript).
Instead of the verbose Java application with its hundreds of dependencies from the Spring framework, we managed to write three Lambda functions of at most 30 lines of Python code each. Even though it was quite some work to set up API Gateway as a newcomer to AWS, the routing logic was now clearly defined in API Gateway. Before, it was spread out over annotations in controller files and tied together via reflection at runtime.
The costs of the Lambda functions turned out to be much lower than our Beanstalk setup was.
Back to the AWS Networking Fundamentals
At my current job with DPG Media IT we use Beanstalk, Lambda, Elastic Container Service, and a lot more. Things are separated into virtual private clouds (VPCs), and I realized I really had to improve my knowledge of the networking fundamentals.
I watched the following AWS Networking fundamentals video by Perry Wald & Tom Adamski:
If you’re going to read on, watch this video first. I’d say watch the entire thing just once, then go back to the specific segments as much as you need to.
When I got to 11:18 of the video and saw the picture below, I knew this wasn’t something I could simply read and then expect to be able to explain to somebody else.
If I want to understand this thoroughly, I really need to build my own little network and play with it. I decided to build the picture above, but with two instances that run hello-world Node.js servers. Each instance needs to be in its own private subnet in a different availability zone. I want to be able to reach them via SSH, and then place a load balancer in front of the instances that divides the traffic 50/50.
The final picture will look like this:
How to build the infrastructure?
There are several ways to get things done in AWS.
- The “AWS Management Console”, which is basically just going to aws.amazon.com in your browser. As I’ve mentioned earlier, this is pretty limited, as what you see in the web UI doesn’t always reflect what you have in your infrastructure perfectly. This tutorial doesn’t focus on the Management Console, but you should be familiar with it and use it to quickly verify the resources you created.
- The AWS CLI, which is a command-line interface tool you can use to view/edit/delete about everything in AWS. You probably shouldn’t use the CLI to create your production environments (use infrastructure as code solutions instead). But it is very useful to master the CLI. I typically use the CLI when I am either troubleshooting or experimenting. This tutorial uses the AWS CLI.
- CloudFormation is an “Infrastructure as Code” service where you can define your resources in YAML or JSON format. Whenever you update these resource files, CloudFormation will update your AWS infrastructure accordingly. Before looking into CloudFormation I recommend learning how to use the AWS Management Console and AWS CLI first. This tutorial does not use CloudFormation (but maybe a next one will).
- The AWS CDK stands for AWS Cloud Development Kit and lets you define your AWS infrastructure in your favorite programming language. This lets you manage your infrastructure more dynamically than static CloudFormation files. I would only recommend looking into this if your CloudFormation files grow too big or if you really hate JSON/YAML. This tutorial does not use the AWS CDK.
- Terraform is an “Infrastructure as Code” tool similar to CloudFormation, but made by a third party, and it can be used for other clouds too. Use Terraform instead of CloudFormation if you want to prevent vendor lock-in with Amazon. This tutorial does not use Terraform.
1 — Setting up an AWS account
There are some things you need to do if you have not already:
- Set up two-factor authentication for the root user -> My Security Credentials
- You shouldn’t use your Root account for developing. Follow these steps to set up an IAM user with admin rights.
- When creating a user you get a CSV file with the credentials you have to use when logging in. You cannot simply log in with a username and password; you also need the account ID. I recommend putting the entire URL in your password manager:
https://<accountid>.signin.aws.amazon.com/console
- Also set up two-factor authentication for the new user you created.
- Download and install the AWS CLI and configure it to use with your IAM user by typing
aws configure
- When prompted for a region, pick one and remember to use that region for everything in AWS from now on. I use eu-central-1.
- When prompted for an output format, I picked JSON.
- Once ready, you can verify your setup with a test command like
aws ec2 describe-vpcs
Warning: AWS will bill you for the resources you are about to create. If you’re going to follow this tutorial, make sure you delete whatever resources you create when you’re done (I included steps to remove everything we will create at the bottom).
2 — VPC with node server in public subnet
Before we start creating a network with two instances behind a load balancer, let’s first start with a VPC with one instance (that runs Node.js) available to the internet that we can configure via ssh:
The steps of this part are based on this official AWS Guide.
First, we have to create a VPC. I am using the 172.31.0.0/16 block as explained in the video.
aws ec2 create-vpc --cidr-block 172.31.0.0/16
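As an aside (this is not part of the tutorial itself), you can use Python’s ipaddress module to see what a /16 block like this actually covers:

```python
import ipaddress

# The VPC's CIDR block: a /16 leaves 16 bits for addressing within the VPC.
vpc = ipaddress.ip_network("172.31.0.0/16")

print(vpc.num_addresses)      # 65536 addresses in total
print(vpc.network_address)    # 172.31.0.0
print(vpc.broadcast_address)  # 172.31.255.255
```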
Note: I don’t know these aws commands off the top of my head; I look them up all the time in the CLI documentation.
Take note of the VPC ID, you’ll need it a lot later on. It’s smart to write down all the commands you run in a notepad application as you go through this tutorial. I recommend giving the VPC (and all other resources you’ll create) a name with the create-tags command:
aws ec2 create-tags --resources <basic_vpc_id> --tags Key=Name,Value=basic-vpc
We can see all VPCs in our AWS region by:
- Running
aws ec2 describe-vpcs
- Or in the AWS Management Console go to Services -> VPC -> Your VPCs
You’ll see that there are two VPCs. One is the default VPC, as explained in the video; you can ignore it.
3 — Creating the public subnet
When we create a public subnet in a VPC, we need to point it to an availability zone. You can see which availability zones there are in your region with:
aws ec2 describe-availability-zones
Use the following command to create two public subnets (at first we’ll only use the first one). Use the ID of the VPC you just created and two valid availability zone IDs in your AWS region:
aws ec2 create-subnet --cidr-block 172.31.0.0/24 --vpc-id <basic_vpc_id> --availability-zone-id euc1-az2
aws ec2 create-subnet --cidr-block 172.31.1.0/24 --vpc-id <basic_vpc_id> --availability-zone-id euc1-az3
You see that we use the same CIDR blocks as in the video. I’m going to call them public-subnet-a and public-subnet-b:
aws ec2 create-tags --resources <public_subnet_a_id> --tags Key=Name,Value=public-subnet-a
aws ec2 create-tags --resources <public_subnet_b_id> --tags Key=Name,Value=public-subnet-b
Finally, make sure that everything launched in these subnets gets a public IP:
aws ec2 modify-subnet-attribute --subnet-id <public_subnet_a_id> --map-public-ip-on-launch
aws ec2 modify-subnet-attribute --subnet-id <public_subnet_b_id> --map-public-ip-on-launch
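As a quick sanity check (purely illustrative, not something you need to run for the tutorial), Python’s ipaddress module can confirm that the two /24 blocks fit inside the VPC’s /16 block without overlapping:

```python
import ipaddress

vpc = ipaddress.ip_network("172.31.0.0/16")
subnet_a = ipaddress.ip_network("172.31.0.0/24")
subnet_b = ipaddress.ip_network("172.31.1.0/24")

# Both /24 blocks must fall inside the VPC's /16 block...
assert subnet_a.subnet_of(vpc) and subnet_b.subnet_of(vpc)
# ...and must not overlap with each other.
assert not subnet_a.overlaps(subnet_b)

print(subnet_a.num_addresses)  # 256 addresses per subnet
```

Note that of those 256 addresses, AWS reserves a handful per subnet (the first four and the last one) for its own use.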
4 — Creating the internet gateway
We are going to launch an ec2 instance in one of these public subnets. We will install a Node.js web server on the instance via SSH. In order to be able to download and install Node.js, the ec2 instance needs to be able to reach the internet. For this we need to create an internet gateway:
aws ec2 create-internet-gateway
Again, write down the internet gateway ID. You can easily find it with aws ec2 describe-internet-gateways. I’m going to call mine “internet-gateway” with a name tag:
aws ec2 create-tags --resources <internet_gateway_id> --tags Key=Name,Value=internet-gateway
Let’s attach the internet gateway to our VPC:
aws ec2 attach-internet-gateway --vpc-id <basic_vpc_id> --internet-gateway-id <internet_gateway_id>
5 — Creating the routing table for the subnets
In order for our instances to be able to use this internet gateway, we need to create a routing table for our public subnets and name it “public-route-table”:
aws ec2 create-route-table --vpc-id <basic_vpc_id>
Be sure to write down the route-table id. Let’s give the route-table a name and create a route in this table that points all traffic to 0.0.0.0/0 to our new internet gateway:
aws ec2 create-tags --resources <public_route_table_id> --tags Key=Name,Value=public-route-table
aws ec2 create-route --route-table-id <public_route_table_id> --destination-cidr-block 0.0.0.0/0 --gateway-id <internet_gateway_id>
Finally, we associate the routing table with our subnets:
aws ec2 associate-route-table --subnet-id <public_subnet_a_id> --route-table-id <public_route_table_id>
aws ec2 associate-route-table --subnet-id <public_subnet_b_id> --route-table-id <public_route_table_id>
6 — Creating the key pair needed for SSH access
Before we can create an instance that can be accessed via SSH, we need to create a key pair:
aws ec2 create-key-pair --key-name key-pair-for-instance1 --query 'KeyMaterial' --output text > key-pair-for-instance1.pem
In order to be able to use this key file, you need to restrict its permissions so that only you can read it:
chmod 400 key-pair-for-instance1.pem
7 — Creating a security group for our VPC that allows SSH and web traffic
By default the internet can’t simply reach your instance in the VPC, even if it has a public IP. We need to define a security group that allows our instances to be reached over port 22 (for SSH) and port 80 (for HTTP). In my case I prefer only allowing SSH access from my home IP address instead of allowing it from 0.0.0.0/0.
Create the security group (write down the security group id):
aws ec2 create-security-group --group-name WebAndSSHAccess --description "Security group for SSH and web access" --vpc-id <basic_vpc_id>
Allow SSH access (only from your IP address):
aws ec2 authorize-security-group-ingress --group-id <web_and_ssh_access_security_group_id> --protocol tcp --port 22 --cidr xx.xx.xx.xx/32
Allow HTTP access (from any IP address):
aws ec2 authorize-security-group-ingress --group-id <web_and_ssh_access_security_group_id> --protocol tcp --port 80 --cidr 0.0.0.0/0
8 — Creating the ec2 instance
We are finally reaching the point where we actually see something out of the many lines of configuration we’ve done so far. You need to pick a Linux AMI (Amazon Machine Image) to use. I typically use the latest Amazon Linux 2 image, because I’m pretty sure the people who work at Amazon are better at configuring memory etc. than I am. All we need here is the AMI ID. This documentation describes how to find one for your region.
Here is the command to run an instance (this is just one instance in the first subnet “public-subnet-a”):
aws ec2 run-instances --image-id <ami_number> --count 1 --instance-type t2.micro --key-name key-pair-for-instance1 --security-group-ids <web_and_ssh_access_security_group_id> --subnet-id <public_subnet_a_id>
It will take some time to boot up, so use the following command to refresh and find the public IP address assigned to the new instance:
aws ec2 describe-instances
Or spam the refresh button in the AWS Management Console until it shows a public IP address. Write down the public IP and instance ID.
9 — SSHing into the instance
Simply run the ssh command to connect with the public IP address of the instance:
ssh -i key-pair-for-instance1.pem ec2-user@xx.xxx.xx.xx
The first thing you should check is whether you can reach the internet, for example with wget google.com
If you have internet access, it will look like this.
10 — Turning it into a web server
We are going to turn this ec2 instance into a web server by running Node.js on it. I usually install Node.js with “nvm”, an easy tool to manage node.js versions.
- Follow these steps to install nvm. After running the curl command, exit ssh by typing exit and reconnect to make nvm available.
- Run nvm install node (on the instance)
- Create an index.js file with touch index.js
- Edit your index.js file with vi or nano and copy-paste the following script into it:
const port = 8080;
const http = require('http');

http.createServer((req, res) => {
  console.log('url:', req.url);
  res.end('hello world');
}).listen(port, () => {
  console.log(`server is running on ${port}`);
});
- Start Node.js by running
node index.js &
- You should see “server is running on 8080” and be able to get hello world when running
wget localhost:8080
11 — Using port 80
You cannot reach your hello world server via http://<publicip>:8080 yet, because we only whitelisted port 80. If you try to change const port = 8080; to const port = 80;, you’ll get an error, because you can’t bind to port 80 without running Node.js as root.
Running Node.js as root is a bad idea, and this post helped me find a solution for not having to do that. Run the following command:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
This redirects all incoming traffic on port 80 to port 8080, where your Node.js server is running. Test it out with your browser (don’t forget that you have to use plain HTTP and not HTTPS).
Wow it works!
12 — Updating our setup to a VPC with instances in private subnets
This setup with instances in a public subnet is nice to show that we are able to create a working web server, but typically you don’t want a public IP address for every instance. The world is running out of IPv4 addresses, which is why Amazon charges quite some money for using them. Typically a network uses a NAT gateway with one public IP and many computers behind it. Here is the picture again of a VPC with instances in private subnets that can be reached via a load balancer:
We have to clean up first before adding things again. Let’s kill the instance we’ve just created:
aws ec2 terminate-instances --instance-ids <instance_id>
We can also delete the key pair we created to SSH with it:
aws ec2 delete-key-pair --key-name key-pair-for-instance1
Delete the security group we created; we have to redo all the rules anyway:
aws ec2 delete-security-group --group-id <web_and_ssh_access_security_group_id>
Instead of mapping public IP addresses to everything in the public subnet, let’s turn this off:
aws ec2 modify-subnet-attribute --subnet-id <public_subnet_a_id> --no-map-public-ip-on-launch
aws ec2 modify-subnet-attribute --subnet-id <public_subnet_b_id> --no-map-public-ip-on-launch
13 — Creating the private subnets
For our two public subnets we used the CIDR blocks 172.31.0.0/24 and 172.31.1.0/24, so for our two private subnets we can use 172.31.2.0/24 and 172.31.3.0/24. Be sure to put the private subnets in the same availability zones as your public subnets.
Here are the commands:
aws ec2 create-subnet --cidr-block 172.31.2.0/24 --vpc-id <basic_vpc_id> --availability-zone-id euc1-az2
aws ec2 create-subnet --cidr-block 172.31.3.0/24 --vpc-id <basic_vpc_id> --availability-zone-id euc1-az3
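To convince yourself that these CIDR blocks line up, here is an illustrative Python snippet (again, not part of the tutorial) that carves the VPC’s /16 into /24 blocks; the first four are exactly our two public and two private subnets:

```python
import ipaddress

vpc = ipaddress.ip_network("172.31.0.0/16")

# Carve the /16 into consecutive /24 blocks and take the first four.
first_four = list(vpc.subnets(new_prefix=24))[:4]
for net in first_four:
    print(net)
# 172.31.0.0/24, 172.31.1.0/24, 172.31.2.0/24, 172.31.3.0/24
```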
Similar to the public subnets, I’ll give the private subnets a name each:
aws ec2 create-tags --resources <private_subnet_a_id> --tags Key=Name,Value=private-subnet-a
aws ec2 create-tags --resources <private_subnet_b_id> --tags Key=Name,Value=private-subnet-b
14 — Creating NAT gateways in the public subnet
If we create EC2 instances in a private subnet, they are not able to connect to the internet as they don’t have a public IP address (and we don’t want that for security reasons). Instead we’ll create a NAT gateway in each public subnet which the EC2 instances from the private subnet will use to communicate to the internet. They will be able to reach the internet and the internet is only able to reply to requests. The internet won’t be able to reach the EC2 instances directly.
Before we create a NAT gateway we need to create two elastic IP addresses:
aws ec2 allocate-address
aws ec2 allocate-address
Each of these commands responds with some JSON containing a public IP address and an AllocationId. Write down the allocation IDs.
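To give an idea of what to look for, here is a stripped-down example of the response shape, parsed with Python; the real response has a few more fields, and the IP and ID below are fabricated examples:

```python
import json

# A trimmed-down, made-up example of what `aws ec2 allocate-address` returns.
response = json.loads("""
{
  "PublicIp": "3.121.45.67",
  "AllocationId": "eipalloc-0123456789abcdef0",
  "Domain": "vpc"
}
""")

# The AllocationId is what the create-nat-gateway command below needs.
print(response["AllocationId"])
```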
Now we can create the nat gateways and assign our newly created elastic IP address to it via the allocation Id:
aws ec2 create-nat-gateway --subnet-id <public_subnet_a_id> --allocation-id <first_allocation_id>
aws ec2 create-nat-gateway --subnet-id <public_subnet_b_id> --allocation-id <second_allocation_id>
Again write down the nat gateway ID’s. I would also recommend naming them:
aws ec2 create-tags --resources <nat_gateway_a_id> --tags Key=Name,Value=nat-gateway-a
aws ec2 create-tags --resources <nat_gateway_b_id> --tags Key=Name,Value=nat-gateway-b
Don’t forget to clean these up when you no longer use them. It cost me 41 dollars to keep a NAT gateway with an elastic IP address online for a month.
15 — Creating route tables for the private subnets
While the public subnets can share one identical route table, each private subnet needs its own route table, pointing to the NAT gateway in the same availability zone.
So let’s create two more route tables:
aws ec2 create-route-table --vpc-id <basic_vpc_id>
aws ec2 create-route-table --vpc-id <basic_vpc_id>
Write down the RouteTableIds and label them:
aws ec2 create-tags --resources <private_route_table_a_id> --tags Key=Name,Value=private-route-table-a
aws ec2 create-tags --resources <private_route_table_b_id> --tags Key=Name,Value=private-route-table-b
Route all traffic from private-route-table-a to nat-gateway-a, and the traffic from private-route-table-b to nat-gateway-b:
aws ec2 create-route --route-table-id <private_route_table_a_id> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat_gateway_a_id>
aws ec2 create-route --route-table-id <private_route_table_b_id> --destination-cidr-block 0.0.0.0/0 --nat-gateway-id <nat_gateway_b_id>
Finally, associate the route tables with the correct private subnet:
aws ec2 associate-route-table --subnet-id <private_subnet_a_id> --route-table-id <private_route_table_a_id>
aws ec2 associate-route-table --subnet-id <private_subnet_b_id> --route-table-id <private_route_table_b_id>
16 — Creating security groups
We need to create two security groups. One for the web servers (ec2 instances) and one for the load balancer that we haven’t created yet.
Let’s first create these two groups:
aws ec2 create-security-group --group-name web-servers --description "Security group for web servers" --vpc-id <basic_vpc_id>
aws ec2 create-security-group --group-name load-balancer-group --description "Security group for load balancer" --vpc-id <basic_vpc_id>
As always, write down the security group IDs. For the web server group, we want the web servers to be reachable only by the load balancer, so let’s add an ingress rule for that:
aws ec2 authorize-security-group-ingress --group-id <web_server_security_group_id> --protocol tcp --port 80 --source-group <load_balancer_security_group_id>
For the load balancer security group, we want any IP on the internet to be able to reach the load balancer:
aws ec2 authorize-security-group-ingress --group-id <load_balancer_security_group_id> --protocol tcp --port 80 --cidr 0.0.0.0/0
The load balancer itself only needs to be able to reach the web servers, nothing else:
aws ec2 authorize-security-group-egress --group-id <load_balancer_security_group_id> --protocol tcp --port 80 --source-group <web_server_security_group_id>
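To make the intent of these rules concrete, here is a toy Python model of the ingress rules (purely illustrative; real security group evaluation involves CIDR matching, protocols, and stateful connection tracking):

```python
# A toy model of the two security groups: traffic is allowed only if an
# ingress rule on the target group matches the source and port.
INGRESS_RULES = {
    "web-servers":         [("load-balancer-group", 80)],  # only the LB may reach the web servers
    "load-balancer-group": [("0.0.0.0/0", 80)],            # anyone may reach the LB
}

def is_allowed(target_group: str, source: str, port: int) -> bool:
    """Return True if an ingress rule on target_group matches source and port."""
    for rule_source, rule_port in INGRESS_RULES.get(target_group, []):
        if rule_port == port and rule_source in (source, "0.0.0.0/0"):
            return True
    return False

print(is_allowed("load-balancer-group", "some-browser", 80))  # True: open to the world
print(is_allowed("web-servers", "load-balancer-group", 80))   # True: LB may pass through
print(is_allowed("web-servers", "some-browser", 80))          # False: internet is blocked
```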
17 — How to SSH into an EC2 instance that is located in a private subnet?
We just made security groups, but you might have noticed we didn’t allow any traffic to our instances via port 22 (SSH). We previously SSH’ed to the public IP addresses of the EC2 instances. Now that we’ve put the EC2 instances in private subnets behind NAT gateways, how do we turn our EC2 instances into web servers without SSH access?
There are some guides on the internet that tell you to put bastion hosts in the public subnets and use those to SSH into the private subnets, but that kind of ruins the point of putting the instances in private subnets.
Luckily, AWS built Systems Manager to help us get into our EC2 instances. Before we can use it, we first need to create a role with a policy.
Create a file called ec2-role.json with touch ec2-role.json and put the following in it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
Run the following commands to create an instance profile for the Systems Manager:
aws iam create-role --role-name ec2-role --assume-role-policy-document file://ec2-role.json
aws iam attach-role-policy --role-name ec2-role --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
aws iam create-instance-profile --instance-profile-name ssm-instance-profile-for-ec2-instances
aws iam add-role-to-instance-profile --instance-profile-name ssm-instance-profile-for-ec2-instances --role-name ec2-role
18 — Creating the load balancer
Use the following command to create a load balancer attached to our two public subnets:
aws elbv2 create-load-balancer --name application-load-balancer --subnets <public_subnet_a_id> <public_subnet_b_id> --security-groups <load_balancer_security_group_id>
Copy the generated LoadBalancerArn and the DNSName, as you’ll need it later to create a listener.
A load balancer needs a target group, so let’s create one:
aws elbv2 create-target-group --name node-ec2-instances --protocol HTTP --port 80 --vpc-id <basic_vpc_id>
Copy the generated TargetGroupArn, as you’ll need it later to add instances to this target group.
19 — Creating the instances again
Let’s create two instances, one in private-subnet-a and one in private-subnet-b:
aws ec2 run-instances --image-id <ami_number> --count 1 --instance-type t2.micro --security-group-ids <web_server_security_group_id> --subnet-id <private_subnet_a_id> --iam-instance-profile Name=ssm-instance-profile-for-ec2-instances
aws ec2 run-instances --image-id <ami_number> --count 1 --instance-type t2.micro --security-group-ids <web_server_security_group_id> --subnet-id <private_subnet_b_id> --iam-instance-profile Name=ssm-instance-profile-for-ec2-instances
Now that we have launched the instances, let’s add them to the target group of the load balancer:
aws elbv2 register-targets --target-group-arn <target_group_arn> --targets Id=<instance_1_id> Id=<instance_2_id>
Finally, we need to create a listener:
aws elbv2 create-listener --load-balancer-arn <load_balancer_arn> --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=<target_group_arn>
20 — Setting up the servers again via the Systems Manager
You can open a shell on the instances via Systems Manager, either in the AWS Management Console or via the AWS CLI (for the CLI you need the Session Manager plugin installed).
First see if you can find your instances:
aws ssm describe-instance-information
If you cannot connect to the targets, check your security groups and route tables and see if you’ve missed anything. If you find a mistake and fix it, it helps to remove the instances and recreate them.
If you can find the targets, let’s continue:
aws ssm start-session --target <instance_1_id>
I would type bash and move to the home folder with cd. Now simply execute these steps again:
- Follow these steps to install nvm. After running the curl command, exit the session by typing exit and reconnect to make nvm available.
- Run nvm install node (on the instance)
- Create an index.js file with touch index.js
- Edit your index.js file with vi or nano and copy-paste the following script into it:
const port = 8080;
const http = require('http');

http.createServer((req, res) => {
  console.log('url:', req.url);
  res.end('hello world from instance 1');
}).listen(port, () => {
  console.log(`server is running on ${port}`);
});
- Start Node.js by running
node index.js &
- You should see “server is running on 8080” and be able to get hello world when running
wget localhost:8080
- Redirect port 80 to 8080:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
Do these steps for both servers; it’s useful to let them return different messages so you can identify which server responds.
21 — Verifying that it works
You can test your node server by visiting the DNSName of the load balancer you saved earlier. If you didn’t, run:
aws elbv2 describe-load-balancers
Browse to the DNSName of the load balancer and see if you get a reply from both instances.
Note that in Firefox and curl it’s roughly a 50% chance which response I receive. Chrome seems to cheat and cache the first response.
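The 50/50 behavior comes from the target group’s default round-robin routing, which can be sketched in a few lines of Python (a simplification; a real ALB also does health checks and keep-alive connections, which is why a single browser can appear to stick to one instance):

```python
from itertools import cycle

# A minimal sketch of round-robin load balancing across the two instances.
targets = cycle(["hello world from instance 1", "hello world from instance 2"])

# Simulate six requests hitting the load balancer.
responses = [next(targets) for _ in range(6)]
print(responses)

# Over many requests, each instance serves half the traffic.
assert responses.count("hello world from instance 1") == 3
```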
I hope this long tutorial was useful to you! I certainly learned a lot from writing this up.
22 — Cleaning up
Lots of AWS tutorials and workshops I followed had me create lots of resources without explaining how to clean them up when finished. This has resulted in me canceling my AWS account and making a new one a couple of times now.
I have tried my best to figure out what you need to do to get rid of all these resources.
- Shutting down the EC2 instances:
aws ec2 terminate-instances --instance-ids <instance_1_id> <instance_2_id>
- Next we want to remove the listener. You first need to find the listener ARN, which you can do with this command:
aws elbv2 describe-listeners --load-balancer-arn <load_balancar_arn>
- You can remove the listener with the following command:
aws elbv2 delete-listener --listener-arn <listener_arn>
- You can remove the target group:
aws elbv2 delete-target-group --target-group-arn <target_group_arn>
- You can remove the load balancer:
aws elbv2 delete-load-balancer --load-balancer-arn <load_balancer_arn>
- Removing the security groups is tricky because we made the “Security group for web servers” and “Security group for load balancer” groups depend on each other with ingress and egress rules. Revoking the ingress on the webserver group should do the trick:
aws ec2 revoke-security-group-ingress --group-id <web_server_security_group_id> --protocol tcp --port 80 --source-group <load_balancer_security_group_id>
- Now you should be able to delete the load balancer security groups with:
aws ec2 delete-security-group --group-id <load_balancer_security_group_id>
aws ec2 delete-security-group --group-id <web_server_security_group_id>
- Remove the instance profile for the Session Manager:
aws iam remove-role-from-instance-profile --instance-profile-name ssm-instance-profile-for-ec2-instances --role-name ec2-role
aws iam delete-instance-profile --instance-profile-name ssm-instance-profile-for-ec2-instances
aws iam detach-role-policy --role-name ec2-role --policy-arn "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
aws iam delete-role --role-name ec2-role
- Removing the route tables is also a bit tricky: you cannot remove route tables that still reference the NAT gateways, and you can’t remove the NAT gateways while route tables still reference them. Also be careful not to delete the route tables of the default VPC. Use this command to figure out which route table associations exist:
aws ec2 describe-route-tables --filter Name=route-table-id,Values=<private_route_table_a_id>,<private_route_table_b_id>,<public_route_table_id>
- Once you’ve found the route table association IDs of private_route_table_a, private_route_table_b, and public_route_table, you can remove the associations:
aws ec2 disassociate-route-table --association-id <private_route_table_a_association_id>
aws ec2 disassociate-route-table --association-id <private_route_table_b_association_id>
aws ec2 disassociate-route-table --association-id <public_route_table_association_1_id>
aws ec2 disassociate-route-table --association-id <public_route_table_association_2_id>
- Now you should be able to remove the route tables:
aws ec2 delete-route-table --route-table-id <private_route_table_a_id>
aws ec2 delete-route-table --route-table-id <private_route_table_b_id>
aws ec2 delete-route-table --route-table-id <public_route_table_id>
- Remove the NAT gateways:
aws ec2 delete-nat-gateway --nat-gateway-id <nat_gateway_a_id>
aws ec2 delete-nat-gateway --nat-gateway-id <nat_gateway_b_id>
- Release the two elastic IP addresses that were associated with the NAT gateways. It can take a while after deleting the NAT gateways before these IPs can be released. I got the message “An error occurred (AuthFailure) when calling the ReleaseAddress operation: You do not have permission to access the specified resource.” After waiting a minute or so I was able to release the addresses:
aws ec2 release-address --allocation-id <first_allocation_id>
aws ec2 release-address --allocation-id <second_allocation_id>
- Before you can delete the internet gateway you first need to detach it from the VPC:
aws ec2 detach-internet-gateway --vpc-id <basic_vpc_id> --internet-gateway-id <internet_gateway_id>
aws ec2 delete-internet-gateway --internet-gateway-id <internet_gateway_id>
- Removing the subnets:
aws ec2 delete-subnet --subnet-id <private_subnet_a_id>
aws ec2 delete-subnet --subnet-id <private_subnet_b_id>
aws ec2 delete-subnet --subnet-id <public_subnet_a_id>
aws ec2 delete-subnet --subnet-id <public_subnet_b_id>
- Removing the VPC:
aws ec2 delete-vpc --vpc-id <basic_vpc_id>
That should be it. If you learned anything please clap on this story and share it with your network!