Create Terraform AWS modules using ChatGPT — Bonus: a Lambda Function

Rishi Raj Singh
FAUN — Developer Community 🐾
5 min read · Feb 14, 2023


If you have not heard of ChatGPT, check this link to give it a try.

ChatGPT was launched as a prototype by OpenAI in November 2022. It is an advanced AI chatbot that interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

ChatGPT relies on the powerful GPT-3.5 technology. GPT stands for Generative Pre-trained Transformer, a large neural network built on the revolutionary attention mechanism.

I will demonstrate how to generate Terraform modules using ChatGPT.

Here is the first question I asked: write a Terraform module that creates a VPC with two subnets, one public and one private.

Here is the code I got back:

# Module to create a VPC with two subnets (public and private)

variable "vpc_cidr_block" {
  type = string
}

variable "public_subnet_cidr_block" {
  type = string
}

variable "private_subnet_cidr_block" {
  type = string
}

resource "aws_vpc" "vpc" {
  cidr_block = var.vpc_cidr_block

  tags = {
    Name = "my-vpc"
  }
}

resource "aws_subnet" "public_subnet" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.public_subnet_cidr_block
  availability_zone = "us-east-1a"

  tags = {
    Name = "public-subnet"
  }
}

resource "aws_subnet" "private_subnet" {
  vpc_id            = aws_vpc.vpc.id
  cidr_block        = var.private_subnet_cidr_block
  availability_zone = "us-east-1b"

  tags = {
    Name = "private-subnet"
  }
}

output "vpc_id" {
  value = aws_vpc.vpc.id
}

output "public_subnet_id" {
  value = aws_subnet.public_subnet.id
}

output "private_subnet_id" {
  value = aws_subnet.private_subnet.id
}

ChatGPT also shows how to use the module:

# Example Terraform configuration file to use the "vpc" module

module "vpc" {
  source = "./modules/vpc"

  vpc_cidr_block            = "10.0.0.0/16"
  public_subnet_cidr_block  = "10.0.1.0/24"
  private_subnet_cidr_block = "10.0.2.0/24"
}
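Once this is in place, the module's outputs can be consumed elsewhere in the root configuration. A minimal sketch (the aws_instance resource and AMI ID are hypothetical, only there to show the reference syntax):

# Hypothetical consumer of the module's outputs
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t2.micro"
  subnet_id     = module.vpc.public_subnet_id # output exposed by the module
}

output "vpc_id" {
  value = module.vpc.vpc_id
}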

Next, let's try to create a slightly more complex module. This time my instructions are not entirely correct: the subnet CIDR block I provide does not belong to the VPC CIDR block.

Let's check the module that was generated. As I expected, it used the wrong subnet CIDR block exactly as provided. I guess that's understandable, although I somehow expected it to come back with an error (see the sketch after the generated code for one way to avoid this).

variable "ami_id" {}

resource "aws_vpc" "main" {
cidr_block = "192.168.0.0/16"
tags = {
Name = "my-vpc"
}
}

resource "aws_subnet" "main" {
vpc_id = aws_vpc.main.id
cidr_block = "10.2.0.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = false
tags = {
Name = "my-subnet"
}
}

resource "aws_security_group" "allow_ssh_https" {
name_prefix = "allow_ssh_https"
vpc_id = aws_vpc.main.id

ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}

ingress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}

resource "aws_instance" "main" {
ami = var.ami_id
instance_type = "t2.micro"
subnet_id = aws_subnet.main.id
vpc_security_group_ids = [aws_security_group.allow_ssh_https.id]
key_name = null

tags = {
Name = "my-instance"
}
}
module "example" {
source = "path/to/module"
ami_id = "ami-0123456789abcdef0"
}

ChatGPT also tells us that there is no key pair (key_name is set to null), so the instance cannot be reached over SSH until one is added.
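Before reusing this module, the subnet range needs to sit inside the VPC range. One way to guarantee that, instead of hard-coding CIDRs, is Terraform's built-in cidrsubnet() function. A minimal sketch:

# Derive the subnet CIDR from the VPC CIDR instead of hard-coding it
variable "vpc_cidr_block" {
  type    = string
  default = "192.168.0.0/16"
}

resource "aws_subnet" "main" {
  vpc_id = aws_vpc.main.id

  # cidrsubnet("192.168.0.0/16", 8, 0) yields 192.168.0.0/24,
  # which is guaranteed to fall inside the VPC range
  cidr_block        = cidrsubnet(var.vpc_cidr_block, 8, 0)
  availability_zone = "us-east-1a"
}

With this pattern, changing the VPC range automatically moves the subnets along with it.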

Let's try another one: an NGINX deployment on Kubernetes, exposed through a LoadBalancer service.

provider "kubernetes" {
config_context_cluster = "your-eks-cluster-name"
}

resource "kubernetes_namespace" "nginx_namespace" {
metadata {
name = "nginx-namespace"
}
}

resource "kubernetes_deployment" "nginx_deployment" {
metadata {
name = "nginx-deployment"
namespace = kubernetes_namespace.nginx_namespace.metadata[0].name
}

spec {
replicas = 1

selector {
match_labels = {
app = "nginx"
}
}

template {
metadata {
labels = {
app = "nginx"
}
}

spec {
container {
image = "nginx"
name = "nginx"
ports {
container_port = 80
}
}
}
}
}
}

resource "kubernetes_service" "nginx_service" {
metadata {
name = "nginx-service"
namespace = kubernetes_namespace.nginx_namespace.metadata[0].name
}

spec {
selector = {
app = "nginx"
}

port {
port = 80
target_port = 80
}

type = "LoadBalancer"
}
}
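One thing worth noting before applying this: the generated provider block only sets the cluster context. In practice the kubernetes provider also needs to locate your kubeconfig. A minimal sketch, assuming a standard local kubeconfig:

provider "kubernetes" {
  config_path            = "~/.kube/config"        # where kubectl stores credentials
  config_context_cluster = "your-eks-cluster-name" # as generated above
}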

Let's create a Lambda function in Python that automates backups by creating snapshots of all volumes hosted in the ap-south-1 region.

import boto3
from datetime import datetime, timedelta, timezone

def lambda_handler(event, context):
    ec2 = boto3.resource('ec2', region_name='ap-south-1')
    snapshot_ids = []

    # Create a snapshot of every volume in the region
    for volume in ec2.volumes.all():
        description = 'Automated snapshot of ' + volume.id
        snapshot = volume.create_snapshot(Description=description)
        snapshot_ids.append(snapshot.id)
        print('Created snapshot:', snapshot.id)

    # Snapshot start times are timezone-aware, so compare against an aware "now"
    delete_time = datetime.now(timezone.utc) - timedelta(days=30)

    # Delete automated snapshots older than 30 days. Filter on snapshots we
    # own rather than only the ones just created, which can never be 30 days old.
    for snapshot in ec2.snapshots.filter(OwnerIds=['self']):
        if snapshot.description.startswith('Automated snapshot of') and snapshot.start_time < delete_time:
            snapshot.delete()
            print('Deleted snapshot:', snapshot.id)

    return snapshot_ids
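One caveat ChatGPT does not mention: the function's execution role needs permission to manage snapshots. A minimal sketch of the required IAM policy statement in Terraform (the resource name is hypothetical):

# Hypothetical IAM policy document for the backup Lambda's execution role
data "aws_iam_policy_document" "snapshot_backup" {
  statement {
    actions = [
      "ec2:DescribeVolumes",
      "ec2:DescribeSnapshots",
      "ec2:CreateSnapshot",
      "ec2:DeleteSnapshot",
    ]
    resources = ["*"]
  }
}

You would typically attach this policy to the Lambda's role and add an EventBridge schedule so the function runs daily.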

