Good evening everyone, this is the continuation of my last post about how I built my blog. Previously, I briefly explained the architecture of my blog and went through some of the steps I took, like buying a domain, creating a Route53 delegation set, and creating a Route53 hosted zone.

In this post, I will cover creating an SSL certificate for the blog using AWS Certificate Manager (ACM) and validating it in an automated way using DNS validation. I will then walk through creating the S3 buckets that host my static site, a CloudFront distribution to serve it, and securely granting my CloudFront distribution access to my S3 bucket using an Origin Access Identity. Finally, I will create the DNS records in my Route53 Hosted Zone that make my blog publicly accessible.

Because this is a continuation of my last post, we can skip all the prep work and pick up right where we left off.

Before we proceed, let’s first check whether our blog’s domain is using our Route53 nameservers. If it is, any DNS records we add to our Route53 Hosted Zone will be resolvable from anywhere in the world, and we will be able to use DNS-based validation for our SSL certificate later. You can revisit this step here.

If you are on a Linux system run the command below.

dig NS mycoolblog.wtf

It should output the four nameservers from your Route53 delegation set; the output should look similar to this.
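For reference, here is a hypothetical ANSWER SECTION; the awsdns-* hostnames below are placeholders, and yours will be the four nameservers from your own delegation set:

```
;; ANSWER SECTION:
mycoolblog.wtf.     172800  IN  NS  ns-1234.awsdns-01.org.
mycoolblog.wtf.     172800  IN  NS  ns-567.awsdns-02.com.
mycoolblog.wtf.     172800  IN  NS  ns-89.awsdns-03.net.
mycoolblog.wtf.     172800  IN  NS  ns-1011.awsdns-04.co.uk.
```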

If you are on a Windows system, try the PowerShell command below.

Resolve-DnsName -Name mycoolblog.wtf -Type NS

Create your SSL Certificate using AWS Certificate Manager

To start things off, let’s provision an SSL certificate and validate it using the DNS validation method. Go ahead and add the snippet below to your main.tf and save.

resource "aws_acm_certificate" "this" {
  provider                  = aws.useast1
  domain_name               = "mycoolblog.wtf"
  subject_alternative_names = [ "www.mycoolblog.wtf" ]
  validation_method         = "DNS"
}

resource "aws_route53_record" "this_dvo" {
  for_each = {
    for dvo in aws_acm_certificate.this.domain_validation_options : dvo.domain_name => {
      name   = dvo.resource_record_name
      record = dvo.resource_record_value
      type   = dvo.resource_record_type
    }
  }

  provider        = aws.useast1
  allow_overwrite = true
  name            = each.value.name
  records         = [ each.value.record ]
  ttl             = 60
  type            = each.value.type
  zone_id         = aws_route53_zone.this.zone_id
}

resource "aws_acm_certificate_validation" "this_dvo" {
  provider                = aws.useast1
  certificate_arn         = aws_acm_certificate.this.arn
  validation_record_fqdns = [for record in aws_route53_record.this_dvo : record.fqdn]
}

Okay, there are a few things going on here, so let me explain. This snippet first creates an SSL cert for the domain name mycoolblog.wtf and the alternative name www.mycoolblog.wtf. It then adds the CNAME DNS records for both names to our Route53 Hosted Zone; these records are what the aws_acm_certificate_validation resource we declared above uses to perform the DNS validation. The point of this is to prove that the SSL cert is being requested by the same person/entity who owns the domain names it covers. Proceed and run terraform plan and terraform apply to create the SSL cert and perform the DNS validation.
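If you are curious what those validation records actually look like, an optional output (the name acm_validation_records is my own choice) lets you inspect them with terraform output:

```hcl
# Optional: expose the CNAME validation records ACM generated,
# so you can eyeball them after an apply with `terraform output`.
output "acm_validation_records" {
  value = aws_acm_certificate.this.domain_validation_options
}
```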

:memo: NOTE
Notice that for the 3 resources declared above, we explicitly specified the provider to use our aws.useast1 alias, which points to the us-east-1 region. This is because the SSL cert will later be associated with our CloudFront distribution, and CloudFront requires SSL certs to be created in the us-east-1 region. This is a good example of orchestrating infrastructure creation across different regions with Terraform.
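For reference, the aws.useast1 alias comes from a provider block along these lines; this was set up in the previous post, and the default region shown here is just a placeholder:

```hcl
# Default provider, used by resources that don't specify one.
provider "aws" {
  region = "ap-southeast-1" # hypothetical default region, use your own
}

# Aliased provider pinned to us-east-1, required for ACM certs
# that will be attached to a CloudFront distribution.
provider "aws" {
  alias  = "useast1"
  region = "us-east-1"
}
```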

Once Terraform has successfully applied your changes, you can go to the AWS Console, navigate to ACM, and confirm that the SSL cert was created and has the Issued status. Strictly speaking, you don’t have to check this manually, because terraform apply only succeeds if the validation went through; if it didn’t, Terraform would have returned a relevant error.

Create an ACM certificate

Click on the drop-down icon on the left side of your SSL Cert to check if the validation is indeed successful.

Check ACM validation status
:memo: NOTE
SSL Certificates on ACM are free, and if you use DNS validation they auto-renew. Unfortunately, you can only use them together with other AWS services.

Create an Origin Access Identity

We then create an Origin Access Identity and attach it to our CloudFront distribution. This way we can keep our S3 bucket (the bucket that will host our static blog) private and allow only the OAI to read its contents; since the OAI will be attached to our CloudFront distribution, only our CF distribution will be able to read from our S3 bucket. Add the snippet below to your main.tf and save.

resource "aws_cloudfront_origin_access_identity" "this" {
  comment = "This Origin Access Identity is for mycoolblog.wtf"
}

That was short; there’s nothing special here, and it is very straightforward. Proceed and run terraform plan and terraform apply to create the OAI. Later on, we will reference this OAI when creating the S3 buckets and CloudFront distributions.

Create S3 buckets

Next, we need a place to host our static blog and its assets, and this is where S3 buckets come into play. The benefit of this solution is that because the blog is static and rendered browser-side, we don’t need any actual compute resources; the only requirement is that any browser can download my content and render it locally/client-side. Add the snippet below to your main.tf and save.

resource "aws_s3_bucket" "this" {
  bucket = "mycoolblog.wtf"
  acl    = "private"
}

resource "aws_s3_bucket" "this_www" {
  bucket = "www.mycoolblog.wtf"
  acl    = "private"

  website {
    redirect_all_requests_to = "https://mycoolblog.wtf"
  }
}

One would assume this is going to be complex, but it isn’t. Although it is very straightforward, I’m pretty sure you are wondering: why the heck are we creating two S3 buckets?

Well, if you look at the two S3 buckets, the first is named mycoolblog.wtf and the second is named www.mycoolblog.wtf. They are not very different from each other; both are configured to be private, and the only difference is that the second is configured with a website endpoint. The first is where our static blog content will be published/deployed, and the second will redirect any www.mycoolblog.wtf requests to mycoolblog.wtf. Go ahead and run terraform plan and terraform apply to create the S3 buckets.
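One caveat worth flagging: on version 4 and later of the Terraform AWS provider, the inline acl argument and website block shown above are deprecated in favor of standalone resources (aws_s3_bucket_acl and aws_s3_bucket_website_configuration). If your provider version complains, the redirect would be expressed roughly like this:

```hcl
# AWS provider v4+ style: the website config lives in its own resource
# instead of an inline website block on aws_s3_bucket.
resource "aws_s3_bucket_website_configuration" "this_www" {
  bucket = aws_s3_bucket.this_www.id

  redirect_all_requests_to {
    host_name = "mycoolblog.wtf"
    protocol  = "https"
  }
}
```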

:memo: NOTE
We are redirecting www to non-www for aesthetic reasons, and because Google Analytics will otherwise complain about redundant hostnames.

Create an S3 bucket policy

In this step, we will create an S3 bucket policy that allows the OAI to read objects inside our mycoolblog.wtf S3 bucket, and attach this policy to that bucket. Add the snippet below to your main.tf and save.

resource "aws_s3_bucket_policy" "this" {
  bucket = aws_s3_bucket.this.id
  policy = jsonencode(
    {
      Version = "2012-10-17"
      Id      = "MyBucketPolicy"
      Statement = [
        {
          Action = "s3:GetObject"
          Effect = "Allow"
          Principal = {
            AWS = aws_cloudfront_origin_access_identity.this.iam_arn
          }
          Resource = "${aws_s3_bucket.this.arn}/*"
          Sid      = "AllowOAI"
        },
      ]
    }
  )
}

Take note that the policy attribute expects a valid JSON-formatted policy document. Run terraform plan and terraform apply to create the S3 bucket policy.
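As a side note, the same policy can also be built with the aws_iam_policy_document data source instead of jsonencode, which validates the document structure at plan time. A sketch (the oai_read name is my own):

```hcl
data "aws_iam_policy_document" "oai_read" {
  statement {
    sid       = "AllowOAI"
    effect    = "Allow"
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.this.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = [aws_cloudfront_origin_access_identity.this.iam_arn]
    }
  }
}

# The bucket policy would then reference it with:
# policy = data.aws_iam_policy_document.oai_read.json
```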

Create CloudFront distributions

Next, we create the CloudFront distributions. These will “serve” our static blog to the world, act as the perimeter, and cache content from our S3 buckets at edge locations around the world. Add the snippet below to your main.tf and save.

data "aws_cloudfront_cache_policy" "this" {
  name = "CachingOptimized"
}

resource "aws_cloudfront_distribution" "this" {
  enabled     = true
  price_class = "PriceClass_All"
  aliases     = [ "mycoolblog.wtf" ]

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate_validation.this_dvo.certificate_arn # waits for DNS validation to complete
    ssl_support_method  = "sni-only"
  }

  default_root_object = "index.html"
  is_ipv6_enabled     = true
  comment             = "mycoolblog.wtf distribution"

  origin {
    domain_name = aws_s3_bucket.this.bucket_regional_domain_name
    origin_id   = "myS3Origin"

    s3_origin_config {
      origin_access_identity = aws_cloudfront_origin_access_identity.this.cloudfront_access_identity_path
    }
  }

  default_cache_behavior {
    target_origin_id       = "myS3Origin"
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    # NOTE: forwarded_values cannot be combined with cache_policy_id,
    # so the managed cache policy alone controls caching here.
    cache_policy_id        = data.aws_cloudfront_cache_policy.this.id
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }
}

resource "aws_cloudfront_distribution" "this_www" {
  enabled     = true
  price_class = "PriceClass_All"
  aliases     = [ "www.mycoolblog.wtf" ]

  viewer_certificate {
    acm_certificate_arn = aws_acm_certificate_validation.this_dvo.certificate_arn # waits for DNS validation to complete
    ssl_support_method  = "sni-only"
  }

  default_root_object = "index.html"
  is_ipv6_enabled     = true
  comment             = "www.mycoolblog.wtf distribution"

  origin {
    domain_name = aws_s3_bucket.this_www.website_endpoint
    origin_id   = "myS3Origin"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    target_origin_id       = "myS3Origin"
    compress               = true
    viewer_protocol_policy = "redirect-to-https"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    # NOTE: forwarded_values cannot be combined with cache_policy_id,
    # so the managed cache policy alone controls caching here.
    cache_policy_id        = data.aws_cloudfront_cache_policy.this.id
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
      locations        = []
    }
  }
}

Before we proceed, let’s have a look at the snippet above:

  • the data section declared for aws_cloudfront_cache_policy allows us to reference an existing, pre-canned caching policy, in this case one named CachingOptimized. A data section/block lets us use information defined outside Terraform or by another Terraform configuration.

  • the first aws_cloudfront_distribution resource is the distribution that will be associated with our mycoolblog.wtf S3 bucket. A few key details to take note of here:

    • the viewer_certificate block is where we reference our ACM certificate. Take note of the sni-only value for ssl_support_method.

    • the origin block is where we declare the backend which in this case is the S3 bucket. This is configured to reference the bucket_regional_domain_name of the mycoolblog.wtf S3 bucket. There is also an s3_origin_config sub-block here which tells this CloudFront distribution to attach the OAI to it. (Remember the OAI we created earlier which already has access to our S3 bucket?)

    • the default_cache_behavior block is where we define the default caching behavior for any origin assigned to it. In our case, it is assigned to myS3Origin, and its cache_policy_id references the aws_cloudfront_cache_policy data section we declared.

  • the second aws_cloudfront_distribution resource is the distribution that will be associated with our www.mycoolblog.wtf S3 bucket. It’s almost identical to the first one except for the origin block, which references the website_endpoint of the www.mycoolblog.wtf S3 bucket instead of its bucket_regional_domain_name. This is because we configured that bucket as a website endpoint with a redirect policy, and the redirect is only triggered when the website_endpoint is used. We also have a custom_origin_config sub-block instead of an s3_origin_config sub-block, again because the origin S3 bucket is configured as a website endpoint. This means CloudFront will not be able to read any content from this bucket, which is fine because any request to it will be redirected to https://mycoolblog.wtf by design anyway.

Let’s proceed and run terraform plan and terraform apply to create the CloudFront distributions.
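If you want to sanity-check the distributions before any DNS exists, you can expose their generated *.cloudfront.net hostnames with outputs (the output names below are my own choice):

```hcl
# The *.cloudfront.net hostnames assigned to each distribution;
# you can curl these directly to test before creating DNS records.
output "cdn_domain_name" {
  value = aws_cloudfront_distribution.this.domain_name
}

output "cdn_www_domain_name" {
  value = aws_cloudfront_distribution.this_www.domain_name
}
```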

Create DNS Records

This is the last step. After this, people should be able to navigate to your blog from any browser, anywhere in the world; the only thing left is to create the DNS records. Add the snippet below to your main.tf and save.

resource "aws_route53_record" "this" {
  allow_overwrite = true
  zone_id         = aws_route53_zone.this.zone_id
  name            = "mycoolblog.wtf"
  type            = "A"

  alias {
    name                   = aws_cloudfront_distribution.this.domain_name
    zone_id                = aws_cloudfront_distribution.this.hosted_zone_id
    evaluate_target_health = true
  }
}

resource "aws_route53_record" "this_www" {
  allow_overwrite = true
  zone_id         = aws_route53_zone.this.zone_id
  name            = "www.mycoolblog.wtf"
  type            = "A"

  alias {
    name                   = aws_cloudfront_distribution.this_www.domain_name
    zone_id                = aws_cloudfront_distribution.this_www.hosted_zone_id
    evaluate_target_health = true
  }
}

Here we create two A records: the first is for mycoolblog.wtf and aliases the mycoolblog.wtf CloudFront distribution’s domain name, and the second is for www.mycoolblog.wtf and aliases the www.mycoolblog.wtf CloudFront distribution’s domain name. Go ahead and run terraform plan and terraform apply to create the DNS records.

Wrap up

That’s it! You now have a highly available, globally distributed infrastructure that can host static websites, from blogs to single-page applications. The next step is to publish/deploy static content to the S3 bucket. Given that our CloudFront distribution is configured to use index.html as the default root object, you could handcraft and upload a simple index.html to the S3 bucket and invalidate your CloudFront distribution to test that everything is working. At this point, it’s totally up to you which method or framework to use to create your static website; in my case, I used Hugo, as I mentioned in a previous post.
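If you’d rather do that smoke test from Terraform itself, a placeholder page could be uploaded with an aws_s3_object resource; this is just a sketch (on AWS provider v3 the resource is called aws_s3_bucket_object):

```hcl
# Hypothetical smoke-test page; replace with your real site later.
resource "aws_s3_object" "index" {
  bucket       = aws_s3_bucket.this.id
  key          = "index.html"
  content      = "<html><body><h1>Hello from mycoolblog.wtf</h1></body></html>"
  content_type = "text/html" # without this, browsers may download instead of render
}
```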

Curious how much my monthly operational expense is? Just $1, and that’s only for the Route53 Hosted Zone; it’s the only resource here that isn’t free regardless of whether Free Tier is active, since Route53 Hosted Zones are among the AWS services not eligible for the Free Tier.

Monthly operational expense