Zombies and Clouds

[Figure: a circular, flowing figure eight representing CICD, with labels and two engineers working on it]

Nuxt Static Site

After I deployed this blog, which is a Nuxt static site, I encountered several issues beyond the ones covered in Nuxt Static S3 Hosting Issues.

The additional issues I'd like to resolve include:

  1. Zombie Routes - Ensure that any routes that have been removed or renamed are deleted from, or redirected in, production.
  2. Invalidate CloudFront Cache - Use the AWS CLI to invalidate the CloudFront cache after each deployment so that new views of the site get updated content.

I'd like to do so with CICD tools so that this solution can be applied to multiple sites, automatically and hands-free.

Zombie Routes

Zombie routes happen when you remove or rename an endpoint, post, article, or page from your website. The last stage of the CodeDeploy is a copy command (aws s3 cp), which adds and overwrites files but never removes previously deployed files or directories.

For example, suppose you have a blog post titled apple and you decide to delete it. The S3 bucket's objects are at s3://<root url>/blog/apple. Your newly generated site does not have apple, but since the new site is only copied, the s3://<root url>/blog/apple file remains alongside the new files. While navigating your site from the home page or other links, the apple blog post does not appear; it is not in your list of blog posts, for example. However, if a user follows a direct link to /blog/apple, say from a social media post or a search result, they will get the old HTML content. The HTML file and its content are still there.

Similarly, suppose you rename apple to banana. While this generates a new route /blog/banana and new S3 objects at s3://<root url>/blog/banana, the s3://<root url>/blog/apple objects remain.

Note that updates to an existing route deploy and work as intended. That is, if you update the apple post, the newer version of /blog/apple.html overwrites the older one during the S3 copy and gives users the updated content, with no dead content or code.
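One way to spot zombies, sketched here with the placeholder bucket from the example above, is to list the deployed /blog prefix and compare it with the routes your new build actually generates:

# list what is actually deployed under /blog in production
aws s3 ls s3://<root url>/blog/
# any object listed here that the new build no longer generates
# (apple in the example above) is a zombie route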

Remove Zombie Routes

There are several solutions to the Zombie route problem.

  1. Ignore it (never remove or rename routes)
  2. Manually delete old routes
  3. Add a Nuxt route redirect
  4. CICD: delete everything before deploy
  5. CICD: delete targeted routes with sync

Never Remove or Rename Routes

Maybe you have the discipline to never remove or rename a post, an article, or a page, but I find myself failing at this.

Manually Delete

You can use the AWS console or CLI to find the dead files in your production S3 bucket and remove them. While I do this all the time, it violates the spirit of CICD and can leave dead routes behind through simple mistakes. We want to automate this task to avoid those errors.
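If you do remove one by hand, the CLI version of that chore, using the placeholder bucket and the apple example from above, looks roughly like this:

# remove a single stale page
aws s3 rm s3://<root url>/blog/apple.html
# or remove a whole stale prefix
aws s3 rm s3://<root url>/blog/apple --recursive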

Route Redirects

In Nuxt, you should provide a route rule when you delete or rename a route. You can add a redirect by adding a routeRules entry in your nuxt.config.js:

export default defineNuxtConfig({
  routeRules: {
    '/old-page': { redirect: '/new-page' },
    '/another-old-page': { redirect: '/new-page-2' }
  }
})
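If you want the move treated as permanent, the route rule also accepts an object form with an explicit status code. This is a sketch based on my reading of the Nitro route rules API, reusing the apple/banana example from above; verify it against the docs for your Nuxt version:

export default defineNuxtConfig({
  routeRules: {
    // permanent (301) redirect from the renamed post
    '/blog/apple': { redirect: { to: '/blog/banana', statusCode: 301 } }
  }
})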

Burn them All

But sometimes you'd like to remove everything for a clean start. One method is to add a 'remove all files' command, aws s3 rm s3://<root url> --recursive, to the post_build phase of the buildspec.yml file (see JAMStart Pipeline buildspec.yml). The build node, where the build runs, has the aws CLI available with the proper policy/permissions. For example, here is the snippet I use at pennockprojects:

phases:
  # other phases
  post_build:
    commands:
      - aws s3 rm s3://pennockprojects.com --recursive
  # other options
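For context, here is a sketch of how that delete step sits next to the copy step, assuming the aws s3 cp also runs in the post_build phase as described earlier; the order matters, since you want to remove the old files before copying the fresh build:

phases:
  # other phases
  post_build:
    commands:
      # wipe the bucket, then copy the freshly generated site
      - aws s3 rm s3://pennockprojects.com --recursive
      - aws s3 cp .output/public s3://pennockprojects.com --recursive
  # other options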

Sync to the Resource

A better method is to use the sync command instead of cp in the buildspec.yml file post_build phase. The sync command copies new and updated files and, with the --delete flag, also removes files that are not in the new build. For example, here is the snippet I use at pennockprojects:

phases:
  # other phases
  post_build:
    commands:
      - aws s3 sync .output/public s3://pennockprojects.com --delete
  # other options

A sync with --delete removes any files in the target that are not in the source. This is a great way to make sure Zombie routes are removed.

Invalidate CloudFront Cache

When you deploy a new version of your site, the CloudFront cache may still hold old versions of your files, so users can see outdated content even after a successful deployment. The AWS CLI command to invalidate the cache looks like this:

aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/*"
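To keep this hands-free, the same command can go into the buildspec.yml post_build phase right after the sync. Here is a sketch; <distribution-id> is a placeholder for the CloudFront distribution that fronts the bucket, and the build role also needs the cloudfront:CreateInvalidation permission:

phases:
  # other phases
  post_build:
    commands:
      - aws s3 sync .output/public s3://pennockprojects.com --delete
      # invalidate all cached paths after the new files are in place
      - aws cloudfront create-invalidation --distribution-id <distribution-id> --paths "/*"
  # other options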