Grimoire Entries tagged with “Cloud”

Building Gatsby with Google Cloud

A Cloud Build setup that caches node_modules and Gatsby-specific build folders. It also empties your website bucket before deploying and sets Cache-Control headers appropriate for Gatsby. Caching speeds up the builds considerably, giving you nearly Netlify-like build times.

Four substitution variables need to be set in your Cloud Build trigger:

_CACHE_BUCKET: bucket that stores the node_modules, .cache, and public folders between builds.

_WEBSITE_BUCKET: bucket that serves your site.

_IMMUTABLE: Cache-Control header for fingerprinted assets, e.g. "Cache-Control:public,max-age=31536000,immutable"

_REVALIDATE: Cache-Control header for files that may change between deploys, e.g. "Cache-Control:public,max-age=0,must-revalidate"
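If you would rather keep defaults in the config instead of (or alongside) the trigger, Cloud Build also accepts a top-level substitutions block in cloudbuild.yaml; values set on the trigger override these. A sketch, with placeholder bucket names:

```yaml
# cloudbuild.yaml (top level); trigger-level values override these defaults
substitutions:
  _CACHE_BUCKET: my-gatsby-cache    # placeholder bucket name
  _WEBSITE_BUCKET: my-gatsby-site   # placeholder bucket name
  _IMMUTABLE: Cache-Control:public,max-age=31536000,immutable
  _REVALIDATE: Cache-Control:public,max-age=0,must-revalidate
```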

Two things to be aware of:

  1. I’m not sure if the website bucket really needs to be cleaned out before deployment, although it does make viewing the files in the console more coherent.
  2. The gsutil setmeta commands are “Class A” storage operations so they do accumulate some cost, although just a few cents if you have a small site and only deploy a few times per week. There may be a better way to set the metadata.
# cloudbuild.yaml
steps:
  # If a cache exists, fetch it from Cloud Storage
  - id: Fetch cache
    name: gcr.io/cloud-builders/gcloud
    entrypoint: sh
    args:
      - -c
      - |
        (
          set -e
          gsutil hash -h yarn.lock | grep md5 | tr -s " " | awk '{print $3}' > hashed.yarn-lock
          gsutil -m cp "gs://$_CACHE_BUCKET/$(cat hashed.yarn-lock)" cache.tar.gz 2> /dev/null
          test -f cache.tar.gz
          tar -zxf cache.tar.gz
        ) || true
  # Install Node dependencies
  - id: Yarn install
    name: node
    entrypoint: sh
    args:
      - -c
      - |
        test -f cache.tar.gz || yarn install --prod --pure-lockfile
  # Build Gatsby site
  - id: Yarn build
    name: node
    entrypoint: yarn
    args:
      - build
  # Empty website bucket, continue if bucket is already empty.
  - id: Empty bucket
    name: gcr.io/cloud-builders/gcloud
    entrypoint: sh
    args:
      - -c
      - |
        (
          set -e
          gsutil -q -m rm -f gs://$_WEBSITE_BUCKET/**
        ) || true
    waitFor:
      - Yarn build
  # Cache node_modules and Gatsby-specific cache directories
  - id: Save cache
    name: gcr.io/cloud-builders/gcloud
    entrypoint: sh
    args:
      - -c
      - |
        tar -zcf cache.tar.gz node_modules public .cache
        gsutil -m cp cache.tar.gz "gs://$_CACHE_BUCKET/$(cat hashed.yarn-lock)"
    waitFor:
      - Yarn build
  # Copy files to website bucket
  - id: Copy to bucket
    name: gcr.io/cloud-builders/gcloud
    entrypoint: sh
    args:
      - -c
      - |
        gsutil -q -m cp -R -z css,html,js,json,map public/* gs://$_WEBSITE_BUCKET
    waitFor:
      - Empty bucket
  # Set Gatsby-specific caching
  - id: Set cache-control metadata
    name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    args:
      - -c
      - |
        gsutil -q -m setmeta -h $_REVALIDATE gs://$_WEBSITE_BUCKET/**/**
        gsutil -q -m setmeta -h $_IMMUTABLE gs://$_WEBSITE_BUCKET/static/**
        gsutil -q -m setmeta -h $_IMMUTABLE gs://$_WEBSITE_BUCKET/**/**.{css,js}
        gsutil -q -m setmeta -h $_REVALIDATE gs://$_WEBSITE_BUCKET/sw.js
    waitFor:
      - Copy to bucket
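The cache key in the "Fetch cache" and "Save cache" steps is just the hex MD5 of yarn.lock, so any lockfile change invalidates the cache. You can reproduce the key locally with md5sum, which prints the same hex digest that gsutil hash -h reports (a sketch; the bucket name is a placeholder):

```shell
#!/bin/sh
set -e

# Stand-in lockfile for the demo
printf 'demo lockfile contents\n' > yarn.lock

# The build step extracts the hex MD5 from `gsutil hash -h yarn.lock`;
# md5sum computes the same digest for the same bytes.
key=$(md5sum yarn.lock | awk '{print $1}')

echo "cache object: gs://MY_CACHE_BUCKET/$key"
```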

Handy Kubernetes Commands

Set current cluster

gcloud container clusters get-credentials <CLUSTER>

Run command on remote container

kubectl exec --stdin --tty <POD> -- <COMMAND>

Add -c <CONTAINER> before the -- to target a specific container in a multi-container pod.

Get CPU, memory use of all containers

kubectl top pod --containers --use-protocol-buffers

Get CPU, memory use of all nodes

kubectl top node --use-protocol-buffers

Create secret

kubectl create secret generic <SECRET_NAME> --from-literal=<key>=<value>
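Keep in mind that values read back with kubectl get secret are only base64-encoded, not encrypted. The decode step is runnable locally (a sketch; the key name and value are placeholders):

```shell
#!/bin/sh
set -e

# What `kubectl create secret generic ... --from-literal=password=s3cret`
# stores is the base64 encoding of the value.
encoded=$(printf 's3cret' | base64)
echo "stored in the Secret: $encoded"

# `kubectl get secret <SECRET_NAME> -o jsonpath='{.data.password}' | base64 -d`
# would recover the original value; the decode step itself:
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "decoded: $decoded"
```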

Copy files from local machine to pod

Useful when backing up or restoring persistent volume claims.

# copier.yaml

apiVersion: v1
kind: Pod
metadata:
  name: copier
spec:
  containers:
    - name: caddy
      image: caddy:alpine
      volumeMounts:
        - name: persistent-storage
          mountPath: /var/lib/ghost/content
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: pvc-rwo
Apply the manifest to start the pod:

kubectl apply -f ./copier.yaml

Copy from local to pod

kubectl cp ./backup/dir copier:/var/lib/ghost/content

Copy from pod to local

kubectl cp copier:/var/lib/ghost/content ./backup/dir

Perform rolling update of a service

kubectl rollout restart deployment <DEPLOYMENT_NAME>
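After triggering the restart, you can watch until the new pods are ready (standard kubectl rollout subcommand; requires a cluster, so shown as a fragment):

```shell
kubectl rollout status deployment <DEPLOYMENT_NAME>
```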