Flux v2 scaffold
Kubernetes cluster configuration that uses GitOps to manage state.
Includes Flux, Helm, cert-manager, Nginx Ingress and Sealed Secrets.
- fluxcd.io
- helm.sh
- kubernetes.github.io/ingress-nginx/
- github.com/bitnami-labs/sealed-secrets
- github.com/jetstack/cert-manager
- kustomize.io
- Ivar Abrahamsen : @flurdy : github.com/flurdy : eray.uk
- For a Flux v1 setup, please follow the older Lemmings repository
- For a Flux v2 setup, please follow this, the Double Dragon repository
-
brew install kubectl -
Create Kubernetes cluster (see Kubernetes as a Service providers below)
-
Set up Kubernetes context (see provider CLIs and kubectx CLI below)
-
Test cluster connection:
kubectl cluster-info
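If you want a further sanity check that the nodes are up:
kubectl get nodes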
Flux uses your GitHub Personal Access Token to access your repos. If you need to create a new one, make sure it has the required scopes ticked. For example, Flux needs access to the repo's deploy keys, which requires admin access.
In your dotfiles make sure you expose it as GITHUB_TOKEN.
At the same time set the GITHUB_USER env-var to your github username.
(Nudge: direnv)
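If you use direnv, a minimal sketch of an .envrc for this (the values below are placeholders, use your own username and token):
# .envrc - assumes direnv is installed and allowed for this folder
export GITHUB_USER=yourgithubusername
export GITHUB_TOKEN=ghp_yourpersonalaccesstoken
Then run direnv allow after editing it, and keep the file out of any shared repo.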
-
Initialize an empty Double Dragon repo for your setup.
git clone git@github.com:flurdy/doubledragon.git; mkdir doubledragon-fleet; cp doubledragon/README.md doubledragon-fleet/; cp doubledragon/LICENSE doubledragon-fleet/; cd doubledragon-fleet; git init; git add README.md LICENSE; git commit -m "Starting our double dragon fleet"
Replace doubledragon-fleet with whatever you want to call your repository.
You may wish to use the original doubledragon repo to compare. -
Create a private GitHub repository
Manually create a private doubledragon-fleet repo via github.com or with the GitHub CLI
brew install gh; gh auth login; gh repo create --private doubledragon-fleet -r origin -s .
Note, the GitHub CLI for some reason does not like it if the GITHUB_TOKEN env-var is set, so you may have to temporarily unset it when using it.
In Bash:
unset GITHUB_TOKEN
In Fish:
set -e GITHUB_TOKEN
Make sure you set the GITHUB_TOKEN env-var again afterwards. -
And push your local repo to the github repo
git push -u origin main -
Edit the README.md as you see fit. -
Edit the LICENSE as you see fit. -
Note, Flux can also talk to Bitbucket, Gitlab, Github Enterprise and self-hosted git repositories
-
Let's temporarily add the repo and cluster name as environment variables so that most commands in this how-to can be copy-pasted directly
In Bash:
export DOUBLEDRAGON_REPO=doubledragon-fleet; export DOUBLEDRAGON_NAME=doubledragon; export DOUBLEDRAGON_CLUSTER=doubledragon-01
In Fish:
set -x DOUBLEDRAGON_REPO doubledragon-fleet; set -x DOUBLEDRAGON_NAME doubledragon; set -x DOUBLEDRAGON_CLUSTER doubledragon-01
Replace doubledragon-fleet, doubledragon and doubledragon-01 with whatever you decide to call your repository and cluster
-
brew install fluxcd/tap/flux -
Test if Flux is ready to be installed on your cluster
flux check --pre
flux bootstrap github \
--token-auth=false \
--components-extra=image-reflector-controller,image-automation-controller \
--owner=$GITHUB_USER \
--repository=$DOUBLEDRAGON_REPO \
--branch=main \
--path=./clusters/$DOUBLEDRAGON_CLUSTER \
--read-write-key=true \
--personal
-
This will create a GitHub repo as set in $DOUBLEDRAGON_REPO if it does not already exist, and name your initial cluster as set in $DOUBLEDRAGON_CLUSTER. -
Update your local repo with the origin changes
git pull
Unlike Flux v1, which had a simpler one-repo-per-cluster model, Flux v2 is more flexible, with potentially many clusters per repo and more abstractions if desired.
Flux v2 also prefers to use Kustomize for templating, but you do not have to use it.
Flux can act directly in your cluster, but this setup does everything via config files and git. That way we have a replayable paper trail.
So a Flux repo may look like this at the start:
|-- apps
| |-- base
| |-- overlays
| | |-- doubledragon
|-- clusters
| |-- doubledragon-01
|-- infrastructure
| |-- sources
-
apps/base is where you define your apps; deployments, services, etc. -
apps/overlays/doubledragon is where you choose which apps a cluster has, and any customization specific to that cluster. -
clusters/doubledragon-01 links to the apps and infrastructure active in your specific cluster. -
infrastructure/sources is where to find images from registries, Helm repos, etc. -
Create some of these folders:
mkdir -p apps/base; mkdir -p apps/overlays/$DOUBLEDRAGON_NAME; mkdir -p infrastructure/sources -
Check the file structure
tree
-
Let's separate our resources into two namespaces.
You may go with further, more specific namespaces if you prefer.
(Kustomize does not let you declare several namespaces in one kustomization, so we will add them as plain files to the cluster folder)
-
Edit clusters/$DOUBLEDRAGON_CLUSTER/namespaces.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: infrastructure
---
apiVersion: v1
kind: Namespace
metadata:
  name: apps
-
Push to Flux
git add clusters/$DOUBLEDRAGON_CLUSTER/namespaces.yaml; git commit -m "Namespaces"; git push
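Once Flux has synced the push, which may take a minute, verify the namespaces exist:
flux get kustomizations
kubectl get namespaces infrastructure apps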
Safely store encrypted secrets in the git repository.
There are several alternative encrypted secrets solutions, such as Mozilla's SOPS, but Sealed Secrets works well for me.
-
Add a Helm repository for Sealed Secrets
flux create source helm sealed-secrets-source \ --interval=1h \ --namespace=infrastructure \ --url=https://bitnami-labs.github.io/sealed-secrets \ --export > infrastructure/sources/sealed-secrets-source.yaml -
Install Helm chart
mkdir -p infrastructure/sealed-secrets; flux create helmrelease sealed-secrets \ --interval=1h \ --release-name=sealed-secrets-controller \ --target-namespace=infrastructure \ --source=HelmRepository/sealed-secrets-source \ --chart=sealed-secrets \ --chart-version=">=1.15.0-0" \ --crds=CreateReplace \ --export > infrastructure/sealed-secrets/sealed-secrets.yaml -
Add to git so Flux can act on it
git add infrastructure/sources/sealed-secrets-source.yaml \ infrastructure/sealed-secrets/sealed-secrets.yaml; git commit -m "Added Sealed Secrets" -
Next we need to add simple Kustomization files that activate Sealed Secrets for our cluster. These are very simple for now; later on they will be more elaborate and helpful
-
Edit infrastructure/sealed-secrets/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sealed-secrets.yaml
-
Edit infrastructure/sources/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sealed-secrets-source.yaml
-
Append to infrastructure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sources
  - sealed-secrets
-
Create a pollable link in our cluster for all infrastructure:
flux create kustomization infrastructure \ --target-namespace=infrastructure \ --source=flux-system \ --path="./infrastructure" \ --prune=true \ --interval=10m \ --export > clusters/$DOUBLEDRAGON_CLUSTER/infrastructure.yaml -
Push to git
git add infrastructure/sources/kustomization.yaml; git add infrastructure/sealed-secrets/kustomization.yaml; git add infrastructure/kustomization.yaml; git add clusters/$DOUBLEDRAGON_CLUSTER/infrastructure.yaml; git commit -m "Activated Sealed Secrets"; git push -
Flux should pick this up and install the Helm chart for Sealed Secrets
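To verify, the Helm source and release should show as ready, and the controller pod should be running:
flux get sources helm -n infrastructure
flux get helmreleases -n infrastructure
kubectl get pods -n infrastructure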
-
Install the kubeseal CLI
brew install kubeseal -
Retrieve the public key from this cluster
mkdir -p clusters/$DOUBLEDRAGON_CLUSTER/secrets; kubeseal --fetch-cert \ --controller-name=sealed-secrets-controller \ --controller-namespace=infrastructure \ > clusters/$DOUBLEDRAGON_CLUSTER/secrets/sealed-secrets-cert.pem
-
Some cluster setups may block access to your sealed-secrets-controller, e.g. a GKE cluster.
So instead we can temporarily proxy that locally like this:
kubectl --namespace infrastructure port-forward \ service/sealed-secrets-controller 8081:8080 -
And use curl to download the certificate instead:
curl localhost:8081/v1/cert.pem \ > clusters/$DOUBLEDRAGON_CLUSTER/secrets/sealed-secrets-cert.pem
-
-
Add it to source control
git add clusters/$DOUBLEDRAGON_CLUSTER/secrets/sealed-secrets-cert.pem; git commit -m "Sealed Secret public key"; git push
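With the public key in the repo you can seal any secret offline before committing it. A minimal sketch, using a hypothetical someapp-secrets secret for the apps namespace (the secret name, key and output path are examples only):
kubectl create secret generic someapp-secrets \
  --namespace=apps \
  --from-literal=SOME_API_KEY=changeme \
  --dry-run=client -o yaml \
  | kubeseal --format=yaml \
      --cert=clusters/$DOUBLEDRAGON_CLUSTER/secrets/sealed-secrets-cert.pem \
      > apps/base/someapp/sealed-someapp-secrets.yaml
Only the sealed output is safe to commit; the controller in this cluster is the only thing that can decrypt it.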
-
Add a Helm repository
flux create source helm ingress-nginx-source \ --interval=1h \ --namespace=infrastructure \ --url=https://kubernetes.github.io/ingress-nginx \ --export > infrastructure/sources/ingress-nginx-source.yaml -
Append it to the existing sources kustomization infrastructure/sources/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sealed-secrets-source.yaml
  - ingress-nginx-source.yaml
-
Install the ingress controller with the Helm chart
mkdir -p infrastructure/ingress-nginx; flux create helmrelease ingress-nginx \ --interval=1h \ --release-name=ingress-nginx \ --target-namespace=apps \ --namespace=infrastructure \ --source=HelmRepository/ingress-nginx-source \ --chart=ingress-nginx \ --chart-version=">=1.0-4" \ --crds=CreateReplace \ --export > infrastructure/ingress-nginx/ingress-nginx.yaml -
Add kustomization infrastructure/ingress-nginx/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ingress-nginx.yaml
-
Append it to infrastructure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sources
  - sealed-secrets
  - ingress-nginx
-
Add to git and push
git add infrastructure/sources/ingress-nginx-source.yaml; git add infrastructure/sources/kustomization.yaml; git add infrastructure/ingress-nginx/ingress-nginx.yaml; git add infrastructure/ingress-nginx/kustomization.yaml; git add infrastructure/kustomization.yaml; git commit -m "Nginx Ingress"; git push
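Once Flux has reconciled this, the ingress controller should get an external IP from your provider's load balancer. A quick check (the EXTERNAL-IP column may show pending for a few minutes):
flux get helmreleases -n infrastructure
kubectl get services -n apps ingress-nginx-controller --watch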
-
Add a Jetstack source repo
flux create source helm jetstack-source \ --interval=1h \ --namespace=infrastructure \ --url=https://charts.jetstack.io \ --export > infrastructure/sources/jetstack-source.yaml-
Append it to the existing sources kustomization infrastructure/sources/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sealed-secrets-source.yaml
  - ingress-nginx-source.yaml
  - jetstack-source.yaml
-
-
Define Cert Manager Helm properties
mkdir infrastructure/cert-manager
-
Edit infrastructure/cert-manager/values.yaml
crds:
  enabled: true
-
-
Install Cert Manager Helm
flux create helmrelease cert-manager \ --chart=cert-manager \ --source=HelmRepository/jetstack-source.infrastructure \ --release-name=cert-manager \ --namespace=infrastructure \ --target-namespace cert-manager \ --create-target-namespace \ --chart-version=">=1.12.0" \ --interval=1h \ --values=infrastructure/cert-manager/values.yaml \ --export > infrastructure/cert-manager/cert-manager.yaml-
Create kustomization infrastructure/cert-manager/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - cert-manager.yaml
-
Append it to the infrastructure kustomization infrastructure/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - sources
  - sealed-secrets
  - ingress-nginx
  - cert-manager
-
-
Add to repo and push
git add infrastructure/sources/jetstack-source.yaml; git add infrastructure/sources/kustomization.yaml; git add infrastructure/cert-manager/cert-manager.yaml; git add infrastructure/cert-manager/kustomization.yaml; git add infrastructure/kustomization.yaml; git commit -m "Cert-manager"; git push -
Verify Cert manager works
-
Install the cert-manager CLI
Optional but handy
brew install cmctl -
Verify
cmctl check api
Hopefully that will return "The cert-manager API is ready"
-
Let's create staging and production certificate issuers with Let's Encrypt, so that testing in staging does not flood the prod instance.
mkdir -p clusters/$DOUBLEDRAGON_CLUSTER/certificate-issuers
-
Create and edit the staging issuer at clusters/$DOUBLEDRAGON_CLUSTER/certificate-issuers/letsencrypt-issuer-staging.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: youremail@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-secret
    solvers:
      - http01:
          ingress:
            class: nginx
-
Replace youremail@example.com with an email address you have access to -
Add to flux and watch till active
git add clusters/$DOUBLEDRAGON_CLUSTER/certificate-issuers/letsencrypt-issuer-staging.yaml; git commit -m "Staging issuer"; git push; kubectl get clusterissuer -A --watch -
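Once it reports Ready, you can also inspect the ACME registration details if curious:
kubectl describe clusterissuer letsencrypt-staging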
Secure an app
I.e. add a TLS certificate to an ingress.
[!NOTE] This step may have to wait until you add your own apps later on in the tutorial.
-
Edit your app's ingress apps/base/someapp/ingress.yaml
Add the annotation and tls sections
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
  name: someapp-ingress
  namespace: apps
spec:
  rules:
    - host: someapp.example.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: someapp-service
                port:
                  number: 80
  tls:
    - hosts:
        - someapp.example.com
      secretName: someapp-cert-staging
-
Add to git
git add apps/base/someapp/ingress.yaml; git commit -m "Secured someapp"; git push; kubectl get ingress -n apps --watch
Soon the someapp.example.com line will show 443 as an available port. That is all OK.
Note, your browser will throw a warning when accessing this site, as the staging certificate is not signed by a trusted authority, unlike prod.
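Behind the scenes cert-manager creates a Certificate resource for the ingress; you can watch it being issued. The certificate normally takes the same name as the secretName:
kubectl get certificate -n apps --watch
kubectl describe certificate someapp-cert-staging -n apps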
-
-
Now let's add a prod issuer, create and edit clusters/$DOUBLEDRAGON_CLUSTER/certificate-issuers/letsencrypt-issuer-prod.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: youremail@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-secret
    solvers:
      - http01:
          ingress:
            class: nginx
-
Add to flux and watch till active
git add clusters/$DOUBLEDRAGON_CLUSTER/certificate-issuers/letsencrypt-issuer-prod.yaml; git commit -m "Prod issuer"; git push; kubectl get clusterissuer -A --watch -
Update the certificate for your app
[!NOTE] Again, only after you have added your own apps later on in the tutorial.
Change the cluster-issuer annotation and secretName in apps/base/someapp/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  name: someapp-ingress
  namespace: apps
spec:
  rules:
    - host: someapp.example.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: someapp-service
                port:
                  number: 80
  tls:
    - hosts:
        - someapp.example.com
      secretName: someapp-cert-prod
-
Push and check when the certificate change goes live
git add apps/base/someapp/ingress.yaml; git commit -m "Secured someapp with prod cert"; git push; kubectl describe ingress someapp-ingress -n apps --watch
-
To access private Docker container image repositories we need to set up some more sources, image sources, and some secrets to access those.
- Please follow flurdy's 'kubernetes-docker-registry guide' for your relevant registries.
Warning
GCR has been deprecated and replaced by Google Artifact Registry. This is only an example of how you would seal and add any registry to your cluster.
-
For example, if you needed GCR and have followed the guide above, you got a gcr-registry.yml file (or .yaml), and maybe the initial gcp-service-account.json source as well. -
Make sure the raw secrets do not get added to git by accident
echo gcp-service-account.json >> .gitignore; echo gcr-registry.yml >> .gitignore; git add .gitignore -
Seal the secrets
mkdir -p clusters/$DOUBLEDRAGON_CLUSTER/registries/apps; mkdir -p clusters/$DOUBLEDRAGON_CLUSTER/registries/infrastructure
A registry secret for both the apps namespace
kubeseal --format=yaml --namespace=apps \ --cert=clusters/$DOUBLEDRAGON_CLUSTER/secrets/sealed-secrets-cert.pem \ < gcr-registry.yml \ > clusters/$DOUBLEDRAGON_CLUSTER/registries/apps/sealed-gcr-registry.yaml
and the infrastructure namespace
kubeseal --format=yaml --namespace=infrastructure \ --cert=clusters/$DOUBLEDRAGON_CLUSTER/secrets/sealed-secrets-cert.pem \ < gcr-registry.yml \ > clusters/$DOUBLEDRAGON_CLUSTER/registries/infrastructure/sealed-gcr-registry.yaml -
Add the secrets to Flux
git add clusters/$DOUBLEDRAGON_CLUSTER/registries/apps/sealed-gcr-registry.yaml; git add clusters/$DOUBLEDRAGON_CLUSTER/registries/infrastructure/sealed-gcr-registry.yaml; git commit -m "GCR registry"; git push
You may later need more for other and future namespaces, e.g. default and flux-system -
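Once Flux applies these, the Sealed Secrets controller should unseal each one into a regular Secret. Assuming the secret inside gcr-registry.yml is named gcr-registry (as it is referenced later for image scanning), you can verify with:
kubectl get secret gcr-registry -n apps
kubectl get secret gcr-registry -n infrastructure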
Remove gcr-registry.yml (and gcp-service-account.json)
Later on, when you have tested the registry by confirming that the cluster can download actual deployment images for your apps, you should delete the unencrypted registry files
rm gcr-registry.yml gcp-service-account.json -
Let's set up repo image scanning
To check when a new repo tag and image have been added to a registry.
For example if you have an app that stores its images in a private repo like GCR.
Otherwise you can wait to do this step later.
flux create image repository someapp-source \ --image=ghcr.io/someorg/someuser/somerepo \ --interval=5m \ --namespace=infrastructure \ --secret-ref=gcr-registry \ --export > infrastructure/sources/someapp-source.yaml-
(Change somerepo to your app name. And use the correct GCR image path.)
-
This example refers to the gcr-registry sealed secret -
Append this to the YAML in infrastructure/sources/someapp-source.yaml:
  accessFrom:
    namespaceSelectors:
      - matchLabels:
          kubernetes.io/metadata.name: apps
Note, the indentation is under spec.
-
-
Note, for GCR there is the alternative option of a more secure, short-lived access token instead.
This can be done with Flux. You need to set up a cronjob to refresh it.
-
Add these to infrastructure/sources/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  ...
  - someapp-source.yaml
-
Add to git/flux
git add infrastructure/sources/someapp-source.yaml; git add infrastructure/sources/kustomization.yaml; git commit -m "Sources for someapp"; git push
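After Flux reconciles, you can confirm the scan works and see how many tags it found:
flux get image repository someapp-source -n infrastructure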
Let's create a Hello World app.
-
First let's create a base layer
mkdir -p apps/base/hello -
And an initial deployment yaml for a Hello app
kubectl create deployment hello-deployment \ --image=nginxdemos/hello:0.3 \ --namespace=apps \ --dry-run=client -o yaml \ > apps/base/hello/deployment.yaml -
Let's prune the output a bit in apps/base/hello/deployment.yaml, and change the app labels to just hello
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello
  name: hello-deployment
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - image: nginxdemos/hello:0.3
          name: hello
-
Add a service at apps/base/hello/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  namespace: apps
spec:
  selector:
    app: hello
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
-
And an ingress at apps/base/hello/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  namespace: apps
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: hello.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-service
                port:
                  number: 80
-
Bundle these in apps/base/hello/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
resources:
  - deployment.yaml
  - service.yaml
  - ingress.yaml
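Before wiring this into the cluster you can render the base locally to catch any YAML mistakes, using the kustomize support built into kubectl. It should print the deployment, service and ingress as one multi-document YAML:
kubectl kustomize apps/base/hello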
mkdir -p apps/overlays/$DOUBLEDRAGON_NAME/hello
-
Edit apps/overlays/$DOUBLEDRAGON_NAME/hello/kustomization.yaml
In more complicated apps this may have some overrides, but for now it is very simple.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
bases:
  - ../../../base/hello
-
Edit apps/overlays/$DOUBLEDRAGON_NAME/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
bases:
  - hello
-
Add it all to the repo
git add apps/base/hello/deployment.yaml; git add apps/base/hello/service.yaml; git add apps/base/hello/ingress.yaml; git add apps/base/hello/kustomization.yaml; git add apps/overlays/$DOUBLEDRAGON_NAME/hello/kustomization.yaml; git add apps/overlays/$DOUBLEDRAGON_NAME/kustomization.yaml; git commit -m "Hello app files"; git push
-
Create a kustomization for all apps in the overlay
flux create kustomization apps \ --target-namespace=apps \ --source=flux-system \ --path="./apps/overlays/$DOUBLEDRAGON_NAME" \ --depends-on=infrastructure \ --prune=true \ --interval=10m \ --export > clusters/$DOUBLEDRAGON_CLUSTER/apps.yaml -
And update the repo
git add clusters/$DOUBLEDRAGON_CLUSTER/apps.yaml; git commit -m "Adding apps to the cluster"; git push
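Flux should now reconcile the new apps kustomization and create the Hello deployment, service and ingress. Watch the progress with:
flux get kustomizations --watch
kubectl get pods -n apps --watch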
-
Find the ingress controller's External IP
kubectl get services -n apps ingress-nginx-controller -
Use curl to resolve the URL, replacing 11.22.33.44 with the external IP, and lynx to view it
curl -H "Host: hello.example.com" \ --resolve hello.example.com:80:11.22.33.44 \ --resolve hello.example.com:443:11.22.33.44 \ http://hello.example.com | lynx -stdin -
This should show a basic hello world page, with an Nginx logo and some server address, name and date details.
-
Remove / comment out the app
Normally you just edit apps/overlays/$DOUBLEDRAGON_NAME/kustomization.yaml and comment out the app:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: apps
bases:
  # - hello
But since this is the only app in there, that might break the YAML.
So the easiest is to delete the cluster's apps.yaml file
git rm clusters/$DOUBLEDRAGON_CLUSTER/apps.yaml; git commit -m "Removing apps from the cluster"; git push
That should cascade the changes via the aggregated kustomization, and remove the ingress, service and deployment from the live cluster.
-
If permanent, remove the app's apps/overlays and apps/base folders as well, as a tidy-up chore.
Ok, so a Hello World app is not what you intended to use your cluster for. Let's see how you could add a real application to the cluster.
For simplicity let's call the app myfirstapp.
Replace any reference to it with the correct name.
These will be very similar to the Hello app.
-
Names and labels
Change all the names and labels from hello to myfirstapp. -
Deployment
One thing to change is the image name and tag, and to add a registry secret.
E.g.:
    spec:
      containers:
        - image: gcr.io/somethingsomething:1.2.3
          name: myfirstapp-container
      ...
      imagePullSecrets:
        - name: gcr-registry
-
More deployment config
For the Hello deployment the basic config was sufficient, but for more normal workloads you will most likely add more, e.g. resource limits, env-vars, secrets etc.
E.g.:
    spec:
      containers:
        - image: gcr.io/somethingsomething:1.2.3
          ...
          resources:
            requests:
              memory: "250Mi"
              cpu: "50m"
            limits:
              memory: "800Mi"
              cpu: "250m"
          env:
            - name: SOMEVAR
              value: "5"
          envFrom:
            - secretRef:
                name: some-secret
These are out of scope for this tutorial though.
-
Ingress
You will need to change the hostname in
hosts, and possibly the paths if necessary. -
Overlay
You could be clever with the overlay but for now just copy what Hello did
-
Add it all to git
git add apps/base/myfirstapp/deployment.yaml; git add apps/base/myfirstapp/service.yaml; git add apps/base/myfirstapp/ingress.yaml; git add apps/base/myfirstapp/kustomization.yaml; git add apps/overlays/$DOUBLEDRAGON_NAME/myfirstapp/kustomization.yaml; git add apps/overlays/$DOUBLEDRAGON_NAME/kustomization.yaml; git commit -m "Added myfirstapp"
-
Registry
You would probably store your app's images in a private Docker registry.
Revisit the Container registries section to add relevant registry secrets.
-
Image repository
Let's add an Image Repository for your application, so that Flux can scan the registry and find the Docker images the deployment below requires
flux create image repository myfirstapp-source \ --image=gcr.io/somethingsomething \ --interval=5m \ --namespace=infrastructure \ --export > infrastructure/sources/myfirstapp-source.yaml -
Append this to the YAML in infrastructure/sources/myfirstapp-source.yaml:
  ...
  accessFrom:
    namespaceSelectors:
      - matchLabels:
          kubernetes.io/metadata.name: apps
-
Append this source to the Kustomization at infrastructure/sources/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  ...
  - myfirstapp-source.yaml
-
Add to git
git add infrastructure/sources/myfirstapp-source.yaml; git add infrastructure/sources/kustomization.yaml; git commit -m "myfirstapp source"; git push
-
Like hello, use curl
curl -H "Host: myfirstapp.example.com" \ --resolve myfirstapp.example.com:80:11.22.33.44 \ --resolve myfirstapp.example.com:443:11.22.33.44 \ http://myfirstapp.example.com | lynx -stdin
Replace myfirstapp.example.com with your hostname
That should be the basics to get your first application added
Letting Flux automatically update the image if a newer one gets uploaded to the docker registry is a handy feature.
Flux allows different policies on when to update it, such as only when approved, only on major version upgrades and more.
Here is how to update on every new semver tag:
-
Image repository
As shown above in the Your first application section, you need an image repository source(s) for your application images
-
Image policies
Similarly we need a policy on how to act on any changes to the source repository
flux create image policy myfirstapp-policy \ --image-ref=myfirstapp-source \ --namespace=apps \ --select-semver=0.3.x \ --export > ./apps/base/myfirstapp/image-policy.yaml
This policy allows updates on any new 0.3.x semver versions. So if you are on 0.3.1 and 0.3.2 gets uploaded, it will trigger this policy. A new lower version such as 0.3.0 would not.
However 0.4.0 will not get picked up, nor would 1.0.0. For that the --select-semver would have to be 0.x or just x, I think. -
Add source namespace to the policy
The CLI does not have that option so please change the policy to refer to the "infrastructure" namespace:
---
apiVersion: image.toolkit.fluxcd.io/v1beta1
kind: ImagePolicy
metadata:
  name: myfirstapp-policy
  namespace: apps
spec:
  imageRepositoryRef:
    name: myfirstapp-source
    namespace: infrastructure
  policy:
    semver:
      range: 0.3.x
-
Append this policy to the Kustomization of the app apps/base/myfirstapp/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  ...
  - image-policy.yaml
-
Apply the policy
We need to tell Flux where this policy would apply
Edit apps/base/myfirstapp/deployment.yaml and modify the image line by appending the policy name
    ...
    spec:
      containers:
        - image: gcr.io/something:0.3.1 # {"$imagepolicy": "apps:myfirstapp-policy"}
    ...
This seems a bit hacky, but this is how it works.
-
Image Update
We also need to tell the apps kustomization how to update any of the images, i.e. create a git commit.
flux create image update apps-image-update --namespace=apps \ --git-repo-ref=flux-system \ --git-repo-namespace=flux-system \ --git-repo-path="./" \ --checkout-branch=main \ --push-branch=main \ --author-name=fluxcdbot \ --author-email=fluxcdbot@users.noreply.github.com \ --commit-template="{{range .Updated.Images}}{{println .}}{{end}}" \ --export > apps/overlays/$DOUBLEDRAGON_NAME/image-updates.yaml -
Append to apps/overlays/$DOUBLEDRAGON_NAME/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - myfirstapp
  ...
resources:
  ...
  - image-updates.yaml
-
Add to the repo
git add apps/base/myfirstapp/deployment.yaml; git add apps/base/myfirstapp/image-policy.yaml; git add apps/base/myfirstapp/kustomization.yaml; git add apps/overlays/$DOUBLEDRAGON_NAME/image-updates.yaml; git add apps/overlays/$DOUBLEDRAGON_NAME/kustomization.yaml; git commit -m "myfirstapp image policy"; git push -
New versions
Any new images in the docker repository that are in the chosen semver range will now automatically update the deployment.
Note: You need to wait for the various polling intervals of the image repository, the policy, Flux itself etc. So changes might take a while.
Alternatively, Flux can also be prompted to apply policy changes via a webhook.
-
Watch whether the policy has updated
flux get image policy -n apps --watch
Or any image states
flux get images all --all-namespaces
- Add/update your deployments, services, charts, docker registries, secrets, kustomizations etc
-
Add source if in a private repo. And add/append to source kustomization.
infrastructure/sources/someapp-source.yaml
infrastructure/sources/kustomization.yaml
infrastructure/kustomization.yaml
-
Add deploy, service, ingress to new app base folder.
apps/base/someapp/deployment.yaml
apps/base/someapp/service.yaml
apps/base/someapp/ingress.yaml
apps/base/someapp/image-policy.yaml
-
Add/append to apps kustomization and overlay.
apps/base/someapp/kustomization.yaml
apps/base/kustomization.yaml
apps/overlays/somecluster/someapp/kustomization.yaml
apps/overlays/somecluster/kustomization.yaml
(Some of the Kustomization files can be shortcut if they do nothing but redirect)
Optionally you may want to set up monitoring of the cluster, with metrics and logging.
Flux has a ready-made example of how to set up a Prometheus and Loki stack to achieve this,
and some general guidance on setting it up.
-
Clone the example repo locally.
-
Copy the monitoring.yaml from that repo's cluster folder into your cluster.
- flux2-monitoring-example/clusters/test/monitoring.yaml
  => clusters/$DOUBLEDRAGON_CLUSTER/monitoring.yaml
-
Copy over the entire monitoring folder into your Flux repo.
- flux2-monitoring-example/monitoring
  => ./monitoring
-
Modify the Prometheus Helm values for podMonitorNamespaceSelector to include other namespaces if tagged
- monitoring/controllers/kube-prometheus-stack/release.yaml
-
podMonitorNamespaceSelector:
  matchLabels:
    kubernetes.io/metadata.name: monitoring
-
Modify the existing namespaces to include a monitoring label
clusters/$DOUBLEDRAGON_CLUSTER/namespaces.yaml -
apiVersion: v1
kind: Namespace
metadata:
  name: infrastructure
  labels:
    app.kubernetes.io/component: monitoring
---
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    app.kubernetes.io/component: monitoring
-
Add to git, push and wait
git add clusters/$DOUBLEDRAGON_CLUSTER/monitoring.yaml; git add monitoring; git add -p clusters/$DOUBLEDRAGON_CLUSTER/namespaces.yaml; git commit -m "Added monitoring"; git push; flux reconcile kustomization infrastructure --with-source; kubectl get pods -A --watch -
Patience. It takes a while to set up and synchronise itself.
-
You do not need to expose the Grafana UI to the world. Setting up a tunnel instead works fine
-
kubectl -n monitoring port-forward svc/kube-prometheus-stack-grafana 3000:80 - localhost:3000/d/flux-cluster
-
Kubernetes can be configured to auto-scale your nodes when load or pod count increases. This is usually configured using the provider's UI or CLI tool.
However, many providers rely on your cluster having the metrics-server installed.
Contrary to its name, this service only provides resource metrics for auto-scaling. Actual metrics are better served by the Prometheus stack mentioned above.
-
Add the Helm source
flux create source helm metrics-server-source \ --interval=1h \ --namespace=infrastructure \ --url=https://kubernetes-sigs.github.io/metrics-server/ \ --export > infrastructure/sources/metrics-server-source.yaml -
Edit infrastructure/sources/kustomization.yaml and append - metrics-server-source.yaml to it
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  ....
  - metrics-server-source.yaml
-
Add source to git
git add infrastructure/sources/metrics-server-source.yaml; git add infrastructure/sources/kustomization.yaml; git commit -m "Metrics server source" -
Install the Helm chart
mkdir -p infrastructure/metrics-server; flux create helmrelease metrics-server \ --interval=1h \ --release-name=metrics-server \ --namespace=infrastructure \ --target-namespace=infrastructure \ --source=HelmRepository/metrics-server-source \ --chart=metrics-server \ --chart-version=">=0.7.2" \ --crds=CreateReplace \ --export > infrastructure/metrics-server/metrics-server.yaml -
Create infrastructure/metrics-server/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - metrics-server.yaml
-
Edit infrastructure/kustomization.yaml and append - metrics-server to it
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  ....
  - metrics-server
-
Push to git
git add infrastructure/metrics-server/metrics-server.yaml; git add infrastructure/metrics-server/kustomization.yaml; git add infrastructure/kustomization.yaml; git commit -m "Adding metrics server"; git push
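Once the metrics-server pod is running you can verify it is serving resource metrics (it can take a minute or two after start-up):
kubectl top nodes
kubectl top pods -n apps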
-
Once Flux is running, by convention avoid using kubectl create|apply etc. -
And by the same convention avoid using flux create, i.e. avoid acting directly on the cluster for any write operations.
Nearly all changes should be via Git. Export any changed YAML to Git as above.
Otherwise the git source and cluster state will start to diverge and become hard to recreate.
-
Any kubectl and flux interaction should be read only. Those are fine. -
Sometimes whilst troubleshooting you will have to use the scalpel and use kubectl create|apply|delete or flux create.
But minimise the usage, and try to update the YAML to reflect any permanent changes.
Frequent issues and how to monitor.
-
Flux logs
flux logs -Af --since 3h -
Kubernetes logs
kubectl logs podname -f
-
Flux kustomization status
flux get kustomizations --watch -
Kubernetes status
kubectl get deploy,service,ingress,pods,secret,imagerepository,clusterissuer -n apps
Or watch a single resource type
kubectl get deploy -A --watch
-
Certain operations take a few minutes, e.g. pod creation, or waiting on Flux scan polling
-
Nginx: x509 certificate is not valid
Happens sometimes with the Nginx ingresses. Seems to be a known problem that requires manual patching. Or as I fix it:
- Comment out the nginx controller and the ingresses from the kustomization files.
- Wait until Flux has removed them from the cluster.
- Uncomment and add them back in.
- Note this may change the external IP assigned to the cluster's load balancer.
-
Fixed typos, and nothing changes?
Sometimes a resource gets added with a typo; you fixed it and pushed the change to the repo, yet Flux or Kubernetes does not pick up the change?
Most of the time Flux and Kubernetes notice and change the resources. But sometimes not.
-
Force the change.
Simply remove the resource, push to git, let the system catch up, add it back with the typo corrected, and the change gets picked up.
Most of the time the "removal" can be done by commenting out the reference to it in a kustomization.yaml file, instead of removing the actual deployment etc. git files and history. -
Scale the deployments down and back up
kubectl scale deploy -n apps --replicas=0 myfirstapp-deployment
Wait until the pods are destroyed
kubectl get pods -n apps --watch
Then scale back up
kubectl scale deploy -n apps --replicas=2 myfirstapp-deployment
Note, sometimes Flux notices the inconsistency and scales the app back up before you do.
-
Force Flux to reconcile the cluster state and the repository state
If impatient
flux reconcile kustomization apps --with-source
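The same applies to the other Flux resources, for example to force a refresh of the git source or the infrastructure kustomization:
flux reconcile source git flux-system
flux reconcile kustomization infrastructure --with-source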
The nuclear option. But sometimes necessary.
flux uninstall --namespace=flux-system
Though usually I just spin up another cluster instead.
Note, any encrypted secrets will have to be re-sealed when re-installing flux with the new certificate
-
Now that you have a working cluster, scrap it. If you want to.
Create a new cluster without all the mistakes from setting up the first cluster.
-
Or when you just need another cluster naturally, you can do the same.
-
It only takes a few steps; you do not have to do it all again
-
Create the cluster with your provider
-
Authenticate kubectl with the new cluster -
Set it as the current Kubernetes context
-
Maybe export a new env-var (and old ones if no longer set)
In Bash:
export DOUBLEDRAGON_REPO=doubledragon-fleet; export DOUBLEDRAGON_CLUSTER=doubledragon-01; export DOUBLEDRAGON_CLUSTER_NEW=doubledragon-02
In Fish:
set -x DOUBLEDRAGON_REPO doubledragon-fleet; set -x DOUBLEDRAGON_CLUSTER doubledragon-01; set -x DOUBLEDRAGON_CLUSTER_NEW doubledragon-02
Bootstrap the new cluster with doubledragon-02 or $DOUBLEDRAGON_CLUSTER_NEW as the name
flux bootstrap github \ --token-auth=false \ --components-extra=image-reflector-controller,image-automation-controller \ --owner=$GITHUB_USER \ --repository=$DOUBLEDRAGON_REPO \ --branch=main \ --path=./clusters/$DOUBLEDRAGON_CLUSTER_NEW \ --read-write-key \ --personal
After a while pull the changes
git pull
-
Add namespaces to the new cluster
cp clusters/$DOUBLEDRAGON_CLUSTER/namespaces.yaml clusters/$DOUBLEDRAGON_CLUSTER_NEW/; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/namespaces.yaml; git commit -m "Double Dragon II namespaces"; git push -
Copy the infrastructure.yaml kustomization to the new cluster
cp clusters/$DOUBLEDRAGON_CLUSTER/infrastructure.yaml clusters/$DOUBLEDRAGON_CLUSTER_NEW/; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/infrastructure.yaml; git commit -m "Double Dragon II infrastructure"; git push
This will add the Sealed Secrets, Nginx and everything in sources to the new cluster. And more if you have extended it.
The sources may cause issues initially until we re-encrypt any secrets.
-
Download the Sealed Secrets public key for this cluster
mkdir -p clusters/$DOUBLEDRAGON_CLUSTER_NEW/secrets; kubeseal --fetch-cert \ --controller-name=sealed-secrets-controller \ --controller-namespace=infrastructure \ > clusters/$DOUBLEDRAGON_CLUSTER_NEW/secrets/sealed-secrets-cert.pem; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/secrets/sealed-secrets-cert.pem; git commit -m "Sealed Secret public key"; git push -
Re-encrypt secrets such as the GCR registry secret if needed
E.g.
mkdir -p clusters/$DOUBLEDRAGON_CLUSTER_NEW/registries/apps; mkdir -p clusters/$DOUBLEDRAGON_CLUSTER_NEW/registries/infrastructure; kubeseal --format=yaml --namespace=apps \ --cert=clusters/$DOUBLEDRAGON_CLUSTER_NEW/secrets/sealed-secrets-cert.pem \ < gcr-registry.yml \ > clusters/$DOUBLEDRAGON_CLUSTER_NEW/registries/apps/sealed-gcr-registry.yml; kubeseal --format=yaml --namespace=infrastructure \ --cert=clusters/$DOUBLEDRAGON_CLUSTER_NEW/secrets/sealed-secrets-cert.pem \ < gcr-registry.yml \ > clusters/$DOUBLEDRAGON_CLUSTER_NEW/registries/infrastructure/sealed-gcr-registry.yml; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/registries/apps/sealed-gcr-registry.yml; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/registries/infrastructure/sealed-gcr-registry.yml; git commit -m "GCR registry for cluster DD-02 ns"; git push
-
Such as certificate-manager issuers
mkdir -p clusters/$DOUBLEDRAGON_CLUSTER_NEW/certificate-issuers; cp clusters/$DOUBLEDRAGON_CLUSTER/certificate-issuers/* \ clusters/$DOUBLEDRAGON_CLUSTER_NEW/certificate-issuers/; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/certificate-issuers/*; git commit -m "Staging and Prod issuer"; git push
-
Optionally create a new overlay
Or share the same common one in apps/overlays/doubledragon linked in apps.yaml
cp clusters/$DOUBLEDRAGON_CLUSTER/apps.yaml clusters/$DOUBLEDRAGON_CLUSTER_NEW/; git add clusters/$DOUBLEDRAGON_CLUSTER_NEW/apps.yaml; git commit -m "Apps overlay for cluster DD-02"; git push -
And that will be it
-
Note, the exposed load balancer external IP will be different.
-
Cloud providers
- Amazon AWS EKS: aws.amazon.com/eks/
- Google Cloud GKE: cloud.google.com/kubernetes-engine/
- Microsoft Azure AKS: azure.microsoft.com/en-us/services/kubernetes-service/
- DigitalOcean Kubernetes: www.digitalocean.com/products/kubernetes/
-
Cloud provider CLIs
-
brew install --cask google-cloud-sdk -
brew install doctl -
brew tap weaveworks/tap; brew install weaveworks/tap/eksctl -
brew install azure-cli
-
-
brew install kubectx -
github.com/vmware-tanzu/octant
brew install octant -
brew install derailed/k9s/k9s -
brew install stern -
flurdy.com/docs/kubernetes/registry/kubernetes-docker-registry.html
Client tools are also available on Linux, Windows and more.
The Lemmings and Double Dragon code bases are licensed under the MIT license which lets you pretty much do as you please with it.
Though please attribute back if possible.
- This guide heavily used the docs and example project available on the official Flux website
- Kubernetes official docs
- cert-manager.io/docs/
- 2025-03-25 Updated for Flux 2.5
- 2023-03-21 Your first app and image updates
- 2023-02-09 Double Dragon tweaks and env-vars
- 2022-11-10 Double Dragon refreshed
- 2021-07-10 Flux 2. Lemmings => Double Dragon
- 2020-02-13 Flux 1.1, fluxcd.io annotations, and Helm 3
- 2019-11-07 Flux 0.16, flux.weave.works annotations and Helm 2
