diff --git a/config/_default/config.toml b/config/_default/config.toml index 77c49aee..8262897b 100644 --- a/config/_default/config.toml +++ b/config/_default/config.toml @@ -75,7 +75,7 @@ copyright = "acend gmbh" github_repo = "https://github.com/acend/terraform-training" github_branch = "main" -enabledModule = "base azure" +enabledModule = "base azure cloudscale" trainingFlavor = "Azure" # Enable Lunr.js offline search diff --git a/content/en/docs/06_cloudscale/1-first-server.md b/content/en/docs/06_cloudscale/1-first-server.md new file mode 100644 index 00000000..8211f3d8 --- /dev/null +++ b/content/en/docs/06_cloudscale/1-first-server.md @@ -0,0 +1,231 @@ +--- +title: "10.1. The First Server" +weight: 101 +sectionnumber: 10.1 +onlyWhen: cloudscale +--- + + +## Preparation + +Continue in the cloudscale working directory created in the chapter preparation: + +```bash +cd $LAB_ROOT/cloudscale +``` + + +## Step {{% param sectionnumber %}}.1: Configure the Terraform Provider + +Create a `versions.tf` file to pin the cloudscale provider version: + +```terraform +terraform { + required_version = ">= 1.12.0" + + required_providers { + cloudscale = { + source = "cloudscale-ch/cloudscale" + version = "~> 5.0" + } + } +} +``` + + +### Explanation + +The [cloudscale Terraform provider](https://registry.terraform.io/providers/cloudscale-ch/cloudscale) +is maintained by cloudscale.ch and mirrors the full cloudscale.ch REST API. Setting +`version = "~> 5.0"` pins to the `5.x` series and allows patch-level upgrades while +preventing breaking changes from a future major version. + + +## Step {{% param sectionnumber %}}.2: Declare Variables + +Create a `variables.tf` file: + +```terraform +variable "username" { + description = "Your workshop username. Used as a prefix for all resource names." + type = string +} + +variable "zone" { + description = "The cloudscale.ch zone to deploy resources in (lpg1 or rma1)." + type = string + default = "lpg1" +} + +variable "ssh_public_key" { + description = "Content of your SSH public key (e.g. the output of: cat ~/.ssh/id_ed25519.pub)." + type = string +} +``` + +Create a `terraform.tfvars` file and fill in your values: + +```terraform +username = "YOUR_USERNAME" +zone = "lpg1" +ssh_public_key = "" +``` + +{{% alert title="Note" color="secondary" %}} +Replace `YOUR_USERNAME` with your assigned workshop username and paste your actual SSH +public key string as the value for `ssh_public_key`. +{{% /alert %}} + + +## Step {{% param sectionnumber %}}.3: Create the cloud-init Script + +The web server will run nginx. Its `index.html` page is generated at boot time by querying +the cloudscale **metadata service** — a local HTTP endpoint available on every cloudscale +VM at `169.254.169.254`. + +Create the directory and file `cloud-init/web.yaml`: + +```bash +mkdir -p cloud-init +``` + +```yaml +#cloud-config +package_update: true +packages: + - nginx + - curl +runcmd: + - curl -sf --retry 5 --retry-delay 2 http://169.254.169.254/openstack/latest/meta_data.json -o /tmp/meta.json + - python3 -c "import json; d=json.load(open('/tmp/meta.json')); open('/var/www/html/index.html','w').write('
<h1>AlpDeploy</h1><p>Hostname: '+d.get('hostname','?')+'</p><p>Zone: '+d.get('availability_zone','?')+'</p>
\n')" + - systemctl enable nginx + - systemctl start nginx +``` + + +### Explanation + +The cloudscale metadata service implements the **OpenStack metadata format**. The endpoint +`http://169.254.169.254/openstack/latest/meta_data.json` returns a JSON document that +includes: + +| Field | Example value | Meaning | +| --- | --- | --- | +| `hostname` | `alpdeploy-jane-web` | The server name | +| `availability_zone` | `lpg1` | The zone the server lives in | +| `uuid` | `abcd-1234-...` | The server's unique ID | + +The `runcmd` cloud-init module runs shell commands once at first boot, after packages are +installed. Using the Python one-liner avoids shell-level quoting issues with heredocs +inside Terraform configuration. + + +## Step {{% param sectionnumber %}}.4: Define the Server Resource + +Create `main.tf`: + +```terraform +provider "cloudscale" { + # Authentication is done via the CLOUDSCALE_API_TOKEN environment variable. +} + +locals { + prefix = "alpdeploy-${var.username}" +} + +resource "cloudscale_server" "web" { + name = "${local.prefix}-web" + flavor_slug = "flex-4-2" + image_slug = "debian-13" + zone_slug = var.zone + volume_size_gb = 10 + ssh_keys = [var.ssh_public_key] + user_data = file("${path.module}/cloud-init/web.yaml") +} +``` + + +### Explanation + +| Argument | Value | Notes | +| --- | --- | --- | +| `flavor_slug` | `flex-4-2` | 2 vCPUs, 4 GB RAM | +| `image_slug` | `debian-13` | Debian 13 (Trixie) — default user: `debian` | +| `zone_slug` | variable | Either `lpg1` (Lupfig AG) or `rma1` (Rümlang ZH) | +| `volume_size_gb` | `10` | Root disk size in GiB | +| `ssh_keys` | list of key strings | Key content, not a path | +| `user_data` | cloud-init YAML | Injected at first boot | + +When no `interfaces` block is specified, the server gets a **public** IPv4 and IPv6 +address on the cloudscale internet network by default. + +The provider reads the API token exclusively from the `CLOUDSCALE_API_TOKEN` environment +variable. This keeps credentials out of your Terraform code and state file. + + +## Step {{% param sectionnumber %}}.5: Declare Outputs + +Create `outputs.tf`: + +```terraform +output "web_public_ip" { + description = "The public IPv4 address of the web server." + value = cloudscale_server.web.public_ipv4_address +} +``` + + +## Step {{% param sectionnumber %}}.6: Deploy + +Initialise the working directory and apply: + +```bash +terraform init +terraform apply +``` + +Terraform will display an execution plan showing one resource to create. Confirm with +`yes`. + +After the apply completes, retrieve the public IP: + +```bash +terraform output web_public_ip +``` + +Expected output (example): + +```text +"185.98.123.45" +``` + + +## Step {{% param sectionnumber %}}.7: Verify the Web Server + +Cloud-init takes about 60–90 seconds to install nginx and generate the page. Once it has +finished, `curl` the IP: + +```bash +curl http://$(terraform output -raw web_public_ip) +``` + +Expected output: + +```text +
<h1>AlpDeploy</h1><p>Hostname: alpdeploy-jane-web</p><p>Zone: lpg1</p>
+``` + +You can also SSH into the server to explore it: + +```bash +ssh debian@$(terraform output -raw web_public_ip) +``` + +{{% details title="Hints" %}} +If `curl` times out, cloud-init is probably still running. Check progress with: + +```bash +ssh debian@$(terraform output -raw web_public_ip) cloud-init status --wait +``` + +{{% /details %}} diff --git a/content/en/docs/06_cloudscale/2-persistent-volume.md b/content/en/docs/06_cloudscale/2-persistent-volume.md new file mode 100644 index 00000000..81eccf66 --- /dev/null +++ b/content/en/docs/06_cloudscale/2-persistent-volume.md @@ -0,0 +1,163 @@ +--- +title: "10.2. Persistent Storage" +weight: 102 +sectionnumber: 10.2 +onlyWhen: cloudscale +--- + + +## Preparation + +Continue in the same working directory from the previous lab: + +```bash +cd $LAB_ROOT/cloudscale +``` + + +## Step {{% param sectionnumber %}}.1: Add a Block Volume + +The AlpDeploy web service needs persistent storage that survives server re-creates — for +example for uploaded files or a local cache. cloudscale.ch provides SSD and bulk block +volumes that can be attached to servers. + +Add the following resource to `main.tf`: + +```terraform +locals { + prefix = "alpdeploy-${var.username}" +} + +resource "cloudscale_server" "web" { + name = "${local.prefix}-web" + flavor_slug = "flex-4-2" + image_slug = "debian-13" + zone_slug = var.zone + volume_size_gb = 10 + ssh_keys = [var.ssh_public_key] + user_data = file("${path.module}/cloud-init/web.yaml") +} + +resource "cloudscale_volume" "web_data" { + name = "${local.prefix}-web-data" + zone_slug = var.zone + size_gb = 50 + type = "ssd" + server_uuids = [cloudscale_server.web.id] +} +``` + + +### Explanation + +The `cloudscale_volume` resource creates a block storage volume and attaches it to the +server by listing the server's UUID in `server_uuids`. + +| Argument | Value | Notes | +| --- | --- | --- | +| `size_gb` | `50` | Volume size — can be increased later | +| `type` | `"ssd"` | `"ssd"` for NVMe-backed storage, `"bulk"` for capacity-optimised HDD | +| `server_uuids` | list | Volumes support multi-attach (multiple servers) | + +Because `server_uuids` references `cloudscale_server.web.id`, Terraform automatically +creates the server first and attaches the volume afterwards. This **implicit dependency** +is a core Terraform feature — you rarely need explicit `depends_on`. + + +## Step {{% param sectionnumber %}}.2: Apply the Change + +```bash +terraform apply +``` + +Terraform shows one new resource to add (`cloudscale_volume.web_data`) and no changes to +the existing server. + +```text +Plan: 1 to add, 0 to change, 0 to destroy. +``` + +Confirm with `yes`. 
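+
+For comparison, the implicit dependency described in the previous step could also be
+written as an explicit `depends_on`. The following sketch is illustration only — do not
+add it to your configuration, because the `server_uuids` reference already gives
+Terraform the correct ordering:
+
+```terraform
+resource "cloudscale_volume" "web_data" {
+  name         = "${local.prefix}-web-data"
+  zone_slug    = var.zone
+  size_gb      = 50
+  type         = "ssd"
+  server_uuids = [cloudscale_server.web.id]
+
+  # Redundant here: the attribute reference above already tells Terraform to
+  # create the server before attaching the volume.
+  depends_on = [cloudscale_server.web]
+}
+```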
+ + +## Step {{% param sectionnumber %}}.3: Verify and Prepare the Volume + +SSH into the server: + +```bash +ssh debian@$(terraform output -raw web_public_ip) +``` + +Confirm the volume is visible as a second block device: + +```bash +lsblk +``` + +Expected output (the root disk is `/dev/vda`, the new volume is `/dev/vdb`): + +```text +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS +vda 252:0 0 10G 0 disk +├─vda1 252:1 0 9G 0 part / +├─vda2 252:2 0 1K 0 part +└─vda5 252:5 0 975M 0 part [SWAP] +vdb 252:16 0 50G 0 disk +``` + +Format the volume with ext4 (only needed on first use): + +```bash +sudo mkfs.ext4 /dev/vdb +``` + +Create a mount point and mount the volume: + +```bash +sudo mkdir -p /data +sudo mount /dev/vdb /data +df -h /data +``` + +Expected output: + +```text +Filesystem Size Used Avail Use% Mounted on +/dev/vdb 49G 24K 47G 1% /data +``` + + +## Step {{% param sectionnumber %}}.4: Make the Mount Permanent + +Add the volume to `/etc/fstab` so it is automatically mounted after a reboot: + +```bash +echo '/dev/vdb /data ext4 defaults 0 2' | sudo tee -a /etc/fstab +``` + +Verify the entry is correct: + +```bash +sudo mount -a +df -h /data +``` + +Exit the SSH session: + +```bash +exit +``` + + +### Explanation + +{{% alert title="Terraform vs. configuration management" color="secondary" %}} +Terraform is responsible for **infrastructure**: creating the volume and attaching it to +the server. Formatting the filesystem and mounting it are **OS-level operations** that fall +outside Terraform's scope. In production, tools like Ansible, cloud-init, or a +configuration management system handle these steps. + +In a real project you would encode the `mkfs` and `fstab` steps in the cloud-init +`user_data` (or a separate provisioner), but for learning purposes running them manually +makes the boundary between infrastructure and configuration management clear. +{{% /alert %}} diff --git a/content/en/docs/06_cloudscale/3-private-network.md b/content/en/docs/06_cloudscale/3-private-network.md new file mode 100644 index 00000000..b070001a --- /dev/null +++ b/content/en/docs/06_cloudscale/3-private-network.md @@ -0,0 +1,277 @@ +--- +title: "10.3. Private Network" +weight: 103 +sectionnumber: 10.3 +onlyWhen: cloudscale +--- + + +## Preparation + +Continue in the same working directory: + +```bash +cd $LAB_ROOT/cloudscale +``` + + +## Step {{% param sectionnumber %}}.1: Create a Private Network and Subnet + +AlpDeploy's backend service should not be exposed to the internet. You will create an +isolated private network and place a dedicated backend server on it — only reachable from +the web tier. + +Create a new file `network.tf`: + +```terraform +resource "cloudscale_network" "backend" { + name = "${local.prefix}-backend" + zone_slug = var.zone + auto_create_ipv4_subnet = false +} + +resource "cloudscale_subnet" "backend" { + cidr = "10.0.1.0/24" + network_uuid = cloudscale_network.backend.id + dns_servers = ["8.8.8.8", "8.8.4.4"] +} +``` + + +### Explanation + +| Resource | Purpose | +| --- | --- | +| `cloudscale_network` | An isolated Layer 2 network segment | +| `cloudscale_subnet` | An IP address range (CIDR) on top of the network | + +Setting `auto_create_ipv4_subnet = false` prevents cloudscale from automatically creating +a default subnet so that we can define our own CIDR (`10.0.1.0/24`). + +cloudscale assigns DHCP addresses from `.101` to `.254` within the subnet. +Addresses `.1` to `.100` are reserved for static assignment — which we will use +for the backend server. 
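+
+To make the distinction concrete, the sketch below shows a throw-away server attached to
+the subnet **without** a fixed address, in which case cloudscale would pick one from the
+DHCP range (`.101`–`.254`). This is an illustration only and assumes the `address`
+argument may be omitted; the lab itself always assigns static addresses:
+
+```terraform
+# Illustration only — not part of the lab configuration.
+resource "cloudscale_server" "dhcp_example" {
+  name        = "${local.prefix}-dhcp-example"
+  flavor_slug = "flex-4-1"
+  image_slug  = "debian-13"
+  zone_slug   = var.zone
+  ssh_keys    = [var.ssh_public_key]
+
+  interfaces {
+    type = "private"
+    addresses {
+      subnet_uuid = cloudscale_subnet.backend.id
+      # No "address" argument: the server receives a DHCP address from the subnet.
+    }
+  }
+}
+```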
+ + +## Step {{% param sectionnumber %}}.2: Add the Backend Server cloud-init Script + +The backend server runs a minimal Python HTTP API that returns its hostname in JSON. Create +`cloud-init/backend.yaml`: + +```yaml +#cloud-config +write_files: + - path: /opt/backend.py + permissions: '0755' + content: | + #!/usr/bin/env python3 + from http.server import HTTPServer, BaseHTTPRequestHandler + import socket + import json + + class Handler(BaseHTTPRequestHandler): + def log_message(self, *args): + pass + + def do_GET(self): + body = json.dumps({"status": "ok", "host": socket.gethostname()}).encode() + self.send_response(200) + self.send_header("Content-Type", "application/json") + self.end_headers() + self.wfile.write(body) + + HTTPServer(("0.0.0.0", 8080), Handler).serve_forever() + - path: /etc/systemd/system/backend.service + content: | + [Unit] + Description=AlpDeploy Backend API + After=network.target + + [Service] + ExecStart=/usr/bin/python3 /opt/backend.py + Restart=always + + [Install] + WantedBy=multi-user.target +runcmd: + - systemctl daemon-reload + - systemctl enable backend + - systemctl start backend +``` + + +## Step {{% param sectionnumber %}}.3: Add the Backend Server Resource + +Create `backend.tf`: + +```terraform +resource "cloudscale_server" "backend" { + name = "${local.prefix}-backend" + flavor_slug = "flex-4-1" + image_slug = "debian-13" + zone_slug = var.zone + volume_size_gb = 10 + ssh_keys = [var.ssh_public_key] + user_data = file("${path.module}/cloud-init/backend.yaml") + + interfaces { + type = "private" + addresses { + subnet_uuid = cloudscale_subnet.backend.id + address = "10.0.1.10" + } + } +} +``` + + +### Explanation + +The backend server has **only a private interface** — it is completely isolated from the +internet. The static address `10.0.1.10` is assigned explicitly so that the web server's +cloud-init can reach the backend at a known address. + +The `flex-4-1` flavor (1 vCPU, 4 GB RAM) is sufficient for the small Python API. + +{{% alert title="No public IP" color="secondary" %}} +Because the backend server has no public interface, you cannot SSH into it directly from +your workstation. To debug, SSH to the web server first, then jump to the backend from +there using its private address `10.0.1.10`. +{{% /alert %}} + + +## Step {{% param sectionnumber %}}.4: Update the Web Server + +The web server now needs both a **public interface** (to serve visitors) and a **private +interface** (to reach the backend). Its cloud-init page is also updated to include the +backend response. + +Update `cloud-init/web.yaml`: + +```yaml +#cloud-config +package_update: true +packages: + - nginx + - curl +runcmd: + - curl -sf --retry 5 --retry-delay 2 http://169.254.169.254/openstack/latest/meta_data.json -o /tmp/meta.json + - curl -sf --retry 10 --retry-delay 3 http://10.0.1.10:8080 -o /tmp/backend.json || echo '{"status":"unreachable","host":"?"}' > /tmp/backend.json + - python3 -c "import json; d=json.load(open('/tmp/meta.json')); b=json.load(open('/tmp/backend.json')); open('/var/www/html/index.html','w').write('
<h1>AlpDeploy</h1><p>Hostname: '+d.get('hostname','?')+'</p><p>Zone: '+d.get('availability_zone','?')+'</p><p>Backend: '+b.get('host','?')+'</p>
\n')" + - systemctl enable nginx + - systemctl start nginx +``` + +Update `main.tf` to add the private interface to the web server: + +```terraform +locals { + prefix = "alpdeploy-${var.username}" +} + +resource "cloudscale_server" "web" { + name = "${local.prefix}-web" + flavor_slug = "flex-4-2" + image_slug = "debian-13" + zone_slug = var.zone + volume_size_gb = 10 + ssh_keys = [var.ssh_public_key] + user_data = file("${path.module}/cloud-init/web.yaml") + + interfaces { + type = "public" + } + + interfaces { + type = "private" + addresses { + subnet_uuid = cloudscale_subnet.backend.id + address = "10.0.1.11" + } + } +} + +resource "cloudscale_volume" "web_data" { + name = "${local.prefix}-web-data" + zone_slug = var.zone + size_gb = 50 + type = "ssd" + server_uuids = [cloudscale_server.web.id] +} +``` + +{{% alert title="Server replacement" color="secondary" %}} +Changing `user_data` always **forces replacement** of the server — Terraform destroys the +old one and creates a new one. This is expected and shown clearly in the execution plan as +`-/+` (destroy and create). The volume will be re-attached to the new server automatically +because its `server_uuids` references the resource ID. +{{% /alert %}} + +Update `outputs.tf` to also show the backend's private address: + +```terraform +output "web_public_ip" { + description = "The public IPv4 address of the web server." + value = cloudscale_server.web.public_ipv4_address +} + +output "backend_private_ip" { + description = "The private IPv4 address of the backend server." + value = cloudscale_server.backend.private_ipv4_address +} +``` + + +## Step {{% param sectionnumber %}}.5: Apply the Changes + +```bash +terraform apply +``` + +The plan will show: + +* `-/+` for `cloudscale_server.web` (replace due to `user_data` change) +* `+` for `cloudscale_network.backend`, `cloudscale_subnet.backend`, `cloudscale_server.backend` +* `~` for `cloudscale_volume.web_data` (re-attached to new server UUID) + +Confirm with `yes`. + + +## Step {{% param sectionnumber %}}.6: Verify the Two-Tier Setup + +After cloud-init completes on both servers (≈ 90 seconds), test the web server: + +```bash +curl http://$(terraform output -raw web_public_ip) +``` + +Expected output: + +```text +
<h1>AlpDeploy</h1><p>Hostname: alpdeploy-jane-web</p><p>Zone: lpg1</p><p>Backend: alpdeploy-jane-backend</p>
+``` + +The `Backend:` field confirms the web server successfully reached the backend API via the +private network. + +Verify that the backend is **not** reachable directly from the internet (the connection +should time out after a few seconds): + +```bash +curl --connect-timeout 5 http://$(terraform output -raw backend_private_ip):8080 || echo "not reachable (expected)" +``` + +{{% details title="Hints" %}} +If the `Backend:` field shows `?`, the backend server may still be starting. The web +server cloud-init retries the backend call up to 10 times with a 3-second delay. If it +fails all retries, the page still loads — just with an unknown backend name. + +You can regenerate the page at any time by SSHing in and re-running the Python command: + +```bash +ssh debian@$(terraform output -raw web_public_ip) +curl -sf http://10.0.1.10:8080 -o /tmp/backend.json +python3 -c "import json; d=json.load(open('/tmp/meta.json')); b=json.load(open('/tmp/backend.json')); open('/var/www/html/index.html','w').write('
<h1>AlpDeploy</h1><p>Hostname: '+d.get('hostname','?')+'</p><p>Zone: '+d.get('availability_zone','?')+'</p><p>Backend: '+b.get('host','?')+'</p>
\n')" +``` + +{{% /details %}} diff --git a/content/en/docs/06_cloudscale/4-scaling.md b/content/en/docs/06_cloudscale/4-scaling.md new file mode 100644 index 00000000..44c5668c --- /dev/null +++ b/content/en/docs/06_cloudscale/4-scaling.md @@ -0,0 +1,205 @@ +--- +title: "10.4. Scaling Out" +weight: 104 +sectionnumber: 10.4 +onlyWhen: cloudscale +--- + + +## Preparation + +Continue in the same working directory: + +```bash +cd $LAB_ROOT/cloudscale +``` + + +## Step {{% param sectionnumber %}}.1: Add a Server Group for Anti-Affinity + +Running multiple web servers on the same physical host defeats the purpose of redundancy. +cloudscale.ch provides **server groups** with an `anti-affinity` policy to ensure members +are placed on different physical hypervisors. + +Add the following resource to `main.tf` (at the top, after the `locals` block): + +```terraform +resource "cloudscale_server_group" "web" { + name = "${local.prefix}-web" + type = "anti-affinity" + zone_slug = var.zone +} +``` + + +### Explanation + +The `anti-affinity` server group type is currently the only supported type in the +cloudscale provider. When two or more servers are members of the same anti-affinity group, +cloudscale's scheduler guarantees they are placed on **different physical hosts**. This +protects against a single hardware failure taking down all web servers simultaneously. + + +## Step {{% param sectionnumber %}}.2: Declare the Web Server Map Variable + +Instead of a single web server, you will now manage a **map of servers** using `for_each`. +Add the following variable to `variables.tf`: + +```terraform +variable "web_servers" { + description = "Map of web server identifiers to their private network configuration." + type = map(object({ + private_ip = string + })) + default = { + web-01 = { private_ip = "10.0.1.11" } + web-02 = { private_ip = "10.0.1.12" } + } +} +``` + + +### Explanation + +Using a `map(object(...))` variable for `for_each` has several advantages over a list with +`count`: + +* Each server has a **stable key** (`web-01`, `web-02`). Adding or removing a server only + affects that one entry — unlike `count`, which reindexes all instances. +* The map value carries per-server configuration (private IP) alongside the key. +* The server names, private IPs, and volume names all derive from `each.key` and + `each.value`, keeping things consistent without duplication. + + +## Step {{% param sectionnumber %}}.3: Convert the Web Server to `for_each` + +Replace the single `cloudscale_server.web` and `cloudscale_volume.web_data` resources in +`main.tf` with `for_each` versions. 
The full updated `main.tf` is: + +```terraform +locals { + prefix = "alpdeploy-${var.username}" +} + +resource "cloudscale_server_group" "web" { + name = "${local.prefix}-web" + type = "anti-affinity" + zone_slug = var.zone +} + +resource "cloudscale_server" "web" { + for_each = var.web_servers + + name = "${local.prefix}-${each.key}" + flavor_slug = "flex-4-2" + image_slug = "debian-13" + zone_slug = var.zone + volume_size_gb = 10 + ssh_keys = [var.ssh_public_key] + user_data = file("${path.module}/cloud-init/web.yaml") + server_group_ids = [cloudscale_server_group.web.id] + + interfaces { + type = "public" + } + + interfaces { + type = "private" + addresses { + subnet_uuid = cloudscale_subnet.backend.id + address = each.value.private_ip + } + } +} + +resource "cloudscale_volume" "web_data" { + for_each = var.web_servers + + name = "${local.prefix}-${each.key}-data" + zone_slug = var.zone + size_gb = 50 + type = "ssd" + server_uuids = [cloudscale_server.web[each.key].id] +} +``` + + +### Explanation + +When `for_each` is used, Terraform creates one resource instance per map entry. Each +instance is addressed by its key: + +| Terraform address | Server name | +| --- | --- | +| `cloudscale_server.web["web-01"]` | `alpdeploy-jane-web-01` | +| `cloudscale_server.web["web-02"]` | `alpdeploy-jane-web-02` | + +Inside the resource block, `each.key` holds the map key (`web-01`, `web-02`) and +`each.value` holds the corresponding object (`{ private_ip = "10.0.1.11" }`). + +The `server_group_ids` argument adds both servers to the anti-affinity group. cloudscale +will schedule them on different physical hosts. + + +## Step {{% param sectionnumber %}}.4: Update the Outputs + +Replace `outputs.tf` to return a map of public IPs for all web servers: + +```terraform +output "web_public_ips" { + description = "Public IPv4 addresses of all web servers, keyed by server name." + value = { for k, v in cloudscale_server.web : k => v.public_ipv4_address } +} + +output "backend_private_ip" { + description = "The private IPv4 address of the backend server." + value = cloudscale_server.backend.private_ipv4_address +} +``` + + +## Step {{% param sectionnumber %}}.5: Apply the Changes + +```bash +terraform apply +``` + +The plan shows: + +* `-/+` for `cloudscale_server.web` (the old single server is replaced by two new ones + with `for_each` keys and `server_group_ids`) +* `+` for `cloudscale_server_group.web` +* `+` for `cloudscale_server.web["web-02"]` and `cloudscale_volume.web_data["web-02"]` +* `~` for `cloudscale_volume.web_data["web-01"]` (re-attached) + +Confirm with `yes`. + + +## Step {{% param sectionnumber %}}.6: Verify Both Servers + +After cloud-init finishes on both servers, retrieve the IP map and curl each one: + +```bash +terraform output -json web_public_ips +``` + +Expected output (example): + +```text +{ + "web-01": "185.98.123.45", + "web-02": "185.98.123.67" +} +``` + +Test each server individually: + +```bash +curl http://$(terraform output -json web_public_ips | python3 -c "import json,sys; print(json.load(sys.stdin)['web-01'])") +curl http://$(terraform output -json web_public_ips | python3 -c "import json,sys; print(json.load(sys.stdin)['web-02'])") +``` + +Each response should show a different `Hostname:` value (`alpdeploy-jane-web-01` and +`alpdeploy-jane-web-02`), confirming that two independent servers are running. The next +lab will put a load balancer in front of them so that clients reach them through a single +IP. 
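+
+Before moving on, note how the stable map keys make further scale-out a one-line change.
+As a sketch — not required for this lab — adding a third entry to `web_servers`, for
+example via `terraform.tfvars`, would create one additional server and data volume while
+leaving `web-01` and `web-02` completely untouched:
+
+```terraform
+# terraform.tfvars — hypothetical third web server (illustration only)
+web_servers = {
+  web-01 = { private_ip = "10.0.1.11" }
+  web-02 = { private_ip = "10.0.1.12" }
+  web-03 = { private_ip = "10.0.1.13" }
+}
+```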
diff --git a/content/en/docs/06_cloudscale/5-load-balancer.md b/content/en/docs/06_cloudscale/5-load-balancer.md new file mode 100644 index 00000000..7bbd3eed --- /dev/null +++ b/content/en/docs/06_cloudscale/5-load-balancer.md @@ -0,0 +1,229 @@ +--- +title: "10.5. Load Balancer" +weight: 105 +sectionnumber: 10.5 +onlyWhen: cloudscale +--- + + +## Preparation + +Continue in the same working directory: + +```bash +cd $LAB_ROOT/cloudscale +``` + + +## Step {{% param sectionnumber %}}.1: Create the Load Balancer Stack + +A cloudscale.ch load balancer consists of several connected resources: + +| Resource | Purpose | +| --- | --- | +| `cloudscale_load_balancer` | The physical LB instance with a public VIP | +| `cloudscale_load_balancer_pool` | A group of backend servers and the balancing algorithm | +| `cloudscale_load_balancer_listener` | The public-facing port that accepts incoming traffic | +| `cloudscale_load_balancer_pool_member` | One entry per backend server (IP + port) | +| `cloudscale_load_balancer_health_monitor` | Periodic health checks to detect failed members | + +Create a new file `lb.tf`: + +```terraform +resource "cloudscale_load_balancer" "web" { + name = "${local.prefix}-lb" + flavor_slug = "lb-standard" + zone_slug = var.zone +} + +resource "cloudscale_load_balancer_pool" "web" { + name = "${local.prefix}-pool" + algorithm = "round_robin" + protocol = "tcp" + load_balancer_uuid = cloudscale_load_balancer.web.id +} + +resource "cloudscale_load_balancer_listener" "web" { + name = "${local.prefix}-listener" + protocol = "tcp" + protocol_port = 80 + pool_uuid = cloudscale_load_balancer_pool.web.id +} + +resource "cloudscale_load_balancer_pool_member" "web" { + for_each = var.web_servers + + name = "${local.prefix}-member-${each.key}" + pool_uuid = cloudscale_load_balancer_pool.web.id + protocol_port = 80 + address = each.value.private_ip + subnet_uuid = cloudscale_subnet.backend.id +} + +resource "cloudscale_load_balancer_health_monitor" "web" { + name = "${local.prefix}-healthcheck" + pool_uuid = cloudscale_load_balancer_pool.web.id + type = "http" + http_url_path = "/" + http_version = "1.1" + http_host = "localhost" +} +``` + + +### Explanation + + +#### Load balancer and pool + +The `lb-standard` flavor is the default load balancer size for cloudscale.ch. When +created without explicit `vip_addresses`, the load balancer is assigned a **public IPv4 +and IPv6 VIP** automatically. + +The pool algorithm `round_robin` distributes each new TCP connection to the next member in +rotation — exactly what you need to demonstrate that both web servers receive traffic. + + +#### Pool members + +The `for_each = var.web_servers` pattern used here mirrors lab 10.4: one pool member is +created per web server entry in the map. Each member references: + +* `address` — the web server's private IP (`10.0.1.11` / `10.0.1.12`) +* `subnet_uuid` — the private subnet, so the load balancer can route traffic to it +* `protocol_port = 80` — nginx listens on port 80 + +By pointing pool members at the **private IPs**, all HTTP traffic between the load +balancer and the web servers stays within the private network. + + +#### Health monitor + +An HTTP health monitor polls `GET /` on port 80 of each member. If a member fails two +consecutive checks, the load balancer stops sending traffic to it until it recovers. This +makes the load balancer self-healing: a failed web server is automatically removed from +the rotation without any manual intervention. 
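+
+As a side note: because the web servers themselves are created with `for_each`, the pool
+members could also be derived directly from the server resources instead of
+`var.web_servers`. The following alternative sketch — not used in this lab — assumes the
+`private_ipv4_address` attribute resolves to each server's address on the private subnet:
+
+```terraform
+# Alternative sketch (illustration only): chain for_each from the server resources.
+resource "cloudscale_load_balancer_pool_member" "web" {
+  for_each = cloudscale_server.web
+
+  name          = "${local.prefix}-member-${each.key}"
+  pool_uuid     = cloudscale_load_balancer_pool.web.id
+  protocol_port = 80
+  address       = each.value.private_ipv4_address
+  subnet_uuid   = cloudscale_subnet.backend.id
+}
+```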
+ + +## Step {{% param sectionnumber %}}.2: Expose the Load Balancer IP + +Add the LB VIP address to `outputs.tf`: + +```terraform +output "web_public_ips" { + description = "Public IPv4 addresses of all web servers, keyed by server name." + value = { for k, v in cloudscale_server.web : k => v.public_ipv4_address } +} + +output "backend_private_ip" { + description = "The private IPv4 address of the backend server." + value = cloudscale_server.backend.private_ipv4_address +} + +output "lb_public_ip" { + description = "The public IPv4 VIP address of the load balancer." + value = one([for vip in cloudscale_load_balancer.web.vip_addresses : vip.address if vip.version == 4]) +} +``` + + +### Explanation + +`cloudscale_load_balancer.web.vip_addresses` is a list containing one entry per IP +version. The expression `[for vip in ... : vip.address if vip.version == 4]` filters to +only the IPv4 VIP. The `one()` built-in unwraps the single-element list into a plain +string, making it easy to use with `terraform output -raw`. + + +## Step {{% param sectionnumber %}}.3: Apply the Changes + +```bash +terraform apply +``` + +The plan shows five new resources (load balancer, pool, listener, two pool members, health +monitor) and no changes to existing servers. + +```text +Plan: 6 to add, 0 to change, 0 to destroy. +``` + +Confirm with `yes`. + +The apply may take 60–90 seconds because provisioning a load balancer involves allocating +dedicated hardware. + + +## Step {{% param sectionnumber %}}.4: Wait for Health Checks to Pass + +After the apply completes, the load balancer needs a moment to perform the initial health +checks against both web servers. Retrieve the VIP address: + +```bash +terraform output lb_public_ip +``` + +Give it 30–60 seconds, then check that the load balancer can reach both members by sending +a few requests. If you see `Connection refused` immediately after the apply, wait a moment +and retry. + + +## Step {{% param sectionnumber %}}.5: Demonstrate Round-Robin Load Balancing + +Send six consecutive requests to the load balancer VIP and observe which server handles +each one: + +```bash +LB_IP=$(terraform output -raw lb_public_ip) +for i in {1..6}; do + curl -s "http://${LB_IP}" | grep -o 'Hostname:[^<]*' +done +``` + +Expected output — the hostname alternates between the two servers: + +```text +Hostname: alpdeploy-jane-web-01 +Hostname: alpdeploy-jane-web-02 +Hostname: alpdeploy-jane-web-01 +Hostname: alpdeploy-jane-web-02 +Hostname: alpdeploy-jane-web-01 +Hostname: alpdeploy-jane-web-02 +``` + +{{% alert title="Round-robin confirmed" color="secondary" %}} +Each request is served by a different web server in strict rotation. Because each server's +nginx page was generated from the cloudscale metadata service at boot time, the hostname +embedded in the HTML is unique per server — making the round-robin behaviour directly +visible. +{{% /alert %}} + +{{% details title="Hints" %}} +If all six responses show the same hostname, your HTTP client may be reusing a TCP +connection (keep-alive). Force a new connection each time: + +```bash +LB_IP=$(terraform output -raw lb_public_ip) +for i in {1..6}; do + curl -s --no-keepalive "http://${LB_IP}" | grep -o 'Hostname:[^<]*' +done +``` + +If `lb_public_ip` is still empty after the apply, check `terraform show` or the +cloudscale control panel for the assigned VIP address. 
+{{% /details %}} + + +## Cleanup + +When you are done with the workshop, destroy all resources to avoid ongoing charges: + +```bash +terraform destroy +``` + +{{% alert title="Destroy all resources" color="secondary" %}} +The `terraform destroy` command removes **all** resources managed in this working +directory — servers, volumes, network, and load balancer. Confirm only when you are sure +you no longer need the environment. +{{% /alert %}} diff --git a/content/en/docs/06_cloudscale/_index.md b/content/en/docs/06_cloudscale/_index.md new file mode 100644 index 00000000..b1554d8f --- /dev/null +++ b/content/en/docs/06_cloudscale/_index.md @@ -0,0 +1,108 @@ +--- +title: "10. cloudscale.ch Workshop" +weight: 10 +sectionnumber: 10 +onlyWhen: cloudscale +--- + + +## Overview + +Welcome to the **cloudscale.ch Workshop**! + +In this chapter you will deploy a highly-available web service on +[cloudscale.ch](https://www.cloudscale.ch/), a Swiss sovereign cloud platform, +using Terraform. You will build the infrastructure incrementally — starting from a single +virtual machine and ending with a fully redundant, load-balanced setup. + + +## Story: AlpDeploy + +You are a cloud engineer at **AlpDeploy**, a fictional Swiss SaaS startup. Management has +decided to run the company's new web service on cloudscale.ch to keep data within +Switzerland and stay in control of the infrastructure. Your task: provision everything +with Terraform. + + +## Target Architecture + +By the end of this chapter, you will have built the following architecture: + +```mermaid +graph TB + Internet(("Internet")) + LB["Load Balancer\ncloudscale_load_balancer"] + Web1["web-01\nnginx"] + Web2["web-02\nnginx"] + Vol1[("Data Volume\nweb-01")] + Vol2[("Data Volume\nweb-02")] + PrivNet["Private Network\n10.0.1.0/24"] + Backend["backend-01\nPython API"] + + Internet -->|"port 80"| LB + LB -->|"port 80"| Web1 + LB -->|"port 80"| Web2 + Web1 --- Vol1 + Web2 --- Vol2 + Web1 -->|"port 8080"| PrivNet + Web2 -->|"port 8080"| PrivNet + PrivNet --> Backend +``` + + +## Lab Chapters + +| Lab | Topic | Key Resources | +| --- | --- | --- | +| 10.1 | First Server | `cloudscale_server`, cloud-init, metadata service | +| 10.2 | Persistent Storage | `cloudscale_volume` | +| 10.3 | Private Network | `cloudscale_network`, `cloudscale_subnet`, backend server | +| 10.4 | Scaling Out | `cloudscale_server_group`, `for_each` | +| 10.5 | Load Balancer | `cloudscale_load_balancer` full stack | + + +## Preparation + + +### API Token + +All cloudscale.ch API calls are authenticated via a personal API token. Create one in the +[cloudscale.ch control panel](https://control.cloudscale.ch/) under +**API Tokens → Add Token** and export it in your shell: + +```bash +export CLOUDSCALE_API_TOKEN= +``` + +{{% alert title="Note" color="secondary" %}} +The token is read automatically by the cloudscale Terraform provider from the +`CLOUDSCALE_API_TOKEN` environment variable. Keep the token secret and never commit it to +version control. +{{% /alert %}} + + +### Working Directory + +Create a dedicated folder for all cloudscale exercises: + +```bash +mkdir -p $LAB_ROOT/cloudscale +cd $LAB_ROOT/cloudscale +``` + + +### SSH Key + +You will need an SSH public key to access the virtual machines created during the labs. 
+If you do not yet have one, generate it now: + +```bash +ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 +``` + +Make a note of the public key content — you will paste it into your `terraform.tfvars` file +in the first lab: + +```bash +cat ~/.ssh/id_ed25519.pub +```