A production-ready, Docker-based Python application for automated backup of Cloudflare DNS zones. Designed for disaster recovery, compliance requirements, and operational safety in enterprise environments.
- Overview
- Features
- Architecture
- Prerequisites
- Installation
- Configuration
- Usage
- Backup Storage
- Restoration
- Monitoring and Observability
- Deployment Scenarios
- API Endpoints Used
- Performance Considerations
- Security
- Troubleshooting
- Development
- Contributing
- License
Cloudflare DNS Backup provides automated, scheduled backups of DNS zone configurations from Cloudflare's infrastructure. The application exports DNS records in standard BIND zone file format, ensuring compatibility with most DNS systems and providing a reliable disaster recovery mechanism.
- Disaster Recovery: Maintain point-in-time snapshots of DNS configurations for rapid recovery
- Compliance: Meet regulatory requirements for configuration backups and audit trails
- Change Management: Create pre-change backups before DNS modifications
- Multi-Environment: Synchronize DNS configurations across development, staging, and production
- Audit Trail: Maintain historical records of DNS configuration changes
- Migration Planning: Document existing DNS infrastructure before platform migrations
- Automated Scheduling: Configure backup intervals using flexible cron-like syntax
- Selective Backup: Choose to backup all zones or specific domains
- Container-Native: Full Docker and Docker Compose support for easy deployment
- Standard Format: BIND zone file format ensures broad compatibility
- Secure Authentication: Token-based API authentication with minimal permission requirements
- Comprehensive Logging: Structured logging for monitoring, debugging, and audit purposes
- Intelligent Storage: Organized directory structure with timestamp-based naming
- Retention Management: Configurable retention policies with automatic cleanup
- Health Monitoring: Built-in health checks for container orchestration
- Immediate Execution: Runs initial backup on startup plus scheduled backups
- Error Handling: Robust error handling with detailed logging and recovery mechanisms
- Zero Configuration Start: Works with minimal configuration for quick deployment
- Daemon Mode: Runs continuously as a background service
- One-Time Mode: Execute single backup operations for on-demand scenarios
- Graceful Shutdown: Handles termination signals properly
- Volume Persistence: Backup data persists across container restarts
- Resource Efficiency: Minimal CPU and memory footprint
┌─────────────────────────────────────────────────────────┐
│                  Cloudflare DNS Backup                  │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  ┌──────────────┐          ┌─────────────────┐          │
│  │  Scheduler   │─────────▶│  Backup Engine  │          │
│  │  (schedule)  │          │   (backup.py)   │          │
│  └──────────────┘          └────────┬────────┘          │
│         │                           │                   │
│         ▼                           ▼                   │
│  ┌──────────────┐          ┌─────────────────┐          │
│  │    Config    │          │ Cloudflare API  │          │
│  │  (config.py) │          │     Client      │          │
│  └──────────────┘          └────────┬────────┘          │
│                                     │                   │
└─────────────────────────────────────┼───────────────────┘
                                      │ HTTPS
                                      ▼
                          ┌────────────────────────┐
                          │     Cloudflare API     │
                          │   api.cloudflare.com   │
                          └────────────────────────┘
                                      │
                                      ▼
                          ┌────────────────────────┐
                          │     Backup Storage     │
                          │     (Volume Mount)     │
                          │       /backups/*       │
                          └────────────────────────┘
- Initialization: Application loads configuration from environment variables
- Scheduler Setup: Cron schedule is parsed and configured
- Initial Backup: First backup runs immediately on startup
- Zone Discovery: Retrieves list of DNS zones from Cloudflare API
- Zone Filtering: Applies zone selection based on configuration
- Export Process: Exports each zone in BIND format via Cloudflare API
- Storage Management: Saves backups with rotation and retention policies
- Scheduled Execution: Waits for next scheduled time and repeats
- Runtime: Python 3.11
- HTTP Client: Cloudflare Python SDK (>=3.0.0)
- Scheduler: schedule library (>=1.2.0)
- Configuration: python-dotenv (>=1.0.0)
- Container: Docker with multi-stage builds
- Orchestration: Docker Compose v3.8+
- Docker: Version 20.10 or higher
- Docker Compose: Version 2.0 or higher (or docker compose plugin)
- Cloudflare Account: Active account with DNS zones
- API Access: Ability to create API tokens in Cloudflare dashboard
- Storage: Minimum 1GB free space for backups (depends on zone count and retention)
- Network: Stable internet connection for API communication
- Monitoring: Log aggregation system for production deployments
- CPU: 1 core (minimal usage, primarily I/O bound)
- Memory: 128MB minimum, 256MB recommended
- Disk I/O: Moderate write operations during backup execution
- Network: Outbound HTTPS (443) access to api.cloudflare.com
- Clone the Repository
git clone https://github.com/yourusername/cloudflare-dns-backup.git
cd cloudflare-dns-backup
- Create Cloudflare API Token
Navigate to the Cloudflare API Tokens page and create a new token:
Option A: Using Template
- Click "Create Token"
- Select "Read all resources" template
- Verify permissions include: Zone → DNS → Read
- Click "Continue to summary"
- Click "Create Token"
- Copy and securely store the generated token
Option B: Custom Token
- Click "Create Custom Token"
- Set token name (e.g., "DNS Backup")
- Add permission: Zone → DNS → Read
- (Optional) Restrict to specific zones under "Zone Resources"
- (Optional) Set IP filtering under "IP Address Filtering"
- Click "Continue to summary"
- Click "Create Token"
- Copy and securely store the generated token
- Configure Environment
cp .env.example .env
Edit .env with your preferred text editor:
nano .env
Set your API token:
CLOUDFLARE_API_TOKEN=your_actual_api_token_here
BACKUP_SCHEDULE=0 2 * * *
BACKUP_ZONES=all
- Launch the Application
docker compose up -d
- Verify Operation
# Check container status
docker compose ps
# View logs
docker compose logs -f cloudflare-dns-backup
# Verify backups were created
ls -lh ./backups/
For development or systems without Docker:
- Install Python 3.11+
python --version  # Verify Python 3.11 or higher
- Create Virtual Environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
- Install Dependencies
pip install -r requirements.txt
- Set Environment Variables
export CLOUDFLARE_API_TOKEN=your_token_here
export BACKUP_SCHEDULE="0 2 * * *"
export BACKUP_ZONES=all
export BACKUP_PATH=./backups
- Run Application
# Daemon mode
python -m src.main
# One-time backup
python -m src.main --once
All configuration is performed through environment variables, supporting containerized deployments and twelve-factor app methodology.
| Variable | Type | Required | Default | Description |
|---|---|---|---|---|
| `CLOUDFLARE_API_TOKEN` | String | Yes | - | Cloudflare API token with Zone:DNS:Read permission |
| `BACKUP_SCHEDULE` | Cron | No | `0 2 * * *` | Backup schedule in cron format |
| `BACKUP_ZONES` | String | No | `all` | Comma-separated domain list or "all" |
| `BACKUP_PATH` | Path | No | `/backups` | Directory path for backup storage |
The BACKUP_SCHEDULE variable uses standard cron syntax with five fields:
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (Sunday to Saturday)
│ │ │ │ │
│ │ │ │ │
* * * * *
Daily Backups
BACKUP_SCHEDULE=0 2 * * * # Daily at 2:00 AM
BACKUP_SCHEDULE=0 14 * * * # Daily at 2:00 PM
BACKUP_SCHEDULE=30 3 * * *   # Daily at 3:30 AM
Hourly Backups
BACKUP_SCHEDULE=0 * * * * # Every hour at :00
BACKUP_SCHEDULE=15 * * * * # Every hour at :15
BACKUP_SCHEDULE=0 */6 * * * # Every 6 hours
BACKUP_SCHEDULE=0 */4 * * *  # Every 4 hours
Weekly Backups
BACKUP_SCHEDULE=0 0 * * 0 # Sunday at midnight
BACKUP_SCHEDULE=0 3 * * 1 # Monday at 3:00 AM
BACKUP_SCHEDULE=0 1 * * 5    # Friday at 1:00 AM
Monthly Backups
BACKUP_SCHEDULE=0 3 1 * * # 1st of month at 3:00 AM
BACKUP_SCHEDULE=0 4 15 * * # 15th of month at 4:00 AM
BACKUP_SCHEDULE=0 2 L * *    # Last day of month at 2:00 AM (note: "L" is a Quartz-style extension, not standard five-field cron; verify your scheduler supports it)
Control which DNS zones are backed up:
Backup All Zones
BACKUP_ZONES=all
Automatically discovers and backs up all zones in your Cloudflare account.
Backup Specific Zones
BACKUP_ZONES=example.com,example.org,example.net
Backs up only the specified domains. Domains not found in your account will be logged as warnings.
Single Zone
BACKUP_ZONES=example.com
Default Storage (Docker)
BACKUP_PATH=/backups
Maps to ./backups on host via docker-compose.yml volume mount.
Custom Storage Location
Modify docker-compose.yml:
volumes:
  - /path/to/your/backup/location:/backups
Or for local development:
BACKUP_PATH=/home/user/dns-backups
Run the application as a persistent service with scheduled backups:
# Start in background
docker compose up -d
# View real-time logs
docker compose logs -f
# View last 100 lines
docker compose logs --tail=100 cloudflare-dns-backup
# Check container status
docker compose ps
# Stop service
docker compose down
The application will:
- Execute an immediate backup on startup
- Schedule recurring backups based on BACKUP_SCHEDULE
- Continue running until explicitly stopped
- Automatically restart on failure (unless-stopped policy)
Execute a single backup operation and exit:
# Using Docker Compose
docker compose run --rm cloudflare-dns-backup python -m src.main --once
# Using Docker directly
docker run --rm \
-e CLOUDFLARE_API_TOKEN=your_token \
-v $(pwd)/backups:/backups \
cloudflare-dns-backup python -m src.main --once
# Local Python installation
python -m src.main --once
Use cases for one-time mode:
- Pre-change backups before DNS modifications
- Integration with CI/CD pipelines
- Cron job execution on host system
- Manual backup operations
- Testing and validation
Instead of daemon mode, run via system cron:
# Edit crontab
crontab -e
# Add entry for daily backup at 2 AM
0 2 * * * cd /path/to/cloudflare-dns-backup && docker compose run --rm cloudflare-dns-backup python -m src.main --once >> /var/log/dns-backup.log 2>&1
# Restart container
docker compose restart
# Rebuild after code changes
docker compose build
docker compose up -d
# View container resource usage
docker stats cloudflare-dns-backup
# Execute commands inside container
docker compose exec cloudflare-dns-backup /bin/bash
# Remove container and volumes
docker compose down -v
The application organizes backups in a hierarchical structure optimized for retention management and quick access:
backups/
├── example.com/
│ ├── 20250118_143022.zone # Latest backup (current)
│ └── backup/ # Historical backups
│ ├── 20250117_020001.zone # Yesterday
│ ├── 20250116_020001.zone # 2 days ago
│ ├── 20250115_020001.zone # 3 days ago
│ └── ... # Up to 14 days
├── example.org/
│ ├── 20250118_143025.zone
│ └── backup/
│ ├── 20250117_020002.zone
│ └── ...
└── example.net/
├── 20250118_143028.zone
└── backup/
└── ...
Backup files use timestamp-based naming for chronological organization:
YYYYMMDD_HHMMSS.zone
- YYYY: Year (4 digits)
- MM: Month (01-12)
- DD: Day (01-31)
- HH: Hour (00-23)
- MM: Minute (00-59)
- SS: Second (00-59)
Example: 20250118_143022.zone = January 18, 2025 at 14:30:22
The application implements a two-tier retention strategy:
Current Backup Tier
- Location: Zone root directory (e.g., example.com/20250118_143022.zone)
- Retention: Latest backup always retained
- Purpose: Quick access to most recent configuration
- Rotation: Moved to backup/ when new backup completes
Historical Backup Tier
- Location: backup/ subdirectory within each zone
- Retention: 14 days (configurable via RETENTION_DAYS constant)
- Purpose: Point-in-time recovery
- Cleanup: Automatic deletion of backups older than retention period
Retention Behavior
- New backup created → Previous current backup moves to backup/
- After backup completion → Cleanup process runs
- Files older than 14 days → Automatically deleted
- Latest backup → Always remains in zone root
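The rotation and cleanup behavior can be sketched roughly as follows. This is an illustration of the two-tier policy, not the project's actual implementation (`rotate_and_clean` is a hypothetical name; file age is judged here by mtime):

```python
import time
from pathlib import Path

RETENTION_DAYS = 14

def rotate_and_clean(zone_dir: Path, retention_days: int = RETENTION_DAYS) -> int:
    """Move the previous current backup into backup/ and delete expired history.

    Returns the number of deleted files.
    """
    history = zone_dir / "backup"
    history.mkdir(parents=True, exist_ok=True)
    # Tier rotation: any backup still sitting in the zone root moves to backup/.
    for current in zone_dir.glob("*.zone"):
        current.rename(history / current.name)
    # Cleanup: delete historical files older than the retention window.
    cutoff = time.time() - retention_days * 86400
    deleted = 0
    for old in history.glob("*.zone"):
        if old.stat().st_mtime < cutoff:
            old.unlink()
            deleted += 1
    return deleted
```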
Backups are exported in BIND zone file format (RFC 1035), ensuring compatibility with:
- BIND DNS server
- PowerDNS
- NSD (Name Server Daemon)
- Microsoft DNS
- Cloudflare DNS import functionality
Example Zone File Content
; Zone: example.com
; Exported: 2025-01-18 14:30:22 UTC
$ORIGIN example.com.
$TTL 300
; SOA Record
@ 3600 IN SOA ns1.cloudflare.com. dns.cloudflare.com. (
2025011814 ; Serial
10000 ; Refresh
2400 ; Retry
604800 ; Expire
3600 ) ; Minimum TTL
; Name Servers
@ 86400 IN NS ns1.cloudflare.com.
@ 86400 IN NS ns2.cloudflare.com.
; A Records
@ 300 IN A 192.0.2.1
www 300 IN A 192.0.2.1
api 300 IN A 192.0.2.2
; CNAME Records
blog 300 IN CNAME example.com.
shop 300 IN CNAME shopify.example.com.
; MX Records
@ 300 IN MX 10 mail.example.com.
@ 300 IN MX 20 mail2.example.com.
; TXT Records
@ 300 IN TXT "v=spf1 include:_spf.google.com ~all"
_dmarc 300 IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
; AAAA Records (IPv6)
@ 300 IN AAAA 2001:db8::1
www 300 IN AAAA 2001:db8::1
Typical Zone File Sizes
- Small zone (10-50 records): 2-5 KB
- Medium zone (50-200 records): 5-20 KB
- Large zone (200-1000 records): 20-100 KB
- Enterprise zone (1000+ records): 100+ KB
Capacity Planning Examples
| Zones | Avg Records | File Size | Daily | 14-Day Retention | Annual |
|---|---|---|---|---|---|
| 5 | 50 | 5 KB | 25 KB | 350 KB | 9 MB |
| 20 | 100 | 10 KB | 200 KB | 2.8 MB | 73 MB |
| 100 | 200 | 20 KB | 2 MB | 28 MB | 730 MB |
| 500 | 150 | 15 KB | 7.5 MB | 105 MB | 2.7 GB |
Recommendations
- Allocate 10x expected storage for safety margin
- Monitor storage usage via
du -sh ./backups - Consider compression for long-term archival
- Implement external backup of backup directory
The simplest method for restoring DNS records:
- Navigate to Zone
  - Log in to Cloudflare Dashboard
  - Select the domain to restore
- Access Import Function
  - Click on "DNS" in the left sidebar
  - Click "Advanced" section
  - Select "Import and Export"
- Upload Backup File
  - Click "Import DNS records"
  - Select your .zone file from backups
  - Review the records to be imported
  - Click "Import" to confirm
- Verify Restoration
  - Check DNS records list
  - Verify critical records are present
  - Test DNS resolution using dig or nslookup
Important Notes
- Import merges with existing records (does not replace)
- Duplicate records may need manual cleanup
- SOA and NS records from backup file are typically ignored
Programmatic restoration via API:
# Set variables
ZONE_ID="your_zone_id_here"
API_TOKEN="your_api_token_here"
BACKUP_FILE="./backups/example.com/20250118_143022.zone"
# Import zone file
curl -X POST "https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/import" \
-H "Authorization: Bearer ${API_TOKEN}" \
-H "Content-Type: multipart/form-data" \
--form "file=@${BACKUP_FILE}" \
  --form "proxied=false"
Response Example
{
"success": true,
"errors": [],
"messages": [],
"result": {
"recs_added": 42,
"total_records_parsed": 42
}
}
Automated restoration script:
#!/bin/bash
# restore-dns.sh - Restore DNS from backup
set -euo pipefail
ZONE_ID="${1:?Zone ID required}"
BACKUP_FILE="${2:?Backup file required}"
API_TOKEN="${CLOUDFLARE_API_TOKEN:?API token required}"
echo "Restoring DNS for zone: ${ZONE_ID}"
echo "Using backup file: ${BACKUP_FILE}"
if [[ ! -f "${BACKUP_FILE}" ]]; then
echo "Error: Backup file not found: ${BACKUP_FILE}"
exit 1
fi
response=$(curl -s -X POST \
"https://api.cloudflare.com/client/v4/zones/${ZONE_ID}/dns_records/import" \
-H "Authorization: Bearer ${API_TOKEN}" \
-H "Content-Type: multipart/form-data" \
--form "file=@${BACKUP_FILE}")
success=$(echo "${response}" | jq -r '.success')
if [[ "${success}" == "true" ]]; then
records_added=$(echo "${response}" | jq -r '.result.recs_added')
echo "Success! Imported ${records_added} DNS records"
else
echo "Restoration failed!"
echo "${response}" | jq '.'
exit 1
fi
Usage:
chmod +x restore-dns.sh
./restore-dns.sh abc123def456 ./backups/example.com/20250118_143022.zone
- Pre-Restoration Backup
  - Create a backup of current state before restoration
  - Document the reason for restoration
  - Identify the specific backup version needed
- Test Restoration
  - If possible, test restoration on a test zone first
  - Verify critical records after import
  - Check for unintended duplicates
- Validation Steps
  # Verify DNS resolution
  dig @1.1.1.1 example.com A
  dig @1.1.1.1 www.example.com A
  # Check MX records
  dig @1.1.1.1 example.com MX
  # Verify TXT records (SPF, DKIM, DMARC)
  dig @1.1.1.1 example.com TXT
- Post-Restoration
- Monitor application logs for DNS-related errors
- Check email delivery if MX records were restored
- Verify SSL/TLS certificate issuance if needed
- Document the restoration in change logs
To restore specific records only:
- Open backup file in text editor
- Copy specific record lines needed
- Manually add via Cloudflare Dashboard or API
- Avoid importing full zone file
Scenario 1: Accidental Record Deletion
- Identify when deletion occurred
- Locate most recent backup before deletion
- Restore via Dashboard (fastest)
- Verify critical services immediately
Scenario 2: Bulk Configuration Error
- Stop any automated DNS updates
- Identify last known good configuration
- Restore via API (more control)
- Implement change approval process
Scenario 3: Account Compromise
- Revoke compromised API tokens
- Create new API token
- Restore from pre-compromise backup
- Enable two-factor authentication
- Audit account access logs
The Docker container includes a built-in health check:
HEALTHCHECK --interval=1h --timeout=10s --start-period=10s --retries=3 \
  CMD python -c "import os; import sys; sys.exit(0 if os.path.exists('/backups') else 1)"
Health Check Parameters
- Interval: 1 hour between checks
- Timeout: 10 seconds per check
- Start Period: 10 seconds grace period on startup
- Retries: 3 consecutive failures before unhealthy status
Check Health Status
# Docker Compose
docker compose ps
# Docker inspect
docker inspect cloudflare-dns-backup --format='{{.State.Health.Status}}'
# Detailed health information
docker inspect cloudflare-dns-backup --format='{{json .State.Health}}' | jq '.'
Viewing Logs
# Real-time log streaming
docker compose logs -f cloudflare-dns-backup
# Last 100 lines
docker compose logs --tail=100 cloudflare-dns-backup
# Logs since specific time
docker compose logs --since 2025-01-18T14:00:00 cloudflare-dns-backup
# Export logs to file
docker compose logs --no-color cloudflare-dns-backup > backup-logs.txt
Log Format
TIMESTAMP - MODULE - LEVEL - MESSAGE
Example Log Output
2025-01-18 14:30:15 - __main__ - INFO - ============================================================
2025-01-18 14:30:15 - __main__ - INFO - Starting scheduled backup at 2025-01-18 14:30:15
2025-01-18 14:30:15 - __main__ - INFO - ============================================================
2025-01-18 14:30:16 - src.backup - INFO - Starting backup process...
2025-01-18 14:30:16 - src.backup - INFO - Backup configuration: Config(api_token='***', backup_schedule='0 2 * * *', backup_zones=None, backup_path='/backups')
2025-01-18 14:30:16 - src.backup - INFO - Fetching zones from Cloudflare...
2025-01-18 14:30:17 - src.backup - INFO - Found 3 total zones in account
2025-01-18 14:30:17 - src.backup - INFO - Backing up 3 zones...
2025-01-18 14:30:17 - src.backup - INFO - Exporting zone: example.com (ID: abc123def456)
2025-01-18 14:30:18 - src.backup - INFO - Successfully exported example.com
2025-01-18 14:30:18 - src.backup - INFO - Saved backup to /backups/example.com/20250118_143018.zone
2025-01-18 14:30:19 - src.backup - INFO - Exporting zone: example.org (ID: def456ghi789)
2025-01-18 14:30:20 - src.backup - INFO - Successfully exported example.org
2025-01-18 14:30:20 - src.backup - INFO - Saved backup to /backups/example.org/20250118_143020.zone
2025-01-18 14:30:20 - src.backup - INFO - Cleaned up 2 backup(s) older than 14 days
2025-01-18 14:30:21 - src.backup - INFO - Exporting zone: example.net (ID: ghi789jkl012)
2025-01-18 14:30:22 - src.backup - INFO - Successfully exported example.net
2025-01-18 14:30:22 - src.backup - INFO - Saved backup to /backups/example.net/20250118_143022.zone
2025-01-18 14:30:22 - src.backup - INFO - Backup complete: 3/3 successful, 0 failed
2025-01-18 14:30:22 - __main__ - INFO - ============================================================
2025-01-18 14:30:22 - __main__ - INFO - Backup Summary:
2025-01-18 14:30:22 - __main__ - INFO - Total zones: 3
2025-01-18 14:30:22 - __main__ - INFO - Successful: 3
2025-01-18 14:30:22 - __main__ - INFO - Failed: 0
2025-01-18 14:30:22 - __main__ - INFO - ============================================================
Log Levels
- INFO: Normal operations, backup progress, summaries
- WARNING: Non-critical issues, missing configured zones, cleanup failures
- ERROR: Backup failures, API errors, authentication problems
- DEBUG: Detailed debugging information (requires code modification)
Syslog Integration
# docker-compose.yml
services:
cloudflare-dns-backup:
logging:
driver: syslog
options:
syslog-address: "tcp://syslog.example.com:514"
        tag: "cloudflare-dns-backup"
JSON Logging
services:
cloudflare-dns-backup:
logging:
driver: json-file
options:
max-size: "10m"
        max-file: "3"
Fluentd Integration
services:
cloudflare-dns-backup:
logging:
driver: fluentd
options:
fluentd-address: "localhost:24224"
        tag: "cloudflare.dns.backup"
Key Metrics to Monitor
- Backup Success Rate
  - Parse logs for successful/failed backup counts
  - Alert on consecutive failures
- Backup Duration
  - Track time between "Starting backup" and "Backup complete"
  - Alert on unusual delays
- Storage Usage
  # Monitor backup directory size
  du -sh ./backups/
  df -h ./backups/
- Container Health
  - Monitor health check status
  - Alert on unhealthy state
Example Monitoring Script
#!/bin/bash
# monitor-backup.sh
BACKUP_DIR="./backups"
CONTAINER_NAME="cloudflare-dns-backup"
# Check container health
HEALTH=$(docker inspect ${CONTAINER_NAME} --format='{{.State.Health.Status}}')
if [[ "${HEALTH}" != "healthy" ]]; then
echo "ALERT: Container unhealthy - ${HEALTH}"
fi
# Check backup age
LATEST_BACKUP=$(find ${BACKUP_DIR} -name "*.zone" -type f -printf '%T@ %p\n' | \
sort -rn | head -1 | cut -d' ' -f2-)
BACKUP_AGE=$(($(date +%s) - $(stat -c %Y "${LATEST_BACKUP}")))
MAX_AGE=$((24 * 3600)) # 24 hours
if [[ ${BACKUP_AGE} -gt ${MAX_AGE} ]]; then
echo "ALERT: No backup in last 24 hours"
fi
# Check storage usage
USAGE=$(df ${BACKUP_DIR} | tail -1 | awk '{print $5}' | sed 's/%//')
if [[ ${USAGE} -gt 80 ]]; then
echo "ALERT: Backup storage over 80% full"
fi
Prometheus Integration Example
# Add to backup.py for metrics export
from prometheus_client import Counter, Histogram, start_http_server
backup_total = Counter('backup_total', 'Total backups attempted')
backup_success = Counter('backup_success', 'Successful backups')
backup_failed = Counter('backup_failed', 'Failed backups')
backup_duration = Histogram('backup_duration_seconds', 'Backup duration')
# Start metrics server
start_http_server(8000)
Enable Verbose Logging
Modify src/main.py to enable DEBUG level:
logging.basicConfig(
level=logging.DEBUG, # Changed from INFO
format="%(asctime)s - %(name)s - %(levelname)s - %(message)s",
handlers=[logging.StreamHandler(sys.stdout)],
)
Rebuild and restart:
docker compose build
docker compose up -d
Interactive Debugging
# Access container shell
docker compose exec cloudflare-dns-backup /bin/bash
# Check Python environment
python --version
pip list
# Verify environment variables
env | grep -E '(CLOUDFLARE|BACKUP)'
# Test Cloudflare API connectivity
python -c "from cloudflare import Cloudflare; c=Cloudflare(api_token='${CLOUDFLARE_API_TOKEN}'); print(list(c.zones.list()))"
# Check file permissions
ls -la /backups
Configuration
- 1-5 DNS zones
- Daily backups at night
- 14-day retention
- Single server deployment
# docker-compose.yml
version: '3.8'
services:
cloudflare-dns-backup:
build: .
container_name: cloudflare-dns-backup
restart: unless-stopped
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
- BACKUP_SCHEDULE=0 2 * * *
- BACKUP_ZONES=all
volumes:
      - ./backups:/backups
Configuration
- Separate backups for dev, staging, production domains
- Different schedules per environment
- Multiple container instances
# docker-compose.yml
version: '3.8'
services:
backup-production:
build: .
container_name: backup-prod
restart: unless-stopped
environment:
- CLOUDFLARE_API_TOKEN=${PROD_API_TOKEN}
- BACKUP_SCHEDULE=0 */6 * * * # Every 6 hours
- BACKUP_ZONES=prod.example.com
volumes:
- ./backups/production:/backups
backup-staging:
build: .
container_name: backup-staging
restart: unless-stopped
environment:
- CLOUDFLARE_API_TOKEN=${STAGING_API_TOKEN}
- BACKUP_SCHEDULE=0 0 * * * # Daily at midnight
- BACKUP_ZONES=staging.example.com
volumes:
- ./backups/staging:/backups
backup-development:
build: .
container_name: backup-dev
restart: unless-stopped
environment:
- CLOUDFLARE_API_TOKEN=${DEV_API_TOKEN}
- BACKUP_SCHEDULE=0 2 * * 1 # Weekly on Monday
- BACKUP_ZONES=dev.example.com
volumes:
      - ./backups/development:/backups
Deployment Manifest
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cloudflare-dns-backup
namespace: infrastructure
spec:
replicas: 1
selector:
matchLabels:
app: cloudflare-dns-backup
template:
metadata:
labels:
app: cloudflare-dns-backup
spec:
containers:
- name: backup
image: your-registry/cloudflare-dns-backup:latest
env:
- name: CLOUDFLARE_API_TOKEN
valueFrom:
secretKeyRef:
name: cloudflare-credentials
key: api-token
- name: BACKUP_SCHEDULE
value: "0 */4 * * *"
- name: BACKUP_ZONES
value: "all"
volumeMounts:
- name: backups
mountPath: /backups
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
volumes:
- name: backups
persistentVolumeClaim:
claimName: dns-backup-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dns-backup-pvc
namespace: infrastructure
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: standard
---
apiVersion: v1
kind: Secret
metadata:
name: cloudflare-credentials
namespace: infrastructure
type: Opaque
stringData:
  api-token: your_api_token_here
Configuration
- Multiple backup instances
- Geographic distribution
- Different storage backends
# docker-compose.ha.yml
version: '3.8'
services:
backup-primary:
build: .
container_name: backup-primary
restart: unless-stopped
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
- BACKUP_SCHEDULE=0 2 * * *
- BACKUP_ZONES=all
volumes:
- ./backups/primary:/backups
- /mnt/nfs/dns-backups:/backups-nfs
backup-secondary:
build: .
container_name: backup-secondary
restart: unless-stopped
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
- BACKUP_SCHEDULE=30 2 * * * # 30 minutes after primary
- BACKUP_ZONES=all
volumes:
- ./backups/secondary:/backups
      - /mnt/s3fs/dns-backups:/backups-s3
GitLab CI Example
# .gitlab-ci.yml
stages:
- backup
dns-backup-pre-deploy:
stage: backup
image: docker:latest
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker run --rm
-e CLOUDFLARE_API_TOKEN=$CLOUDFLARE_API_TOKEN
-e BACKUP_ZONES=$DNS_ZONES
-v $(pwd)/backups:/backups
cloudflare-dns-backup:latest
python -m src.main --once
artifacts:
paths:
- backups/
expire_in: 30 days
only:
- master
    when: manual
GitHub Actions Example
# .github/workflows/dns-backup.yml
name: DNS Backup
on:
schedule:
- cron: '0 2 * * *' # Daily at 2 AM
workflow_dispatch: # Manual trigger
jobs:
backup:
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v3
- name: Run DNS Backup
env:
CLOUDFLARE_API_TOKEN: ${{ secrets.CLOUDFLARE_API_TOKEN }}
run: |
docker compose run --rm cloudflare-dns-backup python -m src.main --once
- name: Upload Backups
uses: actions/upload-artifact@v3
with:
name: dns-backups
path: backups/
retention-days: 30
- name: Sync to S3
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
run: |
          aws s3 sync backups/ s3://my-dns-backups/$(date +%Y-%m-%d)/ --delete
AWS S3 Integration
#!/bin/bash
# backup-to-s3.sh
BACKUP_DIR="./backups"
S3_BUCKET="s3://my-company-dns-backups"
DATE=$(date +%Y-%m-%d)
# Run backup
docker compose run --rm cloudflare-dns-backup python -m src.main --once
# Sync to S3
aws s3 sync ${BACKUP_DIR}/ ${S3_BUCKET}/${DATE}/ \
--storage-class STANDARD_IA \
--delete
# Lifecycle policy applied via S3 settings:
# - Move to Glacier after 30 days
# - Delete after 365 days
Google Cloud Storage
#!/bin/bash
# backup-to-gcs.sh
BACKUP_DIR="./backups"
GCS_BUCKET="gs://my-company-dns-backups"
DATE=$(date +%Y-%m-%d)
# Run backup
docker compose run --rm cloudflare-dns-backup python -m src.main --once
# Sync to GCS
gsutil -m rsync -r -d ${BACKUP_DIR}/ ${GCS_BUCKET}/${DATE}/
The application interacts with the following Cloudflare API endpoints:
Method: Bearer Token Authentication
Authorization: Bearer YOUR_API_TOKEN
All requests use token-based authentication passed via the Authorization header.
Endpoint: GET /zones
Purpose: Retrieve all DNS zones in the Cloudflare account
Request:
GET https://api.cloudflare.com/client/v4/zones
Authorization: Bearer YOUR_API_TOKEN
Response:
{
"success": true,
"errors": [],
"messages": [],
"result": [
{
"id": "abc123def456",
"name": "example.com",
"status": "active",
"paused": false,
"type": "full",
"development_mode": 0,
"name_servers": [
"ns1.cloudflare.com",
"ns2.cloudflare.com"
]
}
],
"result_info": {
"page": 1,
"per_page": 20,
"total_pages": 1,
"count": 1,
"total_count": 1
}
}
Rate Limit: 1200 requests per 5 minutes
Python SDK Usage:
zones = client.zones.list()
Endpoint: GET /zones/{zone_id}/dns_records/export
Purpose: Export all DNS records for a zone in BIND format
Request:
GET https://api.cloudflare.com/client/v4/zones/abc123def456/dns_records/export
Authorization: Bearer YOUR_API_TOKEN
Response: Plain text BIND zone file
$ORIGIN example.com.
@ 300 IN A 192.0.2.1
www 300 IN A 192.0.2.1
Rate Limit: 1200 requests per 5 minutes
Python SDK Usage:
response = client.get(
f"/zones/{zone_id}/dns_records/export",
cast_to=str
)
Endpoint: POST /zones/{zone_id}/dns_records/import
Purpose: Import DNS records from BIND zone file
Request:
POST https://api.cloudflare.com/client/v4/zones/abc123def456/dns_records/import
Authorization: Bearer YOUR_API_TOKEN
Content-Type: multipart/form-data
--boundary
Content-Disposition: form-data; name="file"; filename="backup.zone"
Content-Type: text/plain
[BIND zone file content]
--boundary--
Response:
{
"success": true,
"errors": [],
"messages": [],
"result": {
"recs_added": 42,
"total_records_parsed": 42
}
}
Rate Limit: 1200 requests per 5 minutes
Default Rate Limits:
- 1200 requests per 5 minutes per API token
- Applies across all endpoints
Best Practices:
- Application processes zones sequentially (not parallel)
- Natural throttling due to I/O operations
- Typical backup: 2-3 API calls per zone (list + export)
- Example: 100 zones = ~200-300 API calls = well under limit
Rate Limit Headers:
X-RateLimit-Limit: 1200
X-RateLimit-Remaining: 1195
X-RateLimit-Reset: 1642521600
Authentication Error (401):
{
"success": false,
"errors": [
{
"code": 10000,
"message": "Authentication error"
}
]
}
Insufficient Permissions (403):
{
"success": false,
"errors": [
{
"code": 10000,
"message": "Insufficient permissions"
}
]
}
Zone Not Found (404):
{
"success": false,
"errors": [
{
"code": 1001,
"message": "Zone not found"
}
]
}
Rate Limit Exceeded (429):
{
"success": false,
"errors": [
{
"code": 10000,
"message": "Rate limit exceeded"
}
]
}
- Official API Docs: https://developers.cloudflare.com/api/
- Zones API: https://developers.cloudflare.com/api/operations/zones-get
- DNS Records: https://developers.cloudflare.com/api/operations/dns-records-for-a-zone-list-dns-records
- Python SDK: https://github.com/cloudflare/cloudflare-python
Typical Backup Duration per Zone:
- API call latency: 200-500ms per request
- Zone export: 500ms-2s depending on record count
- File I/O: 10-50ms per file
- Total per zone: 1-3 seconds
Scaling Examples:
| Zones | Estimated Duration | API Calls |
|---|---|---|
| 1 | 2-3 seconds | 2 |
| 10 | 20-30 seconds | 20 |
| 50 | 2-3 minutes | 100 |
| 100 | 4-6 minutes | 200 |
| 500 | 20-30 minutes | 1000 |
Factors Affecting Performance:
- Network latency to Cloudflare API
- Number of DNS records per zone
- Disk I/O speed for backup storage
- Container resource limits
CPU Usage:
- Idle: <1% CPU
- During backup: 5-15% CPU (I/O bound, not CPU intensive)
- Recommended: 100m CPU (0.1 cores) minimum
Memory Usage:
- Base application: 30-50 MB
- During backup: 50-100 MB
- Peak with large zones: 100-150 MB
- Recommended: 128 MB minimum, 256 MB comfortable
Disk I/O:
- Write operations: Sequential writes during backup
- IOPS: Low (1-5 IOPS per backup)
- Throughput: Minimal (<1 MB/s typical)
Network:
- Bandwidth: 10-100 KB/s during backup
- Connections: 1-2 concurrent HTTPS connections
- Total data: Depends on zone size (typically KB to MB)
Parallel Zone Processing (Future Enhancement):
# Potential parallel implementation
from concurrent.futures import ThreadPoolExecutor

def backup_all_parallel(self):
    zones = self.get_zones()
    with ThreadPoolExecutor(max_workers=5) as executor:
        results = list(executor.map(self.backup_zone, zones))
    return results

Considerations:
- Reduces total backup time
- Increases concurrent API calls
- Must respect rate limits
- Higher memory usage
Incremental Backups (Future Enhancement):
- Compare zone serial numbers
- Skip unchanged zones
- Reduces API calls and storage
Compression:
# Compress backups older than 7 days
find ./backups -name "*.zone" -mtime +7 -exec gzip {} \;
# Add to cron
0 3 * * * find /path/to/backups -name "*.zone" -mtime +7 -exec gzip {} \;
Benefits:
- 50-70% storage reduction
- Lower backup storage costs
- Faster network transfers for offsite copies
Measure Backup Duration:
# Add timing to logs
time docker compose run --rm cloudflare-dns-backup python -m src.main --once
Track Resource Usage:
# Real-time monitoring
docker stats cloudflare-dns-backup
# Get average stats
docker stats --no-stream cloudflare-dns-backup
Benchmark Storage Performance:
# Write speed test
dd if=/dev/zero of=./backups/test.tmp bs=1M count=100 conv=fdatasync
rm ./backups/test.tmp
# Measure IOPS
fio --name=random-write --ioengine=libaio --iodepth=1 --rw=randwrite \
--bs=4k --direct=1 --size=100M --numjobs=1 --runtime=60 \
--filename=./backups/test.fio
DNS Resolution:
# Pre-resolve Cloudflare API endpoint
echo "104.16.132.229 api.cloudflare.com" >> /etc/hosts
Note: Cloudflare's anycast IPs can change, so pinning an address in /etc/hosts is fragile; a local caching resolver is a safer way to cut lookup latency.
Connection Pooling: The Cloudflare Python SDK automatically manages connection pooling and reuse.
Retry Logic: The SDK includes automatic retry with exponential backoff for transient failures.
Principle of Least Privilege
Create API tokens with minimum required permissions:
Required Permission:
- Zone → DNS → Read
Optional Restrictions:
- Zone Resources: Limit to specific zones only
- IP Address Filtering: Restrict to backup server IPs
- TTL: Set token expiration date
Token Creation Steps:
- Navigate to: https://dash.cloudflare.com/profile/api-tokens
- Click "Create Token"
- Select "Create Custom Token"
- Configure:
- Token name: "DNS Backup - Production"
- Permissions: Zone:DNS:Read
- Zone Resources: Include → Specific zone → example.com
- IP Address Filtering: Is in → 203.0.113.10
- TTL: End Date → 2026-01-01
- Click "Continue to summary"
- Review and click "Create Token"
- Copy token immediately (shown only once)
Token Storage:
# Never commit tokens to version control
echo ".env" >> .gitignore
echo "*.env" >> .gitignore
# Set restrictive file permissions
chmod 600 .env
# Verify permissions
ls -la .env
# Should show: -rw------- (only owner can read/write)
Rotation Schedule:
- Rotate tokens every 90 days minimum
- Immediately rotate if compromise suspected
- Document rotation in change management system
Rotation Process:
# 1. Create new token in Cloudflare dashboard
# 2. Update .env with new token
nano .env
# 3. Test new token
docker compose run --rm cloudflare-dns-backup python -m src.main --once
# 4. If successful, restart service
docker compose restart
# 5. Revoke old token in Cloudflare dashboard
# 6. Document rotation date
Automated Rotation (Advanced): Consider integrating with secrets management systems:
- HashiCorp Vault
- AWS Secrets Manager
- Azure Key Vault
- Google Secret Manager
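Whichever backend holds the token, the application needs it at startup. A hedged sketch of a loader that prefers a `*_FILE`-style secret (as conventionally used with Docker and Kubernetes secrets) before falling back to the plain environment variable; the `CLOUDFLARE_API_TOKEN_FILE` variable is an assumed convention here, not something the current code reads:

```python
# Sketch: resolve the API token from a mounted secrets file if present,
# otherwise from the environment. CLOUDFLARE_API_TOKEN_FILE is an
# assumed *_FILE convention, not current application behavior.
import os

def load_api_token() -> str:
    """Prefer a secrets file; fall back to a plain environment variable."""
    token_file = os.environ.get("CLOUDFLARE_API_TOKEN_FILE")
    if token_file:
        with open(token_file) as fh:
            return fh.read().strip()
    token = os.environ.get("CLOUDFLARE_API_TOKEN")
    if not token:
        raise ValueError("Required environment variable CLOUDFLARE_API_TOKEN is not set")
    return token
```

The `.strip()` matters: secrets files frequently end with a trailing newline, which would otherwise be sent inside the Authorization header.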
Development vs Production:
Development:
# .env.development
CLOUDFLARE_API_TOKEN=dev_token_with_limited_access
BACKUP_ZONES=dev.example.com
Production:
# .env.production
CLOUDFLARE_API_TOKEN=prod_token_with_specific_zone_access
BACKUP_ZONES=all
Kubernetes Secrets:
# Create secret
kubectl create secret generic cloudflare-credentials \
--from-literal=api-token='your_token_here' \
-n infrastructure
# Reference in deployment
env:
- name: CLOUDFLARE_API_TOKEN
valueFrom:
secretKeyRef:
name: cloudflare-credentials
key: api-token
Docker Secrets:
# Create secret
echo "your_token_here" | docker secret create cloudflare_token -
# Reference in compose file
version: '3.8'
services:
cloudflare-dns-backup:
secrets:
- cloudflare_token
environment:
- CLOUDFLARE_API_TOKEN_FILE=/run/secrets/cloudflare_token
secrets:
cloudflare_token:
external: true
File System Permissions:
# Set directory ownership
sudo chown -R backup-user:backup-group ./backups
# Restrict permissions
chmod 700 ./backups # Owner only
chmod 600 ./backups/*/*.zone # Owner read/write only
# Verify
ls -la ./backups
Encryption at Rest:
Option 1: Encrypted Volume
# Create encrypted volume
cryptsetup luksFormat /dev/sdb1
cryptsetup luksOpen /dev/sdb1 encrypted-backups
# Mount and use
mkfs.ext4 /dev/mapper/encrypted-backups
mount /dev/mapper/encrypted-backups /mnt/backups
Option 2: File-Level Encryption
# Encrypt backups after creation
find ./backups -name "*.zone" -exec gpg --encrypt --recipient backup@example.com {} \;
# Automated encryption script
#!/bin/bash
for zone_file in ./backups/*/*.zone; do
    gpg --encrypt --batch --yes --recipient backup@example.com "${zone_file}"
    rm "${zone_file}"  # Remove unencrypted original
done
Option 3: Encrypted Cloud Storage
- AWS S3: Server-side encryption (SSE-S3, SSE-KMS)
- Google Cloud: Customer-managed encryption keys
- Azure: Storage Service Encryption
Firewall Rules:
# Allow outbound HTTPS to Cloudflare only
iptables -A OUTPUT -p tcp -d 104.16.0.0/12 --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp -d 172.64.0.0/13 --dport 443 -j ACCEPT
iptables -A OUTPUT -p tcp --dport 443 -j DROP
Private Network Deployment:
# docker-compose.yml with network isolation
version: '3.8'
services:
cloudflare-dns-backup:
networks:
- backup-network
networks:
backup-network:
driver: bridge
internal: false # Requires internet access
ipam:
config:
- subnet: 172.20.0.0/24
VPN/Bastion Access: For highly sensitive environments:
- Deploy backup service in private subnet
- Route traffic through VPN or bastion host
- Restrict API token to VPN exit IP
Logging:
# Enable audit logging
docker compose logs cloudflare-dns-backup | tee -a /var/log/dns-backup-audit.log
# Log rotation
cat > /etc/logrotate.d/dns-backup << EOF
/var/log/dns-backup-audit.log {
daily
rotate 90
compress
delaycompress
notifempty
create 640 root adm
}
EOF
Access Auditing:
# Monitor backup directory access
auditctl -w /path/to/backups -p rwa -k dns-backup-access
# View audit logs
ausearch -k dns-backup-access
Compliance Considerations:
- SOC 2: Maintain backup logs for audit trail
- GDPR: Document data retention policies
- HIPAA: Encrypt backups at rest and in transit
- PCI DSS: Restrict access to cardholder data environments
- API token created with minimum required permissions
- API token restricted to specific zones (if applicable)
- API token restricted to source IP addresses
- API token has expiration date set
- .env file is in .gitignore
- .env file permissions set to 600
- Backup directory permissions set to 700
- Token rotation schedule documented
- Backup encryption implemented (if required)
- Network firewall rules configured
- Audit logging enabled
- Access to backup server restricted
- Secrets stored in dedicated management system
- Disaster recovery plan documented
- Security review completed
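Some checklist items above can be verified programmatically at startup. A minimal sketch, not implemented in the current application, that checks a file's permission bits against an expected mode such as 0o600 for `.env` or 0o700 for the backup directory:

```python
# Sketch: verify that a secret file or backup directory has exactly the
# expected permission bits (e.g. 0o600 for .env, 0o700 for ./backups).
# Illustrative only; the application does not enforce this today.
import os
import stat

def mode_matches(path: str, want_mode: int) -> bool:
    """True if path's permission bits equal want_mode exactly."""
    return stat.S_IMODE(os.stat(path).st_mode) == want_mode
```

Refusing to start when `mode_matches(".env", 0o600)` is False turns a silent misconfiguration into an immediate, visible error.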
Symptom:
ValueError: Required environment variable CLOUDFLARE_API_TOKEN is not set
Cause: Environment variable not set or .env file not loaded
Solutions:
1. Verify .env file exists:
   ls -la .env
   cat .env  # Check if CLOUDFLARE_API_TOKEN is set
2. Check file format:
   # Correct format
   CLOUDFLARE_API_TOKEN=your_token_here
   # Common mistakes
   CLOUDFLARE_API_TOKEN = your_token_here    # Extra spaces (may work but avoid)
   CLOUDFLARE_API_TOKEN="your_token_here"    # Quotes (acceptable but unnecessary)
3. Verify Docker Compose loads .env:
   docker compose config  # Shows resolved configuration
4. Manual environment variable:
   export CLOUDFLARE_API_TOKEN=your_token_here
   docker compose up
Symptom:
AuthenticationError: Authentication failed
cloudflare.APIStatusError: Error code: 10000 - Authentication error
Causes and Solutions:
1. Invalid or Expired Token:
   # Test token validity
   curl -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
     -H "Authorization: Bearer YOUR_TOKEN" \
     -H "Content-Type: application/json"
   # Expected response if valid:
   # {"success":true,"messages":[],"result":{"id":"...","status":"active"}}
2. Token Copied Incorrectly:
   - Remove any whitespace or newlines
   - Ensure the entire token was copied
   - Token should be ~40 alphanumeric characters
3. Token Revoked:
   - Create a new token in the Cloudflare dashboard
   - Update the .env file
   - Restart the container
Symptom:
cloudflare.APIStatusError: Error code: 10000 - Insufficient permissions
Solutions:
1. Verify Token Permissions:
   - Log in to Cloudflare dashboard
   - Navigate to Profile → API Tokens
   - Find your token → Click "Edit"
   - Verify permissions include: Zone → DNS → Read
2. Create New Token with Correct Permissions:
   - See Installation section for detailed steps
   - Minimum required: Zone:DNS:Read
   - Update .env with the new token
Symptom:
WARNING - Configured zones not found in account: example.com
Causes and Solutions:
1. Domain Spelling:
   # Check the exact domain name in Cloudflare
   BACKUP_ZONES=example.com        # Correct
   BACKUP_ZONES=www.example.com    # Incorrect (should be the root domain)
   BACKUP_ZONES=example.com.       # Incorrect (no trailing dot)
2. Domain Not in Account:
   - Verify the domain exists in the Cloudflare dashboard
   - Ensure the token has access to the zone
   - Check for typos in the domain name
3. Token Zone Restrictions:
   - The token may be restricted to specific zones
   - Create a new token with broader access, or
   - Add the missing zones to the token's permissions
Symptom: Container runs but no files in ./backups directory
Diagnostic Steps:
1. Check Container Logs:
   docker compose logs cloudflare-dns-backup
2. Verify Volume Mount:
   # Inspect container
   docker inspect cloudflare-dns-backup | grep -A 10 Mounts
   # Expected output should show:
   # "Source": "/path/to/your/project/backups"
   # "Destination": "/backups"
3. Check Directory Permissions:
   # On host
   ls -ld ./backups  # Should be writable
   # If permission denied
   sudo chown -R $USER:$USER ./backups
   chmod 755 ./backups
4. Test Backup Manually:
   docker compose run --rm cloudflare-dns-backup python -m src.main --once
5. Verify Zones Found:
   # Check logs for:
   # "Found X total zones in account"
   # "Backing up X zones..."
Symptom: docker compose ps shows container in "Exited" state
Diagnostic Steps:
1. Check Exit Code:
   docker compose ps  # Note exit code (e.g., "Exited (1)")
2. View Startup Logs:
   docker compose logs cloudflare-dns-backup
3. Common Causes:
   - Exit Code 1: Configuration error (check logs)
   - Exit Code 137: Out of memory (increase container memory)
   - Exit Code 139: Segmentation fault (check Python dependencies)
4. Test Interactively:
   docker compose run --rm cloudflare-dns-backup /bin/bash
   # Inside container:
   python -m src.main --once
Symptom: Initial backup works but scheduled backups don't run
Diagnostic Steps:
1. Verify Schedule Format:
   # Check docker compose logs for:
   # "Scheduling daily backup at HH:MM"
   docker compose logs | grep -i schedul
2. Test Schedule Parsing:
   # Inside container or locally
   python -c "
   from src.main import parse_cron_schedule
   print(parse_cron_schedule('0 2 * * *'))
   "
3. Check Next Run Time:
   # Logs should show:
   # "Next backup scheduled for: YYYY-MM-DD HH:MM:SS"
   docker compose logs | grep "Next backup"
4. Verify Container Running:
   docker compose ps  # State should be "Up" not "Exited"
5. Time Zone Issues:
   # Check container time
   docker compose exec cloudflare-dns-backup date
   # Set timezone if needed (docker-compose.yml):
   environment:
     - TZ=America/New_York
Symptom: Backup directory consuming excessive disk space
Solutions:
1. Check Current Usage:
   du -sh ./backups
   du -h --max-depth=2 ./backups | sort -rh | head -20
2. Verify Retention Working:
   # Check for old backups (should be cleaned automatically)
   find ./backups -name "*.zone" -mtime +14
   # If old files remain, check logs for cleanup activity:
   docker compose logs | grep -i cleanup
3. Manual Cleanup:
   # Remove backups older than 14 days
   find ./backups -name "*.zone" -mtime +14 -delete
4. Compress Old Backups:
   # Compress backups older than 7 days
   find ./backups -name "*.zone" -mtime +7 -exec gzip {} \;
5. Adjust Retention Period: Edit src/backup.py:
   # Change retention days
   RETENTION_DAYS = 7  # Changed from 14
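The retention pass behind step 2 can be sketched in a few lines; this is an illustrative equivalent of the `find ... -mtime +N -delete` command above, not the exact code in src/backup.py (the function name is hypothetical):

```python
# Sketch: mtime-based retention cleanup over the backup directory tree,
# equivalent to `find ./backups -name "*.zone" -mtime +N -delete`.
import time
from pathlib import Path

def cleanup_old_backups(backup_dir: str, retention_days: int = 14) -> int:
    """Delete *.zone files older than retention_days; return count removed."""
    cutoff = time.time() - retention_days * 86400
    removed = 0
    for zone_file in Path(backup_dir).rglob("*.zone"):
        if zone_file.stat().st_mtime < cutoff:
            zone_file.unlink()
            removed += 1
    return removed
```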
Symptom: Backups take longer than expected
Diagnostic Steps:
1. Time the Backup:
   time docker compose run --rm cloudflare-dns-backup python -m src.main --once
2. Check Zone Count:
   # In logs, look for:
   # "Found X total zones in account"
   # Expected: 1-3 seconds per zone
3. Network Latency:
   # Test API connectivity
   ping api.cloudflare.com
   # Measure HTTPS latency
   curl -w "@curl-format.txt" -o /dev/null -s https://api.cloudflare.com/client/v4/user/tokens/verify
   # curl-format.txt:
   #   time_namelookup: %{time_namelookup}\n
   #   time_connect: %{time_connect}\n
   #   time_appconnect: %{time_appconnect}\n
   #   time_redirect: %{time_redirect}\n
   #   time_total: %{time_total}\n
4. Disk I/O:
   # Monitor during backup
   iostat -x 1
5. Container Resources:
   # Check if CPU/memory limited
   docker stats cloudflare-dns-backup
Symptom: Cannot restore backup via Cloudflare dashboard or API
Solutions:
1. Verify File Format:
   # Check first few lines
   head -20 ./backups/example.com/20250118_143022.zone
   # Should start with:
   # $ORIGIN example.com.
2. Check for Corruption:
   # Validate file encoding
   file ./backups/example.com/20250118_143022.zone
   # Should be: ASCII text or UTF-8 Unicode text
3. Test with Different Backup:
   - Try restoring an older backup
   - Try restoring a different zone
4. API Import Error Details:
   # Look at the full error response (curl sets the multipart
   # Content-Type header, including the boundary, automatically)
   curl -X POST "https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records/import" \
     -H "Authorization: Bearer YOUR_TOKEN" \
     --form "file=@./backups/example.com/backup.zone" | jq '.'
5. Contact Support:
   - If issues persist, contact Cloudflare support
   - Provide: zone ID, backup timestamp, error messages
Symptom: Health check fails
Diagnostic Steps:
1. Check Health Status:
   docker inspect cloudflare-dns-backup --format='{{json .State.Health}}' | jq '.'
2. View Health Log:
   docker inspect cloudflare-dns-backup --format='{{range .State.Health.Log}}{{.Output}}{{end}}'
3. Manual Health Check:
   docker compose exec cloudflare-dns-backup python -c "import os, sys; sys.exit(0 if os.path.exists('/backups') else 1)"
   echo $?  # Should be 0
4. Common Causes:
   - Backup directory not mounted
   - Volume mount failed
   - Permissions issue
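The manual check above only proves that /backups exists. A stricter check could fail when no backup has completed recently; this sketch assumes the application touches a marker file such as /backups/.last_success after each successful run, which is a hypothetical convention, not something the current code writes:

```python
# Sketch: freshness-based health check. Assumes a marker file (e.g.
# /backups/.last_success) is touched after each successful backup;
# that marker is hypothetical, not written by the current code.
import os
import time

def healthy(marker: str, max_age_hours: int = 26) -> bool:
    """True if marker exists and is fresher than max_age_hours."""
    try:
        age = time.time() - os.stat(marker).st_mtime
    except FileNotFoundError:
        return False
    return age < max_age_hours * 3600
```

A container HEALTHCHECK could then exit nonzero when `healthy('/backups/.last_success')` is False; the 26-hour default leaves slack around a daily schedule.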
Information to Provide:
When requesting help, include:
1. System Information:
   docker --version
   docker compose version
   uname -a
2. Container Logs:
   docker compose logs --tail=100 cloudflare-dns-backup > logs.txt
3. Configuration (sanitized):
   # Remove sensitive data first!
   sed 's/CLOUDFLARE_API_TOKEN=.*/CLOUDFLARE_API_TOKEN=REDACTED/' .env > config.txt
4. Error Messages: Full error output from logs
5. Steps to Reproduce: What you did before the error occurred
Support Channels:
- GitHub Issues: https://github.com/yourusername/cloudflare-dns-backup/issues
- Cloudflare Community: https://community.cloudflare.com/
- Cloudflare API Docs: https://developers.cloudflare.com/api/
# Clone repository
git clone https://github.com/yourusername/cloudflare-dns-backup.git
cd cloudflare-dns-backup
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Install development dependencies
pip install pytest pytest-cov black flake8 mypy
# Run tests
pytest
# Code formatting
black src/
# Linting
flake8 src/
# Type checking
mypy src/
cloudflare-dns-backup/
├── src/
│ ├── __init__.py # Package initialization
│ ├── main.py # Main entry point and scheduler
│ ├── backup.py # Core backup functionality
│ └── config.py # Configuration management
├── backups/ # Backup storage directory (created automatically)
├── .env.example # Example environment configuration
├── .env # Local environment configuration (gitignored)
├── .gitignore # Git ignore rules
├── Dockerfile # Container image definition
├── docker-compose.yml # Docker Compose configuration
├── requirements.txt # Python dependencies
├── LICENSE # MIT License
└── README.md # This file
# Run all tests
pytest
# Run with coverage
pytest --cov=src --cov-report=html
# Run specific test
pytest tests/test_backup.py
# View coverage report
open htmlcov/index.html
# Build image
docker build -t cloudflare-dns-backup:latest .
# Build with specific tag
docker build -t cloudflare-dns-backup:v1.0.0 .
# Multi-platform build
docker buildx build --platform linux/amd64,linux/arm64 -t cloudflare-dns-backup:latest .
This project follows:
- PEP 8: Python style guide
- Black: Code formatting (line length: 88)
- Type Hints: Python 3.11+ type annotations
- Docstrings: Google style docstrings
- Create feature branch
- Make changes
- Run tests and linting
- Update documentation
- Submit pull request
Contributions are welcome! Please follow these guidelines:
1. Fork the Repository
   # Click "Fork" button on GitHub
   git clone https://github.com/YOUR_USERNAME/cloudflare-dns-backup.git
   cd cloudflare-dns-backup
2. Create Feature Branch
   git checkout -b feature/your-feature-name
3. Make Changes
   - Write clear, commented code
   - Follow existing code style
   - Add tests for new functionality
   - Update documentation
4. Test Changes
   # Run tests
   pytest
   # Check code style
   black src/
   flake8 src/
   # Test Docker build
   docker build -t cloudflare-dns-backup:test .
5. Commit Changes
   git add .
   git commit -m "Add feature: description of feature"
   Follow Conventional Commits:
   - feat: New feature
   - fix: Bug fix
   - docs: Documentation changes
   - test: Test additions/changes
   - refactor: Code refactoring
6. Push to Fork
   git push origin feature/your-feature-name
7. Open Pull Request
   - Go to the original repository on GitHub
   - Click "New Pull Request"
   - Select your fork and branch
   - Provide a clear description of changes
   - Reference any related issues
Code Enhancements:
- Parallel backup processing
- Incremental backup support
- Additional backup formats (JSON, CSV)
- Backup verification/validation
- Backup encryption built-in
Documentation:
- Improve existing documentation
- Add more examples
- Create video tutorials
- Translate documentation
Testing:
- Increase test coverage
- Add integration tests
- Performance benchmarks
- Security testing
Features:
- Webhook notifications
- Metrics export (Prometheus)
- Web UI for management
- Backup comparison tools
- Multi-cloud storage support
Pull requests will be reviewed for:
- Code quality and style
- Test coverage
- Documentation updates
- Breaking changes
- Security implications
- API Documentation: https://developers.cloudflare.com/api/
- Python SDK: https://github.com/cloudflare/cloudflare-python
- Schedule Library: https://schedule.readthedocs.io/
- Docker Best Practices: https://docs.docker.com/develop/dev-best-practices/
This project is licensed under the MIT License.
MIT License
Copyright (c) 2025 [Your Name]
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
See LICENSE file for full license text.
This project is built with and inspired by:
- Cloudflare Python SDK: Official Cloudflare API client
- schedule: Python job scheduling library
- python-dotenv: Environment variable management
- Cloudflare: For providing excellent DNS infrastructure and API
- Docker Community: For containerization best practices
- Open Source Community: For continuous improvements and contributions
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: This README and Cloudflare API Docs
When reporting bugs, please include:
- Description of the issue
- Steps to reproduce
- Expected behavior
- Actual behavior
- Logs and error messages
- System information (OS, Docker version)
- Configuration (with sensitive data removed)
Feature requests are welcome! Please:
- Check existing issues first
- Describe the feature and use case
- Explain why it would be useful
- Provide examples if applicable
Initial Release
Features:
- Automated scheduled backups using cron syntax
- Support for backing up all zones or specific domains
- Docker and Docker Compose support
- BIND format zone file export
- Comprehensive logging and error handling
- Automatic 14-day retention with cleanup
- Health checks for container monitoring
- One-time backup mode
- Immediate backup on startup
Technical:
- Python 3.11 base
- Cloudflare Python SDK integration
- Schedule library for job scheduling
- Docker multi-stage builds
- Environment-based configuration
Documentation:
- Comprehensive README
- Security best practices
- Troubleshooting guide
- Deployment scenarios
- API endpoint documentation
Project: Cloudflare DNS Backup
Version: 1.0.0
License: MIT
Author: [Your Name]
Repository: https://github.com/yourusername/cloudflare-dns-backup
Documentation: https://github.com/yourusername/cloudflare-dns-backup/blob/main/README.md
Issues: https://github.com/yourusername/cloudflare-dns-backup/issues