Introduction

Most company websites I see are still running WordPress with a PHP server and MySQL behind them. For a site that shows your services, team, and a contact form? That's a lot of moving parts for what could be a few static HTML files. The server is on 24/7, the CMS demands constant patching, and the attack surface is huge.

We use a different approach: a static site on AWS S3, served through CloudFront, all provisioned with Terraform. It costs almost nothing compared to traditional hosting, loads faster, and has practically no security exposure. Here's why we went this route, and how you can do the same.

The Case Against WordPress for Simple Sites

Cost

WordPress hosting in Switzerland adds up. Infomaniak starts at CHF 10.91/month (~CHF 131/year). Hostpoint is CHF 15.90/month (~CHF 191/year). Go managed with WP Engine or Kinsta and you're at CHF 20–50/month. That PHP server is running around the clock, even if the site gets 10 visits a day.

A static site on AWS S3 + CloudFront runs about CHF 1–5 per year. The domain renewal ends up being the biggest line item. S3 storage is fractions of a cent per GB, and CloudFront's free tier gives you 1 TB of data transfer per month.

Solution                                Annual cost
Infomaniak (WordPress)                  ~CHF 131
Hostpoint (WordPress)                   ~CHF 191
Managed WordPress (WP Engine, Kinsta)   CHF 240–600
AWS S3 + CloudFront                     ~CHF 1–5
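
To see where the ~CHF 1–5 figure comes from, here's a rough back-of-the-envelope calculation in Python. The prices are illustrative list prices (S3 Standard at about $0.023/GB-month, GET requests at $0.0004 per 1,000), and the traffic numbers are made-up assumptions for a small company site:

```python
# Rough annual cost estimate for a small static site on S3 + CloudFront.
# Assumptions (illustrative): 0.1 GB of assets, ~10k requests/month,
# transfer volume well inside CloudFront's 1 TB/month free tier.

S3_STORAGE_USD_PER_GB_MONTH = 0.023   # S3 Standard list price (us-east-1)
S3_GET_USD_PER_1000 = 0.0004          # S3 GET request pricing
SITE_SIZE_GB = 0.1                    # HTML, CSS, JS, images
REQUESTS_PER_MONTH = 10_000

storage = S3_STORAGE_USD_PER_GB_MONTH * SITE_SIZE_GB * 12
requests = S3_GET_USD_PER_1000 * (REQUESTS_PER_MONTH / 1000) * 12
transfer = 0.0                        # within the CloudFront free tier

annual_usd = storage + requests + transfer
print(f"~${annual_usd:.2f}/year")     # cents per year; the domain costs more
```

In practice most requests are served from CloudFront's cache and never reach S3 at all, so even this tiny estimate overstates the S3 request bill.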

Security

The Sucuri 2023 Hacked Website Report tells the story: 95.5% of all CMS infections hit WordPress. 39.1% of compromised sites were on outdated CMS versions, and 49.21% had backdoors installed. Plugins are the main attack vector. Elementor Pro, Advanced Custom Fields, and similar plugins regularly ship with critical vulnerabilities.

A static HTML site has zero server-side attack surface. No PHP interpreter, no database, no admin panel, no plugin ecosystem. Files go straight from a CDN to the browser. Nothing to exploit.

Performance

Every WordPress request runs PHP, hits the database, and assembles HTML on the fly. Typical Time to First Byte (TTFB): 200–800ms. A static site on CloudFront? TTFB of 10–50ms. The HTML is already built and cached at the nearest edge location.

Google's Core Web Vitals want a Largest Contentful Paint (LCP) under 2.5 seconds. Static sites blow past that threshold. CloudFront handles gzip and Brotli compression automatically, and a typical static page is under 100 KB.
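
CloudFront's compression is transparent, but the effect is easy to demonstrate locally. This sketch uses Python's gzip module on a repetitive HTML-like string; it's illustrative only, since CloudFront applies gzip or Brotli at the edge based on the request's Accept-Encoding header:

```python
import gzip

# HTML is highly repetitive (tags, class names), so it compresses very well.
html = (
    "<div class='card'><h2>Service</h2><p>Description text here.</p></div>\n"
    * 500
).encode()

compressed = gzip.compress(html)
ratio = len(compressed) / len(html)
print(f"{len(html)} B -> {len(compressed)} B ({ratio:.0%})")
```

Real pages compress less dramatically than a repeated snippet, but 70–90% savings on HTML and CSS are typical.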

Maintenance

WordPress demands constant updates to core, plugins, themes, and PHP itself. Plugin upgrades break things. Sites end up stuck on old, vulnerable versions because nobody wants to risk the migration. A static HTML site? Nothing to update on the server side. The files are self-contained.

You Don't Need to Code Anymore

The usual pushback on static sites is "but I'd need a developer." Fair point a few years ago. Not anymore. AI tools can generate a professional static site from a plain-text description of your business and iterate on it in minutes.

There are also structural wins that CMS platforms can't match: no admin panel to protect, no plugins to keep patched, and plain files that can live in version control alongside the rest of your code.

What Is S3?

Amazon S3 (Simple Storage Service) was AWS's first service, launched in 2006. It's object storage, basically files and folders in the cloud. The durability is absurd: 99.999999999% (eleven nines). Storage costs fractions of a cent per GB per month. For hosting HTML, CSS, JS, and images, it's exactly what you need.
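
To put eleven nines in perspective, AWS's own framing is that if you store 10 million objects, you can on average expect to lose a single object once every 10,000 years. The arithmetic is easy to check:

```python
# Expected time between object losses at S3's stated durability.
durability = 0.99999999999          # eleven nines
objects = 10_000_000

expected_losses_per_year = objects * (1 - durability)
years_per_lost_object = 1 / expected_losses_per_year
print(f"{years_per_lost_object:,.0f} years per lost object")  # -> 10,000
```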

What Is CloudFront?

CloudFront is AWS's CDN with 450+ edge locations worldwide. Visitors get served from the closest one (Zurich, Frankfurt, etc.) instead of a single origin server. For a static site, the request path looks like this:

User → nearest edge location → (cache miss?) → S3 bucket

Terraform Implementation: Step by Step

This section is technical and aimed at developers. All values (domain names, bucket names) are anonymized examples.

Project Structure

The infra is split into 5 files, each with a specific job:


.tf/
├── versions.tf    # Backend & provider versions
├── main.tf        # Provider configuration
├── s3.tf          # S3 bucket & policies
├── cloudfront.tf  # CDN distribution
└── dns.tf         # Route53 & ACM certificate

Backend & Provider

Terraform state lives in an S3 bucket so it can be shared across machines. The provider is set to us-east-1 because ACM certificates for CloudFront must be in us-east-1. That's an AWS requirement, not a Terraform thing.

versions.tf

terraform {
  backend "s3" {
    bucket  = "my-terraform-state"
    key     = "example-website/terraform.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
  required_version = ">= 1.0.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

main.tf

provider "aws" {
  region = "us-east-1"
}

S3 Bucket

The S3 bucket holds your static files. The bucket itself is not public. Instead, we set up an Origin Access Control (OAC) policy so only CloudFront can read from it. OAC replaced the older Origin Access Identity (OAI) and adds support for SigV4 signing, SSE-KMS-encrypted buckets, and buckets in every AWS Region.

s3.tf

locals {
  bucket_name = "website-example"
}

resource "aws_s3_bucket" "website" {
  bucket = local.bucket_name

  tags = {
    Name = local.bucket_name
    IaC  = "terraform"
  }
}

resource "aws_s3_bucket_policy" "cdn-oac-bucket-policy" {
  bucket = aws_s3_bucket.website.id
  policy = jsonencode(
    {
      Statement = [
        {
          Action = "s3:GetObject"
          Condition = {
            StringEquals = {
              "AWS:SourceArn" = aws_cloudfront_distribution.website.arn
            }
          }
          Effect = "Allow"
          Principal = {
            Service = "cloudfront.amazonaws.com"
          }
          Resource = "${aws_s3_bucket.website.arn}/*"
        },
      ]
      Version = "2012-10-17"
    }
  )
}

resource "aws_s3_bucket_server_side_encryption_configuration" "website" {
  bucket = aws_s3_bucket.website.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_website_configuration" "website" {
  # With OAC, CloudFront reads from the bucket's REST endpoint rather than
  # the S3 website endpoint, so this block is effectively optional here:
  # default_root_object and the custom error responses in cloudfront.tf
  # handle the index.html routing.
  bucket = aws_s3_bucket.website.bucket

  index_document {
    suffix = "index.html"
  }
}

CloudFront Distribution

This is where the interesting bits are. The distribution creates an OAC with SigV4 signing, serves the site over HTTPS on your custom domain, and returns index.html for 404/403 errors (SPA-style routing). Cache behavior is locked to GET and HEAD only, compression is on, and all HTTP traffic gets redirected to HTTPS. TLS 1.2+ minimum. We use SNI to avoid paying for a dedicated IP.

cloudfront.tf

locals {
  s3_origin_id = "distribution"
}

resource "aws_cloudfront_origin_access_control" "s3_access" {
  name                              = aws_s3_bucket.website.bucket_regional_domain_name
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}

resource "aws_cloudfront_distribution" "website" {
  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"
  aliases             = [aws_acm_certificate.cert.domain_name]
  comment             = "CDN for ${aws_acm_certificate.cert.domain_name}"

  origin {
    connection_attempts      = 3
    connection_timeout       = 10
    domain_name              = aws_s3_bucket.website.bucket_regional_domain_name
    origin_access_control_id = aws_cloudfront_origin_access_control.s3_access.id
    origin_id                = local.s3_origin_id
  }

  # SPA routing: return index.html for missing files
  custom_error_response {
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
    error_caching_min_ttl = 0
  }

  custom_error_response {
    error_code            = 403
    response_code         = 200
    response_page_path    = "/index.html"
    error_caching_min_ttl = 0
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    compress               = true
    target_origin_id       = local.s3_origin_id
    viewer_protocol_policy = "redirect-to-https"
    forwarded_values {
      query_string = false
      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn            = aws_acm_certificate_validation.cert.certificate_arn
    minimum_protocol_version       = "TLSv1.2_2018"
    cloudfront_default_certificate = false
    ssl_support_method             = "sni-only"
  }

  tags = {
    Name = "example-website"
    IaC  = "terraform"
  }
}

DNS & SSL Certificate

Last step: wire your domain to CloudFront. We create an ACM certificate with DNS validation, and Terraform handles the validation CNAME record in Route53 automatically. Watch out for the tolist() workaround: there's a known issue (#14447) where domain_validation_options can't be used with for_each on the first terraform plan. The A record alias at the bottom points your domain to CloudFront.

dns.tf

locals {
  domain_name = "example-website.ch"
}

data "aws_route53_zone" "public_zone" {
  name         = "example-website.ch"
  private_zone = false
}

resource "aws_acm_certificate" "cert" {
  domain_name       = local.domain_name
  validation_method = "DNS"
}

# Workaround for terraform-provider-aws#14447:
# tolist() is needed because domain_validation_options
# is a set and can't be indexed during plan.
resource "aws_route53_record" "cert_validation" {
  zone_id         = data.aws_route53_zone.public_zone.zone_id
  name            = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_name
  type            = tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_type
  records         = [tolist(aws_acm_certificate.cert.domain_validation_options)[0].resource_record_value]
  allow_overwrite = true
  ttl             = 60
}

resource "aws_acm_certificate_validation" "cert" {
  certificate_arn         = aws_acm_certificate.cert.arn
  validation_record_fqdns = [aws_route53_record.cert_validation.fqdn]
}

resource "aws_route53_record" "website_alias" {
  zone_id = data.aws_route53_zone.public_zone.zone_id
  name    = aws_acm_certificate.cert.domain_name
  type    = "A"

  alias {
    name    = aws_cloudfront_distribution.website.domain_name
    zone_id = aws_cloudfront_distribution.website.hosted_zone_id

    evaluate_target_health = false
  }
}
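
One optional addition, not part of the five files above (treat it as a hypothetical outputs.tf): exposing the distribution ID and CloudFront domain as Terraform outputs saves a trip to the AWS console when you need them at deploy time.

```hcl
# outputs.tf (optional): values you'll want at deploy time.
output "cloudfront_distribution_id" {
  value       = aws_cloudfront_distribution.website.id
  description = "Needed for cache invalidation after each deploy"
}

output "cloudfront_domain_name" {
  value       = aws_cloudfront_distribution.website.domain_name
  description = "The *.cloudfront.net endpoint behind the Route53 alias"
}
```

After `terraform apply`, `terraform output cloudfront_distribution_id` prints the ID without any console clicking.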

Deployment

Once the infrastructure is provisioned, deploying your site is two commands:


# Sync local files to S3 (--delete removes old files)
aws s3 sync ./src s3://website-example --delete

# Invalidate CloudFront cache so changes are visible immediately
aws cloudfront create-invalidation \
  --distribution-id EXXXXXXXXXX \
  --paths "/*"

Want to automate this? Set up a GitHub Actions workflow that deploys on every push to main. Use GitHub's OIDC provider for AWS auth so you don't store any long-lived credentials in your repo. The workflow assumes an IAM role on the fly.
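
The auth side of that can be sketched in Terraform too. Hedged heavily: the role name, repository path, and thumbprint below are placeholders you'd verify and adapt; the OIDC provider is registered once per AWS account, and the role trusts only tokens issued for your repo's main branch.

```hcl
# Hypothetical github-oidc.tf: lets GitHub Actions assume a deploy role
# without long-lived credentials. Repo and role names are placeholders.
resource "aws_iam_openid_connect_provider" "github" {
  url            = "https://token.actions.githubusercontent.com"
  client_id_list = ["sts.amazonaws.com"]
  # GitHub's widely published thumbprint; verify the current value before use.
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

resource "aws_iam_role" "deploy" {
  name = "github-deploy-example-website"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringEquals = {
          "token.actions.githubusercontent.com:aud" = "sts.amazonaws.com"
          # Only pushes to main in this specific repo can assume the role.
          "token.actions.githubusercontent.com:sub" = "repo:my-org/example-website:ref:refs/heads/main"
        }
      }
    }]
  })
}
```

The role still needs a permissions policy granting `s3:PutObject`/`s3:DeleteObject` on the bucket and `cloudfront:CreateInvalidation` on the distribution; that's omitted here for brevity.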

Conclusion

If your company site doesn't need a database or server-side logic, a static site on AWS is the obvious call.

AI handles site creation, Terraform handles infrastructure. Set it up once and you're done. You get a site that's fast, secure, and costs almost nothing to run.