This post walks through a practical, Terraform-centric approach to implementing segregated AWS subnetworks (public-facing and internal) to satisfy the FAR 52.204-21 / CMMC 2.0 Level 1 control SC.L1-B.1.XI, which requires that publicly accessible system components be logically separated from internal networks and properly configured. It focuses on small-business scenarios, concrete Terraform examples, validation steps, and operational best practices.
Implementation overview — how the control maps to AWS networking
SC.L1-B.1.XI in the context of CMMC 2.0 Level 1 (and the related FAR 52.204-21 basic safeguarding expectations) requires that systems exposed to the public internet be isolated from internal systems and that access be controlled and auditable. In AWS this is typically implemented using a VPC with segregated public and private subnets, an Internet Gateway (IGW) attached only to public subnets via route tables, NAT Gateway(s) for private-subnet egress, security groups and Network ACLs (NACLs) enforcing least-privilege, and limited management access paths (for example, SSM Session Manager instead of direct SSH/RDP from the internet).
Design patterns and small-business scenario
For a small business hosting a customer portal and a back-end database, adopt a three-tier minimal pattern: public subnets for load balancers or NAT-proxy endpoints, private application subnets for API servers, and private data subnets for databases. Static assets should be served via S3 + CloudFront where possible to minimize public compute exposure. Use an Application Load Balancer (ALB) in public subnets to terminate TLS, apply AWS WAF for basic layer-7 protections, and keep RDS and other data stores in private subnets with no direct internet route. This separation meets the key objective of isolating public-facing services while allowing controlled egress and management.
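As a sketch of that pattern, the public tier can be expressed as an ALB with an HTTPS-only listener. This is illustrative, not a drop-in module: the resource names, the second public subnet (`aws_subnet.public_b`, needed because an ALB requires subnets in at least two Availability Zones), the target group, and the `var.acm_certificate_arn` variable are all assumptions you would adapt, and the security group is assumed to allow only 443 as in the scaffold later in this post.

```hcl
# Illustrative ALB terminating TLS in the public subnets.
# Assumes a second public subnet (public_b), a target group (app),
# and an ACM certificate variable exist in your configuration.
resource "aws_lb" "portal" {
  name               = "sbx-portal-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web_sg.id]
  subnets            = [aws_subnet.public_a.id, aws_subnet.public_b.id]
}

resource "aws_lb_listener" "https" {
  load_balancer_arn = aws_lb.portal.arn
  port              = 443
  protocol          = "HTTPS"
  ssl_policy        = "ELBSecurityPolicy-TLS13-1-2-2021-06"
  certificate_arn   = var.acm_certificate_arn # placeholder: your ACM cert ARN

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.app.arn # assumed target group
  }
}
```

Keeping TLS termination and the only public ingress point at the ALB means the application instances behind it never need public IPs or internet-facing security-group rules.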
Concrete Terraform implementation example
Below is a minimal Terraform snippet demonstrating the core pieces: VPC, public/private subnet, Internet Gateway, route table association, and a security group that limits public traffic to HTTPS. Use this as a scaffold — expand subnets across AZs and add NAT Gateway(s) for production.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
  tags       = { Name = "sbx-vpc" }
}

resource "aws_subnet" "public_a" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true
  tags                    = { Name = "sbx-public-a" }
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1a"
  tags              = { Name = "sbx-private-a" }
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
  tags   = { Name = "sbx-igw" }
}

resource "aws_route_table" "public_rt" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }

  tags = { Name = "sbx-public-rt" }
}

resource "aws_route_table_association" "public_assoc" {
  subnet_id      = aws_subnet.public_a.id
  route_table_id = aws_route_table.public_rt.id
}

resource "aws_security_group" "web_sg" {
  name   = "sbx-web-sg"
  vpc_id = aws_vpc.main.id

  ingress {
    description = "HTTPS from the internet"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = { Name = "sbx-web-sg" }
}
Key implementation notes: (1) set map_public_ip_on_launch only on public subnets so instances there receive public IPs when needed; (2) place ALBs in public subnets and EC2/RDS in private subnets; (3) use NAT Gateway(s) in the public subnets for private-subnet egress during OS/agent updates, deploying one per AZ for high availability in production; (4) enforce least-privilege security-group rules (only 443/80, or specific management ports from tightly scoped IP ranges); and (5) never open ports 22/3389 to 0.0.0.0/0; prefer SSM Session Manager with IAM roles for management access.
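To make the NAT-egress note concrete, a minimal single-AZ sketch follows; the resource names are illustrative, it assumes the VPC and subnets from the scaffold above, and a production deployment would repeat the EIP/NAT pair per AZ.

```hcl
# Elastic IP and NAT Gateway in the public subnet. Private instances
# route outbound traffic through it for updates while remaining
# unreachable from the internet.
resource "aws_eip" "nat_a" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat_a" {
  allocation_id = aws_eip.nat_a.id
  subnet_id     = aws_subnet.public_a.id
  tags          = { Name = "sbx-nat-a" }
}

resource "aws_route_table" "private_rt" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat_a.id
  }

  tags = { Name = "sbx-private-rt" }
}

resource "aws_route_table_association" "private_assoc" {
  subnet_id      = aws_subnet.private_a.id
  route_table_id = aws_route_table.private_rt.id
}
```

Note the private route table has no Internet Gateway route: inbound connections from the internet to private subnets remain impossible, which is the separation the control asks for.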
Validation, evidence, and monitoring
For compliance evidence, capture Terraform plan/apply logs, maintain git history of Terraform code, and export resource identifiers and configuration JSON for auditors. Implement AWS Config rules to check "publicly accessible" flags (for example, the managed rules "rds-instance-public-access-check" and "s3-bucket-public-read-prohibited"), enable VPC Flow Logs and aggregate them to a centralized S3 bucket or CloudWatch Logs, and enable CloudTrail to record API activity. Use automated tests (Terratest or InSpec) in CI to assert that public-facing subnets carry only ALB/IGW routes, that RDS is private, and that no EC2 instances in private subnets have public IPs. These outputs form the basis of an audit package for FAR/CMMC reviewers.
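The detective controls above can also live in the same Terraform codebase. A hedged sketch, assuming an AWS Config configuration recorder is already enabled in the account and that the log group and IAM role referenced here are defined elsewhere in your configuration:

```hcl
# Managed AWS Config rule flagging publicly accessible RDS instances.
# Requires an active AWS Config recorder in the account/region.
resource "aws_config_config_rule" "rds_public" {
  name = "rds-instance-public-access-check"

  source {
    owner             = "AWS"
    source_identifier = "RDS_INSTANCE_PUBLIC_ACCESS_CHECK"
  }
}

# VPC Flow Logs for the whole VPC, delivered to CloudWatch Logs.
# Assumes aws_cloudwatch_log_group.flow and an IAM role trusted by
# vpc-flow-logs.amazonaws.com exist in your configuration.
resource "aws_flow_log" "vpc" {
  vpc_id          = aws_vpc.main.id
  traffic_type    = "ALL"
  log_destination = aws_cloudwatch_log_group.flow.arn
  iam_role_arn    = aws_iam_role.flow_logs.arn
}
```

Because these resources are themselves version-controlled, the git history doubles as evidence that monitoring was in place continuously, not just at assessment time.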
Compliance tips and best practices
Operationalize the controls with these practices: (1) enforce infrastructure-as-code (Terraform) with code review and branch protection so network changes are auditable; (2) tag resources consistently to link environment, owner, and control mappings (e.g., tag "cmmc_level=1" or "control=SC.L1-B.1.XI"); (3) implement drift detection using terraform plan in CI and AWS Config rules; (4) restrict who can modify networking via IAM policies, and use AWS Organizations SCPs if you have multiple accounts; and (5) enable MFA and session recording for console/API access to create an evidentiary trail.
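One low-effort way to apply the tagging practice consistently is the AWS provider's default_tags block, which stamps every resource the provider creates. The tag keys and values below are illustrative; align them with your own tagging standard:

```hcl
# Provider-level default tags: every resource created by this provider
# inherits the control mapping, so auditors can query evidence by tag.
provider "aws" {
  region = "us-east-1"

  default_tags {
    tags = {
      environment = "production"      # illustrative values
      owner       = "it-ops"
      cmmc_level  = "1"
      control     = "SC.L1-B.1.XI"
    }
  }
}
```

Per-resource tags (such as the Name tags in the scaffold above) merge with and override these defaults, so existing code keeps working unchanged.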
Risk of not implementing network separation and controls
Failing to properly isolate public-facing systems risks unauthorized access to internal resources, lateral movement after compromise, data exfiltration of Controlled Unclassified Information (CUI), and failing FAR/CMMC assessment criteria — which can lead to contract termination, loss of future government work, and reputational / financial damage for a small business. Additionally, unsegmented networks make incident response slower and increase blast radius for malware or misconfigurations.
In summary, meeting FAR 52.204-21 and CMMC 2.0 Level 1 SC.L1-B.1.XI for public-facing systems on AWS is achievable for small businesses by designing a clear public/private subnet topology, enforcing least-privilege access controls, using Terraform for repeatable, auditable deployments, and collecting the right logs and evidence (Terraform state/plans, AWS Config, VPC Flow Logs, CloudTrail). Apply the Terraform scaffolding above, expand it for HA and multi-AZ production, adopt SSM for management, and bake automated validation into CI/CD to maintain continuous compliance and reduce the risk of costly noncompliance.