This post explains practical, audit-ready approaches to documenting and providing evidence for CM.L2-3.4.4 under NIST SP 800-171 Rev. 2 / CMMC 2.0 Level 2: the configuration/change-management control that requires you to control, track, and document configuration changes to systems that process Controlled Unclassified Information (CUI).
What compliance assessors expect
Assessors will look for a repeatable process that demonstrates: (1) baseline configurations exist for systems that handle CUI; (2) changes to those baselines are requested, evaluated, approved, and tracked; (3) deployed changes match approved change records; and (4) configuration drift is detected and remediated. Evidence must tie artifacts (policies, baselines, tickets, scan outputs, signatures, timestamps) to actual systems, dates, and responsible personnel.
Practical implementation steps (CMMC-specific)
Implementing CM.L2-3.4.4 in a small business typically follows these steps: define the scope of systems that host or touch CUI; create and store baseline configurations (images, hardening checklists, package manifests); adopt a change control workflow (ticketing system, approval gates, rollback plan); use automated detection (configuration management and drift reporting); and retain immutable evidence for a defined retention period (e.g., 12–36 months per contract requirements). Map each artifact to the CMMC control ID in your evidence index.
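The evidence index mentioned above can be as simple as a generated CSV that maps each artifact to the control ID, an owner, and a collection date. A minimal sketch in Python (file names and owners are illustrative, not prescribed by the control):

```python
import csv

# Illustrative evidence-index rows: one artifact per row, each mapped to a control ID.
ROWS = [
    {"control_id": "CM.L2-3.4.4", "artifact": "baseline_config_inventory.csv",
     "owner": "jane.doe", "collected": "2026-03-20"},
    {"control_id": "CM.L2-3.4.4", "artifact": "CR-2026-045_approved.pdf",
     "owner": "it-mgr", "collected": "2026-03-20"},
]

def write_evidence_index(path: str, rows: list[dict]) -> None:
    """Write the evidence-index CSV an assessor uses to locate artifacts."""
    with open(path, "w", newline="") as fh:
        writer = csv.DictWriter(
            fh, fieldnames=["control_id", "artifact", "owner", "collected"])
        writer.writeheader()
        writer.writerows(rows)

write_evidence_index("evidence_index.csv", ROWS)
```

Regenerating this file from a script (rather than hand-editing it) keeps the index consistent as artifacts accumulate.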
Baseline configuration artifacts (template examples)
Small-business-friendly baseline artifacts you should produce and catalogue include: a Baseline Configuration Spreadsheet named "baseline_config_inventory.csv" with headers "hostname,asset_tag,OS,OS_version,baseline_hash,last_baseline_date,owner,environment"; a Baseline Image Manifest "image_manifest.json" with image_id, SHA256, build_date, and the Ansible/PowerShell playbook version used to create the image; and a "System Hardening Checklist" PDF that references specific CIS or vendor hardening steps and versions. Example CSV row: "host01,ASSET123,Windows Server 2019,10.0.17763,3f6a...c9b,2026-03-01,jane.doe,prod". These exact files make it trivial for an assessor to match a system to its approved baseline.
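The image manifest described above can be produced by a short script that hashes the image and assembles the fields. A sketch, using a stand-in file so it is self-contained (the image ID and playbook version are illustrative values):

```python
import hashlib
import json
from datetime import date

def sha256_of(path: str) -> str:
    """Stream the file in chunks so large images don't load into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(image_path: str, image_id: str, playbook_version: str) -> dict:
    """Assemble the image_manifest.json fields: image_id, SHA256, build date, playbook version."""
    return {
        "image_id": image_id,
        "sha256": sha256_of(image_path),
        "build_date": date.today().isoformat(),
        "playbook_version": playbook_version,
    }

# Stand-in image file so the sketch runs end to end.
with open("image-v1.bin", "wb") as fh:
    fh.write(b"demo image contents")

manifest = build_manifest("image-v1.bin", "image-20260301", "ansible-playbooks-1.4.2")
with open("image_manifest.json", "w") as fh:
    json.dump(manifest, fh, indent=2)
```

The recorded SHA256 is what lets an assessor confirm that a deployed image is byte-identical to the approved baseline.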
Change control templates and example entries
Create a lightweight change request template if you don't run a heavy ITSM tool. Required fields: Change ID, Submitter, System(s) Affected (hostname(s)/asset_tag), Baseline ID, Description of Change, Risk Assessment, Backout Plan, Scheduled Window, Approver(s), Implementation Notes, Post-Implementation Verification (PIV) Checklist, and Evidence Attachment(s) (screenshots, logs, commit IDs). Example entry excerpt: "CR-2026-045; submitter=jane.doe@example.com; systems=host01,host02; baseline=image-20260301; change=Apply KB2026-1234 patch; risk=low; approver=it-mgr@example.com; PIV=Get-HotFix -ComputerName host01 | Where HotFixID -eq 'KB2026-1234' (screenshot attached)." Save exported tickets as PDF with approval metadata and store in your evidence repository.
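If you capture change requests as structured records (JSON, form exports, ticket fields), a small validation step can reject submissions missing required fields before they reach an approver. A sketch against the field list above (field names are this sketch's own choices):

```python
# Required fields from the lightweight change-request template.
REQUIRED_FIELDS = (
    "change_id", "submitter", "systems", "baseline_id", "description",
    "risk", "backout_plan", "window", "approvers", "piv_checklist",
)

def validate_change_request(cr: dict) -> list[str]:
    """Return the names of required fields that are missing or empty."""
    return [field for field in REQUIRED_FIELDS if not cr.get(field)]

cr = {
    "change_id": "CR-2026-045",
    "submitter": "jane.doe@example.com",
    "systems": ["host01", "host02"],
    "baseline_id": "image-20260301",
    "description": "Apply KB2026-1234 patch",
    "risk": "low",
    "backout_plan": "Revert to pre-change snapshot",
    "window": "2026-03-22T02:00Z",
    "approvers": ["it-mgr@example.com"],
    "piv_checklist": ["Get-HotFix shows KB2026-1234 installed on host01"],
}
missing = validate_change_request(cr)  # empty list means the record is complete
```

Enforcing completeness at submission time is what keeps exported tickets assessor-ready without after-the-fact cleanup.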
Automated evidence collection and technical checks
Use inexpensive or open-source tools to generate technical evidence. For example, run OS and package inventory commands and store the outputs with timestamps. Windows: "Get-CimInstance -ClassName Win32_OperatingSystem | Select-Object Caption, Version" and "Get-HotFix"; Linux: "uname -a", "lsb_release -a", "dpkg-query -W -f='${Package} ${Version}\n'". Capture file/image hashes with "Get-FileHash -Algorithm SHA256 C:\images\image-v1.iso" or "sha256sum /srv/images/image-v1.tar.gz". For drift detection, schedule a daily osquery or Wazuh policy check that produces a diff report; the evidence artifacts are the daily report CSVs and the alert ticket numbers created when drift is detected. For cloud workloads, include AWS Config snapshots or Azure Policy compliance reports exported with timestamps and resource IDs.
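The drift report itself reduces to comparing two inventory snapshots. A minimal sketch, assuming each snapshot has already been parsed into a package-to-version mapping (e.g., from the dpkg-query output above):

```python
def detect_drift(baseline: dict[str, str], current: dict[str, str]) -> dict[str, tuple]:
    """Compare package->version maps; report packages added, removed, or changed.

    Each drift entry maps package name to (baseline_version, current_version),
    where None means the package is absent from that snapshot.
    """
    drift = {}
    for pkg in baseline.keys() | current.keys():
        old, new = baseline.get(pkg), current.get(pkg)
        if old != new:
            drift[pkg] = (old, new)
    return drift

# Illustrative snapshots: openssl was upgraded and netcat appeared unapproved.
baseline = {"openssl": "3.0.13", "nginx": "1.24.0"}
current = {"openssl": "3.0.14", "nginx": "1.24.0", "netcat": "1.10"}
drift = detect_drift(baseline, current)
```

Each non-empty drift report should either match an approved change record or open a remediation ticket; that linkage is the evidence assessors look for.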
Audit-ready packaging: what to submit and how to label it
Prepare an evidence bundle per assessment request that contains: a short mapping document (matrix) that links control CM.L2-3.4.4 to each artifact; the Baseline Inventory CSV; representative Baseline Image Manifest(s); exported change ticket PDFs with approvals and post-implementation verification; automated scan outputs showing current vs baseline; hashes and signed manifests; and a signed configuration management policy excerpt. Name files with a consistent convention: "CM_L2-3.4.4_BaselineInventory_2026-03-20.csv", "CM_L2-3.4.4_CR-2026-045_approved.pdf", "CM_L2-3.4.4_ImageManifest_image-20260301.json". This makes it trivial for assessors to validate chain-of-custody, approvals, and timestamps.
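Bundle assembly can also be scripted so every submission carries the same naming convention plus an integrity manifest. A sketch using stand-in artifact files (the SHA256SUMS step is this sketch's suggestion, not a requirement of the control):

```python
import hashlib
import zipfile

ARTIFACTS = [
    "CM_L2-3.4.4_BaselineInventory_2026-03-20.csv",
    "CM_L2-3.4.4_CR-2026-045_approved.pdf",
]

# Create stand-in artifact files so the sketch runs end to end.
for name in ARTIFACTS:
    with open(name, "w") as fh:
        fh.write(f"placeholder contents for {name}\n")

# A SHA256SUMS manifest lets the assessor verify integrity after transfer.
with open("SHA256SUMS.txt", "w") as fh:
    for name in ARTIFACTS:
        digest = hashlib.sha256(open(name, "rb").read()).hexdigest()
        fh.write(f"{digest}  {name}\n")

# Package everything into one convention-named bundle.
with zipfile.ZipFile("CM_L2-3.4.4_evidence_2026-03-20.zip", "w") as zf:
    for name in ARTIFACTS + ["SHA256SUMS.txt"]:
        zf.write(name)
```

On Linux the assessor can then run "sha256sum -c SHA256SUMS.txt" inside the extracted bundle to confirm nothing changed in transit.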
Small business scenario: minimal-resources example
Imagine a 20-person engineering shop that stores CUI in a single AWS account. Practical steps: (1) create a baseline EC2 AMI with a hash manifest and hardening checklist; (2) use a simple Git repo to store Ansible playbooks and tag the commits used to build the AMI; (3) use GitHub Issues or a simple "change-requests" Google Form that logs CR fields and emails an approver for sign-off; (4) run AWS Config and export weekly "aws ssm list-command-invocations" or "aws ec2 describe-images" output; (5) package artifacts into a dated ZIP and store it in an immutable S3 bucket with versioning and access logging. For the assessor, provide the AMI manifest, the Git commit hash (linking to the playbook), the export of the issue ticket showing approval, and the S3 object metadata (ETag/last-modified) as evidence.
Compliance tips, best practices, and risk of not implementing
Best practices: keep baselines minimal and versioned (semantic tags), automate verification, require at least one independent approver for production changes, include a rollback/backout plan for every change, retain evidence for the length specified in contracts, and map each artifact to the exact control ID. Tools to consider for small orgs: OSQuery, Wazuh, Ansible, Git, AWS Config, Azure Policy, and a lightweight ticketing system (Jira Service Desk, GitHub Issues, or a shared mailbox with exported PDFs).
Failure to implement CM.L2-3.4.4 increases risk: unauthorized or undocumented changes can introduce exploitable configurations, create persistence opportunities for threat actors, lead to CUI exposure, and result in failing a CMMC or contract audit — which can mean loss of contracts and reputational damage. Auditors focus on traceability: if you cannot show "who approved what and when, and how the system changed as a result," you will likely receive a finding.
In summary, make CM.L2-3.4.4 attainable by building a small set of repeatable artifacts: a baseline inventory, signed baseline manifests, a documented change-request workflow with approvals, automated configuration and drift reports, and a clear evidence-index mapping artifacts to the control. Use lightweight automation where possible, keep filenames and metadata consistent, and package evidence in audit-ready bundles so assessors can validate your compliance efficiently.