AU.L2-3.3.2 requires producing audit records that clearly show what happened, when it happened, who or what caused it, and the source of the event — a deceptively simple requirement that demands a deliberate logging architecture to meet Compliance Framework expectations while remaining practical for small organizations.
What AU.L2-3.3.2 actually requires (practical field mapping)
To implement the control under the Compliance Framework you must ensure logs include, at minimum: event identifier/type (what), timestamp (when), subject/actor identity (who), event source (where — host, IP, service), and outcome (success/failure). In modern log schemas this maps to fields like event.action/event.category, event.time, actor.id/user.name, source.ip/host.hostname, and event.outcome. Make these fields consistent across Windows, Linux, network devices, and cloud so automated tools can reliably detect and correlate events.
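To make the minimum-field requirement concrete, here is a small sketch of a normalized record and a completeness check. The field names follow the ECS-style naming above; `REQUIRED_FIELDS` and `validate_record` are illustrative names, not a real library API.

```python
# Minimal completeness check for the who/what/when/where/outcome fields.
# REQUIRED_FIELDS and validate_record are illustrative, not a real API.
REQUIRED_FIELDS = {
    "event.action",    # what happened
    "event.time",      # when (RFC 3339 timestamp)
    "user.name",       # who or what caused it
    "source.ip",       # where the event originated
    "event.outcome",   # success/failure
}

def validate_record(record: dict) -> list[str]:
    """Return the required fields missing from a flattened log record."""
    return sorted(REQUIRED_FIELDS - record.keys())

record = {
    "event.action": "user-login",
    "event.time": "2024-05-01T12:34:56Z",
    "user.name": "jsmith",
    "source.ip": "10.0.0.42",
    "event.outcome": "success",
}
print(validate_record(record))                    # complete record: no gaps
print(validate_record({"event.action": "file-read"}))
```

A check like this can run in an ingest pipeline so incomplete events are flagged before they reach the index.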
High-level architecture and tool choices
Centralize logs into an aggregator/analysis platform (SIEM or log store) that normalizes and retains logs per contractual and risk requirements. For small businesses, cost-effective choices include: the Elastic Stack (ELK) plus Wazuh for host-level events, Splunk Cloud for a managed SIEM, Sumo Logic or Datadog for cloud-managed logging, or a lightweight OSS pipeline (Fluentd/Fluent Bit -> Elasticsearch/OpenSearch -> Kibana/OpenSearch Dashboards). For cloud-native workloads, use native ingestion (AWS CloudWatch Logs + Kinesis Data Firehose to S3/OpenSearch) and enable CloudTrail for API events. Key capabilities to evaluate when picking a tool: reliable collection, normalization of the fields listed above, secure transport (TLS), immutable or WORM-capable storage, RBAC for access, and alerting/query support for forensic needs.
Concrete configurations and examples
Below are practical snippets and settings you can adapt. These ensure the minimum fields are captured and reliably forwarded to a central store.
# rsyslog example to add structured fields and forward to log aggregator
module(load="imuxsock")
module(load="imklog")
template(name="RFC5424Template" type="string" string="<%PRI%>1 %TIMESTAMP:::date-rfc3339% %HOSTNAME% %app-name% %procid% %msgid% - %msg%\n")
# TLS forwarding uses the standard syslog-over-TLS port 6514; configure the CA via
# global(DefaultNetstreamDriverCAFile="/etc/rsyslog.d/ca.pem")
*.* action(type="omfwd" target="10.0.0.10" port="6514" protocol="tcp" template="RFC5424Template" StreamDriver="gtls" StreamDriverMode="1" StreamDriverAuthMode="x509/name")
For Linux, add audit rules to /etc/audit/rules.d/audit.rules to capture process execution and file access (useful for "who/what"), then load them with augenrules --load:
-a always,exit -F arch=b64 -S execve -k execs
-a always,exit -F arch=b32 -S execve -k execs
-w /etc/passwd -p wa -k identity_file_changes
-w /etc/shadow -p wa -k identity_file_changes
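The raw auditd records produced by these rules carry the who/what/when fields in key=value form. The parser below is a simplified sketch showing how those fields map to the normalized schema; production setups would use ausearch or Auditbeat rather than hand-rolled parsing, and the output field names are assumptions.

```python
import re

# Simplified parser for a raw auditd SYSCALL line; production pipelines
# should use ausearch or Auditbeat instead of hand-rolled regexes.
SAMPLE = ('type=SYSCALL msg=audit(1714567890.123:456): arch=c000003e '
          'syscall=59 success=yes exit=0 uid=1000 auid=1000 '
          'comm="ls" exe="/usr/bin/ls" key="execs"')

def parse_audit_line(line: str) -> dict:
    fields = dict(re.findall(r'(\w+)="?([^"\s]+)"?', line))
    epoch = re.search(r'audit\((\d+\.\d+):\d+\)', line).group(1)
    return {
        "event.time": float(epoch),          # when
        "user.id": fields.get("auid"),       # who (login UID survives sudo/su)
        "event.action": fields.get("exe"),   # what was executed
        "event.outcome": fields.get("success"),
        "event.kind": fields.get("key"),     # audit rule key, e.g. "execs"
    }

print(parse_audit_line(SAMPLE))
```

Note the use of auid rather than uid: the audit UID records the original login identity even after privilege escalation, which is exactly the "who" this control asks for.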
On Windows, use Windows Event Forwarding (WEF) to collect Security, System, and Application logs and include the EventID, TimeCreated, Computer, AccountName, and IP fields; configure a subscription on the collector and use GPO to point hosts at the collector. Example subscription XML should include event selectors for 4624/4625 (Logon) and 4688 (Process Creation).
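Once collected, those Windows Security events should be normalized into the same schema as everything else. A sketch of the mapping for the event IDs named above (the table values and the `normalize` helper are illustrative, not a vendor API):

```python
# Normalization table for the Windows Security events cited above.
# The category/action/outcome values are illustrative ECS-style names.
WINDOWS_EVENT_MAP = {
    4624: ("authentication", "logon", "success"),
    4625: ("authentication", "logon", "failure"),
    4688: ("process", "process-creation", "success"),
}

def normalize(event_id: int, account: str, computer: str) -> dict:
    category, action, outcome = WINDOWS_EVENT_MAP[event_id]
    return {
        "event.category": category,
        "event.action": action,
        "event.outcome": outcome,
        "user.name": account,        # who
        "host.hostname": computer,   # where
    }

print(normalize(4625, "CORP\\jsmith", "WKS-07"))
```

With this in place, a single query for event.action:"logon" AND event.outcome:"failure" covers Windows, Linux, and cloud sources alike.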
Cloud examples: enable AWS CloudTrail (management events), enable S3 data event logging for buckets with CUI, enable VPC Flow Logs for network source IP visibility, and stream to CloudWatch Logs -> Kinesis Data Firehose -> S3 or OpenSearch. Ensure CloudTrail is multi-region and writes to an S3 bucket with Object Lock enabled for retention.
Retention, integrity, and access controls
AU.L2-3.3.2 does not prescribe exact retention durations, but Compliance Framework assessors will expect you to retain logs long enough to support incident investigations and contractual obligations; common baselines are 6–12 months for system logs and 1–3 years for audit trails tied to CUI, depending on contract. Protect logs with encryption in transit (TLS) and at rest (AES-256), enable tamper evidence (hash chaining or S3 Object Lock), and implement RBAC so only authorized personnel can read or delete logs. Log writes should be append-only where possible; if you use cloud object stores, enable immutability features or maintain a secure write-once backup store for forensic evidence.
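The hash-chaining idea mentioned above can be sketched in a few lines: each record's digest covers the previous digest, so altering any earlier record breaks every later link. This is a minimal illustration of the principle, not a substitute for a hardened implementation.

```python
import hashlib

# Hash-chaining sketch for tamper evidence: each digest covers the previous
# digest, so modifying any earlier record invalidates all later links.
def chain(records: list[str]) -> list[str]:
    digests, prev = [], "0" * 64  # fixed genesis value
    for rec in records:
        prev = hashlib.sha256((prev + rec).encode()).hexdigest()
        digests.append(prev)
    return digests

def verify(records: list[str], digests: list[str]) -> bool:
    return chain(records) == digests

log = ["logon jsmith 10.0.0.42", "sudo jsmith /bin/bash", "logoff jsmith"]
digests = chain(log)
print(verify(log, digests))                      # True
tampered = ["logon mallory 10.0.0.99"] + log[1:]
print(verify(tampered, digests))                 # False
```

Storing the digest list (or just the final digest) in a separate write-once location gives you a cheap chain-of-custody check during an investigation.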
Limit and control log access via strong MFA for analyst accounts, separate duties (those who manage logging infrastructure should not be sole reviewers), and log all administrative actions on the logging system itself so you can prove chain-of-custody.
Monitoring, alerting, and review
Operationalize logs: define detection rules for privilege escalation, failed logons, account creation, data exfiltration indicators, and anomalous service account activity. Tune baseline noise (e.g., benign cron jobs) to reduce alert fatigue. Institute a review cadence: weekly automated alerts triaged by analysts and quarterly manual reviews of audit coverage completeness. Capture evidence of reviews (tickets, screenshots of queries) to demonstrate to auditors that logs are reviewed and used.
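A detection rule from the list above (failed logons) reduces to a simple threshold over normalized events. The sketch below assumes the ECS-style field names used earlier; the threshold value and function name are arbitrary.

```python
from collections import Counter

# Threshold-rule sketch: alert on accounts with >= N failed logons in a
# batch of normalized events (field names follow the mapping shown earlier).
def failed_logon_alerts(events: list[dict], threshold: int = 5) -> list[str]:
    failures = Counter(
        e["user.name"] for e in events
        if e.get("event.action") == "logon" and e.get("event.outcome") == "failure"
    )
    return sorted(u for u, n in failures.items() if n >= threshold)

events = (
    [{"event.action": "logon", "event.outcome": "failure", "user.name": "svc-backup"}] * 6
    + [{"event.action": "logon", "event.outcome": "success", "user.name": "jsmith"}]
)
print(failed_logon_alerts(events))   # ['svc-backup']
```

Real SIEMs express the same logic as a scheduled query over a time window; tuning the threshold per account type (service vs. interactive) is the main lever against alert fatigue.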
Risks of not implementing AU.L2-3.3.2
Without consistent, centralized, and sufficiently detailed audit records you risk missing early indicators of compromise (e.g., lateral movement, privilege abuse), being unable to perform root-cause analysis after an incident, and regulatory noncompliance with potential contract loss, civil penalties, and reputational damage. For small businesses that handle Controlled Unclassified Information (CUI), failure to produce credible logs can directly result in lost DoD or federal contracts.
Small-business real-world scenario
Example: a 25-person subcontractor with hybrid infrastructure (10 Windows desktops, 5 Linux servers on-prem, dev workloads in AWS). Implementation plan: enable Windows Event Forwarding for security events to a single hardened collector VM; deploy Filebeat + Auditbeat on Linux servers; centralize into an OpenSearch cluster (managed or self-hosted) behind TLS; normalize fields using ingest pipelines so event.action, actor.id, and event.time exist for all sources; stream CloudTrail and S3 access logs into the same cluster; configure S3 Object Lock and retain critical logs for 13 months. Use Wazuh for host-based detections and create an alerting runbook mapping alerts to triage steps. Cost-saving tips: use small EC2 instances for short-term hot indexing with lifecycle policies to move older logs to S3 Glacier Deep Archive.
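The lifecycle policy in the scenario reduces to an age-based tiering decision. The sketch below illustrates the logic; the 30-day hot window and the tier names are assumptions, not requirements of the control.

```python
from datetime import date

# Lifecycle tiering sketch matching the scenario: hot indexing for recent
# logs, archive tier afterward, delete past the 13-month retention window.
# The 30-day hot window and tier names are assumptions, not fixed rules.
HOT_DAYS, RETENTION_DAYS = 30, 13 * 30

def tier_for(log_date: date, today: date) -> str:
    age = (today - log_date).days
    if age > RETENTION_DAYS:
        return "delete"
    if age > HOT_DAYS:
        return "archive"   # e.g. S3 Glacier Deep Archive
    return "hot"

today = date(2024, 6, 1)
print(tier_for(date(2024, 5, 20), today))   # hot
print(tier_for(date(2024, 1, 15), today))   # archive
print(tier_for(date(2023, 1, 15), today))   # delete
```

In practice the same thresholds are expressed declaratively as OpenSearch ISM policies or S3 lifecycle rules rather than application code.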
Compliance tips: document your logging policy (what, where, retention), timestamp everything and enforce NTP across devices, include log collection in the incident response plan and tabletop exercises, and keep a lightweight evidence binder (screenshots of queries, export of retention policies) for audits. Prioritize covering identity/authentication events first (logons, privilege changes, service account use) because they provide the highest value for forensics and compliance.
Summary: Build a centralized, normalized, and protected logging pipeline that consistently captures who, what, when, and where across your estate; implement secure transport and immutable storage, tune and automate detection and review, and document retention and review practices to satisfy AU.L2-3.3.2 under the Compliance Framework — doing so reduces investigation time, improves incident response, and protects your ability to retain government and regulated contracts.