Phishing simulations are a practical, measurable way to reduce human risk and provide evidence of ongoing security controls under the Compliance Framework, specifically ECC‑2:2024 Control 1‑10‑1. This post walks through planning, running, measuring, and reporting simulations, with actionable steps, technical tips, and small‑business examples so you can demonstrate compliance to auditors and leadership.
Plan and scope the simulation (Compliance Framework: Practice)
Start by documenting the practice: define objectives, scope, allowed templates, and exclusions in a short "Phishing Simulation Plan" that maps to the Compliance Framework control. Objectives typically include establishing a baseline susceptibility rate, reducing click/report-to-phish ratios over time, and validating that reporting channels work. For scope, list included user groups (e.g., all non‑executive employees), excluded accounts (HR, legal, outsourced HR systems), timing window, and escalation rules for accidental credential submission. Small businesses should keep the first campaign small (10–20% of users) as a baseline to reduce disruption and fine‑tune templates.
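To make the plan auditable, it helps to keep it in a machine-checkable form. Here is a minimal sketch of a Phishing Simulation Plan record in Python; the field names and values are illustrative assumptions, not a format mandated by ECC‑2:2024 or any vendor.

```python
# Hypothetical "Phishing Simulation Plan" record; field names are illustrative.
simulation_plan = {
    "control_mapping": "ECC-2:2024 Control 1-10-1",
    "objectives": [
        "establish baseline click rate",
        "improve report-to-phish ratio over time",
        "validate that the reporting channel works",
    ],
    "scope": {
        "included_groups": ["all non-executive employees"],
        "excluded_accounts": ["hr@example.com", "legal@example.com"],
        "sample_fraction": 0.2,  # first campaign: 10-20% of users
    },
    "timing_window": {"start": "2025-03-01", "end": "2025-03-07"},
    "escalation": "reset credentials immediately on accidental submission",
}

def validate_plan(plan: dict) -> list:
    """Return the required sections missing from the plan (empty list if complete)."""
    required = ["control_mapping", "objectives", "scope", "timing_window", "escalation"]
    return [key for key in required if key not in plan]

print(validate_plan(simulation_plan))  # -> []
```

Running a check like `validate_plan` before each campaign gives you a simple, repeatable gate: no campaign launches without a complete, documented plan.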
Design campaigns and templates
Design a mix of templates (link‑based, attachment, branded lookalikes, and non‑credential bait) and classify each template by risk level. Example templates for a small firm (25 employees): an internal IT support ticket link (low-to-medium risk), a payroll update attachment (medium risk), and an external vendor invoice (medium-to-high risk). Avoid highly targeted CEO impersonation or mock legal threats unless you have explicit executive consent and handling procedures. Save templates in a versioned repository (Git or shared drive) and track template IDs to show repeatability and audit trail.
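A versioned template registry can be as simple as a structured file tracked in Git. The sketch below shows one hypothetical schema for tracking template IDs, versions, and risk levels; the fields are assumptions for illustration, not a vendor format.

```python
# Hypothetical template registry; in practice this would live as JSON/YAML in Git
# so every campaign can reference an exact template ID and version for the audit trail.
templates = [
    {"id": "TPL-001", "version": 3, "type": "link",       "theme": "IT support ticket", "risk": "low-medium"},
    {"id": "TPL-002", "version": 1, "type": "attachment", "theme": "payroll update",    "risk": "medium"},
    {"id": "TPL-003", "version": 2, "type": "link",       "theme": "vendor invoice",    "risk": "medium-high"},
]

def templates_by_risk(registry: list, risk: str) -> list:
    """Return the IDs of all templates at a given risk level."""
    return [t["id"] for t in registry if t["risk"] == risk]

print(templates_by_risk(templates, "medium"))  # -> ['TPL-002']
```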
Technical implementation details
Coordinate with email admins and security tools before launching: whitelist vendor sending IPs or simulation domains in your email security gateway (Proofpoint/Exchange Online Protection/Gmail), configure SPF/DKIM for the simulation sending subdomain and ensure DMARC alignment for deliverability, and register sending IPs with your MTA to avoid being blocked. Instrument the simulation platform to export results via API (many vendors provide REST endpoints). Example metric extraction using curl: curl -H "Authorization: Bearer $TOKEN" "https://vendor.api/campaigns/123/results?format=csv" -o results.csv. In small shops without vendor tools, you can emulate by sending controlled messages from an internal test domain and capturing click events with a simple web service that logs IP, timestamp, and user identifier.
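For the in-house option above, the click logger can be a few dozen lines of standard-library Python. This is a minimal sketch, not a production service: the endpoint path, the `uid` query parameter, and the CSV file name are all assumptions, and a real deployment would need TLS and access controls on the log.

```python
# Minimal click-logging service: records UTC timestamp, source IP, and user
# identifier for each simulated-phish click. All names here are illustrative.
import csv
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

LOG_FILE = "click_events.csv"

def log_click(uid: str, ip: str, log_file: str = LOG_FILE) -> list:
    """Append a timestamped click event (UTC time, source IP, user id) to the log."""
    row = [datetime.datetime.now(datetime.timezone.utc).isoformat(), ip, uid]
    with open(log_file, "a", newline="") as f:
        csv.writer(f).writerow(row)
    return row

class ClickLogger(BaseHTTPRequestHandler):
    """Handles GET /track?uid=<user-id>: logs the event, then shows a notice."""
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        uid = query.get("uid", ["unknown"])[0]
        log_click(uid, self.client_address[0])
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"This was a phishing simulation. A short training follow-up is coming.")

# To run the service (blocks until interrupted):
# HTTPServer(("0.0.0.0", 8080), ClickLogger).serve_forever()
```

Embedding a unique `uid` per recipient in each simulation link lets you join click events back to users without capturing any credentials.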
Define metrics and baseline calculations
Metrics must be precise and auditable. Core metrics to collect and report: delivered count, unique clicks, reported attempts, credential submissions, median time‑to‑report, remediation training completion within X days, and repeat‑offender counts. Use clear formulas: Click rate = (unique users who clicked / unique users delivered to) * 100; Report rate = (users who reported / unique users delivered to) * 100; Remediation rate = (clicked users who completed training within 7 days / users who clicked) * 100. Store both the raw logs (timestamped events) and the computed aggregates; auditors will expect raw evidence as well as summarized KPIs.
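The formulas above translate directly into code, which keeps every report reproducible from the same raw counts. The example inputs are illustrative.

```python
# The three KPI formulas from the text, as plain functions over raw counts.
def click_rate(unique_clicks: int, delivered: int) -> float:
    """Percentage of delivered users who clicked."""
    return unique_clicks / delivered * 100

def report_rate(reports: int, delivered: int) -> float:
    """Percentage of delivered users who reported the phish."""
    return reports / delivered * 100

def remediation_rate(trained_in_window: int, clickers: int) -> float:
    """Percentage of clickers who completed training within the window."""
    return trained_in_window / clickers * 100

# Illustrative baseline: 20 delivered, 6 clicked, 1 reported, all 6 remediated.
print(click_rate(6, 20))        # -> 30.0
print(report_rate(1, 20))       # -> 5.0
print(remediation_rate(6, 6))   # -> 100.0
```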
Reporting: format, cadence, and evidence for auditors
Prepare a repeatable report template for compliance submissions: an executive summary with top‑level KPIs and trend charts, a controls mapping section referencing ECC‑2:2024 Control 1‑10‑1, detailed tables by department, campaign artifacts (template screenshots, campaign plan), raw logs (CSV/JSON), and remediation records (user IDs, course completion timestamps). Recommended cadence: monthly internal dashboards and quarterly formal reports to leadership and auditors. Include a "chain of custody" note for logs (where stored, retention policy, and hash of exported CSVs) to preserve integrity for compliance review.
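The "hash of exported CSVs" step can be automated with a few lines of standard-library Python. This sketch computes a SHA‑256 digest that you record alongside each exported log; the file name is illustrative.

```python
# Chain-of-custody helper: hash an exported evidence file so auditors can
# verify it was not altered after export.
import hashlib

def file_sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Record the digest in the report itself (and ideally in a separate ticket or log), so any later modification of the CSV is detectable.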
Small‑business example scenario
Example: a 25‑employee accounting firm runs an initial baseline campaign to 20 employees and sees a 30% click rate (6 users) and a 5% report rate (1 user). They then deliver targeted training to the 6 users who clicked, tighten mail‑filtering rules to reduce phishing delivery, and schedule short monthly simulations on varied themes. After three quarters, metrics show the click rate reduced to 9% and remediation completion at 100% within 7 days for those who clicked; these numbers, plus exported campaign logs and training certificates, form the compliance evidence pack mapped to Control 1‑10‑1.
Risks of not implementing
Without regular simulations you lack measurable proof that human risk is managed — this increases the chance of credential compromise, lateral movement, and successful small‑scale breaches that can become reportable incidents under regulatory regimes. For compliance, absence of evidence is often treated as non‑compliance; auditors will flag missing campaigns, no baseline data, and no remediations as control failures. Operationally, employees remain susceptible, and the organization misses the chance to validate reporting channels and incident response playbooks.
In summary, run measured, documented phishing simulations aligned to ECC‑2:2024 Control 1‑10‑1: plan the scope, design safe templates, coordinate technical settings (SPF/DKIM/whitelists), collect precise metrics backed by raw logs, and produce a repeatable evidence package for auditors. For small businesses, start small, focus on quick remediation, and publish clear KPI targets and trend reports to demonstrate continuous improvement and compliance.