Every regulatory clause maps to one of the three deficiencies
Compliance is what becomes possible once the three deficiencies are addressed on every change.
Mark named the deficiencies that make audits painful. Addressing them satisfies the major regulatory frameworks by construction - AI-specific and otherwise: NIST AI RMF, the EU AI Act, ISO 42001, NAIC, NYDFS, 21 CFR Part 11. The same record that closes the deficiencies answers the regulator.
"We and every other software company in the world are outstripping our ability to test what we're building."
Why now: the velocity of agentic coding has decoupled from the velocity of testing, auditing, and validation - the knowledge and proof that AI agents did what they were tasked to do. An AI agent can produce more code in a day than a team used to write in a sprint. The test, audit, and compliance layers have not accelerated at the same rate. The gap is structural and widens with every model release.
Three deficiencies - in every company today - that no software addresses:
- determining which tests need to run for a particular release
- checking whether they ran
- recording the outcome
Mark Walker, nue.io - meeting transcript [00:46:36]
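The three deficiencies can be read as three fields of a single per-release record: which tests were required, which ran, and what outcome was recorded. A minimal sketch of that idea (all names are illustrative, not the CODITECT schema):

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseTestRecord:
    """One row per release: required tests, tests that ran, recorded outcomes."""
    release: str
    required: set[str] = field(default_factory=set)         # deficiency 1: which tests must run
    ran: set[str] = field(default_factory=set)              # deficiency 2: whether they ran
    outcomes: dict[str, str] = field(default_factory=dict)  # deficiency 3: recorded result per test

    def gaps(self) -> dict[str, set[str]]:
        """Tests that never ran, and tests that ran without a recorded outcome."""
        return {
            "not_run": self.required - self.ran,
            "no_outcome": self.ran - self.outcomes.keys(),
        }

rec = ReleaseTestRecord(
    release="2024.06.1",
    required={"unit", "integration", "bias_probe"},
    ran={"unit", "integration"},
    outcomes={"unit": "pass"},
)
print(rec.gaps())  # {'not_run': {'bias_probe'}, 'no_outcome': {'integration'}}
```

Any non-empty `gaps()` result is exactly the hole an auditor would find later; surfacing it per release is the whole point.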
Frameworks covered
- NIST AI Risk Management Framework (AI RMF 1.0) - GOVERN, MAP, MEASURE, MANAGE functions
- EU AI Act - high-risk AI systems (Art. 9-15): risk management, technical documentation, record-keeping; post-market monitoring
- ISO/IEC 42001:2023 - AI management systems
- ISO/IEC 27001:2022 - information security management
- NAIC Model Bulletin on AI Use by Insurers (2023) - governance, transparency, third-party AI
- NYDFS Part 500 - cybersecurity for financial services, including AI-driven systems
- FDA 21 CFR Part 11 - electronic records and signatures for regulated industries
- FDA SaMD (Software as a Medical Device) and IEC 62304 - regulated medical software lifecycle
- HIPAA Security Rule - safeguards for protected health information
- SOC 2 Type II - security, availability, processing integrity, confidentiality, privacy
NIST AI Risk Management Framework
| Function / category | What the framework requires | How CODITECT satisfies it | Evidence produced |
|---|---|---|---|
| GOVERN-1 | AI risk management policies are documented and applied | Standards (`STD-*`) and ADRs are versioned, enforced at commit time, drift detected automatically | Standards repository, signed compliance gate logs |
| MAP-1.1 | Context of AI use is established | Every change starts from an approved task with a regulatory framework tag and an approving engineer | Project database row per task, immutable |
| MEASURE-2 | Trustworthiness characteristics measured | Test pass-rate and decision-log probes run today; the bias and drift probe framework is part of the OBSERVE phase of the Universal Quality Development Harness per ADR-320 | Probe results in audit bundle |
| MANAGE-4 | Risk responses are tracked | Every alert opens an ITIL incident, a remediation task, a closure record with evidence | Incident timeline, signed |
EU AI Act
| Article | Requirement | How CODITECT satisfies it | Evidence produced |
|---|---|---|---|
| Art. 9 | Risk management system across the AI system lifecycle | Continuous risk probes, ITIL incident management, change-management records | Risk register, incident bundles |
| Art. 10 | Data governance for training and operational data | Every model call records the data the model saw, the model used, the routing decision | Per-call audit row, retained immutably |
| Art. 11 | Technical documentation maintained | SDDs, ADRs, IQ/PQ/VTR/RTM produced as a byproduct of normal work | Generated audit documents per change |
| Art. 12 | Logging, automatic recording of events | Append-only audit trail; database triggers reject mutation of history rows | Audit log, queryable indefinitely |
| Art. 13 | Transparency and provision of information to users | Decision-replay surface: any agent decision, with model used, prompt, alternatives, rationale | Decision-trace per task |
| Art. 14 | Human oversight | Approval gates on production-affecting changes; approver name signed into the deploy event | Signed deploy events |
| Art. 15 | Accuracy, robustness, cybersecurity | Tests required per change; security scans; foundation-model-agnostic routing absorbs provider failures | Test bundle, security scan results |
| Art. 17 | Quality management system | The platform is a QMS - that is the entire CODITECT proposition | The audit bundle per change is the QMS record |
| Art. 61 | Post-market monitoring | Runtime alerts open ITIL incidents automatically; model drift detected via output probes | Post-market incident register |
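The Art. 12 row above - "database triggers reject mutation of history rows" - can be enforced at the database layer itself. A minimal sketch using SQLite triggers (table and column names are assumptions for illustration; a Postgres deployment uses the same trigger mechanism):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE audit_log (
    id INTEGER PRIMARY KEY,
    task_id TEXT NOT NULL,
    event TEXT NOT NULL,
    recorded_at TEXT NOT NULL DEFAULT (datetime('now'))
);
-- Append-only: any attempt to rewrite or erase history is rejected.
CREATE TRIGGER audit_no_update BEFORE UPDATE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit rows are immutable'); END;
CREATE TRIGGER audit_no_delete BEFORE DELETE ON audit_log
BEGIN SELECT RAISE(ABORT, 'audit rows are immutable'); END;
""")

conn.execute("INSERT INTO audit_log (task_id, event) VALUES (?, ?)",
             ("T-101", "deploy approved"))
try:
    conn.execute("UPDATE audit_log SET event = 'tampered'")
except sqlite3.IntegrityError as e:
    print(e)  # audit rows are immutable
```

Inserts succeed; updates and deletes abort at the engine, below the application code, which is what makes the log trustworthy as evidence.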
ISO/IEC 42001 - AI management system
ISO 42001 is the new AI-specific management system standard, structurally aligned with ISO 27001. It requires policies, leadership commitment, planning, support, operation, performance evaluation, and improvement. CODITECT addresses every clause through the same primitives that satisfy NIST AI RMF and the EU AI Act:
- Clause 5 (Leadership) - signed approval gates record the responsible person against every decision.
- Clause 6 (Planning) - tasks, sprints, and risk register are first-class records, not separate tools.
- Clause 8 (Operation) - Blueprints are deterministic, replayable; agent runs are not improvisations.
- Clause 9 (Performance evaluation) - continuous probes feed dashboards; deviations open tracked issues.
- Clause 10 (Improvement) - iteration tasks open automatically when probes exceed thresholds.
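Clauses 9 and 10 work together: probes evaluate, and any deviation opens a tracked task. A hedged sketch of that loop (probe names, thresholds, and the `open_task` hook are invented for illustration):

```python
# Each probe reports (measured value, threshold). When a value falls below
# its threshold, a tracked improvement task is opened automatically.
def evaluate_probes(probes, open_task):
    opened = []
    for name, (value, threshold) in probes.items():
        if value < threshold:  # deviation detected
            opened.append(open_task(f"probe '{name}' below threshold: {value} < {threshold}"))
    return opened

tasks = []
evaluate_probes(
    {"test_pass_rate": (0.91, 0.95), "decision_log_coverage": (1.0, 0.99)},
    open_task=lambda title: tasks.append(title) or title,
)
print(tasks)  # ["probe 'test_pass_rate' below threshold: 0.91 < 0.95"]
```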
NAIC Model Bulletin (insurance) and NYDFS Part 500 (financial services)
The NAIC Model Bulletin requires insurers to establish AI governance with documented standards, third-party due diligence, transparency to consumers, and ongoing monitoring of AI system outcomes. NYDFS Part 500 extends similar requirements to financial-services cybersecurity, with explicit attention to AI-driven systems.
| Theme | Requirement | How CODITECT satisfies it |
|---|---|---|
| AI governance | Documented program, board oversight, named officer accountability | ADR set, standards repository, approval-gate audit trail |
| Third-party AI | Due diligence on AI vendors and foundation models | Foundation-model-agnostic routing recorded per call; provider switch is auditable |
| Transparency | Consumer-facing explainability where adverse decisions occur | Decision-replay surface produces the explanation on request |
| Monitoring | Ongoing evaluation of outcomes for fairness, accuracy, drift | Continuous probes, ITIL incident pipeline, model-drift detector |
| Cybersecurity (NYDFS 500) | Encryption, MFA, incident reporting within 72 hours | Encrypted local + cloud stores, MFA on portal, ITIL incident reporting baked in |
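The transparency row depends on a decision trace rich enough to replay on request. A sketch under assumed field names (not the actual CODITECT trace format):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionTrace:
    """The fields needed to explain one agent decision after the fact."""
    task_id: str
    model: str
    prompt: str
    alternatives: tuple[str, ...]
    chosen: str
    rationale: str

    def explain(self) -> str:
        others = ", ".join(a for a in self.alternatives if a != self.chosen)
        return (f"Task {self.task_id}: model {self.model} chose '{self.chosen}' "
                f"over {others or 'no alternatives'} because {self.rationale}")

trace = DecisionTrace(
    task_id="T-2214",
    model="router/model-a",              # illustrative identifier
    prompt="Classify claim severity",
    alternatives=("approve", "escalate"),
    chosen="escalate",
    rationale="confidence below threshold",
)
print(trace.explain())
```

Because the trace is frozen and stored per decision, the consumer-facing explanation is a read, not a reconstruction.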
FDA 21 CFR Part 11, SaMD, IEC 62304
For regulated medical and pharmaceutical software, electronic records and electronic signatures must be trustworthy, reliable, and equivalent to paper records. The system must maintain audit trails, control versions, and produce evidence on demand.
- 21 CFR Part 11.10(c) - audit trails - append-only audit tables; database triggers reject mutation; cloud WORM lifecycle for evidence.
- 11.50 - signed records - HMAC-signed records with the responsible engineer's identity and timestamp.
- 11.10(e) - operational checks - Blueprint checkpoints enforce that each step ran as specified.
- SaMD lifecycle (IEC 62304) - SDD, TDD, design verification, release records produced as part of normal work.
- Validated state - the platform itself runs under SaMD-style validation: every CODITECT release ships with its own IQ/PQ/VTR/RTM.
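The HMAC-signed record in the 11.50 bullet above can be sketched in a few lines. Key management and the real record schema are out of scope here, and the field names are assumptions:

```python
import hmac, hashlib, json

SECRET = b"per-tenant signing key"  # illustrative; real key storage is out of scope

def sign_record(record: dict) -> dict:
    """Attach an HMAC over the canonical JSON form of the record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return {**record, "sig": hmac.new(SECRET, payload, hashlib.sha256).hexdigest()}

def verify_record(signed: dict) -> bool:
    """Recompute the HMAC over everything except the signature and compare."""
    body = {k: v for k, v in signed.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signed["sig"], expected)

rec = sign_record({"engineer": "jdoe", "action": "release approved",
                   "ts": "2025-01-07T12:00:00Z"})
assert verify_record(rec)
rec["action"] = "release rejected"   # any tampering breaks the signature
assert not verify_record(rec)
```

Signing binds the responsible engineer's identity and timestamp to the record; verification fails on any post-hoc edit, which is the property Part 11 asks for.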
HIPAA, SOC 2, ISO 27001
These frameworks govern security posture rather than AI specifically. CODITECT inherits and exposes the controls a customer auditor expects:
- Multi-tenant isolation - row-level security on cloud Postgres; no cross-tenant data leak by construction.
- Encryption - at rest (database, object storage), in transit (TLS), per-tenant key isolation where required.
- Access control - RBAC tied to tenant, team, project, and user; principle of least privilege.
- Audit trail - the same trail that satisfies the AI Act's logging requirement satisfies SOC 2 CC7.
- Incident reporting - 72-hour breach notification timelines instrumented by the ITIL layer.
- Backup, restore, business continuity - local SQLite with continuous sync to cloud SSOT, point-in-time restore, tested DR runbook.
How regulatory mapping is implemented
A control register lives inside the platform. Each row maps a regulatory clause to a CODITECT primitive (a Blueprint step, an audit-trail field, a standard, an ADR). When a new regulation lands, the register is amended; existing primitives that satisfy the new clause are linked; gaps open as tracked tasks.
Customer compliance teams query this register to answer "show me how we cover Art. 12 of the AI Act" and get back the live list of platform features producing the evidence. The same query, run daily as a probe, alerts when any clause becomes uncovered.
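The register and its daily probe can be pictured as follows (clause names and rows are illustrative, not the shipped register):

```python
# Hypothetical control register: each row maps a regulatory clause to a
# platform primitive and the evidence that primitive produces.
REGISTER = [
    {"clause": "EU AI Act Art. 12", "primitive": "append-only audit trail", "evidence": "audit log"},
    {"clause": "EU AI Act Art. 12", "primitive": "database mutation triggers", "evidence": "trigger config"},
    {"clause": "21 CFR 11.50",      "primitive": "HMAC-signed records",        "evidence": "signed deploy events"},
]

def coverage(clause: str) -> list[dict]:
    """Answer 'show me how we cover <clause>' from the live register."""
    return [row for row in REGISTER if row["clause"] == clause]

def uncovered(required_clauses: list[str]) -> list[str]:
    """Run daily as a probe: alert on any clause with no covering primitive."""
    return [c for c in required_clauses if not coverage(c)]

print(len(coverage("EU AI Act Art. 12")))                  # 2
print(uncovered(["EU AI Act Art. 12", "HIPAA 164.312"]))   # ['HIPAA 164.312']
```

When a new regulation lands, only `REGISTER` rows change; the coverage query and the daily probe are untouched.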
What this means in practice
An audit or examination is no longer a project that starts months in advance to assemble documentation. The documentation already exists in the database, signed and timestamped, queryable by clause. The only audit work that remains is reading the answer.