Building AI is the easy part. Governing it — so that it runs reliably, handles sensitive data responsibly, satisfies regulators, and doesn't fail silently — is the engineering that most teams skip. We don't.
We published the Agent Engineering Standard — a 13-category governance specification for production AI agent systems, developed from direct experience with systems that broke in production. Every category traces back to a real failure mode: token budgets that exploded, error taxonomies that didn't exist, audit trails that couldn't answer the regulator's question. We've developed AI Use Policies adopted by organisations working in M&E, peacebuilding, and financial services — covering data classification, human-in-the-loop verification, incident response, and donor disclosure.
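As a flavour of what these categories enforce in practice, here is a minimal sketch of a hard token-budget guard, the kind of control that would have caught the exploding budgets above. All names here (`TokenBudget`, `BudgetExceededError`, `run_agent`) are illustrative, not the standard's actual interface:

```python
# Hypothetical sketch: a hard token budget around an agent loop, so cost
# overruns surface as explicit errors instead of silent runaway spend.

class BudgetExceededError(RuntimeError):
    """Raised when cumulative token usage crosses the configured ceiling."""

class TokenBudget:
    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, prompt_tokens: int, completion_tokens: int) -> None:
        # Fail loudly the moment the ceiling is crossed, not when the bill arrives.
        self.used += prompt_tokens + completion_tokens
        if self.used > self.max_tokens:
            raise BudgetExceededError(
                f"agent consumed {self.used} tokens against a budget of {self.max_tokens}"
            )

def run_agent(steps, budget: TokenBudget) -> int:
    # Each step is a callable standing in for one LLM call and returns usage counts.
    for step in steps:
        usage = step()
        budget.charge(usage["prompt_tokens"], usage["completion_tokens"])
    return budget.used
```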
- Agent Engineering Standard implementation — mapping the 13 categories to your specific systems and regulatory environment
- AI Use Policies for your organisation — data classification frameworks (Red/Amber/Green; see the sketch after this list), prohibited uses, verification requirements, incident response protocols
- Compliance mapping for regulatory regimes: EU AI Act (Articles 9–17, 26), MiFID II, GDPR, CBB regulations, and donor-specific requirements (USAID, FCDO, EU, Netherlands MFA)
- AI governance audits of existing systems — identifying gaps between your current controls and what production and compliance require
- Engineering standards for supporting infrastructure: Solidity smart contracts, Terraform/IaC, CI/CD pipelines — each following the same three-layer enforcement model
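To make the Red/Amber/Green framework above concrete, here is a minimal sketch of the classification gate such a policy implies: Red data never reaches a model, Amber requires human sign-off, Green passes through. The tier names come from the policy; everything else (`Tier`, `gate`, the example rationales) is hypothetical:

```python
# Illustrative sketch of a Red/Amber/Green data-classification gate.
from enum import Enum

class Tier(Enum):
    RED = "red"      # prohibited from any AI processing (e.g. data on at-risk individuals)
    AMBER = "amber"  # permitted only with human-in-the-loop verification
    GREEN = "green"  # permitted

def gate(record: dict, tier: Tier, human_reviewed: bool = False) -> dict:
    """Return the record only if policy permits sending it to an external model."""
    if tier is Tier.RED:
        raise PermissionError("Red-tier data must never be sent to an AI system")
    if tier is Tier.AMBER and not human_reviewed:
        raise PermissionError("Amber-tier data requires human verification first")
    return record
```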
Who this is for: Organisations deploying AI, handling sensitive data, or reporting to institutional stakeholders who ask how AI is governed. If your board, regulator, or donor will eventually ask “how do you control your AI systems?”, this is how you have the answer ready before the question lands.