Mapped to the frameworks you're already being asked about.
For every framework below, Kaldros ships a control mapping, an evidence pack template, and a verifier your auditor can run offline. Pick a framework to see the exact controls we answer.
EU AI Act. Obligations for providers and deployers of high-risk AI systems: logging (Art. 12), human oversight (Art. 14), post-market monitoring (Art. 72), and transparency to affected persons.
DORA. EU Regulation 2022/2554 for financial entities: ICT risk management, incident reporting, operational resilience testing, and third-party risk — including AI-powered ICT services.
NIS2. Cybersecurity obligations for essential and important entities: risk management, incident handling, supply-chain security, and 24-hour early warning for significant incidents.
ISO/IEC 42001. Requirements for establishing, implementing, maintaining, and continually improving an AI management system, with Annex A controls for data, lifecycle, monitoring, and third parties.
NIST AI RMF. Voluntary framework from NIST for trustworthy AI, organized into four functions: Govern, Map, Measure, Manage. Widely used as the spine of enterprise AI governance programs.
SOC 2. Type 1 and Type 2 reports on the Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. The baseline ask from most US enterprise buyers.
HIPAA. Safeguards for protected health information (PHI). When agents touch PHI, §164.312(b) audit controls and §164.308(a)(1)(ii)(D) information system activity review become regulator evidence.
PCI DSS. When an agent touches cardholder data or sensitive authentication data, Requirement 10 (logging and monitoring) and Requirement 12 (policies and programs) apply.
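To make the "verifier your auditor can run offline" idea concrete, here is a minimal sketch of what such a check could do: confirm that every required control has at least one evidence item, and that the shipped log forms an unbroken hash chain. Everything here is illustrative — the evidence-pack shape, the `verify_evidence_pack` name, and the hash-chained log format are assumptions for the sake of the example, not Kaldros's actual format.

```python
import hashlib

def verify_evidence_pack(pack: dict, required_controls: set) -> list:
    """Offline check of a hypothetical evidence pack.

    Returns a list of findings; an empty list means the pack passes.
    - Every control ID in `required_controls` must appear in the
      pack's evidence items.
    - Each log entry's `hash` must equal SHA-256(previous hash +
      payload), starting from an all-zero genesis value, so any
      tampered or deleted entry breaks the chain.
    """
    findings = []

    # Coverage check: which required controls have no evidence at all?
    covered = {item["control_id"] for item in pack.get("evidence", [])}
    for control in sorted(required_controls - covered):
        findings.append(f"missing evidence for control {control}")

    # Integrity check: recompute the hash chain over the log.
    prev = "0" * 64  # genesis value for the chain
    for i, entry in enumerate(pack.get("log", [])):
        expected = hashlib.sha256((prev + entry["payload"]).encode()).hexdigest()
        if entry["hash"] != expected:
            findings.append(f"log entry {i} breaks the hash chain")
        prev = entry["hash"]

    return findings
```

Because the verifier needs only the pack itself and a list of required control IDs, an auditor can run it on an air-gapped machine with no calls back to the vendor — which is the point of an offline verifier.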