SABSA TRAINING CONSULTING

TACO – Integrating Control Objectives for AI

Malcolm Shore

Chief Architect

There are a number of well-established standards and guidance documents for technology which have proved to have enduring value in the management and control of systems across enterprises. However, whatever their individual focus, standards and guidelines often come from different viewpoints and need to be brought together in a framework such as SABSA to ensure proper alignment with business and technology. Going one step further and building a common architectural model provides a significant benefit when assessing process maturity.

Cybersecurity is a case in point. In the research paper Toward Effective Cybersecurity Management: A Hierarchical Process Model with Performance Assessment by Liu et al., a common performance model was developed by aligning four key standards: the Open Group Information Security Management Maturity Model (ISM3), the ISO 27000 international standard for an information security management system, NIST's Cybersecurity Framework, and ISACA's Control Objectives for Information and Related Technologies (COBIT). This required normalising the standards and guidelines into processes, for which maturity is measured, and control objectives, for which effectiveness is measured. The model has a set of strategic, tactical and operational cybersecurity governance processes supported by an associated set of cybersecurity services and control objectives, all linked by the use of SABSA attributes, which together deliver security to protect business value.

The emergence of AI as a disruptive technology has spurred the development of standards and guidelines for the safe, secure and responsible management of AI. As in the cybersecurity example, these do not align readily, and an architectural model is required. In particular, ISO has released ISO/IEC 42001 (AI Management System), NIST has released the AI Risk Management Framework (AI RMF), and ISACA has released an AI Toolkit providing guidance for AI audits. Furthermore, the European Union has released its AI Act, which sets out requirements for the effective and responsible management of AI. Integrating these requirements in a Trusted AI Control Objectives (TACO) model supports an integrated compliance approach.

The ISO 42001 standard has a set of management processes and, at Annex A, a set of control objectives, all of which are enterprise-level controls. For example, A.2.2 states that “The organization shall document a policy for the development or use of AI systems.” Control objectives require controls to be defined to achieve the objective. These will normally involve a process, which can be manual or automated, and a mechanism, which achieves the mitigation. In the case of ISO 42001 A.2.2, the process is documenting a policy and the mechanism is the policy itself. Measuring the process of documenting a policy results in a maturity level, and measuring the policy itself results in a metric representing the control performance, i.e. how effective the policy is.
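This two-dimensional assessment can be sketched in code. The following is a minimal illustration only: the class, the maturity scale, and the thresholds are assumptions made for the sketch, not part of ISO 42001.

```python
from dataclasses import dataclass

@dataclass
class ControlAssessment:
    objective_id: str               # e.g. "A.2.2"
    process_maturity: int           # maturity of the process, illustrative 0-5 scale
    mechanism_effectiveness: float  # performance metric for the mechanism, 0.0-1.0

    def is_effective(self, maturity_floor: int = 3,
                     performance_floor: float = 0.8) -> bool:
        # Both dimensions must meet their (illustrative) thresholds:
        # the process must be mature AND the mechanism must perform.
        return (self.process_maturity >= maturity_floor
                and self.mechanism_effectiveness >= performance_floor)

# For A.2.2: the process is documenting the policy, the mechanism is the
# policy itself. Here the documentation process is mature, but the policy
# underperforms, so the control objective is not met.
a22 = ControlAssessment("A.2.2", process_maturity=4, mechanism_effectiveness=0.7)
print(a22.is_effective())
```

The point of separating the two measures is that a well-run process can still produce a weak mechanism, and vice versa; only assessing both gives a true picture of control effectiveness.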

The NIST AI RMF similarly details Govern, a set of enterprise-level requirements. However, it also details three functions known as Map, Measure, and Manage, which are system-level requirements operating under the umbrella of the Govern requirements. These are, in essence, control objectives. For example, GOVERN 1.1 states: “Legal and regulatory requirements involving AI are understood, managed, and documented.” This is a requirement that can be tested by interviewing staff or sighting documentation; if it does not hold, a scenario can be developed describing the resulting business impact, and an associated risk can be raised.

The EU AI Act requirements are presented in seven groups of control objectives which apply variously to the enterprise or a system being assessed. The ISACA AI Toolkit is a much more granular set of tiered controls covering both the enterprise and individual AI systems.

All four sources of requirements can be aligned into a TACO model for managing AI risk using a similar approach to that adopted by Liu et al. This model is based on a structure of enterprise-wide requirements at the strategic level and system-level requirements at the tactical and operational levels.

The various requirements in the AI standards and guidelines have been developed independently and have a high level of overlap. From an architectural perspective, standards compliance is best managed by consolidating all the relevant requirements into a single integrated controls framework while retaining the mapping back to the original standard or guideline. This enables a single framework of requirements (or control objectives) to be used for managing risk whilst still enabling the assessment of compliance with the original standards. We can represent this model of risk as shown below:

Example of Risk Aggregation

In this example, the virtual Procurement Officer (vPO) is a component-layer technology, subject to procedures such as impact assessments and built using datasets (such as the example vpo512 dataset) and models (such as GPT-4o). Each of these component-layer items has risks represented by security attributes, and these map up to the complete attribute taxonomy shown at the conceptual layer. The vPO system can be compliance-checked against any of the four standards using the TACO model.
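The roll-up of component-layer risks to the conceptual-layer attribute taxonomy can be sketched as follows. The scoring scale and the worst-case aggregation rule are assumptions made for illustration; only the component and attribute names are drawn from the vPO example.

```python
# Component-layer items carry risks tagged with SABSA attributes; these
# roll up to the conceptual-layer attribute taxonomy. Scores (1-5) are
# invented for the sketch.
component_risks = [
    ("vpo512 dataset", "Unbiased", 3),      # (component, attribute, risk score)
    ("vpo512 dataset", "Complete", 2),
    ("GPT-4o model", "Grounded", 4),
    ("impact assessment", "Risk-managed", 2),
]

def roll_up(risks):
    """Aggregate component risks per attribute, keeping the worst score
    so the conceptual layer reflects the highest residual risk below it."""
    taxonomy = {}
    for _component, attribute, score in risks:
        taxonomy[attribute] = max(taxonomy.get(attribute, 0), score)
    return taxonomy

print(roll_up(component_risks))
```

Keeping the worst score per attribute is one possible aggregation policy; an architecture could equally weight components or average scores, depending on the organisation's risk appetite.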

The TACO model can be detailed as shown in the table below, with the four sets of control objectives rolled up into a more concise set of generic control objectives for use in architectural activities. This, of course, is supported by a separate mapping of the requirements of specific standards to TACO.

Clause | Control Objective | SABSA Attribute(s)

Strategic: Organization
4.1 Organizational Context | Understand the context | Informed
4.2 Interested Party Requirements | Include all requirements | Inclusive
4.3 Determining the Scope of the AIMS | Set the AI boundaries | Bounded
4.4 Establish the AIMS | Ensure ethical AI | Trustworthy, Privacy-enabled
4.4 Establish the AIMS | AI is governed | Governed (providing-ROI, approved, compliant, informed, inclusive)
4.4 Establish the AIMS | Complies with regulations | Compliant

Strategic: Leadership
5.1 Management Commitment | Ensure sponsorship | Accountable
5.2 AI Policy | Establish AI policy | Governed (approved, compliant, informed)
5.3 Roles and Responsibilities | Establish RACI | –

Strategic: Planning
6.1 Risk Management | Control risk to AI | Risk-managed
6.2 AI Objectives | AI is business-enhancing | Providing-ROI
6.3 Change Management | Control changes to AI | Change-managed

Strategic: Support
7.1 Competence Management | Establish AI training | Educated
7.2 Awareness | – | –
7.3 Communications | Keep stakeholders informed | Informed
7.4 Documentation | Document AI data and systems | Documented

Tactical: SDLC
SDLC | Secure design and delivery | Business-aligned, Documented, Tested
Data | High-quality AI data | Complete, Discrete, Formatted, Labelled, Normalised, Relevant, Simplified, Specified, Legitimate, Unique

Operational: Operation
8.1 Operational Procedures | AI is cyber secure | Available, Reliable, Secure, Resilient, Access-controlled, Monitored
8.1 Operational Procedures | AI operates effectively | Accurate, Controlled, Transparent, Safe, Fair, Unbiased, Grounded, Responsive
8.1 Operational Procedures | AI decisions can be explained | Explainable
8.2 Risk Assessment | AI does not cause business disruption | Risk-managed
8.3 Risk Treatment | – | –
8.4 System Impact Assessment | – | –

Operational: Performance
9.1 Monitoring and Measurement | Model performance is monitored | Monitored
9.2 Internal Audit | Policy compliance | Compliant, Audited
9.3 Management Review | Value is achieved | Providing-ROI

Operational: Continuous Improvement
10.1 Continuous Improvement | The AIMS is regularly reviewed and improved | Governed
10.2 Non-Conformities | Non-conformities are identified and corrected | Error-Free
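The separate mapping of standard-specific requirements back to TACO can be illustrated with a small data structure. This fragment is hypothetical: the objective names echo the table above, but the exact clause cross-references shown are placeholders for illustration, not the published mapping.

```python
# Hypothetical fragment of the integrated controls framework: each generic
# TACO control objective retains a mapping back to the source requirements
# it consolidates, so a single assessment can still be reported against
# any one of the four standards.
TACO_MAPPING = {
    "Establish AI policy": {
        "ISO 42001": ["5.2", "A.2.2"],       # illustrative clause references
        "NIST AI RMF": ["GOVERN 1.2"],
    },
    "Understand the context": {
        "ISO 42001": ["4.1"],
        "NIST AI RMF": ["MAP 1.1"],
    },
}

def objectives_for(standard: str) -> list[str]:
    """Return the TACO objectives that evidence compliance with `standard`."""
    return [taco for taco, sources in TACO_MAPPING.items() if standard in sources]

print(objectives_for("NIST AI RMF"))
```

Assessing each TACO objective once and reporting through such a mapping is what lets a system like the vPO be checked against any of the four standards without four separate assessments.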