AI technology is developing very quickly; however, its adoption in companies is still rather slow. One reason seems to be a lack of trust in AI systems, caused by hallucinations, privacy concerns, and other issues. Very often, AI governance is mentioned as the solution to those problems.
But what exactly is AI governance?
AI governance sets clear rules on how AI systems are managed: defining goals, assessing risks, selecting controls, writing policies and procedures, and assigning roles and responsibilities.
The basics of AI governance
AI governance basically means that a company sets clear rules on how AI systems are developed and used throughout their entire life cycle.
These rules are typically defined through various technical controls (e.g., guardrails), data management (e.g., ensuring quality of training data), documentation (e.g., policies and procedures, roles and responsibilities), etc.
But how do you know what these rules need to define? How should controls be configured? And how should you handle training data?
How AI governance works
The core concept of AI governance is risk-based: you assess the potential risks that could arise when AI systems are used, and then, based on those risks, define which controls you need to implement to reduce them.
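The risk-based approach described above can be sketched as a simple scoring exercise: rate each risk's likelihood and impact, multiply the two, and treat any risk whose score exceeds a chosen threshold. This is a minimal illustration only; the ratings, threshold, and function names below are hypothetical and not prescribed by any standard.

```python
# Minimal, illustrative risk-scoring sketch (not prescribed by any standard):
# each risk gets a likelihood and an impact rating from 1 (low) to 5 (high);
# risks whose product exceeds a chosen threshold require treatment (controls).

RISKS = {
    "bias in hiring model": (3, 5),        # (likelihood, impact)
    "data leakage via prompts": (4, 4),
    "hallucinated facts in output": (5, 3),
}

TREATMENT_THRESHOLD = 12  # arbitrary example cutoff

def risks_requiring_treatment(risks, threshold):
    """Return risks whose score (likelihood * impact) exceeds the threshold."""
    return {name: l * i for name, (l, i) in risks.items() if l * i > threshold}

for name, score in risks_requiring_treatment(RISKS, TREATMENT_THRESHOLD).items():
    print(f"{name}: score {score} -> define controls")
```

In practice, companies use a documented risk assessment methodology rather than ad-hoc code, but the logic is the same: the higher the risk, the stronger the controls it requires.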
There are numerous risks — here are some examples:
- Bias – This is when an AI system systematically treats certain individuals or groups of people differently from others. An example could be an AI system for hiring new employees that prefers men over women.
- Data leakage – For example, this could happen if an AI system uses the data a user has entered to train its large language model, and this same data is later displayed in a response to another user.
- Hallucinations – This is a response generated by AI that contains false or misleading information, which is presented as fact.
And here are some examples of controls that could be applied to decrease those risks:
- Bias – For example, you might check the quality of training data, or you might have human oversight over the output.
- Data leakage – E.g., use an in-house small language model rather than an external large language model; if you do use an external LLM, anonymize the data sent to it, or redact the most sensitive parts of the data.
- Hallucinations – E.g., check the validity of input data, and also validate the outputs.
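The data-leakage and output-validation controls listed above can be illustrated with a minimal sketch. The regex patterns and function names below are hypothetical placeholders; a real deployment would use a dedicated PII-detection library and an actual model client rather than two regular expressions.

```python
import re

# Illustrative patterns for sensitive data; a real system would rely on a
# dedicated PII-detection library, not two hand-written regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(prompt: str) -> str:
    """Data-leakage control: redact sensitive values before the prompt
    leaves the company for an external LLM."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return SSN.sub("[SSN]", prompt)

def validate_output(answer: str, allowed_sources: list[str]) -> bool:
    """Toy hallucination control: accept only answers that reference
    one of the approved internal sources."""
    return any(src in answer for src in allowed_sources)

redacted = anonymize("Contact john@example.com, SSN 123-45-6789, about the claim.")
print(redacted)  # Contact [EMAIL], SSN [SSN], about the claim.
```

The point is not the specific checks but the pattern: each identified risk maps to one or more concrete, testable controls in the AI pipeline.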
Typical goals and objectives for AI governance
Ultimately, the goals of implementing all of those controls as part of AI governance should be safe, responsible, and ethical AI systems. Put simply, the goal is trustworthiness, which summarizes all of the previously mentioned goals.
Of course, besides trustworthiness, many companies also want to achieve compliance with AI laws and regulations, and even market visibility if they can prove to their customers that they implemented appropriate AI governance.
Besides these general goals, there are many other objectives that are important for AI governance — here are some examples:
- Accountability
- Environmental impact
- Fairness
- Maintainability
- Privacy and security
- Robustness
- Safety
- Transparency and explainability
- Human oversight
- Etc.
Typical policies and procedures
To be able to handle all of those controls, companies will need to write various internal documents — otherwise, it would be impossible to make sure all of these controls really work.
For example, you might have the following documents as part of your AI governance:
- Top-level AI policy — defines the strategic direction of the company for AI, and general roles and responsibilities
- AI risk management methodology — defines how the risks are assessed and treated
- AI data management policy — defines how the training data is sourced, checked, and prepared for AI systems
- AI design and development policy — describes how AI systems are specified, developed, and tested
- AI operating procedures — define how the AI systems are handled when in production
- AI acceptable use policy — prescribes for end users which activities are allowed, and which are not
- Etc.
Typical roles and responsibilities
For AI governance to work, you’ll need to include different roles and, ultimately, all of your employees. Below are some typical roles with regard to AI governance.
Senior management needs to set the overall direction for AI in general and for AI governance: how AI needs to support the company strategy, which strategic objectives need to be achieved, etc. They also need to provide the resources needed for AI governance.
An AI officer, or someone else who is in charge of AI governance (e.g., CIO, CTO, or similar), must coordinate activities related to the management of AI systems, report to senior management, etc.
Middle management should participate in the creation of AI policies and procedures, especially if they are involved in privacy, cybersecurity, legal, data, or similar areas; they also need to make sure that all of those rules are implemented in practice and complied with on a day-to-day basis.
All employees need to comply with whatever AI rules the company has defined.
Where to start?
All of these risks, controls, documents, roles, and responsibilities sound like a lot, especially if you have several AI systems to work with.
This is why you should use a framework for handling AI governance. One such framework is ISO 42001, an international standard that defines the requirements for an AI management system (AIMS) — in other words, for AI governance.
ISO 42001 defines how to perform risk assessment, which documents to write, what the roles and responsibilities should be, etc.; it also provides a list of 38 controls that you can implement. In other words, it clarifies how AI governance as a whole needs to be implemented.
Just like ISO 27001 became the most popular framework for cybersecurity, it is predicted that ISO 42001 will become a mainstream framework for AI governance in the next couple of years.
To learn about the details of ISO 42001, sign up for this free ISO 42001 Foundations Course — it will give you a detailed overview of each clause from this AI governance standard together with practical examples of how to implement them.
Dejan Kosutic