To manage your AI systems properly, you first have to understand some basic concepts; otherwise, your AI governance will not work. This article presents 13 fundamental AI concepts that are relevant for AI governance.
To help you understand AI better, this article presents concepts from three areas: (1) general AI governance concepts, (2) key elements of AI, and (3) main risks for AI systems.
General AI governance concepts
Let’s start with some general concepts that are important for AI governance.
AI trustworthiness is the ultimate goal you want to achieve with AI systems through AI governance. Sometimes the goals of safe, responsible, and ethical AI systems are also mentioned, but those can all be included under trustworthiness. AI trustworthiness is essentially the ability of stakeholders to verify that the results of AI systems meet their expectations.
AI objectives or principles are high-level directives that you want your AI governance to comply with — those could include accountability, environmental impact, fairness, maintainability, privacy and security, robustness, safety, transparency and explainability, human oversight, etc.
AI risk assessment & controls are at the core of how AI governance works: you need to identify which potentially harmful events (risks) could happen because of AI systems, and then define appropriate measures (controls) to reduce those risks.
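As a simple illustration, a risk assessment can be thought of as a small register that links each identified risk to its likelihood, impact, and planned controls. The sketch below is purely hypothetical: the risks, scores, and controls are made up, and real assessments use whatever scales and criteria your organization defines.

```python
# Hypothetical mini risk register for an AI system (illustrative values only)
risks = [
    {"risk": "Biased hiring recommendations", "likelihood": 3, "impact": 4,
     "controls": ["Bias testing before release", "Human review of rejections"]},
    {"risk": "Leakage of personal data via prompts", "likelihood": 2, "impact": 5,
     "controls": ["Prompt redaction", "Access restrictions on logs"]},
]

# A common (simplified) way to prioritize: risk level = likelihood x impact
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    level = r["likelihood"] * r["impact"]
    print(f"{r['risk']}: level {level}, controls: {', '.join(r['controls'])}")
```

The point is not the numbers themselves, but the discipline: every risk gets assessed, prioritized, and tied to concrete controls.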
Essential AI concepts
Here are several concepts that explain the key elements of AI.
An AI system is an IT system that generates various types of outputs, such as answers, forecasts, and decisions, in text, video, audio, and other formats, typically based on requests made by humans.
Generative AI (GenAI) is any AI system that creates new content: text, images, audio, video, code, designs, etc. Different types of generative AI exist: large language models, diffusion models, video models, audio models, multimodal models, code generation models, etc.
A large language model (LLM) is a model trained on a vast amount of text and is designed for natural language processing tasks, especially language generation. Each major chatbot, like ChatGPT, Claude, Gemini, and others, has an LLM behind it as an engine that enables it to speak to you.
Machine learning (ML) is the process of optimizing the parameters of large language models (or other AI models) using the processing power of computer chips. A huge amount of processing power is needed to tune the model’s behavior so that its outputs are acceptable; for example, so that it answers in the language you speak, not in some other language.
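To make “optimizing parameters” concrete, here is a deliberately tiny sketch: a model with just two parameters is adjusted step by step so that its outputs get closer to the training examples. The numbers are made up for illustration; real models do the same thing with billions of parameters and specialized hardware.

```python
# Tiny illustration of "training": adjust parameters w, b so that
# prediction = w * x + b gets closer to the known answers y.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # made-up training examples
w, b = 0.0, 0.0          # parameters start with no knowledge
learning_rate = 0.05

for step in range(1000):
    # Measure how wrong the current parameters are, then nudge them
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # approaches 2 and 1, the pattern hidden in the data
```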
Training data is the information used to teach an AI model how to recognize patterns and generate outputs. For LLMs, the built-in training data comes from massive datasets collected and processed by the model creator before release, and it shapes the model’s general knowledge and behavior. User-provided training data (prompts, documents, fine-tuning datasets) is added later and influences how the model performs for a specific organization or task, without changing the underlying original model unless explicitly fine-tuned.
AI inference is the conclusion that an AI system draws from data and reasoning; in practice, this is about processing any request sent to the AI. Since a huge amount of processing power is needed, specialized data centers are being built around the world to process and answer all the prompts that people enter. Of course, inference is not only about answering prompts; it also covers the processing of any other AI activity, including activities of AI agents.
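As a rough sketch of what inference looks like in practice: a trained model is loaded once and then repeatedly asked to process requests. This example assumes the open-source Hugging Face transformers library and a small, publicly available model (gpt2); a commercial chatbot works the same way conceptually, just at a vastly larger scale.

```python
# Minimal sketch of inference: load a trained model, then process a request.
# Requires the "transformers" library and downloads a small open model (gpt2).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")   # load the trained model once
result = generator("AI governance is important because", max_new_tokens=20)
print(result[0]["generated_text"])                       # inference: the model's output
```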
An AI agent is a part of an AI system that perceives its environment and takes actions autonomously to achieve its goals; in other words, it does not need a human to trigger each activity, because it can act on its own. It is predicted that AI agents will take over many repetitive tasks, and perhaps some non-repetitive tasks as well.
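A minimal way to picture an agent is a loop that observes, decides, and acts without waiting for a human prompt each time. The example below is a made-up toy (an agent working through a queue of support tickets), not a real agent framework.

```python
import time

# Toy "agent" loop: it perceives its environment (a ticket queue),
# decides what to do, and acts on its own -- no human trigger per action.
tickets = ["Reset my password", "Invoice question", "Reset my password"]

def perceive():
    return tickets.pop(0) if tickets else None

def decide(ticket):
    return "auto_reply" if "password" in ticket.lower() else "escalate_to_human"

def act(ticket, action):
    print(f"Ticket '{ticket}' -> {action}")

while True:
    ticket = perceive()
    if ticket is None:
        break            # nothing left to do
    act(ticket, decide(ticket))
    time.sleep(0.1)      # in reality, the agent would keep watching for new work
```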
Main risks for AI systems
Lastly, let’s look at some very common risks related to AI systems.
Bias is when an AI system systematically treats certain individuals or groups of people differently from others. An example could be an AI system for hiring new employees that prefers men over women.
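One very simple way to spot this kind of bias is to compare how often each group receives a positive outcome. The hiring data below is fabricated purely to show the calculation.

```python
# Fabricated hiring decisions: (group, hired?) -- purely illustrative
decisions = [("men", True), ("men", True), ("men", False), ("men", True),
             ("women", True), ("women", False), ("women", False), ("women", False)]

def selection_rate(group):
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_men, rate_women = selection_rate("men"), selection_rate("women")
print(f"Men hired: {rate_men:.0%}, women hired: {rate_women:.0%}")
# A large gap between the rates is a signal to investigate the system for bias.
print(f"Ratio (women/men): {rate_women / rate_men:.2f}")
```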
AI hallucination is a response generated by AI that contains false or misleading information presented as fact. The problem is that chatbots present such false or misleading information very convincingly, so it is very hard to tell whether it is true or not.
Data leakage and privacy. AI systems can leak data when sensitive information used in training or prompts becomes exposed through model outputs or unauthorized access. This creates privacy risks, because personal or confidential data could become available to attackers or unintended users.
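One common control against leaking personal data through prompts is to redact obvious identifiers before they reach the model. The sketch below only catches email addresses and is a deliberately simplified illustration, not a complete privacy solution.

```python
import re

# Simplified control: strip email addresses from a prompt before sending it to an AI system.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def redact(prompt: str) -> str:
    return EMAIL_PATTERN.sub("[REDACTED EMAIL]", prompt)

prompt = "Please summarize the complaint from jane.doe@example.com about her invoice."
print(redact(prompt))
# -> "Please summarize the complaint from [REDACTED EMAIL] about her invoice."
```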
How to approach AI governance?
It might seem overwhelming to take all of these concepts into account and manage AI systems in a systematic way. However, this is where ISO 42001, the international standard for AI management systems (i.e., AI governance), is of great help.
ISO 42001 defines how to set a clear direction for AI governance, how to perform risk assessment, which documents to write, what the roles and responsibilities are, and how to manage AI systems throughout their lifecycle. In other words, it clarifies how AI governance as a whole needs to be implemented.
To learn about the details of ISO 42001, sign up for this free ISO 42001 Foundations Course — it will give you a detailed overview of each clause from this AI governance standard together with practical examples of how to implement them.
Dejan Kosutic