Create the foundation that enables management, monitoring, and control of all AI activities within the organization. The AI governance framework will allow you to define an AI risk management approach and a methodology for managing and monitoring AI/ML models in production.
In recent years, following technological breakthroughs in the development of machine learning (ML) models and the management of large volumes of data, organizations have been scaling their use of artificial intelligence (AI) technologies.
The use of AI and ML has gained momentum as organizations evaluate the potential of AI to enhance the customer experience, improve operational efficiency, and automate business processes. Growing applications of AI have reinforced concerns about the ethical, fair, and responsible use of technology that assists or replaces human decision-making. Implementing AI systems requires careful management of the AI lifecycle and governance of data and machine learning models to prevent unintended harm not only to an organization’s brand reputation but also, more importantly, to workers, individuals, and society. When adopting AI, it is important to have strong ethical and risk management frameworks surrounding its use.
“Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.” (World Economic Forum)
Source of data: OECD.AI (2021), powered by EC/OECD (2021), database of national AI policies, accessed on 7/09/2022, https://oecd.ai.
To ensure responsible, transparent, and ethical AI systems, organizations will need to review existing risk control frameworks and update them to include AI risk management and impact assessment frameworks and processes.
As ML and AI technologies are constantly evolving, AI governance and AI risk management frameworks will need to evolve with them to ensure the appropriate safeguards and controls remain in place. This applies not only to the machine learning models and AI systems custom-built by the organization’s data science and AI teams but also to AI-powered vendor tools and technologies. Vendors should be able to explain how AI is used in their products, how each model was trained, and what data was used to train it. AI governance enables management, monitoring, and control of all AI activities within an organization.
Machine learning systems learn from experience rather than explicit instructions. They learn patterns from data, then analyze new inputs and make predictions based on past behavior and the patterns learned.
Artificial intelligence is a combination of technologies and can include machine learning. AI systems perform tasks that mimic human intelligence, such as learning from experience and problem solving. Most importantly, AI makes its own decisions without human intervention.
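To make the distinction concrete, the following minimal sketch (illustrative only, not part of the blueprint; the data and labels are hypothetical) shows a system that is given no explicit rules: it derives a pattern from labeled examples and uses that pattern to classify new cases.

```python
# A minimal nearest-centroid classifier: no rules are hand-coded; the
# "knowledge" (one centroid per class) is learned from the training data.

def fit(points, labels):
    """Learn one centroid (the mean point) per class from the training data."""
    centroids = {}
    for label in set(labels):
        members = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids

def predict(centroids, point):
    """Assign the class whose learned centroid is closest to the new point."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(centroids[label], point))

# Past observations (two features each) with known outcomes.
X = [(1.0, 1.2), (0.8, 1.0), (5.0, 5.1), (5.2, 4.9)]
y = ["low_risk", "low_risk", "high_risk", "high_risk"]

model = fit(X, y)
print(predict(model, (0.9, 1.1)))  # → low_risk
print(predict(model, (5.1, 5.0)))  # → high_risk
```

The point for governance is that the model's behavior is determined by the training data, which is why data quality, lineage, and bias controls sit at the center of an AI governance framework.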
We use the definition of data ethics by Open Data Institute: “Data ethics is a branch of ethics that considers the impact of data practices on people, society and the environment. The purpose of data ethics is to guide the values and conduct of data practitioners in data collection, sharing and use.”
Algorithmic or machine bias refers to systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Algorithmic bias is not only a technical problem. It’s a social and political problem, and in the context of implementing AI for business benefits, it’s a business problem.
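While the root causes of bias are social and organizational, detecting it in a deployed model is something a governance process can operationalize. The sketch below (illustrative only; the group names and loan decisions are hypothetical) computes one widely used fairness metric, the demographic parity difference: the gap between groups in the rate of favorable outcomes.

```python
# Demographic parity difference: the gap in favorable-outcome rates across
# groups. A value of 0.0 means all groups receive favorable outcomes at the
# same rate; larger values flag a potential bias for human review.

def selection_rate(outcomes):
    """Share of favorable (True) outcomes in a list of model decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(outcomes_by_group):
    """Maximum gap in selection rate across groups."""
    rates = [selection_rate(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions, split by applicant group.
decisions = {
    "group_a": [True, True, True, False],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
print(demographic_parity_difference(decisions))  # → 0.5
```

A metric like this does not decide whether a disparity is acceptable; it surfaces the disparity so that the governance organization can investigate and decide.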
Download the Mitigate Machine Bias blueprint for a detailed discussion of bias, fairness, and transparency in AI systems.
The AI system is considered trustworthy when people understand how the technology works and when we can assess that it’s safe and reliable. We must be able to trust the output of the system and understand how the system was designed, what data was used to train it, and how it was implemented. Explainable AI, sometimes abbreviated as XAI, refers to the ability to explain how an AI model makes predictions, its anticipated impact, and its potential biases. Transparency means communicating with and empowering users by sharing information internally and with external stakeholders, including beneficiaries and people impacted by the AI-powered product or service.
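For a simple linear scoring model, an explanation can be as direct as showing each feature's signed contribution to the score. The sketch below is illustrative only (the feature names and weights are hypothetical); real XAI tooling applies the same idea, attributing a prediction to its inputs, to far more complex models.

```python
# For a linear model, score = sum(weight * feature value), so each feature's
# contribution is simply weight * value. Listing contributions by magnitude
# answers the basic XAI question: which inputs drove this prediction?

weights = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}

def explain(features):
    """Return each feature's signed contribution to the model score."""
    return {name: weights[name] * value for name, value in features.items()}

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 2.0}
contributions = explain(applicant)
score = sum(contributions.values())

# Report contributions from most to least influential.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"score: {score:.2f}")
```

An explanation in this form can be shared with the stakeholders the paragraph above mentions, turning transparency from a principle into a reviewable artifact.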
68% [of Canadians] are concerned they don’t understand the technology well enough to know the risks. 77% say they are concerned about the risks AI poses to society. (TD, 2019)
Components of the AI governance framework:
- Monitoring
- Tools & Technologies
- Model Governance
- Organization: structure, roles, and responsibilities of the AI governance organization
- Operating Model
- Risk and Compliance: policies and procedures to support implementation of AI governance