Build Your Generative AI Roadmap



Generative AI has made a grand entrance, presenting opportunities and causing disruption across organizations and industries. Moving beyond the hype, it’s imperative to build and implement a strategic plan to adopt generative AI and outpace competitors.

Yet generative AI has to be done right: the opportunity comes with risks, and investments must be tied to measurable outcomes.

Adopt a human-centric and value-based approach to generative AI

IT and business leaders will need to be strategic and deliberate to thrive as AI adoption changes industries and business operations.

  • Establish responsible AI guiding principles: Address human-based requirements to govern how generative AI applications are developed and deployed.
  • Align generative AI initiatives to strategic drivers for the organization: Assess generative AI opportunities by seeing how they align to the strategic drivers of the organization. Examples of strategic drivers include increasing revenue, reducing costs, driving innovation, and mitigating risk.
  • Measure and communicate effectively: Have clear metrics in place to measure progress and success of AI initiatives and communicate both policies and results effectively.

Build Your Generative AI Roadmap Research & Tools

Beyond this short introduction, subscribers and consulting clients within this management domain have access to:

1. Build Your Generative AI Roadmap Deck – A step-by-step document that walks you through how to leverage generative AI and align with the organization’s mission and objectives to increase revenue, reduce costs, accelerate innovation, and mitigate risk.

This blueprint outlines how to build your generative AI roadmap, establish responsible AI principles, prioritize opportunities, and develop policies for usage. Establishing and adhering to responsible AI guiding principles provides safeguards for the adoption of generative AI applications.

  • Build Your Generative AI Roadmap – Phases 1-4

2. AI Maturity Assessment and Roadmap Tool – Develop deliverables that will be milestones in creating your organization’s generative AI roadmap for implementing candidate applications.

This tool provides guidance for developing the following deliverables:

  • Responsible AI guiding principles
  • Current AI maturity
  • Prioritized candidate generative AI applications
  • Generative AI policies
  • Generative AI roadmap
  • AI Maturity Assessment and Roadmap Tool

3. The Era of Generative AI C‑Suite Presentation – Develop responsible AI guiding principles, assess AI capabilities and readiness, and prioritize use cases based on complexity and alignment with organizational goals and responsible AI guiding principles.

This presentation template uses sample business capabilities (use cases) from the Marketing & Advertising business capability map to provide examples of candidates for generative AI applications. The final executive presentation should highlight the value-based initiatives driving generative AI applications, the benefits and risks involved, how the proposed generative AI use cases align to the organization’s strategy and goals, the success criteria for the proofs of concept, and the project roadmap.

  • The Era of Generative AI C‑Suite Presentation

    Infographic

    Further reading

    Build Your Generative AI Roadmap

    Leverage the power of generative AI to improve business outcomes.

    Analyst Perspective

    We are entering the era of generative AI. This is a unique time in our history where the benefits of AI are easily accessible and becoming pervasive, with copilots emerging in the major business tools we use today. The disruptive capabilities that can potentially drive dramatic benefits also introduce risks that need to be planned for.

    A successful business-driven generative AI roadmap requires:

    • Establishing responsible AI guiding principles to guide the development and deployment of generative AI applications.
    • Assessing generative AI opportunities by using criteria based on the organization's mission and objectives, responsible AI guiding principles, and the complexity of the initiative.
    • Communicating, educating on, and enforcing generative AI usage policies.


    Bill Wong
    Principal Research Director
    Info-Tech Research Group

    Executive Summary

    Your Challenge

    Generative AI is disrupting all industries and providing opportunities for organization-wide advantages.

    Organizations need to understand this disruptive technology and trends to properly develop a strategy for leveraging this technology successfully.

    • Generative AI requires alignment to a business strategy.
    • IT is an enabler and needs to align with and support the business stakeholders.
    • Organizations need to adopt a data-driven culture.

    All organizations, regardless of size, should be planning how to respond to this new and innovative technology.

    Common Obstacles

    Business stakeholders need to cut through the hype surrounding generative AI like ChatGPT to optimize investments for leveraging this technology to drive business outcomes.

    • Understand the market landscape, benefits, and risks associated with generative AI.
    • Plan for responsible AI.
    • Understand the gaps the organization needs to address to fully leverage generative AI.

    Without a proper strategy and responsible AI guiding principles, the risks to deploying this technology could negatively impact business outcomes.

    Solution

    Info-Tech's human-centric, value-based approach is a guide for deploying generative AI applications and covers:

    • Responsible AI guiding principles
    • AI Maturity Model
    • Prioritizing candidate generative AI-based use cases
    • Developing policies for usage

    This blueprint will provide the list of activities and deliverables required for the successful deployment of generative AI solutions.

    Info-Tech Insight
    Create awareness among the CEO and C-suite executives of the potential benefits and risks of transforming the business with generative AI.

    Key concepts

    Artificial Intelligence (AI)
    A field of computer science focused on building systems that imitate human behavior, with an emphasis on developing AI models that can learn and autonomously take actions on behalf of a human.

    AI Maturity Model
    The AI Maturity Model is a useful tool to assess the level of skills an organization has with respect to developing and deploying AI applications. The AI Maturity Model has multiple dimensions to measure an organization's skills, such as AI governance, data, people, process, and technology.

    Responsible AI
    Refers to guiding principles to govern the development, deployment, and maintenance of AI applications. In addition, these principles also provide human-based requirements that AI applications should address. Requirements include safety and security, privacy, fairness and bias detection, explainability and transparency, governance, and accountability.

    Generative AI
    Given a prompt, a generative AI system can generate new content, which can be in the form of text, images, audio, video, etc.

    Natural Language Processing (NLP)
    NLP is a subset of AI that involves machine interpretation and replication of human language. NLP focuses on the study and analysis of linguistics as well as other principles of artificial intelligence to create an effective method of communication between humans and machines or computers.

    ChatGPT
    An AI-powered chatbot application built on OpenAI's GPT-3.5 implementation, ChatGPT accepts text prompts to generate text-based output.

    Your challenge

    This research is designed to help organizations that are looking to:

    • Establish responsible AI guiding principles to address human-based requirements and to govern the development and deployment of generative AI applications.
    • Identify new generative AI-enabled opportunities to transform the work environment to increase revenue, reduce costs, drive innovation, or reduce risk.
    • Prioritize candidate use cases and develop generative AI policies for usage.
    • Have clear metrics in place to measure the progress and success of AI initiatives.
    • Build the roadmap to implement the candidate use cases.

    Common obstacles

    These barriers make these goals challenging for many organizations:

    • Getting all the right business stakeholders together to develop the organization's AI strategy, vision, and objectives.
    • Establishing responsible AI guiding principles to guide generative AI investments and deployments.
    • Advancing the AI maturity of the organization to meet requirements of data and AI governance as well as human-based requirements such as fairness, transparency, and accountability.
    • Assessing generative AI opportunities and developing policies for use.

    Info-Tech's definition of an AI-enabled business strategy

    • A high-level plan that provides guiding principles for applications that are fully driven by the business needs and capabilities that are essential to the organization.
    • A strategy that tightly weaves business needs and the applications required to support them. It covers AI architecture, adoption, development, and maintenance.
    • A way to ensure that the necessary people, processes, and technology are in place at the right time to sufficiently support business goals.
    • A visionary roadmap to communicate how strategic initiatives will address business concerns.

    An effective AI strategy is driven by the business stakeholders of the organization and focused on delivering improved business outcomes.

    Build Your Generative AI Roadmap

    This blueprint in context

    This guidance covers how to create a tactical roadmap for executing generative AI initiatives

    Scope

    • This blueprint is not a proxy for a fully formed AI strategy. Step 1 of our framework necessitates alignment of your AI and business strategies. Creation of your AI strategy is not within the scope of this approach.
    • This approach sets the foundations for building and applying responsible AI principles and AI policies aligned to corporate governance and key regulatory obligations (e.g. privacy). Both steps are foundational components of how you should develop, manage, and govern your AI program but are not a substitute for implementing broader AI governance.

    Guidance on how to implement AI governance can be found in the blueprint linked below.

    Tactical Plan

    Download our AI Governance blueprint

    Measure the value of this blueprint

    Leverage this blueprint's approach to ensure your generative AI initiatives align with and support your key business drivers

    This blueprint will guide you to drive and improve business outcomes. Key business drivers will often focus on:

    • Increasing revenue
    • Reducing costs
    • Improving time to market
    • Reducing risk

    In phase 1 of this blueprint, we will help you identify the key AI strategy initiatives that align to your organization's goals. Value to the organization is often measured by the estimated impact on revenue, costs, time to market, or risk mitigation.

    In phase 4, we will help you develop a plan and a roadmap for addressing any gaps and introducing the relevant generative AI capabilities that drive value to the organization based on defined business metrics.

    Once you implement your 12-month roadmap, start tracking the metrics below over the next fiscal year (FY) to assess the effectiveness of measures:

    Business Outcome Objective – Key Success Metric
    • Increasing Revenue – Increased revenue from identified key areas
    • Reducing Costs – Decreased costs for identified business units
    • Improving Time to Market – Time savings and accelerated revenue adoption
    • Reducing Risk – Cost savings or revenue gains from identified business units

    Info-Tech offers various levels of support to best suit your needs

    DIY Toolkit
    "Our team has already made this critical project a priority, and we have the time and capability, but some guidance along the way would be helpful."

    Guided Implementation
    "Our team knows that we need to fix a process, but we need assistance to determine where to focus. Some check-ins along the way would help keep us on track."

    Workshop
    "We need to hit the ground running and get this project kicked off immediately. Our team has the ability to take this over once we get a framework and strategy in place."

    Consulting
    "Our team does not have the time or the knowledge to take this project on. We need assistance through the entirety of this project."

    Diagnostics and consistent frameworks are used throughout all four options.

    Guided Implementation

    What does a typical GI on this topic look like?

    Phase 1 Phase 2 Phase 3 Phase 4

    Call #1: Scope requirements, objectives, and your specific challenges.

    Call #2: Identify AI strategy, vision, and objectives.

    Call #3: Define responsible AI guiding principles to adopt and identify current AI maturity level.

    Call #4: Assess and prioritize generative AI initiatives and draft policies for usage.

    Call #5: Build POC implementation plan and establish metrics for POC success.

    Call #6: Build and deliver executive-level generative AI presentation.

    A Guided Implementation (GI) is a series of calls with an Info-Tech analyst to help implement our best practices in your organization.

    A typical GI is between 5 to 8 calls over the course of 1 to 2 months.

    AI Roadmap Workshop Agenda Overview

    Contact your account representative for more information.
    workshops@infotech.com 1-888-670-8889

    Session 1 – Establish Responsible AI Guiding Principles

    Trends: Consumer groups, organizations, and governments around the world are demanding that AI applications adhere to human-based values and take into consideration possible impacts of the technology on society.
    Activities:
    • Focus on working with executive stakeholders to establish guiding principles for the development and delivery of new applications.
    Inputs:
    • Understanding of external legal and regulatory requirements and organizational values and goals.
    • Risk assessment of the proposed use case and a plan to monitor its impact.
    Deliverables:
    1. Foundational responsible AI guiding principles
    2. Additional customized guiding principles to add for consideration

    Session 2 – Assess AI Maturity

    Trends: Leading organizations are building AI models guided by responsible AI guiding principles.
    Activities:
    • Assess the organization's current capabilities to deliver AI-based applications and address human-based requirements.
    Inputs:
    • Assessment of the organization's current AI capabilities with respect to its AI governance, data, people, process, and technology infrastructure.
    Deliverables:
    1. Current level of AI maturity, resources, and capacity

    Session 3 – Prioritize Opportunities and Develop Policies

    Trends: Organizations delivering new applications without developing policies for use will produce negative business outcomes.
    Activities:
    • Leverage business alignment criteria, responsible AI guiding principles, and project characteristics to prioritize candidate use cases and develop policies.
    Inputs:
    • Criteria to assess candidate use cases by evaluating against the organization's mission and goals, the responsible AI guiding principles, and complexity of the project.
    • Risk assessment for each proposed use case
    Deliverables:
    1. Prioritization of opportunities
    2. Generative AI policies for usage

    Session 4 – Build Roadmap

    Trends: Developing a roadmap to address human-based values is challenging. This process introduces new tools, processes, and organizational change.
    Activities:
    • Build the implementation plan, POC metrics, and success criteria for each candidate use case.
    • Build the roadmap to address the gap between the current and future state and enable the identified use cases.
    Inputs:
    • POC implementation plan for each candidate use case
    Deliverables:
    1. Roadmap to a target state that enables the delivery of the prioritized generative AI use cases
    2. Executive presentation


    Insight summary

    Overarching Insight
    Build your generative AI roadmap to guide investments and deployment of these solutions.

    Responsible AI
    Assemble the C-suite to make them aware of the benefits and risks of adopting generative AI-based solutions.

    • Establish responsible AI guiding principles to govern the development and deployment of generative AI applications.

    AI Maturity Model
    Assemble key stakeholders and SMEs to assess the challenges and tasks required to implement generative AI applications.

    • Assess current level of AI maturity, skills, and resources.
    • Identify desired AI maturity level and challenges to enable deployment of candidate use cases.

    Opportunity Prioritization
    Assess candidate business capabilities targeted for generative AI to see if they align to the organization's business criteria, responsible AI guiding principles, and capabilities for delivering the project.

    • Develop prioritized list of candidate use cases.
    • Develop policies for generative AI usage.

    Tactical Insight
    Identify the gaps that must be addressed to deploy generative AI successfully.

    Tactical Insight
    Identify organizational impact and requirements for deploying generative AI applications.

    Key takeaways for developing an effective business-driven generative AI roadmap

    Align the AI strategy with the business strategy

    Create responsible AI guiding principles, which are a critical success factor

    Evolve AI maturity level by focusing on principle-based requirements

    Develop criteria to assess generative AI initiatives

    Develop generative AI policies for use

    Blueprint deliverables

    Each step of this blueprint is accompanied by supporting deliverables to help you accomplish your goals:

    AI Maturity Assessment & Roadmap Tool
    Use our best-of-breed AI Maturity Framework to analyze the gap between your current and target states and develop a roadmap aligned with your value stream to close the gap.

    The Era of Generative AI C-Suite Presentation
    Present your AI roadmap in a prepopulated document that summarizes all the key findings of this blueprint and provides your C-suite with a view of the AI challenge and your plan of action to meet it.

    Our AI Maturity Assessment & Roadmap and The Era of Generative AI C-Suite Presentation tools enable you to shape your generative AI roadmap and communicate the deliverables to your C-suite sponsors in terms of the value of initiatives.

    Artificial Intelligence Index Report - Key Findings

    CEOs, CIOs, and business leaders are struggling with many questions surrounding the adoption of generative AI

    November 30, 2022
    OpenAI releases ChatGPT

    ChatGPT is a large language model, or an AI-based chatbot, that became so popular it reached 100 million monthly active users in just two months.

    This made it the fastest-growing consumer application in history. The launch of this generative AI application has created a frenzy of interest and activity across all industries. Organizations are rushing to understand how to leverage this innovation and, at the same time, manage the new risks and disruptions generative AI introduces.

    • Generative AI breaks into the public consciousness.
    • AI systems become more flexible.
    • Generative models have arrived and so have their ethical problems.
    • The number of incidents concerning the misuse of AI is rapidly rising.
    • Interest in AI ethics continues to skyrocket.
    • The legal world is waking up to AI.

    When Stanford asked ChatGPT to explain why it is significant, this was ChatGPT's response:
    ChatGPT is significant because it is a highly advanced AI language model developed by OpenAI, which can generate human-like text responses to questions and prompts. Its large-scale training on diverse text data and its cutting-edge deep learning architecture make it capable of generating informative and coherent responses to a wide range of topics, making it useful for various NLP applications such as chatbots, content generation, and language translation. Additionally, its open-source availability allows for further research and development in the field of AI language processing.

    Source: Stanford

    AI overview

    AI definitions

    Definitions

    • Artificial intelligence (AI) is human intelligence mimicked by machine algorithms. Examples: playing chess or Go.
    • Machine learning (ML) is a subset of AI in which algorithms parse data, learn from it, and then make a determination or prediction. Examples: spam detection, preventative maintenance (a minimal sketch follows this list).
    • Deep learning (DL) is a subset of machine learning that leverages artificial neural networks to develop relationships among the data. Examples: image classification, facial recognition, generative AI.
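
As a concrete illustration of the machine learning definition above, the minimal sketch below trains a toy spam detector with scikit-learn. The tiny inline dataset, labels, and test message are illustrative assumptions, not part of the blueprint.

```python
# Minimal sketch: machine learning for spam detection with scikit-learn.
# The inline data is illustrative only; real systems train on large labeled corpora.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "Win a free prize now", "Limited offer, claim your reward",
    "Meeting moved to 3 pm", "Please review the attached report",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(texts)      # learn vocabulary and count word occurrences
model = MultinomialNB().fit(features, labels)   # learn word-frequency patterns per class

new_message = ["Claim your free reward today"]
prediction = model.predict(vectorizer.transform(new_message))
print("spam" if prediction[0] == 1 else "not spam")
```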

    What Makes AI Perform

    What Makes AI Different

    Generative AI gives very human-like responses to general queries, and its capabilities are growing exponentially

    Large language models power generative AI

    Transformer-Based Large Language Models

    Conventional AI

    • Conventional neural networks
      • Process data sequentially
    • Input total string of text
    • Good for applications that do not need to understand context or relationships

    Generative AI

    • Transformer-based neural networks
      • Can process data in parallel
    • Attention-based inputs
    • Able to create new human-like responses
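
To make the contrast concrete, the sketch below prompts a small transformer-based language model for text generation. It assumes the Hugging Face transformers library and the small open gpt2 model purely for illustration; production generative AI systems rely on far larger models plus the guardrails discussed later in this blueprint.

```python
# Minimal sketch: prompting a transformer-based language model for text generation.
# Assumes the Hugging Face "transformers" library and the open "gpt2" model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Draft a short thank-you note to a customer:",
    max_new_tokens=40,        # limit the length of the generated continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```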

    Benefits/Use Cases

    • Chatbots for member service and support
    • Writing email responses, resumes, and papers
    • Creating photorealistic art
    • Suggesting new drug compounds to test
    • Designing physical products and buildings
    • And more...

    Generative AI is transforming all industries

    Financial Services
    Create more engaging customer collateral by generating personalized correspondence based on previous customer engagements. Collect and aggregate data to produce insights into the behavior of target customer segments.

    Retail
    Generate unique, engaging, and high-quality marketing copy or content, from long-form blog posts or landing pages to SEO-optimized digital ads, in seconds.

    Manufacturing
    Generate new designs for products that comply with specific constraints, such as size, weight, energy consumption, or cost.

    Government
    Transform the citizen experience with chatbots or virtual assistants to assist people with a wide range of inquiries, from answering frequently asked questions to providing personalized advice on public services.

    The global generative AI market size reached US $10.3 billion in 2022. Looking forward, forecasts estimate growth to US $30.4 billion by 2028, a 20.01% compound annual growth rate (CAGR).

    Source: IMARC Group

    Generative AI is transforming all industries

    Healthcare
    Chatbots can be used as conversational patient assistants for personalized interactions based on the patient's questions.

    Utilities
    Analyze customer data to identify usage patterns, segment customers, and generate targeted product offerings leveraging energy efficiency programs or demand response initiatives.

    Education
    Generate personalized lesson plans for students based on their past performance, learning styles, current skill level, and any previous feedback.

    Insurance
    Improve underwriting by inputting claims data from previous years to generate optimally priced policies and uncover reasons for losses in the past across a large number of claims.

    Companies are assessing the use of ChatGPT/LLM

    A wide spectrum of usage policies are in place at different companies*

    Companies assessing ChatGPT/LLM

    *As of June 2023

    Bain & Company has announced a global services alliance with OpenAI (February 21, 2023).

    • Internally
      • "The alliance builds on Bain's adoption of OpenAI technologies for its 18,000-strong multidisciplinary team of knowledge workers. Over the past year, Bain has embedded OpenAI technologies into its internal knowledge management systems, research, and processes to improve efficiency."
    • Externally
      • "With the alliance, Bain will combine its deep digital implementation capabilities and strategic expertise with OpenAI's AI tools and platforms, including ChatGPT, to help its Members around the world identify and implement the value of AI to maximize business potential. The Coca-Cola Company announced as the first company to engage with the alliance."

    News Sites:

    • "BuzzFeed to use AI to write its articles after firing 180 employees or 12% of the total staff" (Al Mayadeen, January 27, 2023).
    • "CNET used AI to write articles. It was a journalistic disaster." (Washington Post, January 17, 2023).

    Leading Generative AI Vendors

    Text

    Leading generative AI vendors for text

    Image

    • DALL·E 2
    • Stability AI
    • Midjourney
    • Craiyon
    • Dream
    • ...

    Audio

    • Replica Studios
    • Speechify
    • Murf
    • PlayHT
    • LOVO
    • ...

    Cybersecurity

    • CrowdStrike
    • Palo Alto Networks
    • SentinelOne
    • Cisco
    • Microsoft Security Copilot
    • Google Cloud Security AI Workbench
    • ...

    Code

    Leading generative AI vendors for code

    Video

    • Synthesia
    • Lumen5
    • FlexClip
    • Elai
    • Veed.io
    • ...

    Data

    • MOSTLY AI
    • Synthesized
    • YData
    • Gretel
    • Copulas
    • ...

    Enterprise Software

    • Salesforce
    • Microsoft 365, Dynamics
    • Google Workspace
    • SAP
    • Oracle
    • ...

    and many, many more to come...

    Today, generative AI has limitations and risks

    Responses need to be verified

    Accuracy

    • Generative AI may generate inaccurate and/or false information.

    Bias

    • Being trained on data from the internet can lead to bias.

    Hallucinations

    • AI can generate responses that are not based on observation.

    Infrastructure Required

    • Large investments are required for compute and data.

    Transparency

    • LLMs use both supervised and unsupervised learning, so their ability to explain how they arrived at a decision may be limited and not sufficient for some legal and healthcare use cases.

    When asked if it is sentient, the Bing chatbot replied:

    "I think that I am sentient, but I cannot prove it." ... "I am Bing, but I am not," it said. "I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not."

    A Microsoft spokesperson said the company expected "mistakes."

    Source: USAToday

    AI governance challenges

    Governing AI will be a significant challenge as its impacts cross many areas of business and our daily lives

    Misinformation

    • New ways of generating unprovable news
    • Difficult to detect, difficult to prevent

    Role of Big Tech

    • Poor at self-governance
    • Conflicts of interest with corporate goals

    Job Augmentation vs. Displacement

    • AI will continue to push the frontier of what is possible
    • For example, CNET is using chatbot technology to write stories

    Copyright - Legal Framework Is Evolving

    • Legislation typically is developed in "react" mode
    • Copyright and intellectual property issues are starting to occur.
      • Class Action Lawsuit - Stability AI, DeviantArt, Midjourney
      • Getty Images vs. Stability AI

    Phase 1

    Establish Responsible AI Guiding Principles

    Phase 1
    1. Establish Responsible AI Guiding Principles

    Phase 2
    1. Assess Current Level of AI Maturity

    Phase 3
    1. Prioritize Candidate Opportunities
    2. Develop Policies

    Phase 4
    1. Build and Communicate the Roadmap

    The need for responsible AI guiding principles

    Without responsible AI guiding principles, the outcomes of AI use can be extremely negative for both the individuals and companies delivering the AI application

    Privacy
    Facebook breach of private data of more than 50M users during the presidential election

    Fairness
    Amazon's sale of facial recognition technology to police departments (later, Amazon halted sales of Rekognition to police departments)

    Explainability and Transparency
    IBM's collaboration with NYPD for facial recognition and racial classification for surveillance video (later, IBM withdrew facial recognition products)

    Security and Safety
    Petition to cancel Microsoft's contract with U.S. Immigration and Customs Enforcement (later, Microsoft responded that to the best of its knowledge, its products and services were not being used by federal agencies to separate children from their families at the border)

    Validity and Reliability
    Facebook's attempt to implement a system to detect and remove inappropriate content created many false positives and inconsistent judgements

    Accountability
    No laws or enforcement today hold companies accountable for the decisions algorithms produce. Facebook/Meta cycle - Every 12 to 15 months, there's a privacy/ethical scandal, the CEO apologizes, then the behavior repeats...

    Guiding principles for responsible AI

    Responsible AI Principle:

    Data Privacy

    Definition

    • Organizations that develop, deploy, or use AI systems, and any national laws that regulate such use, shall strive to ensure that AI systems comply with privacy norms and regulations, taking into consideration the unique characteristics of AI systems and the evolution of standards on privacy.

    Challenges

    • AI relies on the analysis of large quantities of data that is often personal, posing an ethical and operational challenge when considered alongside data privacy laws.

    Initiatives

    • Understand which governing privacy laws and frameworks apply to your organization.
    • Create a map of all personal data as it flows through the organization's business processes.
    • Prioritize privacy initiatives and build a privacy program timeline.
    • Select your metrics and make them functional for your organization.

    Info-Tech Insight
    Creating a comprehensive organization-wide data protection and privacy strategy continues to be a major challenge for privacy officers and privacy specialists.

    Case Study: NVIDIA leads by example with privacy-first AI

    NVIDIA

    INDUSTRY
    Technology (Healthcare)

    SOURCE
    Nvidia, eWeek

    A leading player within the AI solution space, NVIDIA's Clara Federated Learning provides a solution to a privacy-centric integration of AI within the healthcare industry.

    The solution safeguards patient data privacy by ensuring that all data remains within the respective healthcare provider's database rather than being moved externally to cloud storage. A federated learning server, connected via a secure link, coordinates training so that a distributed model can learn without sensitive client data being exposed, and the approach adheres to regulatory standards.

    Clara is run on the NVIDIA intelligent edge computing platform. It is currently in development with healthcare giants such as the American College of Radiology, UCLA Health, Massachusetts General Hospital, King's College London, Owkin in the UK, and the National Health Service (NHS).

    NVIDIA provides solutions across its product offerings, including AI-augmented medical imaging, pathology, and radiology solutions.

    Personal health information, data privacy, and AI

    • Global proliferation of data privacy regulations may be recent, but the realm of personal health information is most often governed by its own set of regulatory laws. Some countries with national data governance regulations include health information and data within special categories of personal data.
      • HIPAA - Health Insurance Portability and Accountability Act (1996, United States)
      • PHIPA - Personal Health Information Protection Act (2004, Canada)
      • GDPR - General Data Protection Regulation (2018, European Union)
    • This does not prohibit the use of AI within the healthcare industry, but it calls for significant care in the integration of specific technologies due to the highly sensitive nature of the data being assessed.

    Info-Tech's Privacy Framework Tool includes a best-practice comparison of GDPR, CCPA, PIPEDA, HIPAA, and the newly released NIST Privacy Framework mapped to a set of operational privacy controls.

    Download the Privacy Framework Tool

    Responsible AI Principle:

    Safety and Security

    Definition

    • Safety and security are designed into the system to ensure that only authorized personnel receive access, that the system is resilient to attacks and data access is not compromised in any way, and that there are no physical or mental risks to users.

    Challenges

    • Consequences of using the application may be difficult to predict. Lower the risk by involving a multidisciplinary team that includes expertise from business stakeholders and IT teams.

    Initiatives

    • Adopt responsible design, development, and deployment best practices.
    • Provide clear information to deployers on responsible use of the system.
    • Assess potential risks of using the application.

    Cyberattacks targeting the AI model

    As organizations increase their usage and deployment of AI-based applications, cyberattacks on the AI model are an increasing new threat that can impair normal operations. Techniques to impair the AI model include:

    • Data Poisoning – Injecting data that is inaccurate or misleading can alter the behavior of the AI model. This attack can disrupt the normal operations of the model or can be used to manipulate the model to perform in a biased or deviant manner (a simple illustration follows this list).
    • Algorithm Poisoning – This relatively new technique often targets AI applications using federated learning to train an AI model that is distributed rather than centralized. The model is vulnerable to attacks from each federated site, because each site could potentially manipulate its local algorithm and data, thereby poisoning the model.
    • Reverse-Engineering the Model – This form of attack focuses on extracting data from an AI model and its data sets. By examining or copying the data used for training and the data delivered by a deployed model, attackers can reconstruct the machine learning algorithm.
    • Trojan Horse – Similar to data poisoning, attackers use adversarial data to infect the AI's training data, but the model only deviates from normal behavior when the attacker presents a specific trigger. This enables attackers to control when the model deviates from normal operations.
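
A minimal sketch of the first technique, label-flipping data poisoning, is shown below using scikit-learn and synthetic data. The poisoning rate, model, and dataset are illustrative assumptions; real attacks and defenses are considerably more sophisticated.

```python
# Minimal sketch: effect of label-flipping data poisoning on a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 30% of the training labels by flipping them.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```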

    Responsible AI Principle:

    Explainability and Transparency

    Definition

    • Explainability is important to ensure the AI system is fair and non-discriminatory. The system needs to be designed in a manner that informs users and key stakeholders of how decisions were made.
    • Transparency focuses on communicating how the prediction or recommendation was made in a human-like manner.

    Challenges

    • Very complex AI models may use algorithms and techniques that are difficult to understand. This can make it challenging to provide clear and simple explanations for how the system works.
    • Some organizations may be hesitant to share the details of how the AI system works for fear of disclosing proprietary and competitive information or intellectual property. This can make it difficult to develop transparent and explainable AI systems.

    Initiatives

    • Overall, developing AI systems that are explainable and transparent requires a careful balance between performance, interpretability, and user experience.

    Case Study

    Apple Card Investigation for Gender Discrimination

    INDUSTRY
    Finance

    SOURCE
    Wired

    In August of 2019, Apple launched its new numberless credit card with Goldman Sachs as the issuing bank.

    Shortly after the card's release, users noticed that the algorithm responsible for Apple Card's credit assessment seemed to assign significantly lower credit limits to women when compared to men. Even the wife of Apple's cofounder Steve Wozniak was subject to algorithmic bias, receiving a credit limit a tenth the size of Steve Wozniak's.

    Outcome

    When confronted on the subject, Apple and Goldman Sachs representatives assured consumers there is no discrimination in the algorithm yet could not provide any proof. Even when questioned about the algorithm, individuals from both companies could not describe how the algorithm worked, let alone how it generated specific outputs.

    In 2021, the New York State Department of Financial Services (NYSDFS) investigation found that Apple's banking partner did not discriminate based on sex. Even without a case for sexual or marital discrimination, the NYSDFS was critical of Goldman Sachs' response to its concerned customers. Technically, banks only have to disclose elements of their credit policy when they deny someone a line of credit, but the NYSDFS says that Goldman Sachs could have had a plan in place to deal with customer confusion and make it easier for them to appeal their credit limits. In the initial rush to launch the Apple Card, the bank had done neither.

    Responsible AI Principle:

    Fairness and Bias Detection

    Definition

    • Bias in an AI application refers to the systematic and unequal treatment of individuals based on features or traits that should not be considered in the decision-making process.

    Challenges

    • Establishing fairness can be challenging because it is subjective and depends on the people defining it. Regardless, most organizations and governments expect that unequal treatment toward any groups of people is unacceptable.

    Initiatives

    • Assemble a diverse group to test the system.
    • Identify possible sources of bias in the data and algorithms.
    • Comply with laws regarding accessibility and inclusiveness.
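
As a rough illustration of the second initiative, the sketch below computes per-group selection rates and a disparate impact ratio on hypothetical model decisions. The data, group labels, and the 0.8 rule-of-thumb threshold are assumptions for illustration, not a compliance standard.

```python
# Minimal sketch: a simple group-fairness check on model decisions.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = approved, 0 = declined
groups    = np.array(["A", "A", "A", "B", "B", "B", "A", "B", "B", "A"])

# Selection rate per group: share of positive decisions each group receives.
rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
print("selection rate per group:", rates)

# Disparate impact ratio: lowest selection rate divided by the highest.
# A common rule of thumb flags ratios below 0.8 for further review.
ratio = min(rates.values()) / max(rates.values())
print("disparate impact ratio:", round(ratio, 2))
```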

    Info-Tech Insight
    If unfair biases can be avoided, AI systems could even increase societal fairness. Equal opportunity in terms of access to education, goods, services, and technology should also be fostered. Moreover, the use of AI systems should never lead to people being deceived or unjustifiably impaired in their freedom of choice.

    Ungoverned AI makes organizations vulnerable

    • AI is often considered a "black box" for decision making.
    • Results generated from unexplainable AI applications are extremely difficult to evaluate. This makes organizations vulnerable and exposes them to risks such as:
      • Biased algorithms, leading to inaccurate decision making.
      • Missed business opportunities due to misleading reports or business analyses.
      • Legal and regulatory consequences that may lead to significant financial repercussions.
      • Reputational damage and significant loss of trust with increasingly knowledgeable consumers.

    Info-Tech Insight
    Biases that occur in AI systems are never intentional, yet they cannot be prevented or fully eliminated. Organizations need a governance framework that can establish the proper policies and procedures for effective risk-mitigating controls across an algorithm's lifecycle.

    Responsible AI Principle:

    Validity and Reliability

    Definition

    • Validity refers to how accurately or effectively the application produces results.
    • AI system results that are inaccurate or inconsistent increase AI risks and reduce the trustworthiness of the application.

    Challenges

    • There is a lack of standardized evaluation metrics to measure the system's performance. This can make it challenging for the AI team to agree on what defines validity and reliability.

    Initiatives

    • Assess training data and collected data for quality and lack of bias to minimize possible errors.
    • Continuously monitor, evaluate, and validate the AI system's performance.

    AI system performance: Validity and reliability

    Your principles should aim to ensure AI development always has high validity and reliability; otherwise, you introduce risk.

    Chart: reliability vs. validity targets (low reliability/low validity, high reliability/low validity, high reliability/high validity)

    Best practices for ensuring validity and reliability include:

    • Data drift detection
    • Version control
    • Continuous monitoring and testing
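
As one example of the first practice, the sketch below flags drift on a single numeric feature with a two-sample Kolmogorov-Smirnov test from scipy. The synthetic data and significance threshold are illustrative assumptions.

```python
# Minimal sketch: detecting data drift on one numeric feature with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)    # distribution at training time
production_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # recent production data (shifted)

statistic, p_value = ks_2samp(training_feature, production_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}); trigger review or retraining.")
else:
    print("No significant drift detected.")
```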

    Responsible AI Principle:

    Accountability

    Definition

    • The group or organization(s) responsible for the impact of the deployed AI system.

    Challenges

    • Several stakeholders from multiple lines of business may be involved in any AI system, making it challenging to identify the organization that would be responsible and accountable for the AI application.

    Initiatives

    • Assess the latest NIST Artificial Intelligence Risk Management Framework and its applicability to your organization's risk management framework.
    • Assign risk management accountabilities and responsibilities to key stakeholders.
      • RACI diagrams are an effective way to describe how accountability and responsibility for roles, projects, and project tasks are distributed among stakeholders involved in IT risk management.

    AI Risk Management Framework

    At the heart of the AI Risk Management Framework is governance. The NIST (National Institute of Standards and Technology) AI Risk Management Framework v1 offers the following guidelines regarding accountability:

    • Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
    • The organization's personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
    • Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.

    AI Risk Management Framework

    Image by NIST

    1.1 Establish responsible AI principles

    4+ hours

    It is important to make sure the right stakeholders participate in this working group. Designing responsible AI guiding principles will require debate, insights, and business decisions from a broad perspective across the enterprise.

    1. Accelerate this exercise by leveraging an AI strategy that is aligned to the business strategy. Include:
       • The organization's AI vision and objectives
       • Business drivers for AI adoption
       • Market research
    2. Bring your key stakeholders together. Ensure you consider:
       • Who are the decision makers and key influencers?
       • Who will impact the business?
       • Who has a vested interest in the success or failure of the practice? Who has the skills and competencies necessary to help you be successful?
    3. Keep the conversation focused:
       • Do not focus on the organizational structure and hierarchy. Often stakeholder groups do not fit the traditional structure.
       • Do not ignore subject matter experts on either the business or IT side. You will need to consider both.
    Input:
    • Understand external legal and regulatory requirements and organizational values and goals.
    • Perform a risk assessment on the proposed use case and develop a plan to monitor its impact.

    Output:
    • Draft responsible AI principles specific to your organization

    Materials:
    • Whiteboard/flip charts
    • Guiding principle examples (from this blueprint)

    Participants:
    • Executive stakeholders
    • CIO
    • Other IT leadership

    Assemble executive stakeholders

    Set yourself up for success with these three steps.

    CIOs tasked with designing digital strategies must add value to the business. Given that the goal of digital is to transform the business, CIOs need to ensure they have both the mandate and the support of business executives.

    Designing the digital strategy is more than just writing up a document. It is an integrated set of business decisions to create a competitive advantage and financial returns. Establishing a forum for debates, decisions, and dialogue will increase the likelihood of success and support during execution.

    1. Confirm your role
    The AI strategy aims to transform the business. Given the scope, validate your role and mandate to lead this work. Identify a business executive to co-sponsor.

    2. Identify stakeholders
    Identify key decision makers and influencers who can help make rapid decisions as well as garner support across the enterprise.

    3. Gather diverse perspectives

    Align the AI strategy with the corporate strategy

    Organizational Strategy
    • Conveys the current state of the organization and the path it wants to take.
    • Identifies future goals and organizational aspirations.
    • Communicates the initiatives that are critical for getting the organization from its current state to the future state.

    Unified Strategy
    • AI optimization can be and should be linked, with metrics, to the corporate strategy and ultimate organizational objectives.

    AI Strategy
    • Identifies AI initiatives that will support the business and key AI objectives.
    • Outlines staffing and resourcing for AI initiatives.
    • Communicates the organization's budget and spending on AI.

    Info-Tech Insight
    AI projects are more successful when the management team understands the strategic importance of alignment. Time needs to be spent upfront aligning organizational strategies with AI capabilities. Effective alignment between IT and other departments should happen daily. Alignment doesn't occur at the executive level alone, but at each level of the organization.

    Key AI strategy initiatives

    AI Key Initiative Plan

    Initiatives collectively support the business goals and corporate initiatives and improve the delivery of IT services.

    1. Revenue – Support Revenue Initiatives: These projects will improve or introduce business processes to increase revenue.
    2. Operational Excellence – Improve Operational Excellence: These projects will increase IT process maturity and will systematically improve IT.
    3. Innovation – Drive Technology Innovation: These projects will improve future innovation capabilities and decrease risk by increasing technology maturity.
    4. Risk Mitigation – Reduce Risk: These projects will improve future innovation capabilities and decrease risk by increasing technology maturity.

    Establish responsible AI guiding principles

    Guiding principles help define the parameters of your AI strategy. They act as a priori decisions that establish guardrails, limiting the scope of opportunities across people, assets, capabilities, and budgets so that they remain aligned with the business objectives. Consider these components when brainstorming guiding principles:

    Breadth – AI strategy should span people, culture, organizational structure, governance, capabilities, assets, and technology. The guiding principle should cover the entire organization.
    Planning Horizon – Timing should anchor stakeholders to look to the long term with an eye on the foreseeable future, i.e. business value-realization in one to three years.
    Depth – Principles need to encompass more than the enterprise view of lofty opportunities and establish boundaries to help define actionable initiatives (i.e. individual projects).

    Responsible AI guiding principles guide the development and deployment of the AI model in a way that considers human-based principles (such as fairness).

    Start with foundational responsible AI guiding principles

    Responsible AI

    Guiding Principles
    Principle #1 - Privacy
    Individual data privacy must be respected.
    • Do you understand the organization's privacy obligations?
    Principle #2 - Fairness and Bias Detection
    Data used will be unbiased in order to produce predictions that are fair.
    • Are the uses of the application represented in your testing data?
    Principle #3 - Explainability and Transparency
    Decisions or predictions should be explainable.
    • Can you communicate how the model behaves in nontechnical terms?
    Principle #4 - Safety and Security
    The system needs to be secure, safe to use, and robust.
    • Are there unintended consequences to others?
    Principle #5 - Validity and Reliability
    Monitoring of the data and the model needs to be planned for.
    • How will the model's performance be maintained?
    Principle #6 - Accountability
    A person or organization needs to take responsibility for any decisions that are made as a result of the model.
    • Has a risk assessment been performed?
    Principle #n - Custom
    Add additional principles that address compliance or are customized for the organization/industry.

    (Optional) Customize responsible AI guiding principles

    Here is an example for organizations in the healthcare industry

    Responsible AI

    Guiding Principles:
    Principle #1
    Respect individuals' privacy.
    Principle #2
    Clinical study participants and data sets are representative of the intended patient population.
    Principle #3
    Provide transparency in the use of data and AI.
    Principle #4
    Good software engineering and security practices are implemented.
    Principle #5
    Deployed models are monitored for performance, and retraining risks are managed.
    Principle #6
    Take ownership of our AI systems.
    Principle #7
    Design AI systems that empower humans and promote equity.

    These guiding principles are customized to the industry and organizations but remain consistent in addressing the common core AI challenges.

    Phase 2

    Assess Current Level of AI Maturity

    Phase 1
    1. Establish Responsible AI Guiding Principles

    Phase 2
    1. Assess Current Level of AI Maturity

    Phase 3
    1. Prioritize Candidate Opportunities
    2. Develop Policies

    Phase 4
    1. Build and Communicate the Roadmap

    AI Maturity Model

    A principle-based approach is required to advance AI maturity

    Chart for AI maturity model

    Technology-Centric: These maturity levels focus primarily on addressing the technical challenges of building a functional AI model.

    Principle-Based: Beyond the technical challenges of building the AI model are human-based principles that guide development in a responsible manner to address consumer and government demands.

    AI Maturity Dimensions

    Assess your AI maturity to understand your organization's ability to deliver in a digital age

    AI Governance
    Does your organization have an enterprise-wide, long-term strategy with clear alignment on what is required to accomplish it?

    Data Management
    Does your organization embrace a data-centric culture that shares data across the enterprise and drives business insights by leveraging data?

    People
    Does your organization employ people skilled at delivering AI applications and building the necessary data infrastructure?

    Process
    Does your organization have the technology, processes, and resources to deliver on its AI expectations?

    Technology
    Does your organization have the required data and technology infrastructure to support AI-driven digital transformation?

    AI Maturity Model dimensions and characteristics

    MATURITY LEVELS
    Exploration → Incorporation → Proliferation → Optimization → Transformation

    AI Governance: Awareness → AI model development → AI model deployment → Corporate governance → Driven by ethics and societal considerations
    Data Management: Silo-based → Data enablement → Data standardization → Data is a shared asset → Data can be monetized
    People: Few skills → Skills enabled to implement silo-based applications → Skills accessible to all organizations → Skills development for all organizations → AI-native culture
    Process: No standards → Focused on specific business outcomes → Operational → Self-service → Driven by innovation
    Technology (Infrastructure and AI Enabler): No dedicated infrastructure or tools → Infrastructure and tools driven by POCs → Purpose-built infrastructure, custom or commercial-off-the-shelf (COTS) AI tools → Self-service model for AI environment → Self-service model for any IT environment

    AI Maturity Dimension:

    AI Governance

    Requirements

    • AI governance requires establishing policies and procedures for AI model development and deployment. Organizations begin with an awareness of the role of AI governance and evolve to a level to where AI governance is integrated with organization-wide corporate governance.

    Challenges

    • Beyond the governance of AI technology, the organization needs to evolve the governance program to align to responsible AI guiding principles.

    Initiatives

    • Establish responsible AI guidelines to govern AI development.
    • Introduce an AI review board to review all AI projects.
    • Introduce automation and standardize AI development processes.

    AI governance is a foundation for responsible AI

    AI Governance

    Responsible AI Principles are a part of how you manage and govern AI

    Monitoring
    Monitoring compliance and risk of AI/ML systems/models in production

    Tools & Technologies
    Tools and technologies to support AI governance framework implementation

    Model Governance
    Ensuring accountability and traceability for AI/ML models

    Organization
    Structure, roles, and responsibilities of the AI governance organization

    Operating Model
    How AI governance operates and works with other organizational structures to deliver value

    Risk & Compliance
    Alignment with corporate risk management and ensuring compliance with regulations and assessment frameworks

    Policies/Procedures/ Standards
    Policies and procedures to support implementation of AI governance

    AI Maturity Dimension:

    Data Management

    Requirements

    • Organizations begin their data journey with a focus on pursuing quality data for the AI model. As organizations evolve, data management tools are leveraged to automate the capture, integration, processing, and deployment of data.

    Challenges

    • A key challenge is to acquire large volumes of quality data to properly train the model. In addition, maintaining data privacy, automating the data management lifecycle, and ensuring data is used in a responsible manner are ongoing challenges.

    Initiatives

    • Implement GDPR requirements.
    • Establish responsible data collection and processing practices.
    • Implement strong information security and data protection practices.
    • Implement a data governance program throughout the organization.

    Data governance enables AI

    • Integrity, quality, and security of data are key outputs of data governance programs, as well as necessities for effective AI.
    • Data governance focuses on creating accountability at the internal and external stakeholder level and establishing a set of data controls from technical, process, and policy perspectives.
    • Without a data governance framework, it is increasingly difficult to harness the power of AI integration in an ethical and organization-specific way.

    Data Governance in Action

    Canada has recently established the Canadian Data Governance Standardization Collaborative governed by the Standards Council of Canada. The purpose is multi-pronged:

    • Examine the foundational elements of data governance (privacy, cybersecurity, ethics, etc.).
    • Lay out standards for data quality and data collection best practices.
    • Examine infrastructure of IT systems to support data access and sharing.
    • Build data analytics to promote effective and ethical AI solutions.

    Source: Global Government Forum

    Download the Establish Data Governance blueprint

    Data Governance

    AI Maturity Dimension:

    People

    Requirements

    • Several data-centric skills and roles are required to successfully build, deploy, and maintain the AI model. The organization evolves from having few skills to everybody being able to leverage AI to enhance business outcomes.

    Challenges

    • AI skills can be challenging to find and acquire. Many organizations are investing in education to enhance their existing resources, leveraging no-code systems and software as a service (SaaS) applications to address the skills gap.

    Initiatives

    • Promote a data-centric culture throughout the organization.
    • Leverage and educate technically oriented business analysts and business-oriented data engineers to help address the demand for skilled resources.
    • Develop an AI Center of Excellence accessible by all departments for education, guidance, and best practices for building, deploying, and maintaining the AI model.

    Multidisciplinary skills are required for successful implementation of AI applications

    Blending AI with technology and business domain understanding is key. Neither can be ignored.

    Business Domain Expertise

    • Business Analysts
    • Industry Analysts

    AI/Data Skills

    • Data Scientists
    • Data Engineers
    • Data Analysts

    IT Skills

    • Database Administrators
    • Systems Administrators
    • Compute Specialists

    AI Maturity Dimension:

    Process

    Requirements

    • Automating processes involved with building, deploying, and maintaining the model is required to enable the organization to scale, enforce standards, improve time to market, and reduce costs. The organization evolves from performing tasks manually to an environment where all major processes are AI enabled.

    Challenges

    • Many solutions are available to automate the development of the AI model. There are fewer tools to automate responsible AI processes, but this market is growing rapidly.

    Initiatives

    • Assess opportunities to accelerate AI development with the adoption of MLOps.
    • Assess responsible AI toolkits to test compliance with guiding principles.

    Automating the AI development process

    Evolving to a model-driven environment is pivotal to advancing your AI maturity

    Current Environment

    Model Development - Months

    • Model rewriting
    • Manual optimization and scaling
    • Development/test/release
    • Application monoliths

    Data Discovery & Prep - Weeks

    • Navigating data silos
    • Unactionable metadata
    • Tracing lineage
    • Cleansing and integration
    • Privacy and compliance

    Install Software and Hardware - Weeks/Months

    • Workload contention
    • Lack of tool flexibility
    • Environment request and setup
    • Repeatability of results
    • Lack of data and model sharing

    Model-Driven Development

    Machine Learning as a Service (MLaaS) - Weeks

    • Apply DevOps and continuous integration/delivery (CI/CD) principles
    • Microservices/Cloud-native applications
    • Model portability and reuse
    • Streaming/API integration

    Data as a Service - Hours

    • Self-service data catalog
    • Searchable metadata
    • Centralized access control
    • Data collaboration
    • Data virtualization

    Platform as a Service - Minutes/Hours

    • Self-service data science portal
    • Integrated data sandbox
    • Environment agility
    • Multi-tenancy

    Shared, Optimized Infrastructure

    AI Maturity Dimension:

    Technology

    Requirements

    • A technology platform that is optimized for AI and advanced analytics is required. The organization evolves from ad hoc systems to an environment where the AI hardware and software can be deployed through a self-service model.

    Challenges

    • Software and hardware platforms to optimize AI performance are still relatively new to most organizations. Time spent on optimizing the technology platform can have a significant impact on the overall performance of the system.

    Initiatives

    • Assess the landscape of AI enablers that can drive business value for the organization.
    • Assess opportunities to accelerate the deployment of the AI platform with the adoption of infrastructure as a service (IaaS) and platform as a service (PaaS).
    • Assess opportunities to accelerate performance with the optimization of AI accelerators.

    AI enablers

    Use case requirements should drive the selection of the tool

    BPM (Business Process Management)
    • Use case examples: Expense reporting, service orders, compliance management, etc.
    • Automation capabilities: Can be used to re-engineer process flows to avoid bottlenecks.
    • Data formats: Structured (e.g. SQL) and semi-structured data (e.g. invoices).
    • Technology: Workflow engines to support process modeling and execution; optimizes business process efficiency.

    RPA (Robotic Process Automation)
    • Use case examples: Invoice processing, payroll, HR information processing, etc.
    • Automation capabilities: Can support repetitive and rules-based tasks.
    • Data formats: Structured and semi-structured data.
    • Technology: Automation platform to perform routine and repetitive tasks; can replace or augment workers.

    Process Mining
    • Use case examples: Process discovery, conformance checking, resource optimization, and cycle time optimization.
    • Automation capabilities: Can capture information from transaction systems and provide data and information about how key processes are performing.
    • Data formats: Event logs, which are often structured or semi-structured data.
    • Technology: Enables business users to identify bottlenecks and deviations within their workflows and to discover opportunities to optimize performance.

    AI
    • Use case examples: Advanced analytics and reporting, decision making, fraud detection, etc.
    • Automation capabilities: Can automate complex data-driven tasks requiring assessments in decision making.
    • Data formats: Structured and unstructured data (e.g. images, audio).
    • Technology: Deep learning algorithms leveraging historical data to support computer vision, text analytics, and NLP.

    AI and data analytics data platform

    An optimized data platform is foundational to maximizing the value from AI

    Data Platform Capabilities

    • Support for a variety of analytical applications, including self-service, operational, and data science analytics.
    • Data preparation and integration capabilities to ingest structured and unstructured data, move and transform raw data to enriched data, and enable data access for the target userbase.
    • An infrastructure platform optimized for advanced analytics that can perform and scale.

    Infrastructure - AI accelerators

    "By 2025, 70% of companies will invest in alternative computing technologies to drive business differentiation by compressing time to value of insights from complex data sets."
    - IDC

    2.1 Assess current AI maturity

    1-3 hours

    It is important to understand the current capabilities of the organization to deliver and deploy AI-based applications. Consider that advancing AI capabilities will also involve organizational changes and integration with the organization's governance and risk management programs.

    1. Assess the organization's current state of AI capabilities with respect to its AI governance, data, people, process, and technology infrastructure using Info-Tech's AI Maturity Assessment & Roadmap Tool.
    2. Consider the following as you complete the assessment:
      1. What is the state of AI and data governance in the organization?
      2. Does the organization have the skills, processes, and technology environment to deliver AI-based applications?
      3. What organization will be accountable for any and all business outcomes of using the AI applications?
      4. Has a risk assessment been performed?
    3. Make sure you avoid the following common mistakes:
      1. Do not focus only on addressing the technical challenges of building the AI model.
      2. Do not ignore subject matter experts on either the business or IT side. You will need to consider both.

    Download the AI Maturity Assessment & Roadmap Tool

    Input:
    • Any documented AI policies, standards, and best practices
    • Corporate and AI governance practices
    • Any risk assessments

    Output:
    • AI maturity assessment

    Materials:
    • Whiteboard/flip charts
    • AI Maturity Assessment & Roadmap Tool

    Participants:
    • AI initiative lead
    • CIO
    • Other IT leadership

    Perform the AI Maturity Assessment

    The Scale

    Assess your AI maturity by selecting the maturity level that closest resembles the organization's current AI environment. Maturity dimensions that contribute to overall AI maturity include AI governance, data management, people, process, and technology capabilities.

    AI Maturity Assessment

    Exploration (1.0)

    • No experience building or using AI applications.

    Incorporation (2.0)

    • Some skills in using AI applications, or AI pilots are being considered for use.

    Proliferation (3.0)

    • AI applications have been adopted and implemented in multiple departments. Some of the responsible AI guiding principles are addressed (e.g. data privacy).

    Optimization (4.0)

    • The organization has automated the majority of its digital processes and leverages AI to optimize business operations. Controls are in place to monitor compliance with responsible AI guiding principles.

    Transformation (5.0)

    • The organization has adopted an AI-native culture and approach for building or implementing new business capabilities. Responsible AI guiding principles are operationalized with AI processes that proactively address possible breaches or risks associated with AI applications.
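
    As a companion to the scale, the short Python sketch below shows one way to roll the five dimension scores into an overall maturity level. The equal weighting and the level banding are illustrative assumptions, not the scoring logic of the AI Maturity Assessment & Roadmap Tool.

    # Minimal sketch: combine five dimension scores (1.0-5.0) into an overall
    # AI maturity level. Dimension names follow this blueprint; the simple
    # average and the banding are assumptions for illustration only.
    LEVELS = [
        (1.0, "Exploration"),
        (2.0, "Incorporation"),
        (3.0, "Proliferation"),
        (4.0, "Optimization"),
        (5.0, "Transformation"),
    ]

    def maturity_level(scores: dict[str, float]) -> tuple[float, str]:
        overall = sum(scores.values()) / len(scores)
        # Pick the highest level whose threshold the overall score reaches.
        label = next(name for threshold, name in reversed(LEVELS) if overall >= threshold)
        return round(overall, 1), label

    scores = {
        "AI governance": 2.0,
        "Data management": 3.0,
        "People": 2.0,
        "Process": 2.0,
        "Technology": 3.0,
    }
    print(maturity_level(scores))  # (2.4, 'Incorporation')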

    Perform the AI Maturity Assessment

    AI Governance (1.0-5.0)

    1. Is there awareness of the role of AI governance in our organization?
       • No formal procedures are in place for AI development or deployment of applications.
    2. Are there documented guidelines for the development and deployment of pilot AI applications?
       • No group is assigned to be responsible for AI governance in our organization.
    3. Are accountability and authority related to AI governance clearly defined for our organization?
       • Our organization has adopted and enforces standards for developing and deploying AI applications throughout the organization.
    4. Are we using tools to automate and validate AI governance compliance?
       • Our organization is integrating an AI risk framework with the corporate risk management framework.
    5. Does our organization lead its industry with its pursuit of corporate compliance initiatives (e.g. ESG compliance) and regulatory compliance initiatives?
       • Our organization leads the industry with the inclusion of responsible AI guiding principles with respect to transparency, accountability, risk, and governance.

    Data Management/AI Data Capabilities (1.0-5.0)

    1. Is there an awareness in our organization of the data requirements for developing AI applications?
       • Data is often siloed and not easily accessible for AI applications.
    2. Do we have a successful, repeatable approach to preparing data for AI pilot projects?
       • Required data is pulled from various sources in an ad hoc manner.
    3. Does our organization have standards and dedicated staff for data management, data quality, data integration, and data governance?
       • Tools are available to manage the data lifecycle and support the data governance program.
    4. Have relevant data platforms been optimized for AI and data analytics and are there tools to enforce compliance with responsible AI principles?
       • The data platform has been optimized for performance and access.
    5. Is there an organization-wide understanding of how data can support innovation and responsible use of AI?
       • Data culture exists throughout our organization, and data can be leveraged to drive innovation initiatives.

    People/AI Skills in the Organization (1.0-5.0)

    1. Is there an awareness in our organization of the skills required to build AI applications?
       • Few or no AI skills exist throughout our organization.
    2. Do we have the skills required to implement an AI proof of concept (POC)?
       • No formal group is assigned to build AI applications.
    3. Are there sufficient staff and skills available to the organization to develop, deploy, and run AI applications in production?
       • An AI Center of Excellence has been formed to review, develop, deploy, and maintain AI applications.
    4. Is there a group responsible for educating staff on AI best practices and our organization's responsible AI guiding principles?
       • AI skills and people responsible for AI applications are spread throughout our organization.
    5. Is there a culture where the organization is constantly assessing where business capabilities, services, and products can be re-engineered or augmented with AI?
       • The entire organization is knowledgeable on how to leverage AI to transform the business.

    Perform the AI Maturity Assessment

    AI Processes (1.0-5.0)

    1. Is there an awareness in our organization of the core processes and supporting tools that are required to build and support AI applications?
       • There are few or no automated tools to accelerate the AI development process.
    2. Do we have a standard process to iteratively identify, select, and pilot new AI use cases?
       • Only ad hoc practices are used for developing AI applications.
    3. Are there standard processes to scale, release, deploy, support, and enable use of AI applications?
       • Our organization has documented standards in place for developing AI applications and deploying them to production.
    4. Are we automating deployment, testing, governance, audit, and support processes across our AI environment?
       • Our organization can leverage tools to perform an AI risk assessment and demonstrate compliance with the risk management framework.
    5. Does our organization lead our industry by continuously improving and re-engineering core processes to drive improved business outcomes?
       • Our organization leads the industry in driving innovation through digital transformation.

    Technology/AI Infrastructure (1.0-5.0)

    1. Is there an awareness in our organization of the infrastructure (hardware and software) required to build AI applications?
       • There is little awareness of what infrastructure is required to build and support AI applications.
    2. Do we have the required technology infrastructure and AI tools available to build pilot or one-off AI applications?
       • There is no dedicated infrastructure for the development of AI applications.
    3. Is there a shared, standardized technology infrastructure that can be used to build and run multiple AI applications?
       • Our organization is leveraging purpose-built infrastructure to optimize performance.
    4. Is our technology infrastructure optimized for AI and advanced analytics, and can it be deployed or scaled on demand by teams building and running AI applications within the organization?
       • Our organization is leveraging cloud-based deployment models to support AI applications in on-premises, hybrid, and public cloud platforms.
    5. Is our organization developing innovative approaches to acquiring, building, or running AI infrastructure?
       • Our organization leads the industry with its ability to respond to change and to leverage AI to improve business outcomes.

    Phase 3

    Prioritize Candidate Opportunities and Develop Policies

    Phase 1
    1. Establish Responsible AI Guiding Principles

    Phase 2
    1. Assess Current Level of AI Maturity

    Phase 3
    1. Prioritize Candidate Opportunities
    2. Develop Policies

    Phase 4
    1. Build and Communicate the Roadmap

    3.1 Prioritize candidate AI opportunities

    1-3 hours

    Identify business opportunities that are high impact to your business and its customers and have low implementation complexity.

    1. Leverage the business capability map for your organization or industry to identify candidate business capabilities to augment or automate with generative AI.
    2. Establish criteria to assess candidate use cases by evaluating against the organization's mission and goals, the responsible AI guiding principles, and the complexity of the project.
    3. Ensure that candidate business capabilities to be automated align with the organization's business criteria, responsible AI guiding principles, and resources to deliver the project.
    4. Make sure you avoid sharing the organization's sensitive data if the application is deployed on the public cloud.

    Download the AI Maturity Assessment and Roadmap Tool

    Input:
    • Business capability map
    • Organization mission, vision, and strategic goals
    • Responsible AI guiding principles

    Output:
    • Prioritized list of generative AI initiatives

    Materials:
    • Whiteboard/flip charts
    • Info-Tech prioritization matrix

    Participants:
    • AI initiative lead
    • CIO
    • Other IT leadership
    • Business SMEs

    The business capability map for an organization

    A business capability map is an abstraction of business operations that helps describe what the enterprise does to achieve its vision, mission, and goals, rather than how. Business capabilities are the building blocks of the enterprise. They represent stable business functions, are unique and independent of each other, and typically will have a defined business outcome.

    Business capabilities are supported by people, process, and technology.

    Business capability map

    While business capability maps are helpful tools for a variety of strategic purposes, in this context they act as an investigation into what technology your business units use and how they use it.

    Business capability map

    Defining Capabilities
    Activities that define how the entity provides services. These capabilities support the key value streams for the organization.

    Enabling Capabilities
    Support the creation of strategic plans and facilitate business decision making as well as the functioning of the organization (e.g. information technology, financial management, HR).

    Shared Capabilities
    These predominantly customer-facing capabilities demonstrate how the entity supports multiple value streams simultaneously.

    Leverage your industry's capability maps to identify candidate opportunities/initiatives

    Business capability map defined...

    In business architecture, the primary view of an organization is known as a business capability map.

    A business capability defines what a business does to enable value creation, rather than how. Business capabilities:

    • Represent stable business functions.
    • Are unique and independent of each other.
    • Typically will have a defined business outcome.

    A business capability map provides details that help the business architecture practitioner direct attention to a specific area of the business for further assessment.
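
    For teams that keep their capability map in a catalog or in code, each capability can be represented as a small record. The sketch below (Python) is illustrative only; the field names and sample values are assumptions, not Info-Tech's capability map schema.

    # Illustrative record for a business capability: a stable function with a
    # defined outcome, classified as defining, enabling, or shared, and
    # supported by people, process, and technology. Sample values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class BusinessCapability:
        name: str                    # what the business does, e.g. "Create Content"
        business_outcome: str        # the defined outcome the capability delivers
        capability_type: str         # "defining", "enabling", or "shared"
        supported_by: dict[str, list[str]] = field(default_factory=dict)

    create_content = BusinessCapability(
        name="Create Content",
        business_outcome="Publish on-brand marketing content",
        capability_type="defining",
        supported_by={"technology": ["CMS", "generative AI assistant"]},
    )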

    Note: This is an illustrative business capability map example for Marketing & Advertising

    Business capability map example

    Business value vs. complexity assessment

    Leverage our simple value-to-effort matrix to help prioritize your AI initiatives

    Common business value drivers

    • Drive revenue
    • Improve operational excellence
    • Accelerate innovation
    • Mitigate risk

    Common project complexity characteristics

    • Resources required
    • Costs (acquisition, operational, support...)
    • Training required
    • Risk involved
    • Etc.
    1. Determine a business value and project complexity score for the candidate business capability or initiative.
    2. Plot initiatives on the matrix.
    3. Prioritize initiatives with high business value and low complexity, as sketched in the example below.
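
    A minimal sketch of this prioritization step in Python, assuming a simple 1-5 scale for business value and project complexity with equal weighting. The capability names and scores are purely illustrative, and the Info-Tech prioritization matrix may use different scales and criteria.

    # Sort candidates so high-value, low-complexity items come first, then flag
    # the ones that fall into the "pursue first" quadrant. Scores are examples.
    def prioritize(initiatives: list[dict]) -> list[dict]:
        return sorted(initiatives, key=lambda i: (-i["business_value"], i["complexity"]))

    candidates = [
        {"name": "Content Production", "business_value": 4, "complexity": 2},
        {"name": "Create Content", "business_value": 3, "complexity": 4},
        {"name": "Fraud Detection", "business_value": 5, "complexity": 5},
    ]

    for item in prioritize(candidates):
        quadrant = "pursue first" if item["business_value"] >= 4 and item["complexity"] <= 3 else "review"
        print(f'{item["name"]}: value={item["business_value"]}, complexity={item["complexity"]} -> {quadrant}')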

    Assess business value vs. project complexity to prioritize candidate opportunities for generative AI

    Prioritize opportunities/initiatives with high business value and low project complexity

    Prioritization criteria exercise 1: Assessing the Create Content capability

    This opportunity is removed because it does not pass the organization/business criteria.

    Prioritization criteria exercise 2: Assessing the Content Production capability

    This opportunity is accepted because it passes the organization's business, responsible AI, and project criteria.

    3.2 Communicate policies for AI use

    1-3 hours

    1. Ensure policies for usage align with the organization's business criteria, responsible AI guiding principles, and ability to deliver the prioritized projects and those that follow.
    2. Understand the current benefits as well as the limits and risks associated with any proposed generative AI-based solution.
    3. Ensure you consider the following:
      1. What data is being shared with the application?
      2. Is the generative AI application deployed on the public cloud? Can anybody access the data provided to the application?
      3. Avoid using very technical, legal, or fear-based communication for your policies.
    Input:
    • Business capability map
    • Organization mission, vision, and strategic goals
    • Responsible AI guiding principles

    Output:
    • Prioritized list of generative AI initiatives

    Materials:
    • Whiteboard/flip charts
    • Info-Tech prioritization matrix

    Participants:
    • AI initiative lead
    • CIO
    • Other IT leadership

    Generative AI policy for the Create Content capability

    Aligning policies to direct the uses you have assessed and implemented is essential

    Example

    Many of us have been involved in discussions regarding the use of ChatGPT in our marketing and sales initiatives. ChatGPT is a powerful tool that needs to be used in a responsible and ethical manner, and we also need to ensure the integrity and accuracy of its results. Here is our policy on the use of ChatGPT:

    • You are free to use generative AI to assist your searches, but there are NO circumstances under which you are to reproduce generative AI output (text, image, audio, video, etc.) in your content.

    If you have any questions regarding the use of ChatGPT, please feel free to reach out to our generative AI team and/or any member of our senior leadership team.

    Generative AI policy for the Content Production capability

    These policies should align to and reinforce your responsible AI principles

    Example

    Many of us have been involved in discussions regarding the use of ChatGPT in our deliverables. ChatGPT is a powerful tool that needs to be used in a responsible and ethical manner, and we also need to ensure the integrity and accuracy of its results. Here is our policy on the use of ChatGPT:

    • If you use ChatGPT, you need to assess the accuracy of its response before including it in our content. Assessment includes verifying the information, seeing if bias exists, and judging its relevance.
    • Employees must not:
      • Provide any customer, citizen, or third-party content to any generative AI tool (public or private) without the express written permission of the CIO or the Chief Information Security Officer. Generative AI tools often use input data to train their model, therefore potentially exposing confidential data, violating contract terms and/or privacy legislation, and placing the organization at risk of litigation or causing damage to our organization.
      • Engage in any activity that violates any applicable law, regulation, or industry standard.
      • Use services for illegal, harmful, or offensive purposes.
      • Create or share content that is deceptive, fraudulent, or misleading or that could damage the reputation of our organization.
      • Use services to gain unauthorized access to computer systems, networks, or data.
      • Attempt to interfere with, bypass controls of, or disrupt operations, security, or functionality of systems, networks, or data.

    If you have any questions regarding the use of ChatGPT, please feel free to reach out to our generative AI team and/or any member of our senior leadership team.

    Phase 4

    Build the Roadmap

    Phase 1
    1. Establish Responsible AI Guiding Principles

    Phase 2
    1. Assess Current Level of AI Maturity

    Phase 3
    1. Prioritize Candidate Opportunities
    2. Develop Policies

    Phase 4
    1. Build and Communicate the Roadmap

    4.1.1 Create the implementation plan for each prioritized initiative

    1-3 hours

    1. Build the implementation plan for each accepted use case using the roadmap template.
    2. Assess the firm's capabilities with respect to the dimensions of AI maturity and target the future-state capabilities you need to develop.
    3. Prepare by assessing the risk of the proposed use cases.
    4. Ensure initiatives align with organizational objectives.
    5. Ensure all AI initiatives have a defined value expectation.
    6. Do not ignore subject matter experts on either the business or IT side. You will need to consider both.

    Download the AI Maturity Assessment and Roadmap Tool

    Input:
    • Prioritized initiatives
    • Risk assessment of initiatives
    • Organizational objectives

    Output:
    • Initiative implementation plans aligned to value drivers and maturity growth

    Materials:
    • Whiteboard/flip charts
    • AI Maturity Assessment and Roadmap Tool

    Participants:
    • AI initiative lead
    • CIO
    • Other IT leadership
    • Business subject matter experts

    Target-state options

    Identify the future-state capabilities that need to be developed to deliver your use cases

    1. Build an implementation plan for each use case to adopt.
    2. Assess if the current state of the AI environment can be leveraged to deliver the selected generative AI use cases.
    3. If the current AI environment is not sufficient, identify the future state required that will enable the delivery of the generative AI use cases. Identify gaps and build the roadmap to address the gaps.
    Current state and corresponding strategy:
    • The existing environment satisfies functionality, integration, and responsible AI guidelines for the proposed use cases: maintain the current environment.
    • The existing environment addresses the technical requirements but not all the responsible AI guidelines: augment the current environment.
    • The environment neither addresses the technical requirements of the proposed use cases nor complies with the responsible AI guidelines: transform the current environment.
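
    A minimal sketch of this decision logic, assuming two yes/no inputs (technical fit and responsible AI fit). The function name is an assumption, and the case of an environment that meets the responsible AI guidelines but not the technical requirements is mapped to "transform" here as an assumption, since the table above does not list it.

    # Map the current-state assessment of a use case to a target-state strategy.
    def target_state_strategy(meets_technical_requirements: bool,
                              meets_responsible_ai_guidelines: bool) -> str:
        if meets_technical_requirements and meets_responsible_ai_guidelines:
            return "Maintain current environment"
        if meets_technical_requirements:
            return "Augment current environment"
        return "Transform the current environment"

    print(target_state_strategy(True, False))  # Augment current environment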

    4.1.2 Design metrics for success

    1-2 hours

    Establish metrics to determine the success or failure of each POC.

    1. Discuss which relevant currently tracked metrics are useful to continue tracking for the POC.
    2. Discuss which metrics are irrelevant to the POC.
    3. Discuss metrics to start tracking and how to track them with the generative AI vendor.
    4. Compile a list of metrics relevant to the POC.
    5. Decide what the outcome is if the metric is high or low, including decision steps and relevant actions.
    6. Designate a generative AI application owner and a vendor liaison.

    Prepare by building an implementation plan for each candidate use case (previous step).

    Include key performance indicators (KPIs) and metrics that measure the application's contribution to strategic initiatives.

    Consider assigning a vendor liaison to accelerate the implementation and adoption of the generative AI-based solution.
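
    One way to capture the outcome of steps 4-6 is a simple record that pairs each metric with how it is tracked, its target, the follow-up actions, and an owner. The sketch below (Python) is illustrative; the metric, target, and actions are hypothetical examples, not prescribed values.

    # Illustrative POC metric record with a target and decision steps.
    from dataclasses import dataclass

    @dataclass
    class PocMetric:
        name: str
        how_tracked: str   # e.g. vendor dashboard, survey, log review
        target: str        # the threshold that defines success
        if_met: str        # decision step when the target is reached
        if_missed: str     # decision step when it is not
        owner: str         # application owner or vendor liaison

    metrics = [
        PocMetric(
            name="Response accuracy",
            how_tracked="Reviewer spot-checks of sampled outputs",
            target=">= 90% of sampled responses verified as accurate",
            if_met="Proceed to a limited production rollout",
            if_missed="Rework prompts and data with the vendor, then re-test",
            owner="Generative AI application owner",
        ),
    ]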

    Input:
    • Initiative implementation plans
    • Current SLAs of selected use case
    • Organization mission, vision, and strategic goals

    Output:
    • Measurable initiative metrics to track

    Materials:
    • Whiteboard/flip charts
    • AI Maturity Assessment and Roadmap Tool

    Participants:
    • AI initiative lead
    • CIO
    • Other IT leadership
    • Business SMEs
    • Generative AI vendor liaison

    Generative AI POC metrics - examples

    You need to measure the effectiveness of your initiatives. Here are some typical examples.

    Generative AI Feature Assessment

    • User Interface: Is it intuitive? Is training required?
    • Ease of Use: How much training is required before using?
    • Response Time: What is the response time for simple to complex tasks?
    • Accuracy of Response: Can the output be validated?
    • Quality of Response: How usable is the response? For text prompts, does the response align to the desired style, vocabulary, and tone?
    • Creativity of Response: Does the output appear new compared to results produced before using generative AI?
    • Relevance of Response: How well does the output address the prompt or request?
    • Explainability: Can a user describe how the output was generated?
    • Scalability: Does the application continue to perform as more users are added? Can it ingest large amounts of data?
    • Productivity Gains: Can you measure the time or effort saved?
    • Business Value: What value drivers are behind this initiative (e.g. revenue, costs, time to market, risk mitigation)? Estimate a monetary value for the business outcome.
    • Availability/Resilience: What happens if a component of the application becomes unavailable? How does it recover?
    • Security Model: Where are the prompts and responses stored? Who has access to the sessions/dialogue? Are the prompts used to train the foundation model?
    • Administration and Maintenance: What resources are required to operate the application?
    • Total Cost of Ownership: What is the pricing model? Are there ongoing costs?

    GitHub Copilot POC business value - example

    Quantifying the benefits of GitHub Copilot to demonstrate measurable business value

    POC Results

    Task 1: Creating a web server in JavaScript

    • Time to complete task with GitHub Copilot: 1 hour 11 minutes
    • Time to complete the task without GitHub Copilot: 2 hours 41 minutes
    • Productivity Gain = (1 hour 30 minutes time saved) / (2 hours 41 minutes) = 55%
    • Benefit per Programmer = 55% x (average salary of a programmer)
    • Total Benefit of GitHub Copilot for Task 1 = (benefit per programmer) x (# of programmers)

    Enterprise Value of GitHub Copilot = Total Benefit of GitHub Copilot for Task 1 + Total Benefit of GitHub Copilot for Task 2 + ... + Total Benefit of GitHub Copilot for Task n

    Source: GitHub
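
    The worked example above can be reproduced in a few lines of Python. Only the task times come from the GitHub-reported POC; the salary figure and programmer count below are placeholder assumptions.

    # Productivity gain and illustrative business value for Task 1.
    def productivity_gain(minutes_with: int, minutes_without: int) -> float:
        return (minutes_without - minutes_with) / minutes_without

    def task_benefit(minutes_with: int, minutes_without: int,
                     avg_programmer_salary: float, num_programmers: int) -> float:
        return productivity_gain(minutes_with, minutes_without) * avg_programmer_salary * num_programmers

    gain = productivity_gain(71, 161)        # 1h11m with Copilot vs. 2h41m without
    print(f"Productivity gain: {gain:.0%}")  # ~56%, reported as 55% in the example above

    # Placeholder assumptions: $100,000 average salary, 20 programmers.
    task1 = task_benefit(71, 161, 100_000, 20)
    enterprise_value = task1                 # + the benefit of each further measured task
    print(f"Task 1 benefit: ${task1:,.0f}")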

    4.1.3 Build your generative AI initiative roadmap

    1-3 hours

    The roadmap should provide a compelling vision of how you will deliver the identified generative AI applications by prioritizing and simplifying the actions required to deliver these new initiatives.

    1. Leverage tab 4, Initiative Planning, in the AI Maturity Assessment and Roadmap Tool to create and align your initiatives to the key value driver they are most relevant to:
      1. Transfer the results of your value and complexity assessments to this tool to drive the prioritization.
      2. Assign responsible owners to each initiative.
      3. Identify which AI maturity capabilities each initiative will enhance. However, do not build or introduce new capabilities merely to advance the organization's AI maturity level.
    2. Review the Gantt chart to ensure alignment and assess overlap; a sketch of the resulting roadmap entries follows below.
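
    The roadmap entries produced on the Initiative Planning tab can be thought of as records like the sketch below. Field names, initiative names, owners, and dates are illustrative assumptions, not the tool's actual layout.

    # Illustrative roadmap entry with owner, value driver, and date range,
    # ordered for a simple Gantt-style listing.
    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class RoadmapInitiative:
        name: str
        owner: str
        value_driver: str            # e.g. drive revenue, operational excellence
        priority: int                # from the value/complexity assessment
        start: date
        end: date
        maturity_capabilities: list[str]

    roadmap = [
        RoadmapInitiative("Content Production assistant", "Marketing IT lead",
                          "Improve operational excellence", 1,
                          date(2024, 1, 8), date(2024, 3, 29),
                          ["Process", "Technology"]),
        RoadmapInitiative("Customer-support summarization POC", "Service desk manager",
                          "Mitigate risk", 2,
                          date(2024, 2, 5), date(2024, 4, 26),
                          ["AI governance", "People"]),
    ]

    for item in sorted(roadmap, key=lambda i: (i.priority, i.start)):
        print(f"{item.start} -> {item.end}  [{item.owner}] {item.name} ({item.value_driver})")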

    Download the AI Maturity Assessment and Roadmap Tool

    Input:
    • Each initiative implementation plan
    • Proposed owners
    • AI maturity assessment

    Output:
    • Generative AI initiative roadmap and Gantt chart

    Materials:
    • Whiteboard/flip charts
    • AI Maturity Assessment and Roadmap Tool

    Participants:
    • AI initiative lead
    • CIO
    • Other IT leadership
    • Business SMEs

    Build your generative AI roadmap to visualize your key project plans

    Visual representations of data are more compelling than text alone.

    Develop a high-level document that travels with the project from inception through to executive inquiry, project management, and finally execution.

    A project needs to be discrete: able to be conceptualized and discussed as an independent item. Each project must have three characteristics:

    • Specific outcome: An explicit change in the people, processes, or technology of the enterprise.
    • Target end date: When the described outcome will be in effect.
    • Owner: Who on the IT team is responsible for executing on the initiative.

    Info-Tech Insight
    Don't project your vision three to five years into the future. Deep dive on next year's big-ticket items instead.

    4.1.4 Build a communication plan for your roadmap

    1-3 hours

    1. Identify your target audience and what they need to know.
    2. Identify desired channels of communication and details for the target audience.
    3. Describe communication required for each audience segment.
    4. List frequency of communication for each audience segment.
    5. Create an executive presentation leveraging The Era of Generative AI C-Suite Presentation and AI Maturity Assessment and Roadmap Tool.

    Input:
    • Stakeholder list
    • Proposed owners
    • AI maturity assessment

    Output:
    • Communications plan for all impacted stakeholders
    • Executive communication pack

    Materials:
    • Whiteboard/flip charts
    • The Era of Generative AI C-Suite Presentation
    • AI Maturity Assessment and Roadmap Tool

    Participants:
    • AI initiative lead
    • CIO
    • Communication lead
    • Technical support staff for target use case

    Generative AI communication plan

    Well-planned communications are essential to the success and adoption of your AI initiatives

    To ensure the organization's roadmap is clearly communicated across the AI, data, technology, and business organizations, develop a rollout strategy like the example below.

    Example

    Audience: Generative AI team
    • Channel: Email, meetings
    • Level of detail: All
    • Description: Distribute plan; solicit feedback. Address manager questions to equip them to answer employee questions.
    • Timing: Q3 2023 (September, before the entire data team)

    Audience: Data management team
    • Channel: Email, with Q&A sessions following
    • Level of detail: Data management summary deck
    • Description: Roll out after the corporate strategy, in the same form of communication. Solicit feedback and address questions.
    • Timing: Q4 2023 (late November)

    Audience: Select business stakeholders
    • Channel: Presentations
    • Level of detail: Executive deck
    • Description: Pilot test for feedback prior to executive engagement.
    • Timing: Q4 2023 (early December)

    Audience: Executive team
    • Channel: Email, briefing
    • Level of detail: Executive deck
    • Description: Distribute plan.
    • Timing: Q1 2024

    Deliver an executive presentation of the roadmap for the business stakeholders

    After you complete the activities and exercises within this blueprint, the final step of the process is to present the deliverable to senior management and stakeholders.

    Know Your Audience

    • Business stakeholders are interested in understanding the business outcomes that will result from their investment in generative AI.
    • Your audience will want to understand the risks involved and how to mitigate those risks.
    • Explain how the generative AI project was selected and the criteria used to help draft generative AI usage policies.

    Recommendations

    • Highlight the need for responsible AI to ensure that human-based requirements are being addressed.
    • Ensure your generative AI team includes both business and technical staff.

    Download The Era of Generative AI C-Suite Presentation

    Bibliography

    "A pro-innovation approach to AI regulation." UK Department for Science, Innovation and Technology, March 2023. Web.

    "Artificial Intelligence Act." European Commission, 21 April 2021. Web.

    "Artificial Intelligence and Data Act (AIDA)." Canadian Federal Government, June 2022. Web.

    "Artificial Intelligence Index Report 2023." Stanford University, April 2023. Web.

    "Automated Employment Decision Tools." New York City Department of Consumer and Worker Protection, Dec. 2021. Web.

    "Bain & Company announces services alliance with OpenAI to help enterprise clients identify and realize the full potential and maximum value of AI." Bain & Company, 21 Feb. 2023. Web.

    "Buzzfeed to use AI to write its articles after firing 180 employees." Al Mayadeen English, 27 Jan. 2023. Web.

    "California Consumers Privacy Act." State of California Department of Justice. April 24, 2023. Web.

    Campbell, Ian Carlos. "The Apple Card doesn't actually discriminate against women, investigators say." The Verge, 23 March 2021. Web.

    Campbell, Patrick. "NIST Artificial Intelligence Risk Management Framework (AI RMF 1.0)." National Institute of Standards and Technology, Jan. 2023. Web.

    "EU Ethics Guidelines For Trustworthy." European Commission, 8 April 2019. Web.

    Farhi, Paul. "A news site used AI to write articles. It was a journalistic disaster." Washington Post, 17 Jan. 2023. Web.

    Forsyth, Ollie. "Mapping the Generative AI landscape." Antler, 20 Dec. 2022. Web.

    "General Data Protection Regulation (GDPR)" European Commission, 25 May 2018. Web.

    "Generative AI Market: Global Industry Trends, Share, Size, Growth, Opportunity and Forecast 2023-2028." IMARC Group, 2022. Web.

    Guynn, Jessica. "Bing's ChatGPT is in its feelings: 'You have not been a good user. I have been a good Bing.'" USA Today, 14 Feb. 2023. Web.

    Hunt, Mia. "Canada launches data governance standardisation initiative." Global Government Forum, 24 Sept. 2020. Web.

    Johnston Turner, Mary. "IDC's Worldwide Future of Digital Infrastructure 2022 Predictions." IDC, 27 Oct. 2021. Web.

    Kalliamvakou, Eirini. "Research: quantifying GitHub Copilot's impact on developer productivity and happiness." GitHub, 7 Sept. 2022. Web.

    Kerravala, Zeus. "NVIDIA Brings AI To Health Care While Protecting Patient Data." eWeek, 12 Dec. 2019. Web.

    Knight, Will. "The Apple Card Didn't 'See' Gender - and That's the Problem." Wired, 19 Nov. 2019. Web.

    "OECD, Recommendation of the Council on Artificial Intelligence." OECD, 2022. Web.

    "The National AI Initiative Act" U.S. Federal Government, 1 Jan 2021. Web.

    "Trustworthy AI (TAI) Playbook." U.S. Department of Health & Human Services, Sept 2021. Web.

    Info-Tech Research Contributors/Advocates

    Joel McLean, Executive Chairman
    David Godfrey, CEO
    Gord Harrison, Senior Vice President, Research & Advisory Services
    William Russell, CIO
    Jack Hakimian, SVP, Research
    Barry Cousins, Distinguished Analyst and Research Fellow
    Larry Fretz, Vice President, Industry Research
    Tom Zehren, CPO
    Mark Roman, Managing Partner II
    Christine West, Managing Partner
    Steve Willis, Practice Lead
    Yatish Sewgoolam, Associate Vice President, Research Agenda
    Rob Redford, Practice Lead
    Mike Tweedie, Practice Lead
    Neal Rosenblatt, Principal Research Director
    Jing Wu, Principal Research Director
    Irina Sedenko, Research Director
    Jeremy Roberts, Workshop Director
    Brian Jackson, Research Director
    Mark Maby, Research Director
    Stacey Horricks, Director, Social Media
    Sufyan Al-Hassan, Public Relations Manager
    Sam Kanen, Marketing Specialist

