Building successful AI that’s grounded in trust and transparency


Table of contents

Chapter 1: How to avoid bias and drift while ensuring explainability
Chapter 2: Building trust in AI requires a strategic approach
Chapter 3: Why avoiding bias is critical to AI success
Chapter 4: How to avoid AI model drift with monitoring and management
Chapter 5: AI models lose value if their results can’t be explained
Chapter 6: Get started with AI to yield better outcomes

Chapter 1

How to avoid bias and drift while ensuring explainability

Artificial Intelligence (AI) has expanded its role in our daily lives and is now virtually everywhere — from our workplaces and smart homes to our ADAS-equipped cars and ubiquitous chatbots. AI is also helping to transform businesses and how people work by automating processes, providing better insights through data, and creating new ways of engaging with customers and employees. Businesses across industries are looking to AI to support human decision making, with 84% of executives anticipating an increased organizational focus on AI in the near future.1 AI is fast becoming a necessity for any business that wants to stay relevant, respond quickly to market disruptions, and predict future opportunities.

The global AI market was valued at $27.23 billion in 2019 and is projected to reach $266.92 billion by 2027.2

But not all AI is created equal. For AI to have meaningful impact, businesses must answer a critical question: can we trust it? Businesses need to trust not only the AI models themselves, but also the outcomes they produce, because AI that is poorly trained, improperly monitored, or unable to be explained can do more harm than good.

Trust in AI can only be established when it is fair, transparent and explainable. In fact, 91% of organizations say that their ability to explain how their AI made a decision is critical.3 So how can organizations build trust to tap into the full business value of AI?

It all begins with the right strategy.


1. The Business Value of AI, IBM, November 2020.
2. Artificial Intelligence Market Size, Share & Covid-19 Impact Analysis, Fortune Business Insights, July 2020.
3. Global Data from IBM Points to AI Growth as Businesses Strive for Resilience, IBM, August 2021.

Chapter 2

Building trust in AI requires a strategic approach

Human trust in technology is rooted in our understanding of how it works. AI promises to deliver valuable insights and knowledge, but broad adoption of AI systems relies heavily on the ability to trust the AI output. To trust a decision made by an AI algorithm, you need to know that it’s fair, accurate, ethical and explainable. To have ethical AI that isn’t causing inequalities, it’s important to start with a clear vision and understanding of who is training the AI, what data was used, and what went into the algorithms’ recommendations.1 This is a tall order, and it requires a clear and deliberate strategy.

Organizations must master the quality of the data they use, mitigate algorithmic bias, and provide answers that are supported with evidence. An organization’s ability to build and earn trust is grounded in five key pillars:

• Explainability: it’s critical to understand how AI-led decisions are made and which determining factors they include. You need to be able to explain not only the decisions made by the AI, but also the history of the project: the data’s full path from collection to outcome.

• Fairness: proper monitoring and safeguards help mitigate bias and drift, which leads to fairer results. The AI outcomes themselves need to be fair, and the people building the AI models must ensure they are not building human bias into the algorithms.

• Robustness: trustworthy AI at scale leaves you better prepared to keep your systems healthy and guard against potential threats. Robust data is essential: it can live in multiple settings, its provenance is explicit, and it is less susceptible to interference.

• Transparency: sharing information with stakeholders in varying roles helps deepen trust. Transparency involves knowing who owns an AI model, why it was built in the first place, and who is accountable for each step.

• Privacy: AI systems need to safeguard data through the entire AI lifecycle, from training to production and governance.2 In addition to being secure from outside threats or interference, the data used for an AI model must be anonymized to keep the entire lifecycle ethical and compliant with regulations.


Building trust in AI will require a significant effort to instill in it a sense of morality, operate in full transparency and provide education about the opportunities it will create for business and consumers.3

“Machines get biased because the training data they’re fed may not be fully representative of what you’re trying to teach them.”4
Guru Banavar, IBM Chief Science Officer for Cognitive Computing

But a strategy built on trust must continue to evolve throughout the AI lifecycle. Model creation, unbiased training and deployment are only the start. Once a business has established trust, that trust must be maintained, refined and deepened as its dependency on AI grows. With a solid AI lifecycle management strategy, you have line of sight into each step of the AI process and can rely on verifiable touchpoints that continue to reflect the overall goal. This ensures greater transparency and a better understanding of outcomes, providing accurate, trustworthy, AI-driven decisions.

1. AI Ethics, IBM, July 2021.
2. Trustworthy AI, IBM, June 2021.
3. Building Trust in AI, IBM, October 2016.
4. Trusted AI, IBM, 2017.

Chapter 3

Why avoiding bias is critical to AI success

As AI becomes more integral to human-run businesses, those who train and manage AI algorithms need to consider how human biases could influence that training and cause unintended discriminatory outcomes. If the person building an AI model isn’t aware that they are building human bias into the algorithms, it can lead to bias in the AI outcomes produced and cause deeper inequities. It can be difficult to tell how widespread these biases are in the technology we use in our everyday lives, however. While mitigating bias in AI models is clearly a challenge, it’s essential that businesses do so to reduce the likelihood of negative results.1

Machine learning models are increasingly being used to inform high-stakes decisions. But unless the training that sets this process in motion is carefully monitored for bias, the AI algorithm could unintentionally place certain groups at a systemic advantage or disadvantage.2 Bias in training data itself can yield models with unwanted bias. Incomplete or inaccurate data sets (over- or under-sampling within groups, for example) can lead to model bias, as can a failure to account for nuances based on cultural, racial or gender considerations.

“AI can be used for social good. But it can also be used for other types of social impact in which one man’s good is another man’s evil. We must remain aware of that.”3
James Hendler, Director of the Institute for Data Exploration and Applications, Rensselaer Polytechnic Institute

So how can businesses ensure that they’re not building human bias directly into AI algorithms? Organizations need to have bias mitigation processes in place that allow for ongoing review and oversight of their AI systems. They need to continuously monitor and manage their models based on data.

Five ways to avoid bias

1. Choose the correct learning model. There are two types of learning models: supervised and unsupervised. In a supervised model, the training data is controlled by stakeholders. It’s critical that this group of people is equitably formed and has received unconscious-bias training. An unsupervised model depends fully on the AI itself to detect bias trends. Bias prevention techniques need to be factored into the neural network so that it learns to distinguish between what is biased and what is not.

2. Use the right training data set. Machine learning is only as good as the data that trains it. Whatever data you feed into your AI must be comprehensive, balanced and representative of the actual demographics of society (see the balance-check sketch after this list).

3. Perform data processing mindfully. Businesses need to be aware of bias at each step when processing data. Whether during pre-processing, in-processing or post-processing, bias can creep in at any point and be fed into the AI. Any data that could introduce bias needs to be excluded. It’s also important to ensure there is no human bias when it comes to interpreting the data outputs created by AI.


4. Monitor real-world performance across the AI lifecycle. When it comes to monitoring AI, it’s key not to consider any model “trained” or “finished.” Ongoing monitoring and testing with real-world data can help detect and correct bias before it creates a negative situation.

5. Avoid infrastructural issues. Aside from human and data influences, the infrastructure itself can introduce inaccuracies that lead to bias. For example, if you’re collecting data from mechanical sensors, malfunctioning sensors could introduce bias into the data. This kind of bias can be difficult to detect and requires investment in up-to-date digital and technological infrastructure.4
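To make step 2 concrete, here is a minimal sketch of a demographic balance check, assuming a pandas DataFrame; the column name, groups and tolerance threshold are illustrative values, not from this document:

```python
# A minimal sketch of the balance check suggested in step 2: compare the
# demographic mix of a training set against reference proportions and flag
# groups that are over- or under-represented. Column and group names here
# are illustrative, not from any specific dataset.
import pandas as pd

def check_group_balance(df: pd.DataFrame, column: str,
                        reference: dict, tolerance: float = 0.05) -> dict:
    """Return groups whose share in df[column] deviates from `reference`
    (expected proportions summing to 1) by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    flags = {}
    for group, expected in reference.items():
        share = float(observed.get(group, 0.0))
        if abs(share - expected) > tolerance:
            flags[group] = {"observed": round(share, 3), "expected": expected}
    return flags

# Hypothetical usage: reference proportions for a 'gender' column.
train = pd.DataFrame({"gender": ["F", "M", "M", "M", "M", "F", "M", "M"]})
print(check_group_balance(train, "gender", {"F": 0.5, "M": 0.5}))
# {'F': {'observed': 0.25, 'expected': 0.5}, 'M': {'observed': 0.75, 'expected': 0.5}}
```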

Progress is being made on the AI research front. One method researchers are exploring to address inherent bias is inverse reinforcement learning, in which the AI observes human behavior in various situations in order to learn what people value. This teaches the system to make decisions consistent with fundamental ethical principles. From a tactical, business-level perspective, even something as simple as employing a diverse staff of programmers can make a huge difference in mitigating training bias. There is always room for improvement.

It’s also critical that businesses create, implement, and operationalize AI ethics principles, and ensure proper governance is in place. This end-to-end visibility across the AI lifecycle will help identify when bias could occur and allow for course-correction.

An open-source library like the AI Fairness 360 toolkit is a helpful resource for detecting and mitigating bias in machine learning models. It provides a comprehensive set of metrics for testing datasets and models for bias, explanations for those metrics, and algorithms that can mitigate bias in datasets and models.
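As one hedged illustration of how the toolkit can be used, the sketch below computes a single fairness metric (disparate impact) on a toy dataset and applies the toolkit’s Reweighing pre-processing algorithm; the toy data and the choice of “sex” as the protected attribute are assumptions made for the example:

```python
# A minimal sketch using the open-source AI Fairness 360 (aif360) toolkit
# mentioned above. The toy data and the protected attribute are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy labeled data: label 1 = favorable outcome; sex: 1 = privileged group.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [7, 5, 8, 6, 7, 5, 8, 6],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
# Disparate impact: ratio of favorable-outcome rates; 1.0 means parity.
print("disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights so group/label combinations balance out.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
reweighted = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(reweighted, unprivileged_groups=unpriv,
                                        privileged_groups=priv)
print("disparate impact after:", metric_after.disparate_impact())
```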

1. Mitigating Bias in Artificial Intelligence, IBM, May 2021.
2. AI Fairness 360 - Resources, IBM Research Trusted AI, August 2021.
3. Building Trust in AI, IBM, October 2016.
4. How to Reduce Machine Learning Bias, Medium, April 2021.

Learn how the Notre Dame-IBM Technology Ethics Lab is examining real-world challenges and solutions

Learn how IBM has partnered with the Linux Foundation AI to advance trustworthy AI

See how IBM Research is spotting the “unknown unknowns” in AI


Chapter 4

How to avoid AI model drift with monitoring and management

AI models may start out strong and seem poised to produce valuable outputs, but without proper monitoring, even the most well-trained, unbiased AI model can “drift” from its original parameters once deployed, producing unwanted results. If an AI model’s training doesn’t align with incoming data, it can’t accurately interpret that data or use it to reliably predict outcomes. If drift isn’t detected and mitigated quickly, the model will only diverge further, increasing the negative business impact. To ensure accurate AI throughout the lifecycle, treat model drift as another key consideration in your overall strategy.

61% of businesses strategically scaling AI depend on large, accurate data sets1

There are a few best practices when it comes to monitoring drift. First, it’s important to understand that model monitoring can require specific tools and skill sets; having the right tools and data scientists in place is crucial. Next, you need to actively monitor all models that are in production from a central place. A centralized, holistic view can help break down silos and provide more transparency across the entire data lineage.

It’s also important to establish a consistent set of metrics for assessing the health of your AI models. Learning from what’s working and what isn’t allows your team to pivot and correct when needed. AI models also need to be monitored on an ongoing basis, not just viewed as one snapshot in time. A model’s health can change over time and cause greater drift in the future if those changes aren’t identified. Lastly, it’s helpful to automate as much of the monitoring process as possible to scale across your organization. Automation can provide consistent and reliable notifications and free your teams to focus on model development instead of monitoring.
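As one possible automated check (not prescribed by this document), the sketch below compares a feature’s production distribution against its training baseline with a two-sample Kolmogorov-Smirnov test from scipy; the significance threshold and synthetic data are illustrative assumptions:

```python
# A minimal drift-monitoring sketch: alert when a production feature's
# distribution differs significantly from its training-time baseline.
import numpy as np
from scipy import stats

def drift_alert(train_values: np.ndarray, live_values: np.ndarray,
                alpha: float = 0.05) -> bool:
    """True when the live distribution differs from training (KS test)."""
    statistic, p_value = stats.ks_2samp(train_values, live_values)
    print(f"KS statistic={statistic:.3f}, p-value={p_value:.4f}")
    return p_value < alpha

rng = np.random.default_rng(seed=0)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
drifted  = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted production feature

if drift_alert(baseline, drifted):
    print("Drift detected: trigger root-cause analysis and retraining review.")
```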

So, what if you have model drift? How can it be corrected?

Page 9: Building successful AI that’s grounded in trust and

9Building successful AI that’s grounded in trust and transparency

How to correct drift in three steps

1. Estimate the impact. If you can estimate the impact of model drift, you can better determine what to prioritize and how many resources the repair needs.

2. Analyze the root cause of the drift. This is where a time-based analysis helps you see how drift numbers evolved and when. For example, if you run daily checks, you can see how drift evolved day by day. Analyzing timelines also helps determine whether the drift was gradual or abrupt.

3. Resolve the drift issue. This involves retraining your model on a new training dataset with more recent and relevant samples added to it (a minimal retraining sketch follows these steps). Ultimately, the goal is to get your models back into production quickly and correctly. If retraining the model doesn’t resolve the issue, a new model may need to be built entirely.2
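Here is a minimal sketch of the retraining idea in step 3, assuming a scikit-learn model and labeled recent production data; the model choice and up-weighting factor are illustrative assumptions, not from the text:

```python
# A minimal sketch of step 3: rebuild the training set with recent samples
# up-weighted so the retrained model tracks the newer data distribution.
import numpy as np
from sklearn.linear_model import LogisticRegression

def retrain_on_recent(X_old, y_old, X_recent, y_recent, recent_weight=3.0):
    """Retrain with recent samples up-weighted to counter drift."""
    X = np.vstack([X_old, X_recent])
    y = np.concatenate([y_old, y_recent])
    weights = np.concatenate([np.ones(len(y_old)),
                              np.full(len(y_recent), recent_weight)])
    model = LogisticRegression(max_iter=1000)
    model.fit(X, y, sample_weight=weights)
    return model

# Hypothetical usage with synthetic data.
rng = np.random.default_rng(1)
X_old, y_old = rng.normal(size=(500, 4)), rng.integers(0, 2, 500)
X_new, y_new = rng.normal(0.3, 1.0, size=(100, 4)), rng.integers(0, 2, 100)
model = retrain_on_recent(X_old, y_old, X_new, y_new)
```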

An integrated approach can help your business track metrics continually and alert you to drift in accuracy and data consistency. You can also set targets and track them through development, validation and deployment.

An integrated data and AI platform also simplifies the steps it takes to identify business metrics that are affected by model drift, so you can minimize the impact of model degradation by automating drift monitoring.3

1. The State of AI in 2020, McKinsey & Company, November 2020.
2. Model Monitoring Best Practices: Maintaining Data Science at Scale, Domino, August 2020.
3. Model Drift, IBM, June 2021.

Watch: Learn how to detect and correct model drift

Chapter 5

AI models lose value if their results can’t be explained

AI models are gaining widespread adoption and demonstrating impressive accuracy across various industries. However, it’s not enough to simply demonstrate a model’s accuracy; it must also be explainable and provable. Unfortunately, even the most well-trained AI models, free of bias and drift, often are not easily understood by the people who interact with them and are affected by them.

91% of organizations say their ability to explain how their AI made a decision is critical.1

Explainability is crucial because it provides insight into the AI’s decision-making process. If you don’t understand how an AI model arrived at a result, you can’t fully trust the model itself. Not only that, but from a regulatory standpoint, the inability to explain how the model draws its conclusions means those conclusions aren’t likely to be compliant.

But what does this all mean for your business? As with a medical diagnosis or loan application, many variables contribute to a conclusion, and all data points must be considered to generate a trustworthy result. Without explainable AI, you can’t be confident that the models being put into production are reliable. Explainability helps organizations develop and use AI responsibly and reliably.

68% of business leaders believe that customers will demand more explainability from AI in the next three years.2


As AI continues to become more complex, understanding and retracing how an algorithm arrived at a result becomes increasingly difficult. The whole AI calculation process is often considered a “black box” that is hard to interpret. This makes it even more crucial for a business to monitor and manage its models so it can understand and measure the impact of the algorithms it uses.
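One common model-agnostic way to peek inside such a black box is permutation importance, which measures how much shuffling each input feature degrades a model’s score. The sketch below uses scikit-learn’s built-in utility on synthetic data; the document’s sources point to fuller toolkits such as AI Explainability 360:

```python
# A minimal explainability sketch: permutation importance on a held-out set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, n_informative=2,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in score:
# larger drops mean the model relies more on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature_{i}: importance={mean_drop:.3f}")
```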

When it comes to explainability, there are two components: (1) a business needs to be able to explain the decision made by the AI and (2) the business needs to be able to explain the history of the project. What was the data’s path? What was the original intent of the AI model?

All these things need to be explained to provide transparency across the entire AI model. To have ethical AI that isn’t causing inequalities, companies need to understand who is running their AI systems, what data was used, and what went into their algorithms’ recommendations. To understand a model’s behavior, you need visibility into what the model is actually doing at each step.

53% of AI high performers track AI-model performance and explanations to ensure that outcomes and/or models improve over time.3

Explainability can also help developers ensure that a system is working as expected and complying with regulatory requirements. GDPR standards have had a profound effect on how customer information is gathered and used, with stiff penalties for businesses found to be in violation. And new regulations are coming.

Without explainability, AI simply can’t be implemented responsibly at scale. Businesses need to embed ethics principles into their AI applications and processes by building an AI system that is grounded in transparency.4

1. Trustworthy AI, IBM, June 2021.
2. Introducing AI Explainability 360, IBM, August 2019.
3. The State of AI in 2020, McKinsey & Company, November 2020.
4. Explainable AI, IBM, March 2021.

Chapter 6

Get started with AI to yield better outcomes

AI can be a valuable tool to augment human decision-making and accelerate digital transformation across the enterprise. The AI lifecycle itself includes a variety of roles, performed by people with different areas of expertise, that collectively produce an AI service. Each role, each person and each data set contributes in a unique way.

But it’s critical to mitigate bias and drift and ensure explainability when building trustworthy AI. If you can’t trust your AI outputs, they are unreliable and potentially harmful to your business and customers.

Take the next step

Have questions or want to talk about your specific needs? Schedule a free one-on-one consultation with an IBM AI expert. They can help you build your strategy so you can deploy AI that’s fair, explainable, and transparent.

Resources and solutions

Want to learn more about AI lifecycle management and dive deeper into your AI journey? Here are a few resources and solutions that can help.

• IBM Cloud Pak for Data can help you improve your data usage to enable better decision making. It’s a unified platform that can help you connect to and access siloed data, regardless of where it lives. When you have access to all your data, you can properly curate it to deliver better insights and explain your AI outcomes.

• IBM Watson Studio can help you eliminate barriers to build and scale successful AI across any cloud by giving you everything you need for the entire AI lifecycle. It empowers data scientists, developers and analysts to build, run and manage AI models while optimizing decisions. When combined with an IBM Data Science for ModelOps approach, you can accelerate end-to-end model development, monitor for fairness and drift, unify data and tools, and track your model performance.

• AI FactSheets 360 is designed to help build trust in AI by increasing transparency and enabling governance. A fact sheet is a collection of relevant information (or facts) about the creation and deployment of an AI model, which provides a rich history of its construction (a toy example follows this list).

• IBM OpenPages with Watson is a highly scalable governance, risk and compliance solution. It’s AI driven and runs on any cloud, giving you a single environment that can help identify, manage, monitor and report on risk and regulatory compliance.
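As a toy illustration of the fact-sheet idea, not the actual AI FactSheets 360 schema, the snippet below records hypothetical facts about a model’s creation and deployment in a structured, reviewable form:

```python
# A toy fact sheet: every field and value here is an illustrative assumption.
import json
from datetime import date

fact_sheet = {
    "model_name": "loan_approval_classifier",         # hypothetical model
    "intended_use": "Rank consumer loan applications for human review",
    "training_data": {
        "source": "internal applications, 2018-2020", # data provenance
        "protected_attributes": ["sex", "age"],
        "known_limitations": "under-represents applicants under 25",
    },
    "evaluation": {
        "accuracy": 0.87,
        "disparate_impact": 0.92,                     # fairness check result
    },
    "owner": "credit-risk data science team",
    "deployed": str(date(2021, 10, 1)),
}
print(json.dumps(fact_sheet, indent=2))
```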



© Copyright IBM Corporation 2021.

U.S. Government Users Restricted Rights—Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp. NOTE: IBM web pages might contain other proprietary notices and copyright information that should be observed.

IBM, IBM Cloud Pak, the IBM logo and ibm.com are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at “Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.