The Complete Introduction to AI Ethics
What is AI Ethics?
AI ethics refers to the ethical issues surrounding the use of artificial intelligence (AI) technology.
As the use of AI systems becomes more prevalent across the globe, governments, industry groups and AI-focused executives are grappling with the question of how to ensure the technology is used in an ethical way.
Key questions in the field of AI ethics include:
- What applications for AI systems are ethical for a given organization?
- How can companies ensure AI systems are built to operate in a way that’s fair and unbiased?
- What processes do companies need in place to ensure AI systems continue to function ethically over time?
AI Ethics Definition
The UK’s Alan Turing Institute defines AI ethics as a set of values, principles and techniques that employ widely accepted standards of ‘right’ and ‘wrong’ to guide the development and use of AI technologies.
In practice, this means ensuring that organizations using AI have the right AI ethics policy and governance practices in place to ensure the technology is used for good and does not harm people unintentionally.
Potential ‘harms’ that AI systems may cause include:
- Invading people’s right to privacy by processing data or revealing personal information without an individual’s consent
- Making biased or unfair decisions or recommendations about certain populations or demographics
- Making decisions in a way that can’t be explained in plain language, so it’s unclear if their conclusions are fair and unbiased
- Making decisions that are unreliable or deliver poor quality outcomes due to model implementation issues
- Denying people their right to accountability for the decisions AI systems make about them
We discuss these challenges around the ethics of AI in this episode of The Business of Data Show.
Is AI Ethical?
As we outline in A Complete Introduction to AI and Machine Learning, the fact that there are many different types of AI is a complicating factor when considering the ethical implications of AI. Similarly, a single type of AI technology, such as natural language processing (NLP), can be applied in many different business contexts.
This means the morality of different AI systems must be reviewed on a case-by-case basis. AI technology itself is neither ethical nor unethical. Rather, enterprises must establish principles or frameworks to ensure that they use AI systems ethically and responsibly and guard against AI misuse.
Yet, our research shows that there’s no consensus about what ethical responsibilities enterprises have for different applications for AI technology. Different AI-focused executives can look at the same use case for AI and draw different conclusions about what their moral responsibilities are.
AI Ethics Issues and Considerations
There are many ethical dilemmas associated with AI use. These range from deciding whose lives autonomous vehicles should prioritize saving in a multi-person crash situation to ensuring that credit scoring AIs don’t discriminate against people unfairly based on factors such as gender.
To discover how enterprises are tackling the moral issues surrounding AI implementation, we invited six of the world’s leading female AI-focused executives to join us for this virtual roundtable on the topic.
Social Challenges of Artificial Intelligence
The social challenges around AI center on the technology’s potential to create or preserve inequality or unfairness. For example, without proper care, an AI model designed to make credit decisions may make biased decisions that unfairly favor certain ethnic groups or genders over others.
On the other hand, many AI leaders believe the technology can be used as a tool for good, to remove hidden or unconscious biases that may exist in the manual business processes of the past.
Many believe the ethical responsibilities a company has vary depending on the potential of the AI system in question to cause harm.
For example, Cortnie Abercrombie, Founder and CEO of non-profit organization AI Truth, says companies need to create frameworks that quantify an AI system’s level of risk based on its intended purpose.
Balancing the potential risks of implementing an AI system against the potential benefits to the business should help executives decide when extra governance and resources need to be applied.
“When I think of ‘high impact’, I think about life-or-death situations,” she says. “In my mind, the things that are the riskiest are self-driving cars, autonomous weapons [and] health diagnostic tools.”
She adds: “The very first thing I tell groups that are dealing with those is, you need to have a full vetting process before you release those to the wild.”
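The kind of risk-quantifying framework Abercrombie describes could be sketched in code. Everything below is an illustrative assumption for the sake of the example; the factors, weights and tier thresholds are not drawn from any published standard:

```python
# Illustrative sketch of a purpose-based AI risk-tiering framework.
# Factors, weights and thresholds are hypothetical assumptions.

RISK_FACTORS = {
    "affects_physical_safety": 5,      # e.g. self-driving cars, autonomous weapons
    "affects_health_outcomes": 5,      # e.g. diagnostic tools
    "affects_finances_or_rights": 3,   # e.g. credit scoring, hiring
    "fully_automated_decision": 2,     # no human review before action
    "uses_sensitive_personal_data": 1,
}

def risk_score(system_properties):
    """Sum the weights of every risk factor the AI system exhibits."""
    return sum(
        weight for factor, weight in RISK_FACTORS.items()
        if system_properties.get(factor, False)
    )

def risk_tier(score):
    """Map a numeric risk score to a governance tier."""
    if score >= 5:
        return "high"    # full vetting process before release
    if score >= 3:
        return "medium"  # extra governance and periodic review
    return "low"         # standard development controls

# Example: a fully automated health diagnostic tool
diagnostic_tool = {
    "affects_health_outcomes": True,
    "fully_automated_decision": True,
    "uses_sensitive_personal_data": True,
}
print(risk_tier(risk_score(diagnostic_tool)))  # prints "high" (score = 8)
```

The point of such a sketch is the shape of the process, not the specific numbers: scoring a system by its intended purpose gives executives a consistent trigger for applying the extra governance described above.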
For more about Abercrombie’s experiences of raising awareness of the social and ethical issues around AI, click here to view the interview we did with her for our 2020 Global Top 100 Innovators in Data and Analytics list.
How Ethical is the Use of AI in Business?
Our research shows that AI-focused executives believe all types of AI used in business today show at least some potential for misuse. But at present, businesses are largely left to decide for themselves how to ensure they use the technology ethically.
Today, a growing number of industry voices are suggesting that this ‘self-regulation’ approach may not work.
“Unless we have the legal requirement for companies to abide by the law and do certain things, it is highly unlikely that they will engage in [ethical] behavior,” says Ganna Pogrebna, Lead: Behavioral Data Science at The Alan Turing Institute, in this episode of our podcast.
“There’s a huge amount of attention and energy going into developing good principles for how to do AI ethically,” World Economic Forum Lead: AI and ML Mark Caine agrees in this podcast episode. “What we’re seeing is a bit of an implementation gap.”
Media coverage of AI controversies such as the Cambridge Analytica scandal or accusations of bias during the Apple Card launch suggest that many companies still lack the right systems and processes to ensure they use AI systems ethically.
Can Artificial Intelligence be Ethical?
The business community and general public alike are becoming increasingly aware of the ethical issues surrounding AI use.
As Scott Zoldi, Chief Analytics Officer at analytics company FICO, says in this episode of the Business of Data podcast, the buck for AI ethics stops with a company’s AI-focused executives.
“They have to define one standard within their organization,” he argues. “They need to make sure it aligns from a regulatory perspective. They need to align all their data scientists around a centralized management or standardization of how you do that.”
As they work to achieve this goal, AI-focused executives are focusing on three main areas to ensure the AI models in production in their organizations function ethically:
1. Data Ethics
Ensuring AI models function in a way that’s fair, unbiased and in customers’ best interests starts with ensuring the data that feeds into them is collected, governed and used ethically.
On the one hand, that means ensuring companies secure the proper consent from customers before using their data and handle it in a secure way that respects their privacy.
On the other, it means taking proactive steps to address data bias in those datasets and ensure the populations being analyzed are fairly represented in the data.
“We should assume that bias exists wherever we’re using historic datasets,” says Sathya Bala, Head of Global Data Governance at Chanel. “Unless you are consciously creating interventions to ensure that your data and algorithms are fair, we should assume bias is built in.”
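One proactive intervention of the kind described above is checking whether each population group’s share of a dataset matches its share of a reference population. This is a minimal sketch; the field names, reference shares and tolerance are illustrative assumptions:

```python
# Illustrative sketch: flag groups whose share of a dataset deviates from
# a reference population by more than a tolerance.

from collections import Counter

def representation_gaps(records, group_key, reference_shares, tolerance=0.05):
    """Return {group: observed_share - expected_share} for every group whose
    absolute deviation exceeds `tolerance`."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical training data: 70% male, 30% female records
data = [{"gender": "male"}] * 70 + [{"gender": "female"}] * 30
print(representation_gaps(data, "gender", {"male": 0.5, "female": 0.5}))
# {'male': 0.2, 'female': -0.2}
```

A check like this only surfaces under-representation; deciding how to correct it (reweighting, resampling, collecting more data) remains a governance decision.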
2. AI Model Bias
In addition to ensuring the data that feeds AI models is relatively unbiased, AI and machine learning ethics requires enterprises to build those models so that they make their decisions or recommendations fairly.
For Jordan Levine, MIT Lecturer and Partner at AI training provider Dynamic Ideas and Business of Data podcast guest, the reputational risks that come with irresponsible AI use stem primarily from undetected instances of bias. As such, he argues that enterprises need rigorous processes to detect and remove bias across the model lifecycle.
He explains: “The problem statement is, by building a model, do I then have subsets of my population that have a different accuracy rate than the broader, global population when they get fed into the model?”
Our research shows that enterprises are using a range of approaches to root out causes of AI bias during the model development process. However, it also suggests that few companies have a full suite of checks and balances in place.
3. AI Model Monitoring and Maintenance
A key difference between AI models and other types of software is that AI models are dynamic systems.
Some machine learning models adjust themselves in response to new inputs to improve their accuracy. Others may see their accuracy change over time due to changes in the data flowing into them, in a phenomenon known as ‘AI model drift’.
That means enterprises must also ensure AI models are effectively monitored and maintained so that they continue functioning as intended over time.
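One common way teams monitor for drift is to compare a feature’s live distribution in production against its training-time baseline, for example with the Population Stability Index (PSI). This sketch assumes pre-binned histograms, and the rule-of-thumb thresholds in the comment are industry conventions rather than part of any framework cited here:

```python
# Illustrative sketch: Population Stability Index (PSI) between a
# training-time feature histogram and the same feature in production.

import math

def psi(baseline_counts, live_counts):
    """PSI between two binned distributions (same bins, same order)."""
    total_b = sum(baseline_counts)
    total_l = sum(live_counts)
    score = 0.0
    for b, l in zip(baseline_counts, live_counts):
        pb = max(b / total_b, 1e-6)  # floor to avoid log(0)
        pl = max(l / total_l, 1e-6)
        score += (pl - pb) * math.log(pl / pb)
    return score

baseline = [100, 200, 400, 200, 100]  # feature histogram at training time
live = [50, 150, 350, 300, 150]       # same feature observed in production

score = psi(baseline, live)
# A common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift
print(round(score, 3))
```

A scheduled job computing a statistic like this per feature, with alerts on the thresholds, is the kind of continuous monitoring the following quote describes.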
“The most important thing in the long run is about continuously monitoring and taking care of the model and the input data,” says Minna Kärhä, former Data and Analytics Lead at Finnair.
The growing awareness of the need for this in the business community is reflected in the increased focus enterprises are giving to building ModelOps functions. These functions are focused on the governance and lifecycle management of decision models, such as AI models.
The Future of AI Ethics and Governance
People are only just starting to grapple with the moral issues associated with artificial intelligence. But most AI-focused executives are working to put the right AI ethics principles and practices in place.
When we surveyed 100 C-level AI-focused executives from across the globe for our report The State of Responsible AI, we discovered that 21% have made AI ethics a core element of their organizations’ business strategies. Another 30% say they will do so by 2022.
At the same time, regulators and industry bodies are establishing policy groups and committees to publish frameworks to help guide enterprise AI ethics initiatives. For example, the European Commission has appointed a group of experts to its High-Level Expert Group on Artificial Intelligence to provide advice on its AI strategy. Similarly, Australia’s New South Wales Government formed a committee in 2021 to advise on the appropriate use of AI.
AI and Data Ethics Frameworks
These new committees will publish AI ethics frameworks to provide guidance to inform enterprise AI strategies and shape future AI ethics regulation and policymaking.
But every organization is different. So, AI leaders must also define their own frameworks and standards for ethical AI for their own companies.
South Africa’s Standard Bank is one example of a company that has adopted an ethical framework to help ensure it uses AI ethically. Standard Bank Group Head of AI, Automation and APIs Nanda Padayachee shares his AI ethics case study in this episode of our podcast.
“[It] creates the boundaries for how we deploy any capability,” he explains. “We have tried to anchor this to almost an ‘AI manifesto’ within our organization that says, ‘The principles for how we want to use AI are anchored around these points.’”
“Because it’s adaptive, because it’s autonomous and because you’re constantly feeding it data, you need to be very deliberate on an ongoing basis,” Padayachee concludes.
AI Ethics Conferences
Topics from across the AI ethics spectrum have become a mainstay of technology conferences in recent years, from robotics and artificial intelligence to AI ethics policy and governance and beyond.
Click here to browse Corinium Intelligence’s calendar of upcoming conferences and events and discover when we’ll next be bringing analytics executives together to discuss AI ethics in your region.