In the scramble to embrace technologies such as AI and data analytics, businesses may find themselves on the wrong side of the law. How? Let’s investigate.
With any new technology or innovation come new risks – risks that can inadvertently lead to compliance issues that could otherwise be avoided.
It’s a scenario that has already become a reality. In October 2018, tech giant Amazon decided to scrap its internal Artificial Intelligence (AI) recruiting tool when it was found to discriminate against female candidates.
Hot on the heels of this case, MAS released a set of principles to promote Fairness, Ethics, Accountability and Transparency (FEAT) in the use of Artificial Intelligence and Data Analytics (AIDA) on 12 November 2018.
What do these principles mean for us? Simply put, even as we race to reap the benefits of innovation, we must continue to use technologies such as AI and data analytics responsibly – or run the risk of landing on the wrong side of compliance.
The question is, how can we be sure that we stay out of trouble? Here’s a closer look at what MAS’s principles require of businesses:
The case of Amazon’s recruitment model shows why AI programmes must not learn to model our prejudices. Such bias is often the result of training a model on data that captures inherent stereotypes. For example, if loan assessors were prejudiced against males, they might perceive them to be of higher risk and consistently offer them higher interest rates. If this data were then used to train a loan application model, the model would learn to do likewise.
One way to prevent prejudice is to exclude traits such as gender and ethnicity from AI training data. This, however, requires a constant balancing act between building a highly accurate model and a fair one.
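A minimal sketch of this idea is below. It assumes loan-application records held as dictionaries, with illustrative field names such as "gender" and "ethnicity" standing in for whatever protected attributes a real dataset would contain:

```python
# Sketch: strip protected attributes from records before they are used
# to train a model. Field names here are illustrative assumptions.

PROTECTED_ATTRIBUTES = {"gender", "ethnicity"}

def strip_protected(records):
    """Return copies of the records without protected attributes."""
    return [
        {key: value for key, value in record.items()
         if key not in PROTECTED_ATTRIBUTES}
        for record in records
    ]

applications = [
    {"income": 85_000, "loan_amount": 20_000, "gender": "M", "ethnicity": "A"},
    {"income": 52_000, "loan_amount": 15_000, "gender": "F", "ethnicity": "B"},
]

training_data = strip_protected(applications)
# Each training record now contains only "income" and "loan_amount".
```

Note that dropping the columns is only a first step: other fields (such as occupation or postcode) can act as proxies for the excluded traits, which is why fairness still has to be tested on the model’s outputs.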
Machine learning models have proven to be very powerful. It is critical that this power is not abused for unethical purposes, such as using models to identify and target clients who are less informed and are more likely to buy products whether these are right for them or not.
If a self-driving car knocks down a person, who is accountable – the person who bought the car or the company which made the car? The principles set out by MAS make it clear that the organisation that deploys the model in question – not the creator, nor the user – is responsible and accountable for the decisions that result from the use of the model.
Transparency is required at multiple levels and on multiple fronts. One is transparency in the use of AI to users, clients and relevant third parties; another is the transparency of the model and its impact on processes and outcomes. This is possible with a model validation process and framework to ensure that assumptions and outputs are documented and the model is independently tested.
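One way to picture such a validation framework is a simple record that documents a model’s assumptions alongside the outcomes of independent checks. The class and check names below are hypothetical illustrations, not part of any MAS-prescribed framework:

```python
# Sketch: a validation record that documents model assumptions and the
# results of independent checks. All names here are illustrative.

from dataclasses import dataclass, field

@dataclass
class ValidationReport:
    model_name: str
    assumptions: list = field(default_factory=list)
    checks: dict = field(default_factory=dict)

    def document_assumption(self, text: str) -> None:
        """Record an assumption so reviewers can challenge it later."""
        self.assumptions.append(text)

    def record_check(self, name: str, passed: bool) -> None:
        """Record the outcome of an independent test of the model."""
        self.checks[name] = passed

    def all_passed(self) -> bool:
        return bool(self.checks) and all(self.checks.values())

report = ValidationReport("loan_approval_model")
report.document_assumption("Training data covers applications from 2016-2018 only.")
report.record_check("holdout_accuracy_above_threshold", True)
report.record_check("approval_rate_parity_across_groups", True)
```

The point of the sketch is the discipline, not the code: assumptions and outputs are written down at validation time, so an independent reviewer can later see what was tested and on what basis the model was approved.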
Given the seemingly endless potential of AI and data analytics to add business value across every sector, it is easy to overlook the potential impact of misuse on stakeholders, the community and even brand reputation. MAS’s principles act as a guide on how we can steer clear of compliance issues even as we take our companies forward with AI and data analytics.