At the recent Amsterdam leg of its Ignite tour, Microsoft made a variety of announcements. The most interesting (to this analyst, anyway) concerned ethics in developing AI systems, and how Microsoft 365, a solution that bundles Office 365, Windows 10, and Enterprise Mobility + Security, continues to evolve to meet the needs of a modern workforce.
Microsoft has defined a set of principles intended to guide the responsible development of AI systems
A lot of time in the analyst sessions was dedicated to discussions around artificial intelligence (AI). One of the more interesting sessions focused on AI governance and ethics. Here, Microsoft shared how seriously it is taking its responsibilities in relation to AI technology. In particular, the company shared six principles that guide how it intends to mitigate some of the dangers that AI technologies may present and encourage responsible development of AI systems:
- Fairness. All systems should be free of bias and treat all people fairly. Imagine a scenario where AI is used to speed up and deliver inputs into insurance claims, for example. Training and building any AI system on unbiased data will be very important, and this principle is crucial in building confidence in any AI system.
- Reliability and safety. This principle relates to the expectation that AI systems should be developed to be consistently reliable and safe.
- Privacy and security. This principle relates to the need to ensure AI systems are secure and that people’s privacy is protected. This is especially important given the data that people may be expected to contribute to making any system better. If the outcomes are compelling, meaningful, and of value (improving healthcare outcomes, for example), people will likely be more confident in providing their data, but only if security is assured.
- Inclusiveness. Systems should be developed to empower and engage many people, not solely to benefit a small group.
- Transparency. It is important that people understand how AI systems make decisions and reach conclusions, so contextual explanations need to be provided. This is a complex activity, not least because what is deemed an appropriate and adequate level of transparency may vary. This context is essential, however: even if a highly accurate (90%+) model has been developed, being unable to explain the model and how its outputs were reached will limit, perhaps completely, how the system can be used in operation.
- Accountability. People must be accountable for how their systems operate. This is important in driving confidence in AI.
These principles certainly demonstrate that Microsoft is attentive to its responsibilities around AI. They provide a framework that can guide the ethical use and development of AI technologies, but the model’s effectiveness and relevance will be better judged over the longer term, based on evidence of how consistently these principles are applied, especially across different regions.