
The Ethics of Code: Developing AI for Business with five core principles

Building chatbots and AI that help our customers is the easy part; the wider questions raised by the rising tide of AI are broad and very topical. Because of this, we developed our AI within a set of guardrails: the core principles that we believe help ensure our products are safe and ethical, according to Kriti Sharma, VP of Bots and AI at Sage.

She said the ‘Ethics of Code’ are designed to protect the user and to ensure that tech giants such as Sage build AI that is safe, secure, fits the use case and, most importantly, is inclusive and reflects the diversity of the users it serves.

“As a leader in AI for business, we would like to call others to task – big businesses, small businesses and hackers alike – and ask them to bear these principles in mind when developing or deploying their own artificial intelligence,” she added.

1. AI should reflect the diversity of the users it serves

Both industry and community must develop effective mechanisms to filter bias as well as negative sentiment in the data that AI learns from – ensuring AI does not perpetuate stereotypes.
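As a rough illustration of what such a mechanism could involve, the sketch below screens training examples against simple bias and sentiment scores before they reach a model. The scoring functions, cue words and thresholds are hypothetical placeholders, not Sage's tooling; a production pipeline would rely on audited classifiers and human review rather than keyword checks.

```python
import re

BIAS_THRESHOLD = 0.7     # assumed cut-off for the bias score in [0, 1]
SENTIMENT_FLOOR = -0.5   # assumed floor for the sentiment score in [-1, 1]


def _words(text: str) -> list[str]:
    """Lower-case word tokens with punctuation stripped."""
    return re.findall(r"[a-z']+", text.lower())


def bias_score(text: str) -> float:
    """Placeholder bias estimate in [0, 1]: share of stereotyping cue words."""
    flagged = {"always", "never", "typical"}  # illustrative cue words only
    words = _words(text)
    return sum(w in flagged for w in words) / max(len(words), 1)


def sentiment_score(text: str) -> float:
    """Placeholder sentiment estimate in [-1, 1]."""
    negative = {"useless", "stupid", "hate"}  # illustrative cue words only
    return -1.0 if any(w in negative for w in _words(text)) else 0.0


def filter_training_data(examples: list[str]) -> list[str]:
    """Keep only examples that pass both the bias and the sentiment check."""
    return [
        text for text in examples
        if bias_score(text) < BIAS_THRESHOLD
        and sentiment_score(text) > SENTIMENT_FLOOR
    ]


if __name__ == "__main__":
    corpus = [
        "Invoices are due at the end of the month.",
        "Accountants are always useless with new technology.",
    ]
    print(filter_training_data(corpus))  # the negative second example is dropped
```

The point of the sketch is simply that filtering happens before learning: examples that carry stereotyping or hostile language never make it into the data the AI learns from.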

2. AI must be held to account – and so must users

Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility, and AI needs to be held accountable for its actions and decisions, just like humans. Technology should not be allowed to become too clever to be accountable. We don’t accept this kind of behaviour from other ‘expert’ professions, so why should technology be the exception?

3. Reward AI for ‘showing its workings’

Any AI system learning from bad examples could end up becoming socially inappropriate – we have to remember that most AI today has no cognition of what it is saying. Only broad listening and learning from diverse data sets will solve this. One approach is to build a reward mechanism into training: reinforcement learning measures should be based not just on what AI or robots do to achieve an outcome, but also on how well they align with human values in accomplishing that result.
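As a rough illustration of that idea, the sketch below shapes a reward so that it reflects both whether the outcome was achieved and how it was achieved. The alignment checks, penalty terms and weighting are hypothetical, chosen only to show the principle rather than any particular production reward design.

```python
ALIGNMENT_WEIGHT = 0.5  # assumed trade-off between outcome and conduct


def task_reward(outcome_achieved: bool) -> float:
    """Reward for reaching the goal state (e.g. an invoice correctly categorised)."""
    return 1.0 if outcome_achieved else 0.0


def alignment_penalty(used_disallowed_data: bool, response_was_rude: bool) -> float:
    """Penalty for how the goal was reached, not whether it was reached."""
    penalty = 0.0
    if used_disallowed_data:
        penalty += 1.0
    if response_was_rude:
        penalty += 1.0
    return penalty


def shaped_reward(outcome_achieved: bool,
                  used_disallowed_data: bool,
                  response_was_rude: bool) -> float:
    """Combine what the agent achieved with how it achieved it."""
    return task_reward(outcome_achieved) - ALIGNMENT_WEIGHT * alignment_penalty(
        used_disallowed_data, response_was_rude
    )


if __name__ == "__main__":
    # Goal reached, but by rude means: the shaped reward is lower than
    # a purely outcome-based reward would be.
    print(shaped_reward(True, False, True))   # 0.5
    print(shaped_reward(True, False, False))  # 1.0
```

Penalising conduct alongside rewarding outcomes is one simple way to make ‘showing your workings’ pay off during training.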

4. AI should level the playing field

Voice technology and social robots provide newly accessible solutions, specifically to people disadvantaged by sight problems, dyslexia and limited mobility. The business technology community needs to accelerate the development of new technologies to level the playing field and broaden the available talent pool.

5. AI will replace, but it must also create

There will be new opportunities created by the robotification of tasks, and we need to train humans for these prospects. If business and AI work together it will enable people to focus on what they are good at – building relationships and caring for customers.


Caption: Kriti Sharma is VP of Bots and Artificial Intelligence at Sage and a trailblazer for smart machines that work and react like humans, helping to make business admin invisible. Kriti is the creator of Pegg, the world’s first accounting chatbot, launched to market in 2017 and now boasting users in 135 countries.


 
