Your users want to trust your business.
They want to know that your use of data and algorithms is robust, respects their privacy, and doesn't discriminate against groups in their communities.
They want clear and focussed accountability.
They want to know that your governance ensures your processes continuously test the quality and safety of your machine learning.
Transparency is how you assure them.
Your customers and users want to trust you.
They want to know that you are using machine learning and AI safely and ethically. They want assurance that your AI isn't trained on biased data. They want to know that you have properly tested and understood the limits of your AI. They want transparency about your responsible use of personal data.
And they want to know who in your organisation is publicly accountable for your use of data, machine learning and AI.
Do you know and understand the risks to your business from using varied data, machine learning and AI techniques at the core of your products and services?
Do you know how your business would respond if your automated, AI-driven processes failed? Are you confident that an investigation would confirm you followed good practice?
Can you communicate to your board and shareholders an understandable and accurate assessment of the risks and opportunities?
Across the globe, governments and the public alike are demanding better regulation of AI and machine learning.
The UK government and the EU have developed initial frameworks for assessing the safe use of AI, and more robust regulation is expected to follow.