Bernard Marr
AI can do incredible things, but just because something is possible doesn’t mean it’s right. There’s enormous potential for backlash against the misuse of AI, and policymakers and regulators will no doubt take an increasing interest in how it’s used. This means it’s vital that organizations pursue the ethical use of AI.
Here are four ways to do just that.
1. Build stakeholder trust
Organizations must be transparent with customers, employees, and other stakeholders about how they’re using AI and data. In the past, some big tech companies have perhaps tried to get away with not telling users what they’re doing, but this is a dangerous path to go down. It’s far better to be upfront about what data you’re gathering, how that data is analyzed, and why you are using this data. And that means telling people in a straightforward, plain English way, not burying the details in long, jargon-heavy terms and conditions that nobody reads. This transparency will be key to building stakeholder trust.
Consent is another important part of building trust: businesses must seek informed consent for gathering people’s data and, wherever possible, allow people to opt out. When doing this, it helps to demonstrate how AI and data add real value for stakeholders – for example, by helping the organization create better products, deliver a smarter service, solve customers’ problems, make work better for employees, and so on. People are far more likely to give consent when they know it will deliver real value for them.
2. Avoid the “black box problem”
From the satnav I follow in my car to the spellchecker that corrects my typos as I write, more and more decisions are being supported by AI. We all place a lot of trust in these systems, allowing them to direct our activities without really questioning how the technology arrives at its decisions.
And even when we do want to understand how an AI system makes a decision, it’s not always possible to get an explanation. This is known as the “black box problem”: we give the system data, it gives us a response, and we can’t simply look under the hood to see what happens in between. Even AI engineers don’t always fully understand how their own systems work, particularly with very advanced deep learning models. This matters because if we can’t understand how advanced AI algorithms are making decisions, how can we trust those systems? How can we be sure they are accurate? How can we predict when they’re likely to fail?
Therefore, organizations must press AI providers for details of how their AI does what it does and, wherever possible, look for AI tools that promote explainability. The good news is, AI providers appear to be grasping the gravity of this problem; for example, in 2019, IBM announced a new toolkit of algorithms called AI Explainability 360, designed to help explain the decisions of deep learning AIs. It’s not a magic bullet, but it’s certainly a good start.
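Even without access to a model’s internals, you can probe a black box from the outside by nudging each input and watching how the output moves. Here is a minimal sketch of that idea in Python; the `black_box_score` function is a hypothetical stand-in for an opaque vendor model, and the feature names are invented for illustration.

```python
def black_box_score(applicant):
    """Hypothetical stand-in for an opaque loan-scoring AI.
    In practice this would be a vendor API whose internals we cannot inspect."""
    return (0.5 * applicant["income"]
            + 0.3 * applicant["credit_history"]
            + 0.0 * applicant["zip_code"])

def sensitivity_probe(model, applicant, delta=1.0):
    """Estimate each feature's influence by perturbing it and measuring
    the change in output -- a crude, model-agnostic explainability check."""
    base = model(applicant)
    influence = {}
    for feature in applicant:
        perturbed = dict(applicant)
        perturbed[feature] += delta
        influence[feature] = model(perturbed) - base
    return influence

applicant = {"income": 40.0, "credit_history": 7.0, "zip_code": 90210.0}
print(sensitivity_probe(black_box_score, applicant))
# income and credit_history move the score; zip_code shows zero influence
```

A probe like this won’t reveal *why* the model weights things as it does, but it gives you concrete questions to put to an AI provider – for instance, why a feature you’d expect to be irrelevant is influencing decisions.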
3. Think critically (and don’t abdicate all decision making to AI)
Research has shown that humans have a worrying tendency to blindly follow automated systems, even when those systems are clearly leading us astray – a phenomenon known as “automation bias.” This means, as more decisions are being driven by AI, the need for humans to think critically about AI systems is more important than ever.
To ensure the ethical and safe use of AI, it’s really important to give people the tools they need to overcome automation bias. Organizations must, therefore, train their people to not blindly follow automated systems. Teams need to be educated about AI and encouraged to question AI decisions (what data is involved, and how decisions are made, etc.). Critical thinking should be prioritized. So, too, should data literacy – the more people understand about data and AI, the better able they are to ask questions about how systems work and what data is being used to support decisions.
4. Check for biases in your data and algorithms
One of the many advantages of AI is that it has the potential to reduce bias. When decisions are augmented or even automated by AI systems, we can remove some of the baggage that humans bring to the decision-making process.
That’s the idea, anyway. The reality is that an AI algorithm is only as good as the data it’s trained on. If it’s trained on biased data, then the AI system will be biased. Let’s say I train a basic AI to predict the next president of the United States, based only on historical data of past presidents. It’s highly likely to predict the next president will be a white man of advancing years! That’s because hefty race and gender biases are built into the training data. The consequences of not addressing biases in data can be serious – inaccurate decisions and loss of reputation and trust, to name just a few. Some consequences could be far graver; just imagine what would happen if patient treatment decisions were based on biased or incomplete datasets.
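The president example can be sketched in a few lines of Python. The “model” here is deliberately naive – it just predicts the most common class in its training history – but it makes the point that the bias in the data becomes the bias of the model.

```python
from collections import Counter

# Historical training data: (race, gender) of past US presidents,
# simplified for illustration.
past_presidents = [("white", "male")] * 44 + [("black", "male")] * 1

def majority_predictor(history):
    """A naive 'model' that predicts the most frequent class it was trained on."""
    return Counter(history).most_common(1)[0][0]

print(majority_predictor(past_presidents))
# prints ('white', 'male') -- the skew in the data is reproduced as a prediction
```

No amount of algorithmic sophistication fixes this on its own: a far more powerful model trained on the same skewed history would learn the same skew.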
Biased data is usually the result of an unintentional bias based on a lack of representation – meaning it’s probably an inherent systemic bias rather than any one individual’s prejudices rearing their head. The most obvious way to avoid these inherent biases is to look for under- or over-representation in the data and algorithms being used. Granted, it takes an expert eye to examine data and AI algorithms in any real depth, but that doesn’t let organizations off the hook. Instead, organizations must ask these questions of their AI providers, rather than blindly trusting that data and AI algorithms are unbiased. Where necessary, additional data may be needed to correct over- or under-representation in datasets.
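A first-pass representation check doesn’t require an expert eye: you can compare each group’s share of your dataset against a reference population and flag the gaps. The sketch below assumes a simple list of group labels and an illustrative 5% tolerance threshold – both are placeholders you would tune to your own data.

```python
from collections import Counter

def representation_gaps(samples, baseline, tolerance=0.05):
    """Flag groups whose share of the dataset differs from their reference
    population share by more than `tolerance` (an illustrative threshold)."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected_share in baseline.items():
        actual_share = counts.get(group, 0) / total
        if abs(actual_share - expected_share) > tolerance:
            gaps[group] = (actual_share, expected_share)
    return gaps

# Hypothetical example: gender labels in a training set
# compared against a 50/50 reference population.
training_labels = ["male"] * 80 + ["female"] * 20
print(representation_gaps(training_labels, {"male": 0.5, "female": 0.5}))
# prints {'male': (0.8, 0.5), 'female': (0.2, 0.5)} -- both groups flagged
```

A check like this won’t catch every form of bias, but it turns “is our data representative?” from a vague worry into a concrete question you can put to your data team or AI provider.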
AI is going to impact businesses of all shapes and sizes across all industries. Discover how to prepare your organization for an AI-driven world in my new book, The Intelligence Revolution: Transforming Your Business With AI.
Thank you for reading my post. Here at LinkedIn and at Forbes I regularly write about management and technology trends. To read my future posts simply join my network here or click ‘Follow’. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.
About Bernard Marr
Bernard Marr is a world-renowned futurist, influencer and thought leader in the field of business and technology. He is the author of 18 best-selling books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.