Who’s watching the watchrobots?
The Covid-19 pandemic has made it clear how dependent humanity is on artificial intelligence and algorithms. But the great benefits of this technology come with even greater risks. It is increasingly implicated in biased decisions, unintended consequences, and human rights abuses. The more dependent the world becomes on AI, the harder it will be to recognize the threats ahead of time and prepare for them.
What we really need is to take responsibility for AI, and create a code of practice by which we can hold ourselves accountable. This code is not just for the major investors and corporate leaders, but for the many people working as software developers and project managers, along with the policy makers who support their work. This group includes a lot of people who wake up at 3 am to struggle with the question of how their work might be used and potentially abused.
There are a lot of codes of practice for responsible AI out there. We've counted at least 15 issued by reputable industry, professional, and academic groups. In our forthcoming book about responsible AI for Penguin/Berrett-Koehler (2023), we distill them into seven principles.
Traditionally, the point of algorithmic solutions has been to make the world more seamless and immediately responsive. With AI, however, the only way to make significant decisions on behalf of humanity will be to slow down the process: to add friction, giving innovators time to work out the necessary safeguards.
The principles are:
1. Do No Harm. Should there be a distinction between AI and algorithms that can harm people and society, versus those that only affect other algorithms? Should they have different constraints? How can we prevent threats like identity theft, physical harm, denial of credit, and other abuses against people or the natural environment from becoming everyday occurrences?
2. Create Code that Speaks for Itself. How important is it that the engineers and coders working on a project understand its purpose and how the technology will be used? How can we balance the need for data and algorithmic transparency against the legitimate need for security and competitive advantage?
3. Reclaim Data Rights for People. Should people have the right to control data about themselves? Should organizations be allowed to monetize personal data without also paying the people who supply that data? Is there a middle ground between corporate investment and individual data rights? Should anybody, any organization, or any government have the right to own data about people?
4. Use Machine Learning for Human Understanding. No matter how large or small the data sets, does correlation ever represent causation? Can we trust algorithms that identify correlations but never go beyond that to explain why those correlations exist? Who should judge the output, and on whose behalf?
5. Question and Confront Bias. Can we stop ourselves from passing on our prejudices and insecurities to our children? If we can’t, then how can we avoid doing the same with our software?
6. Hold All Stakeholders Accountable. Who should be held accountable for AI and algorithmic abuses? What about investors, owners, leaders, and employees? What training do leaders, software engineers, and AI developers need to recognize the possible negative consequences of their work?
7. Embrace Creative Friction. Traditionally, the whole point of algorithmic solutions is to eliminate friction: to create seamless, quick interactions that require as little attention as possible, so that human attention can move to weightier and more significant things. But what if making significant decisions requires taking the measure of complex interactions? Then you have to slow down and take stock.
“It’s hard to imagine [responsible AI] happening through self-regulation,” says internet law scholar Brett Frischmann. “It’s hard to imagine the industry leaders saying, ‘We need to inject friction into the systems which we’re trying to make seamless.’ But teams that deliberately incorporate friction into their deliberations seem to make better decisions – and the same is true for AI.”
We’ve culled these principles from the various groups releasing AI ethics standards or statements. No principles will, in themselves, keep AI from getting out of control. But we all need principles for AI, in the same way that medicine needs the Hippocratic Oath: to keep a sense of perspective, so that professionals recognize the consequences of their work with AI.