An Unexpected Route to Responsible AI
A “Bright Swan” scenario of how democratizing software engineering can lead to a less frightening future for machine learning and AI bias.
One of the great problems of our time is what Cathy O’Neil calls “weapons of math destruction.” This is the systematic misuse of machine learning, data analytics, and today’s rudimentary AI to harm human beings. Often, that harm takes the form of constraint and harassment. People are wrongly identified, denied credit, falsely accused. Bias is built into the programs, and because the programs are loosely documented, it is hard to see the causes.
There doesn’t seem to be an easy cure — a credible form of oversight. My colleague Juliette Powell and I are exploring this in a book called Who Watches the Watchrobots? All too often, the answer is: Nobody. Not effectively. Not in a way that would catch problems at the source, when the algorithm is being developed, and especially when it is combined with other algorithms. Nobody sees the full picture except a very few people, who keep the others from communicating. Which means that we are entering a world of constraint and peril, enabled by the algorithms that are intended to make life easy and productive.
There are scenarios for how to stop this. Companies might regulate themselves. Regulators might become proficient at stopping it. Or the coders might themselves figure out ways to prevent harm. Somehow, these scenarios don’t seem very convincing.
But what if there were a “bright swan” — an unexpected event that provided a solution — in the opening up of the profession of programming?
A growing number of people are being recruited as software engineers from previously unlikely environments. It turns out that a gifted coder from the backstreets of Lagos or Bogotá can craft algorithms as well as a computer science grad from Stanford. To be sure, there's a learning curve, but increasingly the skills can be delivered remotely, and large numbers of people from around the world are being recruited for software development, including AI work. They learn the skills online and deliver them without ever meeting their bosses in person. They also help recruit their gamer friends.
There comes a point that many coders reach: they see the potential for damage in the programs they are writing. It's hard to speak out, so many simply swallow their concerns, or they leave and start their own businesses. But these new software engineers, sitting in Africa or Latin America or South Asia, don't carry that history of resigned silence. They are grateful to the companies that hired them, but they have also begun to see the broader opportunities open to them, and especially the damage their software could do.
So they do a little reverse-engineering. They figure out ways for the software to reveal the connections and patterns to the world. Some of them become famous this way. Over time, it becomes a matter of course: if you are a software developer, you expect that the code will be documented, audited, and reported on in an open-source forum. It’s the algorithmic equivalent of the Panama Papers.