What is AI, again?
This year, many of the fears and hopes associated with artificial intelligence are coming to pass. It’s easy to overstate the fears, and just as easy to underestimate the risks. Let’s clarify both by remembering exactly what this technology is, and what it is not.
The term “artificial intelligence” is misleading, because it implies similarity to human intelligence. Automated software processes are not intelligent.
Engineers prefer the term “machine learning.”
A more precise phrase still is “Triple-A systems”: algorithmic, automated, autonomous systems.
Triple-A systems are algorithmic. They operate with algorithms, mathematical instructions for calculating answers to problems. This gives them an underlying computational identity, no matter how complex they are.
They are automated. They can perform tasks without direct human control.
They are autonomous. Even in complex situations, they don’t need a human to supervise them. They can change their own instructions, adapting based on experience and data. That’s how they train themselves.
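To make those three properties concrete, consider a minimal sketch in Python. It is an illustration, not any real product: the OnlineLearner class and its data stream are invented for this example. The update rule is algorithmic; the loop runs without direct human control, so it is automated; and the system revises its own parameters from incoming data, a small-scale version of autonomy.

```python
import random

class OnlineLearner:
    """A toy Triple-A system: a linear model that updates itself
    by stochastic gradient descent as data arrives."""

    def __init__(self, learning_rate=0.05):
        self.weight = 0.0  # internal parameters: the system's
        self.bias = 0.0    # changeable "instructions"
        self.lr = learning_rate

    def predict(self, x):
        # Algorithmic: a fixed mathematical rule maps input to output.
        return self.weight * x + self.bias

    def learn(self, x, y):
        # Autonomous: one gradient step revises the system's own
        # parameters based on the error it just made.
        error = self.predict(x) - y
        self.weight -= self.lr * error * x
        self.bias -= self.lr * error

def data_stream(n=1000):
    # Stand-in for real-world experience; the hidden truth is y = 2x + 1.
    for _ in range(n):
        x = random.uniform(-1, 1)
        yield x, 2 * x + 1 + random.gauss(0, 0.1)

learner = OnlineLearner()
for x, y in data_stream():  # Automated: no human in the loop.
    learner.learn(x, y)

print(f"weight ~ {learner.weight:.2f}, bias ~ {learner.bias:.2f}")
# Typically prints values near 2.00 and 1.00.
```

Real Triple-A systems are vastly more complex, but the anatomy is the same: a rule, a loop, and self-adjustment.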
The Triple-A systems in use by business and government today are sociotechnical systems. Their design and performance depend just as much on human and social elements as on the technology.
We can only understand and improve them if we treat each AI system as an integrated, interdependent whole: a complex system comprising machines, people, and organizations.
Responsible AI is the development and use of Triple-A systems to accentuate the benefits for people and minimize the risks. This is not just a legal or moral imperative, but a way of gaining true advantage for a business. It requires investment, oversight, and creative friction: the ability to think together in constructive ways about short- and long-term goals and outcomes.
This is the basis of KPI’s AI advisory work. How do we adjust the parameters of machine learning, human learning, and organizational learning to get optimal results for every stakeholder, including the broad range of people affected by these systems? We need design interventions at a variety of levels, including in the decision-making skills of individuals and teams.
Some people fear that Triple-A systems will replace human judgment or overtake human agency. Instead, they have become a forcing function, changing the way we pay attention to ourselves. If people can’t tell the difference between disinformation and information, if we can’t tell whether guidance comes from a chatbot or from another human, and if we can’t connect meaningfully in a flood of AI-enabled content, then what does that say about us?
The impact of AI accelerated with the widespread release of generative AI in 2022. These new digital tools, mostly based on natural language processing (NLP) systems and large language models (LLMs) trained on unstructured data, are typically available for free or at very low cost. They include apps like DALL-E, ChatGPT, and GPT-3 from OpenAI, along with natural language search engines from Microsoft and Google.
Suddenly, it has become easy to create and alter images, text, and interactive media within seconds. Millions of people have shared their AI-assisted creations through social media. These new tools are continually evolving, and they have begun to change longstanding human habits.
You will hear many different opinions about the value, promise, and dangers of AI. In weighing those opinions, it is important to remember the definition: algorithmic, automated, autonomous systems reflect the values and interests of those who create and sponsor them, and they also reflect the biases and assumptions in the data on which they are trained.
It is also important to remember how they affect people. Deliberately or not, they often have an impact on vulnerable populations, or on people who have no reason to expect to be harmed.
AI systems, like organizations, can be put to many purposes. We choose how we use them. We assess the outcomes with an eye toward changing the inputs, including our own goals. The technology is not separate from the people who make it or use it. The responsibility for its use, abuse, and oversight is shared among us all.