Wake-Up Call to CHROs: How GenAI is Changing the Role of HR Leadership
When Netflix introduced video streaming in 2007, TV-watching habits changed dramatically. Now something similar is happening in the workplace with generative artificial intelligence (GenAI). HR leaders are on the front lines as the new ways of working evolve.
The conventional wisdom says the HR function will shift away from rote work toward more strategic work, with more time and attention for people issues like talent development, recruiting, succession planning, and organizational design. I know that’s the conventional wisdom, because that’s what ChatGPT said when I asked it. But what will it actually mean?
AI-enabled apps will be used throughout companies. HR leaders will be given control of this, expected to oversee the design, development, and use of these tools, with the explicit goal of developing a more productive, better-trained workforce. Everyone will feel like they have more control, because the apps provide a sense of being in control as part of the experience.
Yet as Juliette Powell and I point out in our book The AI Dilemma: 7 Principles for Responsible Technology, the issue of control is complicated. Real control requires work and close attention. The control offered by AI systems can be illusory.
The HR leader will have to learn how to balance real control against the illusion of control, if only to help companies realize the full potential of these apps. That’s a human issue, not a tech issue, and it will require the unique skills and awareness that HR leaders have gained in dealing with issues like data regulations and diversity and inclusion.
Consider the issue of bias. In February 2024, just two months after launching its new GenAI service Gemini, Google shut down the service’s ability to produce images of people. Gemini had raised an outcry because it failed to produce images of white people in settings like Viking villages. This was the latest in a long list of AI-generated images, from providers including not just Google but also Midjourney, Microsoft, IBM, and OpenAI, that reflected racial and ethnic stereotypes even when bias wasn’t the intent.
The crucial point is that negative biases about people are embedded so deeply in human data and attitudes that algorithms reflect them. The tech companies that manage the AI systems have tried to correct the bias with tuning instructions. So far, at least, these efforts seem to simply substitute one set of biases for another. The problem is most visible in images, but the same thing happens in text, including anything related to people, practices, corporate policy, or Human Resources.
Another issue is data. During the next few years, GenAI will be used routinely to draft the documents of HR: appraisals, policies, and training modules. Employees will have access to incredible tools: real-time performance reviews, problem-solving support, training, and guidance. Top decision makers will have clearer aggregated views of their talent resources.
To provide this, the apps will gather and track massive amounts of data about employees: their behavior, their patterns of communication, and their interests. Already, as Juliette Powell and I noted in The AI Dilemma, employee behavior data is routinely tracked at major companies such as Amazon, UnitedHealthcare, JPMorgan Chase, and Barclays. This data can be used rigidly, forcing people to compete with each other to avoid being reprimanded, inducing them to forgo bathroom breaks and other amenities, often without any transparency about what is going on. Or it can be used to build employee capabilities, raising their skills and loyalty. HR leaders will be at the forefront of that decision, and will end up carrying it out.
Performance appraisals, a core HR experience at many companies, represent another issue. Somewhere between 50% and 70% of US companies, depending on the survey, conduct annual or semiannual performance reviews. Typically, the boss or a designated substitute interviews stakeholders and writes up a report for each individual. A committee reviews the reports. The boss then holds a short conversation with each subordinate, giving them feedback, setting goals for the next year, and naming the bonus.
When I was a mid-level executive, I took part in many of these conversations on both ends, and in the review committee. I came to feel that this was the way the corporate culture whispered to itself about what it considered important. Everybody dreaded the conversations, because they were both intense and superficial: they put people’s careers and self-esteem on the line, but didn’t really provide any means for improvement.
Those appraisals are rapidly becoming obsolete. Already, managers are using GenAI to write their drafts. Deeper changes will follow. Instead of waiting for an annual score, employees will get feedback whenever they want it. After a difficult meeting, they’ll ask the bot how they came across.
It’s not clear yet how this new kind of feedback will affect performance or trust. One research group recently conducted an experiment with real employees at a financial services firm in China. About 130 employees, chosen at random, received appraisals written by AI; another 130 had appraisals written by human managers. The people who received AI feedback attained 12.9% higher job performance. They said the reviews were sharper, with better advice. But when the researchers told half those people that AI had written the appraisal, their performance dropped by 5.4%, compared to the others.
Is this a reliable universal finding? Or is it an artifact of banking, or of Chinese work culture? We don’t fully know yet. But we do know that HR leaders will be the focal point of responsibility when it comes to designing the performance systems with these new tools in hand. They will be accountable for ensuring that the bias inherent in AI systems is managed or kept in check.
HR executives can’t read every appraisal themselves, so they will rely on AI tools to evaluate the appraisals, to help set up guidelines, and to design ways to ensure those guidelines are followed. They will offer guidance on how to use the tools judiciously and with empathy. They will be called on to set an example themselves.
In short, the growth of AI will require “creative friction”: in-depth conversations that help people understand the nature of the AI systems that shape their appraisals and performance reviews. HR leaders will be called on to lead these conversations. As new laws are passed regulating the use of AI, the conversations will become part of the compliance and audit process. The tools themselves will help. Ultimately, however, it will fall to the HR department to represent the human side of the organization’s resources.
— Art Kleiner