The Potential and Peril of Using Generative AI for People Analytics

Many HR leaders continue to struggle with analytics, which is rarely their area of expertise.

The good news: Generative AI, or GenAI, can help if it’s used wisely, and many use cases are only beginning to be explored. The cautionary news: Generative AI cannot yet be trusted to produce accurate, valid and reliable outputs every time, so HR professionals still need to apply their own expertise, insights and critical analysis to what it returns. Even so, generative AI can save a lot of time and offer new solutions to improve virtually every aspect of HR operations.

Numerous Opportunities to Seek Generative AI Insights

Iu Ayala Portella is CEO and founder of Gradient Insight, a data science consultancy, and an artificial intelligence expert with nearly a decade of experience in the field. Ayala listed several tasks where HR professionals can harness generative AI to glean more from their people data.

  • Talent acquisition. HR leaders, Ayala said, “have used generative AI to analyze job descriptions, identifying bias or gendered language that may deter diverse candidates.”
  • Employee sentiment analysis. Generative AI can analyze employee feedback from surveys, performance reviews, social media and other sources. “HR leaders can identify potential areas for improvement, gauge employee satisfaction and implement targeted interventions to enhance workplace engagement,” he said.
  • Skills development and career pathing. “HR leaders can analyze employee skills, interests and past career trajectories to recommend personalized learning paths and identify growth opportunities,” Ayala said.

There’s plenty of chatter about these possibilities. In fact, John Bremen, managing director at consultancy WTW, has been having conversations with clients about the implications of generative AI. He’s seen HR professionals use generative AI data to analyze sourcing and recruiting data and understand employee preferences. But “we’re not seeing any significant trends yet,” he noted. “I think we’re very much in the experimentation stage.”

One useful application is helping HR professionals better serve their business unit customers, said Emily Killham, director of research and insights at people analytics company Perceptyx. Queries for AI chatbots, she said, can be used to give front-line managers direction based on data. An example of such a query, called a prompt, might be: “I manage a team of 10 people. We all work remotely and only see each other in person four times per year. I would like to work on the survey item, ‘My manager treats all employees with respect’ with my team. Can you create an action plan?” Drawing on a wealth of information, generative AI “can reach for that output in a matter of seconds, and that could be a game changer for increased speed-to-action in an organization,” Killham said.
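For HR teams that want to build this kind of guidance into their own tools, the sketch below shows one way a manager’s prompt could be sent to a general-purpose large language model. It assumes the OpenAI Python client and a placeholder model name; it is an illustration, not a description of any vendor’s product.

```python
# Illustrative only: sending a manager's action-planning prompt to a
# general-purpose LLM API (assumes the OpenAI Python client; the model
# name and prompt wording are placeholders, not a specific HR product).
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

manager_prompt = (
    "I manage a team of 10 people. We all work remotely and only see each "
    "other in person four times per year. I would like to work on the survey "
    "item, 'My manager treats all employees with respect,' with my team. "
    "Can you create an action plan?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are an HR coach who turns survey results into concrete action plans."},
        {"role": "user", "content": manager_prompt},
    ],
)

print(response.choices[0].message.content)
```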

Jed Macosko, a professor of physics at Wake Forest University in Winston-Salem, N.C., who teaches a class on data and AI, offered some more examples. Generative AI could help HR practitioners “glean operationally defined variables and conclusions from HR datasets that might be less intuitive to human analysts,” he explained. This is because AI is less likely to make the common mistake of confusing correlation with causation. HR professionals could use prompts such as: “What are the causal variables in this dataset?” and “What are the correlational variables in this dataset?”
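Analysts can also check those AI answers directly. The short sketch below, which assumes a hypothetical CSV extract of an HR dataset and made-up column names, computes a correlation matrix with pandas; a strong correlation found this way still says nothing about causation, which is exactly the distinction Macosko highlights.

```python
# A minimal sketch of how an analyst might sanity-check an AI tool's claims
# about "correlational variables": compute the correlation matrix directly.
# The file and column names (tenure_years, engagement_score, etc.) are hypothetical.
import pandas as pd

df = pd.read_csv("hr_dataset.csv")  # hypothetical extract of the HR dataset

numeric_cols = ["tenure_years", "engagement_score", "training_hours", "attrition"]
corr = df[numeric_cols].corr()

print(corr.round(2))
# Strong correlations flagged here still say nothing about causation;
# establishing causal variables requires experiments or causal-inference methods.
```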

In addition, Macosko said, generative AI can help HR managers mitigate the risk of personal bias. An effective prompt for this would be: “Please outline the distinct perspectives on the conclusions that can be drawn from this data.”

But while generative AI offers many benefits and efficiencies for HR from a data analysis and people analytics perspective, it’s not a cure-all. HR professionals still need to bring their expertise and insights to bear to get the most out of these tools. And they need to be able to identify when the results they get appear incorrect or misleading.

Tread Carefully When Using Generative AI Tools

HR leaders can’t hand off the evaluation of analytics entirely to generative AI tools.

Above all, they need to bring their own creativity to interpreting what the data is telling them, Killham said. She recalled working with a customer to study the relationship between clear expectations and performance across 10 call centers. On the surface, they didn’t find much of a relationship. But when researchers started asking questions about specific centers, new insights emerged.

In one case, a snowstorm closed one center for a full week, which seriously impacted productivity for the quarter. “Once we corrected for that event, we could help the organization see the connection between the two variables,” Killham said. “So while it’s possible to ask GenAI to tell us the statistical differences between two groups on some outcome like retention or productivity when the data are clean and perfect, only a human analyst could see there was an issue with the data and do something to adjust for it.”
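What that human adjustment might look like in practice is sketched below, using pandas with hypothetical file, column and center names: the analyst flags the snowed-out week, removes it, and reruns the comparison.

```python
# Illustrative sketch of the human-analyst step Killham describes: flag an
# anomalous period (a week-long weather closure at one call center) and
# exclude it before comparing productivity across centers.
# File name, column names, center IDs and dates are hypothetical.
import pandas as pd

df = pd.read_csv("call_center_weekly.csv")  # one row per center per week
df["week_start"] = pd.to_datetime(df["week_start"])

# The week the snowstorm closed center "C07"
closure = (df["center_id"] == "C07") & (df["week_start"] == "2023-01-23")

adjusted = df.loc[~closure]

# Re-run the comparison on the adjusted data, e.g. mean productivity by
# whether employees report having clear expectations.
summary = adjusted.groupby("clear_expectations_high")["productivity"].mean()
print(summary)
```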

It’s also important to remember that generative AI is a tool for data analysis, but it’s not analysis, Killham stressed. “Calculators changed data analysis. Statistical packages like SAS or R changed data analysis. Machine learning changed data analysis,” she said. “Great analysts will learn to use this technology as a tool to make their analysis more efficient. But we really still need those great analysts to formulate good prompts, look for things that look ‘off’ about the findings, and fix them.”

She continued, “My experience with the tech is that it’s good at refining expertise—making it clearer, simpler and easier to understand—but the technology and the information it can access isn’t a substitute for actual knowledge or experience in understanding how messy people data can be.”

People who aren’t analysts can use many of these features, Killham said, but they also need to know what the numbers mean—and how to tell if they’re correct.  

Data privacy should also be a primary concern for HR leaders, she stressed—at both the company and employee level. “Having a clear policy in place about trade secrets, personal information and GenAI will be important so that no one inadvertently causes a data privacy issue,” she said. “As we see countries and areas of countries develop laws around the use of GenAI, those policies will need to be adjusted.”

Bremen agreed and added that it’s important to understand what happens to the data you feed into these systems. While you can train generative AI tools on your own data, “many companies don’t want to do that because of security issues,” he explained. Will the AI have access to that data for other uses? Different tools treat data in different ways, Bremen said, “so it’s important to understand specifically how the tools you’re using will use that information.”

Best Practices for Prompts and Interpretation

Prompts are a critical determinant of the value of the output generative AI tools will provide. “To achieve the best results and uncover valuable insights, it’s crucial to follow some best practices when generating prompts,” Ayala said. He recommended that HR professionals:

  • Be clear and specific. Clearly articulate the objectives and outcomes you are looking for. For instance, instead of a vague prompt like “Analyze employee engagement,” a more effective prompt would be: “Generate insights on factors impacting employee engagement in the sales department over the past six months.”
  • Incorporate context and constraints, providing specific time frames, demographics or other variables of interest. For example: “Analyze the attrition rate among female employees aged 25-35 in the engineering department, considering factors such as training opportunities and career progression.” (A simple prompt-template sketch follows this list.)
  • Repeat and refine. Generating prompts is an iterative process, so it’s important to experiment with different variations to explore multiple angles and rework inquiries. This helps users delve more deeply into desired topics and yields more comprehensive insights.
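One way to make those habits repeatable is to wrap them in a small template, so that every prompt states the topic, the population, the time frame and the constraining factors. The Python sketch below is illustrative only; the field names and wording are not a prescribed format.

```python
# A small sketch of turning Ayala's prompt guidance into a reusable template:
# be specific, and include context and constraints as explicit fields.
# The function, field names and wording are illustrative, not a prescribed format.
def build_prompt(topic: str, population: str, timeframe: str, factors: list[str]) -> str:
    """Assemble a specific, constrained analytics prompt."""
    return (
        f"Analyze {topic} among {population} over {timeframe}, "
        f"considering factors such as {', '.join(factors)}. "
        "Summarize the key drivers and suggest two targeted interventions."
    )

prompt = build_prompt(
    topic="the attrition rate",
    population="female employees aged 25-35 in the engineering department",
    timeframe="the past 12 months",
    factors=["training opportunities", "career progression"],
)
print(prompt)
```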

To evaluate the accuracy and validity of results, Ayala recommended that users:

  • Cross-validate. Compare generated results with existing data or external benchmarks to assess accuracy and consistency (a small example of this check follows the list).
  • Seek input from domain experts within HR or data analytics who understand the nuances of people data analysis.
  • Test and validate. Form hypotheses based on the AI’s outputs, then design experiments or further analysis to test them.
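As an example of the cross-validation step, the sketch below compares an attrition rate quoted in an AI-generated summary with the same rate computed directly from a source extract. The file name, column names and the AI-reported figure are all hypothetical.

```python
# Illustrative sketch of the cross-validation step: compare a figure the AI tool
# reports against the same figure computed directly from the source data.
# The file, column names and the AI-reported value are hypothetical.
import pandas as pd

ai_reported_attrition = 0.18  # hypothetical rate quoted in the AI-generated summary

df = pd.read_csv("engineering_headcount.csv")
cohort = df[
    (df["department"] == "Engineering")
    & (df["gender"] == "F")
    & df["age"].between(25, 35)
]

actual_attrition = cohort["terminated_in_period"].mean()

print(f"AI-reported: {ai_reported_attrition:.2%}, computed: {actual_attrition:.2%}")
if abs(actual_attrition - ai_reported_attrition) > 0.02:
    print("Discrepancy above 2 points; review the prompt, the data extract, or both.")
```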

“This iterative approach ensures a robust validation process and guards against drawing erroneous conclusions solely from AI-generated outputs,” he said.

The bottom line: Today’s generative AI tools hold promise and potential, but HR professionals should not abdicate their important role in ensuring that data is used, interpreted and applied appropriately and accurately to guide people decisions.

Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.
