Practical (and Scary) Implications of Speech Technology in HR

When film critic Roger Ebert lost his ability to speak while being treated for thyroid cancer, CereProc, a company based in Edinburgh, Scotland, created a text-to-speech program for him using hundreds of hours of speech recorded from his television show, giving him back a voice. Similarly, the company has worked with former NFL player Steve Gleason to clone his voice after he was diagnosed with ALS.

Conversational artificial intelligence isn’t a thing of the future; it’s here now. Technology can already replicate an individual’s voice to, for instance, share a voice message from the CEO, respond to common employee questions, offer voice-driven training or just-in-time instruction, and much more.

But while there are practical applications, there are scary ones as well. Could this type of voice technology lead to security breaches—such as when the CEO’s voice asks an employee for a password?

Where is this voice technology going, and what opportunities and risks does it present for HR?

Practical Applications

Linda Shaffer is chief people and operations officer at Checkr, a software-as-a-service startup and employer background-check provider. At Checkr, she says, voice technology is used to automatically transcribe employee training sessions to create searchable transcripts that employees can access anytime, anywhere. This is, she says, “especially helpful in remote or hybrid work environments, where employees may not have easy access to trainers.”

Shaffer says the company has discovered some best practices she would recommend to others:

  • Provide training on how to use speech technology and search for information. Not all employees are comfortable with speech technology.
  • Offer audio versions of transcripts for employees who prefer to listen rather than read.
  • Create transcripts of frequently asked questions so employees can quickly find answers to common questions. This might be helpful, for instance, during open enrollment season or employee onboarding.
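As a rough sketch of the transcript-and-search workflow Shaffer describes, the following Python example transcribes a recorded training session into timestamped segments and runs a simple keyword search over them. It assumes the open-source openai-whisper package; the file name and search term are hypothetical, and this is an illustration, not Checkr’s actual implementation.

```python
# Minimal sketch of a "searchable training transcript" workflow.
# Assumes the open-source openai-whisper package (pip install openai-whisper).
# File names and search terms are hypothetical examples.
import whisper

model = whisper.load_model("base")  # small, general-purpose speech model


def transcribe_session(audio_path: str) -> list[dict]:
    """Transcribe a recorded training session into timestamped segments."""
    result = model.transcribe(audio_path)
    return [
        {"start": seg["start"], "end": seg["end"], "text": seg["text"].strip()}
        for seg in result["segments"]
    ]


def search_transcript(segments: list[dict], query: str) -> list[dict]:
    """Return segments whose text contains the query (case-insensitive)."""
    q = query.lower()
    return [seg for seg in segments if q in seg["text"].lower()]


if __name__ == "__main__":
    segments = transcribe_session("onboarding_session_01.mp3")  # hypothetical file
    for hit in search_transcript(segments, "open enrollment"):
        print(f"[{hit['start']:.0f}s - {hit['end']:.0f}s] {hit['text']}")
```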

Robb Wilson is CEO and founder of OneReach, a conversational AI platform. He says these are just a few practical—and powerful—use cases for voice technology.

“It’s easy to think of meaningful applications of this kind of technology, and there’s a wellspring of opportunity,” Wilson said. But, he added, “the more refined it becomes, the more problematic and troubling deepfake scenarios become.”

Nefarious Possibilities

SHRM Online has reported on deepfake video implications; the same concept can be applied to voice.

“Having been tied to at least two public fraud cases, the technology that creates artificial voices has now become part of the cyberattack arsenal,” said Steve Povolny, head of advanced threat research with Trellix, a cybersecurity platform. Three years ago, he said, hackers impersonated the CEO of a U.K.-based company and fraudulently requested a transfer of nearly a quarter of a million dollars. “The technology is quickly evolving,” he added. “While it has been used for illegitimate financial gain, it is equally likely to be used for credential theft and cyber intrusion.”

The other case, reported by Forbes, involved a bank manager in Hong Kong who received a call from what he thought was the director of a company he’d spoken with before. The director was excited about making an acquisition and said he needed the bank to authorize transfers amounting to $35 million. It was a deepfake voice scam.

Kavita Ganesan, author, AI advisor, educator and founder of Opinosis Analytics in Salt Lake City, said, “In HR, the use of voice technology may be useful to speed up the productivity of certain tasks, such as search and lookup, as well as training employees. However, developing the ability to mimic the voices of employees may present more risks and ethical issues than open up opportunities.” For example:

  • Company politics may drive certain employees to use voice technology inappropriately to get specific employees in trouble.
  • The CEO could be held liable for something he or she never actually said.

“As voice technology gets closer and closer to sounding more human, those types of risks bring about unnecessary trouble for the company when they can use nonemployee voices to accomplish so many of the HR tasks,” Ganesan said.

Peter Cassat, a partner at Culhane Meadows whose practice focuses on employment, technology, privacy and data security law, points out that using voice technology in this manner is a form of “phishing,” a scheme that typically uses e-mail to get someone to act by posing as someone else. He says he hasn’t yet seen much workplace-related litigation involving phishing via voice technology.

However, the potential is there. Organizations need to get ahead of this potential risk now by educating employees and putting appropriate policies and practices in place.

Policies and Practices

Povolny recommends instituting a standard set of rules to address the type of information employees should—and shouldn’t—supply based on voice requests. “No one should ever ask you to transfer funds, provide private credentials or [take] any other confidential action without providing a method of authentication or identity verification,” he said. “It’s a new way of thinking to trust but verify when it comes to audio, but it is just as valid for a verbal-only transaction as a computer-based transaction.”

Relying on nothing but a voice to validate a request, Povolny said, “could lead to stolen funds, privileged access to restricted systems or areas, network breaches, personnel tampering and many more damaging scenarios.”

Wilson takes this a step further. “From a business perspective, replicating a CEO’s voice creates far more liability than opportunity,” he said. “The many deceitful ways this kind of technology could be used to mislead employees and customers are so potentially damaging that businesses will need to explicitly state that they will never use it.”

It’s a common business practice to let customers know that you will never call asking for information, Wilson noted. “This extends that further. Businesses will need to create private communication systems that rely on more substantial forms of authentication in an era when individuals’ voices can be recreated and manipulated at will.”
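One way to read Wilson’s call for “more substantial forms of authentication” is an out-of-band confirmation step: a voice-initiated request is parked until the purported requester confirms it over a separate, pre-registered channel. The sketch below is purely illustrative; the function names, data structures and channel are hypothetical and not drawn from any vendor’s product.

```python
# Illustrative sketch only: out-of-band confirmation for voice-initiated requests.
# All names (send_confirmation_code, PENDING_REQUESTS, etc.) are hypothetical.
import secrets
from dataclasses import dataclass


@dataclass
class PendingRequest:
    requester: str          # who the caller claims to be
    action: str             # e.g. "wire transfer" or "password reset"
    confirmation_code: str  # one-time code sent over a separate channel


PENDING_REQUESTS: dict[str, PendingRequest] = {}


def send_confirmation_code(identity: str, code: str) -> None:
    """Placeholder for a second-channel notification (app push, SMS, email)."""
    print(f"(out-of-band) sending code {code} to {identity}'s registered device")


def receive_voice_request(claimed_identity: str, action: str) -> str:
    """Never act on voice alone: park the request and trigger out-of-band confirmation."""
    code = secrets.token_hex(3)       # short one-time code
    request_id = secrets.token_hex(8)
    PENDING_REQUESTS[request_id] = PendingRequest(claimed_identity, action, code)
    # The code goes to a pre-registered device or address, not back to the caller.
    send_confirmation_code(claimed_identity, code)
    return request_id


def confirm_request(request_id: str, supplied_code: str) -> bool:
    """Execute the action only if the code from the separate channel matches."""
    pending = PENDING_REQUESTS.pop(request_id, None)
    return pending is not None and secrets.compare_digest(
        pending.confirmation_code, supplied_code
    )
```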

Wilson also recommended that companies make it clear to employees when they’re interacting with a machine and not a human. “If a new employee thinks they’ve been interacting with a human only to find out later that it was a machine, they could feel violated and become wary of future interactions within the organization,” he said.

Educate Employees

Risk exists for both your company and your employees. Even seemingly innocent pursuits, such as using popular apps like Overdub, Povolny explained, can put people at risk of having their actual voices harvested by nefarious players planning to use that audio data to produce deepfake content.

These apps aren’t “necessarily malicious,” Povolny said, “but many times even the EULAs [end-user license agreements] will contain verbiage that allows the company the right to do whatever they want with that data, given your consent.” Povolny recommended that employees—and, in fact, all of us—“consider your voice and facial identities protected data that you don’t give out easily.”

Wilson agreed. “While a great deal of the security solutions that voice replication spurs will start on the business side, it will soon become everyone’s concern,” he said.

Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.
