How Technology Can Help Companies Tackle Job Seeker Fraud

Imagine hiring a new employee who, when she showed up to start the job, was not the same person you had interviewed. That may be happening more often than you think.

On June 28, 2022, the FBI issued a public service announcement indicating that the number of complaints it’s receiving about this problem is increasing.

Scammers are using deepfake technology and stolen personally identifiable information to pose as other people and apply for jobs. Why? Once on the job, these individuals can gain access to data and systems, release ransomware, or obtain the credit card information or Social Security numbers of customers or employees.

Could this—has this—happened to you?

Deepfake Risks

Deepfakes, which can involve both video and audio, digitally alter a person’s image or voice to make it appear that they are someone else, or that they are saying or doing things they haven’t actually said or done. The technology is most commonly thought of as a means to spread misinformation—especially malicious information. As Common Sense Media reports, “Director and comedian Jordan Peele teamed up with Buzzfeed and Barack Obama to create this deepfake video to serve as a warning of what manipulated video could do.”

Deepfake technology is also known as “synthetic media,” said Dave Hatter, a software engineer and cybersecurity consultant with 30 years of experience in IT. It’s advancing, he said, “much quicker than most realize.”

Deepfakes pose social and political risks through their reported use on social media channels to spread misinformation. But as the FBI’s announcement makes clear, companies are also at risk during the hiring process. Remote jobs for which the entire interview process is conducted virtually are particularly vulnerable. There are, however, steps organizations can take to protect themselves.

Minimizing the Risk of Fraud 

Jon Hill is chairman and CEO of The Energists, a recruiting firm that works with companies in the energy industry. “Our reputation depends on the candidates we send along to clients, so we take preventing candidate fraud very seriously and have implemented systems to detect and avoid these scammers,” he said.

Remote jobs may be particularly at risk from this type of fraud, he said—especially if the interview and training process is entirely remote. Hill said companies need to “be vigilant and thorough in verifying the identity of candidates before extending offers.”

At least one round of interviews should be done via video call, he said. Candidates should be informed that they must:

  • Have their camera on.
  • Show their photo ID alongside their face at the start of the interview.
  • Agree to have the video recorded.
  • Remove any earbuds or headphones.
  • Turn off any backgrounds or filters in the program.

“These steps don’t eliminate the possibility of an especially skilled scammer using deepfake technology, but they do make it more difficult for fraudulent candidates to succeed,” Hill said.

Recording the interview is important, Hill said—interviewers are likely to be busy talking and listening to the candidate and may not notice any oddities. If you plan to move forward with the candidate, review the video on a larger screen, he advised.

“Pay close attention to their eye and mouth movements, which are the most difficult parts of the face to make appear natural,” Hill suggested. “Also keep an eye out for any skin tone irregularities or odd shadows, which could be a sign the video is faked.

“If something odd does catch your eye mid-interview and you suspect the video may be a deepfake, ask the candidate to stand up or turn their chair away from the camera. Often, the edges of the AI-created video will become visible when they move around the frame or will warp and distort in profile, even very sophisticated ones that are otherwise virtually undetectable.”
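Some of these visual tells can also be checked programmatically after the fact. The Python sketch below is a rough illustration only, not any tool mentioned in this article: it assumes OpenCV (pip install opencv-python) and samples frames from a recorded interview, flagging abrupt shifts in the face region’s average color, one crude proxy for the skin tone irregularities Hill describes. The file name and thresholds are hypothetical placeholders.

```python
# Illustrative sketch only: flag sudden skin tone shifts in a recorded
# interview, one of the "tells" Hill describes. Assumes OpenCV
# (pip install opencv-python); thresholds are arbitrary placeholders.
import cv2

def flag_skin_tone_jumps(video_path, sample_every=15, jump_threshold=25.0):
    """Yield timestamps (in seconds) where the detected face region's
    average color shifts sharply between sampled frames."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    prev_mean = None
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.1, 5)
            if len(faces) > 0:
                x, y, w, h = faces[0]
                # Mean B, G, R values over the face region.
                mean = frame[y:y + h, x:x + w].mean(axis=(0, 1))
                if prev_mean is not None and abs(mean - prev_mean).max() > jump_threshold:
                    yield frame_idx / fps
                prev_mean = mean
        frame_idx += 1
    cap.release()

# Hypothetical usage: print segments worth a second look on a big screen.
for t in flag_skin_tone_jumps("interview_recording.mp4"):
    print(f"Possible irregularity near {t:.1f}s")
```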

It can be helpful to practice telling the real from the fake. Hatter recommended a website developed by MIT for exactly that purpose; working through its 32 examples can make you more watchful for some of the “tells” that indicate a clip isn’t genuine.

An Ongoing Challenge

Peter Strahan is the founder and CEO of Lantech, a professional IT support, cybersecurity and cloud services firm. “Because deepfakes are AI-generated and AI is constantly learning, making a conventional detection tool for deepfakes is futile,” Strahan said. “You’ll never stay ahead of machine learning.”

Fortunately, he added, “companies like Microsoft have been creating their own video authentication tools, using AI to fight AI.” Microsoft used a public dataset of real faces to develop its technology, Strahan said, which gives a “confidence score” indicating how likely it is that any given image has been manipulated in various ways. With videos, he said, scores can be given for each frame. “The added bonus is that as the technology is AI, it is constantly learning and improving, although deepfake technology is improving, too.”
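The details of Microsoft’s tool aren’t described here, but the per-frame scoring idea Strahan outlines is easy to sketch. The snippet below is purely hypothetical: classify_frame stands in for a trained detector model, and score_video simply shows how frame-level manipulation probabilities might roll up into a video-level verdict. It is not Microsoft’s implementation, and the thresholds are invented.

```python
# Hypothetical sketch of per-frame confidence scoring. classify_frame is
# a stand-in for a real trained detector; thresholds are invented.
from statistics import mean
from typing import Callable, Iterable

def score_video(frames: Iterable,
                classify_frame: Callable[[object], float],
                frame_threshold: float = 0.8,
                video_flag_ratio: float = 0.1) -> dict:
    """classify_frame returns the probability (0-1) that a single frame
    was manipulated; the video is flagged as suspect if enough frames
    exceed the per-frame threshold."""
    scores = [classify_frame(f) for f in frames]
    flagged = sum(s >= frame_threshold for s in scores)
    return {
        "mean_score": mean(scores),
        "flagged_frames": flagged,
        "suspect": flagged / len(scores) >= video_flag_ratio,
    }

# Toy usage with a dummy classifier; a real deployment would plug in a
# trained model and decoded video frames.
report = score_video(range(100),
                     classify_frame=lambda f: 0.9 if f % 20 == 0 else 0.1)
print(report)  # e.g. {'mean_score': 0.14, 'flagged_frames': 5, 'suspect': False}
```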

It is, Strahan said, “a digital arms race.” But, he added, “I have no doubt that the good guys will win. Microsoft’s tool is currently available, and I would recommend anybody suspecting that they are dealing with deepfake job applications to give it a try. There’s still a fair way to go, but I’m sure it would identify all but the best deepfakes.”

Keep in mind that, especially in an increasingly remote/hybrid world, deepfake fraud isn’t limited to the recruitment process. There is, for instance, the potential for this technology to be used by employees to fake their participation in a Zoom call—or to make it appear that another employee, customer, vendor or anyone else has said or done something they haven’t.

Lin Grensing-Pophal is a freelance writer in Chippewa Falls, Wis.
