AI Can Help Protect HR Data from Hackers

The sensitive nature of HR data has always made it a rich target for bad actors. The prominent ransomware attack on industry vendor UKG represents just one example of cybercriminals’ increased attempts to breach Social Security numbers, bank account information, compensation data and more.

But data security experts say one trend has made HR data even more alluring to hackers of late: the Great Resignation.

The continuing churn of employees in and out of organizations has increased the odds of problems such as failing to properly deactivate access to corporate networks, leaving sensitive credentials exposed or workers unknowingly taking confidential data out the door.

“With the rush of people still leaving jobs, there can be a lot of holes that don’t get patched by cybersecurity teams,” said Justin Fier, vice president of tactical risk and response for Darktrace, a cybersecurity software provider in Cambridge, U.K. “HR has a rich dataset, and whenever that exists, you should expect bad actors to come after it.”

One way that organizations are protecting against these rising threats is by using a new generation of artificial intelligence and machine learning tools designed to automatically detect and alert them to abnormal actions in their technology ecosystems. These software tools are increasingly being baked into HCM technology suites—as well as offered by third-party providers to integrate with HR applications—in part to compensate for a continued shortage of cybersecurity professionals who would usually conduct such monitoring.

Identifying Anomalies

Security experts say one of the most effective uses of this new AI is anomaly detection. The technology first establishes what’s normal in the use of an organization’s systems—including cloud, software-as-a-service, on-premises and e-mail platforms—and then is able to automatically detect any abnormalities as employees access and use those systems. The AI can analyze thousands of metrics to reveal small deviations that might indicate an emerging threat, experts say.
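As a rough illustration of that baseline-then-deviation approach, the sketch below flags record-access counts that fall far outside a user’s established norm. The single metric, the threshold and the sample data are invented for illustration; real products analyze thousands of signals at once.

```python
import statistics

# Hypothetical sketch: flag activity whose record-access volume deviates
# sharply from a user's established baseline. The threshold and numbers
# are illustrative, not drawn from any specific vendor product.

def build_baseline(history):
    """Compute mean and standard deviation of past daily access counts."""
    return statistics.fmean(history), statistics.pstdev(history)

def is_anomalous(count, baseline, threshold=3.0):
    """Flag an observation more than `threshold` deviations from normal."""
    mean, stdev = baseline
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > threshold

# Typical workday: this user opens roughly 45-55 HR records.
baseline = build_baseline([45, 52, 48, 55, 50, 47, 53])
print(is_anomalous(51, baseline))   # within the normal range
print(is_anomalous(400, baseline))  # a large deviation worth an alert
```

The point of the sketch is the two-step shape—learn "normal" first, then score new activity against it—rather than the particular statistic used.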

“The technology identifies things that aren’t supposed to be there,” said Alexander Wurm, a senior analyst who leads the data science coverage at research and advisory firm Nucleus Research. “Anomaly detection identifies abnormal things happening within networks or databases, then typically sends alerts and assesses priorities based on the risk individual users might present.”

The technology has particular value in remote working environments in which home-office employees or contract workers are regularly accessing corporate networks using their own devices, Wurm said.

“Every time someone interacts with a network, information about their IP address, what they’ve accessed and more is fed into a machine learning algorithm so the technology can learn whether what they’re doing is considered usual or unusual activity,” Wurm said.

The “self-learning” AI that vendors like Darktrace use for data security is different from past versions of the technology, Fier said. Previous iterations identified threats based on historical attack data, first requiring that data to be cleansed, labeled and moved to some central repository. Fier said Darktrace’s AI instead learns in real time “on the job” and updates its understanding as technology environments evolve.
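A minimal sketch of that “learning on the job” idea: rather than being trained once on a cleaned historical dataset, the baseline below folds in each new observation as it arrives (Welford’s online algorithm). This is a simplified stand-in for, not a description of, any vendor’s actual method.

```python
# Hypothetical sketch: the baseline updates incrementally with every
# observation, so the notion of "usual" evolves with the environment.

class OnlineBaseline:
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations

    def update(self, x):
        """Fold a new observation into the running mean and variance."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def stdev(self):
        return (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0

baseline = OnlineBaseline()
for count in [45, 52, 48, 55, 50, 47, 53]:
    baseline.update(count)  # no offline training pass required
print(round(baseline.mean, 1))
```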

“We’re not here to tell our customers the difference between good and bad,” Fier said of Darktrace’s anomaly detection capabilities. “We’re here to tell them the difference between the usual and unusual.”

HCM Tech Vendors Introduce AI-Based Data Security

HR technology vendors have begun embedding this type of AI-based data security in their platforms. Oracle is among them, having recently introduced an AI-powered monitoring solution to its Fusion Cloud HCM platform that automates security analysis to protect against cybercriminals and limit access to sensitive employee data.

The AI is part of Oracle’s advanced HCM controls offering and features anomaly detection and alerts to help HR functions and data security teams monitor and respond to threats like suspicious employee activity, phishing attacks and attempts to steal data, according to Aman Desouza, senior director of risk cloud product strategy for Oracle. The AI allows HR to see blind spots and shine a light on where data vulnerabilities might lie, he said.

Experts say HR needs to become more involved in data security issues because the breadth and complexity of threats are growing. “Data security is no longer just an issue for IT today,” Desouza said. “You have to engage the organization more broadly in that effort, which includes HR, lines of business and employees themselves.”

Oracle’s AI tools can monitor online activity by time and frequency, sending instant alerts when sensitive HR records are accessed in suspicious ways. That might allow HR or security teams, for example, to detect abnormal activity such as employees accessing data over the weekend or pulling large amounts of data in short periods. Clicking through data faster than humans can read it can be a sign of fraudulent activity, data security experts say, such as when bots are used maliciously to breach HR data.
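Time- and frequency-based rules of that kind could be sketched as follows. The weekend check, the per-minute rate limit and the event format are assumptions made for illustration, not Oracle’s actual configuration.

```python
from datetime import datetime

# Illustrative rules only: alert on weekend access to HR records and on
# bursts of reads too fast for a human to be reviewing them.

def access_alerts(events, max_per_minute=30):
    """Scan (user, timestamp) events and return a list of alerts."""
    alerts = []
    per_minute = {}
    for user, when in events:
        if when.weekday() >= 5:  # Saturday or Sunday
            alerts.append((user, "weekend access", when))
        key = (user, when.replace(second=0, microsecond=0))
        per_minute[key] = per_minute.get(key, 0) + 1
        if per_minute[key] == max_per_minute + 1:  # alert once per burst
            alerts.append((user, "burst of record reads", when))
    return alerts

events = [("jdoe", datetime(2022, 6, 11, 14, 5))]           # a Saturday
events += [("asmith", datetime(2022, 6, 13, 9, 0))] * 40    # 40 reads in one minute
print(access_alerts(events))
```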

The AI can monitor activity by location, sending alerts based on where HR data is being accessed from. Systems also can be monitored based on role and responsibility, Desouza said. If an employee is transferred to a different department, for example, they may still have previous privileges to access sensitive data they no longer need in their new role.
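A toy version of those location- and role-based checks might look like the following; the user-to-country and role-to-privilege mappings are invented examples, not real policy data.

```python
# Hypothetical sketch: alert when access comes from an unusual country,
# or when a transferred employee retains privileges their new role no
# longer needs. All names and mappings below are illustrative.

usual_countries = {"jdoe": {"US"}}
role_privileges = {
    "recruiter": {"candidate_records"},
    "payroll": {"bank_details", "compensation"},
}

def check_access(user, country, role, resource):
    """Return a list of alert strings for one access attempt."""
    alerts = []
    if country not in usual_countries.get(user, set()):
        alerts.append(f"{user}: access from unusual location {country}")
    if resource not in role_privileges.get(role, set()):
        alerts.append(f"{user}: role '{role}' should not access {resource}")
    return alerts

# An employee who moved from payroll to recruiting still pulls bank details:
print(check_access("jdoe", "US", "recruiter", "bank_details"))
```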

Desouza said the Great Resignation and increased internal mobility of workers have created heightened security risks for organizations. “Employees are increasingly moving in and out of roles, shifting the scope of their work or leaving organizations altogether,” he said. “Many others are now accessing corporate networks remotely or when they’re on vacation somewhere around the world. You want to be able to monitor all those scenarios in your HR technologies to ensure data security.”

Experts say more cybersecurity software providers also are using machine learning to protect e-mail communications from malicious actors. The technology can learn what type of e-mail has been flagged in the past for things like phishing attacks, for example, then proactively keep similar e-mails from reaching employees’ primary inboxes in the future.
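As a bare-bones illustration of learning from previously flagged mail, the toy filter below scores a message by how much its vocabulary resembles past phishing attempts versus normal mail (a simplified Naive Bayes-style ratio). Production e-mail security systems are far more sophisticated; the training examples here are invented.

```python
import math
from collections import Counter

# Toy sketch: learn word frequencies from mail flagged in the past,
# then score new mail by a log-likelihood ratio. Positive = suspicious.

def train(messages):
    counts = Counter()
    for text in messages:
        counts.update(text.lower().split())
    return counts

flagged = train(["verify your account password now",
                 "urgent: confirm your bank password"])
normal = train(["team meeting moved to friday",
                "quarterly review schedule attached"])

def phishing_score(text):
    """Log-likelihood ratio of flagged vs. normal vocabulary."""
    score = 0.0
    for word in text.lower().split():
        p_flag = (flagged[word] + 1) / (sum(flagged.values()) + 1)
        p_norm = (normal[word] + 1) / (sum(normal.values()) + 1)
        score += math.log(p_flag / p_norm)
    return score

print(phishing_score("please verify your password") > 0)  # leans phishing
```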

A Major Risk: The Human Element

One of the biggest data security risks in any organization remains human interaction, experts say, which is why automated processes can provide additional protection.

“You tend to see the greatest risk anywhere there are human touch points,” Wurm said. “That’s why automating processes like onboarding and offboarding can have value both in improved data security as well as in gaining new process efficiencies or time savings.”

For example, relying on manual rather than automated deactivation of network access when employees leave companies increases the chances of error, because it can be easy to lose track of which systems employees had access to and who is coming and going. “There’s a growing use of automation by IT teams as a way to limit the odds of human error in protecting sensitive data,” Wurm said.
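In principle, automating that deactivation step comes down to keeping an inventory of every grant and revoking them all at once when someone leaves, so nothing depends on anyone’s memory. The inventory structure and system names below are hypothetical.

```python
# Hypothetical sketch of automated offboarding: revoke every recorded
# grant tied to a departing employee in one pass. The inventory and
# system names are illustrative.

access_inventory = {
    "jdoe": {"hcm_suite", "payroll_portal", "vpn", "email"},
    "asmith": {"hcm_suite", "email"},
}

def offboard(user, revoke):
    """Revoke all recorded grants for `user`; return what was removed."""
    grants = access_inventory.pop(user, set())
    for system in sorted(grants):
        revoke(user, system)
    return grants

revoked_log = []
offboard("jdoe", lambda u, s: revoked_log.append((u, s)))
print(revoked_log)  # every system jdoe could reach, none overlooked
```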

Dave Zielinski is principal of Skiwood Communications, a business writing and editing company in Minneapolis.
