Regulating AI: Groups Call for Solutions to Avoid Discrimination, Other Challenges

The rise of AI technology such as ChatGPT in workplaces has been met with both enthusiasm and apprehension.

Two organizations—the Center for American Progress (CAP) and the Institute for Workplace Equality (IWE)—have separately called for specific solutions to address a multitude of issues around businesses’ use of generative AI, particularly in their workforces.

CAP, a research and advocacy organization in Washington, D.C., released a report on April 25 calling for the White House to issue an executive order and take other actions to address the challenges of AI, such as employee displacement, algorithm-based discrimination in the workplace and the spread of misinformation.

“We cannot allow the Age of AI to be another age of self-regulation,” Adam Conner, vice president of technology policy at CAP and author of the report, said in a statement. “Issuing an immediate executive order to implement the Blueprint for an AI Bill of Rights [written by the White House Office of Science and Technology Policy] and establishing other safeguards will help to ensure that automated systems deliver on their promise to improve lives, expand opportunity and spur discovery.”

The report calls on President Joe Biden to:

  • Create a new White House Council on Artificial Intelligence.
  • Require federal agencies to implement the Blueprint for an AI Bill of Rights for their own use of AI.
  • Require agencies to exercise all existing authorities to address AI violations and to assess the use of AI in the enforcement of existing regulations.
  • Require that all new federal regulations include an analysis of how they would apply to AI tools.
  • Prepare a national plan to address economic impacts from AI, particularly job losses.
  • Task the White House Competition Council with ensuring fair competition in the AI market.
  • Assess and prepare for potential AI systems that may pose a threat to public safety.

“The Biden administration’s Blueprint for an AI Bill of Rights is a clear road map on how to implement important principles around this technology into practice,” Conner told SHRM Online. “The president can and should act immediately, and Congress should follow his lead.”

Potential DE&I Concerns

The CAP report acknowledged that AI technology has the potential to bring significant benefits to workplaces, such as increasing efficiency and reducing costs.

However, the risks of AI are profound, both in creating entirely new classes of problems and in exacerbating existing ones.

For example, research shows that ChatGPT could discriminate against certain employees and compromise diversity, equity and inclusion (DE&I) at work if not audited correctly. Such discrimination could expose employers to lawsuits, and deploying these tools without disclosure could violate state and local laws that require notice of AI use in certain employment decisions.

A report by Bloomberg Law suggested that employers should include in their policies a general prohibition on the use of AI in connection with employment decisions absent approval from their legal counsel.

[SHRM Resources and Tools: How to Manage Generative AI and ChatGPT in the Workplace]

Using AI Effectively Without Discriminating

Given the growing complexity of AI at work, IWE established the Artificial Intelligence Technical Advisory Committee to help users and developers of AI-enabled employment tools navigate the equal employment opportunity (EEO) and DE&I issues those tools raise under current regulatory and professional standards.

The multidisciplinary group of 40 experts recently released a report detailing effective uses of AI in employment decisions, including recruitment, hiring, promotions, assignments, performance evaluations and terminations.

“This report itself is both broad and deep,” said Victoria A. Lipnic, committee chair and former commissioner on the U.S. Equal Employment Opportunity Commission. “It addresses everything from how to deal with data collection issues to how to address the unique privacy and notice issues that arise with the use of AI-enabled employment tools to how adverse impact analysis may have to change when considering the impact of such tools.”
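
Adverse impact analysis of the kind Lipnic describes has traditionally been anchored in the four-fifths (80 percent) rule from the Uniform Guidelines on Employee Selection Procedures, which compares each group's selection rate against the highest group's rate. The Python sketch below illustrates only that baseline check; the group names and figures are hypothetical and do not come from the IWE report, which examines how such analysis may need to adapt to AI-enabled tools.

```python
# Illustrative four-fifths (80%) rule check, a common starting point for
# adverse impact analysis. Group names and counts are hypothetical examples,
# not data from the IWE report.

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compute the selection rate (selected / applicants) for each group."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {group: (rate / highest) < 0.8 for group, rate in rates.items()}

if __name__ == "__main__":
    # (selected, applicants) per demographic group -- hypothetical figures.
    screening_results = {
        "Group A": (48, 100),
        "Group B": (30, 100),
    }
    for group, flagged in four_fifths_check(screening_results).items():
        print(f"{group}: {'potential adverse impact' if flagged else 'no flag'}")
```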

According to Lipnic:

  • Employers that rely on AI technology in recruiting and hiring or at any point in the employee life cycle need to know the source and quality of the data being used.
  • Some techniques for establishing the reliability of data may or may not apply to newer sources of data used in AI systems.
  • A fundamental characteristic of AI—and especially of machine-learning algorithms—is the ability to gather and quickly process a high volume of information. This AI-specific characteristic may result in large amounts of personal data about applicants and employees being obtained, used and stored.
  • Companies should be mindful of applicable data privacy rules.

“Automation is burgeoning, but it is important for employers and everyone who cares about EEO and [DE&I] matters to remember: The rules remain the same,” Lipnic added. “We want employment tools and processes that do not discriminate and that provide opportunity to everyone in our wonderfully diverse workforce.”
