How to Reduce the Risk of Bias in Your Hiring AI

Different types of bias, whether intentional or unintentional, have been a concern for HR departments for decades. In fact, psychologists have catalogued more than 180 human biases.

Artificial intelligence (AI) and machine learning (ML) programs have been promoted as a more objective, less biased way to screen, hire, and train job candidates. Unfortunately, humans may program biased information into algorithms, skewing results. AI programs also may exhibit bias when they lack sufficient, representative data.

Skewed AI programs can create serious problems for employers and HR departments that use these tools in their hiring processes. In fact, a new law in New York City requires employers to conduct a “bias audit” of any automated employment decision tool, and to notify employees and candidates when such a tool is used to make job decisions.

As a growing number of companies use AI to streamline their operations, including hiring, it is important to take steps to avoid bias in these algorithms. Here are some ways to reduce or eliminate bias in HR algorithms.  

Understand AI’s limits.

AI is a valuable tool for streamlining HR work, but it should not be the only solution. Consider how humans and AI can work in tandem, rather than replacing one with the other.

It may sound like this defeats the purpose of using AI in the first place. However, researchers at the National Institute of Standards and Technology (NIST) recommend a “socio-technical” approach, one that acknowledges the limits of purely technical efforts to mitigate bias.

When selecting and implementing AI tools, employers, HR departments, and IT specialists should be aware of the data sources these tools use. Be sure that the AI developers have taken steps to limit bias. HR departments should also monitor the data that feeds their AI to avoid introducing or amplifying bias.

Create bias and fairness definitions.

Defining bias and fairness, and setting minimum acceptable levels for each, is a daunting task for any employer. Many industries and governing bodies have struggled to develop standard, universal definitions of bias. What constitutes “fair” in one organization may not apply in another.

By defining bias for their own organizations, however, employers can guide the choice of AI tools and demonstrate their commitment to limiting hiring bias. Clear definitions also help HR staff recognize when their AI tools fall short of those standards.

Employers may set a single standard for fairness and bias, or they may have different thresholds for different groups or situations. Either way, leaders should consider a variety of metrics and standards when setting definitions and goals for fairness.
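
As a concrete illustration, below is a minimal sketch of one common adverse-impact heuristic, the “four-fifths rule,” which flags any group whose selection rate falls below 80 percent of the highest group’s rate. The group names, counts, and threshold are illustrative assumptions, not a legal standard or a recommendation for any particular organization.

```python
# Minimal sketch of a "four-fifths rule" check. All names, counts, and the
# 0.8 threshold are illustrative assumptions.

def selection_rate(selected, applicants):
    """Share of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold`
    times the highest group's rate (True = passes the check)."""
    top = max(rates.values())
    return {group: rate / top >= threshold for group, rate in rates.items()}

# Hypothetical screening outcomes per demographic group.
rates = {
    "group_a": selection_rate(selected=45, applicants=100),
    "group_b": selection_rate(selected=30, applicants=100),
}

print(four_fifths_check(rates))
# {'group_a': True, 'group_b': False} -> group_b is below 80% of group_a's rate
```

An organization that sets different thresholds for different groups or situations, as described above, would simply swap in its own values.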

“An essential practice is to ensure as much as possible that training data are representative,” says Dr. Sanjiv M. Narayan, Professor of Medicine at Stanford University School of Medicine. “Representative of what? No data set can represent the entire universe of options. Thus, it is important to identify the target application and audience upfront, and then tailor the training data to that target.”  
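
Dr. Narayan’s point can be made concrete with a simple comparison of group proportions in the training data against the intended target population. The column name, groups, and benchmark shares below are hypothetical; a real audit would use the organization’s own applicant-pool benchmarks.

```python
# Rough sketch of a representativeness check: compare group proportions in
# the training data to an assumed benchmark for the target applicant pool.
import pandas as pd

training_data = pd.DataFrame(
    {"gender": ["F", "M", "M", "M", "F", "M", "M", "M"]}
)

# Assumed benchmark: the qualified applicant pool the model should reflect.
target_shares = {"F": 0.45, "M": 0.55}

observed = training_data["gender"].value_counts(normalize=True)
for group, expected in target_shares.items():
    actual = observed.get(group, 0.0)
    print(f"{group}: observed {actual:.2f}, target {expected:.2f}, "
          f"gap {actual - expected:+.2f}")
```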

Job postings can influence AI.

Where a company shares job openings can influence the data that feeds an AI algorithm, and thus may contribute to bias. For example, if a company posts an opening only on LinkedIn, that platform’s own algorithms decide which users see the ad, which may skew the applicant data the employer’s AI learns from.

Rather than depending solely on one outside algorithm, employers should include their own outreach efforts to target potential job seekers. This can provide new data to AI hiring tools that could mitigate bias.
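
One lightweight way to monitor this, sketched below with assumed channel names and counts, is to tally where applicants actually come from and flag any single channel that dominates the pipeline.

```python
# Illustrative sketch of a sourcing-channel audit. Channel names, counts,
# and the 50% dominance flag are hypothetical assumptions.
from collections import Counter

applicant_sources = ["linkedin", "linkedin", "referral", "job_fair",
                     "linkedin", "company_site", "linkedin"]

counts = Counter(applicant_sources)
total = sum(counts.values())
for channel, n in counts.most_common():
    share = n / total
    flag = "  <- consider diversifying outreach" if share > 0.5 else ""
    print(f"{channel}: {share:.0%}{flag}")
```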

Whether disclosed in a job posting or elsewhere in the hiring process, applicants should be told when an employer uses AI to make staffing decisions. Even where local or state laws do not require it, this transparency can inspire greater trust among applicants and other stakeholders.

Review and refresh AI tools.

Because AI evolves in response to new data, employers should periodically audit their ML tools to confirm that bias remains minimal and make any necessary corrections. This may include reviewing rejected applicants to determine whether those exclusions were warranted. The AI may need to be adjusted, or even overruled by human HR staff.
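
One possible shape for such an audit, sketched below with placeholder data and an assumed 10 percent alert level, is to sample the tool’s rejections for human re-review and track how often HR staff disagree with the machine.

```python
# Hedged sketch of a periodic rejection audit: sample the model's rejections,
# have HR staff re-review them, and track the overturn rate. The data, sample
# size, and 10% alert level are illustrative assumptions.
import random

rejected_ids = list(range(1000))          # IDs the model rejected this quarter
sample = random.sample(rejected_ids, 50)  # manageable batch for human review

def human_review(candidate_id):
    """Placeholder for HR staff re-assessing a candidate's file;
    returns True if the rejection was warranted."""
    return candidate_id % 10 != 0  # stand-in outcome for this sketch

overturned = sum(1 for cid in sample if not human_review(cid))
rate = overturned / len(sample)
print(f"Overturn rate: {rate:.0%}")
if rate > 0.10:
    print("High disagreement -> investigate, adjust, or retrain the model.")
```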

Business leaders, decision makers, and HR departments also should stay up to date on AI research. This includes updates to their specific software and new findings in the field. Look to published best practices from groups such as Google AI, and to open-source toolkits such as IBM’s AI Fairness 360.
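
For example, IBM’s AI Fairness 360 toolkit computes common bias metrics directly. The sketch below uses a tiny hypothetical dataset and assumes the toolkit’s BinaryLabelDataset interface; check the project’s documentation for the current API before relying on it.

```python
# Sketch of a bias check with IBM's open-source AI Fairness 360 toolkit
# (pip install aif360). The dataset and group encoding are hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical screening outcomes: 1 = advanced to interview, 0 = rejected.
df = pd.DataFrame({
    "gender": [1, 1, 1, 0, 0, 0],   # 1 = privileged group (an assumption)
    "hired":  [1, 1, 1, 1, 0, 0],
})

dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["gender"])
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"gender": 0}],
                                  privileged_groups=[{"gender": 1}])

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print(metric.disparate_impact())               # ~0.33 for this toy data
print(metric.statistical_parity_difference())  # ~-0.67 for this toy data
```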

While it can save time and improve efficiency, AI is not entirely “hands-off.” Rather than one replacing the other, machines and humans should work in cooperation to reduce bias and improve AI software’s effectiveness. HR departments should keep an eye on the field of AI research, and on their organization’s chosen software, to ensure they are using the fairest options in hiring.
