Does Using AI and Algorithms to Screen Employees Put Your Company at Legal Risk?


Digital innovations and advances in AI have produced a range of new talent identification and assessment tools. Many of these technologies promise to help organizations find the right person for the right job, and screen out poor matches, faster and more cheaply than ever before.

Technology also presents new dilemmas around fairness and bias, and there have already been high-profile, real-life situations where these systems revealed learned biases, especially relating to race and gender. Amazon, for example, developed an automated talent search program to review resumes, which it abandoned once the company realized the program was not rating candidates in a gender-neutral way. To reduce such biases, developers are working to balance the data used to train AI models so that all groups are appropriately represented. The more representative information the technology can learn from, the better it can control for potential bias.
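As a rough illustration of what that kind of data balancing can look like in practice, the sketch below reweights historical hiring records so that each group contributes equal total weight when a screening model is trained. The file name, column names, and choice of a logistic regression model are hypothetical, and this is only one of several balancing techniques.

```python
# Hypothetical sketch: reweighting training data so each group is equally
# represented when fitting a resume-screening model.
# The CSV file and column names ("gender", "hired") are illustrative only.
import pandas as pd
from sklearn.linear_model import LogisticRegression

resumes = pd.read_csv("historical_resumes.csv")  # hypothetical training set

# Weight each row inversely to its group's frequency, so under-represented
# groups carry as much total weight as over-represented ones.
group_counts = resumes["gender"].value_counts()
weights = resumes["gender"].map(
    lambda g: len(resumes) / (len(group_counts) * group_counts[g])
)

# Exclude the protected attribute itself from the model's inputs.
features = resumes.drop(columns=["hired", "gender"])
model = LogisticRegression(max_iter=1000)
model.fit(features, resumes["hired"], sample_weight=weights)
```

Reweighting alone does not guarantee unbiased outcomes, since other fields can act as proxies for protected characteristics, but it illustrates the kind of deliberate intervention the training process requires.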

Federal Agencies Increase Scrutiny on Use of AI in Background Checks

In April 2020, the FTC issued business guidance titled “Using Artificial Intelligence and Algorithms,” written by Andrew Smith, Director of the FTC’s Bureau of Consumer Protection. The guidance addresses the use of AI, machine learning, and automated decision making under federal laws, including the Fair Credit Reporting Act (FCRA), which regulates background checks. For employers, the guidance points to three primary compliance areas: transparency around how AI is used, transparency around data collection, and disclosure of algorithmic decision making.

Transparency around how the AI is used. AI is most often invisible to the end user and to the consumer experience in general. When using AI tools, the FTC guidance suggests that companies take care not to mislead consumers about the nature of the interaction. For example, a company that buys fake followers, phony subscribers, and bogus “likes” to boost its social media presence, from a vendor like the one cited in the FTC’s Devumi complaint, could face an FTC enforcement action. Many of the companies selling fake followers and bogus likes use AI to generate them. It is important to understand how any plug-ins or add-ons to your social media accounts or websites affect how AI interacts with users and what factors are involved.

The American Bar Association suggests that employers should know the factors being considered by the program or algorithm. In much the same way that employers carefully develop and identify non-discriminatory, non-biased factors that matter to their traditional hiring decisions, they need to be equally diligent in developing and, where appropriate, modifying the inputs fed into the recruiting programs and algorithms used to screen and evaluate potential candidates and applicants. Not only will this enhance the likelihood of recruiting success, it will also give employers the opportunity to assess whether those factors are, in fact, job-related, which is a linchpin criterion under many employment laws.
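One way to make that review concrete is to compare the fields a screening tool actually consumes against the list of factors the employer has vetted as job-related, and flag anything else. The sketch below is a minimal, hypothetical illustration of that kind of input audit; all field names are invented, and which fields count as proxies for protected characteristics is a determination for counsel, not code.

```python
# Hypothetical sketch: auditing the inputs fed to a screening algorithm
# against a vetted list of job-related factors. All field names are illustrative.
JOB_RELATED_FACTORS = {"years_experience", "certifications", "skills_match_score"}
KNOWN_PROXY_FIELDS = {"zip_code", "graduation_year", "name"}  # possible proxies for protected traits

def audit_model_inputs(model_inputs: set) -> None:
    """Print review flags for any input field that is unvetted or a likely proxy."""
    unvetted = model_inputs - JOB_RELATED_FACTORS
    proxies = model_inputs & KNOWN_PROXY_FIELDS
    for field in sorted(unvetted):
        print(f"Review needed: '{field}' has not been vetted as job-related.")
    for field in sorted(proxies):
        print(f"Warning: '{field}' may act as a proxy for a protected characteristic.")

audit_model_inputs({"years_experience", "zip_code", "skills_match_score", "graduation_year"})
```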

Be transparent when collecting sensitive data. Secretly collecting audio or visual data – or any sensitive data – to feed an algorithm could also give rise to an FTC action. Just last year, the FTC alleged that Facebook misled consumers when it told them they could opt in to facial recognition – even though the setting was on by default. As the Facebook case shows, how you get the data may matter a great deal.

Disclose algorithmic decision making. If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice. Under the FCRA, a vendor that assembles consumer information to automate decision-making about eligibility for credit, employment, housing, or similar transactions, may be a “consumer reporting agency.” That triggers duties for you, as the user of that information.

FCRA Requirements When Using Artificial Intelligence in Hiring

For example, suppose your company purchases reports from a background check company that uses AI tools to generate information on which you might base a hiring decision. The AI model draws on a broad range of inputs about consumers, including public records, criminal records, credit history, and even data about social media usage. If you use the report as a basis to deny someone a job opportunity, you must provide that consumer with an adverse action notice. The FCRA requires that certain notices be sent to consumers, and a licensed background check vendor will know what must be sent and can advise you and your HR team on how to set up this process.
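To make that sequence concrete, here is a minimal sketch of how an HR system might refuse to finalize an automated rejection until the FCRA notice steps have been completed. The class, field names, and waiting period are hypothetical; the actual notice content and timing should come from counsel or your background check vendor.

```python
# Hypothetical sketch of gating an automated hiring decision on FCRA notice steps.
# Field names, the waiting period, and the workflow are illustrative, not legal advice.
from dataclasses import dataclass

WAITING_PERIOD_DAYS = 5  # illustrative; confirm the appropriate period with counsel

@dataclass
class ScreeningResult:
    candidate_id: str
    recommend_reject: bool        # output of the vendor's AI-assisted report
    report_copy_sent: bool        # copy of the consumer report provided to the candidate
    summary_of_rights_sent: bool  # FCRA "Summary of Your Rights" provided
    days_since_pre_adverse: int   # days since the pre-adverse action notice went out

def may_finalize_rejection(result: ScreeningResult) -> bool:
    """Only finalize an adverse decision after the notice steps are complete."""
    if not result.recommend_reject:
        return False
    notices_complete = result.report_copy_sent and result.summary_of_rights_sent
    waiting_period_elapsed = result.days_since_pre_adverse >= WAITING_PERIOD_DAYS
    return notices_complete and waiting_period_elapsed
```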

Not all employers have the capability to develop AI recruiting tools internally; many contract with outside vendors to handle parts of the recruiting process, particularly the initial vetting of applicants and/or advertising to specific potential candidates. Using such an arrangement, however, does not exempt the employer from liability if the vendor is using tools that discriminate against protected groups. As with requests for salary history and background checks, employers may be held liable for violations of employment laws by recruiting companies. Employers should therefore, through appropriate contract language, require their recruiters, or others acting on their behalf, to comply with all existing employment laws in connection with the screening and hiring of job applicants.
