Artificial Intelligence is everywhere, but what is it? The Oxford Dictionary defines it as follows:
> The theory and development of computer systems capable of performing tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
For many employers, artificial intelligence ("AI") is a recruiting tool that can complete complex tasks in extremely short timeframes while reducing human bias and expanding the reach to diverse candidates. But what legal risks does AI carry?
Studies show that 80% of employers use AI to recruit. For anyone questioning whether the risk is real: in May 2022, the EEOC issued guidance on AI and the Americans with Disabilities Act (ADA) in employment, and the DOJ did the same.
So how does it work? AI uses machine learning to analyze all kinds of data, from facial expressions and body language to social media activity, open-source data, and computerized testing, in an attempt to predict the best candidate for a position. Using screening chatbots and gamified assessments, among other tools, AI compresses candidate screening that once took recruiters hours or days into minutes or less. It can immediately eliminate candidates who do not meet the job requirements, answer frequently asked questions, and automatically schedule follow-up interviews. Recruiters have long known that timely feedback and communication are directly tied to landing the best candidates, and that speed is one of AI's biggest benefits in recruiting. Some studies have found that AI can cut 23 hours per hire without disrupting workflow, reduce candidate screening costs by 75%, cut turnover by 35%, and increase revenue per employee by 4% by finding better fits for roles. Sounds good, right? If the system runs without human intervention, what is the risk?
Unless properly tested, AI risks producing unfair outcomes or inadvertently favoring one group over others. This can happen in a number of ways, none of them intentional, but all of which can lead to disparate impact claims. One example is a program that screens for gaps in employment: seemingly harmless at first glance, it can build in bias against women who took time off to care for children. Another is a profile of "successful employee" traits modeled on current company management that is then used to predict top candidates. The problem? If all of those successful employees belong to one group, applicants outside it, including members of protected classes, may never match predictors of "success" that simply resemble current leadership. Some AI tools use gamified assessments to differentiate between candidates. While seemingly simple, such gaming technology can unfairly disadvantage people with disabilities who could otherwise compete for the position with reasonable accommodations. Is this risk real?
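As a rough illustration of how a disparate impact claim is often screened, regulators and auditors commonly apply the EEOC's "four-fifths rule": if one group's selection rate is less than 80% of the most-favored group's rate, the tool may be flagged for potential adverse impact. Below is a minimal sketch of that calculation; the selection counts are hypothetical illustration data, not results from any real system.

```python
# Minimal sketch of the EEOC "four-fifths rule" screen for adverse impact.
# All counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who advanced past the screen."""
    return selected / applicants

def four_fifths_check(rate_group: float, rate_reference: float):
    """Compare a group's selection rate against the most-favored group's.

    Returns the impact ratio and whether it falls below the 0.8 (four-fifths)
    threshold, which is a red flag warranting closer review.
    """
    impact_ratio = rate_group / rate_reference
    return impact_ratio, impact_ratio < 0.8

# Hypothetical results from an AI resume screen:
rate_a = selection_rate(48, 100)  # group A: 48 of 100 advanced
rate_b = selection_rate(30, 100)  # group B: 30 of 100 advanced

ratio, flagged = four_fifths_check(rate_b, rate_a)
print(f"Impact ratio: {ratio:.2f}; potential adverse impact: {flagged}")
```

A ratio below 0.8, as in this hypothetical, does not prove discrimination; it is a screening heuristic that signals the selection procedure should be examined more closely.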
Many employers have faced challenges over whether their AI algorithms have a disparate impact. The media reported on several that received national attention, including Google Photos, whose image-recognition feature applied racially offensive tags to photos of Black people; IDEMIA, whose facial recognition technology for analyzing photo IDs allegedly made significant racial errors; and COMPAS, whose recidivism-prediction algorithm allegedly misidentified Black defendants as more likely to commit future crimes. These are just a few companies whose AI practices have been questioned. The risk is real. So what can employers do to leverage AI while managing it?
We all know the problem is not going away. As of this writing, Illinois, Maryland, California, New York City, and Washington, DC have passed legislation on the use of AI in employment. Not all of these laws provide a private right of action, but most require some form of consent before AI is used in the recruitment process.
Earlier this year, the Biden administration released a Blueprint for an AI Bill of Rights, identifying a number of rights to be protected:
- You should be protected from unsafe or ineffective systems.
- You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- You should be protected from abusive data practices through built-in safeguards, and you should have agency over how data about you is used.
- You should know when an automated system is being used and understand how and why it contributes to outcomes that affect you.
- You should be able to opt out, where appropriate, and have access to a person who can quickly investigate and resolve any issues you encounter.
In light of all this, what can employers do to leverage AI while managing risk? Here are some examples of best practices:
- Require hiring managers to identify the qualifications really needed for positions.
- Let candidates know that AI is used in the process and how it will be used to assess candidates.
- Provide enough information to allow applicants to determine whether they should seek reasonable accommodations as part of the application process.
- Train employees to identify accommodation requests and implement procedures to provide reasonable accommodations.
- Require bias audits from all contractors using AI.
- If disparate impacts are identified, challenge the AI data.
- Never forget the importance of human intelligence when considering AI.