In a recently released technical assistance document, titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964,” the Equal Employment Opportunity Commission (“EEOC”) explained how employers can run afoul of the Americans with Disabilities Act (“ADA”) by using computer-based tools to make decisions about hiring, monitoring, compensation, and other terms and conditions of employment. Building on its Artificial Intelligence and Algorithmic Fairness Initiative, launched in 2021, the EEOC warned of possible adverse impacts arising from the use of AI in employment. The new guidance focuses in particular on the adverse impact of using AI in employment decisions.
The EEOC identified three ways in which an employer's use of algorithmic decision-making tools could violate the ADA:
- Failure to provide a reasonable accommodation to an applicant or employee who needs an accommodation in order to be fairly and accurately rated by the algorithm.
- The unintentional 'screening out' of an individual with a disability who is able to perform the job with a reasonable accommodation.
- The use of a tool that makes disability-related inquiries or conducts medical examinations in violation of the ADA's restrictions on such inquiries and examinations.
To reduce these risks, the EEOC recommends that employers provide reasonable accommodations, make alternative means of assessment available to applicants who would be unfairly disadvantaged by a computer-based tool because of a disability, and disclose to candidates the specific traits the tool is designed to assess, how the assessment is performed, and the variables or factors that could affect the rating.
The EEOC also addressed liability arising from tools that have a disproportionate impact on a protected group. It suggests that employers self-audit how selection tools and filters affect different groups and proactively change the practice going forward if an adverse impact is found. Importantly, the EEOC will hold employers ultimately responsible for any violations even if an outside vendor developed the AI tool. In addition, employers should consider the following measures before implementing AI to assist in employment decisions:
- When training and implementing an AI system, ensure that the underlying data is free from biases that may unintentionally filter out more diverse and representative candidates.
- Regularly self-audit and test AI systems to identify and root out biases or discriminatory patterns; an illustrative adverse-impact calculation is sketched after this list.
- Make sure the AI system itself is accessible and that accommodations are available for persons who need them.
- Be open and honest about the processes used in your employment decisions. Transparency can include a detailed explanation of how the AI system reaches its decisions.
- Task a human being with overseeing any AI systems to ensure continued compliance with the law and alignment with company values and policies.
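For the self-audit step recommended above, the EEOC's Title VII guidance describes comparing each group's selection rate to that of the most-selected group, with the long-standing "four-fifths rule" serving as a rough initial screen for potential adverse impact. The short Python sketch below is purely illustrative: the group labels, data layout, and `four_fifths_check` helper are hypothetical, and a ratio at or above 0.8 does not by itself establish compliance, just as a ratio below 0.8 does not by itself establish a violation.

```python
from collections import defaultdict

def selection_rates(records):
    """Compute the selection rate (selected / total) for each group."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(records, threshold=0.8):
    """Flag any group whose selection rate is less than `threshold`
    (the four-fifths rule of thumb) times the highest group's rate."""
    rates = selection_rates(records)
    highest = max(rates.values())
    return {
        group: {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(rate / highest, 3),
            "flagged": rate / highest < threshold,
        }
        for group, rate in rates.items()
    }

# Hypothetical audit data: (group label, whether the tool advanced the candidate)
sample = ([("A", True)] * 48 + [("A", False)] * 52
          + [("B", True)] * 30 + [("B", False)] * 70)
print(four_fifths_check(sample))
```

A flagged result in a self-audit of this kind is a signal to investigate the tool and consult counsel, not a legal conclusion in itself.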
In short, employers should consider all of the potential implications and adverse effects before investing in AI tools for use in the workplace and remain diligent in ensuring such tools are not used in an unlawful manner and do not produce unlawful results. Experienced employment counsel at WSHB can provide guidance to ensure your company remains in compliance with the law. Please do not hesitate to reach out to a member of our team should you have questions or concerns.