The Department of Justice (“DOJ”), the Federal Trade Commission (“FTC”), the Consumer Financial Protection Bureau (“CFPB”), and the Equal Employment Opportunity Commission (“EEOC”) have issued a joint statement outlining a collective commitment to monitor the use of automated systems and artificial intelligence (“AI”) for unlawful discrimination. The agencies have warned that while the AI tools employers use offer the promise of advancement, their use carries the potential for unlawful bias, discrimination, and other harmful outcomes.
The agencies have described the targeted systems as “software and algorithmic processes that are used to automate workflows or help people complete tasks or make decisions.” The joint statement identifies several avenues through which these systems could discriminate.
The agencies first point to the data and datasets used by the systems. They caution that outcomes generated by the systems could be skewed by unrepresentative or imbalanced datasets, and that automated systems may correlate seemingly neutral data with protected classes, leading to discriminatory outcomes. The agencies have also expressed concern that employers may not understand the internal workings of these systems, which would prevent them from recognizing whether a system is producing biased results. Similarly, unfair bias may arise when automated systems are used in a context for which they were not intended.
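By way of illustration only, one way an employer’s technical team might probe the first of these concerns, namely whether a seemingly neutral input tracks a protected characteristic closely enough to act as a proxy for it, is sketched below. The column name “zip_code_score,” the synthetic data, and the threshold are all assumptions made for demonstration purposes; this is a minimal sketch, not a validated audit method or a legal standard.

```python
import random
from statistics import mean

random.seed(0)

# Synthetic records: a protected-class indicator plus a neutral-looking
# input feature ("zip_code_score" is a hypothetical name, not a real field).
records = []
for _ in range(500):
    in_protected_class = random.random() < 0.5
    score = random.gauss(0.7 if in_protected_class else 0.4, 0.1)
    records.append({"protected_class": in_protected_class, "zip_code_score": score})

# Compare the feature's average value across the two groups; a large gap
# suggests the feature could serve as a proxy for the protected characteristic.
in_group = [r["zip_code_score"] for r in records if r["protected_class"]]
out_group = [r["zip_code_score"] for r in records if not r["protected_class"]]

gap = mean(in_group) - mean(out_group)
print(f"Average gap in 'zip_code_score' between groups: {gap:.3f}")

if abs(gap) > 0.1:  # illustrative threshold only, not a legal or statistical standard
    print("Feature may act as a proxy for a protected class; review before use.")
```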
The EEOC has announced that it plans to train its staff to identify AI-related issues in its enforcement work. As part of this initiative, the EEOC has already released guidelines regarding the effects that automated systems and AI software could have on individuals protected by the Americans with Disabilities Act (“ADA”). These guidelines recommend that employers inform all job applicants and employees subject to such systems that reasonable accommodations are available, and that they provide clear and accessible instructions for requesting those accommodations.
The FTC has issued its own warning that there is no AI exemption to existing anti-discrimination laws, and that it plans to vigorously combat unfair or deceptive practices involving AI programs. Employers utilizing automated systems and AI software should consider creating protocols to ensure that those systems do not produce outcomes with a disparate impact on protected groups or results reflecting unfair bias in violation of federal or state law.
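As one hypothetical example of what such a protocol might include, the sketch below applies the EEOC’s longstanding “four-fifths” rule of thumb to the outcomes of an automated screening tool. The group labels and counts are assumptions for demonstration only, and a flagged ratio signals the need for further statistical and legal review rather than a conclusion that unlawful discrimination has occurred.

```python
# Hypothetical outcomes of an automated screening tool, broken out by group.
# Group names and counts are assumptions for demonstration only.
selections = {
    "Group A": {"screened": 200, "advanced": 120},
    "Group B": {"screened": 150, "advanced": 60},
}

# Selection rate for each group: applicants advanced / applicants screened.
rates = {g: c["advanced"] / c["screened"] for g, c in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: the group's selection rate relative to the highest rate.
    # Under the four-fifths rule of thumb, a ratio below 0.80 warrants review.
    ratio = rate / highest_rate
    status = "flag for review" if ratio < 0.8 else "no flag"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```

Running the sketch with the assumed numbers flags Group B (impact ratio 0.67), illustrating the kind of periodic check an employer might build into its review process alongside counsel and qualified statisticians.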