
E.E.O.C. Targets Artificial Intelligence: Is It Enough?

Ryan Jansen


INTRODUCTION

Between sentient search engines and growing fears of academic dishonesty, artificial intelligence (“A.I.”) has been facing growing criticism.[1] In the employment context, A.I. use is already widespread: experts estimate that 79 percent of employers use A.I. in their hiring processes.[2] Despite mounting evidence that A.I. can — and does — discriminate against protected classes, there has been little regulation or scrutiny.[3] However, on January 10, 2023, the Equal Employment Opportunity Commission (“EEOC”) released its draft Strategic Enforcement Plan (“SEP”) for 2023-2027 with a focus on combating employment discrimination caused by A.I.[4] While EEOC oversight is important, it alone will not be enough to protect civil rights, and new legislation is needed.

A.I. AND DISCRIMINATION: HOW IT HAPPENS AND THE GROWING EVIDENCE

Discrimination caused by A.I. predominantly comes from “data mining.”[5] Data mining is how algorithms identify statistical relationships, discover patterns in datasets, make predictions, and learn what attributes or characteristics serve as proxies for outcomes of interest.[6] However, A.I. models often rely on existing datasets, which can produce discrimination against protected classes (e.g., race, sex, religion) because these groups are more likely to be miscounted or misrepresented in existing data.[7]

Discrimination caused by artificial intelligence is well-documented. In the employment context, there is evidence that the use of artificial intelligence in pre-interview personality tests screens out applicants with mental disabilities or mental illness in probable violation of the Americans with Disabilities Act (“ADA”).[8] Moreover, A.I. algorithms have demonstrated discriminatory tendencies in both job advertising and job screening.[9] For example, A.I. perpetuates “algorithmic bias” in job advertising since algorithms rely on existing employment data to identify “targets” for future advertising, thereby advertising to those that already dominate certain fields and ignoring other demographics.[10]

When it comes to job screening, A.I.’s algorithmic machine learning system assesses, scores, and ranks applications based on the employer’s preferred qualifications and skill sets.[11] However, these algorithms predict the future success of candidates based on previous hiring decisions, often rejecting members of protected classes for positions.[12] For instance, Amazon designed a machine-learning program to manage its job application process, but its algorithm excluded candidates whose resumes included the word “women’s” because most previous hires were men whose “successful” resumes lacked similar terms.[13] Moreover, job screening algorithms can disqualify applicants outside of certain geographic radii or without particular educational credentials, inadvertently excluding entire racial or ethnic groups and older applicants from consideration.[14]

The SEP is not the first time the EEOC has targeted artificial intelligence, but it is the most meaningful. In late 2021, the EEOC launched an agency initiative on “Artificial Intelligence and Algorithmic Fairness” to provide guidance to applicants, employees, employers, and technology companies.[15] Since then, the EEOC has provided only one technical assistance document, addressing ADA violations and the use of artificial intelligence.[16] However, that guidance was limited to disability discrimination, and unlike an SEP, it was not voted on by the full EEOC, nor did it go through the ordinary administrative law process of notice and comment.[17]

THE SEP AND ARTIFICIAL INTELLIGENCE

Strategic Enforcement Plans outline the EEOC’s new enforcement priorities and underscore where the agency will focus its limited resources over the next few years.[18] The EEOC considers its SEP priorities when initiating Commissioner charges, selecting cases for litigation, writing amicus briefs, publishing guidance, and conducting research.[19] Moreover, the EEOC considers SEP priorities in determining when and where to launch “systemic investigations” (or “directed investigations”) on employment practices.[20]

The new SEP repeatedly targets artificial intelligence. First, the EEOC promised scrutiny over “the use of automated systems, including artificial intelligence or machine learning, to target job advertisements, recruit applicants, or make or assist in hiring decisions where such systems intentionally exclude or adversely impact protected groups[.]”[21] The EEOC has already launched its first lawsuit under this category, suing an English-language tutoring company for allegedly programming its A.I. software to reject older applicants.[22]

Second, the EEOC signaled increasing oversight regarding “restrictive application processes or systems, including online systems that are difficult for individuals with disabilities or other protected groups to access[.]”[23] Here, the EEOC is buttressing its existing guidance on A.I. and the ADA[24] and addressing known concerns with personality tests.[25]

Third, the EEOC targets “screening tools or requirements that disproportionately impact workers based on their protected status, including those facilitated by artificial intelligence or other automated systems[.]”[26] This priority mirrors the “disparate impact” language from Title VII case law and targets the aforementioned examples of discrimination based on geographic or educational factors.[27] Notably, these three points of emphasis are all under the very first priority listed in the new SEP: “eliminating barriers in recruitment and hiring.”[28]

MORE IS NEEDED TO COMBAT A.I.-INDUCED DISCRIMINATION

EEOC oversight is an important step towards combating A.I.-induced discrimination, but it is far from enough. The EEOC has three main enforcement options: Commissioner charges, directed investigations, and litigation.[29] Commissioner charges require the authorization of only one EEOC Commissioner and involve an investigation of an employer, requiring the employer to provide an issue statement and support it with evidence.[30] Under a Commissioner charge, the EEOC often works with its state agency counterparts, and the investigation can pertain to anything under the EEOC’s jurisdiction.[31] However, Commissioner charges are very rare, with only three issued in each of fiscal years 2020 and 2021.[32] Moreover, EEOC Commissioners are political appointees, and therefore the initiation, investigation, and actual enforcement of potential employment violations are subject to political influence.[33]

Directed investigations are even more limited. While directed investigations can be initiated without the approval of a Commissioner, they can only target violations of the Age Discrimination in Employment Act or the Equal Pay Act.[34] Therefore, potential A.I.-driven discrimination on the basis of race, sex, religion, disability, and other protected characteristics cannot be reached through directed investigations.

Finally, the EEOC can use litigation to combat A.I.-induced discrimination. The EEOC uses litigation sparingly, in part because of resource constraints, but also as a deliberate strategy to find winnable cases that have a maximum impact on employment discrimination case law.[35] However, the evidentiary barriers for plaintiffs (like the EEOC) make winning litigation difficult.

There are two major litigation frameworks in anti-discrimination law: disparate treatment and disparate impact.[36] Under a disparate treatment analysis, a plaintiff must show that they suffered an adverse employment action because an employer intentionally discriminated against them on the basis of a protected characteristic.[37] An employer can rebut a plaintiff’s argument by showing that it had a legitimate, non-pretextual, non-discriminatory basis for its decision.[38] It is exceptionally difficult to prove that A.I.-caused discrimination is intentional, making disparate treatment cases nearly impossible for plaintiffs.[39]

Therefore, plaintiffs must rely on the disparate impact theory. A disparate impact analysis requires a plaintiff to show that a facially neutral employment or hiring practice has a disproportionately discriminatory effect.[40] An employer can respond that its facially neutral policy is a business necessity, but even if it is necessary, a plaintiff can still prevail by showing that an equally efficient but less discriminatory alternative is available.[41]

While the disparate impact theory accurately characterizes discrimination caused by A.I., plaintiffs’ cases will suffer from evidentiary issues. Even if one assumes that most plaintiffs can show that the facially neutral use of A.I. has a discriminatory impact, most cannot get beyond the employer’s “business necessity” defense.[42] Consider that most A.I. models discriminate through facially neutral “proxies” for success, like education levels or geographic location.[43] Many businesses can contend that certain education levels or proximity to their company are necessary for their business. Moreover, it is hard for plaintiffs to show the sufficiently strong causal relationship between A.I. software and disparate outcomes necessary to establish liability.[44]

Plaintiffs may also struggle to show that an equally effective alternative employment practice exists. Plaintiffs must contend with the real and immense benefits of A.I. when proposing an alternative.[45] In light of these real benefits, plaintiffs will have a hard time convincing judges and juries that any alternatives they offer are better than continued reliance on A.I.[46]

Commissioner charges, directed investigations, and litigation are important but imperfect solutions. To fill the gaps in enforcement, new legislation is needed to regulate A.I. in employment.[47] Since most of the discrimination caused by A.I. comes from data mining, any new law must change the way data is collected.[48] One potential solution is New York City’s new A.I. auditing law.[49] In New York City, the use of artificial intelligence software is prohibited unless the program has gone through a “bias audit,” a process by which an independent auditor tests the program’s disparate impact on individuals based on race, sex, and ethnicity.[50] While this law has been criticized (including by pro-employee organizations), its potential is clear.[51] First, an audit system requires only a simple statistical analysis to determine whether an algorithm has a significant disparate impact across protected classes. Moreover, an audit incentivizes businesses to contract with developers whose software can pass the bias audit, which in turn incentivizes A.I. developers to create less discriminatory programs in the first place.
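To illustrate how simple such a statistical check can be, the following sketch computes each demographic group’s selection rate and flags any group whose rate falls below four-fifths of the highest group’s rate, a conventional rule of thumb for adverse impact. The group labels, applicant counts, and 0.8 threshold are hypothetical illustrations, not the methodology the New York City law itself prescribes.

```python
# A minimal, hypothetical bias-audit calculation (illustrative only).
from collections import Counter

def selection_rates(records):
    """Share of applicants the algorithm selected, within each demographic group.

    `records` is a list of (group, selected) pairs, where `selected` is a bool.
    """
    applicants = Counter(group for group, _ in records)
    selected = Counter(group for group, sel in records if sel)
    return {g: selected[g] / applicants[g] for g in applicants}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate.

    A ratio below 0.8 is the conventional red flag for adverse impact.
    """
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Invented audit data: (group, was the applicant advanced by the algorithm?)
records = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 35 + [("B", False)] * 65
)

rates = selection_rates(records)
for group, ratio in impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.8 else "within four-fifths rule"
    print(f"group {group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```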

If A.I. discriminates based on biased models, then A.I. audit laws can incentivize more creative data mining and lessen any disparate impact. Employers justifiably want variables (proxies) that are most related to applicant success.[52] But since many current models lead to discriminatory impacts,[53] developers can tinker with how variables are weighed or defined to still approximate the characteristics of successful candidates while lessening any discrimination. Developers can test multiple A.I. models estimating applicant success and select the model that is the least discriminatory while still identifying qualified candidates for the employer.[54] Audit systems ensure that this process is conducted and applied in the employment context. This is already technically possible, but the prevalence of discrimination demonstrates that regulation is needed to require that companies use less discriminatory A.I. models.
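A rough sketch of that model-selection step follows: several candidate models are each scored on how well they identify qualified applicants and on their disparate impact, and the developer keeps the least discriminatory model that still performs adequately. The model names, scores, and accuracy floor below are invented for demonstration and are not drawn from any actual vendor or audit.

```python
# Illustrative model selection balancing usefulness and disparate impact (hypothetical data).

# (model name, accuracy at identifying qualified candidates, impact ratio between
# the lowest- and highest-selected demographic groups)
candidates = [
    ("model_baseline",   0.86, 0.55),
    ("model_reweighted", 0.84, 0.78),
    ("model_no_zipcode", 0.83, 0.92),
]

MIN_ACCURACY = 0.80  # assumed floor for "still finds the employer qualified candidates"

# Among models that remain useful to the employer, prefer the least discriminatory one.
eligible = [m for m in candidates if m[1] >= MIN_ACCURACY]
best = max(eligible, key=lambda m: m[2])
print(f"selected {best[0]}: accuracy {best[1]:.2f}, impact ratio {best[2]:.2f}")
```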

An audit law can also rectify discriminatory job advertising and job screening. If A.I. programs advertise a job predominantly to the demographics who currently hold that job, then the A.I.’s overreliance on the status quo can be readily corrected. The same is true for the use of A.I. in job screening. If a disparate impact results from an A.I. algorithm relying on previous applicants’ characteristics as a proxy for new applicants’ success, then developers can adjust the model’s reliance on these proxies to lessen the resulting discrimination. An audit effectively forces companies to use these less discriminatory models.

Advertising positions and hiring applicants is not like math: there is no single, correct solution. Instead, employers using A.I. can choose from a range of algorithms that provide qualified candidates with varying levels of disparate impact. Right now, employers are choosing discriminatory algorithms. Current laws are not robust enough to safeguard applicants’ civil rights, and EEOC enforcement is too limited to fill this legal gap. An auditing system provides a simple and effective safeguard against the pervasive employment discrimination caused by A.I.


Ryan Jansen is a Junior Editor with MJEAL. Ryan can be reached at rpjansen@umich.edu.


[1] See e.g., Billy Perrigo, The New AI-Powered Bing Is Threatening Users. That’s No Laughing Matter, Time (February 17, 2023), https://time.com/6256529/bing-openai-chatgpt-danger-alignment/; Kayla Jimenez, Schools nationwide are banning ChatGPT. What we know about the future of AI in education., USA Today (January 30, 2023), https://www.usatoday.com/story/news/education/2023/01/30/chatgpt-going-banned-teachers-sound-alarm-new-ai-tech/11069593002/.

[2] J. Edward Moreno, EEOC Targets AI-Based Hiring Bias in Draft Enforcement Plan (1), Bloomberg Law (January 12, 2023), https://www.bloomberglaw.com/bloomberglawnews/daily-labor-report/X4VNQTOO000000?bna_news_filter=daily-labor-report#jcite.

[3] Keith E. Sonderling, Bradford J. Kelley & Lance Casimir, The Promise and the Peril: Artificial Intelligence and Employment Discrimination, 77 U. MIAMI L. REV. 1, 37-53 (2022).

[4] Equal Employment Opportunity Commission’s Draft Strategic Enforcement Plan, 2023-2027, 88 Fed. Reg. 1379 (January 10, 2023) (draft strategic plan, pending approval following the end of the public comment session).

[5] Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 677 (2016).

[6] Id. at 677-678.

[7] Id. at 684-685 (noting that protected classes “have unequal access to and relatively less fluency in the technology necessary to engage online, or are less profitable customers or important constituents and therefore less interesting as targets of observation.”).

[8] Kelly Cahill Timmons, Pre-Employment Personality Tests, Algorithmic Bias, and the Americans with Disabilities Act, 125 Penn St. L. Rev. 389 (2021).

[9] Maya C. Jackson, Artificial Intelligence & Algorithmic Bias: The Issues with Technology Reflecting History & Humans, 16 J. Bus. & Tech. L. 299, 310-311 (2021).

[10] Id. at 310.

[11] Id. at 311.

[12] Id.

[13] Id. An example would be “women’s leadership” or “women’s affinity organization.”

[14] Amber M. Rogers & Michael Reed, Discrimination in the Age of Artificial Intelligence, 38 GPSolo 73, 74 (2021).

[15] Press Release, Equal Employment Opportunity Commission, EEOC Launches Initiative on Artificial Intelligence and Algorithmic Fairness (October 28, 2021).

[16] U.S. Equal Emp. Opportunity Comm’n, EEOC-NVTA-2022-2, A Technical Assistance Document on the Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees (May 12, 2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence.

[17] Sonderling, supra note 3 at 42.

[18] Draft Strategic Enforcement Plan, 88 Fed. Reg. at 1379-1380.

[19] Id. at 1383; see also id. at 1380 (stating that the “EEOC will take a targeted approach to enforcement. A targeted approach empowers Commission staff throughout the agency to direct attention and resources to the specific priorities identified in this SEP. . . .”).

[20] Id. at 1381.

[21] Id.

[22] Tutoring Provider Sued for Age Discrimination by EEOC, Bloomberg Law (May 5, 2022), https://www.bloomberglaw.com/product/blaw/bloomberglawnews/bloomberg-law-news/XB4SMDH8000000?#jcite.

[23] Draft Strategic Enforcement Plan, 88 Fed. Reg. at 1381.

[24] Technical Assistance Document, supra note 16.

[25] Timmons, supra note 8.

[26]  Draft Strategic Enforcement Plan, 88 Fed. Reg. at 1381.

[27] Rogers, supra note 14.

[28]  Draft Strategic Enforcement Plan, 88 Fed. Reg. at 1381.

[29] See id. at 1379; see also Sonderling, supra note 3 at 64, 66.

[30] Commissioner Charges and Directed Investigations, U.S. Equal Employment Opportunity Commission, https://www.eeoc.gov/commissioner-charges-and-directed-investigations.

[31] Id.

[32] Id.

[33] See e.g., Nick Niedzwiadek, EEOC muddles along absent Democratic appointees, Politico (August 8, 2022) (stating that “the Republican bloc [of Commissioners] can vote down any item brought before them — a power they have exerted at least 15 times since Jan. 20, 2021 — leaving the Democratic appointees to mainly stick to cases that pass unanimously or where they can pick off at least one Republican commissioner (Andrea Lucas being the primary target).”); see also Shira Stein, Republican Swing Vote Helps EEOC Avoid Greater Partisan Gridlock, Bloomberg Law (January 25, 2022) (“Significant EEOC policy changes that would align with Biden administration priorities are in a holding pattern while Republicans maintain a majority on the agency’s leadership panel.”).

[34] Commissioner Charges and Directed Investigations, supra note 30.

[35] See Draft Strategic Enforcement Plan, 88 Fed. Reg. at 1383.

[36] Barocas, supra note 5 at 694 (“An employer sued under Title VII may be found liable for employment discrimination under one of two theories of liability: disparate treatment and disparate impact.”).

[37] See Barocas, supra note 5 at 696.

[38] Id.

[39] Barocas, supra note 5 at 698-701; see also Alexandra N. Marlowe, Robot Recruiters: How Employers & Governments Must Confront the Discriminatory Effects of AI Hiring, 22 J. High Tech. L. 274, 291-292 (2022) (discussing how the “black box” of artificial intelligence makes the details of algorithms’ internal processes and decision-making “imperceptible” and obscures any evidence of discriminatory intent).

[40] See Jackson, supra note 9 at 315.

[41] Barocas, supra note 5 at 702-705.

[42] Id.

[43] See Jackson, supra note 9; Rogers, supra note 14.

[44] Barocas, supra note 5 at 704.

[45] Sonderling, supra note 3 at 16-17.

[46] See Barocas, supra note 5 at 729-730; Marlowe, supra note 39.

[47] Notably, there is no current federal law regulating artificial intelligence. Rogers, supra note 14.

[48] See Barocas, supra note 5; Jackson, supra note 9.

[49] Sonderling, supra note 3 at 47.

[50] Sonderling, supra note 3 at 47.

[51] Sonderling, supra note 3 at 48-49 (noting the law is criticized for its vagueness, which may harm the law’s effectiveness, along with the lack of labor and business consultation in its drafting and the confusion it has caused human resource departments).

[52] See Barocas, supra note 5 at 715-716.

[53] See supra “A.I. AND DISCRIMINATION: HOW IT HAPPENS AND THE GROWING EVIDENCE.”

[54] See Barocas, supra note 5 at 716.
