How does using artificial intelligence (AI) in the hiring process impact people with disabilities?

Summary
Discrimination may be unintended on the part of AI software developers, yet their unintentional biases may be coded into an AI system’s decision-making processes.

Introduction

Businesses implement various technologies to streamline processes and increase efficiency. One efficiency-focused technology is artificial intelligence (AI): computer software algorithms designed to imitate human thinking and decision-making. In recent years, AI has been used in the employee hiring process. Increasingly, AI is replacing human interaction and decision-making in tasks such as resume screening, interviewing, and hiring applicants. While AI can certainly make these processes more efficient and reduce the demand on human time, it is not without cost. One such cost is the potential for bias and discrimination. AI discrimination affects all protected classes, but here we specifically analyze how AI can discriminate against people with disabilities in the hiring process. We begin by looking at the Americans with Disabilities Act (ADA) framework for discrimination in hiring. Then, we review several examples of AI discrimination in the hiring process and analyze why AI technology is particularly troublesome for people with disabilities.

For purposes of this report, we recognize our charge to address ADA employment discrimination case law specific to federal HHS Region 8 (the region served by the Rocky Mountain ADA Center). Unfortunately, there is no AI-specific employment discrimination case law in the Circuit Courts that serve Region 8, so we consider cases from other parts of the country. The major points in these cases have been outlined or affirmatively recognized by the United States Supreme Court, whose rulings are binding in all U.S. regions.

ADA Framework

To better understand how AI might introduce bias or discrimination in the hiring process, we first review how hiring discrimination is addressed by the ADA. The ADA provides that “no covered entity shall discriminate against a qualified individual on the basis of disability in regard to job application procedures, the hiring, advancement, or discharge of employees.” The Court has differentiated discrimination claims of this type by whether they are disparate impact claims or disparate treatment claims, but has held that both are addressable under the ADA.

There are several clauses in the ADA that protect applicants with disabilities from disparate impact. The ADA prohibits:

  1. limiting, segregating, and classifying an applicant or employee “in a way that adversely affects” their opportunities or status because of their disability;

  2. contractual or other relationships that have the effect of disability discrimination; and

  3. “utilizing standards, criteria, or methods of administration” that have the effect of disability discrimination.

To be successful, a disparate impact claim would need to prove that the business has violated one of these clauses. The first and third clauses would likely be the easiest to prove in an AI discrimination case because AI systems generally classify and/or categorize applicants via standards and criteria coded into the AI system itself. These standards can create conditions that introduce discrimination based on disability rather than competency. For example, an applicant who uses speech recognition software for computer input may be inadvertently discriminated against by an AI system if an AI-driven application task requires timed keyboard input or is based on some measure of physical keyboard proficiency. If keyboard input speed or proficiency is directly related to an essential job function, this would be appropriate. If not, then this could be an explicit form of discrimination. Or, if an AI algorithm somehow “discerns an applicant’s physical disability, mental health or clinical diagnosis”vi this would constitute an unallowable and illegal assessment standard under federal ADA law.
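To make this concrete, here is a minimal, purely hypothetical sketch in Python; the task, threshold, and scoring rule are our own illustrative assumptions and are not drawn from any actual hiring product:

```python
# Hypothetical screening rule, NOT taken from any real hiring system.
# It illustrates how a criterion coded into an AI screener can exclude
# applicants based on disability rather than competency.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    words_per_minute: float  # measured during a timed keyboard task
    skills_score: float      # competence on actual job duties (0-100)

def screen(applicant: Applicant, typing_is_essential: bool) -> bool:
    """Return True if the applicant advances to the next round."""
    if typing_is_essential:
        # Defensible only when typing speed is an essential job function.
        return applicant.words_per_minute >= 40 and applicant.skills_score >= 70
    # Otherwise the typing criterion should be dropped entirely; keeping it
    # penalizes applicants who use speech recognition or other input aids.
    return applicant.skills_score >= 70

# An applicant using speech-to-text may register a low "typing" speed
# while being fully qualified on the job's essential functions.
candidate = Applicant(name="A. Applicant", words_per_minute=12, skills_score=92)
print(screen(candidate, typing_is_essential=True))   # False: screened out
print(screen(candidate, typing_is_essential=False))  # True: advances
```

The point of the sketch is that the discriminatory effect lives in the criterion itself: removing (or justifying, by tying it to an essential job function) the typing-speed requirement is what eliminates the disparate impact.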

To show disparate treatment, whereby “an employer treats a group of people less favorably than others because of a protected characteristic,”vii an applicant with disabilities must use the McDonnell Douglas proof mechanism.viii

The McDonnell Douglas test follows these four steps:

  1. The applicant must first “make out a prima facie case of discrimination.” To successfully establish a prima facie case, the applicant must show:

      • [They—the applicant] are within the ADA’s protected class;

      • [They] applied for the position in question;

      • [They] were qualified for that position; and

      • [The business] rejected [the applicant] under circumstances that give rise to an inference of discrimination.

  2. “If the applicant does so [establish each of the four points in step 1 above] successfully, the burden then shifts to the [business] to provide a legitimate, non-discriminatory explanation for its decision.”

  3. "Once such neutral reason is proffered, the burden reverts to the [applicant] to establish that the employer’s non-discriminatory rationale is a pretext for intentional discrimination.

  4. The court will then determine if there is a prima facie case for discrimination and whether the employer’s neutral reason is legitimate in order to determine if the applicant with disabilities was discriminated against.

In other words, before bringing a discrimination claim based on disability, the applicant must first establish that they are a person with a disability protected under the ADA, that they applied for a position for which they were qualified, and that the reason for NOT being selected or hired for the position could reasonably be attributed to discrimination. Once these conditions are met (#1 above), the burden-shifting analysis can begin (#2-#4 above). It is reasonable to conclude that an applicant with a disability would first need to make a prima facie case, then describe how an AI system likely discriminated against them. At that point, the burden would shift to the business to show that the AI denied the applicant for a non-discriminatory reason.

It is important to note that the Court does not always require that disparate treatment claims use the McDonnell Douglas framework. “For instance, if [an applicant] is able to produce direct evidence of discrimination, [they] may prevail without proving all the elements of a prima facie case.” If there is no direct evidence of discrimination, however, then the four steps of the McDonnell Douglas test must be followed.

Whether a discrimination claim would be brought against the business seeking employees or against the third party whose AI-driven hiring system the business uses under contract is unclear. Who is ultimately responsible for eliminating bias and discrimination in the hiring process? It is likely the business itself, and not the AI system. The business has a due diligence responsibility to ensure that its systems and services, including those it procures from third-party sources, are accessible and compliant with state and federal law.xv

AI Discrimination

The fact that AI systems discriminate in the hiring process is evidenced by a plethora of examples. Whittaker and colleagues explain that “[AI systems], often marketed as capable of making smarter, better, and more objective decisions, have been shown repeatedly to produce biased and erroneous outputs.”xvi This particular issue is important because “as the field progresses [. . .] we collectively need to ensure that technology tools are not channeling recruiters, managers, candidates, and employees with disabilities back into the old ways of doing things—back to belief and practices that discouraged the hiring of people with disabilities.”xvii

HireVue’s hiring system offers a clear example of AI discrimination in the hiring process. While the company has since improved its AI-driven process in positive ways (e.g., applicants can now request accommodations such as more time to answer timed questions), in its early stages, HireVue provided AI video-interviewing systems marketed to large firms as “capable of determining which job candidates will be successful workers, and which won’t, based on a remote video interview.”xviii The AI would analyze the videos by “examining speech patterns, tone of voice, facial movements, and other indicators”xix to decide which candidates should continue in the hiring process. While the process was efficient, requiring less human interaction, the downside was that “this method massively discriminates against many people with disabilities that significantly affect facial expression and voice.”xx It is also important to note “that a meaningful connection between any person’s facial features, tone of voice, and speech patterns, on one hand, and their competence as a worker, on the other, is not backed by scientific evidence.”xxi That is to say, HireVue’s system denied applicants not because they lacked competence, but because they had disabilities.

AI and Disability Discrimination

AI-driven hiring systems, like HireVue’s early process, are discriminatory because their standards, criteria, and processes are coded into algorithms by people who are neither omniscient nor unbiased. AI developers and design engineers “claim their AI systems do not discriminate against underrepresented communities because they don't use prohibited factors such as gender, race, or disability in their algorithms, and that machines don't have the unconscious bias that humans have,” but “these types of claims have been widely debunked [. . .] because of common design and implementation errors in these systems that use historical data that tends to perpetuate existing biases.”xxii

Self-driving car systems offer a related example. Consider this insight:

[Wheelchair users are often hit by] human car drivers that do not recognize [wheelchair users] as humans [. . .], yet the datasets being used to train automobile vision systems also embed similar limitations not only due to the lack of wheelchairs and scooters in training datasets, but the trainers themselves may be misrecognizing them.xxiii

While discrimination may be unintentional on the part of the AI software and design engineers, their unintentional biases are coded into the system. Whittaker and colleagues explain that people with disabilities “have been subject to historical and present-day marginalization, much of which has systemically and structurally excluded them from access to power, resources, and opportunity” and “such patterns of marginalization are imprinted in the data that shapes AI systems, and embed these histories in the logics of AI.”xxiv In short, AI systems reflect the views of their creators. That is not to say that the people who design and develop AI systems are intentionally discriminatory, but, like all people, those who develop AI systems come from a society that has historically marginalized people with disabilities.
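A minimal sketch, using invented records and a deliberately simplistic “model,” of how that imprinting happens: a system fit to historical hiring decisions simply reproduces the preferences, including the biased ones, found in its training data.

```python
# Invented historical records: (long_pauses_in_interview, hired).
# The "model" below is deliberately simplistic; it just mirrors past
# human decisions, which is enough to show how bias is reproduced.
history = [
    (False, True), (False, True), (False, True), (False, False),
    (True, False), (True, False), (True, False), (True, True),
]

def learned_hire_rate(records, long_pauses):
    """Hire rate the system 'learns' for applicants with this trait."""
    outcomes = [hired for pauses, hired in records if pauses == long_pauses]
    return sum(outcomes) / len(outcomes)

# A speech disability can produce long pauses that say nothing about job
# competence, yet any model fit to this history learns to penalize them.
print(f"long pauses:    {learned_hire_rate(history, True):.2f}")   # 0.25
print(f"no long pauses: {learned_hire_rate(history, False):.2f}")  # 0.75
```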

Disability Fluidity, Disclosure and Bias

The problem is further complicated for people with disabilities because, unlike other protected classes, disability is not always readily apparent. In other words, it is usually much easier to identify an individual's race, gender, and sometimes religion or national origin than it is to identify a person with a disability. It is more difficult to identify a person with a disability because “disability encompasses a vast and fluid number of physical and mental health conditions (such as asthma, depression, and post-traumatic stress disorder) which can come and go throughout a person’s lifetime (or even in the course of a single day).”xxv

Further, “disability has issues more in common with other ‘invisible’ minority groups in terms of lower disclosure.”xxvi That is to say, people with disabilities are less likely to intentionally share information about their disabilities. In fact, research on self-identification shows that “while 99.5% of employees responded to the gender question, and 88% answered the ethnicity question, less than 20% of employees answered the disability status question.”xxvii While there is substantial data indicating why self-identification is low among people with disabilities, we simply raise this issue to highlight why AI systems do not have enough information to avoid perpetuating unintentional biases.
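The arithmetic below illustrates the consequence; every figure except the sub-20% disclosure rate cited above is invented for illustration:

```python
# All figures except the sub-20% disclosure rate are invented.
total_applicants = 1000
applicants_with_disabilities = 150  # true figure, unknown to the system
disclosure_rate = 0.20              # consistent with the research cited above

disclosed = int(applicants_with_disabilities * disclosure_rate)  # 30
undisclosed = applicants_with_disabilities - disclosed           # 120

# Any bias audit of the "disability" group rests on the 30 applicants who
# disclosed, while 120 affected applicants are counted in the wrong group.
print(f"visible to a bias audit:   {disclosed} applicants")
print(f"invisible to a bias audit: {undisclosed} applicants")
```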

Disability, AI, and Case Law

Unfortunately, there is not much data on AI-related discrimination against people with disabilities because “gathering more detailed data about the employment of people with disabilities has simply not been a priority in the field.”xxviii There is also a severe lack of case law on the topic. In fact, at this point, there is no major case law in either Region 8 or the rest of the U.S. involving an AI system discriminating against a person with a disability in the hiring process. The lack of case law could be a good sign that there is little AI-related discrimination, but more likely it means that people with disabilities do not know they are being discriminated against or do not understand their remedies under the ADA. While it is beyond our scope to analyze those issues, it is clearly within reason that this type of discrimination exists.

Conclusion

Overall, documenting AI-related discrimination against applicants with disabilities in the hiring process is challenging. Legal discussions of AI technology and the ADA are scarce. While there have yet to be major legal decisions on the matter, the discrimination problem is clearly present. HireVue’s early system and process remain the most prominent example of how AI systems can discriminate against people with disabilities.

The discrimination does not necessarily stem from intentional biases of AI system designers and developers. Instead, it occurs primarily because AI systems may not be created to be inclusive of all abilities. This happens for several reasons. First, disabilities, unlike other protected characteristics, are dynamic: they are not always permanent and often manifest only in particular situations. To address this, AI designers would have to account for each potential instance or expression of ability, which is not feasible. Second, people with disabilities often prefer to keep their disabilities private, which means AI designers will likely not have enough information to ensure their systems work effectively with all potential applicants. Finally, AI designers come from the same society in which people with disabilities have historically been marginalized, meaning they may share the same implicit societal biases.

The best way to improve AI-related hiring outcomes and reduce bias and discrimination against applicants with disabilities is to ensure that hiring standards and criteria focus on essential job functions. Then, ensure that AI algorithms measure applicant competence related to those job functions. Finally, ensure that the entire talent acquisition and retention process adheres to state and federal laws AND to hiring best practices adopted by the human resource industry.
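As one illustration of such a best practice, the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures treats a selection rate for a protected group below 80% of the highest group’s rate as evidence of possible adverse impact. The sketch below applies that check to invented applicant counts; a real audit would also have to contend with the disclosure problem discussed earlier:

```python
# Minimal adverse-impact check using the four-fifths rule; the applicant
# counts below are invented for illustration.
def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

def four_fifths_check(group_rate: float, highest_rate: float) -> bool:
    """True if the group's rate is at least 80% of the highest rate."""
    return group_rate / highest_rate >= 0.8

rate_disability = selection_rate(selected=6, applied=30)  # 20%
rate_others = selection_rate(selected=180, applied=600)   # 30%

if four_fifths_check(rate_disability, rate_others):
    print("four-fifths check passed")
else:
    # 0.20 / 0.30 is roughly 0.67, below the 0.8 threshold.
    print("possible adverse impact: review the selection criteria")
```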

References

i Zielinski, D. (May 22, 2020). "Addressing Artificial Intelligence-Based Hiring Concerns." Society for Human Resource Management (SHRM). https://www.shrm.org/hr-today/news/hr-magazine/summer2020/pages/artificial-intelligence-based-hiring-concerns.aspx. Retrieved September 22, 2021.

ii Friedman, G.D. & McCarthy, T. (October 1, 2020). "Employment Law Red Flags in the Use of Artificial Intelligence in Hiring." American Bar Association. https://www.americanbar.org/groups/business_law/publications/blt/2020/10/ai-in-hiring/. Retrieved September 22, 2021.

iii 42 U.S.C. § 12112(a).

iv Moss, H. (2021). "Screened Out Onscreen: Disability Discrimination, Hiring Bias, and Artificial Intelligence." Denver Law Review 98, no. 4, 797 (Summarizing Raytheon Co. v. Hernandez, 540 U.S. 44, 55 (2003)).

v Ibid., 798 (Quoting and analyzing 42 U.S.C. § 12112 (2009); 29 C.F.R. § 1630.5-.7 (2019)).

vi Friedman & McCarthy (October 1, 2020).

vii Ibid.

viii Malone v. Greenville County, 2008 U.S. Dist. LEXIS 86520, 26 (Quoting Heiko v. Colombo Savings Bank, F.S.B., 434 F.3d 249, 258 (4th Cir. 2006)).

ix Ibid., 27 (Quoting Anderson v. Westinghouse Savannah River Co., 406 F.3d 248, 268 (4th Cir. 2005)).

x Ibid. (Quoting Heiko v. Colombo Sav. Bank, F.S.B., 434 F.3d 249, 258 (4th Cir. 2006) & Brown v. McLean, 159 F.3d 898, 902 (4th Cir. 1998)).

xi Ibid. (Quoting Tex. Dep't of Cmty. Affairs v. Burdine, 450 U.S. 248, 254, 101 S. Ct. 1089, 67 L. Ed. 2d 207 (1981); Bryant v. Aiken Reg'l Med. Ctrs., Inc., 333 F.3d 536, 545 (4th Cir. 2003)).

xii Ibid. (Quoting Reeves v. Sanderson Plumbing Prods., Inc., 530 U.S. 133, 143, 120 S. Ct. 2097, 147 L. Ed. 2d 105 (2000)).

xiii Ibid.

xiv Swierkiewicz v. Sorema N.A., 534 U.S. 506, 512, 122 S. Ct. 992, 152 L. Ed. 2d 1 (2002).

xv Friedman & McCarthy (October 1, 2020).

xvi Whittaker, M., et al. (2019). "Disability, Bias, and AI." AI Now Institute, 7.

xvii Fruchterman, J. & Mellea, J. (2018). "Expanding Employment Success for People with Disabilities." Benetech.

xviii Whittaker et al. (2019), 15.

xix Ibid.

xx Fruchterman & Mellea (2018), 3.

xxi Whittaker et al. (2019), 6.

xxii Fruchterman & Mellea (2018), 3.

xxiii

xxiv Whittaker et al. (2019), 8.

xxv Ibid., 10.

xxvi Fruchterman & Mellea (2018), 5.

xxvii Ibid.

xxviii Ibid., 5.