Is It Discrimination if an AI Tool Rejects an Application?
Companies increasingly use AI tools to screen and evaluate résumés and cover letters, search online platforms and social media networks for potential candidates, and examine the speech patterns and facial expressions of job applicants during interviews. Companies also use AI to recruit and hire employees, create performance reviews, and oversee employee activities and performance. However, AI bias can happen at any stage of the employer-employee relationship – from hiring to firing and everything in between – and may result in a discrimination lawsuit.
Discrimination typically refers to unfair treatment based on certain characteristics such as race, gender, age, religion, or disability. If an AI tool rejects an application based on a legally protected characteristic, or applies its criteria in a way that unfairly disadvantages a protected group, then yes, it could be considered discriminatory.
However, the issue with AI tools is often more nuanced. Discrimination can occur if the AI tool’s decision-making process is biased due to the data it was trained on or the algorithms it uses. For example, if the training data used to develop the AI tool is biased against certain groups, the tool may produce discriminatory outcomes even if the developers never intended them.
How is AI trained?
AI is trained using a process called machine learning, a subset of artificial intelligence. There are various techniques within machine learning, but one of the most common methods used to train AI models is supervised learning. In supervised learning, the AI model is trained on a dataset that consists of input-output pairs. During training, the model learns to map inputs to outputs by adjusting its internal parameters until it can make accurate predictions on new, unseen data.
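To make this concrete, the short Python sketch below (using scikit-learn) shows supervised learning end to end: a model is fit on input-output pairs and then asked to predict an output for a new, unseen input. In a hiring context, the inputs might be features extracted from résumés and the outputs past hiring decisions. The feature names and data here are entirely invented for illustration, not any real vendor’s tool.

```python
# A minimal supervised-learning sketch with scikit-learn.
# Hypothetical hiring data: inputs are résumé features, outputs are past decisions.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each row: [years_of_experience, number_of_certifications] (invented features)
X = [[1, 0], [2, 1], [3, 0], [5, 2], [7, 1], [8, 3], [10, 2], [12, 4]]
y = [0, 0, 0, 1, 1, 1, 1, 1]  # past decisions: 1 = advanced, 0 = rejected

# Hold out some pairs to check predictions on data the model has not seen
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)  # training: adjust internal parameters to map inputs to outputs

print(model.score(X_test, y_test))  # accuracy on the unseen examples
print(model.predict([[6, 2]]))      # predicted decision for a brand-new applicant
```

Whatever the model learns, it learns from those input-output pairs; if the historical decisions encoded in the outputs were biased, the fitted model will tend to reproduce that bias.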
What is AI bias?
Bias in AI is harmful to society. It can influence decisions regarding whether someone is accepted into a school, approved for a mortgage, or allowed to rent an apartment.
According to a recent IBM article, AI bias refers to “AI systems that produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequality.” Significant factors that contribute to AI bias include:
- Training data: AI systems learn their decision-making from training data. When those data overrepresent or underrepresent certain groups, the results can be biased. For example, a facial recognition algorithm trained on data that overrepresents white people might produce racially biased results for people of color. Mislabeled data, or data that reflect existing inequalities, can compound these issues. If an AI recruiting tool was trained on a dataset in which certain applicant qualifications were improperly labeled, the tool might reject qualified candidates who possess the necessary skills but whose résumés it did not correctly interpret (a code sketch after this list illustrates how biased training data produces biased predictions).
- Programming errors: AI bias may also result from coding mistakes, e.g., when a developer inadvertently (or consciously) overweights certain factors in algorithmic decision-making because of their own biases. For example, the algorithm might rely on indicators like income or vocabulary and thereby inadvertently discriminate against people of a certain race or gender.
- Cognitive bias: As people process information and use it to make judgments, they are predictably influenced by their own experiences and preferences. These biases may therefore be built into AI systems through the selection or weighting of the data. A report from the National Institute of Standards and Technology (NIST) notes that cognitive bias is quite common and that “human and systemic institutional and societal factors are significant sources of AI bias … and are currently overlooked.”
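To illustrate the training-data problem described above, the sketch below (Python with NumPy and scikit-learn, entirely synthetic data) trains a model on historical hiring decisions that penalized one group despite identical qualifications. The model faithfully learns the penalty, so its predicted selection rates diverge sharply between the two groups. The groups, penalty, and features are hypothetical assumptions made for this example.

```python
# Synthetic demonstration: biased training labels produce biased predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Two groups with identical qualification ('skill') distributions
group = rng.integers(0, 2, n)   # 0 = group A, 1 = group B
skill = rng.normal(50, 10, n)

# Historical decisions: same skill bar, but group B was penalized 10 points
hired = (skill - 10 * group + rng.normal(0, 5, n)) > 50

# Train on the biased history; 'group' is just another input column
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The fitted model reproduces the historical penalty
preds = model.predict(X)
for g, name in [(0, "A"), (1, "B")]:
    print(f"predicted selection rate, group {name}: {preds[group == g].mean():.2f}")
```

Note that simply deleting the group column would not necessarily fix this: correlated proxy features, such as the income or vocabulary indicators mentioned above, can let a model infer group membership indirectly.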
To better identify and manage the effects of bias in AI systems, NIST recommends broadening the scope of where we look for the sources of common biases. The researchers suggest looking beyond the processes and data used to train AI systems to the wider societal factors that shape how technology is developed.
Litigation surrounding AI bias
Lawsuits have been filed regarding AI bias and hiring discrimination. Here are some examples:
The Equal Employment Opportunity Commission (EEOC) settled its first AI hiring discrimination lawsuit in August 2023. In Equal Employment Opportunity Commission v. iTutorGroup, Inc., the EEOC sued three companies that provided tutoring services under the “iTutorGroup” brand name. The EEOC’s lawsuit alleged that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (ADEA) because the AI hiring program it used automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. As a result, the program screened out over 200 applicants because of their age. In the settlement agreement, iTutorGroup agreed to pay $365,000 to the group of automatically rejected job seekers, implement antidiscrimination policies, and provide training to ensure compliance with equal employment opportunity laws.
In another case, Mobley v. Workday, Inc., the plaintiff, a disabled African-American man over the age of 40, alleged that Workday provides companies with algorithm-based applicant screening technology that illegally discriminates against job seekers on the basis of race, age, and disability (all protected classes). According to the complaint, the software violates Title VII of the Civil Rights Act of 1964, the Civil Rights Act of 1866, the ADEA, and the ADA Amendments Act of 2008. Although the court granted Workday’s motion to dismiss the case, the plaintiff filed an amended complaint that included more details to support his claim.
At Buckley Bala Wilson Mew LLP, we believe that equality in the workplace matters. Our team holds employers accountable when their actions or policies put the civil rights of employees in jeopardy. With an unwavering commitment to our clients, we have earned our reputation as leaders in employment law and civil rights litigation – not only from the cases we’ve won, but from the cases we take. If you need help, we will advocate for you.
Buckley Bala Wilson Mew LLP is a leading employment and civil rights law firm serving individuals throughout Georgia. To learn more about our services or to request a consultation with one of our Atlanta discrimination attorneys, please call or contact us today.