AI in Retail and an Introduction to the EEOC for Businesses

RETAIL INDUSTRY 2021 YEAR IN REVIEW

In October 2021, the Equal Employment Opportunity Commission (EEOC) announced an initiative to ensure that artificial intelligence (AI) used in the workplace does not lead to violations of federal anti-discrimination laws. The EEOC, through an internal working group, plans to gather information about the design, adoption, and impact of hiring and other employment technologies, launch listening sessions with key stakeholders, and issue technical assistance offering guidance on algorithmic fairness and the use of AI in employment decisions. The EEOC press release announcing the initiative can be viewed here.

The announcement should come as no surprise to those watching the moves of the EEOC. Indeed, the EEOC’s interest in AI dates back to an October 2016 EEOC public meeting on the use of big data in the workplace.

The employment lawyers, EEOC commissioners, and computer scientists present at that meeting agreed that AI should not be seen as a panacea for employment discrimination. If not carefully implemented and monitored, the technology can introduce, and even exacerbate, unlawful biases. This is because algorithms typically rely on a set of human inputs, such as resumes from high-performing employees. If those inputs lack diversity, the algorithm can reinforce existing institutional biases at breakneck speed.

Recently, EEOC commissioners noted that the agency is wary of haphazard implementations of AI that could perpetuate or accelerate biases in the workplace. As a result, the EEOC may consider the use of commissioner’s charges, investigations initiated by the agency rather than by an employee’s allegation of discrimination, to ensure that employers do not use AI in an unlawful manner that violates Title VII of the Civil Rights Act (Title VII) or the Americans with Disabilities Act (ADA).

Given the EEOC’s spotlight on AI, retailers using these technologies should take steps to minimize risk and maximize benefit. This article offers insight into the merits and potential pitfalls of employment-related AI technologies, provides a refresher on commissioner’s charges, and suggests steps retailers can take to reduce the risk of becoming the target of an EEOC investigation.

Benefits of AI for Retailers

The AI technology landscape continues to grow. Some retailers use automated candidate-sourcing technology to scan social media profiles and determine which job openings should be advertised to particular candidates. Others use video interview software to analyze facial expressions, body language, and tone to assess whether a candidate exhibits preferred character traits. The use of AI, however, is not limited to the hiring process. Some retailers use AI software for workforce optimization, allowing the software to build employee schedules that account for a multitude of variables, such as employee availability, local or regional wage-and-hour laws, business initiatives, and seasonal fluctuations.

Regardless of the precise tool, AI is being touted to retailers as a technological breakthrough that delivers simplicity, improves candidate quality, drives efficiency, and enhances diversity.

Time is perhaps the most obvious of these benefits. AI can, for example, save recruiting departments countless hours otherwise spent sifting through resumes for acceptable candidates. This is especially true for large retailers that receive thousands of applications every year. The freed-up time can be spent on more productive activities.

AI can also expose retailers to untapped pools of talent, and with more applicants, retailers can expect more diverse and skilled new recruits. Additionally, removing or reducing human decision-making can help eliminate unconscious, or even intentional, human biases in hiring, scheduling, and other employment-related decisions.

Potential for discrimination

Although AI promises significant rewards, it carries considerable risks. Even if an employer deploys an AI tool with no intention of unlawfully discriminating, that does not absolve the employer of liability. The law contemplates both intentional discrimination (disparate treatment) and unintentional discrimination (disparate impact), and the biggest risk posed by AI is disparate impact claims. In such claims, intent is irrelevant; the question is whether a facially neutral policy or practice (e.g., the use of an AI tool) has a disparate impact on a particular protected group based on, for example, race, color, national origin, sex, or religion.

The diversity of AI tools means that each type of technology carries its own potential for discrimination. A common thread, however, is the possibility of input data creating a discriminatory impact. Many algorithms rely on a set of inputs to define their search parameters. For example, a resume screening tool is often set up by uploading sample resumes of high-performing employees. If those resumes favor a particular race or gender and the tool is tasked with finding comparable resumes, the technology will likely reinforce the existing homogeneity.

Some examples are less obvious. The sample resumes may come from employees living in ZIP codes that are home primarily to one race or color. An AI tool may then favor those ZIP codes, disfavoring applicants from ZIP codes with a different racial makeup. Older applicants may be disadvantaged by an algorithm’s preference for “.edu” email addresses. In short, if a workforce is largely composed of one race or gender, the tool’s reliance on past hiring decisions could negatively impact applicants of another race or sex.
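To make the mechanism concrete, consider the minimal sketch below of a hypothetical similarity-based resume screener. Everything in it is invented for illustration: the feature names (zip_code, email_domain), the sample resumes, and the scoring logic; real vendor tools are far more complex. The failure mode it demonstrates, however, is the one described above: features that look neutral can act as proxies for protected characteristics.

```python
from collections import Counter

# Hypothetical "high performer" resumes used to configure the tool.
# Note the hidden homogeneity: every example shares one ZIP prefix
# and a university email domain.
training_resumes = [
    {"zip_code": "30301", "email_domain": "university.edu"},
    {"zip_code": "30305", "email_domain": "university.edu"},
    {"zip_code": "30309", "email_domain": "college.edu"},
]

def score(applicant, training):
    """Score an applicant by similarity to the training examples.

    The tool never sees race, sex, or age, yet ZIP code and email
    domain can act as proxies for those very characteristics.
    """
    zip_prefixes = Counter(r["zip_code"][:3] for r in training)
    edu_share = sum(
        r["email_domain"].endswith(".edu") for r in training
    ) / len(training)

    points = 0.0
    # Applicants from the dominant ZIP prefix inherit a built-in advantage.
    points += zip_prefixes[applicant["zip_code"][:3]] / len(training)
    # A preference for ".edu" addresses can disadvantage older applicants.
    if applicant["email_domain"].endswith(".edu"):
        points += edu_share
    return points

# Two applicants with identical qualifications on paper:
local_grad = {"zip_code": "30308", "email_domain": "school.edu"}
outsider = {"zip_code": "60628", "email_domain": "gmail.com"}

print(score(local_grad, training_resumes))  # 2.0: matches the mold
print(score(outsider, training_resumes))    # 0.0: screened out by proxies
```

Notably, race, sex, and age appear nowhere in the code, which is exactly why disparate impact, rather than disparate treatment, is the theory most likely to fit.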

Commissioner’s charges as a tool to investigate AI-based discriminatory impacts

The potential for AI to reject hundreds or thousands of applicants based on biased inputs or faulty algorithms is attracting the attention of the EEOC. And because job applicants are often unaware that they have been barred from certain positions by faulty or miscalibrated AI software, the EEOC can rely on commissioner’s charges as an important tool to uncover unlawful bias under Title VII and the ADA, most likely under the rubric of disparate impact discrimination.

42 U.S.C. § 2000e-5(b) authorizes the EEOC to investigate charges of discrimination “filed by or on behalf of a person claiming to be aggrieved, or by a member of the Commission” (emphasis added). Unlike employee-initiated charges, commissioner’s charges can be proposed by “any person or organization.” Indeed, it is their origin that distinguishes commissioner’s charges from those initiated by employees.

The EEOC has explained that commissioner’s charges generally arise when 1) a field office learns of possible discrimination from local community leaders, direct observation, or a state fair employment practices agency; 2) a field office learns of a possible pattern or practice of discrimination during its investigation of an employee’s charge; or 3) a commissioner becomes aware of possible discrimination and asks a field office to investigate.

EEOC field offices refer proposed commissioner’s charges to the EEOC Executive Secretariat, which distributes those proposals to the commissioners on a rotating basis. A commissioner then determines whether to sign a proposed charge, authorizing the field office to conduct an investigation. Commissioners, however, can bypass this referral procedure and file a charge directly with a field office.

Once filed, commissioner’s charges follow the same procedure as employee charges. The respondent is notified of the charge, and the EEOC requests documents and/or interviews with company personnel. If necessary, the agency can use its administrative subpoena power and seek judicial enforcement. EEOC regulations provide that the commissioner who signed the charge must abstain from making the determination in the case.

If the agency ultimately determines that there is reasonable cause to believe that discrimination has occurred, the EEOC will generally attempt conciliation with the employer. The same remedies available for Title VII disparate impact claims, namely equitable relief in the form of back pay and/or injunctive relief, are available to aggrieved parties.

Measures to mitigate the risks of discrimination

Retailers should be aware of the EEOC’s focus on AI and of the availability of commissioner’s charges, which can uncover disparate impacts without any employee-filed charge. To avoid becoming the target of such investigations, retailers should consider the following steps:

First, retailers considering an AI tool should require vendors to disclose enough information to explain how the software makes employment decisions. Vendors are often unwilling to disclose proprietary information about how their tools operate and interpret data. However, retailers may ultimately be responsible for the results, so it is important that they understand how applicants are selected. At a minimum, a retailer should obtain strong indemnification rights.

Second, even after obtaining assurances and indemnification, retailers should consider auditing the AI tool before relying on it to make decisions. To do this, retailers need to be able to identify which applicants the tool rejected, not just those it accepted. Retailers should therefore verify with vendors that decision data is retained so that they can properly audit the tool and review the results for adverse impact on individuals in protected classes. This audit should take place regularly, not just at initial implementation.
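As a hedged illustration of what such an audit might look like, the sketch below applies the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any group that is less than 80% of the highest group’s rate is commonly treated as evidence of potential adverse impact. The decision-log format and group labels here are hypothetical, and a real audit would be designed with counsel around whatever data the vendor actually retains.

```python
from collections import defaultdict

def adverse_impact_ratios(records):
    """records: iterable of (group, selected) pairs, selected a bool.

    Returns each group's selection rate divided by the highest
    group's selection rate (the "adverse impact ratio").
    """
    totals = defaultdict(int)
    hires = defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        if selected:
            hires[group] += 1

    rates = {g: hires[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical export from the AI tool's decision log.
decisions = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 30 + [("group_b", False)] * 70
)

for group, ratio in adverse_impact_ratios(decisions).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

Falling below the four-fifths threshold is not itself a legal violation, but it is the kind of result that should prompt a closer review before the tool is used for live decisions.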

Third, and perhaps most importantly, retailers should ensure that the input or training data on which the tool relies (e.g., resumes of model employees) does not reflect a homogeneous group. If the input data reflects a diverse workforce, a properly functioning algorithm should, in theory, replicate or enhance that diversity.

Finally, because this is an emerging area, retailers should keep abreast of developments in the law. When in doubt, companies should consult an employment lawyer to decide whether and how to use AI to improve the productivity, diversity, and capabilities of their workforce.

Copyright © 2022, Hunton Andrews Kurth LLP. All rights reserved. National Law Review, Volume XII, Number 36

