Avoiding ADA Violations When Using AI Employment Technology

Artificial intelligence offers the prospect of freeing decisions from human bias. Too often, however, it may end up unwittingly reflecting and reinforcing those biases despite its presumed objectivity.

On May 12, the Equal Employment Opportunity Commission and the Department of Justice released guidance on avoiding discrimination against people with disabilities when using artificial intelligence to make employment decisions. The EEOC’s guidance is part of its broader initiative to ensure that AI and “other emerging tools used in hiring and other employment decisions comply with the federal civil rights laws that the agency enforces.”

AI-Based Employment Technologies May Violate the ADA

Many companies are now using AI-based technologies for hiring, promotion, and other employment decisions. Examples include tools that filter applications or resumes for certain keywords; video interview software that assesses facial expressions and speech patterns; and software that assesses “job fit” based on personality, skills, or competencies.
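As a purely hypothetical illustration of the first category, the sketch below shows how a rigid keyword screen works: applications missing the required phrases are rejected before any human review. The keywords and resume text are invented for this example.

```python
# A minimal, hypothetical sketch of a keyword resume filter. Applications
# missing the required phrases are rejected automatically; the keywords
# and resume text are invented for illustration only.

REQUIRED_KEYWORDS = {"customer service", "inventory management"}

def passes_keyword_screen(resume_text: str) -> bool:
    """Return True only if every required phrase appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

resume = "Five years of client support and stockroom oversight experience."
print(passes_keyword_screen(resume))  # False: same experience, different wording
```

A candidate whose equivalent experience is simply worded differently never reaches a human reviewer, which is why the legal analysis below focuses on how these tools screen people out.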

The Americans with Disabilities Act prohibits companies with 15 or more employees from discriminating on the basis of disability. Among other things, the ADA requires employers to provide reasonable accommodations to people with disabilities, including during the application process, unless doing so would create an undue hardship. An AI-based system can violate the ADA if it improperly screens out a person because of a disability, whether or not the employer intended that outcome.

First, the employment technology itself may be difficult for a person with a disability to use. Employers must therefore provide reasonable accommodations to candidates or employees, if necessary, so that the technology can assess them fairly and accurately. If a computer-based test requires applicants to write an essay, for example, the employer may need to offer speech-recognition software to visually impaired applicants.

Second, technology that seeks medical or disability-related information may also implicate the ADA. Inquiries related to employment decisions that are “likely to elicit information about a disability” – such as questions about a candidate’s physical or mental health – are particularly problematic.

Finally, AI-powered technologies could violate the ADA by weeding out candidates with disabilities who could do the job with or without a reasonable accommodation. For example, hiring technologies that predict potential performance by comparing applicants to successful current employees may unintentionally exclude fully qualified applicants with disabilities (which can also happen with applicants from other protected classes if employers are not careful).
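To make that mechanism concrete, the following deliberately simplified sketch ranks applicants by similarity to incumbent “top performers.” All features and numbers are invented; the point is that a disability-correlated feature, such as a resume gap from medical leave, can drive the ranking even though it says nothing about ability to do the job.

```python
# Hypothetical sketch: a screener that ranks applicants by similarity to
# current "successful" employees. All features and values are invented.

from statistics import mean

# Incumbent top performers: (typing_speed_wpm, resume_gap_months)
incumbents = [(75, 0), (80, 1), (70, 0), (85, 2)]

def similarity_score(applicant):
    """Higher score = closer to the incumbents' average profile."""
    avg_profile = [mean(values) for values in zip(*incumbents)]
    distance = sum(abs(a - b) for a, b in zip(applicant, avg_profile))
    return -distance

# Applicant A types more slowly (uses assistive technology) and has a
# resume gap from medical leave; neither fact measures job ability.
applicants = {"A": (40, 18), "B": (78, 1)}

ranking = sorted(applicants,
                 key=lambda name: similarity_score(applicants[name]),
                 reverse=True)
print(ranking)  # ['B', 'A']: A ranks last despite being fully qualified
```

Nothing in this code mentions disability, yet disability-correlated features determine the outcome, which is why the guidance stresses measuring only qualifications that are truly necessary for the job.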

Avoiding ADA Violations

Whether AI-based employment technology is developed in-house or purchased, employers must ensure that they are not unlawfully screening out people with disabilities. As the EEOC explains, employers should only “develop and select tools that measure abilities or qualifications that are truly necessary for the job, even for people who are entitled to a reasonable accommodation on the job.”

An employer that develops its own tool may be able to protect itself by consulting experts on various types of disabilities throughout the development process – a reminder that diversity of backgrounds among those who develop and review AI guards against bias.

An employer using purchased technology can still be held liable if the tool’s evaluation process violates the ADA. The guidance suggests speaking with the vendor to ensure appropriate precautions against disability discrimination.

As we explained in a Bloomberg Law Perspective, AI systems are often made up of multiple components created by different people or companies. To truly understand the risk posed by a developer’s product, it may be necessary to “peel the onion” layer by layer. Procurement contracts should clearly allocate responsibility for risk assessment and mitigation, and buyers should verify the rigor of vendors’ risk-management processes.

The EEOC recommends “promising practices” to avoid ADA violations. Employment technology should: (a) make clear that reasonable accommodations are available for people with disabilities; (b) provide clear instructions for requesting accommodations; and (c) ensure that requesting a reasonable accommodation does not diminish the applicant’s opportunities.

The agency also suggests that, prior to the assessment, an employer provide comprehensive information about the hiring technology, including (i) the traits or characteristics the tool is designed to measure, (ii) the methods by which they are measured, and (iii) factors that may affect the assessment.

The Larger Context

The EEOC and DOJ guidance on the ADA is part of a rising tide of algorithm and AI regulation around the world. The Federal Trade Commission is planning regulations “to curb lax security practices, limit invasions of privacy, and ensure that algorithmic decision-making does not result in unlawful discrimination.” The White House Office of Science and Technology Policy is formulating an “AI Bill of Rights.”

States and localities have also begun to more aggressively oversee algorithmic decision-making. Illinois requires employers that use AI to evaluate video interviews to notify applicants and obtain their consent. New York City recently passed its own law requiring automated employment decision tools to undergo an annual “bias audit” by an independent auditor.
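New York City’s rules spell out the required metrics in detail; as a rough illustration of the kind of calculation such a bias audit involves, the sketch below computes each group’s selection rate and its impact ratio relative to the most-selected group, flagging ratios below the four-fifths benchmark familiar from the EEOC’s Uniform Guidelines. The data and the 0.8 screen are illustrative assumptions, not the law’s exact requirements.

```python
# Illustrative sketch of an impact-ratio calculation of the sort used in
# bias audits. Group labels, counts, and the 0.8 threshold are assumptions,
# not the exact requirements of any particular law.

def impact_ratios(outcomes):
    """outcomes: {group: (selected, total)} -> {group: impact ratio}."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items()}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

audit_data = {"group_1": (48, 100), "group_2": (30, 100)}
for group, ratio in impact_ratios(audit_data).items():
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths rule as a rough screen
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```

Here group_2’s selection rate is 62% of group_1’s, so the tool would be flagged for closer review under this rough screen.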

AI-enabled job application tools may also be subject to privacy laws, including those governing automated decision-making, the collection of biometric or other personal information, or the use of facial recognition technology.

Overseas, the European Union is considering a comprehensive AI law, which was proposed in April 2021. The UK government will present its AI governance plan this year.

In addition, the Cyberspace Administration of China in January adopted its provisions on the management of algorithmic recommendations in internet information services and is finalizing additional regulations on algorithmically generated content, including technologies such as virtual reality, text generation, speech synthesis, and “deep fakes.” Brazil is also in the process of developing AI regulations.

The EEOC and DOJ guidance reminds us that even long-standing laws like the ADA have a lot to say about the proper use of advanced technologies.

This article does not necessarily reflect the views of the Bureau of National Affairs, Inc., publisher of Bloomberg Law and Bloomberg Tax, or its owners.


Author Information

Allon Kedem is a partner in the Arnold & Porter Appellate and Supreme Court practice. He has appeared 12 times before the Supreme Court of the United States. Previously, he served as an Assistant to the Solicitor General at the Department of Justice, where he worked on cases involving constitutional, statutory, and administrative law.

Peter Schildkraut is co-head of Arnold & Porter’s technology, media, and telecommunications industry team and provides strategic advice on AI, spectrum, broadband, and other TMT regulatory issues. He represents clients in regulatory, administrative, and transactional matters, and advises on regulatory compliance and internal investigations.

Josh Alloy is an attorney in the Labor and Employment practice of Arnold & Porter, where he handles all aspects of labor and employment matters, including day-to-day counseling, transactions, and litigation. He advises and defends clients in matters involving discrimination, harassment, retaliation, and complex leave and disability issues.

Alexis Sabet is a partner in the Corporate & Finance practice of Arnold & Porter, focusing primarily on mergers and acquisitions.


