A risk-based approach to AI procurement

Organizations should tailor contractual requirements for purchased AI systems based on risk levels.

The responsible acquisition and deployment of artificial intelligence (AI) systems is a complex undertaking.

Through our work at the Responsible AI Institute, we helped the US Department of Defense’s Joint Artificial Intelligence Center (JAIC) develop foundational procurement protocols that embody the agency’s AI ethics principles. We have also supported companies in the financial services and healthcare sectors in developing their responsible AI sourcing processes.

While each context poses different and significant issues, our experience suggests that tailoring contractual requirements for purchased AI systems to the level of risk each system creates is an effective way to address three major challenges organizations face in responsibly sourcing and deploying AI systems: developing responsible organizational AI capacity, navigating legal uncertainty, and addressing general information technology procurement issues.

To use this risk-based approach, an organization must assign a risk level to an AI system, such as high, medium, or low, based on the results of an AI impact assessment the organization conducts during the first phase of the procurement process.

If an organization purchases an already developed system, the results of the AI impact assessment can be provided as part of the bid. If an organization is creating an AI system with a vendor’s help, it should perform an AI impact assessment once the details of the system are known. The risk level of an AI system should reflect the risks to those affected by the system as well as the risks to the organization.

While the content of an organization’s AI impact assessment will vary by organization and context, it should always address accountability; robustness, safety, and security; bias and fairness; system operations; explainability and interpretability; and consumer protection. These core categories form the basis of our responsible AI implementation framework.
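The mapping from assessment categories to an overall risk level can be sketched in code. The category names, the 0–10 scoring scale, and the thresholds below are illustrative assumptions, not part of any published rubric; a real impact assessment would define its own scoring scheme.

```python
# Hypothetical category names, scale, and thresholds for illustration only.
CATEGORIES = [
    "accountability",
    "robustness_safety_security",
    "bias_and_fairness",
    "system_operations",
    "explainability_interpretability",
    "consumer_protection",
]

def assign_risk_level(scores: dict) -> str:
    """Map per-category impact scores (0-10) to an overall risk level.

    Uses the worst single category, on the assumption that a severe
    impact in any one area should drive the overall risk level.
    """
    missing = [c for c in CATEGORIES if c not in scores]
    if missing:
        raise ValueError(f"assessment incomplete, missing: {missing}")
    worst = max(scores[c] for c in CATEGORIES)
    if worst >= 7:
        return "high"
    if worst >= 4:
        return "medium"
    return "low"
```

Taking the maximum rather than the average is a deliberate (assumed) design choice: averaging could let a severe fairness impact be masked by low scores elsewhere.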

Once assigned, the risk level of an AI system should guide the rest of the procurement and deployment process (see figure). For example, while all AI systems must undergo testing before and during deployment, the scope of testing, the approval authority that gives the go-ahead for deployment, the frequency of testing during deployment, and the documentation requirements, such as the level of detail in the AI stewardship plan and the frequency of its updates, should all be determined by the risk level of the AI system.
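One way to operationalize this is a control matrix keyed by risk level. Every entry below (the specific approval authorities, cadences, and stewardship-plan tiers) is a hypothetical example of how such a matrix might look, not a prescription from any standard.

```python
# Hypothetical control matrix: how a risk level could parameterize
# testing scope, approval authority, monitoring cadence, and
# documentation requirements for the rest of the procurement process.
CONTROLS = {
    "high": {
        "testing_scope": "full independent audit",
        "approval_authority": "executive review board",
        "monitoring_cadence_days": 30,
        "stewardship_plan": "detailed, updated quarterly",
    },
    "medium": {
        "testing_scope": "internal validation suite",
        "approval_authority": "business-unit owner",
        "monitoring_cadence_days": 90,
        "stewardship_plan": "standard, updated semiannually",
    },
    "low": {
        "testing_scope": "vendor-supplied test results",
        "approval_authority": "procurement lead",
        "monitoring_cadence_days": 180,
        "stewardship_plan": "summary, updated annually",
    },
}

def controls_for(risk_level: str) -> dict:
    """Look up the procurement controls implied by a risk level."""
    if risk_level not in CONTROLS:
        raise KeyError(f"unknown risk level: {risk_level!r}")
    return CONTROLS[risk_level]
```

Keeping the matrix as data rather than scattering the rules through procurement documents makes it easy to review, version, and attach to contracts.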

Responsible AI programs in most organizations are at an early stage of maturity. Because the operation of AI systems can be unpredictable and difficult to understand, responsible deployment of AI requires an organization to develop new capabilities, often formalized as contractual requirements. These can take various forms, including assessments such as an AI impact assessment, documentation such as the AI stewardship plan, governance processes, policy frameworks, and training programs.

As an organization develops responsible AI capability, adopting a risk-based approach to AI procurement can promote a thoughtful and measured understanding of AI among the different parts of the organization, without exaggerating either the benefits or the risks of AI systems.

Since AI systems are generally not subject to comprehensive regulation and industry-specific AI laws often lack detail, the legal requirements applicable to an AI system can be difficult to determine.

For example, New York City legislation requires any automated hiring system used on or after January 1, 2023 to undergo a bias audit consisting of an “impartial evaluation by an independent auditor,” including testing to assess potential disparate impact on certain groups, but it does not further specify the types of discrimination to be tested for, the criteria to be used, or the frequency of testing.
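The disparate-impact testing such audits involve typically compares selection rates across groups. A minimal sketch, using the widely cited “four-fifths” benchmark from US federal employment guidelines as the flagging threshold (the group names and counts below are invented for illustration):

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected_count, total_count)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def impact_ratios(outcomes: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Illustrative counts only: 50 of 100 applicants selected in one group,
# 30 of 100 in another.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
ratios = impact_ratios(outcomes)

# A ratio below 0.8 (the "four-fifths" benchmark) flags potential
# disparate impact worth investigating; it is a screening heuristic,
# not a legal determination.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group_b’s impact ratio is 0.3 / 0.5 = 0.6, so it would be flagged for further review. An actual audit under the New York City law must follow the rules issued by the city’s enforcement agency, which go beyond this sketch.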

By integrating regulatory risk considerations into the initial AI impact assessment, organizations can consider the legal and compliance implications of deploying an AI system early in the procurement process, thus reducing the need for costly and time-consuming interventions later in the system’s life cycle.

In addition to illuminating regulatory risks to an organization, careful examination of proposed and enacted AI-specific laws and regulations provides insight into the potential harms to people that regulators seek to address. The risks of these potential harms must also be carefully factored into the AI impact assessment.

For example, an organization’s AI impact assessment for a hiring-related AI system should assess its compliance with the Equal Employment Opportunity Commission’s guidance on how these systems may violate the Americans with Disabilities Act, Illinois’ notice-and-consent requirement for AI video interviews, Maryland’s notice-and-consent requirement for the use of facial recognition in video interviews, and the aforementioned New York City bias-audit requirement for automated hiring systems. More generally, it should address the underlying issues of fairness, notice, transparency, redress, and effectiveness that motivate these regulations.

Adopting a risk-based procurement approach also enables an organization to incorporate contractual language that aligns with emerging laws, best practices, and certification standards. For example, the proposed European Union Artificial Intelligence Act, the National Institute of Standards and Technology’s draft AI Risk Management Framework, Canada’s proposed Artificial Intelligence and Data Act, and our Responsible AI Institute certification program, currently under review by national accreditation bodies, all reflect an increasingly sophisticated understanding of responsible AI implementation.

Efforts to responsibly acquire and deploy AI systems often amplify well-known IT procurement issues, including building organizational expertise to manage external teams, preventing vendor lock-in, and providing a level playing field for suppliers of different sizes.

For example, while startups that provide AI solutions are sometimes more current in their understanding of responsible AI considerations and may be quicker to adapt to new types of contractual requirements, established tech companies can often use existing inroads with organizations to thwart new entrants.

Adopting a risk-based approach to procurement and communicating it clearly to suppliers can help address these issues by informing the procuring organization in advance of the specific oversight capabilities it will need in later stages of the system lifecycle, preventing vendors from making intellectual-property arguments against required testing, monitoring, and auditing of their AI systems in the future, and rewarding vendors of all sizes that are more advanced and responsive in their responsible AI efforts.

Var Shankar is director of policy at the Responsible AI Institute.

This essay is part of a nine-part series titled Artificial Intelligence and Procurement.

