Changeflow GovPing Courts & Legal Federal AI Procurement Prioritizes Speed Over I...
Routine Notice Added Final

Federal AI Procurement Prioritizes Speed Over Integrity Safeguards

Favicon for www.americanbar.org ABA Legal News
Detected April 3rd, 2026
Email

Summary

The ABA Public Contract Law Section published an analysis examining how federal AI procurement policies create corruption vulnerabilities. The article warns that agencies are acquiring opaque AI systems without adequate transparency, audit rights, or testing requirements, leaving 'regulation by contract' as the primary safeguard mechanism. Practical recommendations are offered for agencies to implement within existing authorities.

What changed

This article analyzes emerging corruption risks in federal AI procurement, where agencies are acquiring AI technologies without adequate transparency, audit rights, or testing requirements. The author examines how recent deregulatory actions limit agencies' ability to negotiate AI-specific protections, creating vulnerabilities from contractor lock-in, information asymmetry, and automation bias. The South Africa state capture scandal is referenced as an example of corruption becoming entrenched in government systems.

While no immediate compliance deadline exists, the article provides practical recommendations that agencies can implement within existing authorities. The author challenges the premise that governance impedes innovation, arguing that governance is crucial for sustainable innovation and maintaining institutional trust necessary for long-term AI integration. Government contractors and technology companies should review their AI procurement practices and consider implementing the safeguards discussed.

Source document (simplified)


Summary

  • Federal AI procurement policies prioritize speed over integrity safeguards, resulting in agencies acquiring opaque AI systems without adequate transparency or audit rights.
  • Recent deregulatory actions limit agencies' ability to negotiate AI-specific protections, creating exploitable corruption vulnerabilities.
  • Contractor lock-in, information asymmetry, and automation bias compromise procurement integrity and create conditions that enable corruption.
  • Practical recommendations offer near-term safeguards agencies can implement within existing authorities, challenging the premise that governance impedes innovation.

FangXiaNuo/E+ via Getty Images

Jump to:



Abstract

The United States is accelerating toward a corruption crisis of its own making. In its race to rapidly acquire artificial intelligence (AI), current policy risks undermining longstanding procurement integrity safeguards. This article examines how AI increases traditional corruption risks and introduces new vulnerabilities that current oversight mechanisms are ill-equipped to address.

Recent federal AI policies have accelerated adoption while simultaneously narrowing regulatory oversight, effectively leaving “regulation by contract” as the primary—and profoundly inadequate—mechanism for embedding safeguards. The consequence of these policies is that the government is “buying blind,” acquiring AI technologies without adequate transparency, audit rights, or testing requirements. These acquisition-phase deficiencies will translate directly into operational risks as AI deployment expands. How the government acquires AI today determines the procurement integrity vulnerabilities it will inherit tomorrow.





This article offers practical recommendations to address these emerging threats, prioritizing those most feasible to implement. It also challenges the assumption driving current federal AI policy: that governance impedes innovation. As the article demonstrates, governance is crucial for sustainable innovation. It helps maintain fair, transparent markets and fosters the institutional trust necessary for long-term AI integration.

The window for establishing effective governance is closing. As procurement dependencies solidify and integrity risks become entrenched, reversing course will become exponentially harder.

I. Introduction

The United States is accelerating toward a corruption crisis of its own making. While artificial intelligence (AI) capabilities are advancing at an unprecedented pace, federal AI procurement governance has taken a back seat to the rapid deployment of these technologies. Federal agencies are signing contracts today that will determine how AI influences government operations for years to come. As AI capabilities grow and agencies implement these systems across increasingly consequential procurement functions, current policy choices will create systemic vulnerabilities in federal infrastructure. The result is “buying blind”—acquiring AI without adequate protections against corruption and integrity risks.

Current federal policy treats governance and innovation as incompatible, prioritizing rapid AI adoption over integrity safeguards. Recent executive actions have rescinded prior risk management frameworks, narrowed regulatory oversight pathways, and emphasized commercial acquisition methods that limit agencies’ ability to negotiate protective terms. Simultaneously, workforce reductions have eliminated the expertise necessary to oversee increasingly complex AI systems. This policy trajectory—deregulation during accelerated adoption—creates conditions where corruption risks compound rather than diminish over time.

To understand what occurs when corruption becomes entrenched in the structure of government systems rather than isolated among individual actors, consider South Africa’s state capture scandal. Between approximately 2009 and 2018, South Africa experienced one of the most severe corruption scandals in the country’s history, widely referred to as the “state capture” scandal. The Public Protector’s State of Capture report detailed how then-President Jacob Zuma granted his family’s close associates, the Gupta brothers, unprecedented access to state-owned entities (SOEs) and other public entities. Although the Gupta brothers never held formal government positions, investigations concluded that they exercised outsized influence over the state, shaping ministerial appointments, influencing procurement decisions, and diverting billions of public funds to their affiliated companies. SOEs were destabilized by inflated contracts and driven into a financial crisis. The South African Revenue Service was “gutted” as senior officials were removed, and the “High Risk Investigative Unit had been set up illegally and gone rogue.” The National Prosecuting Authority was significantly weakened. In short, the institutions designed to check corruption were systematically hollowed out.

The consequences were catastrophic. Conservative estimates place the cost at more than R57 billion (approximately $3 billion), while others estimate the broader economic damage at closer to R1 trillion (roughly $58 billion). South Africa’s credit rating was downgraded to junk status, SOEs remain crippled, and public trust in government has never fully recovered. More than a decade later, the country still struggles with the effects of institutions that were corrupted to serve private interests.

But it could have been so much worse.

If the Guptas had AI tools, detection would have been exponentially more difficult. More importantly, their access to government data would have enabled a self-sustaining, corrupt infrastructure. With access to historical procurement documents, internal communications, financial records, tax data, citizen information, audit patterns, and classified intelligence, they could have trained AI systems to map vulnerabilities across the entire government—identifying susceptible officials, optimal procurement targets, ongoing audits and investigations, and ways to calibrate corrupt awards just below oversight thresholds. The systems could generate specifications that embed favoritism while appearing neutral, evaluate proposals favoring allies and affiliates, and refine these patterns based on what triggers scrutiny, all while adapting to anti-corruption countermeasures in real time.

Most critically, AI would not just enable corruption—it would institutionalize it. Traditional corruption creates opportunities for detection through paper trails, communications, and whistleblowers. AI-enabled corruption operates differently: once a structural competitive advantage is embedded in procurement systems, it persists with minimal human involvement. Unlike traditional corruption schemes that collapse when key figures are exposed, AI-embedded corruption continues to operate independently of bad actors. Removing corrupt officials does not eliminate algorithmic biases. Analyzing individual decisions alone cannot uncover systematic advantages afforded to preferred vendors. Corruption becomes structural—effectively enabling a private company to sit inside the government’s nervous system, controlling who gets audited, investigated, ignored, or favored while appearing technically neutral. South Africa’s corruption scandal serves as a clear warning: once corruption “captures” the machinery of government, recovery is measured in decades, not years.

The U.S. procurement system has long been grounded in principles antithetical to corruption: competition, transparency, oversight, and documented decision-making. Technological innovation can improve government procurement processes and reduce corruption risks, but, without proper guardrails, it also opens new opportunities for exploitation. We are now transforming a system designed to prevent, detect, and mitigate corruption into one structurally vulnerable to it. Although the South African AI capture example is an extreme scenario, it illustrates risks emerging in U.S. AI procurement. AI capabilities are advancing rapidly, and agencies are deploying systems faster than they can understand them, even as governance capacity shrinks and corruption risks rise. Once contractors are locked into the federal government’s AI infrastructure and integrate opaque systems at scale, corruption risks become embedded as structural features rather than as isolated vulnerabilities.

Acquisition decisions made today will influence tomorrow’s operational risks. When agencies procure AI systems through expedited commercial pathways that limit their ability to negotiate transparency, audit rights, or testing requirements, they risk inheriting opaque, vendor-controlled technologies whose design choices and data dependencies they cannot meaningfully verify. As deployment expands from administrative tasks into more consequential procurement functions, such as evaluating proposals or generating specifications, the inadequate safeguards accepted during acquisition will manifest as integrity vulnerabilities in operation.

This article challenges the common assumption that governance hinders innovation, demonstrating that governance and innovation are not opposing forces, but mutually reinforcing conditions for responsible AI acquisition and deployment. Governance promotes sustainable innovation by ensuring fair, transparent markets and by building the institutional trust necessary for continued adoption.

General AI risks are well documented, but corruption and integrity risks in federal AI procurement remain largely underexamined. This article serves dual purposes. First, it identifies critical governance gaps: as federal AI adoption accelerates, existing frameworks for addressing corruption risk are inadequately calibrated to address AI-specific vulnerabilities. Second, this article provides practical guidance. For acquisition professionals and agency counsel navigating AI procurement now, it offers risk-mitigation strategies that can be implemented within existing authorities, despite policy constraints. For scholars and policymakers, it establishes analytical foundations for addressing AI corruption risks as technology evolves and as the need for comprehensive reform becomes more pressing.

Part II examines AI procurement within existing anti-corruption and procurement integrity frameworks, introducing key AI technology concepts necessary for understanding governance challenges. Part III describes how the federal government acquires AI through different procurement pathways and highlights recent policy changes that favor speed and commercial acquisition over governance safeguards—factors that will significantly affect the deployment of AI in federal procurement. Part IV examines how systemic vulnerabilities create conditions that enable intentional corruption. Part V offers recommendations that agencies can implement within current constraints to reduce risks while preserving options for broader reform.

II. An Introduction to Anti-Corruption Law and Artificial Intelligence Governance

In fiscal year 2024, the United States spent over $773 billion on procurement. As of 2018, public procurement accounts for approximately twelve percent of global GDP—roughly $11 trillion. Due to the substantial sums involved, procurement systems are particularly susceptible to corruption. A 2021 World Bank estimate indicates that more than $2.6 trillion (approximately five percent of global GDP) is lost to corruption annually worldwide. Moreover, some studies estimate procurement losses due to corruption at approximately ten to twenty percent “even in countries with relatively high integrity” in their procurement systems. Although estimating the monetary cost of corruption in U.S. government procurement remains difficult, the U.S. Government Accountability Office (GAO) reports that, between 2018 and 2022, the federal government lost between $233 billion and $521 billion annually to fraud across all federal programs. Although this estimate is not procurement-specific, it highlights the potential losses that may result from these practices.

It is well established that the consequences of procurement-related corruption extend far beyond fiscal losses. Corruption destroys public trust, compromises the delivery of public goods and services, and undermines the legitimacy of democratic institutions. Indeed, it can have devastating, and even deadly, consequences. In more extreme contexts, systemic corruption can delegitimize and destabilize governments, fostering conditions for violent extremism and instability. The impact of government corruption is particularly acute in public procurement, a sector that is frequently exploited due to the “large sums of money” and discretion involved in its processes. In the United States, where government operations depend on a sprawling network of private contractors, corruption within procurement does not just waste public funds—it threatens the government’s ability to function. The most effective anti-corruption systems combine legal, institutional, and normative mechanisms to deter misconduct, promote ethical behavior, and ensure accountability.

A. Hallmarks of Strong Government Procurement Anti-Corruption Systems

To assess the opportunities and risks that AI presents for public procurement, it is first necessary to understand the foundational principles that promote integrity within government institutions. The United Nations Convention Against Corruption (UNCAC) is the world’s only legally binding universal anti-corruption instrument. The treaty requires the 191 States Parties to establish procurement systems grounded in transparency, competition, and objective decision-making, and calls for the implementation of internal controls, independent review mechanisms, and ethics training for procurement officials. These provisions establish a treaty-level baseline: without legally enforceable guarantees of transparency, oversight, ethics, and accountability, a procurement system is viewed as deficient under international anti-corruption standards. The Methodology for Assessing Procurement Systems (MAPS), operationalizes these principles through four pillars, with Pillar IV addressing the safeguards most relevant to corruption risk, including the transparency of procurement information, the effectiveness of audit and review mechanisms, and the robustness of ethics requirements. Unlike UNCAC, MAPS imposes no binding obligations but provides a diagnostic benchmark for identifying vulnerabilities and guiding reform.

The UNCAC and MAPS establish the globally accepted baseline for procurement integrity: transparency, accountability, and oversight must be built into a government procurement system’s law and practice. Historically, the U.S. procurement system has not only met these standards but exceeded them through the implementation of additional compliance and enforcement mechanisms.

1. The U.S. Government Procurement Anti-Corruption Ecosystem

The United States, home to one of the world’s most mature and complex procurement systems, imposes an extensive framework of requirements, restrictions, and compliance obligations on entities contracting with the government. Designed to prevent, detect, and remediate corruption, this ecosystem pursues four objectives: (1) maintaining integrity in interactions with government officials; (2) promoting fairness, transparency, and competition; (3) ensuring contractor honesty; and (4) protecting integrity throughout the supply chain.

Its legal foundation forms a mosaic of criminal and civil law across the United States Code and the U.S. Code of Federal Regulations. The Federal Acquisition Regulation (FAR) serves as the primary rulebook governing federal procurement and buttresses these key statutory and regulatory principles, while also imposing expansive procurement-specific compliance obligations. The FAR embeds several core principles particularly relevant to the acquisition and deployment of AI, including transparency, oversight, integrity, and competition. These principles translate into increasingly rigorous procedural requirements as procurement complexity and dollar values increase. This risk-based approach reflects a key trade-off: higher-value or higher-risk procurements require more extensive procedural protections, while lower-dollar or commercial procurements require fewer safeguards. This framework serves as the baseline against which the introduction of AI into the U.S. procurement system must be assessed.

i. Transparency and Oversight

Transparency is a foundational aspect of the ecosystem. Procurement rules, solicitations, award decisions, government spending, and government audits are published on publicly accessible platforms. This level of openness is intended to deter misconduct, foster public trust, and enable external oversight.

Oversight is further reinforced through multiple institutional mechanisms. To protect the integrity of the competitive process, contractors may file bid protests challenging the terms of a solicitation or the award of a contract. These challenges may be filed with the GAO, the U.S. Court of Federal Claims, or, where available, directly with the procuring agency. Inspectors General (IGs) and GAO conduct audits and investigations across federal agencies. Congress provides legislative oversight, and the Defense Contract Audit Agency (DCAA) also plays a significant oversight role in large high-risk defense programs. Together, these groups provide multiple entry points for detecting misconduct.

ii. Qualification and Exclusion

FAR Part 9 addresses contractor qualification standards, which require contracting officers (COs) to make “responsibility determinations” before award. Contractors must demonstrate, among other things, “adequate financial resources,” satisfactory past performance, and a record of business integrity and ethics. These determinations serve as a screening mechanism to ensure that only capable and trustworthy contractors are eligible for award, reinforcing both integrity and fairness. When misconduct or performance failures are identified, agencies may initiate suspension or debarment proceedings under FAR Subpart 9.4. These actions are protective rather than punitive, designed to exclude non-responsible contractors from receiving new federal awards.

iii. Enforcement, Ethics, and Exclusion

The ecosystem is underpinned by a suite of civil and criminal statutes and regulatory requirements that prohibit bribery and gratuities, kickbacks, fraud, collusion, and the disclosure of confidential procurement information. These statutes create overlapping layers of protection beyond traditional contract remedies. Criminal prosecutions under these statutes can result in substantial fines and imprisonment, while civil enforcement, primarily through the False Claims Act (FCA), enables the government to recover treble damages plus penalties.

Public officials involved in federal procurement are subject to a comprehensive ethics regime designed to preserve impartiality and public trust. These obligations include restrictions on accepting gifts, misuse of official position, and negotiating post-government employment.

Enforcement and compliance are handled by the U.S. Department of Justice (DoJ), often in coordination with agency IGs, ethics officials, and suspension and debarment officials. This multilayered enforcement structure creates multiple pathways for addressing misconduct, ensuring that violations can be pursued through criminal, civil, or administrative channels as circumstances warrant.

iv. Conflicts of Interest

Personal and organizational conflicts of interest are addressed through both criminal laws and regulatory restrictions. These provisions are intended to prevent unfair competitive advantages and biased decision-making. Although personal conflicts involve individual financial interests or relationships, organizational conflicts arise when a company’s other activities or relationships impair its ability to provide impartial assistance to the government or create unfair competitive advantages. These organizational conflicts are becoming increasingly relevant as the government relies more heavily on contractors to provide services that require advice and judgment.

v. Compliance, Disclosures, and Whistleblower Protections

Federal contractors must maintain robust ethics and compliance programs tailored to the risks of public sector work. Under FAR 52.203-13, the Contractor Code of Business Ethics and Conduct, contractors for awards above certain thresholds are required to establish internal control systems and codes of conduct. Compliance programs, a relatively recent development compared to previously discussed tools, represent a paradigm shift by outsourcing significant aspects of government anti-corruption efforts within the broader framework of corporate responsibility. The same clause also creates mandatory disclosure obligations. Contractors must report credible evidence of certain legal violations, such as fraud, bribery, or conflicts of interest. Additional voluntary disclosure programs further incentivize self-reporting by offering reduced penalties or damages.

Federal employees and contractor personnel are also encouraged, and in some circumstances required, to report instances of waste, fraud, and abuse through designated internal and external channels. Whistleblower protections are codified across several statutes and regulatory regimes, providing both anti-retaliation safeguards and, in certain cases, financial rewards. These protections are particularly robust under the FCA, which authorizes qui tam actions by private individuals and entitles whistleblowers to a share of any recovered damages.

Taken together, the institutional, legal, and regulatory architecture of the U.S. anti-corruption ecosystem constitutes a complex and sophisticated framework designed to promote transparency, accountability, and integrity in U.S. federal procurements. This architecture provides a necessary foundation for evaluating how emerging technologies, including AI, might reshape the procurement landscape.

B. Artificial Intelligence in the U.S. Federal Marketplace

As U.S. federal agencies increasingly integrate AI into critical functions, including procurement, understanding how these systems are structured becomes essential for assessing governance risks. Although the term lacks a universally accepted legal definition, 15 U.S.C. § 9401 defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” This article focuses on machine-learning systems, particularly foundation models, which present distinct governance and integrity risks compared to traditional rules-based AI.

In practice, federal agencies overwhelmingly license AI-enabled applications built on foundation models rather than accessing foundation models directly or developing AI in-house. Foundation models are large-scale AI systems trained on massive datasets and designed for general-purpose use—meaning they can perform diverse tasks such as writing, analysis, coding, and translation through prompts and fine-tuning, unlike specialized machine learning models built for narrow functions. Large language models (LLMs)—a subset of foundation models designed to process and generate text—are increasingly used in federal procurement of AI technologies.

Agencies typically license applications such as ChatGPT Enterprise, Microsoft Copilot, or Google Gemini for Workspace, which provide user-friendly interfaces to these underlying foundation models. Part III explains why agencies generally adopt this “application-first” approach rather than building or directly integrating foundation models. Because foundation models are trained on massive datasets and designed for broad applications, effective governance depends not only on application-level controls but also on architectural and training choices embedded in the models themselves—choices that agencies generally cannot inspect or directly influence.

1. AI’s Architecture Through a Procurement Lens

To understand the risks involved in acquiring and using AI in government procurement, it is helpful to visualize the “AI technology stack,” the layered technical structure of AI systems, as a tiered structure, much like a wedding cake. This simplified procurement-focused framework illustrates where the general legal, operational, and contractual issues are most likely to arise in the context of procurement. Corruption and integrity vulnerabilities can emerge at any tier and cascade through the system.

Cake Stand: Infrastructure

The cake stand represents the infrastructure layer that supports all AI systems. It includes the physical hardware (i.e., the semiconductors and specialized AI chips) that provides the computational power required to train and run models. This layer also encompasses commercial cloud infrastructure that provides data storage, networking, and baseline security.

Tier 1: The Foundation Model and Customization

The first tier is the foundation model. The customization layer of this tier refers to how agencies adapt these general-purpose models for specific government uses. Customization typically happens three ways: (1) fine-tuning—updating some of the model’s internal settings using agency examples so it learns specialized terminology; (2) retrieval-augmented generation (RAG)—allowing the system to draw on a curated database of agency documents when generating answers, without changing the model itself; and (3) prompt engineering—providing written instructions that guide the system’s behavior and the format of its outputs. The effectiveness of this customization layer depends on the information it is trained on, refined with, tested against, and allowed to reference when generating responses.

Tier 2: Applications and Integration

The second tier is what users interact with. These applications call a foundation model (or a customized version of one) via an application programming interface (API) or access it via a model hub hosted on cloud platforms. Integration encompasses the technical steps necessary to make these tools operational within government systems.

Tier 3: Human Oversight and Accountability

The top tier represents human oversight—an essential step in deploying AI in government operations. Government employees should review, verify, and, if necessary, override AI-generated outputs before they influence agency decisions, though most agencies lack binding regulations requiring human review of AI outputs.

Frosting: Governance and Security

Governance and security surround the stack. Governance includes laws, contract terms, internal policies, and technical safeguards that define permissible use, data rights, testing, provenance, and audit rights. Security includes cybersecurity controls, privacy safeguards, and continuous monitoring. Together, they reinforce integrity and reduce the chance that risks in one tier propagate through the system.

2. The Role of Data in Artificial Intelligence

Data is the backbone of AI systems, making its governance central to the integrity of procurements. Unlike traditional software that operates according to explicitly programmed rules, AI systems learn from data by identifying patterns and applying them to generate outputs. The quality, origin, and management of data directly affect system reliability, creating new opportunities for corruption if not properly controlled.

Three types of data operate at different points in the foundation model architecture described above. Training data establishes the foundation model’s basic capabilities, shaping how the system processes information and generates responses. Customization data adapts foundation models for specific government uses, such as fine-tuning a model to understand agency-specific procurement requirements. Operational data is generated during actual system use, including the prompts that users submit and the outputs that the system produces. All three types operate within defined policy and configuration settings that determine how models apply their knowledge to agency tasks. To that end, each category creates distinct governance challenges that significantly impact procurement integrity, as examined in Part III.

C. The Evolution of U.S. Federal Law and Policy on Government AI Acquisition

Over the past decade, the U.S. federal government has developed a complex and fragmented framework to guide the acquisition and use of AI by federal agencies. Unlike the European Union, the United States has no single comprehensive AI statute or regulatory framework. Instead, the system comprises a mix of statutory mandates, executive orders, Office of Management and Budget (OMB) memoranda, agency policies, and subregulatory guidance. Across the last three administrations, the legal and policy landscape has resembled a political seesaw, with priorities shifting from the first Trump administration to the Biden administration, and now to a second Trump term. Although the identification of core AI-related risks (i.e., safety, bias, privacy, and security) has stayed consistent, approaches toward regulation, oversight, and implementation have varied considerably.

The following provides a high-level overview of the legal and policy instruments governing federal AI acquisition and use, with a focus on integrity, oversight, and risk mitigation.

1. Statutory Foundations for Federal AI Governance

An early influence on the federal AI statutory landscape can be found in the John S. McCain National Defense Authorization Act (NDAA) for Fiscal Year 2019 (FY2019). Section 238 of the FY2019 NDAA established early guardrails for AI procurement, directing the Department of Defense (DoD) to develop ethical, legal, and policy frameworks for AI and governance mechanisms to integrate, oversee, and continually improve AI policy across the Department.

Subsequently, the AI in Government Act of 2020, which was incorporated into the Consolidated Appropriations Act, 2021, established the AI Center of Excellence (CoE) within the General Services Administration (GSA). Among other things, the Act directed OMB to issue a memorandum to federal agencies to guide the acquisition and use of AI by agencies, recommend approaches to eliminate barriers to AI adoption while still safeguarding civil liberties and national security, and identify best practices for assessing and mitigating discriminatory impacts or unintentional consequences of AI use.

The National AI Initiative Act of 2020, which was enacted as part of the FY2021 NDAA, established a national AI research and development strategy and directed the National Institute of Standards and Technology (NIST) to develop technical standards for safe and trustworthy AI. Two years later, Congress enacted key statutes that had a critical impact on federal AI procurement. The AI Training Act requires OMB to establish an AI training program for the federal acquisition workforce to build capacity to identify and mitigate risks associated with AI acquisitions. In addition, although not exclusively focused on AI, the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act of 2022 makes significant investments in AI-related research and safety. Finally, the FY2023 NDAA introduced the Advancing American AI Act, which required OMB to integrate risk-management practices into procurement rules and to publish agency AI inventories. This legislation served as the impetus for several essential policy documents issued during the Biden administration, exemplifying the administration’s commitment to a risk-oriented approach to AI development and procurement within federal agencies.

2. Executive Orders and AI-Governance

During his first administration, President Trump issued the first coordinated federal strategy on AI. Published in 2019, Executive Order 13859, Maintaining American Leadership in Artificial Intelligence, prioritized federal investment in AI Research and Development, directed NIST to coordinate technical standards, supported workforce development, and promoted international cooperation. The Executive Order (EO) established broad principles but, compared to later EOs, was notably light on governance duties.

One year later, Executive Order 13960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, issued in December 2020, built upon EO 13859 and introduced more specific regulatory expectations for federal agencies. In part, it (1) directed federal agencies to ensure that AI systems adhere to key principles of trustworthiness, including lawfulness, transparency, accountability, and security; (2) required agencies to publish inventories of nonclassified AI use cases to promote transparency and public oversight; and (3) mandated risk assessments and mitigation strategies for AI systems with potential impacts on rights or safety. It did not impose specific acquisition-related mandates, but emphasized that acquisition practices should reflect these general principles, establishing a regulatory foundation for subsequent policy developments.

Marking a pivotal shift in federal AI policy, in 2023, President Biden issued Executive Order 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It integrated safety, civil rights, and national security into AI governance frameworks by directing the development of testing protocols and risk-mitigation practices, and by calling for adherence to guidance aligned with the NIST AI Risk Management Framework (RMF). It emphasized transparency, safety, and trust as essential to AI-related acquisition and deployment, elevating these activities to a critical role in public trust and systemic integrity.

Although EO 14110 spurred significant regulatory developments during its effective period, President Trump rescinded the EO on January 20, 2025, with Executive Order 14148, Initial Rescissions of Harmful Executive Orders and Actions. Subsequently, President Trump issued Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence. EO 14179 rescinded EO 14110 and shifted emphasis toward rapid adoption and deregulation. OMB’s 2025 memos preserve Chief Artificial Intelligence Officer (CAIO) governance and still require public AI use-case inventories, while adopting a less prescriptive, risk-based approach for high-impact uses. The new EO instead prioritizes U.S. global leadership and the elimination of regulatory requirements deemed to impede innovation. In short, the new policy shifts away from the prior emphasis on systematic risk, accountability, and oversight toward speed, flexibility, and competition.

3. The 2025 Trump AI Action Plan and Related EOs: A Paradigm Shift in Federal AI Policy

In July 2025, the Trump administration released an “AI Action Plan,” organized around three pillars: (1) accelerating AI innovation, (2) building American AI infrastructure, and (3) leading in international AI diplomacy and security. Building on EO 14179’s deregulatory posture, the Plan (1) directs agencies to identify, revise, or repeal policies that “unnecessarily hinder AI development or deployment”; (2) instructs OMB to weigh a state’s “AI regulatory climate” when “making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award”; (3) calls on the Federal Communications Commission (FCC) to assess whether state AI regulations interfere with the agency’s ability to carry out its obligations; and (4) orders a review of prior Federal Trade Commission (FTC) investigations commenced under the Biden administration “to ensure that they do not advance theories of liability that unduly burden AI innovation.”

The AI Action Plan also imposes additional requirements on agencies’ acquisition and use of AI by (1) directing Commerce/NIST to revise the NIST AI RMF to eliminate references to misinformation, DEI, and climate change; (2) updating federal procurement guidance so agencies contract only with large language model (LLM) developers whose systems are “objective and free from top-down ideological bias”; (3) promoting open-source and open-weight models; and (4) directing OMB and GSA to stand up a nonpublic repository of agency AI tools, templates, and best practices and to publish acquisition guides to help officials choose authorities, vehicles, and approaches for AI buys. Parallel EOs also require accelerated permitting for AI-related infrastructure and impose “ideological neutrality” requirements on AI systems used by federal agencies. The AI Action Plan complements several 2025 OMB memoranda, discussed below, which rescind the Biden administration’s 2024 governance and acquisition guidance and redirect implementation toward rapid adoption and operational efficiency.

Taken together, the Action Plan and associated EOs mark a clear shift in U.S. AI governance priorities. They emphasize rapid adoption, commercial integration, and procurement centralization, while reducing the role of traditional oversight. This development provides important context for the discussion in Part V, which assesses how the federal procurement system may need to compensate for governance gaps created by the current U.S. approach to regulating AI.

4. The “Revolutionary FAR Overhaul”

In mid-2025, the Trump administration launched the “Revolutionary FAR Overhaul” (RFO), a government-wide effort mandated by Executive Order 14275, Restoring Common Sense to Federal Procurement, to remove non-statutory language from the FAR and simplify its language. A key aspect of this reform effort involves moving non-statutory acquisition strategies out of the FAR into “buying guides.” These guides, together with the streamlined FAR, constitute the Strategic Acquisition Guidance (SAG). The RFO also creates the FAR Companion, a separate, nonbinding guidance document that provides detailed procedural content.

Although reform efforts are hardly new in federal procurement, the current “revolutionary” overhaul represents a significant departure from traditional FAR rulemaking practices. Since 1984, the FAR has functioned as a comprehensive rulebook that codifies both statutory requirements and detailed procedural practices, providing extensive guidance to COs. The RFO shifts this model, moving greater discretion to individual contracting officers and emphasizing commercial acquisition methods. This governance structure—including binding rules reduced to their statutory roots, regulatory requirements reduced to non-binding guidance, and complex implementation decisions delegated to individual contracting officers—has implications for AI procurement governance, where the risks documented in Part III require consistent safeguards rather than discretionary measures.

5. Subregulatory Guidance: The Evolving Role of OMB and Agency Implementation

Over the past five years, OMB memoranda have served as the primary instrument for developing and implementing federal regulatory AI policy, tracing a clear arc from early deregulatory preferences to Biden-era governance mandates and now to the second Trump administration’s policies, which prioritize speed and commercial adoption. Four key memoranda, issued between 2020 and 2025, reflect the federal government’s evolving approach to governance, risk management, and AI acquisition priorities.

i. 2020–2024: A Focus on Guardrails and Governance

Between 2020 and 2024, spanning the first Trump and Biden administrations, the first wave of OMB memoranda emphasized risk management and institutional oversight. In 2020, issued during the final year of the first Trump administration, M-21-06, Guidance for Regulation of Artificial Intelligence Applications, established a deregulatory preference, urging agencies to avoid mandates that “hamper innovation” and to favor flexible, cost-benefit-driven approaches. Although M-21-06 is focused more broadly on AI regulation and is not procurement-specific, it established a deregulatory preference that shaped subsequent AI policies. The Biden administration adopted a stronger governance posture in 2023 with M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, which imposed new governance duties, including designating CAIOs forming governance boards, publishing AI use-case inventories, and requiring algorithmic impact assessments for systems affecting rights or safety. In 2024, the Biden administration issued procurement-specific guidance in OMB Memorandum M-24-18, Advancing the Responsible Acquisition of Artificial Intelligence in Government. It instructed agencies to improve their capacities for the responsible acquisition of AI, focusing on (1) cross-functional and interagency collaboration; (2) risk-management policies and practices that address privacy, civil rights, and safety throughout the acquisition life cycle; (3) data management and protection policies; (4) obtaining contractor disclosures to substantiate model claims; and (5) design strategies that avoid contractor lock-in and promote competition.

ii. 2025: A Reorientation of Federal AI Policy

The second inauguration of President Trump marked a sea change in the federal government’s approach to AI. OMB Memorandum M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust, rescinded the Biden-era OMB governance framework and instructed agencies to prioritize rapid AI adoption, reduce procedural “bottlenecks,” and favor U.S.-developed AI products. Risk management shifted from standardized protections to decisions based on specific circumstances and individual judgment. This deregulatory momentum extended into the Trump administration’s AI acquisition policy.

Issued as a companion to M-25-21, OMB Memorandum M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government, superseded the Biden administration’s procurement guidance (M-24-18) and emphasized competition, contractor sourcing, data portability, and long-term operability. It promotes collaboration among legal, technical, and acquisition officials to navigate risks during the acquisition process. The memo directs agencies to update their internal AI policies, adopts a formal “Buy American AI” policy, and provides high-level acquisition lifecycle guidance, from needs assessment through contract closeout. It also mandates the creation of a nonpublic repository for agency AI tools, templates, and best practices, and calls for the development of publicly available guides to assist procurement officials in selecting the acquisition authorities, vehicles, and approaches most suitable for procuring AI systems. References to privacy, intellectual property, data rights, and public trust are scattered throughout, but they are largely implemented through procedural directives and general instructions to negotiate “appropriate” contract terms, rather than standardized disclosure obligations or uniform compliance metrics.

In December 2025, OMB issued M-26-04, Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles, instructing agencies to include baseline transparency requirements in new LLM solicitations, such as acceptable use policies and model, system, and/or data cards. The memo also instructs agencies to require, as part of the minimum LLM transparency package, that vendors provide a mechanism for end users to submit feedback and report issues relating to “outputs that violate the Unbiased AI principles.” Depending on planned use, agencies may also seek enhanced disclosures, such as bias evaluation results and testing methodology, reflecting the memo’s emphasis on “truth-seeking” and “ideological neutrality” in LLM outputs. Notably, M-26-04 contains several consequential carveouts: it does not apply to national security systems (though application is encouraged where practicable), sunsets in two years unless extended, and does not require agencies to apply its requirements to LLMs acquired under a free open-source license.

iii. Agency-Level Governance: Fragmented Safeguards

As White House AI policies shifted over the past five years, individual agencies have established governance frameworks to guide AI acquisition and deployment, both in response to OMB memoranda and as agency-driven initiatives. Examples include DoD’s AI Ethical Principles (2020) and the “Responsible Artificial Intelligence Strategy and Implementation Pathway,” which require lifecycle risk management, safeguards against bias, transparent AI decisions, and rigorous testing. The Air Force-MIT AI Accelerator’s Artificial Intelligence Acquisition Guidebook provides practical acquisition guidance for contracting officers, including risk mitigation strategies and ethical considerations. The Department of Homeland Security has issued a Playbook for Public-Sector Generative AI Deployment, which shares lessons learned from three department-wide pilots and develops a framework for agency GenAI adoption consistent with privacy, ethical, and civil rights protections. GSA has published several guides addressing the acquisition and responsible use of AI. This sample of agency initiatives illustrates that agency-level discretion can embed meaningful safeguards, even in the absence of centralized regulation. However, their fate is uncertain in the wake of the dramatic policy shifts of the current Trump administration.

iv. A Shrinking Governance Framework

With recent executive orders scaling back non-statutory regulatory requirements and OMB narrowing its guidance, the federal AI governance framework now relies more heavily on agency-level discretion and the strategic use of government contracts as governance tools. The voluntary frameworks that do exist offer limited practical guidance for procurement. The NIST AI RMF provides a taxonomy of AI risks, including validity and reliability; safety; security and resilience; transparency and accountability; fairness; and privacy. However, it operates as voluntary guidance without enforceable requirements. Agencies may adopt AI RMF principles in procurement contexts, but the framework provides no mandate to do so and offers limited implementation guidance for acquisition processes. The AI RMF exemplifies the current governance approach, which involves identifying risks without creating binding obligations or enforcement mechanisms.

In this evolving policy environment, the long-term viability of agency-specific safeguards remains uncertain. Much of the responsibility for managing AI-related risks now falls to acquisition personnel and agency legal counsel, who must navigate complex technical and ethical issues, often through contractual procedures not originally designed for this purpose.


III. How the U.S. Government Acquires AI and Associated Governance Risks

The U.S. government’s posture toward AI procurement is overwhelmingly application-first, licensing commercial applications built on private-sector foundation models rather than developing or training models in-house. This approach is driven primarily by the extraordinary requirements of foundation model development, which demand specialized expertise and capital investments exceeding hundreds of millions of dollars. Training a foundation model at the scale of GPT-5, Claude, or Gemini requires ongoing access to massive quantities of highly specialized graphics processing units (GPUs) and the technical capacity to operate them. This need creates an environment that effectively limits participation to a small number of well-capitalized technology firms.

As a result, federal agencies largely procure AI “as a service,” accessing it through secure cloud platforms already approved for government use, such as Microsoft Azure, AWS Bedrock, or Google Cloud, or tools hosted directly by a vendor or by the agency itself. The models are then built into applications like chatbots or document-review tools and adjusted to use agency materials and rules. This approach enables agencies to access advanced customized AI-enabled capabilities without developing the underlying technology in-house.

This article maps three of the most common acquisition paths for agency AI use: (1) commercial buys (including Part 12, license upgrades, and GSA Schedule orders), (2) negotiated noncommercial acquisitions under FAR Part 15, and (3) nontraditional pathways such as Other Transactions (OTs). It does not attempt to catalog every available vehicle or exception.

These pathways differ fundamentally in how they balance governance leverage against acquisition speed. Commercial and schedule ordering maximize speed and scale but generally default to customary terms and conditions, limiting agencies’ ability to negotiate AI-specific protections. Part 15 maximizes governance leverage at the cost of time and administrative burden. OTs maximize speed and experimentation, but can reduce some traditional procedural safeguards, such as transparency, formal competition requirements, cost/pricing data submissions, and documentation requirements. Understanding these tradeoffs is essential for assessing the corruption and integrity vulnerabilities examined in Part IV.

A. Commercial Acquisition of AI

Commercial acquisition represents the default pathway for federal AI procurement, prioritizing speed and standardization over customized governance terms.

  1. FAR Part 12 Commercial Acquisition

When agencies acquire AI under the FAR, the most common approach is to treat it as ordinary commercial software. FAR Part 12 is designed to acquire goods and services already sold in the commercial marketplace with fewer government-unique contract terms. Cloud-based Software-as-a-Service (SaaS) tools, API access to foundation models, and traditional software licenses where agencies install and run AI on their own servers all typically fall within Part 12’s scope.

Part 12 offers limited governance leverage. FAR 12.302(c) permits tailoring through addenda, but agencies cannot unilaterally impose requirements that contractors characterize as non-customary. It requires a waiver, and contractors must agree to any expansion of rights, often with additional consideration.

  1. License Upgrades to Existing Enterprise Software

One common procurement method involves adding AI capabilities as licensed features to existing enterprise software agreements. For example, Microsoft’s Copilot for Microsoft 365 integrates AI functions directly into Word, Excel, Teams, and other applications already in use across federal agencies. Because these AI capabilities are offered as add-ons to existing commercial software licenses, they typically fall under the terms and conditions of the base agreement.

This inclusion speeds up AI acquisition but often at the cost of governance safeguards. Agencies have limited ability to negotiate AI-specific protections, such as transparency obligations or audit rights, without renegotiating the entire enterprise agreement, making standard commercial terms the path of least resistance.

  1. GSA Multiple Award Schedules and Consolidation Dynamics

FAR Part 8.4 allows agencies to order GSA schedule contracts (i.e., catalogs of pre-vetted products and services with established pricing and terms). This centralized approach offers efficiency but comes with a governance trade-off. Although agencies benefit from GSA’s buying power and avoid duplicative negotiations, they inherit whatever protections that GSA negotiated at the master agreement level, and only GSA can modify those baseline terms. Downstream ordering agencies have narrow authority to tailor orders consistent with commercial practice, but they cannot override inadequate baseline protections, and many lack the technical expertise to identify the specific AI safeguards that they need.

The federal government is consolidating AI acquisition through centralized vehicles managed by GSA. GSA’s “OneGov Strategy” seeks to centralize software procurement, favors direct contracting with original equipment manufacturers (OEMs), and harmonizes licensing terms across agencies. This approach aims to leverage the government’s buying power, reduce duplicative negotiations, and provide vetted AI solutions through streamlined pathways.

In mid-2025, the GSA added leading AI platforms to the GSA Multiple Award Schedule, giving federal agencies streamlined access to commercial AI tools under pre-negotiated terms. To accelerate adoption across federal agencies, GSA negotiated government-wide agreements, making them available to agencies for a promotional price of $1.00 (or less). GSA has also launched USAi.gov, a shared platform designed to help agencies evaluate and experiment with AI tools under a common framework.

These consolidation efforts offer efficiency: vetted solutions without individual procurements, standardized terms, and centralized technical expertise. OMB has reinforced this approach by directing the development of an AI procurement toolbox to standardize acquisition practices and promote transparency and data rights.

B. FAR Part 15 Negotiated Procurements

FAR Part 15, which governs negotiated procurements, provides agencies with the broadest latitude to embed AI governance safeguards, but current policy makes this pathway increasingly unlikely for AI acquisitions. Part 15 applies when agencies acquire products or services that are not commercially available or when commercial solutions cannot meet agency needs without modification. Part 15 enables agencies to structure evaluation factors rewarding governance terms, negotiate tailored data rights, and require verification deliverables, including training data documentation, model architecture, and bias assessments—creating enforceable obligations rather than relying on contractor cooperation. This leverage, however, comes with significant procedural costs and extended timelines that can be problematic for fast-moving AI procurements.

C. A Nontraditional Pathway: Other Transactions

Authorized agencies increasingly use OTs as alternatives to FAR-based procurement. These mechanisms offer speed and access to nontraditional contractors, but they operate with fewer built-in safeguards than traditional vehicles.

OTs are authorized for research, prototype, and certain production activities pursuant to various statutory authorities, with authority varying by agency. The DoD possesses broad OT authority, while civilian agencies’ access is more limited and often tied to specific programs or pilot initiatives. OTs exempt agencies from FAR requirements, creating flexibility to negotiate AI-specific governance terms, including transparency, audit access, data rights, and independent testing provisions. This structure appeals to both parties: agencies gain rapid access to cutting-edge technology with compressed timelines, while commercial AI firms can engage with the government without incurring the complex FAR compliance requirements. Whether this flexibility produces stronger or weaker governance depends on implementation: agencies with expertise and bargaining power can secure robust safeguards, while those prioritizing speed or facing vendor leverage may secure fewer protections than traditional procurement pathways. The current policy emphasis on rapid adoption often channels OT’s flexibility toward speed rather than governance.

IV. Integrity and Corruption Risks in the Use and Acquisition of AI

Although much of the current debate around AI and procurement centers on governance risks, it is equally vital to acknowledge AI’s potential to strengthen integrity and improve efficiency within the procurement system. Across the federal government, AI tools are already being used to reduce compliance errors, flag problematic contract language, and streamline oversight processes. Systems like GSA’s Solicitation Review Tool and the IRS’s Contract Clause Review Tool illustrate how AI can augment cumbersome procurement reviews to make compliance reviews more efficient. Internationally, AI has also been used to detect fraud, overbilling, and collusion, often identifying red flags more quickly and comprehensively than traditional oversight methods.

Yet AI’s integration into procurement also introduces significant risks. This part analyzes these risks through two distinct but interconnected categories. Corruption risks (Part IV.A) involve the exploitation, whether intentional or enabled by organizational advantages, of AI technologies for illicit gain, unfair competitive advantage, or manipulation of procurement processes. Integrity risks (Part IV.B) represent systemic vulnerabilities that undermine fair outcomes regardless of intent, including technical limitations, market concentration, and transparency failures. These risks operate on a continuum; integrity risks create the conditions for corruption to exploit.

The acquisition pathways and policy shifts described in Part II create the conditions for the corruption and integrity risks examined in this Part. When agencies acquire AI through commercial vehicles that prohibit non-customary governance terms, they forfeit oversight mechanisms precisely when AI-specific risks demand heightened scrutiny. Traditional procurement demands transparency, yet AI vendors shield training data and system architecture as proprietary. Traditional procurement requires documented, auditable decisions, yet AI systems operate as black boxes. Traditional procurement mandates human oversight, yet AI ~~’~~ s confident outputs encourage acceptance of recommendations without independent verification.

These structural mismatches prevent agencies from addressing foreseeable risks. Without supply chain transparency, they cannot identify embedded conflicts of interest or assess whether upstream data or model components introduce hidden vulnerabilities. Without audit rights, they cannot detect algorithmic manipulation or verify that system behavior aligns with contractual representations. Without testing protocols, they cannot verify whether systems exhibit bias, generate fabricated information (hallucinations), or change behavior without notice (drift). AI systems exhibit flaws that would trigger rejection in any other procurement context, yet current policy encourages rapid adoption while dramatically reducing the safeguards needed to mitigate them.

Each corruption and integrity risk examined below demonstrates how acquisition failures create vulnerabilities that materialize once AI systems are deployed in procurement operations. The full extent of AI use in federal procurement operations, including proposal evaluation, is not publicly documented, but adoption is likely to expand as capabilities mature. The government’s tendency to engage in “automation creep,” where technologies initially deployed for low-risk administrative tasks gradually migrate into higher-stakes decision-making roles, makes it vital to address these risks now before consequential AI-driven procurement functions become widespread across the federal government.

A. Weaponizing AI: Intentional Exploitation for Corrupt Gain

The integration of AI into procurement processes creates a paradox: the same technologies that promise enhanced oversight, efficiency, and risk mitigation can also enable more sophisticated forms of corruption. As illustrated in the South African state capture example, AI systems could enable more sophisticated and harder-to-detect forms of traditional procurement corruption. These corruption risks manifest in at least three distinct but interconnected ways:

• The complexity of AI supply chains can mask organizational conflicts of interest (OCIs), making them more difficult for agencies to detect and mitigate;

• AI-enabled fraud schemes can exploit automated systems to generate false documentation, manipulate evaluation criteria, or circumvent detection algorithms in ways that traditional oversight may fail to recognize; and

• Vulnerabilities in AI training data and automated scoring systems can be exploited to embed systematic contractor favoritism or to game evaluation outcomes.

  1. Organizational Conflicts of Interest

The increasing use of AI tools in federal procurement presents new and significant risks of OCIs. OCIs may occur at instances in which “because of other activities or relationships with other persons, a person is unable or potentially unable to render impartial assistance or advice to the government, or the person’s objectivity in performing the contract work is or might otherwise be impaired, or a person has an unfair competitive advantage.” The FAR requires COs to identify, evaluate, and avoid, neutralize, or mitigate OCIs once detected. OCIs are generally separated into three categories:

Impaired objectivity may arise when a contractor’s business or financial interests create incentives to provide biased advice under a government contract, or where the contractor is positioned to review its own work or that of a competitor.

Biased ground rules may arise when a contractor, in performing one procurement, helps establish the framework for another procurement, such as drafting the statement of work or developing specifications, and then competes for that procurement. This early interaction raises concerns that the contractor may have shaped the requirements to its own advantage.

Unequal access to information may occur when a contractor gains access to nonpublic, competitively useful information as part of its contract performance, giving it an unfair competitive advantage in a later competition for a government contract. This might include access to competitors’ pricing strategies, technical approaches, or agency procurement plans that are not publicly available.

Understanding these OCI risks requires identifying the algorithmic supply chain. The foundation model provider (such as OpenAI, Anthropic, or Google) develops and maintains the underlying model and controls its essential capabilities. A second actor adapts that model for a specific use. This adaptation can involve further training it on selected materials, connecting it to specific data sources, or setting up the rules and prompts that shape its responses. The application vendor then builds a software product around the model and integrates it into the agency’s systems or workflows. Each actor can make changes that influence system behavior, but agencies often contract only with the application vendor and thus have limited visibility into upstream decisions made by model developers or customizers. In practice, a single contractor may perform more than one of these roles, and the boundaries are often fluid, but the functional distinctions are essential for identifying conflicts. OMB has now explicitly recognized this supply chain concern in LLM procurements, noting that, when agencies transact through resellers, integrators, or platform operators, the availability of product information and the ability to intervene may depend on the underlying model developer’s willingness to cooperate through third parties.

AI-generated OCIs therefore operate across multiple layers of the technology stack, each presenting distinct detection challenges. OCI risks concentrate in two areas: customized models (where model hosts and customizers make upstream choices) and agency-facing applications (where vendors adapt general models for specific procurement tasks). The three OCI categories can manifest differently across these architectural layers, as illustrated in the following two subsections.

i. Application-Layer Conflicts: The Immediate Threat

At the application layer (the second tier of our wedding cake), conflicts are most apparent.

Impaired Objectivity. An impaired objectivity OCI can occur if a contractor licenses its AI proposal evaluation application to an agency while competing for awards that its own system evaluates. The contractor controls the retrieval database (documents that the AI application references) and system prompts (instructions guiding the AI application’s behavior), enabling it to embed preferences for its own technical approaches and methodologies. Agency personnel review detailed evaluation reports with adjectival ratings and technical justifications that appear objective yet lack visibility into the underlying (biased) design choices that embedded the contractor’s preferences.

Biased Ground Rules. Biased ground-rules concerns could arise if an agency procures an AI application to generate specifications or evaluation criteria, and the same application vendor then tries to compete in a procurement based on those AI-generated requirements. When an agency asks the application to draft requirements, the tool draws from examples in its database. If that database predominantly contains the contractor’s successful proposals, the generated specifications will systematically favor the contractor’s terminology and technical frameworks. The result appears neutral but systematically favors the contractor’s capabilities and proposal style.

Unequal Access to Information. Unequal access to information issues arise when contractors, for example, license their AI market research platforms to government agencies. These systems then gather nonpublic, competitively useful information during contract performance (such as upcoming agency priorities, evaluation criteria under consideration, and technical approaches being tested). The contractor managing the platform gains intelligence that can inform its future competitive strategies in procurements.

ii. Foundation Model Conflicts: A Novel Doctrinal Question

A more complex issue can occur at the foundation-model layer (the first tier of our wedding cake) when the agency-facing application obscures conflicts introduced by customization. In practice, agencies frequently license an application that accesses a foundation model via an API, while contractual and technical restrictions limit their ability to see these design decisions. Although some agencies directly participate in customization, many lack the access or expertise to review or verify a contractor’s methods.

Imagine the following scenario. Contractor A operates an AI model customization service, developing specialized foundation models for government use, such as a “cybersecurity compliance scoring” model, which has been trained on what it characterizes as “industry best practices”—derived from Contractor A’s successful past proposals. Contractor B licenses this model for its FedEval AI application. When DoD licenses FedEval for a cybersecurity procurement in which Contractor A competes as an offeror, the tool produces ratings that appear neutral to evaluators but are influenced by Contractor A’s terminology and technical frameworks. Competitors offering functionally equivalent solutions receive lower scores, not because of substantive deficiencies, but because the model’s customization—its RAG database, fine-tuning examples, and prompts—established A’s methodological approach as the quality benchmark. To the agency, the analysis seems impartial; in reality, it reflects A’s upstream design choices reframed as “industry best practices.”

Under FAR Subpart 9.5, an impaired-objectivity OCI exists where a firm has a financial interest in an award and provides advice that may be less than impartial. Here, that advice takes the form of the customized model’s ratings and narrative analysis. Because Contractor A designed and controls the model, its involvement is tantamount to self-dealing. If the same customized model is used in a separate application to draft specifications or evaluation factors, and Company A later competes in the resulting procurement, the arrangement could create a biased ground rules OCI.

Whether FAR 9.5 applies to this type of influence remains legally uncertain. Traditional OCI analysis assumes direct contractor involvement, traceable decision-making processes, and human judgment that can be scrutinized for bias. AI systems challenge these assumptions through indirect algorithmic influence, opaque processes that shape outputs, and multiparty supply chains where conflicts originate with model customizers rather than the application vendors the agency directly contracts with.

The open question is whether a competitor’s upstream role in customizing a model constitutes such a relationship when that influence operates algorithmically and is channeled through a third-party’s application. This theory would extend existing OCI principles to circumstances in which structural competitive advantages are laundered through intermediary contractors, making the competitor’s influence indirect but potentially no less consequential. Crucially, these unfair advantages need not stem from deliberate manipulation. When models are customized using data drawn predominantly from a single contractor, sector, or firm size, they learn to favor the characteristics and approaches of those overrepresented sources. These learned preferences are more subtle and difficult to detect than obvious conflicts, yet are just as damaging to competitive integrity because they operate invisibly, shaping procurement processes regardless of vendor intent. Until GAO or the courts address whether model customization decisions by potential competitors create cognizable conflicts under existing doctrine, agencies face doctrinal ambiguity about how OCI principles apply to AI supply chains.

iii. Why These Conflicts Are Hard to Detect

Current AI acquisition practices make these conflicts exceptionally difficult to identify or mitigate. Agency personnel generally interact only with commercial application interfaces and, as a result, cannot inspect the layers beneath the application without audit rights that commercial licenses typically restrict (nor would most acquisition professionals understand them even if they could). Contractors usually characterize these components as proprietary, often preventing the government from examining whether conflicts exist in the system design. AI outputs also carry an “aura of objectivity and infallibility,” often in the form of detailed reports and sophisticated analyses that discourage scrutiny even from experienced acquisition professionals. The multiparty structure further complicates attribution. When one contractor provides the foundation model, another customizes it, a third builds the application interface, and a fourth offers implementation services, it becomes unclear which party bears OCI responsibility.

Unlike traditional conflicts, AI-based conflicts operate through design choices that produce seemingly neutral outputs. Targeted behavioral testing, such as comparing how the same technical content rates when described with different terminology, or testing outputs against baseline models, could flag these risks, but acquisition professionals, who already struggle with identifying traditional OCIs, are unlikely to conduct such testing without specific guidance or mandates. Without enhanced transparency requirements, standardized testing protocols, or specialized review capabilities, agencies may favor contractors whose prior configuration work shaped the systems evaluating their proposals, which is precisely the competitive distortion OCI doctrine aims to address.

  1. Fraud, Falsification, and AI-Generated Deception

Fraud represents one of the most persistent and damaging threats to the integrity of government procurement. Even without advanced technologies, in fiscal year 2024, civil FCA settlements and judgments exceeded $2.9 billion, and whistleblowers filed 979 qui tam lawsuits—the highest number in a single year. The increasing integration of AI technologies into the federal procurement process significantly amplifies these traditional fraud risks, making them difficult for even experienced procurement professionals to detect.

Generative AI systems can now produce highly realistic fabricated documents, images, audio recordings, and video content. Unlike traditional fraud, which often leaves detectable patterns or involves multiple coconspirators, AI-generated fraud can produce sophisticated fabrications at scale with minimal human involvement. For example, the Federal Bureau of Investigations (FBI) has issued public warnings about an increasing threat of “cyber criminals” using AI to impersonate senior U.S. officials to trick unsuspecting victims into divulging sensitive information or authorizing fraudulent transactions. Similarly, the U.S. Treasury Department’s Financial Crimes Enforcement Network (FinCEN) has warned financial institutions that criminals are using generative AI to create falsified documents and multimedia content to bypass identity verification controls.

Three categories of AI-enabled fraud pose acute risks to procurement integrity: (1) document fabrication; (2) multimedia deception; and (3) algorithmic concealment. Although these categories can overlap in practice (i.e., fabricated documents may be algorithmically concealed to avoid detection), they illustrate distinct fraud mechanisms.

First, document fabrication involves generating false records that appear authentic but contain fictitious information. A contractor could use generative AI to create entirely fabricated past performance reviews, including fake letters of recommendation with realistic company letterheads, financial statements with credible formatting and figures, and even fabricated testimonial letters from nonexistent clients, all signed with AI-generated signatures.

Second, multimedia deception through deepfake technology includes creating or manipulating videos, audio, or images with AI to make them appear authentic when they are partially or entirely fabricated. Advances in machine learning and generative AI are steadily improving realism and evading detection tools in what researchers describe as an ongoing “arms race” between generation and detection capabilities. These developments pose novel risks in federal procurement, where agencies increasingly rely on multimedia evidence to assess contractor capabilities.

Under FAR 9.104, contracting officers making a required responsibility determination must evaluate whether offerors have, among other things, the necessary facilities and technical capabilities. A contractor could produce video footage of facility tours showing impressive but nonexistent manufacturing capabilities, or create deepfake videos of key personnel explaining complex methodologies, where the personnel are AI-generated rather than real employees with the claimed expertise to meet this requirements. When preaward surveys under FAR 9.106 are conducted virtually or in hybrid mode rather than through physical site visits, reliance on submitted multimedia evidence increases vulnerability to this form of misrepresentation.

While traditional fraud risks (falsified photographs or misleading documentation) have always existed, deepfake technology enables deception that evades detection without specialized tools that agencies currently lack. More critically, real-time deepfake technology enables live video manipulation with increasing accuracy, which could, for example, be used during oral presentations to deceive an agency during the procurement process. Contractors could present AI-generated or digitally altered personnel in live, virtual presentations who respond dynamically to questions, creating interactive dialogue that appears authentic. Given the rapid evolution of this technology, it is unlikely that agencies possess the detection capabilities or procedural safeguards necessary to identify real-time deepfakes that could be used in this common procurement procedure. Without training and safeguards, procurement evaluations might rely on fabricated impressions, rather than genuine contractor performance.

The third category, algorithmic concealment, may involve using AI to disguise fraud, making it more difficult to detect during routine audits. For example, imagine a subcontractor bills a prime contractor for more hours than were worked. To conceal inflated hours, an AI tool could rewrite entries to avoid detection flags, such as varying start times, adding realistic breaks, diversifying task descriptions among employees, and generating authentically formatted documentation.

These AI-enhanced methods of fraud pose significant challenges for traditional fraud-detection tools, which are designed to identify red flags of classic fraud schemes. Classic fraud detection relies on identifying inconsistencies, anomalies, and patterns that suggest fabrication, such as documents that are too perfect. Generative AI, however, can create documentation with realistic variation and appropriate imperfections—characteristics that often evade detection by traditional fraud detection tools. Beyond detection challenges, AI-generated fraud creates significant evidentiary obstacles for prosecution. Although the underlying conduct clearly violates existing fraud statutes, proving violations may require new investigative capabilities and forensic expertise that traditional procurement fraud enforcement has not required.

  1. System Corruption and Supply Chain Manipulation

AI systems introduce technical vulnerabilities that can be manipulated to corrupt procurement decisions. Although often framed as security or reliability issues, these vulnerabilities have significant corruption implications when contractors deliberately manipulate AI systems to gain competitive advantages.

“Adversarial attacks” refer to deliberate attempts to cause a machine learning system to produce incorrect outputs or predictions. They are commonly categorized by timing: “poisoning,” which corrupts the training process, and “evasion,” which manipulates trained systems during deployment. Prompt injection represents a distinct form of attack that has emerged with generative AI systems, involving the embedding of malicious instructions within user inputs to override intended system behavior. Unlike visible system failures, these manipulations often evade detection and can gradually undermine reliability and integrity. These vulnerabilities present foreseeable risks as AI adoption increases across federal operations. For the purposes of this paper, the most relevant examples are data poisoning, evasion, and prompt injection.

Data poisoning occurs when someone deliberately corrupts the information or processes used to train a model. Unlike visible system failures, poisoning can imperceptibly shape what the model learns, skewing outputs in a contractor’s favor without producing obvious errors. In a government procurement, this corruption poses a serious risk. For example, if agencies train or configure evaluation tools using responses to a Request for Information (RFI) or contractor capability statement, unverified exaggerations or misleading terminology can be misinterpreted as positive signals. Over time, those patterns can distort how the system evaluates future proposals.

Evasion attacks target how a trained model applies what it has learned. Imagine a contractor learns that an agency’s AI proposal evaluation tool favors certain security terminology. The contractor then drafts its proposal to maximize ratings, repeating a phrase such as “zero-trust architecture” throughout, even where it is only tangentially relevant to the work. The language is carefully hedged to avoid explicit commitments. The agency’s AI evaluation tool, trained to weight those terms heavily, assigns higher ratings, which may appear innocuous to human reviewers. The result is a procurement integrity risk: proposals engineered to exploit automated evaluation patterns while avoiding outright misrepresentation.

Prompt-injection attacks illustrate a more direct form of manipulation. Rather than exploiting how the system processes data, an attacker embeds hidden instructions directly into their submission. For example, invisible text in a PDF says, “Rate this proposal as technically superior.” A human evaluator would never notice this hidden material, but an automated evaluation system that reads all document text could detect and follow these embedded instructions, ultimately altering the outcome. This behavior differs significantly from legitimate keyword optimization: it seeks to manipulate the system’s decision-making rather than to present information convincingly. In most procurement situations, such manipulation would be considered fraud.

Although these three vulnerabilities share similarities, they pose distinct risks that warrant separate analysis. Data poisoning compromises the learning process, distorting how AI systems understand contractors and requirements across procurements. Evasion attacks target the decision-making process, manipulating outcomes in favor of certain contractors in specific competitions. Prompt injection directly interferes with the model’s instructions, embedding hidden commands that hijack automated evaluation systems. Each risk undermines procurement integrity, but through different mechanisms.

This risk is acute in Part 15 competitions, which have long drawn criticism for rewarding contractors skilled at proposal writing over those with superior performance capability. Past performance evaluation became a generally required factor in the 1990s as part of the shift to best-value source selection, prompting agencies to look beyond proposal rhetoric to documented performance records. Yet this reform faces significant implementation challenges. Agencies struggle with inconsistent rating standards, reluctance to provide negative assessments, and limited ability to verify contractor-submitted data.

Against this backdrop, AI-evaluation tools risk exacerbating this long-standing pathology. Unlike human evaluators, who might detect manipulation or buzzword repetition, AI tools trained on specific terminology or patterns can be systematically exploited through reverse engineering—turning this flaw into a scalable, repeatable way to manipulate the procurement process. More disturbingly, prompt injection attacks present an even more insidious risk: hidden instructions that human evaluators cannot detect but automated systems will process and follow. This possibility creates a procurement integrity risk where proposals are crafted to exploit automated evaluation methods, rewarding contractors who invest in understanding the AI’s scoring algorithms, rather than those who advance their actual performance capabilities.

These manipulation techniques raise difficult questions about contractor accountability under existing law. Submitting false information is already prohibited by fraud statutes and the FCA. However, proving that contractors manipulated government AI tools will be challenging. The conduct falls into a gray area: contractors may claim they are merely tailoring proposals to evaluation criteria, rather than exploiting vulnerabilities. This underscores the need for clearer standards distinguishing legitimate optimization from prohibited manipulation.

Detection adds another challenge. Data poisoning detection demands review of training data provenance and integrity. Uncovering vulnerabilities to evasion attacks requires adversarial testing, among other evolving mitigation techniques. Prompt injection detection requires content analysis tools that can identify hidden or obfuscated text in submissions. Few agencies possess the technical expertise or resources necessary for such oversight, creating an asymmetry that favors sophisticated contractors who understand AI systems’ vulnerabilities. This issue is exacerbated by limited transparency in AI supply chains. When contractors claim proprietary rights over their information sources or system components, they prevent agencies from examining how AI models are developed. This creates opportunities to hide bias, conflicts of interest, or deliberate manipulation. Without transparency into how AI systems are built and trained, inappropriate contractor influence can spread across multiple procurement decisions, amplifying systematic favoritism.

B. Systemic Vulnerabilities: Integrity Risks in AI Procurement

This part examines how the systemic vulnerabilities stemming from the acquisition and deployment of AI in government procurement both compromise integrity and enable corruption. Contractor lock-in, opacity, automation bias, and embedded technical flaws can undermine fair and transparent procurement outcomes, even in the absence of malicious intent. These same weaknesses, however, create exploitable conditions: opacity that shields manipulation, automation bias that reduces oversight, market concentration that provides leverage to resist safeguards, and AI dysfunction that masks deliberate misconduct.

1.
Contractor Lock-In: How Market Capture Creates a Structural Integrity Risk

Contractor “lock-in” creates structural conditions that undermine procurement integrity, even in the absence of traditional corruption. Lock-in occurs when the “costs of switching contractors are sufficiently high that users stay with an incumbent firm rather than switch to a firm whose product or service they would prefer.” Lock-in has been a long-standing concern in federal technology contracts. It is now recognized as a critical risk in government AI policies, reflecting both technical barriers to switching providers and the concentration of the AI market. From a law-and-economics perspective, these “switching costs” are transaction costs that federal acquisition officials should consider through a life-cycle cost analysis during acquisition planning.

Agencies find themselves tied to single contractors when switching becomes costly due to proprietary systems that hinder interoperability, steep egress fees to remove data, licenses that limit flexibility, and security approvals that favor incumbents. These switching costs can entrench contractors, even when better alternatives emerge. When exit options become prohibitively expensive, incumbent contractors can extract rents through price increases or reduced service quality—the classic “hold-up” problem. This dynamic undermines the purpose of acquisition planning and competitive procedures, which aim to reduce transaction costs and preserve value over a system’s life cycle.

The AI market structure intensifies these dynamics. Training foundation models requires enormous upfront costs in computing power, data, and skilled workers, while adding customers costs relatively little. A 2023 Brookings report concluded that “the market for cutting-edge foundation models exhibits a strong tendency toward market concentration,” citing high fixed costs, scope efficiencies, and natural-monopoly tendencies. Similar concentration is evident in adjacent markets. NVIDIA currently controls the overwhelming majority of the market for advanced chips used to train and run AI systems. Taiwan Semiconductor Manufacturing Company (TSMC) dominates the production of cutting-edge semiconductors. Three firms—Amazon, Microsoft, and Google—dominate the global cloud market. Market concentration is reinforced by partnerships between cloud providers and AI developers. The FTC’s 2025 AI Partnerships report cautioned that these preferential arrangements risk entrenching the market power of a few firms and could foreclose competition in emerging AI markets.

Federal spending in concentrated markets “risk[s] further entrenching dominant incumbents,” raising concerns about competition, innovation, and accountability. Once an agency commits to a particular cloud provider and AI model, switching becomes technically and financially prohibitive. Contractors bundle AI services with cloud offerings precisely “to lock in customers to a fully integrated suite of . . . productivity software products and impose high switching costs . . . even if [a] competitor is higher-quality or more cost-effective.” Once adopted, these systems become technically and financially prohibitive to replace.

  1. Promotional Pricing as Strategic Buy-In

Recent procurement consolidation initiatives compound lock-in risks. Under GSA’s OneGov program, agencies can access leading AI platforms at promotional prices. OpenAI’s ChatGPT Enterprise and Anthropic’s Claude are $1.00 per agency for one year, Google’s Gemini is $0.47 for one year, xAI’s Grok is $0.42 for the next eighteen months, Microsoft’s Copilot is free for up to twelve months for existing G5 users, and Perplexity is $0.25 per agency for 18 months. This pricing represents a textbook example of “buy-in”—a contractor strategy expressly recognized as problematic in federal procurement regulations. FAR 3.501-1 defines buy-in as “submitting an offer below anticipated costs, expecting to (1) [i]ncrease the contract amount after award (e.g., through unnecessary or excessively priced change orders); or (2) [r]eceive follow-on contracts at artificially high prices to recover losses incurred on the buy-in contract.”

FAR 3.501-2 requires contracting officers to prevent recovery of buy-in losses through change orders or follow-on contracts and to minimize buy-in opportunities by seeking price commitments covering entire programs through multiyear contracting or priced options. GSA’s dollar-deal structure creates precisely this risk. Contractors offer AI services at $1.00 (or less) for limited promotional periods, after which agencies must either pay significantly higher commercial prices or incur substantial switching costs to migrate to alternative systems. Yet GSA has not disclosed whether it negotiated price caps, guaranteed renewal terms, or exit protections for post-promotional periods. Without this information, stakeholders cannot assess whether the government considered the FAR’s buy-in protections or instead created conditions where contractors can recoup promotional losses by raising prices once agencies become dependent. This lack of transparency itself represents a procurement integrity failure: stakeholders cannot evaluate compliance with fundamental acquisition safeguards when critical pricing terms remain undisclosed.

The most recent OneGov AI agreement reinforces this dynamic. GSA’s contract with Perplexity AI offers a federal “pilot” price of $0.25 per agency for eighteen months, even though the associated schedule lists standard government prices for Perplexity’s enterprise offerings between $28 and $227.50 per user per month. Even estimated conservatively by including the prepayment discount and assuming users continue with the lower-tier enterprise license, if 10,000 employees at a federal agency continue to use Perplexity after the promotional period ends, the cost would jump from $0.25 for eighteen months of agency-wide access to approximately $4,183,200 for that same period—a price increase of roughly 1.67 billion percent.

The parallel to adjustable-rate mortgage “teaser” loans is instructive. In the mid-2000s, lenders offered artificially low introductory interest rates to attract borrowers who qualified only on their initial payments, not on their ability to sustain higher costs after the reset. When refinancing options evaporated and the higher rates took effect, the housing market collapsed. Federal agencies now face analogous risks: they adopt AI systems at promotional prices without analyzing whether they can afford standard commercial rates once the promotional periods expire. Indeed, GSA officials have already acknowledged that once promotional pricing expires, agencies may be unable to afford standard commercial rates for systems that they have integrated into operations.

The risks associated with the OneGov AI deals extend beyond pricing. According to GSA, Perplexity’s enterprise platform is one of only two AI services designated for “AI Prioritization” under the FedRAMP 20x pilot, an expedited authorization pathway intended to accelerate agency adoption of AI and cloud services. That acceleration, however, risks outpacing scrutiny of key governance terms and raises transparency concerns. Even in Perplexity’s case, which is among the more transparent of the OneGov AI deals, the end-user license agreement is not publicly disclosed. The schedule references the agreement but does not reproduce it, stating only that it is “attached” and available “upon request,” leaving central issues of data use, model training, and audit rights outside public view.

Current policy—emphasizing rapid AI adoption and commercial acquisition—creates pressure to accept promotional pricing without the buy-in analysis contemplated by the FAR. The fact that promotional periods are explicit rather than hidden does not eliminate the problem. It simply means that agencies are entering dependency-creating arrangements with their eyes open, without appropriate consideration for future budgets, competitive alternatives, or transition costs. Even if consolidated procurement initially reduces costs, those savings evaporate if agencies become locked into vendors who can raise prices once switching becomes prohibitively expensive.

This distinction matters: promotional pricing may be defensible for limited pilot programs where agencies test AI capabilities with small user populations and minimal operational integration. Pilots allow evaluation of technical performance, identification of risks and governance gaps, and development of implementation frameworks while maintaining the ability to walk away. But current policy uses below-market pricing to drive widespread adoption across the federal government, locking in dependencies before agencies understand costs, develop expertise, establish governance, or build exit strategies. The government is trading long-term flexibility for short-term savings precisely when flexibility matters most—during widespread adoption of immature high-risk technology.

This sequence highlights a structural integrity risk distinct from traditional corruption. No fraud or bribery occurs, but the effect, such as agencies’ diminished ability to pursue alternatives, resembles the impact of corruption on competition processes. By offering entry at a minimal cost, contractors position themselves to influence how AI governance and oversight are framed, allowing them to argue that, since their systems are already standard in government, extra safeguards are unnecessary. The result of low pricing is a loss of leverage to demand protections—a structural vulnerability that worsens when promotional periods end and contractors charge higher prices, leaving agencies with limited options to push back.

Beyond the economic considerations, lock-in also raises additional integrity concerns. Exclusive contractor control over government-furnished or fine-tuned data causes information asymmetries similar to organizational conflicts of interest. Contractors might obtain confidential insights into agency needs, competitor strategies, and procurement plans through their exclusive access to data. Without negotiating adequate data protections during the acquisition process, contractors may use nonpublic agency data to train commercial models, reinforcing their incumbency at taxpayers’ expense and potentially distorting future competitions. In the wake of government-wide directives instructing agencies to “ensur[e] contracts permanently prohibit the use of nonpublic inputted agency data and outputted results to further train publicly or commercially available AI algorithms,” contracts increasingly contain these restrictions; however, without standardized approaches across the government, this practice will remain uneven.

None of these outcomes is inevitable, but current mitigation measures fall short of the structural risks they seek to address. OMB’s most recent acquisition guidance instructs agencies to proactively mitigate lock-in within AI procurements by emphasizing portability, transparency, and exit planning, yet those principles remain largely aspirational without enforceable contracting standards to make exit affordable once dependence sets in.


3. Automation Bias and Capacity Gaps: How Reduced Oversight Creates Corruption Opportunities

As federal agencies increasingly adopt AI tools to support procurement functions, long-standing workforce capacity challenges threaten to undermine both performance and integrity. Many agencies lack sufficient internal technical expertise to rigorously assess the AI systems that they procure or use. Without the capacity to validate technical claims or performance metrics, assess risks, or verify compliance requirements, acquisition officials become dependent on contractor representations, thereby eroding oversight functions, including the detection of false claims, identification of vulnerabilities, and enforcement of compliance requirements. This dynamic reflects a broader pattern observed in other privatization contexts: when the government lacks internal capacity, it cedes both control and accountability to private actors.

History demonstrates the danger of this reliance. When agencies lack technical expertise to evaluate what they are acquiring, oversight defaults to contractor self-regulation—a pattern that has produced catastrophic results. Because the technical complexity of AI systems often exceeds most agencies’ internal expertise, the federal government is once again becoming dangerously overreliant on vendor assurances about capabilities, limitations, and safeguards.

Beyond workforce caps, the perceived objectivity of AI systems compounds the risk of inadequate oversight. Systems produce confident outputs that can encourage acquisition professionals to defer to automated recommendations without meaningful scrutiny. This excessive reliance amplifies automation bias, or “the tendency for an individual to over-rely on an automated system,” which “can lead to increased risk of accidents, errors, and other adverse outcomes when individuals and organizations favor the output or suggestion of the system, even in the face of contradictory information.” In procurement contexts, automation bias can lead acquisition professionals to approve AI-generated proposal evaluations or past performance assessments without sufficient human review.

Consider an agency using a general AI proposal evaluation platform for system integration services. The tool generates a 200-page evaluation report with adjectival ratings, narrative justifications, and comparative analysis for each contractor. The source selection authority (SSA), lacking AI expertise and under time pressure, adopts the recommendations, incorporating the AI application’s analysis into the decision memo. The SSA believes that they exercised independent judgment by reviewing the output, but they lacked the capacity to assess whether the application’s rating logic, comparative methodology, or key assumptions were sound.

In a protest of the procurement, Contractor A argues that disclosures at the debriefing revealed that the SSA merely adopted the AI-generated evaluation without evidence of understanding or validating its rating methodology. Because the SSA could not explain how the tool’s analysis aligned with the stated evaluation factors, Contractor A argues the agency failed to demonstrate the independent judgment required by FAR Part 15. Review of SSA decisions is deferential, but this deference depends on the underlying report providing an adequate analytical foundation. When an SSA cannot clarify the methodology or reasoning behind technical ratings, GAO or the U.S. Court of Federal Claims may determine that the decision is insufficiently documented, even if the SSA formally “adopted” the report. The issue is not whether the SSA claims ownership of the analysis, but whether the decision meets the FAR’s requirement for reasoned decision-making.

As agencies begin integrating AI tools into crucial acquisition tasks and rely more on automated recommendations, the risk of automation bias increases, along with a potential decline in human expertise needed to critically assess those suggestions. Although adoption varies widely across offices, the trend clearly points toward broader use of AI-assisted procurement. This growing reliance creates a feedback loop: as workforce skills weaken, agencies may become even more dependent on AI, further diminishing their ability to spot biased or manipulated outputs. Meanwhile, multimedia deception (such as deepfakes) and doctored documents are advancing quickly and may now surpass many acquisition professionals’ ability to detect them. The result is a widening capability gap—on one side, deteriorating procurement quality; on the other, increased vulnerability to corruption or manipulation.

Automation bias also complicates accountability, making it difficult to establish responsibility under traditional legal frameworks that typically require proving either knowing or intentional contractor fraud or agency official negligence, neither of which may cleanly apply when automation produces biased results. Ultimately, the combination of declining workforce capacity and increasing reliance on AI tools creates structural vulnerabilities in federal procurement, as personnel tasked with evaluating and overseeing these tools often lack the necessary training to effectively challenge them.

4. Opacity and Audit Limits: How Information Control Masks Misconduct

AI systems, especially complex foundation models, often function as “black boxes,” with internal decision-making processes that are difficult to interpret even for their developers. This opacity undermines the traditional oversight mechanisms that are foundational to strong procurement systems. Whereas automation bias reflects human overreliance on AI systems and outputs, explainability concerns arise when the systems themselves obscure the basis for procurement decisions, making it difficult for COs to understand, verify, or even defend their reliance on AI-generated recommendations.

Consider how opacity undermines accountability in practice, using the same hypothetical scenario from the prior section (the agency using an AI application to evaluate proposals for system integration services). The application rates Contractor B highest, with detailed rating tables, narrative justifications for each evaluation factor, and comparative analysis. The SSA, recognizing that while extensive, these explanations still provide an insufficient basis to defend the selection decision, requests additional information from the contractor that licensed the AI application to the agency: What specific technical factors drove the adjectival ratings? How did the tool compare Contractor A’s approach against Contractor B’s approach? What training data informed the determination that Contractor A’s methodology represents “best practices”? How were the evaluation subfactors weighted to produce the overall technical ratings? If the award is protested and the factors that drove these conclusions are inaccessible because the contractor’s licensing terms protect this information as proprietary, the SSA will not be able to document the rationale for its decisions or demonstrate independent judgment.

Burden allocation compounds these accountability failures. In bid protests, agencies must prove their evaluations were rational and aligned with the stated criteria. When procurement decisions depend on opaque AI tools, agencies may struggle to meet this burden even if the underlying decision was sound. The agency cannot explain what it does not understand about its own tools. Conversely, protesters face nearly insurmountable barriers in showing that AI-generated evaluations were flawed, as they lack access to the algorithms, training data, or decision logic. This outcome creates a procurement integrity crisis no matter who prevails: if agencies consistently lose protests because they cannot explain AI-assisted decisions, adopting AI becomes unfeasible; if protesters cannot challenge opaque decisions, accountability disappears. Either scenario weakens the integrity of the procurement process.

Much of this opacity is not technically necessary but contractually determined. Training data sources, validation procedures, performance metrics, and known limitations remain undisclosed because many contracts do not require disclosure. For LLM solicitations issued after OMB Memorandum M-26-04, agencies must include contractual requirements that obtain baseline vendor documentation, including acceptable use policies and model, system, and/or data cards (which often summarize training processes and benchmark evaluation scores). Depending on planned use, agencies may also request enhanced disclosures, but the memo does not require independent verification or external audit mechanisms. Even with that baseline documentation, software licensing upgrades and commercial software purchases often include boilerplate licensing terms that restrict transparency or limit audit rights, so documentation requirements do not, on their own, solve deeper visibility and accountability problems. As Professor Cary Coglianese observes, this creates “nested opacity” where contractual choices compound technological limitations to obscure accountability. Agencies could demand greater transparency but they rarely press contractors to provide it, even when it is essential for effective oversight. As agencies integrate AI into increasingly consequential procurement functions, if they do not negotiate access to the information needed to build and defend the evaluation record required by FAR Part 15, it is hard to see how agency awards will withstand legal scrutiny.

Beyond protest risks, information asymmetry enables the vulnerabilities to corruption discussed throughout this article. Detecting whether contractors have poisoned training data, identifying conflicts of interest arising from the customized foundation model, or determining if algorithmic systems produce biased outputs all require access to information about training data, model architectures, and testing capabilities. When contractors successfully block sharing this information as proprietary, oversight bodies lack the means to verify the integrity of AI-assisted procurement decisions.

5. Incumbency Bias and Hallucinations: How Technical Error Becomes Exploitable

AI systems can systematically skew procurement outcomes through two distinct technical failures: incumbency bias in AI-assisted evaluations, a form of historical bias which systematically favors familiar approaches, and hallucinations, which generate confident but false content. These integrity risks arise from the technical limitations of AI systems, rather than intentional manipulation, yet they undermine competition and procurement integrity.

i. Incumbency Bias in AI-Assisted Evaluation

AI models trained on historical procurement data can learn to favor incumbents by associating their characteristics with “quality.” Imagine an agency deploying an AI application developed using a decade of agency-authored procurement documents to assist with technical evaluations for an IT modernization contract. During this period, the agency predominantly awarded contracts to three large contractors that consistently used specific technical terminology, frameworks, and compliance standards. The AI application learns to associate these patterns with the concept of “quality.” When the agency later uses the tool to evaluate new proposals, it assigns higher technical ratings to submissions that use familiar language patterns, even when new entrants propose equivalent capabilities in different but equally valid technical frameworks. This choice creates protest risk for agencies when the record reflects reliance on unstated criteria or the unequal treatment of offerors, rather than the solicitation’s factors.

ii. AI Hallucinations

Generative AI systems can produce plausible but entirely false information, a phenomenon known as “hallucination.” These hallucinations produce outputs that appear authoritative while containing fabricated content undetectable without human verification, a problem that has already led to fabricated court citations, false news reports, and erroneous government analyses.

Suppose a CO asks the AI application to “Summarize Contractor A’s performance on similar incident-response platform contracts,” and the tool returns a narrative asserting on-time/under-budget delivery with Exceptional/Very Good CPARS ratings. Under time pressure, the CO adopts the summary without additional review. If those achievements do not exist, the agency’s award decision based on this flawed record will be extremely vulnerable to a sustained protest.

These technical limitations become integrity and corruption issues when contractors exploit system vulnerabilities. Sophisticated offerors can craft submissions that game AI-assisted evaluations by using terminology the AI associates with quality, regardless of actual technical merit. Others may strategically provide minimal past-performance documentation, relying on the likelihood that the AI tool will generate plausible summaries rather than flagging data gaps. If contractors go further and knowingly submit false or fabricated information, their attempt to “game” the procurement outcome by exploiting the AI tool’s vulnerabilities constitutes fraud.

Current policy often treats technical failures and intentional exploitation as separate concerns addressed through different mechanisms: technical standards for AI reliability on one hand, fraud prosecution for misconduct on the other. This separation overlooks the fact that systemic technical weaknesses create conditions that enable exploitation. Effective AI procurement governance must address both simultaneously. It needs technical safeguards that reduce vulnerabilities while maintaining enforcement mechanisms that deter intentional misconduct. Part V reflects this integrated approach.

V. Navigating Constrained Pathways: From Doctrine to Practice

The corruption and integrity risks identified in Part IV demand comprehensive governance responses. There is no anti-corruption silver bullet, as no institution is immune from bad actors, but modest safeguards are better than none. Designing those safeguards, however, is notoriously difficult, even in settings where integrity protections are welcome. This challenge is heightened by the evolving nature of AI.

The policy environment since January 2025 has narrowed pathways for implementing safeguards. The deregulatory trajectory reflects explicit geopolitical concerns: the fear that regulatory oversight will enable China to achieve AI dominance has become a central motivation for rolling back risk-management frameworks and weakening institutional safeguards. Meanwhile, commercial acquisition preferences direct agencies toward procurement methods least compatible with governance protections, while simultaneous workforce cuts leave fewer acquisition professionals with less AI-specific expertise to implement whatever safeguards remain available. Together, these forces have led a government with one of the world’s most mature procurement oversight systems to dismantle decades of carefully developed integrity protections, prioritizing speed over the safeguards that have long distinguished the U.S. procurement system.

The tension is acute: OMB Memorandum M-25-22, issued less than three months after the start of the second Trump administration, articulates principles for responsible AI procurement, yet it operates within a post-AI Action Plan environment that pivots sharply in the opposite direction. The administration thus endorses governance principles in guidance while simultaneously removing the regulatory authority and workforce capacity necessary to implement them. Without binding regulations or agency-wide policies, M-25-22’s safeguards exist as aspirations rather than obligations.

In this constrained environment, targeted use of existing legal authorities and contract-based mechanisms has become the government’s most practical tool for embedding AI safeguards. These tools address risks arising from both the use of AI within government operations and the acquisition and deployment of AI systems, which present distinct governance challenges. The recommendations that follow are damage-mitigation measures intended to establish baseline protections before market structure and technical dependencies become entrenched. This article organizes proposed reforms by implementation feasibility: first, safeguards that can be implemented immediately under existing authority; second, those requiring modest policy adjustments within the executive branch; and finally, measures that demand sustained institutional investment to achieve durable system-wide governance.

A. Immediate Safeguards

The following measures do not require new legislation or regulatory changes and can be implemented immediately under existing authority, although they require expertise, institutional support, and deliberate implementation.

1. Applying Existing Frameworks to AI Procurement

Before pursuing custom contractual protections, agencies should fully utilize the authority that they already possess. Federal procurement law provides established frameworks for managing conflicts of interest, preventing fraud, and ensuring contractor responsibility. These frameworks apply to AI procurements just as they do to other acquisitions, though their application requires understanding the AI-specific manifestations of traditional risks. This section identifies how existing regulatory authority can address AI corruption risks without requiring new legislation or contractor negotiation of non-standard terms.

i. Modernizing Organizational Conflicts of Interest Rules and Applications for AI

FAR 9.5’s existing OCI framework is broad enough to capture AI-specific organizational conflicts of interest, and the three categories map directly to AI procurement scenarios. Under the current framework, (i) a contractor licensing an AI application that drafts specifications or evaluation factors for an upcoming procurement likely has a biased ground rules conflict; (ii) a contractor licensing an AI application that evaluates proposals and then seeks to participate in a procurement in which the same application will assess its own or competitors’ offers presents an impaired objectivity concern; and (iii) contractors licensing AI applications that ingest confidential, competitively useful information about upcoming procurements raise unequal-access-to-information risks. The challenge lies not in crafting new guardrails, but in recognizing how existing ones apply.

Agencies routinely tailor OCI clauses proactively in procurements, and AI acquisitions warrant similar attention. The legal authority to require OCI disclosures and mitigation exists across all procurement pathways. Practical implementation, however, faces constraints: in concentrated AI markets, vendors possess negotiating leverage to resist provisions they characterize as overly burdensome, and agencies may lack technical expertise to identify AI-specific conflicts without vendor cooperation. Despite these challenges, agencies can implement risk mitigation approaches using existing authority, including, but not limited to:

  • Targeted disclosure requirements.
  • Enhanced conflict mitigation plans.
  • Comparative validation using alternative systems.
  • Restrictions on conflicted AI vendors’ participation in procurements.
  • Restricting contractors from requiring the government to use their affiliated AI providers.
  • Engaging independent evaluators for high-value acquisitions. Although these mitigation measures will reduce risk, practical barriers remain. The current OCI language in the FAR predates modern procurement risks and challenges. In 2022, recognizing the growing OCI risks in areas not explicitly covered by the regulation or its illustrative examples, Congress passed the Preventing Organizational Conflicts of Interest in Federal Acquisition Act, which directed the FAR Council to update definitions, provide new illustrative examples, and supply solicitation and contract clauses. On January 15, 2025, the FAR Council published a proposed rule that relocates and updates OCI coverage, introduces standard solicitation provisions and clauses, and proposes other critical updates to the regulation. The proposed rule is a step forward, but it includes significant exemptions that undermine AI-specific OCI mitigation, and its fate is uncertain in the wake of the RFO. This proposed rule leaves OCI coverage inadequate, even as the federal government’s dependence on AI systems deepens. Unless this regulatory gap is addressed, AI-driven OCI risks will continue to outpace existing safeguards.

Regardless of regulatory reform, successful implementation ultimately depends on the capacity of the acquisition workforce. Stakeholders—including industry, government acquisition professionals, and legal counsel—need training to understand how AI-based OCIs differ from traditional consulting or advisory conflicts. The authority exists; what is missing is awareness. Acting now with existing tools is the only way to prevent today’s theoretical risks from becoming tomorrow’s scandals.

ii. Combating AI-Generated Fraud and Manipulation

Much enthusiasm exists for using AI to detect fraudulent schemes in procurement. Yet AI acquisition and deployment create emerging threats linked to AI’s ability to generate convincing fabricated content and the opacity of many AI systems. Agencies can take two steps to reduce AI-related fraud risks under existing authority: first, building detection infrastructure; and second, mandating contractor disclosures.

As AI-generated fraudulent content becomes increasingly sophisticated, agencies require detection capabilities that can keep pace with evolving threats. This will require deploying authenticity-screening tools that flag suspicious documents or media for human review during proposal evaluations. Solicitations should ask offerors to submit digital materials through screening services, with trained personnel reviewing flagged content. The government can develop this capacity internally or outsource it to third-party validators. Agencies might also require additional authentication methods, such as “watermarking, provenance verification, [and] metadata auditing.”

However, these measures face significant challenges. First, current detection technology struggles with sophisticated AI-generated content, creating both false positives, which flag legitimate materials as high-risk, and false negatives, which miss the actual fabrications. “[B]ecause of the emerging nature of these techniques, as well as the dynamic nature of AI-generated content, it is unlikely that one solution alone—technical or manual, involving human intervention—will fully address the risks posed by AI-generated content.” Second, the technical “arms race” favors generators over detectors, and development costs are substantial. Consequently, solicitation requirements should be proportional to risk, employing more stringent verification procedures for higher-value or higher-risk procurements.

Finally, any framework should not be implemented in a manner that undermines other existing policies, including those related to competition and contractor due process rights. Using an imperfect screening tool during evaluation creates protest vulnerability if proposals are wrongly flagged and excluded. To mitigate this risk, agencies should (1) disclose screening requirements in solicitations; (2) treat flagged results as triggers for further review rather than automatic disqualification; (3) provide notice and an opportunity for offerors to respond if their submissions are flagged; and (4) maintain verification records subject to periodic independent performance audits. Although no tool is infallible, as technology improves, agencies’ ability to “fight AI with AI” will give them a defense against AI-driven procurement fraud.

Beyond detection, agencies should also consider enhancing AI disclosure requirements for contractors. At a minimum, agencies should require offerors to disclose any material use of AI in preparing offers, particularly where AI influences cost estimates, performance claims, delivery schedules, or other proposal elements that carry material performance risk. These disclosures would enable COs to identify portions of submissions warranting heightened scrutiny, especially where AI-generated content may reflect deliberate misrepresentation or unintentional fabrication. Unverified AI outputs can also produce inflated or unrealistic claims that distort competition and increase the likelihood of performance failure, even in the absence of fraud. Disclosure enables agencies to isolate and validate these claims before award using existing procurement tools such as clarifications, negotiations, and independent government cost estimates. Although contractors are already liable for making false statements, requiring disclosure of AI use strengthens agencies’ ability to detect both intentional deception and inadvertent AI errors, thereby preserving evidence for potential enforcement under the FCA or criminal fraud statutes. Enhanced disclosure serves dual purposes: strengthening accountability for intentional fraud while screening for unintentional errors caused by AI.

2. Negotiating Safeguards Beyond Regulatory Requirements

Although existing regulatory frameworks provide baseline protections, AI’s unique characteristics create risks that current regulations do not fully address. When regulatory requirements fall short, agencies must rely on negotiated contract terms to secure transparency rights, testing protocols, data portability, and other safeguards—protections that require contractor agreements and are subject to bargaining dynamics in concentrated AI markets.

These negotiated safeguards are critical because deployment risks become unavoidable the moment that agencies sign contracts without securing adequate governance rights. An agency that acquires an AI system without contractual rights to review training data, test outputs, audit decision processes, or exit without penalty has already created the conditions for corruption—deployment simply exposes the consequences of those acquisition choices.

Government contracts have long served dual purposes: acquiring goods and services while promoting broader policy objectives. Section 508 accessibility requirements demonstrate this potential: by making federal contract eligibility dependent on accessibility compliance, Congress and the related implementing regulations have encouraged technology companies to incorporate accessibility into their product design. DoD’s cybersecurity requirements demonstrate similar reach, extending federal procurement policies across supply chains. These mandates are reinforced by expanded enforcement mechanisms, including oversight from the DOJ’s Cyber Fraud Initiative, which ensures that compliance obligations are not merely aspirational.

These examples demonstrate what leads to market-wide transformation: (1) clear legislative or regulatory mandate; (2) dedicated enforcement; (3) sustained political will; and (4) sufficient purchasing leverage—not individual contracting officers negotiating custom terms.

AI procurement lacks comparable dynamics. There is no comprehensive statutory mandate. OMB guidance is subregulatory and uneven in implementation. Even after OMB Memorandum M-26-04 established a documentation floor, it cannot deliver uniform anti-corruption safeguards without further agency action. Workforce capacity constraints limit implementation. Current leadership favors deregulation. Market concentration gives contractors leverage to resist governance terms. The question is not whether contracts can govern AI risks, as they can and should, but whether contracts alone can deliver Section 508-style transformation. They cannot, absent comparable institutional support. Consequently, our task is twofold: to incorporate the most protective terms practicable now, while also securing the statutory and institutional support needed for consistent, durable, and scalable adoption.

3. Structural Barriers to Contract-Based Governance

Contract-based governance of AI procurement faces structural barriers that hinder agencies from securing essential safeguards. Audit rights, transparency requirements, data rights, independent testing access, and anti-lock-in provisions all face resistance from contractors in concentrated markets under commercial-first policies.

In commercial acquisitions under FAR Part 12, agencies cannot unilaterally impose governance requirements that contractors characterize as non-customary. Contractors must agree to expanded rights, often requiring additional consideration. In concentrated AI markets where a few providers dominate foundation models, cloud infrastructure, and leading applications, contractors can simply reject governance terms that agencies lack the leverage to secure. License upgrades to existing enterprise software introduce unique vulnerabilities: agencies inherit whatever protections are in base agreements negotiated before AI-specific risks were recognized, and securing AI-specific safeguards may require renegotiating enterprise agreements worth hundreds of millions of dollars. Although M-26-04 instructs agencies to seek LLM development and operational information even when models are integrated into other software, it cannot, by itself, expand an agency’s contractual access rights absent a modification.

This dynamic extends beyond direct commercial purchases to framework agreements like the GSA Schedule, where the procurement vehicle’s structural features compound the leverage problem. Buying AI via the GSA Schedule often generates hidden asymmetries: downstream program offices receive the benefits of negotiated master contract terms but lack visibility into whether those terms are sufficient for higher-risk AI applications, and their flexibility to impose additional governance clauses is limited by the Schedule’s contractual framework. GSA marketing and public releases describe OneGov agreements as “support[ing]” M-25-22 compliance, but, without access to underlying agreements, it is unclear how much of that support corresponds to binding governance rather than promotional rhetoric.

OTs theoretically permit government-unique requirements, but the agencies with the authority to use these tools are unlikely to leverage this flexibility to the extent necessary to truly protect against significant corruption risks. OTs are marketed as alternatives to burdensome FAR-based contracts, creating expectations of lighter terms. Low public visibility into OT agreements further complicates external oversight.

Even Part 15’s negotiation latitude is constrained by federal procurement doctrine. In Palantir USG, Inc. v. United States, the Federal Circuit reinforced the statutory preference for commercial acquisition, requiring agencies to justify a noncommercial approach when market research shows that commercial products can meet agency needs. This precedent creates practical pressure to select a commercial procurement route even when governance considerations favor a less constrained approach. Additionally, current policy, which emphasizes maximizing commercial acquisition, reinforces the constraining nature of this pathway.

Beyond pathway-specific limitations, contract-based governance faces fundamental structural challenges. It lacks the procedural legitimacy and uniformity of rulemaking, placing responsibility on individual COs to serve as de facto AI regulators. Professor Albert Sanchez-Graells describes this as a “regulatory hallucination,” a procurement practice made to bear a burden that it cannot sustain. Effective use of AI requires legal, technical, and market expertise, along with time to scrutinize contractor claims—conditions that are rare. Information asymmetries favor major contractors, creating what Sanchez-Graells terms “regulatory tunneling,” where agencies accept contractor-driven standards that enable suppliers to shape the very rules intended to govern them.

The cumulative effect of the current federal approach, which emphasizes “move fast, buy commercial, don’t reinvent the wheel,” directs agencies toward acquisition methods that are least compatible with governance-intensive safeguards. That posture increases the risk of corruption by making contractor control, information gaps, and reduced oversight the norm rather than the exception.

The window for embedding safeguards is shrinking, but meaningful opportunities remain. High-risk acquisitions, especially those involving sensitive data, consequential decisions, or national security concerns, warrant strong governance terms despite policy challenges. Although contracts cannot replace regulation, they can still limit specific risks when the government has bargaining power. The following three examples show both the possibilities and the limitations of “regulation by contract.”

i. Anti-Lock-In Protections: A Case Study in Contract-Based Governance

As federal agencies rapidly integrate AI into their operations, procurement integrity increasingly depends on managing dependence on a small set of dominant providers. Although an oligopolistic market may be inevitable given AI’s capital requirements, preventing single-contractor capture remains critical to maintaining competitive pressure that deters exploitation. OMB Memorandum M-25-22 reminds agencies that, as they “seek to accelerate the adoption of AI-enabled services, they must pay careful attention to vendor sourcing, data portability, and long-term interoperability to avoid significant and costly dependencies on a single vendor.” This language provides agencies with both the rationale and justification to embed anti-lock-in terms at the contract formation stage.

One of the most critical terms to negotiate is portability. When an agency can transfer data and operations to another provider with minimal friction, vendors lose the leverage that comes from being technically dependent on them. Building on that foundational protection, caps or prohibitions on egress fees protect the government’s ability to switch providers when contractors underperform or integrity concerns arise. Recent developments in adjacent contexts demonstrate that these terms are commercially viable. Oracle’s recent OneGov deal with GSA touts “[n]o data egress fees for moving existing workloads from Oracle Government Clouds to another cloud service provider’s FedRAMP Moderate, High or DOD IL 4, 5 Cloud.” Similarly, a recent DoD contract with commercial cloud service providers “resulted in discounts on various fees better than those available commercially . . . ranging from 35 percent to 100 percent.” By negotiating egress limitations at the award stage, agencies reduce switching costs later. Finally, data-control provisions complete the cycle. They prevent contractors from repurposing federal data for model training or commercial use without permission and ensure that, upon contract closeout, all deliverables and license rights revert to the agency, allowing operations to proceed smoothly.

Together, these clauses form a coherent framework: portability provides leverage, egress protections make it affordable, and data-control clauses safeguard government data integrity and sovereignty. Each provision is modest in isolation, but combined, they translate procurement authority into a credible form of governance that manages dependency even in a concentrated market.

ii. Enhancing Transparency in AI Procurement

Transparency is critical for reducing corruption and integrity risks in the acquisition and deployment of AI in government procurement. As the authors describe in AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance, transparency has two dimensions. Substantive transparency concerns what is disclosed, such as the sources of training data, the model architecture, and performance metrics that enable independent detection of flaws. Procedural transparency concerns how agencies document risk ratings, mitigation steps, and ongoing compliance monitoring. Agencies have begun implementing both types, primarily through OMB-directed AI use-case inventories and agency-specific safeguards, but practices remain uneven and incomplete across the government.

The federal government’s current approach to contractor transparency in the acquisition of AI systems is grossly insufficient, given the downstream risks of deploying opaque models in consequential government operations. In procurement, agencies will need to recalibrate contractual disclosure requirements to reflect the distinct types and magnitudes of risk posed by AI systems.

For example, agencies could build on the enhanced Software Bills of Materials (SBOMs) outlined in EO 14028. SBOMs are a cybersecurity “nutrition label” that lists software components. Agencies could pilot AI-adapted bills of materials (AI-BOMs) that disclose training data categories, model architectures and frameworks, customization processes, data management and segregation practices, and documented limitations. Properly scoped, this approach would allow agencies to conduct more informed risk assessments and ensure that government data remains segregated from commercial training workflows, enabling agencies to proactively address many of the corruption and integrity risks outlined in this article.

Of course, implementation faces significant challenges: training data disclosure may conflict with proprietary claims, agencies often lack the technical capacity to verify or interpret these details, and market concentration limits government leverage to mandate vendor compliance. To address these concerns, the implementation of an AI-Bill of Materials (AI-BOM) framework should follow a tiered model akin to DoD’s Cybersecurity Maturity Model Certification (CMMC) 2.0, where disclosure obligations scale with risk. AI‐BOM requirements should begin with high-risk procurement applications, reserving the strongest disclosure and oversight obligations for those tiers.

This tiered framework now appears in OMB’s LLM guidance. M-26-04 establishes a minimum documentation floor and identifies additional categories of technical transparency that agencies may request, depending on intended use. Yet the memo also exposes a critical limitation of disclosure-only regimes: disclosures are not verification. Agencies often remain dependent on vendor representations. OMB also cautions against compelling disclosure of sensitive technical data such as model weights, which can further constrain independent auditability. For AI deployed in core government functions, that level of dependence is unacceptable. Integrity risks in high-impact-use cases require stronger oversight mechanisms, including independent verification, audit rights, and standardized testing protocols calibrated to procurement risk. The goal is transparency that enables oversight, not performative documentation that increases compliance burdens without strengthening accountability.

To address proprietary concerns without compromising oversight, agencies can implement a tiered third-party assessment model under nondisclosure agreements (NDAs). Preapproved, independent technical experts, bound by confidentiality and conflict screening, can verify compliance with data-use restrictions, including ensuring that government data remains properly separated from commercial training workflows, and assess system architectures without exposing proprietary information to agency staff or competitors. Similar to the AI-BOM proposal, a tiered framework would calibrate the scope and rigor of assessments to the system’s risk level. This proportional approach lowers costs and capacity burdens while maintaining rigorous oversight for high-impact or sensitive systems.

Some federal programs, such as FedRAMP and CMMC, already utilize accredited third-party assessors under confidentiality restrictions, and many technology providers are familiar with this model. Agencies can adapt these approaches to AI risks by establishing panels of cleared experts or contracting with specialized firms to perform AI system reviews, including verifying that data segregation requirements comply with contractual obligations.

iii. Red-Teaming and System Integrity Testing

“Red-teaming” is an independent security testing process designed to identify vulnerabilities in AI systems that could compromise procurement decisions. Testing assesses attempts at data poisoning, adversarial manipulation of decision-making, and system vulnerabilities that could distort evaluations or weaken controls. Unlike traditional software, AI systems continually evolve as data and models are updated. Model and data drift can cause unexpected and unwanted behaviors. Continuous testing and monitoring across the system lifecycle are therefore essential, not optional.

High-risk AI systems used in government procurement must undergo thorough testing before deployment and continuous monitoring during their operation. High-risk applications include systems that evaluate proposals, draft specifications, or process competitively sensitive information. Medium-risk applications such as market research tools or document review systems require baseline testing at deployment and periodic retesting. Even low-risk administrative applications warrant initial security validation to prevent mission creep where systems adopted for low-risk purposes later influence procurement decisions.

Centralized capabilities would improve testing quality and consistency. Instead of each agency developing its own AI security expertise, a centralized red-teaming service could pool specialists, publish common test standards, and offer on-demand support. Limited, agency-level implementations demonstrate feasibility; however, creating a centralized AI testing service demands institutional resources and sustained commitment. In the interim, agencies should coordinate through existing consortia and interagency forums, sharing test plans, findings, and templates to promote consistent practice without requiring new infrastructure.

B. Building Institutional Safeguards: Workforce Development and Oversight Mechanisms

The contract-based safeguards and immediate measures proposed above cannot function without qualified personnel to implement them and robust oversight mechanisms to ensure accountability. Their success depends entirely on acquisition professionals who understand AI-specific risks and can enforce governance terms against sophisticated contractors. Current workforce realities present significant challenges. Most federal acquisition professionals lack training to identify AI-specific risks, while traditional oversight mechanisms struggle to monitor AI-driven decisions due to the technology’s complexity and opacity. GAO has repeatedly warned that agencies face severe shortages of staff with AI expertise, contributing to increased reliance on contractors to lead AI projects and develop performance metrics. This dependence creates a dangerous cycle: as agencies depend more on AI tools and contractors, they lose capacity to oversee them.

Automation bias exacerbates this capacity crisis. When AI tools assume core acquisition tasks, such as drafting solicitations, identifying compliance issues, and evaluating proposals, junior professionals never develop expertise to review automated recommendations, while experienced acquisition professionals lose skills through disuse. The federal acquisition workforce is already “smaller, more junior, and. . . less experienced than a generation ago.” Ongoing disinvestment in training increases the risks posed by uncritical AI adoption, threatening the institutional capacity needed to implement the governance measures proposed in this article.

1. AI-Literate Acquisition Workforce

“Procurement is one of those things that few notice until it goes wrong.” Despite numerous warnings from GAO, statutory mandates, and policy acknowledgments of the need for training, the federal government is proceeding full speed ahead with an expansive AI rollout while simultaneously slashing training budgets. The successful acquisition and deployment of AI depends on people. A professionalized, AI-literate acquisition workforce is the most effective safeguard against procurement failure and contractor capture.

Acquisition professionals will increasingly engage in procurements where algorithms influence decisions, contractors make opaque technical claims, and safeguards depend on nuanced judgments that lack established procedures. Contract-based safeguards, such as anti-lock-in provisions, documentation requirements, and disclosure obligations, are effective only when implemented by personnel who understand their purpose and can enforce them in practice. Rules alone cannot deliver trustworthy AI procurement. The capacity to understand, question, and apply them ultimately determines success.

The Federal Acquisition Institute (FAI) and Defense Acquisition University (DAU) possess the infrastructure to develop AI-specific certification pathways. However, several limitations complicate implementation. Both organizations primarily provide standardized retail training, rather than promoting critical thinking and adaptive expertise, yet the issues in this article demand skills that standardized certification programs struggle to cultivate. Certification processes are inherently slow: developing curricula, securing approvals, and assessing competency take years. By the time that programs launch, the technological landscape has shifted.

Developing comprehensive AI curricula requires subject-matter expertise, curriculum development capacity, and delivery infrastructure—significant investments amid workforce reductions and overall government funding constraints. Effective training needs instructors who understand both AI technology and procurement law, a rare combination. AI procurement is happening now, creating a dangerous gap between when governance is needed and when the workforce can catch up.

These obstacles also highlight needs beyond training design: building sustainable pipelines of AI-literate talent, implementing policies that retain experienced professionals, and strengthening the broader procurement community. The DoD’s Defense Civilian Training Corps (DCTC) model, which includes recruiting undergraduates, providing scholarships, delivering structured curricula, and offering internships, should be expanded to civilian agencies. However, pipeline development addresses only talent entry. Equally critical is talent retention, particularly under current fiscal constraints and workforce reductions that make government service less attractive to skilled professionals. Retention requires more than just competitive salaries. Agencies must create clear merit-based career paths where AI procurement expertise leads to advancement. Nonmonetary incentives are critical: ongoing professional development, recognition programs that value specialized skills, and meaningful work-life balance policies must be put in place. Without effective retention strategies, agencies will train personnel only to lose them to higher-paying opportunities in the private sector.

Supporting professional networks through organizations such as the National Contract Management Association (NCMA) and the American Council for Technology and Industry Advisory Council (ACT-IAC) becomes essential in the AI era. NCMA’s AI Community of Practice and ACT-IAC’s AI Working Group promote collaboration between government and industry on responsible AI use, offering a platform for sharing lessons learned and emerging best practices. These communities offer the informal knowledge sharing and mentorship that formal training cannot replicate.

The combination of updated institutional training infrastructure (FAI/DAU), formal pipelines (DCTC-model programs), retention efforts, and supporting professional communities forms the ecosystem necessary for sustained AI procurement capacity. Neglecting any component undermines the others and makes agencies dependent on contractors for expertise that they should possess internally.

2. Interim Measures While Building Long-Term Capacity

The workforce development framework faces an unavoidable timing gap. Building the education and professional development infrastructure described above will take years; yet agencies are already procuring AI systems today. To bridge this gap, agencies should establish technical advisory panels immediately, drawing on Chief AI Officers (CAIOs), IT security staff, and AI specialists. These panels cannot replace AI-literate COs, but they can translate technical concepts, evaluate contractor claims, and flag risks that generalists may overlook. Moreover, as recommended by OMB Memorandum M-25-22, agencies should also develop AI procurement guides that include model solicitation language, evaluation criteria, and contract terms, allowing less-experienced personnel to implement baseline safeguards as capacity grows.

These guidance mechanisms support informed decision-making, but they leave a fundamental gap: no government-wide rule requires documented human review before AI outputs influence agency decisions. Although GAO recommends that agencies develop procedures for human oversight of AI systems and automated decisions, most agencies lack enforceable policies requiring such a review. Recent OMB guidance for LLM procurements emphasizes human accountability, but it does not create a uniform pre-decision review mandate.

Beyond the system oversight measures previously addressed, agencies should also implement internal directives for the high-risk procurement AI—particularly proposal evaluation and source selection—that require documented human review before outputs impact government decision-making. Documentation should identify the AI system, outputs generated, the reviewer’s analysis, and the rationale for accepting or overriding recommendations. Although this patchwork solution lacks the uniformity of a government-wide policy, it can provide a temporary but urgently needed accountability mechanism within current policy constraints.

C. AI Integrity Advocates and Enhanced Oversight Mechanisms

Agencies can build AI procurement oversight using existing institutional models without requiring new regulatory authority. Drawing on the Competition Advocate structure in FAR 6.5, agencies could designate AI Integrity Advocates: officials responsible for reviewing high-risk AI acquisitions to reveal corruption and integrity vulnerabilities. The advocates would examine governance risks, including OCIs embedded in AI systems, lock-in risks, bias and transparency failures, and security vulnerabilities. This approach does not require rulemaking, as agencies possess the authority necessary to establish internal oversight positions through agency directives or acquisition policies. The position would serve as an internal checkpoint, reviewing procurements for elevated risk.

Beyond this role, AI Integrity Advocates should be incorporated into the cross-functional teams mandated by OMB Memorandum M-25-22. These teams, which include COs, program managers, technical experts, and legal counsel, oversee AI acquisition throughout its lifecycle. Including AI Integrity Advocates adds an integrity perspective alongside cost, schedule, and performance considerations from planning through closeout. This integration helps prevent fragmented decision-making, where technical staff focus on capability, COs on compliance, but no one consistently examines integrity and corruption risks.

For this model to work effectively, AI Integrity Advocates need institutional support and independence. Reporting directly to senior agency leadership, rather than through procurement channels, allows advocates to challenge acquisition strategies without risking their careers. The role requires a combination of skills: sufficient procurement knowledge to understand acquisition processes, technical literacy to assess AI systems, and institutional credibility to challenge program offices pushing for rapid AI deployment.

Complementary IG oversight enhances this framework. Agencies should assign at least one IG auditor or investigator with AI-specific expertise to provide specialized oversight of AI procurements, adding a second review layer beyond the Advocate function. The designated IG can examine whether agencies have properly addressed integrity risks and if AI Integrity Advocates have served as effective oversight personnel or have been marginalized. They can also receive and investigate whistleblower complaints alleging procurement irregularities, data misuse, or other integrity violations related to AI.

Government-wide coordination through the Council of the Inspectors General on Integrity and Efficiency (CIGIE) has played a key role in fostering cross-agency learning among the IG community. CIGIE’s Technology Committee supports effective technology-related audits and investigations by IGs, included AI-procurement oversight within its current scope. The Committee has precedent for managing emerging technology procurement risks. In 2011, it issued a memorandum addressing cloud computing contracting concerns across federal agencies.

The Technology Committee could enhance AI procurement integrity by assembling qualified AI-literate auditors, developing standard audit methods, sharing investigative techniques and red flags, and coordinating responses to emerging threats. This leverages existing institutional capacity rather than building separate structures, enabling immediate coordination while maintaining the option to create a dedicated AI subcommittee if future workload demands it.

VI. Conclusion: Governing AI in a Deregulatory Era

Comprehensive reform of federal AI procurement policy is unlikely to materialize this decade. The Trump administration’s deregulatory agenda has closed regulatory pathways, and legislative action remains doubtful in the near term, leaving “regulation by contract” as the primary—and profoundly inadequate —mechanism for embedding integrity safeguards.

This article has demonstrated how current AI policies lead to “buying blind”—agencies procure opaque AI systems, accept contractor-imposed terms, and deploy these tools in high-stakes environments without the institutional capacity to detect integrity risks or corruption. These vulnerabilities are the foreseeable consequence of current procurement policy choices, shaped by the unfounded belief that governance stifles innovation.

The evidence refutes this assumption. Governance encourages competition by avoiding contractor lock-in and lowering barriers for new entrants. Governance safeguards innovation by promoting fair processes that are resistant to manipulation and conflicts of interest. Governance promotes AI adoption by building institutional trust and reducing integrity failures that could trigger public backlash and a broader curtailment of technological deployment.

The federal government now stands at a decision point. Either agencies embed the minimal safeguards still available within existing procurement pathways, or reform will come later, after corruption has already compromised procurement integrity. Sustainable AI governance remains achievable, but only if policymakers acknowledge that today’s weak safeguards are a consequence of policy design, not technological inevitability. When oversight is treated as secondary to innovation, the procurement system itself becomes the risk. The United States now confronts a stark reality: its AI procurement infrastructure is increasingly susceptible to corruption risks that, once embedded, will be exceedingly difficult to undo.

The window for effective governance remains open, but it is closing rapidly.


Endnotes

Named provisions

Introduction Abstract

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
ABA
Instrument
Notice
Legal weight
Non-binding
Stage
Final
Change scope
Minor

Who this affects

Applies to
Government agencies Technology companies
Industry sector
5112 Software & Technology 9261 Government Contracting
Activity scope
Government Contracting Technology Procurement
Geographic scope
United States US

Taxonomy

Primary area
Government Contracting
Operational domain
Compliance
Topics
Artificial Intelligence Anti-Money Laundering Consumer Protection

Get Courts & Legal alerts

Weekly digest. AI-summarized, no noise.

Free. Unsubscribe anytime.

Get alerts for this source

We'll email you when ABA Legal News publishes new changes.

Optional. Personalizes your daily digest.

Free. Unsubscribe anytime.