
GAO Report on Federal AI Use and Privacy Risks

GAO WatchBlog
Published March 26th, 2026
Detected March 26th, 2026

Summary

The GAO has released a report highlighting increased AI use by federal agencies and associated privacy risks. The report indicates that current OMB guidance lacks sufficient direction on transparency and risk assessment for AI systems, recommending actions for improvement.

What changed

The GAO's new report, published March 26, 2026, identifies significant increases in the federal government's use of Artificial Intelligence (AI) for efficiency and customer service, citing examples like the IRS and OPM. However, the report emphasizes growing concerns regarding privacy risks associated with this AI adoption. Specifically, the GAO found that existing guidance from the Office of Management and Budget (OMB) is insufficient, failing to provide adequate direction on transparency regarding AI use and sensitive data, as well as lacking clear requirements for assessing AI system privacy risks.

This analysis suggests that federal agencies need more robust frameworks to manage the privacy implications of AI. The GAO recommends that OMB take specific actions to address these gaps, which could lead to revised guidance or new mandates for agencies. Compliance officers should monitor for forthcoming OMB directives or agency policy updates stemming from this report, as they may need to implement new procedures for AI system risk assessments and enhance transparency measures related to data usage.

What to do next

  1. Review GAO report GAO-26-107681 for detailed findings and recommendations.
  2. Monitor for forthcoming OMB guidance or directives related to federal AI use and privacy.
  3. Assess current agency AI systems for potential privacy risks and transparency gaps.

Source document (simplified)

Posted on March 26, 2026

The federal government is turning to artificial intelligence (AI) as a tool for creating efficiencies, as well as improving customer service. For example, the IRS is using AI chat- and voice-bots to better answer taxpayers’ questions. And if you’re looking for a government job, the Office of Personnel Management (OPM) is using AI to better connect candidates with employment opportunities that match their skill sets.

While these uses may help the public find answers to questions more quickly, there are concerns about how the federal government’s use of AI could affect people’s privacy.

What are these concerns and what’s being done about them? Today’s WatchBlog post looks at our new report.

What are the risks to privacy when using AI?

The federal government collects a lot of sensitive personal data from people to manage programs that directly interact with the public—everything from Social Security to student loans. This includes information that may be publicly available already, such as your address and phone number. But it also includes information you wouldn’t want misused, like your bank account or tax information.

The customer service applications of AI (chat bots, for example) may not have access to your non-public personal information. But other AI uses may. And this has led to concerns about data breaches. For example, last year, school districts that used AI to monitor school-issued devices for potential threats accidentally revealed the private data of thousands of students to reporters. This breach occurred because that data was not protected by the school districts.

How does AI use raise privacy concerns? We gathered experts to hear their concerns about both the risks AI presents and the challenges in addressing them. These experts were from government, industry, and the nonprofit sector. Here’s what they told us:

  • AI can make it easier to cross-reference information from multiple datasets, which may reveal sensitive personal information about people that once was anonymous. This can happen even if these datasets don’t explicitly include sensitive information because AI applications can extrapolate information.

  • AI can repurpose data. This makes it possible for government agencies, businesses, and other organizations to use personal data for purposes other than the original intent. For example, businesses could use information from tax returns to market products at specific prices.

  • AI can be used, intentionally or unintentionally, to generate false information, such as deepfakes, or inaccurate outputs such as hallucinations.
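The first expert concern, re-identification by cross-referencing datasets, can be illustrated with a toy sketch. Everything here is hypothetical (the records, field names, and the `link` helper are invented for illustration): joining an "anonymized" dataset with a public one on shared quasi-identifiers such as ZIP code and birth year can tie sensitive records back to named individuals, even though neither dataset alone contains both the name and the sensitive attribute.

```python
# Hypothetical illustration of re-identification by dataset linkage.
# All records and field names are invented for this example.

# "Anonymized" records: direct identifiers removed, but quasi-identifiers remain.
anonymized = [
    {"zip": "20500", "birth_year": 1980, "diagnosis": "diabetes"},
    {"zip": "20001", "birth_year": 1975, "diagnosis": "asthma"},
]

# Publicly available records (e.g., a voter roll) that include names.
public = [
    {"name": "A. Example", "zip": "20500", "birth_year": 1980},
    {"name": "B. Sample", "zip": "20001", "birth_year": 1975},
]

def link(anon, pub):
    """Join the two datasets on the shared quasi-identifiers."""
    index = {(p["zip"], p["birth_year"]): p["name"] for p in pub}
    matches = []
    for record in anon:
        name = index.get((record["zip"], record["birth_year"]))
        if name:
            matches.append({"name": name, "diagnosis": record["diagnosis"]})
    return matches

# Each "anonymous" record is now tied to a named individual.
reidentified = link(anonymized, public)
```

This is the extrapolation risk the experts describe: neither input dataset "explicitly" links a name to a diagnosis, yet the join recovers that link.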

What’s being done to protect privacy, and why is that not enough?

The federal government is aware of the risks AI poses to people's privacy and is taking action. OMB plays an important role in overseeing federal agencies’ use of AI. As part of this effort, OMB has issued guidance that gives agencies some direction in protecting privacy when using AI. But when we looked at this guidance, we found that it doesn’t give agencies enough direction on how to be transparent about their use of AI and sensitive data. For example, the guidance doesn’t provide information on how agencies should assess privacy risks for AI systems. These types of assessments ensure that agencies consider these risks when using sensitive data with AI and can be used to provide transparency in data use.

The guidance also doesn't identify best practices for addressing AI privacy risks. It also doesn’t identify technology or other tools that can enhance privacy protections when implementing AI. Our report recommends actions OMB could take to address these concerns.

In addition to these risks, action is needed to address challenges that make protecting privacy more difficult. For example, separating out sensitive data to protect it within the vast datasets AI relies on is itself a challenge. Experts also told us that even when the federal government or other organizations have protections in place, they often lack ways to measure how well those protections are working.
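One established way to put a number on an anonymization protection, offered here as an illustrative sketch rather than anything the report prescribes, is k-anonymity: every combination of quasi-identifier values should be shared by at least k records, so no record is unique on those fields. The records and field names below are hypothetical.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k-anonymity level of a dataset: the size of the
    smallest group of records sharing the same quasi-identifier values.
    A result of 1 means at least one record is uniquely identifiable."""
    groups = Counter(
        tuple(record[q] for q in quasi_identifiers) for record in records
    )
    return min(groups.values())

# Hypothetical generalized records (ZIP truncated, ages banded).
records = [
    {"zip": "205**", "age_band": "30-39"},
    {"zip": "205**", "age_band": "30-39"},
    {"zip": "200**", "age_band": "40-49"},
]

# The third record is unique on (zip, age_band), so k = 1 here.
k = k_anonymity(records, ["zip", "age_band"])
```

A metric like this gives agencies a concrete, repeatable check on whether a protection is working, the kind of measurement the experts said is often missing.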

As federal agencies and other organizations have increased their use of AI, more work is needed to protect people's sensitive personal information.

Learn more about these issues and our recommendations to OMB on how to address them by reading our new report.

Marisol Cruz Cain, Director, Information Technology and Cybersecurity, cruzcainm@gao.gov


Related Products

GAO-26-107681 Published: Mar 26, 2026 Publicly Released: Mar 26, 2026

GAO's mission is to provide Congress with fact-based, nonpartisan information that can help improve federal government performance and ensure accountability for the benefit of the American people. GAO launched its WatchBlog in January 2014, as part of its continuing effort to reach its audiences—Congress and the American people—where they are currently looking for information.

The blog format allows GAO to provide a little more context about its work than it can offer on its other social media platforms. Posts will tie GAO work to current events and the news; show how GAO’s work is affecting agencies or legislation; highlight reports, testimonies, and issue areas where GAO does work; and provide information about GAO itself, among other things.

Please send any feedback on GAO's WatchBlog to blog@gao.gov.

Source

Analysis generated by AI. Source diff and links are from the original.

Classification

Agency
GAO
Published
March 26th, 2026
Instrument
Guidance
Legal weight
Non-binding
Stage
Final
Change scope
Substantive
Document ID
GAO-26-107681

Who this affects

Applies to
Government agencies
Industry sector
9211 Government & Public Administration
Activity scope
AI Implementation; Data Privacy Management
Geographic scope
United States (US)

Taxonomy

Primary area
Artificial Intelligence
Operational domain
Compliance
Compliance frameworks
NIST CSF
Topics
Data Privacy; Government Operations
