Artificial intelligence compliance plan

Our AI compliance plan describes how we comply with Office of Management and Budget (OMB) Memorandums M-25-21 and M-25-22.

September 2025

General

Describe any planned or current efforts within your agency to update any existing internal AI principles, guidelines, or policy to ensure consistency with M-25-21 and M-25-22.

By December 29, 2025, GSA plans to release 2185.2 CIO, Accelerating Responsible Use of Artificial Intelligence at GSA. This directive will be based on the mandates and guidance set forth in OMB M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust; OMB M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government; and other Federal mandates.

GSA’s directive will outline policies for the responsible, efficient, and accelerated adoption of AI technologies at GSA. It will address key issues related to AI adoption within the GSA network, including establishing policies for controlled access and responsible, efficient AI use, as well as outlining procedures for the assessment, procurement, use, monitoring, and governance of AI systems. A critical focus will be ensuring compliance with existing security, privacy, ethics, and legal frameworks, while also accelerating Federal AI use in line with directives for innovation, governance, and public trust.

The directive will also aim to remove barriers to AI adoption, empower AI leaders, and safeguard individual rights and liberties, especially concerning high-impact AI. Furthermore, it will address risk management through regular assessments, prioritize appropriate AI uses to improve agency missions, ensure adequate data infrastructure and curation, assess and plan for AI workforce needs, and promote interagency coordination and efficient AI acquisition practices.

GSA currently has in place 2185.1A CIO, Use of Artificial Intelligence (AI) at GSA, which established governing policies for the controlled access and responsible use of AI technologies and platforms within GSA. This policy replaced CIO IL-23-01, which focused on generative AI, and expanded its scope to cover broader AI governance. The 2185.1A CIO directive refines and updates the previous directive to reflect evolving guidance and to accelerate the responsible adoption of AI within GSA, with a particular emphasis on classifying and managing different levels of AI impact.

GSA will conduct annual reviews and assessments of 2185.2 CIO to ensure the mandates set forth in OMB M-25-21 and M-25-22 are maintained as AI capabilities continue to evolve. The Chief Artificial Intelligence Officer (CAIO) will lead these reviews with the support of the Evidence-Based Data Governance Executive (EDGE) Board and AI Safety Team. Updates to 2185.2 CIO and other relevant policies will be released as needed and are not restricted to an annual update.

AI Governance Bodies

Identify the offices that are represented on your agency’s AI governance body.

GSA’s AI governance structure is composed of two bodies: the EDGE Board and the AI Safety Team. 

The EDGE Board serves as a decisional body that oversees and coordinates GSA’s activities related to AI and data governance. This includes setting the overall vision and goals for AI development and use within the agency, emphasizing risk management, ethical guidelines, compliance with legal standards, and alignment with GSA’s mission and values, particularly Executive Order 14179 and OMB M-25-21 and M-25-22. The EDGE Board also sets, updates, and funds GSA’s data priorities, ensuring the agency fully uses data as a key asset and enforces data governance laws, policies, and regulations as mandated by the Foundations for Evidence-Based Policymaking Act.

The EDGE Board, which is chaired by GSA’s Deputy Administrator and co-chaired by the Chief Data Officer (CDO)/CAIO, reports to the GSA Administrator. The Board comprises executive leadership from GSA’s Services and Staff Offices, as follows:

  • Chief Administrative Services Officer
  • Chief Acquisition Officer
  • Office of Civil Rights (OCR) Associate Administrator
  • Chief Financial Officer
  • Chief Information Officer
  • Chief Information Security Officer
  • Chief Privacy Officer
  • Chief Technology Officer
  • Chief Human Capital Officer (CHCO)
  • Evaluation Officer
  • Federal Acquisition Service (FAS) Commissioner or Deputy Commissioner
  • General Counsel
  • Office of Government-wide Policy (OGP) Associate Administrator
  • Office of Mission Assurance Associate Administrator
  • Office of Small and Disadvantaged Business Utilization (OSDBU) Associate Administrator
  • Office of Strategic Communication (OSC) Associate Administrator
  • Performance Improvement Officer
  • Public Buildings Service (PBS) Deputy Commissioner
  • Statistical Official
  • Technology Transformation Services (TTS) Director

The EDGE Board oversees the AI Safety Team, a cross-functional working group established to facilitate the effective integration and utilization of AI technologies across the organization. Its purpose is to ensure that AI initiatives align with organizational goals, promote accessibility, enhance content quality, leverage research and data effectively, provide adequate training, optimize design and user experience, and are supported by robust technology and infrastructure. The team identifies opportunities for AI integration, develops public AI strategies, and establishes guidelines for ethical and responsible AI use, including evaluating AI technologies, reusing existing AI resources, investing in American-developed AI products, and fostering AI talent within GSA. Additionally, the team provides AI training, collaborates with stakeholders, sets expectations for workforce AI use, monitors AI project implementation and impact, and ensures compliance with relevant laws and regulations, while regularly reporting progress to GSA leadership.

The AI Safety Team comprises representatives from key GSA offices, providing expertise and ensuring comprehensive consideration of AI integration across various aspects of GSA’s operations. These offices include:

  • Office of GSA IT
  • FAS
  • OCR
  • OGP
  • Office of Human Resources Management (OHRM)
  • OSDBU
  • OSC
  • Office of the Chief Financial Officer (OCFO)
  • Office of the General Counsel (OGC)
  • PBS

Describe how, if at all, your agency’s AI governance body plans to consult with external experts as appropriate and consistent with applicable law.

GSA has collaborated and will continue to collaborate with external experts, including academic institutions, State governments, and other Federal agencies, to develop training programs and benchmark its updated AI policies against best practices in the field. Members of the EDGE Board and AI Safety Team regularly engage with interagency groups and councils, including the CDO Council, the CAIO Council, the CHCO Council, the U.S. Office of Personnel Management, the AI Talent Task Force, and the three interagency working groups coordinated by the CAIO Council on generative AI, AI-related acquisitions, and AI risk management.

External consultation will continue to play a critical role in both governance and safety efforts. The EDGE Board plans to consult with external experts as appropriate and consistent with applicable law. This consultation may include input from Federally Funded Research and Development Centers, academic institutions, think tanks, and industry experts to stay informed about AI’s latest trends, risks, and best practices. Civil society organizations, labor unions, and similar groups may also be consulted to ensure that AI systems meet public interest standards and consider workforce impacts.

AI Use Case Inventories

Describe your agency’s process for soliciting and collecting AI use cases across all sub-agencies, components, or bureaus for the inventory. In particular, address how your agency plans to ensure your inventory is comprehensive, complete, and encompasses updates to existing use cases.

GSA’s process for soliciting and collecting AI use cases across the organization is led by the CAIO. The CAIO is responsible for ensuring that the AI use case inventory is comprehensive and up to date, with the support of the AI Safety Team, various program offices, and the Office of the Chief Information Officer (OCIO). 

The process begins with the issuance of an annual request to all offices, requiring them to submit information on current and planned AI systems. Each office must identify all AI applications, regardless of the size or scope of the system. The CAIO provides and maintains standardized guidelines and templates for reporting in compliance with OMB standards. This centralized submission process is used to collect and manage AI use cases, allowing offices to submit their entries in a consistent and structured manner. All offices participate in this process and regularly update their entries to reflect any changes, such as new AI capabilities, updates to existing systems, or the decommissioning of older technologies.
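
For illustration only, a structured submission under such a template might resemble the following sketch; the field names and validation below are assumptions, not GSA’s actual reporting schema, though they track the kinds of attributes OMB inventory guidance asks agencies to capture.

    # Hypothetical AI use case inventory record (illustrative only; the
    # field names are assumptions, not GSA's actual OMB-aligned template).
    use_case_record = {
        "use_case_id": "GSA-2025-0001",   # hypothetical identifier
        "name": "Contract document summarization",
        "sponsoring_office": "FAS",
        "stage": "in_production",         # e.g., planned | pilot | in_production | retired
        "is_high_impact": False,          # per the M-25-21 Section 4.a definition
        "summary": "Summarizes inbound contract documents for reviewers.",
        "last_updated": "2025-09-15",
    }

    REQUIRED_FIELDS = {"use_case_id", "name", "sponsoring_office", "stage",
                       "is_high_impact", "summary", "last_updated"}

    def missing_fields(record: dict) -> set[str]:
        """Fields a submitting office still needs to supply."""
        return REQUIRED_FIELDS - record.keys()

    assert not missing_fields(use_case_record)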

The CAIO oversees periodic reviews and audits to maintain the inventory’s accuracy and completeness. The CAIO also coordinates with the OCIO so that existing processes, such as Authorizations to Operate, Federal Information Technology Acquisition Reform Act reviews, and new software requests, can identify AI instances across the enterprise. Together, these steps keep GSA’s AI inventory accurate, comprehensive, and reflective of the agency’s ongoing use of AI technology, supporting the agency’s governance and decision-making processes regarding AI deployment.

Reporting on AI Use Cases Not Subject to Inventory

Describe your agency’s process for soliciting and collecting AI use cases that meet the criteria for exclusion from being individually inventoried.

GSA has not identified any AI use cases that are not subject to inventory.

Removing Barriers to the Responsible Use of AI

Describe any barriers to the responsible use of AI that your agency has identified, as well as any steps your agency has taken (or plans to take) to mitigate or remove these identified barriers.

GSA has identified several potential barriers to the responsible use of AI, including the procurement of AI solutions and access to high-quality data products, scalable infrastructure, and AI models and libraries. GSA has established an enterprise data platform, the Enterprise Data Solution (EDS), which includes an enterprise data catalog, scalable compute infrastructure, analytical tooling, and AI/ML systems and services. This platform allows programs to store, curate, and productize their data holdings for analytical purposes, as well as disseminate the information products generated via hosted services. EDS also provides sandbox capabilities where AI tools can be safely tested and rapid prototyping can occur.

GSA supports several pilot projects that assess the capabilities and viability of AI products for specific use cases, including the ease with which AI infrastructure and products may be set up, and how AI tools and services can be leveraged to improve mission outcomes for agencies. These pilot projects are designed to help GSA better understand, manage, and provide guidance for internal and other agency use regarding AI technologies for future implementation.

GSA has leveraged sandboxes to test AI capabilities for chatbots, IT security enhancements, custom application development, and general productivity assessments, allowing staff to evaluate AI’s potential while ensuring security and compliance.

To support the availability of AI tooling and infrastructure, the FedRAMP program has established a framework for prioritizing emerging technologies (ETs) for FedRAMP authorization, which covers technologies listed in the Office of Science and Technology Policy’s Critical and Emerging Technologies list. This framework enables routine and consistent prioritization of the most critical cloud-relevant ETs needed by Federal agencies. The prioritization governs FedRAMP’s own work and review processes; it does not address how sponsoring agencies manage their internal priorities.

Identify whether your agency has developed (or is in the process of developing) internal guidance for the use of generative AI.

GSA has developed internal guidance for the use of generative AI and has made it available to employees on an internal website. This guidance includes the safeguards and oversight mechanisms necessary for responsible use without posing undue risk. The EDGE Board and AI Safety Team provide oversight by reviewing and dispositioning AI use cases, ensuring compliance with ethical standards, data privacy, and security protocols. The guidance also requires that generative AI tools be used under controlled conditions consistent with the standards established in OMB M-25-21 and M-25-22, as enforced and overseen by the AI Safety Team. GSA is evaluating continuous monitoring and evaluation processes and tools for generative AI.

AI Talent

Describe any planned or in-progress initiatives from your agency to increase AI talent. In particular, reference any hiring authorities that your agency is leveraging, describe any AI focused teams that your agency is establishing or expanding, and identify the skillsets or skill levels that your agency is looking to attract. If your agency has designated an AI Talent Lead, identify which office they are assigned to.

GSA is taking a multi-pronged approach to strengthen its AI workforce and infrastructure. Through targeted hiring strategies, the formation of specialized teams, and the development of centralized platforms, GSA is positioning itself as a leader in responsible AI adoption across the Federal Government. These efforts are designed to attract top-tier talent, ensure ethical and secure AI deployment, and support other agencies in scaling their AI capabilities.

  • AI Talent Recruitment and Hiring Authorities: To meet the growing demand for AI expertise, in FY24 GSA launched the AI Talent Surge, a strategic initiative aimed at rapidly expanding its AI workforce. The agency utilized flexible hiring authorities such as Direct Hire Authority, which allowed for expedited recruitment of highly qualified candidates, and the Pathways Programs, which targeted students and recent graduates for entry-level roles.
  • AI-Focused Teams and Offices: GSA has established several key teams to guide its AI strategy and implementation. At the helm is the CAIO, who also serves as the agency’s CDO. The EDGE Board, chaired by the Deputy Administrator and co-chaired by the CDO/CAIO, ensures that executive-level oversight is in place for all major AI initiatives. The AI Safety Team, a technically focused group, is responsible for conducting risk assessments, ensuring compliance with ethical standards, and evaluating the safety of AI applications. GSA’s TTS and OCIO are also expanding their AI-related functions, supporting both internal innovation and cross-agency collaboration.
  • Infrastructure and Platforms Supporting AI Talent: GSA’s launch of USAi.Gov marks a significant step in providing Federal agencies the tools and infrastructure needed to experiment with and adopt generative AI technologies. The platform offers capabilities such as chatbot development, document summarization, and code generation, while also supporting model evaluation for bias and performance. USAi.Gov includes dashboards for workforce upskilling and analytics, helping agencies monitor and improve their AI readiness. This initiative is part of GSA’s broader role as a Federal AI enabler, offering secure, scalable, and shared services that facilitate responsible AI deployment across Government.

If applicable, describe your agency’s plans to provide any resources or training to develop AI talent internally and increase AI training opportunities for Federal employees.

GSA has made AI-related training available through online learning platforms to develop AI talent internally. The agency supports the AI Community of Excellence (CoE), which serves as a collaborative space for sharing knowledge and best practices, and is leading the AI Talent Surge effort to attract and retain skilled professionals to advance AI capabilities across the agency.

GSA also leverages sandboxes to test AI capabilities, including IT security enhancements, custom application development, and general productivity assessments, allowing staff to evaluate AI’s potential while ensuring security and compliance. GSA is committed to developing AI talent internally and increasing AI training opportunities for Federal employees. Role-based AI training tracks are accessible through online learning platforms, providing employees at various levels the opportunity to gain relevant skills. These tracks include foundational courses for employees, intermediate training for technical roles, and advanced courses for AI practitioners focusing on development, deployment, and ethical considerations.

Additionally, GSA leads a Government-wide initiative, called the AI Training Series, through its CoE and the Artificial Intelligence Community of Practice (AI CoP). It is designed to equip Federal employees with the knowledge and skills needed to responsibly and effectively engage with AI technologies in their roles. The program supports compliance with the AI Training Act and relevant Executive Orders, offering structured learning across three specialized tracks: Technical Track, Acquisition Track, and Leadership and Policy Track. All sessions are designed to be accessible to Federal employees regardless of technical background. Recordings and materials are made available through platforms like USA Learning and the Federal Acquisition Institute. Through this initiative, GSA plays a central role in modernizing Federal workforce capabilities, fostering cross-agency collaboration, and promoting responsible AI adoption across Government.

AI Sharing and Collaboration

Describe your agency’s process for ensuring that custom-developed AI code—including models and model weights—for AI applications in active use is shared consistent with Section 2.b of M-25-21.

All custom-developed AI code, including models and model weights for AI applications in active use, is shared in compliance with OMB M-25-21, Section 2.b (Sharing of Agency Data and AI Assets) by adhering to established processes for transparency and open access. This includes reviewing AI use case applications through the AI Safety Team and other relevant offices to ensure that code and models meet data security, privacy, and ethical standards before being shared. This content is disseminated via sharing platforms such as open.gsa.gov and data.gov.
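
As a hedged illustration, a published entry for a shared asset might carry metadata along these lines; the fields below are assumptions loosely modeled on the machine-readable code inventories agencies post to sites like open.gsa.gov, not GSA’s actual publication schema.

    # Hypothetical metadata for a shared AI asset (illustrative only;
    # fields are assumptions, not GSA's actual publication schema).
    shared_asset = {
        "name": "example-document-classifier",   # hypothetical project
        "description": "Custom-developed AI code shared under M-25-21 Section 2.b.",
        "assets": ["source code", "model", "model weights"],
        "repository_url": "https://open.gsa.gov/",  # landing page; actual repo varies
        "reviews_completed": ["AI Safety Team", "security", "privacy", "ethics"],
    }

    def cleared_for_release(asset: dict) -> bool:
        """Sharing happens only after the pre-release reviews described above."""
        required_reviews = {"AI Safety Team", "security", "privacy", "ethics"}
        return required_reviews <= set(asset["reviews_completed"])

    assert cleared_for_release(shared_asset)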

Elaborate on your agency’s efforts to encourage or incentivize the sharing of code, models, and data with the public. Include a description of the relevant offices that are responsible for coordinating this work.

To advance the responsible and effective use of AI across the Federal Government, GSA has launched a suite of initiatives designed to foster collaboration, streamline adoption, and ensure ethical governance. These efforts reflect GSA’s commitment to building a unified, secure, and scalable AI ecosystem that supports agency missions while promoting transparency and innovation. From strategic frameworks and interagency communities to shared platforms and internal tools, GSA is positioning itself as a central enabler of AI transformation in the public sector.

  • AI Strategy and Governance: GSA has established a robust framework to guide Federal agencies in the ethical and effective use of artificial intelligence. This includes developing strategic use cases aligned with agency missions, offering governance models to ensure compliance with Federal standards, and providing procurement guidance to mitigate risks. Through these efforts, GSA fosters a culture of responsible AI adoption while encouraging interagency collaboration and knowledge sharing.
  • AI CoP: The AI CoP serves as a collaborative hub for Federal employees, contractors, and academic partners to exchange knowledge and best practices in AI. Through working groups, shared resources, and leadership from an executive steering committee, the CoP supports ongoing learning and cross-agency engagement. It complements GSA’s governance efforts by cultivating a community that drives responsible AI adoption from the ground up.
  • USAi.Gov Platform: USAi.Gov is a centralized platform launched by GSA to streamline AI adoption across Federal agencies. It offers a secure environment for evaluating diverse AI tools, including chatbots and summarization models, from leading providers like OpenAI and Google. Agencies can test solutions before procurement, access performance dashboards, and ensure alignment with Federal AI policies. This shared services model reduces barriers to entry and promotes scalable, trustworthy AI deployment.
  • GSA’s Use of USAi: GSA has developed an AI tool to enhance productivity and safety in Federal workflows. Featuring a chatbot, API integrations, and an admin console, it enables secure and compliant AI use tailored to agency needs. Built with staff input and privacy in mind, GSA’s AI tool is now being considered for broader Federal deployment as a shared service, supporting tenant-based models and continuous improvement through user feedback.

To support the Federal Government’s growing interest in AI, GSA is actively organizing and participating in a range of events that promote collaboration, innovation, and responsible AI adoption. These gatherings—ranging from monthly community meetings and national expos to policy symposia and hackathons—offer valuable opportunities for agencies, industry leaders, and academic partners to share knowledge, explore ETs, and shape the future of AI in Government.

  • AI CoP Monthly Meetings: Held virtually on the second Thursday of each month, these meetings provide a space for Federal stakeholders to engage with emerging AI topics and showcase agency innovations. With guest speakers and collaborative sessions, the events reinforce the CoP’s mission to foster responsible AI use and interagency learning.
  • AI & Big Data Expo North America 2025: Held at the Santa Clara Convention Center, the AI & Big Data Expo North America is a major industry event where GSA participates as a partner organization. With over 250 speakers and 200 exhibitors, the expo covers cutting-edge topics such as responsible AI, cybersecurity, data monetization, and AIOps. The event attracts a diverse audience of technologists, policymakers, and innovators, offering GSA members complimentary passes and opportunities to engage with leading AI solutions and strategies.
  • Federal AI Hackathon (July 31, 2024): The Federal AI Hackathon in 2024 challenged participants in Atlanta, New York, and Washington, DC, to reimagine Federal websites for AI optimization. Teams leveraged large language models and retrieval-augmented generation to build intelligent search engines and user interfaces that enhance transparency and efficiency. Winning solutions demonstrated innovative approaches to improving digital Government services, showcasing the potential of AI to transform public engagement and accessibility.

Harmonization of Artificial Intelligence Requirements

Explain any steps your agency has taken to document and share best practices regarding AI governance, innovation, or risk management. Identify how these resources are shared and maintained across the agency.

GSA’s approach to AI is grounded in a commitment to transparency and institutional learning. To support this, the agency has prioritized the development of mechanisms that capture insights from AI initiatives and make them accessible across teams and stakeholders. These efforts reflect a broader strategy to ensure AI implementations are not only effective but also informed by evidence, experience, and evolving standards.

  • AI Use Case Inventory: To ensure transparency and continuous improvement in its AI initiatives, GSA has developed robust mechanisms for documenting and disseminating best practices. Central to this effort is the AI Use Case Inventory, a comprehensive catalog of AI applications deployed across the agency. Maintained by the CAIO and the AI Safety Team, this inventory serves as a living resource that tracks the life cycle of AI projects—from ideation to implementation. It enables GSA to monitor compliance, assess performance, and identify opportunities for replication or refinement. By systematically documenting each use case, GSA fosters a culture of learning and accountability, ensuring that successful strategies are shared and that risks are proactively managed.
  • AI Guide for Government: Complementing this inventory is the AI Guide for Government, a resource developed by GSA’s AI CoE. This guide is tailored for Federal decision-makers and practitioners, offering practical frameworks, case studies, and implementation strategies for responsible AI adoption. It distills complex technical and ethical considerations into actionable insights, making it easier for agencies to navigate the challenges of AI deployment. The guide is regularly updated to reflect emerging trends, regulatory changes, and lessons learned from across the Federal landscape, reinforcing GSA’s commitment to knowledge sharing and continuous improvement.

GSA has responded to the need for infrastructure, collaboration, and sustained support by building a network of resources and communities that enable safe experimentation, scalable deployment, and cross-functional engagement. These foundational elements help ensure that AI efforts are aligned with agency goals while remaining adaptable to technological and regulatory change.

  • AI CoE: GSA has established a multi-layered infrastructure to support the sharing and maintenance of AI resources across the agency. At the heart of this ecosystem is the AI CoE, which provides strategic guidance, technical support, and scalable infrastructure to Federal agencies pursuing AI initiatives. The CoE offers sandbox environments, analytical tooling, and reusable components that enable safe experimentation and rapid prototyping. By centralizing these resources, GSA reduces duplication of effort and accelerates innovation across the Federal enterprise.
  • AI CoP: In addition to providing technical support, GSA fosters collaboration through the AI CoP. As noted earlier, this network is open to all Government employees and mission-supporting contractors with .gov or .mil email addresses. It serves as a platform for peer learning, cross-agency dialogue, and resource exchange. Members participate in webinars, workshops, and working groups that address key topics such as ethical AI, data governance, and risk mitigation. The CoP also maintains a shared knowledge repository, ensuring that insights and tools are accessible to all participants.
  • EDS: Supporting these efforts is the EDS, a secure and scalable platform managed by GSA’s Office of the Chief Data Officer. EDS provides the technical foundation for AI experimentation, offering sandbox environments, data governance tools, and infrastructure for deploying AI models. It ensures data used in AI projects is properly managed, protected, and aligned with Federal standards. 

Together, these resources create a cohesive framework for innovation, governance, and risk management, enabling GSA to lead by example in the responsible use of artificial intelligence.

Determining Which Artificial Intelligence Is Presumed to Be Safety-Impacting or Rights-Impacting

Explain the process by which your agency determines which AI use cases are high-impact. In particular, describe how your agency is reviewing or planning to review each current and planned use of AI to assess whether it matches the definition of high-impact AI, as defined in Section 4.a of M-25-21. Identify whether your agency has created additional criteria for when an AI use is high-impact and describe such supplementary criteria.

GSA handles high-impact AI use cases with additional requirements due to their potential risks to individuals or businesses, establishing transparent governance and compliance processes. The CAIO is responsible for measuring, monitoring, and evaluating the ongoing performance and effectiveness of these applications, and for centrally tracking them. System owners and their designees are responsible for enacting these minimum requirements.

Minimum Requirements for High-Impact AI (before and during employment):

  • Before an AI system is employed, several minimum requirements must be met. These include completing an AI Impact Statement, submitting an AI system test plan that demonstrates real-world context testing and contestability, and undergoing an independent evaluation of the AI system by the CAIO or a designee (an illustrative checklist sketch follows this list).
  • While the AI system is in use, ongoing monitoring is required, with established thresholds for periodic human review. Emergent risks identified through routine testing, continuous monitoring protocols, or third-party findings must be mitigated. All system practitioners must complete requisite AI training requirements, and human validation and intervention protocols should be in place to ensure that all output decisions made by AI systems are regularly evaluated. Additionally, public notice and plain language documentation regarding high-impact use cases must be provided through the public interface, in public disclosure statements, and in the AI use case inventory.
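
As an illustrative sketch only, the pre-deployment requirements above could be tracked as a simple gate checklist; the gate names paraphrase the first bullet and do not describe an actual GSA system.

    # Hypothetical pre-deployment gates for a high-impact use case
    # (illustrative only; names paraphrase the requirements above).
    PRE_DEPLOYMENT_GATES = (
        "ai_impact_statement_complete",
        "test_plan_shows_real_world_context_and_contestability",
        "independent_caio_evaluation_complete",
    )

    def ready_to_employ(status: dict) -> bool:
        """Every gate must pass before the AI system is employed."""
        return all(status.get(gate, False) for gate in PRE_DEPLOYMENT_GATES)

    print(ready_to_employ({
        "ai_impact_statement_complete": True,
        "test_plan_shows_real_world_context_and_contestability": True,
        "independent_caio_evaluation_complete": False,   # still outstanding
    }))  # -> False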

Additional Requirements for Higher-Impact AI (before and during employment):

  • Before implementing an AI system, it is crucial to proactively identify and mitigate algorithmic discrimination or bias and assess and mitigate disparate impacts for protected classes. Direct user testing of system interactions should be conducted, and comments from the user community should be solicited, along with post-transaction customer feedback activities.
  • While employing the AI, ongoing monitoring studies for AI-enabled discrimination must be conducted. Any negatively affected individuals should be notified, and fallback and escalation options for AI processes or outcomes should be provided. Where practicable, opt-out alternatives should also be offered.

Certain high-impact AI use cases are exempt from the outlined requirements, specifically when evaluating a potential vendor, commercial capability, or freely available AI capability not currently used in agency operations, solely for procurement or acquisition decisions; when evaluating an AI application because its provider is the target or potential target of a regulatory enforcement action; or for research and development purposes.

The CAIO may waive one or more stated requirements for specific covered AI applications, with conditions, if the requirements would increase overall risks to safety or rights, or create an unacceptable impediment to critical agency operations. Waiver requests are submitted with written justifications to the CAIO and the EDGE Board; they are centrally tracked, subject to publication, and reassessed annually.

The EDGE Board is responsible for establishing the enterprise’s AI risk tolerance, prioritization, and strategic approach to risk management, in alignment with GSA’s Enterprise Risk Management Policy. This encompasses defining likelihood and impact ranking criteria and thresholds, outlining factors for the use case risk rubric, and establishing risk management practices and processes for AI systems. The AI Safety Team then assesses use cases based on this guidance, and individual system owners are accountable for implementing the defined risk management processes.
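
As a worked illustration of how likelihood and impact rankings can combine into a risk tier, consider the sketch below; the five-point scales, thresholds, and tier labels are assumptions, not the EDGE Board’s actual criteria.

    # Hypothetical likelihood x impact rubric (illustrative only; scales,
    # thresholds, and labels are assumptions, not the EDGE Board's
    # actual criteria).
    def risk_tier(likelihood: int, impact: int) -> str:
        """Combine 1-5 likelihood and impact scores into a risk tier."""
        if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
            raise ValueError("scores must be on the 1-5 scale")
        score = likelihood * impact          # ranges from 1 to 25
        if score >= 15:
            return "high"                    # e.g., escalate to the EDGE Board
        if score >= 8:
            return "moderate"                # e.g., AI Safety Team review with mitigations
        return "low"                         # e.g., standard monitoring

    # Example: a failure judged unlikely (2) but severe if it occurs (5).
    print(risk_tier(likelihood=2, impact=5))  # -> "moderate"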

If your agency has developed its own distinct criteria to guide a decision to waive one or more of the minimum risk management practices for a particular use case, describe the criteria.

GSA has not developed distinct criteria to guide the decision to waive minimum risk management practices.

For the issuance of waivers, the CAIO, in coordination with other relevant officials, may waive one or more of the required minimum practices for high-impact use cases involving a specific AI application or component, following a written determination. The waiver process involves submitting a formal request to the CAIO and the EDGE Board, which includes a risk assessment and review of the specific system and context. The waiver will be considered for approval should the assessment find that meeting the requirement would increase overall risks to safety or rights or impose an unacceptable barrier to critical agency operations. If the EDGE Board and the CAIO approve, waivers will be reported to OMB within 30 days of issuance, tracked, and routinely reviewed to ensure compliance with agency standards. GSA maintains a certification process to confirm that AI systems granted waivers continue to meet risk management requirements, and the CAIO may revoke waivers if any noncompliance or unforeseen risks emerge. GSA has not issued a waiver at the time of this compliance plan’s publication.
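
To make the 30-day reporting clock concrete, a waiver tracker might record entries along these lines; the fields below are hypothetical, not GSA’s actual tracking system.

    # Hypothetical waiver tracking entry (illustrative only; field names
    # are assumptions, not GSA's actual tracker).
    from datetime import date, timedelta

    waiver = {
        "use_case_id": "GSA-2025-0001",                     # hypothetical
        "practice_waived": "independent pre-deployment evaluation",
        "written_determination_on_file": True,
        "issued": date(2025, 10, 1),
    }

    # Waivers are reported to OMB within 30 days of issuance and are
    # routinely reviewed thereafter.
    omb_report_due = waiver["issued"] + timedelta(days=30)
    print(omb_report_due)  # 2025-10-31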

Implementation of Risk Management Practices and Termination of Non-Compliant AI

Elaborate on the controls your agency has put in place to prevent non-compliant safety-impacting or rights-impacting AI from being deployed to the public. Describe your agency’s intended process to terminate, and effectuate that termination of, any non-compliant AI.

GSA has implemented several controls to prevent non-compliant, high-impact AI from being deployed to the public. Safeguards include the review of AI use cases by the AI Safety Team, confirming all AI systems meet ethical, legal, and technical standards before deployment. The AI Safety Team evaluates risks related to public safety, privacy, civil liberties, and potential biases, with specific attention to AI systems that may have significant impacts on rights or safety.

To prevent non-compliant, high-impact AI from being deployed to the public, GSA has established continuous monitoring protocols that track AI system interactions at the network level. GSA is developing a strategy to increase its capacity to monitor AI system behaviors and performance. Automated alerts and reporting systems are in place to detect deviations from compliance standards, triggering an immediate review by relevant oversight bodies.
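
A minimal sketch of the kind of threshold-based alerting described above, assuming hypothetical metric names and thresholds; GSA’s actual monitoring tooling is not described here.

    # Minimal threshold-based compliance alerting sketch (illustrative
    # only; metric names and thresholds are assumptions).
    THRESHOLDS = {
        "error_rate": 0.05,       # maximum acceptable share of failed outputs
        "pii_leak_events": 0,     # any detected leak triggers review
        "drift_score": 0.30,      # maximum acceptable distribution drift
    }

    def compliance_alerts(metrics: dict) -> list[str]:
        """Flag any monitored metric that exceeds its threshold."""
        return [
            f"ALERT: {name}={value} exceeds threshold {THRESHOLDS[name]}"
            for name, value in metrics.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]
        ]

    # A breach triggers immediate review by the relevant oversight bodies.
    for alert in compliance_alerts({"error_rate": 0.09, "pii_leak_events": 0}):
        print(alert)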

If a high-impact AI system is found to be non-compliant after deployment, GSA has developed a defined process for termination. The EDGE Board, in collaboration with the OCIO, will issue a termination order. The termination process involves revoking system access, ceasing operations, and ensuring that data processed by the AI system is secured, maintained, or destroyed as required. An incident response team coordinates the shutdown and conducts a post-termination review to assess the impact and identify corrective actions. Additionally, communications will be made regarding use-case terminations, the reasons for the action, and any steps being taken to prevent future issues.

Minimum Risk Management Practices

Identify how your agency plans to document and validate implementation of the minimum risk management practices. In addition, discuss how your agency assigns responsibility for the implementation and oversight of these requirements.

GSA policy mandates that AI use case owners document and validate the implementation of the minimum risk management practices defined in OMB M-25-21 through a comprehensive framework managed by the EDGE Board. This process includes detailed documentation at each stage of the AI life cycle, from development and testing to deployment and monitoring. Use case owners are required to maintain thorough records of risk assessments, compliance checklists, data usage audits, real-world testing, and ethical impact evaluations. These documents will be stored in a centralized repository that is accessible to the AI Safety Team and the EDGE Board.

GSA will conduct periodic audits and performance reviews of AI systems to validate that the minimum risk management practices are being followed. These audits assess compliance with Federal guidelines, including data privacy, bias mitigation, and ethical considerations. Additionally, GSA employs automated monitoring tools to track ongoing compliance, and discrepancies trigger immediate reviews by oversight teams.

Responsibility for implementing and overseeing these risk management practices is clearly assigned to multiple levels within the organization. The EDGE Board provides strategic oversight and policy guidance, and supports the AI Safety Team in ensuring all AI initiatives comply with legal and ethical standards. The OCIO is responsible for the technical implementation and monitoring of AI systems. Individual project managers and teams are tasked with day-to-day compliance, including adhering to risk management protocols, conducting regular risk assessments, and reporting any issues to the EDGE Board.