Purpose
This directive updates the governing policies for the responsible, efficient, and accelerated adoption of artificial intelligence (AI) technologies and platforms within the General Services Administration (GSA). It aligns with the principles outlined in Office of Management and Budget (OMB) Memoranda M-25-21 and M-25-22 by promoting innovation, enhancing public trust, and ensuring mission-enabling use of AI. The directive establishes standards for the assessment, procurement, usage, monitoring, and governance of AI systems, emphasizing risk management, transparency, and lifecycle accountability. It supports the development of agency-wide AI strategies, fosters cross-functional collaboration, and encourages the use of trustworthy, interoperable, and American-made AI solutions. All activities under this directive must comply with existing security, privacy, and ethics regulations and applicable laws.
Background
The AI in Government Act of 2020 (Public Law 116-260); the AI Training Act of 2022 (Public Law 117-207); OMB Memoranda M-25-21 and M-25-22; and OMB Circular No. A-119 direct all federal agencies to:
- Accelerate the responsible adoption of AI by emphasizing innovation, governance, and public trust, in accordance with OMB M-25-21. Agencies must reduce bureaucratic barriers and promote mission-enabling AI that benefits the American public while safeguarding civil rights, civil liberties, and privacy.
- Empower Chief AI Officers (CAIOs) and other AI leaders to drive strategic planning, workforce development, and cross-agency collaboration. Agencies must establish AI governance boards and publish public AI strategies that identify barriers and outline plans for scaling responsible AI use.
- Ensure compliance with applicable federal laws and policies in the development, deployment, and use of AI and automated systems. This includes adherence to privacy, safety, and nondiscrimination standards and alignment with voluntary consensus standards as outlined in OMB Circular A-119.
- Establish and maintain processes to measure, monitor, evaluate, and report on AI use cases and their performance, especially for high-impact AI. Agencies must implement risk management practices and be prepared to terminate non-compliant systems.
- Conduct regular AI risk assessments, particularly for high-impact systems, and integrate findings into governance and acquisition decisions. Agencies must also contribute to interagency repositories of best practices and tools.
- Prioritize AI applications that advance the agency’s mission, improve service delivery, and promote innovation. Agencies should invest in AI-enabling infrastructure, including data governance, workforce training, and reusable tools and models.
- Ensure sufficient infrastructure and capacity for AI-ready data, including robust data curation, labeling, and stewardship practices. Agencies must manage data as a strategic asset to support trustworthy AI.
- Assess and plan for AI workforce needs by identifying required competencies, offering training (as mandated by the AI Training Act), and aligning hiring and reskilling strategies with evolving AI demands.
- Support interagency coordination and standards-setting initiatives, and encourage the adoption of voluntary consensus standards for AI, as directed by OMB Circular A-119 and M-25-21.
- Drive efficient and responsible AI acquisition by fostering a competitive U.S. AI marketplace, managing performance and risk throughout the acquisition lifecycle, and ensuring cross-functional engagement in procurement decisions.
Applicability
This order applies to:
- All individuals, including GSA employees and contractors, who access, manage, share, or use data, including those involved in system-to-system data exchanges, and those participating in AI-related activities, training, or governance as defined under the AI Training Act of 2022.
- All IT systems owned or operated by or on behalf of any GSA Service and Staff Office that process, store, or transmit federal data, especially where AI capabilities are integrated or acquired, in accordance with OMB Memoranda M-25-21 and M-25-22.
- All federal data contained in or processed by GSA IT systems, including data used to train, evaluate, or operate AI systems, subject to applicable privacy, security, and intellectual property (IP) safeguards outlined in M-25-22.
- The Office of Inspector General (OIG) to the extent that participation aligns with the OIG’s independent authority under the Inspector General Act of 1978, as amended (5 U.S.C. §§ 401–424), and does not conflict with OIG policies or its mission.
- The Civilian Board of Contract Appeals (CBCA) only to the extent that participation is consistent with the CBCA’s requisite independence under the Contract Disputes Act (41 U.S.C. §§ 7101–7109) and its legislative history.
- All AI systems or services acquired by or on behalf of GSA, excluding common commercial products with embedded AI functionality not primarily used for AI purposes, as defined in OMB Memorandum M-25-22.
Standards and conformity assessments used in AI-related activities must align with OMB Circular A-119, which encourages the use of voluntary consensus standards and minimizes reliance on government-unique standards.
Update
This directive supersedes CIO 2185.1A, Use of Artificial Intelligence at GSA, and aligns with the Executive Orders and OMB memoranda referenced in the Background section.
Roles and Responsibilities
- Chief AI Officer (CAIO): In addition to the responsibilities defined in applicable Executive Orders and OMB memoranda, the CAIO must:
- Maintain awareness of and promote the responsible and innovative use of AI technologies across GSA, including understanding system design, functionality, and intended use cases. Ensure AI adoption aligns with agency mission and public trust goals.
- Establish, maintain, and chair internal AI governance bodies, including the AI Governance Board and AI Safety Team, to ensure compliance with OMB M-25-21 and M-25-22.
- Establish and update processes to measure, monitor, and evaluate the performance, accessibility, cost-effectiveness, and outcomes of AI applications, including risk management practices for high-impact systems.
- Develop and oversee implementation of agency AI compliance plans, ensuring alignment with Executive Orders, OMB guidance, and applicable laws. Coordinate reporting to OMB, including AI use case inventories and acquisition strategies.
- Ensure AI acquisitions comply with M-25-22, including vendor sourcing, data portability, interoperability, and privacy safeguards. Collaborate with acquisition officials to embed appropriate contract terms and standards.
- Collaborate with the Chief Human Capital Officer and Chief Learning Officer to identify skill gaps and implement training programs for federal managers and supervisors, as required by the AI Training Act of 2022.
- Identify and convene external individuals or organizations with AI expertise to provide input relevant to GSA’s mission functions, ensuring diverse perspectives and technical rigor.
- In coordination with relevant officials, issue waivers for individual AI applications when appropriate. Establish and maintain criteria for categories of AI applications that do not require disposition through the AI Governance Board or AI Safety Team.
- Promote the use of voluntary consensus standards in AI development and acquisition, consistent with OMB Circular A-119, and participate in relevant standards bodies where appropriate.
- AI Governance Board: The Evidence-Based Data Governance Executive (EDGE) Board serves as GSA’s principal AI governance body. In alignment with federal mandates and best practices, the EDGE Board shall:
- Define and periodically update the agency’s vision, goals, and priorities for AI development and deployment, ensuring alignment with GSA’s mission, values, and public service objectives.
- Support the CAIO in implementing governance frameworks, including risk management protocols, ethical guidelines, and compliance mechanisms for AI systems.
- Ensure that AI systems used or acquired by GSA uphold principles of fairness, transparency, accountability, and privacy, consistent with OMB guidance and the AI Risk Management Framework.
- Oversee the integration of AI-related competencies into workforce development plans, in coordination with the CAIO and Human Capital leadership, as required by the AI Training Act of 2022.
- Convene relevant stakeholders across GSA—including legal, acquisition, IT, privacy, and program offices—to ensure coordinated oversight of AI initiatives and procurement.
- Review agency-wide metrics and assessments of AI system performance, including risk mitigation strategies for high-impact use cases. Recommend corrective actions or system decommissioning when necessary.
- Promote the use of voluntary consensus standards and conformity assessment practices in AI development and acquisition, consistent with OMB Circular A-119.
- Validate the agency’s AI use case inventory, compliance plans, and reporting submissions to OMB, ensuring completeness, accuracy, and alignment with federal requirements.
- Recommend external experts or advisory bodies to support GSA’s AI governance, particularly in areas of technical complexity, ethics, and emerging risks.
- AI Safety Team: The AI Safety Team is a technical working group reporting to the CAIO in their role as co-chair of the EDGE Board. In alignment with federal mandates, the AI Safety Team shall:
- Implement the risk posture defined by the EDGE Board by developing and applying a standardized risk rubric for AI use cases. This includes managing intake, adjudication, and documentation of AI use cases across GSA.
- Independently adjudicate Capability Assessment, GSA Pre-Approved Application, Research and Development, and Production or Production-Intent use cases.
- Production or Production-Intent use cases also require CAIO review and approval.
- High-impact use cases must be escalated to the EDGE Board for final adjudication.
- Deliver dispositions for Production or Production-Intent use cases, including assessments of feasibility, ethical implications, and compliance with legal and policy standards.
- Enforce all GSA-authorized security, privacy, and audit policies to protect Controlled Unclassified Information (CUI) and ensure AI systems operate within acceptable levels of residual risk. This includes:
- Privacy Threshold Assessments (PTAs)
- Privacy Impact Assessments (PIAs)
- Privacy Act Statements
- System of Records Notices (SORNs)
- Authorizations to Operate (ATOs)
- Federal Risk and Authorization Management Program (FedRAMP) authorizations
- Assess AI systems for performance, scalability, bias, transparency, and compatibility with existing infrastructure. Evaluate ethical implications, including fairness, accountability, and protection of individual rights.
- Include members with expertise in development, architecture, data science, user experience, privacy, security, and mission delivery. This ensures multi-disciplinary perspectives in AI governance.
- Maintain comprehensive records of adjudication decisions, rationale, and associated risks. Generate regular reports for internal review and external compliance reporting.
- Collaborate with system owners and executive sponsors to ensure ongoing monitoring of AI systems, especially those deemed high-impact. Recommend corrective actions or decommissioning if systems fail to meet standards.
- Engage with external stakeholders, including academic institutions, interagency councils, and civil society organizations, to benchmark practices and incorporate emerging standards.
- System Owner: System owners shall:
- Register AI use cases with the AI Safety Team, including initial deployment and any significant modifications or decommissioning events. This supports centralized tracking and risk tiering as required by M-25-21.
- Ensure compliance with this directive and all applicable OMB guidance, including privacy, security, and performance monitoring requirements for AI systems.
- Support risk management by providing documentation and updates necessary for pre-deployment testing, ongoing monitoring, and human oversight of high-impact AI systems.
- Executive Sponsor: Executive sponsors shall:
- Sponsor AI use cases in Capability Assessment, Research and Development, and Production or Production-Intent phases, ensuring alignment with strategic objectives and risk posture defined by the EDGE Board.
- Champion responsible AI acquisition by coordinating with acquisition officials to ensure AI systems meet interoperability, privacy, and performance standards outlined in M-25-22.
- Facilitate cross-functional engagement across legal, privacy, IT, and program offices to ensure AI initiatives are well-resourced and compliant with lifecycle governance requirements.
- Authorized Users of IT Resources:
- General practitioners shall:
- Protect federal nonpublic information and report any potential IT security incidents, consistent with GSA’s IT General Rules of Behavior.
- Report unregistered or high-impact AI use cases to the AI Safety Team when they believe the system owner has not registered them, supporting transparency and risk mitigation as required by M-25-21.
- Specialized practitioners shall:
- Implement and maintain AI policies in support of AI software development and deployment, ensuring adherence to GSA IT policies and federal standards.
- Support acquisition and governance processes by applying open standards and interoperability principles consistent with OMB Circular A-119 and M-25-22.
- Ensure privacy and IP safeguards during AI system lifecycle activities, including data handling, vendor coordination, and documentation of system performance.