
AI strategies and compliance plan
Below we outline our Strategies for OMB Memorandum M-25-21, our response to Office of Management and Budget (OMB) Memoranda M-25-21 and M-25-22. Following that is our AI compliance plan: the actions we will take to implement those strategies.
Strategies for OMB Memorandum M-25-21
Date: Sep 30, 2025
Prepared by: Zachary Whitman
1. GSA AI use cases
Provide examples of significant agency AI use cases currently in use or planned to be in use.
GSA organizes its AI activities into three tiers to reflect their level of integration, technical complexity, and mission impact. This structure provides a clear roadmap for scaling AI from broad cultural adoption to deep programmatic transformation, while highlighting high-impact and rights-sensitive applications that require heightened oversight.
1.1 Tier 1 — Chatbot-based use cases (enterprise access and cultural adoption)
Tier 1 encompasses AI capabilities delivered through the USAi general chatbot, providing every GSA employee with secure, enterprise-level access to generative AI tools. These use cases focus on broad adoption, productivity gains, and knowledge management. Examples include:
- General Enterprise Support: Employees use the chatbot to draft documents, summarize meeting notes, generate first-draft code, and retrieve policy guidance, reducing time spent on routine tasks.
- Customer Experience Assistance: Public-facing pilots test chatbots that provide plain-language answers to common inquiries about federal programs and GSA services.
- Training and Onboarding: New employees leverage the chatbot to navigate GSA policies, IT procedures, and benefits information, accelerating onboarding and reducing help-desk demand.
1.2 Tier 2 — Application Programming Interface, or API, use cases (programmatic integrations and mission delivery)
Tier 2 covers API-enabled services built on the USAi platform to support direct mission functions, strategic improvement efforts, and deeper automation. These integrations allow programs to call large language models securely within agency applications. Examples include:
- Acquisition Document Generation: Automated drafting of procurement language, market research summaries, and acquisition strategies to reduce cycle times while maintaining compliance.
- Data Quality Enhancement: Model-driven detection and correction of errors in large administrative datasets, improving the accuracy of reports and analytics.
- Agentic Workflows: AI-powered “co-pilots” embedded in business systems that chain tasks together (e.g., compiling regulatory references, validating inputs, and generating recommendations) to support contracting officers and program managers.
- Bias Impact Analysis: Advanced natural language and statistical models are used to study potential algorithmic bias in federal services and procurement processes, informing policy decisions and action plans.
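The agentic pattern described above, in which an API-backed assistant chains discrete tasks together, can be sketched generically. The sketch below is a hypothetical illustration only: the `call_model` stub, step names, and prompts are assumptions for this example, not the USAi API.

```python
# Hypothetical sketch of an agentic workflow that chains tasks through a
# model API. call_model() is a stand-in for a real, secured LLM client;
# it returns canned text so the chaining structure is the focus.

def call_model(prompt: str) -> str:
    """Placeholder for a secure model API call (illustrative only)."""
    return f"[model output for: {prompt}]"

def compile_references(topic: str) -> str:
    # Step 1: gather regulatory references relevant to the task.
    return call_model(f"List regulatory references relevant to {topic}")

def validate_inputs(draft: str) -> str:
    # Step 2: check the intermediate output for completeness.
    return call_model(f"Check this draft for missing required fields: {draft}")

def generate_recommendation(context: str) -> str:
    # Step 3: produce a recommendation for human review.
    return call_model(f"Draft a recommendation based on: {context}")

def acquisition_copilot(topic: str) -> str:
    """Chain the three steps: gather references, validate, recommend."""
    refs = compile_references(topic)
    checked = validate_inputs(refs)
    return generate_recommendation(checked)

print(acquisition_copilot("IT hardware procurement"))
```

The value of the pattern is that each step's output feeds the next, while a human (here, the contracting officer) reviews the final recommendation rather than performing every intermediate lookup.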
1.3 Tier 3 — Integration use cases (embedded AI in existing tools and high-impact applications)
Tier 3 includes AI features that are embedded directly into existing enterprise platforms or third-party tools, including applications with heightened privacy or civil rights considerations. Examples include:
- Login.gov Face-Matching Technology: Use of facial matching software to support secure identity verification for public authentication services. This high-impact use case is subject to additional testing, human review, and continuous monitoring to safeguard privacy and prevent algorithmic discrimination.
- AI-Enhanced IT Service Management: Natural language classification and routing of help-desk tickets to speed issue resolution.
- Generative Features in Productivity Suites: Secure enablement of built-in generative AI functions in productivity software (e.g., document drafting, spreadsheet analysis) with GSA-specific guardrails.
- Facilities and Property Management Optimization: Predictive analytics integrated into building management systems to forecast energy usage and improve maintenance scheduling.
Through this tiered approach, GSA ensures that AI adoption progresses responsibly and strategically—starting with low-risk, high-value enterprise services that enable broader enterprise adoption and workforce upskilling (Tier 1), advancing to program-specific integrations that leverage a service-based architecture while maintaining all requisite system and data controls (Tier 2), and finally embedding AI directly into mission systems and high-impact applications to increase the direct value and service delivery of those systems (Tier 3).
2. GSA AI maturity goals
Provide an assessment of GSA’s current state of AI maturity and a plan to achieve GSA’s AI maturity goals in the following key areas.
2.1 Current state of AI maturity
GSA’s AI journey began with a series of pilots and research initiatives designed to explore the emergent and evolving landscape of enterprise AI adoption within a federal context. Early efforts focused on understanding the agency’s workforce—identifying current skillsets, assessing readiness, and mapping opportunities to augment day-to-day activities with AI capabilities. The initial priority was “drudge reduction”: using AI to automate repetitive tasks so employees could concentrate on higher-value mission work.
These early pilots revealed several key needs:
- Enterprise availability of tools — AI capabilities must be accessible to all employees, allowing staff to experiment, adopt, and integrate tools when they are most relevant to their work.
- Training and cultural support — GSA invested in training opportunities and community-building, collaborating with its AI Community of Practice to offer agency-wide learning sessions and hosting initiatives such as “Friday Demo Days,” where employees share their generative AI projects to inspire peer adoption.
- Access to cutting-edge technology — GSA worked with industry partners to minimize the procurement timelines for market-leading AI tools, resulting in OneGov agreements that enable GSA and other agencies to purchase AI technologies and platforms at scale.
- Accelerated security authorization — the FedRAMP program launched the “20x” initiative, which expedites the review and approval of generative AI platforms so agencies can safely access the same tools and services used in the private sector.
- Shared services to drive adoption across government — GSA launched USAi, a government-wide AI platform and service model that provides agencies with a secure environment to test, adopt, and scale AI capabilities. USAi helps agencies evaluate cultural readiness, integration opportunities, and technical requirements, enabling data-driven decisions about future enterprise deployment. GSA itself uses the USAi platform in its own enterprise adoption journey.
2.2 Path to maturity
Building on these foundations, GSA’s AI maturity plan focuses on three reinforcing objectives:
- Agency-wide enablement — Continue to expand secure, enterprise-level access to AI tools and platforms so every employee can leverage AI in mission delivery.
- Workforce development — Broaden training, guidance, and community engagement to ensure employees at all skill levels can responsibly and effectively use AI.
- Government-wide leadership — Advance shared services like USAi and FedRAMP’s 20x program to help other agencies rapidly adopt secure, state-of-the-art AI solutions.
Through these efforts, GSA is moving from pilot-driven experimentation to sustained, enterprise-wide adoption, while also helping other federal agencies achieve similar AI maturity.
2.3 AI-enabling infrastructure
Describe GSA’s plan to develop AI-enabling infrastructure across the AI lifecycle including development, testing, deployment, and continuous monitoring.
GSA is developing a secure, scalable, and government-wide AI infrastructure that supports every phase of the AI life cycle—from development and testing to deployment and continuous monitoring. The goal is to provide an environment where AI tools can be responsibly created, evaluated, and used across GSA and by other federal agencies, while maintaining strong safeguards for security, privacy, and equity.
2.3.1 Development and testing
- Enterprise Data Solution (EDS): GSA’s central data platform serves as the foundation for AI experimentation, model use, and development. EDS provides curated, well-governed datasets, standardized metadata, and controlled access to ensure that AI models are trained and evaluated on high-quality, secure data.
- Research Environments: Dedicated R&D workspaces allow teams to prototype and test AI models in a secure setting before any production deployment. These environments include sandboxed compute resources, version control for models, and automated documentation requirements to promote reproducibility and transparency.
- Testing and Evaluation Tools: All AI systems undergo structured evaluation—using agency-defined test plans, AI Impact Statements, and real-world context testing—to measure accuracy, robustness, equity, and safety before promotion to production.
- USAi Console: The USAi console provides a unified evaluation environment that captures model performance, safety telemetry, and bias metrics across multiple commercial models. Agencies can run standardized test suites, compare model outputs side-by-side, and export evaluation data to support internal reviews or external audits.
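A side-by-side evaluation of the kind described above can be sketched as a standardized test suite run against multiple models. Everything below is a toy illustration: the two stub "models" and the question set stand in for real model endpoints and agency-defined test plans.

```python
# Hypothetical sketch of running one standardized test suite against
# multiple models and comparing accuracy side by side.

def model_a(prompt: str) -> str:
    # Stub model: answers both suite questions correctly.
    return {"capital of France?": "Paris", "2 + 2?": "4"}.get(prompt, "unknown")

def model_b(prompt: str) -> str:
    # Stub model: answers only one suite question correctly.
    return {"capital of France?": "Paris"}.get(prompt, "unknown")

# Agency-defined test suite: (prompt, expected answer) pairs.
TEST_SUITE = [
    ("capital of France?", "Paris"),
    ("2 + 2?", "4"),
]

def evaluate(models: dict, suite: list) -> dict:
    """Return each model's fraction of correct answers on the suite."""
    scores = {}
    for name, fn in models.items():
        correct = sum(1 for prompt, expected in suite if fn(prompt) == expected)
        scores[name] = correct / len(suite)
    return scores

scores = evaluate({"model_a": model_a, "model_b": model_b}, TEST_SUITE)
print(scores)  # {'model_a': 1.0, 'model_b': 0.5}
```

Running the same suite against every candidate model is what makes the comparison exportable and auditable: the evaluation data, not just the scores, can support internal reviews or external audits.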
2.3.2 Deployment and operations
- USAi Shared Service: GSA’s USAi platform provides agencies with a FedRAMP-authorized environment to deploy and operate generative AI solutions. USAi offers both a chatbot interface and API access, enabling agencies to integrate AI into mission workflows without the need to build their own infrastructure.
- Secure Cloud Architecture: AI deployments leverage multi-tenant cloud services with built-in encryption, identity management, and logging. Deployment pipelines incorporate automated security scanning and configuration baselines aligned with federal cybersecurity standards.
- 20x FedRAMP Initiative: To ensure agencies can access market-leading AI platforms, GSA is leading an accelerated FedRAMP review process that shortens authorization timelines for generative AI tools while maintaining rigorous security standards.
2.3.3 Continuous monitoring and risk management
- Centralized Governance: The AI Governance Board (known as the EDGE Board) and the AI Oversight Committee oversee all production systems, requiring ongoing performance monitoring, periodic human review, and annual re-registration of AI use cases.
- Telemetry and Logging: USAi and EDS provide detailed telemetry on model usage, API consumption, and system performance to detect drift, bias, or anomalous behavior.
- Incident Response and Privacy Controls: Integrated workflows connect AI operations with cybersecurity and privacy teams, ensuring rapid response to security incidents and compliance with federal privacy requirements.
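The drift detection that this telemetry enables can be illustrated with a population stability index (PSI) over binned feature counts. The threshold and the data below are illustrative assumptions, not GSA's operational values.

```python
# Illustrative drift check: population stability index (PSI) comparing a
# baseline bin distribution against a recent one. Values above roughly
# 0.2 are commonly treated as significant drift (assumed threshold).
import math

def psi(expected: list, actual: list) -> float:
    """PSI over matched histogram bins of a monitored feature."""
    e_total, a_total = sum(expected), sum(actual)
    score = 0.0
    for e, a in zip(expected, actual):
        e_pct = max(e / e_total, 1e-6)  # floor avoids log(0)
        a_pct = max(a / a_total, 1e-6)
        score += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return score

baseline = [100, 200, 300, 400]   # bin counts captured at deployment time
recent   = [110, 190, 310, 390]   # similar distribution: low PSI
shifted  = [400, 300, 200, 100]   # reversed distribution: high PSI

assert psi(baseline, recent) < 0.2    # no alert
assert psi(baseline, shifted) > 0.2   # alert: investigate drift
```

In practice a check like this runs on a schedule against logged inputs or outputs, and a score over the threshold triggers the human review and reassessment steps described above.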
Through this layered infrastructure, GSA is creating a repeatable and secure pathway for agencies to develop, test, deploy, and continuously monitor AI solutions—enabling innovation at scale while upholding the highest standards of safety, transparency, and trust.
2.4 Data
Describe your agency’s plan to ensure access to quality data for AI and data traceability.
GSA’s AI strategy is grounded in the principle that high-quality, well-governed data is the foundation of trustworthy AI. The agency is building policies, platforms, and tools that ensure data used for AI is accurate, traceable, and reusable—both within GSA and across the Federal Government.
2.4.1 Access to quality data
- Enterprise Data Solution (EDS): GSA’s enterprise data platform serves as the central catalog for all datasets used in AI development. All AI projects must register their data assets in EDS, including details on provenance, quality, and sensitivity, ensuring that every dataset can be reviewed, approved, and monitored.
- Data Quality Controls: System owners must document data collection methods, preparation processes, and quality measures. Data intended for AI development is required to meet standards for representativeness and coverage to reduce bias and improve model reliability.
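A representativeness and coverage check of the kind these controls require might look like the following sketch. The field names, categories, and 5% threshold are hypothetical, chosen only to illustrate the idea.

```python
# Illustrative coverage check: flag expected categories that are missing
# or underrepresented in a dataset before it is used for AI development.
from collections import Counter

def coverage_report(records: list, field: str, expected_values: set,
                    min_share: float = 0.05) -> dict:
    """Report categories absent from, or rare in, the given field."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {
        "missing": sorted(expected_values - counts.keys()),
        "underrepresented": sorted(
            v for v in expected_values
            if 0 < counts.get(v, 0) / total < min_share
        ),
    }

# Toy dataset: region 4 is absent entirely, region 3 is underrepresented.
records = ([{"region": "1"}] * 50 + [{"region": "2"}] * 48
           + [{"region": "3"}] * 2)
report = coverage_report(records, "region", {"1", "2", "3", "4"})
print(report)  # {'missing': ['4'], 'underrepresented': ['3']}
```

A report like this gives system owners concrete evidence for the documented quality measures, rather than an unsupported claim that the data is representative.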
2.4.2 Data sharing and reuse
- Open and Shared Data Assets: Consistent with the OPEN Government Data Act, GSA publishes qualified datasets to Data.gov and promotes interagency reuse of AI-ready datasets, models, and evaluation tools where privacy and security permit.
- Standard Agreements and Frameworks: GSA applies reusable data-sharing agreements and governance templates to streamline cross-agency access to AI-relevant data, reducing the time required to establish new exchanges.
- Governmentwide Collaboration: Through initiatives such as USAi and OneGov acquisition vehicles, GSA provides a path for other agencies to leverage shared data resources, model artifacts, and best practices without duplicating infrastructure.
2.4.3 Reproducibility, traceability, explainability, and model lineage
- Reproducibility, Data, and Model Lineage: Reproducibility is reinforced through mandatory metadata in EDS, which records the full provenance of training and testing datasets, preprocessing steps, and model versions. This ensures that evaluation results can be replicated and that any downstream decisions can be traced back to their source data and configurations. All AI systems must document the origin, transformations, and use of data throughout the model life cycle. Metadata in EDS captures details such as data sources, preprocessing steps, training runs, and model versions.
- Transparency and Interpretability: GSA requires AI Impact Statements and system test plans to include explainability considerations, ensuring that model decisions can be audited and contested. AI-generated data products are labeled and indexed to clearly identify machine-generated content.
- Continuous Monitoring: Production AI systems undergo periodic reviews to detect data drift, model degradation, or privacy risks, with mechanisms in place to retrain or retire models when quality standards are not met.
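The lineage metadata described above can be pictured as a structured record tied to each model version. The field names and identifier format below are hypothetical, not the actual EDS schema.

```python
# Hypothetical lineage record of the kind a data catalog might hold,
# linking a model version back to its source data and processing steps.
from dataclasses import dataclass, field, asdict

@dataclass
class ModelLineageRecord:
    model_id: str
    model_version: str
    training_datasets: list            # dataset identifiers with provenance
    preprocessing_steps: list          # ordered, repeatable transformations
    training_run_id: str
    evaluation_results: dict = field(default_factory=dict)

record = ModelLineageRecord(
    model_id="ticket-router",
    model_version="2.1.0",
    training_datasets=["helpdesk-tickets-2024 (catalog entry, v3)"],
    preprocessing_steps=["strip-pii", "lowercase", "dedupe"],
    training_run_id="run-0042",
    evaluation_results={"accuracy": 0.93},
)
print(asdict(record))
```

With a record like this attached to every model version, an evaluation result can be replicated and a downstream decision traced back to the exact datasets and transformations that produced it.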
Through these measures, GSA ensures that data used in AI development is discoverable, trustworthy, and accountable, enabling responsible innovation while supporting government-wide sharing and public transparency.
2.5 AI-ready workforce
Describe your agency’s plan to recruit, hire, train, retain, and empower an AI-ready workforce and achieve AI literacy for non-practitioners involved in AI.
GSA is advancing a comprehensive plan to recruit, train, retain, and empower an AI-ready workforce that can both build AI systems and responsibly govern their use. The agency recognizes that AI success depends not only on technical experts, but also on broad AI literacy among non-technical staff who shape policy, procurement, and service delivery.
2.5.1 Recruiting and hiring technical talent
- Targeted AI and Data Roles: GSA is expanding hiring for data scientists, machine-learning engineers, human-centered designers, cybersecurity specialists, and evaluation experts to develop scalable, secure AI solutions for mission delivery and shared services like USAi.
- Flexible Hiring Authorities: The agency uses special pay rates, direct-hire authorities, and fellowship programs to compete for in-demand AI talent, while partnering with the U.S. Digital Service and Presidential Innovation Fellows to bring in experienced technologists.
- Interagency Collaboration: GSA coordinates with OPM and the Chief AI Officers Council to share candidate pools, reduce hiring friction, and attract talent motivated by public-sector impact.
2.5.2 Training and upskilling the existing workforce
- Agency-Wide AI Literacy: All employees have access to foundational AI training covering responsible use, data privacy, and basic model capabilities. Training is delivered through online courses, live workshops, and hands-on “Friday Demo Days” where staff share AI prototypes and lessons learned.
- Role-Specific Learning Paths: Tailored curricula provide deeper instruction for developers, product managers, acquisition professionals, and legal staff, including courses on model evaluation, prompt engineering, procurement of AI technologies, and algorithmic bias mitigation.
- Communities of Practice: GSA’s AI Community of Practice hosts regular meet-ups, peer learning sessions, and office hours to build internal networks and spread best practices across business lines and regions.
2.5.3 Retention and empowerment
- Career Growth and Recognition: GSA provides clear advancement paths for AI professionals, opportunities to rotate across high-impact projects, and recognition programs that highlight innovative AI contributions to mission outcomes.
- Embedded Safety and Ethics Expertise: AI practitioners work alongside privacy and security officers to ensure that technical staff gain experience with responsible AI design and oversight, making GSA an attractive environment for mission-driven technologists.
2.5.4 Priority application areas
Technical talent will focus on:
- Developing and scaling the USAi shared service to provide secure, government-wide access to generative AI tools.
- Enhancing the EDS to improve data quality, metadata, and AI model traceability.
- Building evaluation and monitoring frameworks that measure safety, bias, and performance across diverse AI models.
- Supporting mission-facing applications in acquisition, customer experience, and federal property management to deliver measurable taxpayer value.
Through these actions, GSA is cultivating a workforce that not only possesses advanced AI skills but also embodies the principles of transparency, accountability, and public trust—ensuring that AI investments translate into secure, efficient, and fair services for the American people.
2.6 Research and development
Describe GSA’s efforts to provide AI tools and capacity to support the agency’s AI research and development (R&D) efforts.
GSA does not maintain a dedicated AI research laboratory or separate line of R&D funding in the manner of a science-focused agency, but it actively fosters AI innovation through applied research, pilot programs, and shared infrastructure that enable experimentation and rapid learning. The agency’s approach emphasizes practical R&D that directly informs mission delivery and government-wide adoption.
2.6.1 AI R&D platforms and tools
- Enterprise Data Solution (EDS): GSA’s secure, enterprise data environment provides curated datasets, version control, and sandboxed compute resources to support model development and testing. AI projects use EDS to experiment with algorithms, assess data quality, and document model lineage before any production deployment.
- USAi Shared Service: GSA’s USAi platform offers agencies an environment for generative AI exploration, including a chatbot interface and API access for custom R&D use cases. USAi allows GSA and partner agencies to test multiple commercial foundation models and evaluate safety, bias, and performance without building duplicative infrastructure. GSA is preparing joint evaluation efforts with the National Institute of Standards and Technology’s AI Safety Institute and the Cybersecurity and Infrastructure Security Agency’s red-teaming initiative to benchmark frontier models for robustness, adversarial resistance, and bias. These partnerships will allow USAi users to incorporate government-wide safety tests directly into their R&D workflows.
- Evaluation and Safety Harnesses: All R&D efforts are paired with structured evaluation plans, impact statements, and real-world context testing to measure accuracy, robustness, and equity while ensuring compliance with federal privacy and security requirements.
2.6.2 Innovation through partnerships and pilots
- Industry Engagement: GSA collaborates with commercial AI providers through market research, demonstrations, and pilot agreements to evaluate emerging technologies and workforce training offerings. These efforts inform acquisition strategies and feed lessons learned into government-wide procurement vehicles such as the OneGov AI deals.
- FedRAMP 20x Initiative: In partnership with the FedRAMP program, GSA is piloting an accelerated authorization process to make leading AI platforms available for R&D and operational use across government, ensuring agencies can access the same cutting-edge capabilities as the private sector.
- Cross-Agency Collaboration: Through USAi, GSA provides R&D capacity to other agencies, enabling them to test models, share evaluation data, and develop AI adoption strategies in a secure, multi-tenant environment. The USAi platform also allows tenant agencies to share models, lessons learned, and data assets should they require those capabilities.
2.6.3 Complementary innovation methods
Even without a traditional research lab, GSA drives innovation by embedding R&D principles into day-to-day operations. Internal programs such as “Friday Demo Days,” hackathons, and targeted pilot projects create opportunities for employees to prototype AI solutions, share findings, and scale successful approaches across the enterprise.
Through these efforts, AI research and experimentation at GSA will remain integral to mission delivery, creating a pipeline of tested, secure, and cost-effective solutions that can be rapidly transitioned from concept to government-wide deployment.
2.7 Governance and risk management
Describe GSA’s plan to develop enterprise capacity for AI innovation.
GSA is building enterprise readiness for AI innovation through a layered governance framework that promotes safe experimentation while enforcing rigorous risk controls. Under the agency’s AI governance directive, the Chief Artificial Intelligence Officer (CAIO) maintains agency-wide visibility of AI activities and chairs both the AI Governance Board and the AI Oversight Committee.
- AI Governance Board (known as the EDGE Board) — Co-chaired by the Chief Data Officer (CDO) and the Deputy Administrator, this body sets enterprise risk tolerance, approves high-impact use cases, and integrates AI oversight with GSA’s enterprise risk management program.
- AI Oversight Committee — A cross-functional working group empowered to review every AI request, assess risk, and enforce privacy and security controls. The team adjudicates use cases across familiarization, pre-acquisition, research and development, and production categories before deployment.
To strengthen enterprise capacity, GSA requires all high-impact use cases to submit AI Impact Statements, independent evaluation plans, and real-world test results prior to deployment. Approved systems undergo continuous monitoring, human-in-the-loop validation, and annual re-registration, with thresholds for human review and mitigation of emergent risks. Every production system must also obtain an Authorization to Operate, complete privacy assessments, and be publicly disclosed in an AI use-case inventory.
This structure enables GSA to scale innovation—through pilots, dedicated R&D environments such as the EDS, and shared services like USAi—while ensuring that new AI capabilities are tested, validated, and continually evaluated for safety, fairness, and mission impact.
Describe your agency’s plan to develop the necessary operations, governance, and infrastructure to manage risks from the use of AI, including risks related to information security and privacy.
GSA integrates AI risk management with existing data governance, privacy, cybersecurity, and enterprise risk programs to manage information-security and privacy risks across the AI life cycle.
- Coordinated oversight — The CAIO, Chief Information Security Officer, Chief Privacy Officer, and Data Governance Leads jointly review all production or production-intent AI systems. AI enhancements to existing IT tools trigger re-authorization within the agency’s security framework.
- Data safeguards — All datasets used for design, training, testing, and operation must be registered in the EDS catalog and adhere to internal data-sharing and sensitivity requirements. Sensitive data cannot be used without explicit clearance and a valid Authorization to Operate.
- Incident response — Any cybersecurity or privacy incident involving AI requires re-submission of the use case for reassessment within strict timelines.
- Privacy and equity protections — Covered AI systems must proactively mitigate algorithmic discrimination, provide human alternatives or fallback options, and publish plain-language notices of AI use.
Through this integrated governance model, GSA ensures that every AI system—whether internally developed or commercially procured—is aligned with federal directives, FedRAMP security controls, and GSA’s enterprise risk management framework, enabling responsible adoption of AI while protecting privacy and mission integrity.
2.8 Resource tracking and planning
Describe GSA’s plan to identify, track, and facilitate future AI investment or procurement.
GSA is implementing a structured approach to identify, track, and plan future AI investments and procurements to ensure that resources align with mission priorities and deliver measurable value to the taxpayer. This approach combines enterprise-wide visibility into AI activities with standardized budgeting and acquisition practices, allowing the agency to manage costs, monitor usage, and scale successful solutions.
2.8.1 Identification of AI investments
- Central AI Inventory: All AI use cases—whether pilots, R&D efforts, or production systems—must be registered in GSA’s enterprise AI inventory. This inventory captures key information on funding sources, technical scope, data requirements, and risk profiles, providing a single view of AI activities across the agency.
- Enterprise Risk and Budget Integration: The AI Governance Board reviews proposed investments alongside GSA’s enterprise risk management process to ensure that funding decisions reflect both mission impact and risk tolerance.
2.8.2 Tracking and cost transparency
- USAi and Platform Telemetry: GSA’s USAi platform provides detailed usage and cost telemetry, enabling real-time tracking of model consumption, API calls, and user activity. This data informs budget planning, allows program offices to forecast future demand, and helps agencies weigh the value propositions of consumption-based and license-based AI platform business models.
- Annual Re-Registration and Reporting: All production or production-intent AI systems are re-evaluated annually for compliance, cost efficiency, and continued mission relevance, ensuring that resource allocations remain aligned with agency goals.
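The consumption-versus-license comparison that this telemetry supports reduces, at its simplest, to a break-even calculation. All prices and volumes below are made-up illustrations, not actual vendor terms.

```python
# Illustrative break-even comparison between a consumption-based
# (pay-per-use) AI platform and a flat per-seat license model.

def monthly_cost_consumption(tokens_millions: float,
                             price_per_million: float) -> float:
    """Cost under a pay-per-use model."""
    return tokens_millions * price_per_million

def monthly_cost_license(seats: int, price_per_seat: float) -> float:
    """Cost under a flat per-seat license model."""
    return seats * price_per_seat

def breakeven_tokens(seats: int, price_per_seat: float,
                     price_per_million: float) -> float:
    """Usage (millions of tokens/month) where the two models cost the same."""
    return monthly_cost_license(seats, price_per_seat) / price_per_million

# Made-up numbers: 500 seats at $20/seat vs $2 per million tokens.
print(breakeven_tokens(500, 20.0, 2.0))  # 5000.0 million tokens/month
```

Below the break-even usage level, consumption pricing is cheaper; above it, the license model wins. Actual telemetry supplies the real usage figures that make this comparison meaningful.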
2.8.3 Planning for future investments
- Acquisition Readiness: GSA applies standardized acquisition strategies, such as the OneGov AI agreements, to streamline procurement of commercial AI products and services across government. These vehicles allow for rapid scaling of proven solutions and reduce duplication of effort.
- Accelerated Authorization: Through the FedRAMP 20x initiative, GSA works to expedite the security review of generative AI platforms, enabling faster deployment of market-leading technologies while maintaining rigorous safeguards.
- Data-Driven Budget Forecasting: Insights from the AI inventory, telemetry, and evaluation processes feed into multi-year budget planning to anticipate resource needs for emerging AI opportunities and government-wide shared services.
By combining comprehensive tracking, cost transparency, and forward-looking acquisition planning, GSA ensures that AI investments are strategically prioritized, fiscally responsible, and well positioned to deliver scalable benefits across the agency and the broader federal enterprise.
Artificial intelligence compliance plan
1. Strengthening AI governance
General
Describe any planned or current efforts within your agency to update any existing internal AI principles, guidelines, or policy to ensure consistency with M-25-21 and M-25-22.
By December 29, 2025, GSA plans to release 2185.2 CIO, Accelerating Responsible Use of Artificial Intelligence at GSA. This directive will be based on the mandates and guidance set forth in OMB M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust; OMB M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government; and other Federal mandates. GSA’s directive will outline policies for the responsible, efficient, and accelerated adoption of AI technologies at GSA. It will address key issues related to AI adoption within the GSA network, including establishing policies for controlled access and responsible, efficient AI use, as well as outlining procedures for the assessment, procurement, use, monitoring, and governance of AI systems. A critical focus will be ensuring compliance with existing security, privacy, ethics, and legal frameworks, while also accelerating Federal AI use in line with directives for innovation, governance, and public trust. The directive will also aim to remove barriers to AI adoption, empower AI leaders, and safeguard individual rights and liberties, especially concerning high-impact AI. Furthermore, it will address risk management through regular assessments, prioritize appropriate AI uses to improve agency missions, ensure adequate data infrastructure and curation, assess and plan for AI workforce needs, and promote interagency coordination and efficient AI acquisition practices.
GSA currently has in place 2185.1A CIO, Use of Artificial Intelligence (AI) at GSA, which established governing policies for the controlled access and responsible use of AI technologies and platforms within GSA. This policy replaced CIO IL-23-01, which focused on generative AI, and expanded its scope to cover broader AI governance. The 2185.1A CIO directive refines and updates the previous directive to reflect evolving guidance and to accelerate the responsible adoption of AI within GSA, with a particular emphasis on classifying and managing different levels of AI impact.
GSA will conduct annual reviews and assessments of 2185.2 CIO to ensure the mandates set forth in OMB M-25-21 and M-25-22 are maintained as AI capabilities continue to evolve. The Chief Artificial Intelligence Officer (CAIO) will lead these reviews with the support of the Evidence-Based Data Governance Executive (EDGE) Board and the AI Oversight Committee. Updates to 2185.2 CIO and other relevant policies will be released as needed and are not restricted to an annual cycle.
AI Governance Bodies
Identify the offices that are represented on your agency’s AI governance body.
GSA’s AI governance structure is composed of two bodies: the EDGE Board and the AI Oversight Committee.
The EDGE Board serves as a decisional body that oversees and coordinates GSA’s activities related to AI and data governance. This includes setting the overall vision and goals for AI development and use within the agency, emphasizing risk management, ethical guidelines, compliance with legal standards, and alignment with GSA’s mission and values, particularly Executive Order 14179 and OMB Memoranda M-25-21 and M-25-22. The EDGE Board also sets, updates, and funds GSA’s data priorities, making sure the agency fully uses data as a key asset and enforces data governance laws, policies, and regulations as mandated by the Foundations for Evidence-Based Policymaking Act.
The EDGE Board, which is chaired by GSA’s Deputy Administrator and co-chaired by the Chief Data Officer (CDO)/CAIO, reports to the GSA Administrator. The Board comprises executive leadership from GSA’s Services and Staff Offices, as follows:
- Chief Administrative Services Officer
- Chief Acquisition Officer
- Office of Civil Rights (OCR) Associate Administrator
- Chief Financial Officer
- Chief Information Officer
- Chief Information Security Officer
- Chief Privacy Officer
- Chief Technology Officer
- Chief Human Capital Officer (CHCO)
- Evaluation Officer
- Federal Acquisition Service (FAS) Commissioner or Deputy Commissioner
- General Counsel
- Office of Government-wide Policy (OGP) Associate Administrator
- Office of Mission Assurance Associate Administrator
- Office of Small and Disadvantaged Business Utilization (OSDBU) Associate Administrator
- Office of Strategic Communication (OSC) Associate Administrator
- Performance Improvement Officer
- Public Buildings Service (PBS) Deputy Commissioner
- Statistical Official
- Technology Transformation Services (TTS) Director
The EDGE Board oversees the AI Oversight Committee, a cross-functional working group established to facilitate the effective integration and utilization of AI technologies across the organization. Its purpose is to ensure that AI initiatives align with organizational goals, promote accessibility, enhance content quality, leverage research and data effectively, provide adequate training, optimize design and user experience, and are supported by robust technology and infrastructure. The Committee identifies opportunities for AI integration, develops public AI strategies, and establishes guidelines for ethical and responsible AI use, including evaluating AI technologies, reusing existing AI resources, investing in American-developed AI products, and fostering AI talent within GSA. It also provides AI training, collaborates with stakeholders, sets expectations for workforce AI use, monitors AI project implementation and impact, and ensures compliance with relevant laws and regulations, reporting progress regularly to GSA leadership.
The AI Oversight Committee comprises representatives from key GSA offices, providing expertise and ensuring comprehensive consideration of AI integration across various aspects of GSA’s operations. These offices include:
- Office of GSA IT
- FAS
- OCR
- OGP
- Office of Human Resources Management (OHRM)
- OSDBU
- OSC
- Office of the Chief Financial Officer (OCFO)
- Office of the General Counsel (OGC)
- PBS
Describe how, if at all, your agency’s AI governance body plans to consult with external experts as appropriate and consistent with applicable law.
GSA has collaborated, and will continue to collaborate, with external experts, including academic institutions, State governments, and other Federal agencies, to develop training programs and benchmark its updated AI policies against best practices in the field. Members of the EDGE Board and AI Oversight Committee regularly engage with interagency groups and councils, including the CDO Council, the CAIO Council, the CHCO Council, the U.S. Office of Personnel Management, the AI Talent Task Force, and the three interagency working groups coordinated by the CAIO Council on generative AI, AI-related acquisitions, and AI risk management.
External consultation will continue to play a critical role in both governance and safety efforts. The EDGE Board plans to consult with external experts as appropriate and consistent with applicable law. This consultation may include input from Federally Funded Research and Development Centers, academic institutions, think tanks, and industry experts to stay informed about AI’s latest trends, risks, and best practices. Civil society organizations, labor unions, and similar groups may also be consulted to ensure that AI systems meet public interest standards and consider workforce impacts.
AI Use Case Inventories
Describe your agency’s process for soliciting and collecting AI use cases across all sub-agencies, components, or bureaus for the inventory. In particular, address how your agency plans to ensure your inventory is comprehensive, complete, and encompasses updates to existing use cases.
GSA’s process for soliciting and collecting AI use cases across the organization is led by the CAIO. The CAIO is responsible for ensuring that the AI use case inventory is comprehensive and up to date, with the support of the AI Oversight Committee, various program offices, and the Office of the Chief Information Officer (OCIO).
The process begins with the issuance of an annual request to all offices, requiring them to submit information on current and planned AI systems. Each office must identify all AI applications, regardless of the size or scope of the system. The CAIO provides and maintains standardized guidelines and templates for reporting in compliance with OMB standards. This centralized submission process is used to collect and manage AI use cases, allowing offices to submit their entries in a consistent and structured manner. All offices participate in this process and regularly update their entries to reflect any changes, such as new AI capabilities, updates to existing systems, or the decommissioning of older technologies.
The CAIO oversees periodic reviews and audits to maintain the inventory’s accuracy and completeness. The CAIO also coordinates within the OCIO so that existing processes, such as Authorizations to Operate, Federal Information Technology Acquisition Reform Act reviews, and new software requests, can identify AI instances across the enterprise. Together, these CAIO-led processes keep GSA’s AI inventory accurate and complete, fully reflecting the agency’s ongoing use of AI technology and supporting the agency’s governance and decision-making regarding AI deployment.
Reporting on AI Use Cases Not Subject to Inventory
Describe your agency’s process for soliciting and collecting AI use cases that meet the criteria for exclusion from being individually inventoried.
GSA has not identified any AI use cases that are not subject to inventory.
2. Advancing responsible AI innovation
Removing Barriers to the Responsible Use of AI
Describe any barriers to the responsible use of AI that your agency has identified, as well as any steps your agency has taken (or plans to take) to mitigate or remove these identified barriers.
GSA has identified several potential barriers to the responsible use of AI, including the procurement of AI solutions, access to high-quality data products with scalable infrastructure, and access to AI models and libraries. To address these barriers, GSA has established an enterprise data platform, the Enterprise Data Solution (EDS), which includes an enterprise data catalog, scalable compute infrastructure, analytical tooling, and AI/ML systems and services. This platform allows programs to store, curate, and productize their data holdings for analytical purposes, and to disseminate the resulting information products via hosted services. EDS also provides sandbox capabilities where AI tools can be safely tested and rapid prototyping can occur.
GSA supports several pilot projects that assess the capabilities and viability of AI products for specific use cases, including the ease with which AI infrastructure and products may be set up, and how AI tools and services can be leveraged to improve mission outcomes for agencies. These pilot projects are designed to help GSA better understand, manage, and provide guidance for internal and other agency use regarding AI technologies for future implementation.
GSA has leveraged sandboxes to test AI capabilities in areas such as AI chatbots, IT security enhancements, custom application development, and general productivity assessments, allowing staff to evaluate AI’s potential while ensuring security and compliance.
To support the availability of AI tooling and infrastructure, the FedRAMP program has established a framework for prioritizing emerging technologies (ETs) for FedRAMP authorization, which covers technologies listed in the Office of Science and Technology Policy’s Critical and Emerging Technologies. This framework enables routine and consistent prioritization of the most critical cloud-relevant ETs needed for use by Federal agencies. The prioritization governs how FedRAMP sequences its own work and review processes; it does not address how sponsoring agencies manage their own internal priorities.
Identify whether your agency has developed (or is in the process of developing) internal guidance for the use of generative AI.
GSA has developed internal guidance for the use of generative AI and has made it available to employees on an internal website. This guidance includes the safeguards and oversight mechanisms necessary for responsible use without posing undue risk. The EDGE Board and AI Oversight Committee provide oversight by reviewing and dispositioning AI use cases, ensuring compliance with ethical standards, data privacy, and security protocols. The guidance also requires that generative AI tools be used under controlled conditions set forth by the guidance and standards established in OMB M-25-21 and M-25-22 and enforced and overseen by the AI Oversight Committee. GSA is evaluating continuous monitoring and evaluation processes and tools for generative AI.
AI Talent
Describe any planned or in-progress initiatives from your agency to increase AI talent. In particular, reference any hiring authorities that your agency is leveraging, describe any AI focused teams that your agency is establishing or expanding, and identify the skillsets or skill levels that your agency is looking to attract. If your agency has designated an AI Talent Lead, identify which office they are assigned to.
GSA is taking a multi-pronged approach to strengthen its AI workforce and infrastructure. Through targeted hiring strategies, the formation of specialized teams, and the development of centralized platforms, GSA is positioning itself as a leader in responsible AI adoption across the Federal Government. These efforts are designed to attract top-tier talent, ensure ethical and secure AI deployment, and support other agencies in scaling their AI capabilities.
AI Talent Recruitment and Hiring Authorities: To meet the growing demand for AI expertise, in FY24 GSA launched the AI Talent Surge, a strategic initiative aimed at rapidly expanding its AI workforce. The agency utilized flexible hiring authorities such as Direct Hire Authority, which allowed for expedited recruitment of highly qualified candidates, and the Pathways Programs, which targeted students and recent graduates for entry-level roles.
AI-Focused Teams and Offices: GSA has established several key teams to guide its AI strategy and implementation. At the helm is the CAIO, who also serves as the agency’s CDO. The EDGE Board, chaired by the Deputy Administrator and co-chaired by the CDO/CAIO, ensures that executive-level oversight is in place for all major AI initiatives. The AI Oversight Committee, a technically focused group, is responsible for conducting risk assessments, ensuring compliance with ethical standards, and evaluating the safety of AI applications. GSA’s TTS and OCIO are also expanding their AI-related functions, supporting both internal innovation and cross-agency collaboration.
Infrastructure and Platforms Supporting AI Talent: GSA’s launch of USAi.Gov marks a significant step in providing Federal agencies the tools and infrastructure needed to experiment with and adopt generative AI technologies. The platform offers capabilities such as chatbot development, document summarization, and code generation, while also supporting model evaluation for bias and performance. USAi.Gov includes dashboards for workforce upskilling and analytics, helping agencies monitor and improve their AI readiness. This initiative is part of GSA’s broader role as a Federal AI enabler, offering secure, scalable, and shared services that facilitate responsible AI deployment across Government.
If applicable, describe your agency’s plans to provide any resources or training to develop AI talent internally and increase AI training opportunities for Federal employees.
GSA has made AI-related training available through online learning platforms to develop AI talent internally. The agency supports the AI Community of Excellence (CoE), which serves as a collaborative space for sharing knowledge and best practices, and is leading the AI Talent Surge effort to attract and retain skilled professionals to advance AI capabilities across the agency.
GSA also leverages sandboxes to test AI capabilities, including IT security enhancements, custom application development, and general productivity assessments, allowing staff to evaluate AI’s potential while ensuring security and compliance. GSA is committed to developing AI talent internally and increasing AI training opportunities for Federal employees. Role-based AI training tracks are accessible through online learning platforms, providing employees at various levels the opportunity to gain relevant skills. These tracks include foundational courses for employees, intermediate training for technical roles, and advanced courses for AI practitioners focusing on development, deployment, and ethical considerations.
Additionally, GSA leads a Government-wide initiative, called the AI Training Series, through its CoE and the Artificial Intelligence Community of Practice (AI CoP). It is designed to equip Federal employees with the knowledge and skills needed to responsibly and effectively engage with AI technologies in their roles. The program supports compliance with the AI Training Act and relevant Executive Orders, offering structured learning across three specialized tracks: Technical Track, Acquisition Track, and Leadership and Policy Track. All sessions are designed to be accessible to Federal employees regardless of technical background. Recordings and materials are made available through platforms like USA Learning and the Federal Acquisition Institute. Through this initiative, GSA plays a central role in modernizing Federal workforce capabilities, fostering cross-agency collaboration, and promoting responsible AI adoption across Government.
AI Sharing and Collaboration
Describe your agency’s process for ensuring that custom-developed AI code—including models and model weights—for AI applications in active use is shared consistent with Section 2.b of M-25-21.
All custom-developed AI code, including models and model weights for AI applications in active use, is shared in compliance with OMB M-25-21, Section 2.b (Sharing of Agency Data and AI Assets) by adhering to established processes for transparency and open access. This includes reviewing AI use case applications through the AI Oversight Committee and other relevant offices to ensure that code and models meet data security, privacy, and ethical standards before being shared. This content is disseminated via sharing platforms such as open.gsa.gov and data.gov.
Elaborate on your agency’s efforts to encourage or incentivize the sharing of code, models, and data with the public. Include a description of the relevant offices that are responsible for coordinating this work.
To advance the responsible and effective use of AI across the Federal Government, GSA has launched a suite of initiatives designed to foster collaboration, streamline adoption, and ensure ethical governance. These efforts reflect GSA’s commitment to building a unified, secure, and scalable AI ecosystem that supports agency missions while promoting transparency and innovation. From strategic frameworks and interagency communities to shared platforms and internal tools, GSA is positioning itself as a central enabler of AI transformation in the public sector.
AI Strategy and Governance: GSA has established a robust framework to guide Federal agencies in the ethical and effective use of artificial intelligence. This includes developing strategic use cases aligned with agency missions, offering governance models to ensure compliance with Federal standards, and providing procurement guidance to mitigate risks. Through these efforts, GSA fosters a culture of responsible AI adoption while encouraging interagency collaboration and knowledge sharing.
AI CoP: The AI CoP serves as a collaborative hub for Federal employees, contractors, and academic partners to exchange knowledge and best practices in AI. Through working groups, shared resources, and leadership from an executive steering committee, the CoP supports ongoing learning and cross-agency engagement. It complements GSA’s governance efforts by cultivating a community that drives responsible AI adoption from the ground up.
USAi.Gov Platform: USAi.Gov is a centralized platform launched by GSA to streamline AI adoption across Federal agencies. It offers a secure environment for evaluating diverse AI tools, including chatbots and summarization models, from leading providers like OpenAI and Google. Agencies can test solutions before procurement, access performance dashboards, and ensure alignment with Federal AI policies. This shared services model reduces barriers to entry and promotes scalable, trustworthy AI deployment.
GSA’s Use of USAi: GSA has developed an AI tool to enhance productivity and safety in Federal workflows. Featuring a chatbot, API integrations, and an admin console, it enables secure and compliant AI use tailored to agency needs. Built with staff input and privacy in mind, GSA’s AI tool is now being considered for broader Federal deployment as a shared service, supporting tenant-based models and continuous improvement through user feedback.
To support the Federal Government’s growing interest in AI, GSA is actively organizing and participating in a range of events that promote collaboration, innovation, and responsible AI adoption. These gatherings—ranging from monthly community meetings and national expos to policy symposia and hackathons—offer valuable opportunities for agencies, industry leaders, and academic partners to share knowledge, explore ETs, and shape the future of AI in Government.
AI CoP Monthly Meetings: Held virtually on the second Thursday of each month, these meetings provide a space for Federal stakeholders to engage with emerging AI topics and showcase agency innovations. With guest speakers and collaborative sessions, the events reinforce the CoP’s mission to foster responsible AI use and interagency learning.
AI & Big Data Expo North America 2025: Held at the Santa Clara Convention Center, the AI & Big Data Expo North America is a major industry event where GSA participates as a partner organization. With over 250 speakers and 200 exhibitors, the expo covers cutting-edge topics such as responsible AI, cybersecurity, data monetization, and AIOps. The event attracts a diverse audience of technologists, policymakers, and innovators, offering GSA members complimentary passes and opportunities to engage with leading AI solutions and strategies.
Federal AI Hackathon (July 31, 2024): The Federal AI Hackathon in 2024 challenged participants in Atlanta, New York, and Washington, DC, to reimagine Federal websites for AI optimization. Teams leveraged large language models and retrieval-augmented generation to build intelligent search engines and user interfaces that enhance transparency and efficiency. Winning solutions demonstrated innovative approaches to improving digital Government services, showcasing the potential of AI to transform public engagement and accessibility.
Harmonization of Artificial Intelligence Requirements
Explain any steps your agency has taken to document and share best practices regarding AI governance, innovation, or risk management. Identify how these resources are shared and maintained across the agency.
GSA’s approach to AI is grounded in a commitment to transparency and institutional learning. To support this, the agency has prioritized the development of mechanisms that capture insights from AI initiatives and make them accessible across teams and stakeholders. These efforts reflect a broader strategy to ensure AI implementations are not only effective but also informed by evidence, experience, and evolving standards.
AI Use Case Inventory: To ensure transparency and continuous improvement in its AI initiatives, GSA has developed robust mechanisms for documenting and disseminating best practices. Central to this effort is the AI Use Case Inventory, a comprehensive catalog of AI applications deployed across the agency. Maintained by the CAIO and the AI Oversight Committee, this inventory serves as a living resource that tracks the life cycle of AI projects—from ideation to implementation. It enables GSA to monitor compliance, assess performance, and identify opportunities for replication or refinement. By systematically documenting each use case, GSA fosters a culture of learning and accountability, ensuring that successful strategies are shared and that risks are proactively managed.
AI Guide for Government: Complementing this inventory is the AI Guide for Government, a resource developed by GSA’s AI CoE. This guide is tailored for Federal decision-makers and practitioners, offering practical frameworks, case studies, and implementation strategies for responsible AI adoption. It distills complex technical and ethical considerations into actionable insights, making it easier for agencies to navigate the challenges of AI deployment. The guide is regularly updated to reflect emerging trends, regulatory changes, and lessons learned from across the Federal landscape, reinforcing GSA’s commitment to knowledge sharing and continuous improvement.
GSA has responded to the need for infrastructure, collaboration, and sustained support by building a network of resources and communities that enable safe experimentation, scalable deployment, and cross-functional engagement. These foundational elements help ensure that AI efforts are aligned with agency goals while remaining adaptable to technological and regulatory change.
AI CoE: GSA has established a multi-layered infrastructure to support the sharing and maintenance of AI resources across the agency. At the heart of this ecosystem is the AI CoE, which provides strategic guidance, technical support, and scalable infrastructure to Federal agencies pursuing AI initiatives. The CoE offers sandbox environments, analytical tooling, and reusable components that enable safe experimentation and rapid prototyping. By centralizing these resources, GSA reduces duplication of effort and accelerates innovation across the Federal enterprise.
AI CoP: In addition to providing technical support, GSA fosters collaboration through the AI CoP. As noted earlier, this network is open to all Government employees and mission-supporting contractors with .gov or .mil email addresses. It serves as a platform for peer learning, cross-agency dialogue, and resource exchange. Members participate in webinars, workshops, and working groups that address key topics such as ethical AI, data governance, and risk mitigation. The CoP also maintains a shared knowledge repository, ensuring that insights and tools are accessible to all participants.
EDS: Supporting these efforts is the EDS, a secure and scalable platform managed by GSA’s Office of the Chief Data Officer. EDS provides the technical foundation for AI experimentation, offering sandbox environments, data governance tools, and infrastructure for deploying AI models. It ensures data used in AI projects is properly managed, protected, and aligned with Federal standards.
Together, these resources create a cohesive framework for innovation, governance, and risk management, enabling GSA to lead by example in the responsible use of artificial intelligence.
3. Managing risks from the use of AI
Determining Which Artificial Intelligence Is Presumed to Be Safety-Impacting or Rights-Impacting
Explain the process by which your agency determines which AI use cases are high-impact. In particular, describe how your agency is reviewing or planning to review each current and planned use of AI to assess whether it matches the definition of high-impact AI, as defined in Section 4.a of M-25-21. Identify whether your agency has created additional criteria for when an AI use is high-impact and describe such supplementary criteria.
GSA handles high-impact AI use cases with additional requirements due to their potential risks to individuals or businesses, establishing transparent governance and compliance processes. The CAIO is responsible for measuring, monitoring, and evaluating the ongoing performance and effectiveness of these applications, and centrally tracking them. System owners and their designees are responsible for enacting these minimum requirements.
Minimum Requirements for High-Impact AI (before and during employment):
- Before an AI system is employed, several minimum requirements must be met. These include completing an AI Impact Statement, submitting an AI system test plan that demonstrates real-world context testing and contestability, and undergoing an independent evaluation of the AI system by the CAIO (or his or her designee).
- While the AI system is in use, ongoing monitoring is required, with established thresholds for periodic human review. Emergent risks identified through routine testing, continuous monitoring protocols, or third-party findings must be mitigated. All system practitioners must complete requisite AI training requirements, and human validation and intervention protocols should be in place to ensure that all output decisions made by AI systems are regularly evaluated. Additionally, public notice and plain language documentation regarding high-impact use cases must be provided through the public interface, in public disclosure statements, and in the AI use case inventory.
Additional Requirements for Higher-Impact AI (before and during employment):
- Before implementing an AI system, it is crucial to proactively identify and mitigate algorithmic discrimination or bias and assess and mitigate disparate impacts for protected classes. Direct user testing of system interactions should be conducted, and comments from the user community should be solicited, along with post-transaction customer feedback activities.
- While employing the AI, ongoing monitoring studies for AI-enabled discrimination must be conducted. Any negatively affected individuals should be notified, and fallback and escalation options for AI processes or outcomes should be provided. Where practicable, opt-out alternatives should also be offered.
Certain high-impact AI use cases are exempt from the outlined requirements, specifically when evaluating a potential vendor, commercial capability, or freely available AI capability not currently used in agency operations, solely for procurement or acquisition decisions; when evaluating an AI application because its provider is the target or potential target of a regulatory enforcement action; or for research and development purposes.
The CAIO may waive one or more stated requirements for specific covered AI applications, with conditions, if the requirements would increase overall risks to safety or rights, or create an unacceptable impediment to critical agency operations. Waiver appeals are submitted with written justifications to the CAIO and the EDGE Board, are centrally tracked, subject to publication, and reassessed annually.
The EDGE Board is responsible for establishing the enterprise’s AI risk tolerance, prioritization, and strategic approach to risk management, in alignment with GSA’s Enterprise Risk Management Policy. This encompasses defining likelihood and impact ranking criteria and thresholds, outlining factors for the use case risk rubric, and establishing risk management practices and processes for AI systems. The AI Oversight Committee then assesses use cases based on this guidance, and individual system owners are accountable for implementing the defined risk management processes.
If your agency has developed its own distinct criteria to guide a decision to waive one or more of the minimum risk management practices for a particular use case, describe the criteria.
GSA has not developed distinct criteria to guide the decision to waive minimum risk management practices.
For the issuance of waivers, the CAIO, in coordination with other relevant officials, may waive one or more of the required minimum practices for high-impact use cases involving a specific AI application or component, following a written determination. The waiver process involves submitting a formal request to the CAIO and the EDGE Board, which includes a risk assessment and review of the specific system and context. The waiver will be considered for approval should the assessment find that meeting the requirement would increase overall risks to safety or rights or impose an unacceptable barrier to critical agency operations. If the board and the CAIO approve, waivers will be reported to OMB within 30 days of issuance, tracked, and routinely reviewed to ensure compliance with agency standards. GSA maintains a certification process to confirm that AI systems granted waivers continue to meet risk management requirements, and the CAIO may revoke waivers if any noncompliance or unforeseen risks emerge. GSA has not issued a waiver at the time of this compliance plan’s publication.
Implementation of Risk Management Practices and Termination of Non-Compliant AI
Elaborate on the controls your agency has put in place to prevent non-compliant safety-impacting or rights-impacting AI from being deployed to the public. Describe your agency’s intended process to terminate, and effectuate that termination of, any non-compliant AI.
GSA has implemented several controls to prevent non-compliant, high-impact AI from being deployed to the public. Safeguards include the review of AI use cases by the AI Oversight Committee, confirming all AI systems meet ethical, legal, and technical standards before deployment. The AI Oversight Committee evaluates risks related to public safety, privacy, civil liberties, and potential biases, with specific attention to AI systems that may have significant impacts on rights or safety.
To prevent non-compliant, high-impact AI from being deployed to the public, GSA has established continuous monitoring protocols that track AI system interactions at the network level. GSA is developing a strategy to increase its capacity to monitor AI system behaviors and performance. Automated alerts and reporting systems are in place to detect deviations from compliance standards, triggering an immediate review by relevant oversight bodies.
If a high-impact AI system is found to be non-compliant after deployment, GSA has developed a defined process for termination. The EDGE Board, in collaboration with the OCIO, will issue a termination order. The termination process involves revoking system access, ceasing operations, and ensuring that data processed by the AI system is secured, maintained, or destroyed as required. An incident response team coordinates the shutdown and conducts a post-termination review to assess the impact and identify corrective actions. Additionally, GSA will communicate use-case terminations, the reasons for the action, and any steps being taken to prevent future issues.
Minimum Risk Management Practices
Identify how your agency plans to document and validate implementation of the minimum risk management practices. In addition, discuss how your agency assigns responsibility for the implementation and oversight of these requirements.
GSA policy mandates that AI use cases document and validate the implementation of minimum risk management practices as defined in OMB M-25-21 for AI systems through a comprehensive framework managed by the EDGE Board. This process includes detailed documentation at each stage of the AI life cycle, from development and testing to deployment and monitoring. AI use cases are required to maintain thorough records of risk assessments, compliance checklists, data usage audits, real-world testing, and ethical impact evaluations. These documents will be stored in a centralized repository that is accessible to the AI Oversight Committee and the EDGE Board.
GSA will conduct periodic audits and performance reviews of AI systems to validate that the minimum risk management practices are being followed. These audits assess compliance with Federal guidelines, including data privacy, bias mitigation, and ethical considerations. Additionally, GSA employs automated monitoring tools to track ongoing compliance, and discrepancies trigger immediate reviews by oversight teams.
Responsibility for implementing and overseeing these risk management practices is clearly assigned to multiple levels within the organization. The EDGE Board provides strategic oversight and policy guidance, and supports the AI Oversight Committee in ensuring all AI initiatives comply with legal and ethical standards. The OCIO is responsible for the technical implementation and monitoring of AI systems. Individual project managers and teams are tasked with day-to-day compliance, including adhering to risk management protocols, conducting regular risk assessments, and reporting any issues to the EDGE Board.
U.S. General Services Administration