Removing Barriers to the Responsible Use of AI
Describe any barriers to the responsible use of AI that your agency has identified, as well as any steps your agency has taken (or plans to take) to mitigate or remove these identified barriers.
GSA has identified several potential barriers to the responsible use of AI, including the procurement of AI solutions and access to high-quality data products, scalable infrastructure, and AI models and libraries. To address these barriers, GSA has established an enterprise data platform, the Enterprise Data Solution (EDS), which includes an enterprise data catalog, scalable compute infrastructure, analytical tooling, and AI/ML systems and services. The platform allows programs to store, curate, and productize their data holdings for analytical purposes, and to disseminate the resulting information products via hosted services. EDS also provides sandbox capabilities where AI tools can be safely tested and rapid prototyping can occur.
GSA supports several pilot projects that assess the capabilities and viability of AI products for specific use cases, including the ease with which AI infrastructure and products may be set up, and how AI tools and services can be leveraged to improve mission outcomes for agencies. These pilot projects are designed to help GSA better understand, manage, and provide guidance for internal and other agency use regarding AI technologies for future implementation.
GSA has leveraged sandboxes to test AI capabilities, including AI chatbots, IT security enhancements, custom application development, and general productivity assessments, allowing staff to evaluate AI’s potential while ensuring security and compliance.
To support the availability of AI tooling and infrastructure, the FedRAMP program has established a framework for prioritizing emerging technologies (ETs) for FedRAMP authorization, covering technologies listed in the Office of Science and Technology Policy’s Critical and Emerging Technologies List. This framework enables routine and consistent prioritization of the most critical cloud-relevant ETs needed by Federal agencies. The prioritization governs only how FedRAMP sequences its own work and review processes; it does not address how sponsoring agencies manage their internal priorities.
Identify whether your agency has developed (or is in the process of developing) internal guidance for the use of generative AI.
GSA has developed internal guidance for the use of generative AI and has made it available to employees on an internal website. The guidance describes the safeguards and oversight mechanisms necessary for responsible use without undue risk. The EDGE Board and AI Safety Team provide oversight by reviewing and dispositioning AI use cases, ensuring compliance with ethical standards, data privacy, and security protocols. The guidance also requires that generative AI tools be used under controlled conditions consistent with the standards established in OMB M-25-21 and M-25-22, with enforcement and oversight by the AI Safety Team. GSA is evaluating continuous monitoring and evaluation processes and tools for generative AI.
AI Talent
Describe any planned or in-progress initiatives from your agency to increase AI talent. In particular, reference any hiring authorities that your agency is leveraging, describe any AI focused teams that your agency is establishing or expanding, and identify the skillsets or skill levels that your agency is looking to attract. If your agency has designated an AI Talent Lead, identify which office they are assigned to.
GSA is taking a multi-pronged approach to strengthen its AI workforce and infrastructure. Through targeted hiring strategies, the formation of specialized teams, and the development of centralized platforms, GSA is positioning itself as a leader in responsible AI adoption across the Federal Government. These efforts are designed to attract top-tier talent, ensure ethical and secure AI deployment, and support other agencies in scaling their AI capabilities.
- AI Talent Recruitment and Hiring Authorities: To meet the growing demand for AI expertise, in FY24 GSA launched the AI Talent Surge, a strategic initiative aimed at rapidly expanding its AI workforce. The agency utilized flexible hiring authorities such as Direct Hire Authority, which allowed for expedited recruitment of highly qualified candidates, and the Pathways Programs, which targeted students and recent graduates for entry-level roles.
- AI-Focused Teams and Offices: GSA has established several key teams to guide its AI strategy and implementation. At the helm is the CAIO, who also serves as the agency’s CDO. The EDGE Board, co-chaired by the CDO/CAIO and the Deputy Administrator, provides executive-level oversight of all major AI initiatives. The AI Safety Team, a technically focused group, is responsible for conducting risk assessments, ensuring compliance with ethical standards, and evaluating the safety of AI applications. GSA’s TTS and OCIO are also expanding their AI-related functions, supporting both internal innovation and cross-agency collaboration.
- Infrastructure and Platforms Supporting AI Talent: GSA’s launch of USAi.Gov marks a significant step in providing Federal agencies the tools and infrastructure needed to experiment with and adopt generative AI technologies. The platform offers capabilities such as chatbot development, document summarization, and code generation, while also supporting model evaluation for bias and performance. USAi.Gov includes dashboards for workforce upskilling and analytics, helping agencies monitor and improve their AI readiness. This initiative is part of GSA’s broader role as a Federal AI enabler, offering secure, scalable, and shared services that facilitate responsible AI deployment across Government.
If applicable, describe your agency’s plans to provide any resources or training to develop AI talent internally and increase AI training opportunities for Federal employees.
GSA has made AI-related training available through online learning platforms to develop AI talent internally. The agency supports the AI Center of Excellence (CoE), which serves as a collaborative space for sharing knowledge and best practices, and is leading the AI Talent Surge effort to attract and retain the skilled professionals needed to advance AI capabilities across the agency.
GSA also leverages sandboxes to test AI capabilities, including IT security enhancements, custom application development, and general productivity assessments, allowing staff to evaluate AI’s potential while ensuring security and compliance. GSA is committed to developing AI talent internally and increasing AI training opportunities for Federal employees. Role-based AI training tracks are accessible through online learning platforms, providing employees at various levels the opportunity to gain relevant skills. These tracks include foundational courses for employees, intermediate training for technical roles, and advanced courses for AI practitioners focusing on development, deployment, and ethical considerations.
Additionally, GSA leads a Government-wide initiative, the AI Training Series, through its CoE and the Artificial Intelligence Community of Practice (AI CoP). The series is designed to equip Federal employees with the knowledge and skills needed to responsibly and effectively engage with AI technologies in their roles. The program supports compliance with the AI Training Act and relevant Executive Orders, offering structured learning across three specialized tracks: Technical Track, Acquisition Track, and Leadership and Policy Track. All sessions are designed to be accessible to Federal employees regardless of technical background, and recordings and materials are made available through platforms such as USA Learning and the Federal Acquisition Institute. Through this initiative, GSA plays a central role in modernizing Federal workforce capabilities, fostering cross-agency collaboration, and promoting responsible AI adoption across Government.
AI Sharing and Collaboration
Describe your agency’s process for ensuring that custom-developed AI code—including models and model weights—for AI applications in active use is shared consistent with Section 2.b of M-25-21.
All custom-developed AI code, including models and model weights for AI applications in active use, is shared consistent with Section 2.b of OMB M-25-21 (Sharing of Agency Data and AI Assets) through established processes for transparency and open access. AI use case applications are reviewed by the AI Safety Team and other relevant offices to ensure that code and models meet data security, privacy, and ethical standards before being shared, and the content is then disseminated via sharing platforms such as open.gsa.gov and data.gov.
Elaborate on your agency’s efforts to encourage or incentivize the sharing of code, models, and data with the public. Include a description of the relevant offices that are responsible for coordinating this work.
To advance the responsible and effective use of AI across the Federal Government, GSA has launched a suite of initiatives designed to foster collaboration, streamline adoption, and ensure ethical governance. These efforts reflect GSA’s commitment to building a unified, secure, and scalable AI ecosystem that supports agency missions while promoting transparency and innovation. From strategic frameworks and interagency communities to shared platforms and internal tools, GSA is positioning itself as a central enabler of AI transformation in the public sector.
- AI Strategy and Governance: GSA has established a robust framework to guide Federal agencies in the ethical and effective use of artificial intelligence. This includes developing strategic use cases aligned with agency missions, offering governance models to ensure compliance with Federal standards, and providing procurement guidance to mitigate risks. Through these efforts, GSA fosters a culture of responsible AI adoption while encouraging interagency collaboration and knowledge sharing.
- AI CoP: The AI CoP serves as a collaborative hub for Federal employees, contractors, and academic partners to exchange knowledge and best practices in AI. Through working groups, shared resources, and leadership from an executive steering committee, the CoP supports ongoing learning and cross-agency engagement. It complements GSA’s governance efforts by cultivating a community that drives responsible AI adoption from the ground up.
- USAi.Gov Platform: USAi.Gov is a centralized platform launched by GSA to streamline AI adoption across Federal agencies. It offers a secure environment for evaluating diverse AI tools, including chatbots and summarization models, from leading providers like OpenAI and Google. Agencies can test solutions before procurement, access performance dashboards, and ensure alignment with Federal AI policies. This shared services model reduces barriers to entry and promotes scalable, trustworthy AI deployment.
- GSA’s Use of USAi: GSA has developed an AI tool to enhance productivity and safety in Federal workflows. Featuring a chatbot, API integrations, and an admin console, it enables secure and compliant AI use tailored to agency needs. Built with staff input and privacy in mind, GSA’s AI tool is now being considered for broader Federal deployment as a shared service, supporting tenant-based models and continuous improvement through user feedback.
To support the Federal Government’s growing interest in AI, GSA is actively organizing and participating in a range of events that promote collaboration, innovation, and responsible AI adoption. These gatherings—ranging from monthly community meetings and national expos to policy symposia and hackathons—offer valuable opportunities for agencies, industry leaders, and academic partners to share knowledge, explore ETs, and shape the future of AI in Government.
- AI CoP Monthly Meetings: Held virtually on the second Thursday of each month, these meetings provide a space for Federal stakeholders to engage with emerging AI topics and showcase agency innovations. With guest speakers and collaborative sessions, the events reinforce the CoP’s mission to foster responsible AI use and interagency learning.
- AI & Big Data Expo North America 2025: Held at the Santa Clara Convention Center, the AI & Big Data Expo North America is a major industry event where GSA participates as a partner organization. With over 250 speakers and 200 exhibitors, the expo covers cutting-edge topics such as responsible AI, cybersecurity, data monetization, and AIOps. The event attracts a diverse audience of technologists, policymakers, and innovators, offering GSA members complimentary passes and opportunities to engage with leading AI solutions and strategies.
- Federal AI Hackathon (July 31, 2024): The hackathon challenged participants in Atlanta, New York, and Washington, DC, to reimagine Federal websites for AI optimization. Teams leveraged large language models and retrieval-augmented generation (RAG) to build intelligent search engines and user interfaces that enhance transparency and efficiency; an illustrative sketch of the RAG pattern follows this list. Winning solutions demonstrated innovative approaches to improving digital Government services, showcasing the potential of AI to transform public engagement and accessibility.
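For reference, the sketch below outlines the retrieval-augmented generation pattern the hackathon teams applied: retrieve the passages most relevant to a user’s question, then ground the language model’s answer in those passages. It is a minimal, illustrative example only; the corpus, function names, and bag-of-words scoring are hypothetical stand-ins for the embeddings, vector store, and hosted language model a production pipeline would use, and it does not represent code from the hackathon or any GSA system.

```python
# Minimal, illustrative retrieval-augmented generation (RAG) loop.
# All names and data are hypothetical; a production system would use
# embeddings, a vector database, and an LLM API in place of these stand-ins.
from collections import Counter
import math

# Hypothetical corpus standing in for indexed Federal web content.
DOCUMENTS = {
    "per-diem": "GSA sets per diem rates for lodging and meals during official travel.",
    "fleet": "GSA Fleet leases and manages vehicles for Federal agencies.",
    "schedules": "GSA Schedules offer pre-negotiated contracts for products and services.",
}

def bag_of_words(text: str) -> Counter:
    """Tokenize on whitespace and lowercase; a real system would use embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bag_of_words(query)
    ranked = sorted(DOCUMENTS.values(), key=lambda d: cosine(q, bag_of_words(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble the grounded prompt that would be sent to a language model."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    question = "How are per diem rates set for official travel?"
    # A real pipeline would pass this prompt to a hosted LLM for generation.
    print(build_prompt(question, retrieve(question)))
```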
Harmonization of Artificial Intelligence Requirements
Explain any steps your agency has taken to document and share best practices regarding AI governance, innovation, or risk management. Identify how these resources are shared and maintained across the agency.
GSA’s approach to AI is grounded in a commitment to transparency and institutional learning. To support this, the agency has prioritized the development of mechanisms that capture insights from AI initiatives and make them accessible across teams and stakeholders. These efforts reflect a broader strategy to ensure AI implementations are not only effective but also informed by evidence, experience, and evolving standards.
- AI Use Case Inventory: To ensure transparency and continuous improvement in its AI initiatives, GSA has developed robust mechanisms for documenting and disseminating best practices. Central to this effort is the AI Use Case Inventory, a comprehensive catalog of AI applications deployed across the agency. Maintained by the CAIO and the AI Safety Team, this inventory serves as a living resource that tracks the life cycle of AI projects from ideation to implementation (an illustrative sketch of such a record appears after this list). It enables GSA to monitor compliance, assess performance, and identify opportunities for replication or refinement. By systematically documenting each use case, GSA fosters a culture of learning and accountability, ensuring that successful strategies are shared and that risks are proactively managed.
- AI Guide for Government: Complementing this inventory is the AI Guide for Government, a resource developed by GSA’s AI CoE. This guide is tailored for Federal decision-makers and practitioners, offering practical frameworks, case studies, and implementation strategies for responsible AI adoption. It distills complex technical and ethical considerations into actionable insights, making it easier for agencies to navigate the challenges of AI deployment. The guide is regularly updated to reflect emerging trends, regulatory changes, and lessons learned from across the Federal landscape, reinforcing GSA’s commitment to knowledge sharing and continuous improvement.
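As an illustration of the kind of structured record an inventory like the one described above can maintain, the sketch below shows one possible shape for an entry. The field names and example values are hypothetical and do not reflect GSA’s actual inventory schema; they simply show how ownership, life-cycle stage, and risk review status can be captured in a consistent, machine-readable form.

```python
# Hypothetical sketch of a single AI use case inventory record.
# Field names and values are illustrative only, not GSA's actual schema.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCaseRecord:
    """One entry in a notional AI use case inventory."""
    use_case_name: str
    sponsoring_office: str
    purpose: str
    lifecycle_stage: str              # e.g., "ideation", "pilot", "production", "retired"
    uses_generative_ai: bool
    risk_review_completed: bool
    last_reviewed: date
    notes: list[str] = field(default_factory=list)

# Example record, for illustration only.
example = AIUseCaseRecord(
    use_case_name="Document summarization pilot",
    sponsoring_office="Office of the Chief Data Officer",
    purpose="Summarize lengthy policy documents for internal review.",
    lifecycle_stage="pilot",
    uses_generative_ai=True,
    risk_review_completed=True,
    last_reviewed=date(2025, 1, 15),
)
```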
GSA has responded to the need for infrastructure, collaboration, and sustained support by building a network of resources and communities that enable safe experimentation, scalable deployment, and cross-functional engagement. These foundational elements help ensure that AI efforts are aligned with agency goals while remaining adaptable to technological and regulatory change.
- AI CoE: GSA has established a multi-layered infrastructure to support the sharing and maintenance of AI resources across the agency. At the heart of this ecosystem is the AI CoE, which provides strategic guidance, technical support, and scalable infrastructure to Federal agencies pursuing AI initiatives. The CoE offers sandbox environments, analytical tooling, and reusable components that enable safe experimentation and rapid prototyping. By centralizing these resources, GSA reduces duplication of effort and accelerates innovation across the Federal enterprise.
- AI CoP: In addition to providing technical support, GSA fosters collaboration through the AI CoP. As noted earlier, this network is open to all Government employees and mission-supporting contractors with .gov or .mil email addresses. It serves as a platform for peer learning, cross-agency dialogue, and resource exchange. Members participate in webinars, workshops, and working groups that address key topics such as ethical AI, data governance, and risk mitigation. The CoP also maintains a shared knowledge repository, ensuring that insights and tools are accessible to all participants.
- EDS: Supporting these efforts is the EDS, a secure and scalable platform managed by GSA’s Office of the Chief Data Officer. EDS provides the technical foundation for AI experimentation, offering sandbox environments, data governance tools, and infrastructure for deploying AI models. It ensures data used in AI projects is properly managed, protected, and aligned with Federal standards.
Together, these resources create a cohesive framework for innovation, governance, and risk management, enabling GSA to lead by example in the responsible use of artificial intelligence.