Removing Barriers to the Responsible Use of AI
Describe any barriers to the responsible use of AI that your agency has identified, as well as any steps your agency has taken (or plans to take) to mitigate or remove these identified barriers.
GSA has identified several potential barriers to the responsible use of AI, including procuring AI solutions and obtaining access to high-quality data products, scalable infrastructure, and AI models and libraries. GSA has established an enterprise data platform (Enterprise Data Solution, or EDS), which includes an enterprise data catalog, scalable compute infrastructure, analytical tooling, and AI/ML systems and services. This platform allows programs to store, curate, and productize their data holdings for analytical purposes, as well as disseminate the resulting information products via hosted services. EDS also provides sandbox capabilities where AI tools can be safely tested and rapid prototyping can occur.
To support federal procurement of generative AI, and in furtherance of EO 14110, GSA published the Generative AI and Specialized Computing Infrastructure Acquisition Resource Guide. The guide helps the federal acquisition workforce at civilian agencies navigate the complexities of acquiring generative AI technologies in collaboration with relevant agency officials from other domains.
GSA supports several pilot projects that assess the capabilities and viability of AI products for specific use cases, including the ease with which AI infrastructure and products can be set up and how AI tools and services can be leveraged to improve mission outcomes for agencies. These pilot projects are designed to help GSA better understand and manage AI technologies and to provide guidance for future internal and interagency use.
GSA has leveraged sandboxes to test AI chatbots, IT security enhancements, custom application development, and general productivity use cases, allowing staff to evaluate AI’s potential while ensuring security and compliance.
To support the availability of AI tooling and infrastructure, the FedRAMP program has established a framework for prioritizing emerging technologies (ETs) for FedRAMP authorization, covering technologies on the Office of Science and Technology Policy’s Critical and Emerging Technologies List. This framework enables routine, consistent prioritization of the most critical cloud-relevant ETs needed by federal agencies. The prioritization governs only FedRAMP’s own work and review processes; it does not address how sponsoring agencies manage their internal priorities.
Identify whether your agency has developed (or is in the process of developing) internal guidance for the use of generative AI.
GSA has developed internal guidance for the use of generative AI and has made it available to employees on an internal website. This guidance includes the safeguards and oversight mechanisms necessary for responsible use without undue risk. The AI Governance Board and AI Safety Team provide oversight by reviewing and dispositioning AI use cases and ensuring compliance with ethical standards, data privacy, and security protocols. The directive also requires that generative AI tools be used under controlled conditions consistent with the guidance and standards established in M-24-10, as enforced and overseen by the AI Safety Team. GSA is evaluating continuous monitoring and evaluation processes and tools for generative AI.
AI Talent
Describe any planned or in-progress initiatives from your agency to increase AI talent. In particular, reference any hiring authorities that your agency is leveraging, describe any AI focused teams that your agency is establishing or expanding, and identify the skillsets or skill levels that your agency is looking to attract. If your agency has designated an AI Talent Lead, identify which office they are assigned to.
GSA has several initiatives in progress to increase AI talent within the agency. Through the AI Talent Surge, GSA is actively recruiting and hiring AI professionals by leveraging hiring authorities such as Direct Hire Authority (DHA) and the Pathways Programs. GSA is expanding AI-focused teams within the Technology Transformation Services (TTS) and the Office of the Chief Information Officer (OCIO), focusing on roles that require expertise in machine learning, data science, AI ethics, and cybersecurity. Additionally, GSA is seeking to attract talent with advanced skill sets in AI development, algorithmic fairness, and AI system integration. The designated AI Talent Lead is assigned to the Office of Human Resources Management (OHRM) to coordinate these efforts and ensure alignment with broader agency goals.
GSA is utilizing DHA when appropriate to fill positions in the following approved occupations and job series: 1560 Data Scientist, 1515 Operations Research Analyst, 2210 IT Specialist (Artificial Intelligence), 1550 Computer Scientist (Artificial Intelligence), 0854 Computer Engineer (Artificial Intelligence), and 0343 Management and Program Analyst, focusing on AI-related system design and machine learning development.
If applicable, describe your agency’s plans to provide any resources or training to develop AI talent internally and increase AI training opportunities for Federal employees.
GSA has made AI-related training available through online learning platforms to develop AI talent internally. The agency supports the AI Community of Excellence, which serves as a collaborative space for sharing knowledge and best practices, and is leading the AI Talent Surge effort to attract and retain skilled professionals to advance AI capabilities across the agency.
GSA AI policy (2185.1A CIO) allows employees to gain hands-on experience with both public and internal AI tools in controlled environments. Employees may use public AI tools for non-sensitive use cases, including professional development and training purposes. These applications are limited to individual use, with the specific goal of gaining familiarity with market offerings, and are most closely aligned with professional training activities.
GSA is committed to developing AI talent internally and increasing AI training opportunities for federal employees. Role-based AI training tracks are accessible through online learning platforms, giving employees at various levels the opportunity to gain relevant skills. These tracks include foundational courses for employees, intermediate training for technical roles, and advanced courses for AI practitioners focused on development, deployment, and ethical considerations. Additionally, the AI Community of Practice (AI CoP) fosters knowledge sharing and mentorship, including a three-track governmentwide training series covering acquisition, leadership and policy, and technical topics, while the AI Talent Surge effort ensures ongoing development of AI expertise across the agency.
AI Sharing and Collaboration
Describe your agency’s process for ensuring that custom-developed AI code—including models and model weights—for AI applications in active use is shared consistent with Section 4(d) of M-24-10.
All custom-developed AI code, including models and model weights for AI applications in active use, is shared in compliance with Section 4(d) of M-24-10 through established processes for transparency and open access. This includes review of AI applications by the AI Safety Team and other relevant offices to ensure that code and models meet data security, privacy, and ethical standards before being shared. This content is disseminated via sharing platforms such as open.gsa.gov and data.gov.
Elaborate on your agency’s efforts to encourage or incentivize the sharing of code, models, and data with the public. Include a description of the relevant offices that are responsible for coordinating this work.
GSA fosters a culture of collaboration through TTS and OCIO to encourage the sharing of code, models, and data with the public. These offices coordinate efforts to release code under open-source licenses, allowing for reuse and community contributions. GSA incentivizes public sharing by integrating it into performance evaluations for relevant teams and by promoting the benefits of transparency and innovation through collaboration.
GSA has promoted AI sharing and collaboration through various public-private partnerships such as hackathons and symposiums. In support of the President’s Management Agenda Workforce Priority Strategy Goal 3.1, GSA hosted the Federal AI Hackathon to foster collaboration and problem-solving across federal agencies and leading AI commercial partners, focusing on real-world challenges that could be addressed using AI. The event served as a platform for sharing best practices, driving innovation, and developing AI-powered solutions applicable to various government functions. Participants utilized open-source tools and commercial products, shared insights, and contributed to the broader AI community. The hackathon emphasized GSA’s commitment to responsible AI development by prioritizing ethical AI, transparency, and risk mitigation. Outcomes from the event, including code, models, and lessons learned, were shared across agencies.
Harmonization of Artificial Intelligence Requirements
Explain any steps your agency has taken to document and share best practices regarding AI governance, innovation, or risk management. Identify how these resources are shared and maintained across the agency.
GSA has taken several steps to document and share best practices regarding AI governance, innovation, and risk management. The agency’s AI Governance Board plays a central role in this effort by establishing guidelines, reviewing AI use cases, and ensuring adherence to ethical and responsible AI principles. The Chief AI Officer (CAIO), CIO, Administrator, and other senior GSA officials disseminate best practices in forums ranging from internal town halls to speaking engagements with federal agencies and partners. The CAIO has documented these practices in internal guidance materials, reports, and policy documents, which are made accessible to employees through GSA’s intranet and collaboration platforms.
To foster continuous learning, GSA leads the AI CoP, which serves as a hub for sharing knowledge, best practices, and lessons learned across the agency. This community hosts workshops and training sessions and maintains a knowledge repository that includes AI governance, innovation, and risk management resources. These resources are regularly updated to reflect evolving standards and are shared across teams to ensure all employees have access to the latest best practices for responsible AI use.