In February, DOGE tested the GSAi chatbot in a pilot with 150 users within GSA. The plan is to eventually deploy the product across the entire agency. The chatbot has been in development for several months, but new DOGE-affiliated agency leadership has greatly accelerated its deployment timeline.
Federal employees can now interact with GSAi through an interface similar to ChatGPT's. The default model is Anthropic's Claude 3.5 Haiku, but users can also choose Claude 3.5 Sonnet v2 or Meta's Llama 3.2, depending on the task. The chatbot can draft emails, create talking points, summarize text, and write code.
The internal memo about the product warns employees not to type or paste federal nonpublic information or personally identifiable information into the tool. It also instructs them on how to write an effective prompt, with examples of both ineffective and effective prompts.
The Treasury Department and the Department of Health and Human Services have both recently considered using a GSA chatbot internally and in their public-facing contact centers, though it is not known whether that chatbot would be GSAi. Elsewhere in the government, the United States Army is using a generative AI tool called CamoGPT to identify and remove references to diversity, equity, inclusion, and accessibility from training materials.
What is the larger strategy here? Is the idea to give everyone AI and then use that to legitimize more layoffs? That wouldn't surprise me.