See how Minnesota uses ChatGPT for government translation and language access—plus a practical model to improve public digital services safely.

How Minnesota Uses ChatGPT for Public Translation
Most government translation breaks in the exact same place: volume.
A single form update turns into 10 languages, 10 vendor tickets, and 10 opportunities for something to drift out of date. Then a winter storm hits, a benefits rule changes, or an election deadline moves—and the people who most need timely information are the last to get it.
That’s why the Minnesota Enterprise Translation Office’s experiments with ChatGPT matter for the broader AI in Government & Public Sector conversation. It’s not “AI for AI’s sake.” It’s a practical response to a real service-delivery constraint: public agencies need faster, more consistent, more accessible translation—without exploding cost or staff workload.
Below is a field guide to what this kind of adoption can look like in U.S. public institutions, how to do it responsibly, and what leaders should measure if the goal is better digital services (and not just a flashy pilot).
Why AI translation is showing up in state government
AI translation is being adopted because public service has a speed problem, not a language problem. Agencies already know they must communicate across many languages. The hard part is keeping information current across channels—web pages, PDFs, notices, call center scripts, and SMS alerts.
In the U.S., language access requirements and expectations are rising at the same time agencies face:
- More digital content (every program has portals, dashboards, online notices)
- Shorter update cycles (policy and eligibility changes can happen quickly)
- Higher stakes (missed deadlines can mean missed benefits, missed court dates, or unsafe situations)
I’ve found that when teams say “translation is slow,” they usually mean workflow is slow. Human translators are essential, but they can’t be the only gear in the system when the content pipeline keeps accelerating.
Translation isn’t a single task—it’s a supply chain
Government translation touches multiple steps, each with failure modes:
- Content creation (often written for internal clarity, not public readability)
- Localization (terminology, program names, culturally appropriate phrasing)
- Review (legal/compliance, plain language, stakeholder sign-off)
- Publishing (CMS, PDFs, email templates, IVR/call scripts)
- Maintenance (updates, version control, archiving)
Generative AI can help at several points—especially drafts, consistency checks, and fast iteration—while humans keep authority over final meaning.
What “using ChatGPT for translation” can actually mean
When agencies say they’re using ChatGPT for translation, the most successful pattern is “AI draft + human verification,” not “AI replaces translators.” That’s the only approach I’d recommend for public-facing, high-impact content.
Here are realistic government use cases that fit an Enterprise Translation Office model.
Draft translations for high-volume, low-risk content
AI is well-suited for content that’s repetitive and time-sensitive:
- Website FAQs
- Appointment reminders
- General office instructions
- Non-legal program overviews
- “How to apply” walkthroughs
The benefit isn’t perfection. The benefit is speed to a usable first draft so professional linguists can spend their time on accuracy, nuance, and stakeholder alignment.
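To make the "draft first, verify always" pattern concrete, here is a minimal Python sketch using the OpenAI SDK. The model name, prompt wording, and default target language are illustrative assumptions, not a description of any agency's actual configuration.

```python
from openai import OpenAI

# A minimal sketch of the "AI draft + human verification" pattern using the
# OpenAI Python SDK. Model name, prompt wording, and default target language
# are placeholders, not any agency's actual setup.

client = OpenAI()  # assumes an approved enterprise account, not a personal login

def draft_translation(source_text: str, target_language: str = "Spanish (U.S.)") -> str:
    """Produce a draft translation that still requires human review."""
    response = client.chat.completions.create(
        model="gpt-4o",   # placeholder; use whatever model your agency has approved
        temperature=0.2,  # keep drafts conservative and repeatable
        messages=[
            {
                "role": "system",
                "content": (
                    f"Translate the user's text into {target_language}. "
                    "Do not add information. Keep the original structure. "
                    "Mark unclear passages as [AMBIGUOUS] instead of guessing."
                ),
            },
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

# The draft never publishes directly; it goes into the human review queue.
draft = draft_translation("Offices are closed December 25. You can apply online anytime.")
```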
Terminology consistency across agencies
State government has a hidden complexity: different agencies may refer to similar concepts with slightly different terms (and those differences multiply across languages).
A practical approach:
- Maintain an approved bilingual/multilingual glossary
- Use AI to propose translations that must conform to that glossary
- Flag mismatches automatically
Snippet-worthy truth: In government translation, consistency is often as important as fluency.
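Here is what that automated glossary check can look like in practice. This is a minimal sketch with made-up glossary entries; a production version would pull from the agency's approved termbase.

```python
# Minimal sketch: flag draft translations that miss required glossary terms.
# Glossary entries and the example sentences are invented for illustration.

GLOSSARY = {
    # English term -> approved Spanish rendering (hypothetical entries)
    "Supplemental Nutrition Assistance Program": "Programa de Asistencia Nutricional Suplementaria",
    "renewal deadline": "fecha límite de renovación",
}

def glossary_mismatches(source_en: str, translation: str) -> list[str]:
    """Return approved terms that should appear in the translation but don't."""
    missing = []
    for en_term, approved in GLOSSARY.items():
        if en_term.lower() in source_en.lower() and approved.lower() not in translation.lower():
            missing.append(f"'{en_term}' should be rendered as '{approved}'")
    return missing

# A draft that used a non-approved phrasing gets flagged for the reviewer.
issues = glossary_mismatches(
    "Your renewal deadline is March 1.",
    "Su plazo de renovación es el 1 de marzo.",
)
print(issues)  # ["'renewal deadline' should be rendered as 'fecha límite de renovación'"]
```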
Plain-language rewriting before translation
Bad source text produces bad translations.
A strong workflow is:
- Rewrite English content into plain language
- Then translate
Generative AI can assist with step 1 by shortening sentences, removing jargon, and standardizing structure (“Who is eligible,” “What you need,” “How to apply,” “When you’ll hear back”). That makes human translation faster and reduces misinterpretation.
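A sketch of what that plain-language rewrite instruction might look like as a reusable prompt template. The headings mirror the structure above; the exact wording is illustrative, not an official style guide.

```python
# Sketch of a plain-language rewrite prompt used before translation.
# Wording and reading-level target are illustrative.

PLAIN_LANGUAGE_PROMPT = """Rewrite the following program content in plain language:
- Target a 6th-8th grade reading level.
- Use short sentences and remove jargon.
- Organize under these headings: "Who is eligible", "What you need",
  "How to apply", "When you'll hear back".
- Do not add information that is not in the source text.
- If something is unclear, mark it as [AMBIGUOUS] instead of guessing.

Source text:
{source_text}
"""

def build_rewrite_prompt(source_text: str) -> str:
    return PLAIN_LANGUAGE_PROMPT.format(source_text=source_text)
```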
Faster turnaround for urgent communications
December is a good example: winter weather alerts, heating assistance notices, holiday closures, and benefit recertification reminders often cluster.
AI can help teams get multilingual drafts out quickly during spikes—as long as there’s an operational review loop.
A responsible operating model (what public sector teams should copy)
The safest model is simple: treat AI as a drafting tool inside a controlled process, with clear red lines. Public sector leaders don’t need a 40-page manifesto; they need an operating rhythm that holds up under scrutiny.
1) Put content into risk tiers
Not everything deserves the same workflow. I prefer a three-tier model:
- Tier 1 (High risk): legal notices, eligibility determinations, rights/appeals, court-related info, enforcement actions
  - AI may assist with internal drafting, but human-certified translation and formal QA must control the final output.
- Tier 2 (Medium risk): benefits explainers, policy summaries, public guidance that affects decisions
  - AI draft allowed; mandatory bilingual review and program-owner sign-off.
- Tier 3 (Lower risk): general service info, hours/locations, event notices
  - AI draft allowed; lightweight review (still human).
This matters because risk-based governance scales. It avoids the trap of treating every piece of content like a legal deposition.
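To show how risk-based routing can be enforced rather than just documented, here is a minimal Python sketch. The content categories and tier assignments are hypothetical; each agency would maintain its own mapping.

```python
# Sketch of risk-tier routing: map content categories to the review path
# described above. Categories and tier assignments are illustrative.

TIER_RULES = {
    "legal_notice": 1,
    "eligibility_determination": 1,
    "benefits_explainer": 2,
    "policy_summary": 2,
    "office_hours": 3,
    "event_notice": 3,
}

REVIEW_PATHS = {
    1: ["certified human translation", "formal QA", "legal/compliance sign-off"],
    2: ["AI draft", "bilingual review", "program-owner sign-off"],
    3: ["AI draft", "lightweight human review"],
}

def review_steps(content_category: str) -> list[str]:
    """Unclassified content defaults to the strictest tier, never the loosest."""
    tier = TIER_RULES.get(content_category, 1)
    return REVIEW_PATHS[tier]

print(review_steps("benefits_explainer"))
# ['AI draft', 'bilingual review', 'program-owner sign-off']
```

The default-to-Tier-1 behavior is the design choice worth copying: new content types get the heaviest review until someone deliberately classifies them.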
2) Use a “do-not-AI” list that’s actually enforceable
If you want adoption, keep it concrete:
- Don’t paste personally identifiable information (PII)
- Don’t paste protected health information (PHI)
- Don’t paste confidential case notes
- Don’t generate final legal determinations
Pair that with tooling controls (approved accounts, logging, retention rules, and access management) so the policy isn’t just wishful thinking.
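One concrete tooling control is a pre-flight screen that blocks obvious PII before text ever reaches an external tool. This is a rough sketch with a deliberately small pattern list; a real deployment would lean on proper DLP tooling rather than a handful of regexes.

```python
import re

# Minimal pre-flight screen: block text with obvious PII patterns before it
# reaches an external AI tool. These regexes only catch the most obvious
# cases; the example SSN is fictional.

PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "case number": re.compile(r"\bcase\s*#?\s*\d{6,}\b", re.IGNORECASE),
}

def pii_flags(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

flags = pii_flags("Client SSN 123-45-6789 called about the notice.")
if flags:
    print(f"Blocked: possible {', '.join(flags)} detected; route to the secure workflow.")
```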
3) Require translator-friendly prompts and structured output
Translation quality improves when prompts are consistent. A practical prompt pattern:
- Target language and locale (e.g., “Spanish (U.S.)” vs “Spanish (Spain)”)
- Reading level goal (e.g., 6th–8th grade)
- Required glossary terms
- Format constraints (keep bullet structure, don’t add new info)
- “If unclear, mark as [AMBIGUOUS] instead of guessing”
One-liner worth adopting: A good prompt is a quality-control checklist in disguise.
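Here is a sketch of that prompt pattern assembled into a reusable template, so every request carries the same quality controls. Field names, defaults, and the glossary format are illustrative assumptions.

```python
# Sketch: assemble the prompt pattern above into one template so every
# request carries the same controls. Defaults and format are illustrative.

def build_translation_prompt(
    source_text: str,
    target_locale: str = "Spanish (U.S.)",
    reading_level: str = "6th-8th grade",
    glossary: dict[str, str] | None = None,
) -> str:
    glossary_lines = "\n".join(
        f'- "{en}" must be translated as "{target}"'
        for en, target in (glossary or {}).items()
    ) or "- (none)"
    return (
        f"Translate the text below into {target_locale}.\n"
        f"Target reading level: {reading_level}.\n"
        "Keep the original bullet structure. Do not add new information.\n"
        "If any part is unclear, mark it as [AMBIGUOUS] instead of guessing.\n"
        f"Required glossary terms:\n{glossary_lines}\n\n"
        f"Text:\n{source_text}"
    )

print(build_translation_prompt(
    "Heating assistance applications are due January 31.",
    glossary={"heating assistance": "asistencia para calefacción"},
))
```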
4) Build a human review loop that’s fast, not ceremonial
The point is speed with accountability. Keep reviews focused:
- Linguist review: accuracy, tone, register, cultural fit
- Program owner review: factual correctness, policy alignment
- Spot checks: numerals, dates, deadlines, addresses, phone numbers
For digital government, publishing speed matters—but publishing wrong information matters more.
What to measure if you want better public service (not a pilot story)
If you can’t measure the impact, you’re stuck arguing about vibes. Agencies should track outcomes that reflect both service delivery and operational health.
Service metrics (citizen impact)
- Time-to-translation: from English approval to multilingual publish
- Time-to-update: how quickly changes propagate across languages
- Call center deflection: fewer “what does this mean?” calls after improved multilingual content
- Form completion rates: especially for limited English proficiency users
- Complaint volume related to language access
Quality metrics (trust and accuracy)
- Revision rate: how much the human reviewer changed the AI draft
- Terminology compliance: percent of required glossary terms used correctly
- Error rate for critical entities: dates, dollar amounts, eligibility thresholds
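Revision rate is the easiest of these to approximate automatically. Here is a minimal sketch using a simple character-level similarity ratio; the example strings are illustrative.

```python
from difflib import SequenceMatcher

# Sketch: approximate revision rate as the share of the AI draft the human
# reviewer changed, using a character-level similarity ratio.

def revision_rate(ai_draft: str, published_text: str) -> float:
    """0.0 means nothing changed; 1.0 means a complete rewrite."""
    return 1.0 - SequenceMatcher(None, ai_draft, published_text).ratio()

rate = revision_rate(
    "Solicite antes del 1 de marzo para mantener sus beneficios.",
    "Presente su solicitud antes del 1 de marzo para conservar sus beneficios.",
)
print(f"Revision rate: {rate:.0%}")
```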
Operational metrics (scale and cost)
- Cost per translated word/page (blended model vs vendor-only)
- Throughput per linguist (without increasing burnout)
- Backlog size (what’s waiting to be translated)
If you’re a public-sector leader, the metric that usually tells the real story is this: Are you translating more of the right content, faster, with fewer escalations?
Common concerns (and how to address them directly)
The objections to AI in government translation are valid—so handle them with controls, not slogans.
“What about hallucinations or added information?”
Generative models can introduce details that aren't in the source text. Your defense is process:
- “Don’t add new info” as a rule
- Structured formatting that mirrors source text
- Mandatory review for Tier 1 and Tier 2
- Automated checks for numbers/dates
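The numbers-and-dates check is straightforward to automate. Here is a rough sketch that flags numeric entities from the source that don't reappear in the translation; real date handling needs locale awareness this doesn't attempt.

```python
import re

# Sketch of an automated spot check: critical numeric entities (amounts,
# dates written with digits) in the source should reappear unchanged in the
# translation. Locale-aware date formats need more careful handling.

NUMBER_PATTERN = re.compile(r"\$?\d[\d,./]*\d|\$?\d")

def missing_entities(source: str, translation: str) -> list[str]:
    """Return numeric entities from the source that don't appear in the translation."""
    return [n for n in NUMBER_PATTERN.findall(source) if n not in translation]

print(missing_entities(
    "Your appeal must be filed by 3/15/2025. The fee is $25.",
    "Su apelación debe presentarse antes del 15/3/2025. La tarifa es de $25.",
))  # ['3/15/2025'] - flags the reformatted date for a human reviewer
```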
“Will this replace professional translators?”
It shouldn’t. The stronger argument is workforce modernization:
- Translators spend less time on repetitive first drafts
- More time goes to review, nuance, cultural accuracy, and consistency
- Agencies can cover more content categories that currently go untranslated due to budget constraints
“Is it secure?”
Security depends on how it’s deployed and governed:
- Approved tools/accounts, not personal logins
- Clear data-handling rules
- Audit trails for who translated what and when
- Procurement and IT security review aligned to state policies
Government doesn’t get credit for trying; it gets judged on outcomes and safeguards.
A practical rollout plan for agencies considering AI translation
A good pilot is narrow, measurable, and designed for real adoption. If you’re planning a similar initiative, here’s a rollout sequence that works.
- Pick one service line with steady volume (e.g., benefits FAQs, licensing instructions)
- Define languages based on demand and equity goals
- Create a glossary + style guide (tone, reading level, program terms)
- Stand up the workflow: AI draft → linguist review → program approval → publish
- Measure before/after: turnaround time, revision rate, user support impact
- Expand by risk tier rather than expanding by enthusiasm
This is how you go from “interesting demo” to digital government transformation.
Where this fits in the AI in Government & Public Sector series
Translation is one of the clearest examples of AI improving accessibility in public digital services. It’s not abstract. It’s visible to residents the moment they can finally read a notice, understand a requirement, or complete an application without a family member acting as an interpreter.
Minnesota’s approach—using ChatGPT as part of an enterprise translation function—signals something bigger than a single tool choice: public institutions are starting to treat AI as shared infrastructure for service delivery.
If you’re responsible for citizen communications, the forward-looking question isn’t “Should we use AI?” It’s this:
How do we build a translation operation that keeps up with real life—while protecting accuracy, privacy, and trust?