
AI Coding Workflows: What JetBrains Gets Right
JetBrains tools sit on a huge slice of the software economy: roughly 15 million professional developers use IDEs like IntelliJ IDEA, PyCharm, WebStorm, GoLand, and Rider, and the company also created Kotlin, Android’s official language. When a platform with that kind of footprint changes how work gets done, it doesn’t stay “inside engineering.” It ripples into product timelines, digital services reliability, and the cost of building and maintaining software.
That’s why JetBrains integrating GPT‑5, ChatGPT, and Codex across its internal workflows—and shipping GPT‑5 options in customer-facing products like Junie (their coding agent) and AI Assistant—is more than a feature update. It’s a case study in how AI is powering technology and digital services in the United States by raising productivity while keeping quality standards intact.
Most teams chasing “AI for developers” focus on one thing: speed. JetBrains is pushing a more useful standard: speed and sustained engineering excellence—code that’s readable, reviewable, and maintainable.
JetBrains’ AI thesis: protect flow, not just output
JetBrains’ approach starts with a simple truth: developers don’t spend most of their day typing code. They spend it thinking—reviewing pull requests, debugging, designing systems, writing tests, and aligning with other humans.
Kris Kang (Head of Product at JetBrains) frames it clearly: AI shouldn’t replace developers; it should raise the ceiling on what a good engineer can do in a day.
Here’s the stance I agree with: AI wins when it reduces context switching. The hidden tax in software isn’t keystrokes; it’s the constant “tab-hopping” between an IDE, documentation, tickets, logs, test failures, and half-remembered architectural constraints.
What “protecting flow” looks like in practice
In practical terms, protecting flow means putting AI where developers already work:
- In the IDE, not in a separate browser chat
- Attached to project context (files, dependencies, tests), not generic advice
- Orchestrated around real tasks (generate tests, update docs, propose refactors), not only Q&A
For U.S.-based tech companies building digital services—especially SaaS—this is the difference between “AI adoption” and “AI actually moving delivery dates.” If a tool adds friction, adoption stalls. If it removes friction, usage compounds.
From chat to agents: why the jump matters
JetBrains draws an important line:
“Chat gives you a lift. Agents give you a step-change.”
That’s a clean way to describe the current shift in AI coding workflows.
Chat helps with local problems
Chat-style assistance is great for:
- Explaining an unfamiliar code path
- Drafting a function or class skeleton
- Translating code between languages
- Summarizing a stack trace or error
This boosts individual productivity, but it still keeps the developer in the driver’s seat minute-to-minute.
Agents help with end-to-end tasks
Agents are where things start to look different operationally. When JetBrains talks about assigning “increasingly difficult tasks” to an agent backed by GPT‑5 and seeing them completed, the real point isn’t novelty. It’s that work gets chunked into delegable units.
A practical definition you can use internally:
A coding agent is software that can plan, modify, run, and verify changes across multiple files—then report back with evidence.
That last part—evidence—is what separates useful automation from a demo.
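To make that definition concrete, here’s a minimal sketch of an agent loop built around those four verbs. Every name in it (AgentReport, plan, propose_patch, apply_patch, run_checks) is a hypothetical stand-in rather than any vendor’s real API; the point is the shape of the loop: plan, modify, run, verify, and hand back evidence.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative stand-ins only -- not a real agent framework or vendor API.

@dataclass
class AgentReport:
    task: str
    patches_applied: list[str] = field(default_factory=list)
    checks_passed: bool = False
    evidence: list[str] = field(default_factory=list)  # test output, lint logs, diffs

def run_coding_agent(
    task: str,
    plan: Callable[[str], list[str]],             # task -> ordered list of change steps
    propose_patch: Callable[[str], str],          # step -> a proposed diff
    apply_patch: Callable[[str], None],           # apply the diff to the working tree
    run_checks: Callable[[], tuple[bool, str]],   # run tests/linters -> (ok, log)
    max_attempts: int = 3,
) -> AgentReport:
    """Plan, modify, run, verify -- then report back with evidence."""
    report = AgentReport(task=task)
    for step in plan(task):
        for attempt in range(max_attempts):
            patch = propose_patch(step)
            apply_patch(patch)
            ok, log = run_checks()
            report.evidence.append(f"{step} (attempt {attempt + 1}): {log}")
            if ok:
                report.patches_applied.append(patch)
                break
        else:
            # A step that never verified cleanly: stop and surface what happened.
            return report
    report.checks_passed = True
    return report
```

Notice that the loop can’t declare success without a passing verification run; that’s the evidence requirement expressed as code.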
Why U.S. digital services teams should care
If you run engineering for a digital product, agents change the shape of your backlog:
- Small improvements that never got prioritized (docs, tests, cleanup) become affordable
- Migration work (API updates, deprecations) becomes less painful
- On-call and incident follow-ups (adding guardrails, improving monitoring) stop getting deferred
You don’t just ship features faster. You shrink the “long tail” of maintenance that quietly drains budgets.
The two metrics that actually matter: speed and quality
JetBrains evaluates impact through two lenses:
- Speed: less boilerplate, fewer context switches, faster iteration
- Quality: readable, maintainable code that doesn’t break in production
That balance is the entire ballgame for AI-powered software development. Speed without quality just creates future outages and ugly refactors.
“Quality” needs to be operational, not aspirational
If you’re bringing AI into your engineering org, don’t measure quality by vibes (“looks fine”). Define it.
Here are concrete, team-friendly quality checks that map well to AI assistance:
- Tests added or updated: if the agent changes behavior, it must change tests.
- Static analysis and linting pass: no exceptions; a clean run is the minimum.
- Code review readability standard: if reviewers can’t explain the change in two minutes, it’s too clever.
- Maintainability signals: small functions, clear naming, limited scope changes.
My opinion: “AI code” should be easier to review than human code, not harder. If it’s harder, your workflow is wrong.
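To keep those checks operational rather than aspirational, they can be wired into a merge gate. A minimal sketch, with the assumptions stated loudly: the changed-file heuristics are deliberately naive, the file-count threshold is arbitrary, and ruff stands in for whatever linter your CI actually runs.

```python
import subprocess
from pathlib import Path

MAX_CHANGED_FILES = 20  # illustrative "limited scope" threshold; tune per team

def behavior_changed(changed_files: list[str]) -> bool:
    """Naive heuristic: any non-test source change may change behavior."""
    return any(f.endswith(".py") and "test" not in Path(f).name for f in changed_files)

def tests_changed(changed_files: list[str]) -> bool:
    return any("test" in Path(f).name for f in changed_files)

def lint_passes() -> bool:
    # ruff is only an example; swap in the linter your project already uses.
    return subprocess.run(["ruff", "check", "."]).returncode == 0

def merge_gate(changed_files: list[str]) -> list[str]:
    """Return reasons to block the merge; an empty list means the gate passes."""
    problems = []
    if behavior_changed(changed_files) and not tests_changed(changed_files):
        problems.append("Behavior changed but no tests were added or updated.")
    if not lint_passes():
        problems.append("Static analysis / linting did not pass cleanly.")
    if len(changed_files) > MAX_CHANGED_FILES:
        problems.append("Change is too large to review comfortably; split it.")
    return problems
```

The readability standard is harder to automate; the gate buys reviewers the time to apply it.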
The maintainability trap (and how JetBrains avoids it)
Many AI coding tools optimize for “getting to compile.” JetBrains explicitly calls out a higher bar: safe, readable, maintainable.
That’s aligned with how mature U.S. SaaS companies operate. Your real cost isn’t the first release; it’s the next 50 releases.
Where to start: friction points humans actually hate
JetBrains’ leadership lessons are refreshingly practical: start where humans feel friction—documentation, tests, reviews, hand-offs.
This is the playbook for adopting AI in software development without setting your team on fire.
A staged rollout that works (and won’t get ignored)
If you’re leading a team and want AI to produce measurable gains, start with these steps:
- Docs and explanations first
  - Use AI to generate or improve README sections, architecture notes, and “why” documentation.
  - Require engineers to approve and edit: AI drafts, humans sign.
- Tests second
  - Have AI propose unit tests from existing code.
  - Gate merges on passing tests and coverage deltas where it makes sense (a minimal coverage gate sketch follows this list).
- Reviews third
  - Use AI to summarize PRs, flag risky areas, and propose review checklists.
- Small refactors fourth
  - Use agents for mechanical changes: renames, deprecations, config updates.
- Feature work last
  - Only after trust is earned through repeated wins.
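The coverage gate mentioned in the “tests second” step can start very small. A minimal sketch, assuming you already produce a total-coverage JSON report for both the base branch and the candidate change; the file names, report shape, and 0.5-point threshold are all placeholders to tune.

```python
import json
import sys
from pathlib import Path

ALLOWED_DROP = 0.5  # percentage points of total coverage you are willing to lose

def total_coverage(report_path: str) -> float:
    """Read a total coverage percentage from a JSON report (placeholder format)."""
    data = json.loads(Path(report_path).read_text())
    return float(data["totals"]["percent_covered"])

def main() -> int:
    base = total_coverage("coverage-base.json")            # produced on the base branch
    candidate = total_coverage("coverage-candidate.json")  # produced on the PR branch
    delta = candidate - base
    print(f"coverage: base={base:.1f}% candidate={candidate:.1f}% delta={delta:+.1f}")
    if delta < -ALLOWED_DROP:
        print("Coverage dropped beyond the allowed threshold; add or update tests.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```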
This sequence matters because it builds confidence and reduces risk. It also creates a shared language for what “good AI help” looks like.
Why “hybrid workflows” beat replacement fantasies
JetBrains is explicit: AI drafts; humans design and review. That’s not a compromise; it’s the operating model.
If you’re building AI-powered digital services, you want:
- Humans owning intent, architecture, and customer impact
- AI handling draft work, tedious edits, and first-pass reasoning
- Shared guardrails: tests, linters, secure coding patterns, dependency policies
Teams that try to “replace” developers usually end up replacing quality with chaos.
What this means for U.S. tech companies scaling digital services
JetBrains is a useful mirror for what’s happening across the U.S. tech ecosystem: AI is moving from novelty to infrastructure. Not infrastructure like servers—more like a new layer of labor that changes how quickly organizations can build, iterate, and support software.
The compounding advantage is real
One of JetBrains’ most actionable ideas is compounding experimentation.
A simple, quotable rule:
Teams that run small AI experiments weekly outperform teams that bet big quarterly.
Compounding happens because each small win becomes a reusable pattern:
- A PR template that guides agent output
- A “definition of done” that includes AI verification steps
- A library of prompts tied to internal standards
- A set of safe tasks agents can handle reliably
Over months, this becomes a delivery advantage competitors can’t copy overnight.
A practical “AI developer workflow” checklist
If you want to align with what JetBrains is doing—without copying tools blindly—use this checklist:
- Do we have clear guardrails? (tests, linting, dependency rules)
- Can AI access the right context safely? (repo scope, secrets policies)
- Are we reducing context switching? (AI inside dev tools)
- Do we measure speed and quality? (cycle time and defect rate)
- Do we have an escalation path? (when AI gets stuck, who decides?)
If you can’t answer these, you don’t have an AI workflow yet—you have AI features.
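If “measure speed and quality” feels abstract, both can start as two numbers computed from data you already have. A minimal sketch, assuming you can export merged changes with opened and merged timestamps plus a flag for whether each later caused a defect; the record shape is hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import median

@dataclass
class Change:
    opened: datetime
    merged: datetime
    caused_defect: bool  # e.g., later linked to an incident or a bug-fix release

def cycle_time_days(changes: list[Change]) -> float:
    """Median time from opening a change to merging it, in days."""
    return median((c.merged - c.opened).total_seconds() / 86400 for c in changes)

def defect_rate(changes: list[Change]) -> float:
    """Share of merged changes that later caused a defect."""
    return sum(c.caused_defect for c in changes) / len(changes)
```

Run both numbers on the four weeks before and after a workflow change and you have a baseline instead of an anecdote.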
The next year of AI coding looks like “better work,” not less work
JetBrains’ “what’s next” vision is the one I’d bet on: engineers spend more time designing systems, guiding agents, reviewing efficiently, and shipping with more confidence.
This fits the broader theme of our series, How AI Is Powering Technology and Digital Services in the United States: the real payoff isn’t automation for its own sake. It’s building digital products that improve faster, break less, and cost less to maintain.
If you’re responsible for product delivery or engineering performance, the next step is straightforward: pick one painful workflow—tests, docs, or reviews—instrument it, and run a four-week experiment with clear quality gates. You’re not aiming for magic. You’re aiming for repeatable improvement.
The forward-looking question worth sitting with: when AI agents can handle more of the “mechanical” work, will your team’s bottleneck become architecture—and are you training for that now?