Underwriting hires in cyber and terrorism signal where AI in insurance can boost speed, pricing clarity, and portfolio control. See practical 2026 use cases.

AI Underwriting Talent Shifts in Cyber and Terrorism
A lot of insurers say they’re “investing in AI,” then treat underwriting like it’s still 2015: scattered PDFs, broker emails as the system of record, and risk decisions that depend on who’s on call.
The reality is more concrete. When specialty carriers hire senior underwriters in war/terrorism and cyber, they’re not just adding headcount—they’re signaling where complexity (and margin pressure) is climbing fastest, and where AI in insurance underwriting can actually pay for itself.
This week’s London market people moves—Markel International appointing James Howell as Senior War and Terrorism Underwriter and Tokio Marine Kiln appointing Olivia Jackson as Cyber Underwriter—read like staffing updates. I see something else: a clear map of where underwriting teams need better decision support, better data, and faster feedback loops.
Why these hires point to “AI-ready” specialty underwriting
Specialty lines are where AI earns its keep because the problems are messy, high-stakes, and time-sensitive. War/terrorism and cyber sit at the top of that list.
War and terrorism risks are driven by geopolitical shifts, aggregation, and wording. Cyber risks are driven by fast-changing attacker tactics, security posture, and systemic exposure. Both lines share a hard truth: underwriting skill is necessary, but it’s no longer sufficient.
Here’s what “AI-ready” underwriting looks like in practice:
- Data volume is too large for manual review (threat intel feeds, vulnerability signals, sanctions/regional instability indicators, claims notes).
- Time-to-quote is compressing, especially in London specialty markets.
- Portfolio-level accumulation matters as much as individual risk selection.
- Language and coverage nuance (war exclusions, cyber triggers, terrorism definitions) determines loss outcomes.
People hires don’t compete with technology; they set the operating model for it. A senior underwriter who can define appetite, train others, and partner with actuarial and claims is exactly the person who can turn AI from a pilot into a habit.
Markel’s war and terrorism appointment: where AI helps (and where it doesn’t)
Markel’s appointment of James Howell into a senior war and terrorism role comes as geopolitical volatility keeps underwriting teams on edge. In this class, the biggest risk isn’t “not enough data.” It’s too many signals and not enough clarity.
AI’s best role in terrorism and political violence underwriting
AI works when it narrows uncertainty and surfaces what a human should pay attention to.
1) Risk signal triage (what changed since last renewal)
Underwriters often re-underwrite the same account from scratch because the evidence is scattered. AI can summarize what’s changed:
- location exposure changes (new sites, supply chain shifts)
- sector sensitivity (energy, aviation, logistics, public venues)
- recent regional incidents and proximity-weighted threat indicators
- sanctions or compliance flags that impact placement
Done well, this doesn’t “auto-decline” risks. It gives an underwriter a sharper starting point.
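To make the “sharper starting point” idea concrete, here is a minimal sketch that diffs last year’s submission against this year’s and surfaces only what changed. The field names and submission structure are illustrative assumptions, not any carrier’s actual schema.

```python
# Minimal sketch: flag what changed between two renewal submissions.
# Field names and values are illustrative assumptions, not a real schema.

PRIOR = {
    "locations": {"London HQ", "Rotterdam warehouse"},
    "sectors": {"logistics"},
    "sanctions_flags": set(),
}

CURRENT = {
    "locations": {"London HQ", "Rotterdam warehouse", "Port Said depot"},
    "sectors": {"logistics", "energy"},
    "sanctions_flags": {"counterparty under review"},
}

def diff_submission(prior: dict, current: dict) -> dict:
    """Return added/removed values per field so the underwriter sees deltas, not the whole file."""
    changes = {}
    for field in current:
        added = current[field] - prior.get(field, set())
        removed = prior.get(field, set()) - current[field]
        if added or removed:
            changes[field] = {"added": sorted(added), "removed": sorted(removed)}
    return changes

for field, delta in diff_submission(PRIOR, CURRENT).items():
    print(f"{field}: +{delta['added']} -{delta['removed']}")
```

The point is not the code; it is that the underwriter starts the renewal with three deltas to interrogate instead of a full re-read.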
2) Accumulation and aggregation intelligence
War and terrorism portfolios break when accumulation is misunderstood. AI can support accumulation management by:
- clustering insured locations by radius and likely target profile
- detecting hidden concentration via shared contractors, landlords, or events
- flagging portfolio hotspots as geopolitical conditions shift
This is where I’ve found the fastest ROI: not in flashy model scores, but in preventing accidental correlation across the book.
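Here is a rough sketch of the radius-clustering idea: group insured locations that sit within a blast-radius-style distance of each other (haversine distance) and flag clusters whose combined limit crosses a threshold. The coordinates, limits, 400m radius, and £50m trigger are invented for illustration.

```python
import math

# Illustrative insured locations: (name, lat, lon, limit in GBP m). All values invented.
LOCATIONS = [
    ("Site A", 51.5074, -0.1278, 40),
    ("Site B", 51.5080, -0.1290, 25),
    ("Site C", 51.5155, -0.0922, 30),
]

RADIUS_M = 400           # accumulation radius, an assumption to tune per scenario
LIMIT_ALERT_GBP_M = 50   # flag clusters above this combined limit, also an assumption

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Greedy clustering: attach each site to the first cluster whose seed is within RADIUS_M.
clusters = []
for name, lat, lon, limit in LOCATIONS:
    for c in clusters:
        if haversine_m(lat, lon, *c["seed"]) <= RADIUS_M:
            c["sites"].append(name)
            c["limit"] += limit
            break
    else:
        clusters.append({"seed": (lat, lon), "sites": [name], "limit": limit})

for c in clusters:
    if c["limit"] >= LIMIT_ALERT_GBP_M:
        print(f"Accumulation alert: {c['sites']} combined limit £{c['limit']}m within {RADIUS_M}m")
```

A production version would use proper target profiles and event footprints rather than a single radius, but even this level of checking catches the accidental correlation that hurts a book.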
3) Wordings review and claims-informed underwriting
Underwriting decisions often hinge on specific clauses and definitions. Natural language processing can:
- compare manuscript clauses against house standards
- highlight deviations that historically correlate with claim disputes
- pull “similar claim narratives” from internal notes and decisions
This matters because terrorism and war claims can turn into coverage battles. AI can help underwriters price and negotiate those battles before they happen.
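Before reaching for a full NLP pipeline, a lightweight similarity check against house wordings already catches obvious deviations. The clauses below are paraphrased placeholders, not real policy language, and the 0.85 threshold is an assumption to calibrate with the wordings team.

```python
from difflib import SequenceMatcher

# Placeholder wordings for illustration only; not real policy language.
HOUSE_STANDARD = (
    "loss or damage directly caused by an act of terrorism as certified under the scheme"
)
MANUSCRIPT = (
    "loss or damage directly or indirectly arising from any act of terrorism howsoever caused"
)

def similarity(a: str, b: str) -> float:
    """Rough 0-1 similarity score; a low score prompts human review, it is not a verdict."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = similarity(HOUSE_STANDARD, MANUSCRIPT)
if score < 0.85:
    print(f"Deviation flagged (similarity {score:.2f}): route to wordings review")
```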
Where AI shouldn’t run the show
War and terrorism underwriting is not a place to hand the keys to a black-box score.
- Sparse, noisy labels: major events are rare and claims data is thin.
- Adversarial dynamics: threat actors adapt; models lag.
- Ethical and compliance risks: using the wrong proxies can create unacceptable bias.
The right posture is decision support, not automated decisions. A senior underwriter who mentors the team (as Howell is expected to do) is exactly the governance layer that keeps AI helpful instead of reckless.
TMK’s cyber underwriting hire: AI belongs in the workflow, not on a slide
Tokio Marine Kiln’s appointment of Olivia Jackson into its Cyber & Enterprise Risk team follows its push to expand cyber offerings. Cyber is the most natural fit for AI-driven underwriting because the data is richer and the feedback loop is faster.
What “AI-powered cyber underwriting” actually means
Forget generic promises. Practical AI in cyber underwriting tends to land in four places:
1) Security posture ingestion and normalization
Cyber submissions vary wildly. AI can standardize inputs from questionnaires, broker narratives, and technical scans into consistent features:
- MFA coverage (where it’s deployed, not just “yes/no”)
- backup immutability and recovery testing frequency
- EDR/XDR deployment maturity
- third-party risk controls
This reduces the junk-data problem that makes many cyber pricing models unreliable.
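A sketch of that normalization step, assuming a few common ways brokers and questionnaires phrase the same control. The mapping rules and field names are illustrative; in practice the cyber team would own and maintain them.

```python
# Normalize messy questionnaire answers into consistent, comparable features.
# Mapping rules and field names are illustrative assumptions.

def normalize_mfa(raw_answer: str) -> str:
    """Collapse free-text MFA answers into a small set of graded values."""
    text = raw_answer.lower()
    if "all users" in text or "everywhere" in text or "all accounts" in text:
        return "mfa_all_users"
    if "admin" in text or "privileged" in text:
        return "mfa_privileged_only"
    if "no mfa" in text or text.strip() in {"no", "none"}:
        return "mfa_absent"
    return "mfa_unclear"  # better to go back to the broker than to guess

def normalize_backups(raw_answer: str) -> dict:
    text = raw_answer.lower()
    return {
        "immutable": "immutable" in text or "air-gap" in text,
        "tested": "test" in text or "restore drill" in text,
    }

submission = {
    "mfa": "MFA enforced for admins via Okta",
    "backups": "Nightly backups, immutable copies, quarterly restore drills",
}

features = {
    "mfa_level": normalize_mfa(submission["mfa"]),
    "backup_posture": normalize_backups(submission["backups"]),
}
print(features)
```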
2) Control-to-loss mapping (pricing that underwriters can defend)
Good cyber pricing is explainable. If the model recommends a rate change, the underwriter needs the “why” in plain language:
- “No MFA on privileged accounts” maps to credential theft frequency
- “Flat network segmentation” maps to ransomware blast radius
- “No tested incident response retainer” maps to higher breach cost
Explainability isn’t a nice-to-have. It’s how you win broker trust and avoid internal fights with actuarial and management.
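Here is a toy example of tying each control gap to both a rate loading and a plain-language reason the underwriter can repeat to the broker. The loadings are invented numbers, not calibrated factors from any book.

```python
# Toy mapping from control gaps to rate loadings plus a plain-language reason.
# Loadings are invented for illustration and not calibrated to any portfolio.

DRIVERS = {
    "mfa_privileged_missing": (1.15, "No MFA on privileged accounts raises credential-theft frequency"),
    "flat_network": (1.10, "Flat network segmentation widens the ransomware blast radius"),
    "no_ir_retainer": (1.05, "No tested incident response retainer tends to increase breach cost"),
}

def price_with_reasons(base_rate: float, gaps: list[str]) -> tuple[float, list[str]]:
    rate, reasons = base_rate, []
    for gap in gaps:
        if gap in DRIVERS:
            factor, why = DRIVERS[gap]
            rate *= factor
            reasons.append(f"x{factor:.2f}: {why}")
    return rate, reasons

rate, reasons = price_with_reasons(base_rate=1.00, gaps=["mfa_privileged_missing", "flat_network"])
print(f"Indicated rate modifier: {rate:.3f}")
print("\n".join(reasons))
```

The output reads like something an underwriter would actually say on a call, which is the test that matters.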
3) Faster renewals with change detection
A renewal should feel like a refresh, not a restart. AI can:
- detect changed domains, acquisitions, new geography
- highlight new critical vulnerabilities (where permitted)
- summarize claim activity and control remediation
That’s how you cut quote cycle time without cutting corners.
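For the vulnerability-highlighting piece (where data use is permitted), even a simple match between recent critical advisories and the technology the insured declared gives the underwriter a short review list at renewal. Products and advisories below are invented placeholders, not real vendors or CVEs.

```python
# Sketch: match recent critical advisories against the technology the insured
# declared at renewal. Products and advisories are invented placeholders.

insured_stack = {"edge-vpn-gateway", "managed-file-transfer", "cloud-email-suite"}

recent_advisories = [
    {"product": "managed-file-transfer", "severity": "critical", "summary": "placeholder: pre-auth remote exploit"},
    {"product": "erp-platform", "severity": "critical", "summary": "placeholder: privilege escalation"},
]

hits = [
    a for a in recent_advisories
    if a["product"] in insured_stack and a["severity"] == "critical"
]
for hit in hits:
    print(f"Renewal flag: {hit['product']} -> {hit['summary']}")
```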
4) Claims feedback loops and coverage learning
Cyber claims are text-heavy. AI can extract patterns from incident reports, adjuster notes, and timelines:
- which controls failed and how
- how long it took to contain
- which vendors improved outcomes
- which policy sections triggered disputes
This is underwriting’s missing muscle: closing the loop from loss to pricing and wording.
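As a very rough sketch, even keyword tagging of claims narratives against a small control-failure taxonomy starts to close that loop. The taxonomy, keywords, and sample note below are invented; real notes would need proper NLP plus claims-team review.

```python
# Rough sketch: tag free-text claim notes against a small control-failure taxonomy.
# Taxonomy, keywords, and the sample note are invented for illustration.

TAXONOMY = {
    "credential_theft": ["phishing", "stolen credentials", "password reuse"],
    "backup_failure": ["backups encrypted", "restore failed", "no offline copy"],
    "slow_containment": ["detected after", "dwell time", "weeks before containment"],
}

def tag_claim(note: str) -> list[str]:
    text = note.lower()
    return [tag for tag, keywords in TAXONOMY.items() if any(k in text for k in keywords)]

note = (
    "Initial access via phishing; attacker present for weeks before containment. "
    "Primary backups encrypted, recovery from offsite copies took 11 days."
)
print(tag_claim(note))  # ['credential_theft', 'backup_failure', 'slow_containment']
```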
A practical blueprint: how underwriting leaders should deploy AI in 2026
Hiring strong underwriters is step one. Step two is giving them tools that match the complexity of the job. If you’re leading underwriting, product, or transformation, here’s a blueprint that works without turning your team into a science project.
Start with two use cases per line, not ten
Pick the use cases that impact:
- loss ratio (better selection, better terms)
- expense ratio (faster processing)
- broker experience (fewer back-and-forths)
For war/terrorism, I’d start with:
- accumulation/aggregation alerts
- wording deviation detection
For cyber, I’d start with:
- renewal change summaries
- control normalization + explainable pricing drivers
Build an “underwriter-in-the-loop” feedback design
If underwriters don’t correct the model, the model won’t improve.
Make it easy to:
- accept/reject AI-extracted fields
- flag false positives (“this subsidiary is out of scope”)
- tag reasons for overrides (“broker intel,” “claims remediation complete”)
This creates training data that’s actually relevant to your underwriting appetite.
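One way to make that feedback cheap to capture is a small override record written every time an underwriter accepts, rejects, or corrects an AI-extracted field. The structure below is a sketch with illustrative field names; the reason codes should come out of your own appetite discussions.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Sketch of an override record; field names and reason codes are illustrative.
@dataclass
class FieldOverride:
    submission_id: str
    field: str
    ai_value: str
    underwriter_value: str
    reason_code: str    # e.g. "broker_intel", "claims_remediation_complete", "out_of_scope"
    recorded_at: str

override = FieldOverride(
    submission_id="SUB-2026-0142",
    field="subsidiary_in_scope",
    ai_value="included",
    underwriter_value="excluded",
    reason_code="out_of_scope",
    recorded_at=datetime.now(timezone.utc).isoformat(),
)

# An append-only log of these records doubles as training data and an audit trail.
print(json.dumps(asdict(override)))
```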
Treat model risk like underwriting risk
AI governance shouldn’t live only with compliance. Underwriting needs a seat at the table.
Minimum viable controls:
- documented use cases and decision boundaries
- monitoring for drift (especially in cyber frequency/severity)
- audit trails for what the AI recommended and what the underwriter did
- periodic reviews by underwriting leadership and claims
A simple rule I like: if you can’t explain it to a broker, you can’t use it to price.
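For the drift-monitoring control, a starting point can be as plain as comparing predicted and observed claim frequency by cohort each quarter and flagging any gap beyond a tolerance. The figures and the 25% tolerance below are invented for illustration.

```python
# Plain drift check: compare predicted vs observed claim frequency by cohort.
# Figures and the 25% tolerance are invented for illustration.

cohorts = {
    "cyber_sme_2025q3": {"predicted_freq": 0.040, "observed_freq": 0.058},
    "cyber_sme_2025q4": {"predicted_freq": 0.041, "observed_freq": 0.043},
}

TOLERANCE = 0.25  # relative gap that triggers an underwriting and actuarial review

for name, c in cohorts.items():
    gap = abs(c["observed_freq"] - c["predicted_freq"]) / c["predicted_freq"]
    status = "REVIEW" if gap > TOLERANCE else "ok"
    print(f"{name}: predicted {c['predicted_freq']:.3f}, observed {c['observed_freq']:.3f} -> {status}")
```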
Upgrade your data plumbing before you upgrade your models
Most “AI underwriting” failures are data failures.
If you’re still relying on:
- email threads as records
- inconsistent submission templates
- unstructured claims notes with no taxonomy
…then even the best model will disappoint.
The boring fix is also the profitable one: standardize intake, tag documents, structure the core fields, and enforce version control.
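“Structure the core fields” can start smaller than most teams assume: a versioned record for the handful of fields pricing actually uses. The schema below is a sketch, not a market standard, and every field name is an assumption.

```python
from dataclasses import dataclass, field

# Sketch of a minimal, versioned submission record; all fields are illustrative.
@dataclass
class SubmissionRecord:
    submission_id: str
    insured_name: str
    line_of_business: str                 # e.g. "cyber", "war_terrorism"
    revenue_gbp_m: float
    core_controls: dict                   # normalized features, not raw broker text
    source_documents: list = field(default_factory=list)  # document IDs tagged at intake
    version: int = 1                      # bump on every material change; keep prior versions

record = SubmissionRecord(
    submission_id="SUB-2026-0142",
    insured_name="Example Manufacturing Ltd",
    line_of_business="cyber",
    revenue_gbp_m=180.0,
    core_controls={"mfa_level": "mfa_privileged_only", "backup_immutable": True},
)
print(record)
```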
People also ask: what’s the link between AI and underwriter hiring?
Underwriter hiring and AI adoption are connected because AI changes what “good” looks like.
Does AI replace specialty underwriters?
No. In specialty lines, AI replaces manual prep work, not judgment. The value is freeing underwriters to negotiate terms, manage accumulation, and apply experience where it matters.
Why hire senior talent if AI is improving?
Because AI needs underwriting leadership to define appetite, validate outputs, and train teams. Models don’t set strategy; people do.
Where should insurers use AI first: cyber or terrorism?
Cyber is usually faster to implement because telemetry and claims frequency create better feedback loops. Terrorism/war benefits most from AI in accumulation, scenario monitoring, and wording analysis rather than pure predictive scoring.
What to do next if you’re building AI in insurance underwriting
These London market appointments are a reminder that underwriting is becoming more specialized, not less. Cyber and war/terrorism are where clients demand clarity, brokers demand speed, and carriers demand profitability.
If you’re evaluating AI in insurance, don’t start with a vendor demo. Start by mapping your underwriting decisions:
- Where do we lose time?
- Where do we lose margin?
- Where do we lose broker confidence?
Then build AI that answers those questions in the underwriter’s workflow.
If you’re planning your 2026 roadmap, here’s the forward-looking question worth debating internally: Will your underwriting team be the bottleneck—or the differentiator—when AI makes “good enough” decisions cheap and common?