AI Agents at Work: Speed, Scale, and Surveillance

AI & Technology · By 3L3C

AI agents are shifting from “assist” to “act.” Learn what this means for productivity, privacy, and ethical AI deployment in real operations.

Tags: ai-agents · workflow-automation · privacy · compliance · operational-ai · productivity-systems

A single procurement record can tell you where AI is headed faster than any keynote. This week, public contracting data showed Immigration and Customs Enforcement (ICE) paid $636,500 for “AI agents” marketed to track down targets and map their networks—a modern version of skip tracing, but turbocharged by automation.

If you work in operations, analytics, compliance, sales, recruiting, or customer success, this matters for a reason that has nothing to do with politics: AI agents are moving from “help me write” to “go do,” and they’re being deployed in mission-critical environments where speed beats nuance. That’s the productivity promise. It’s also the risk.

This post is part of our AI & Technology series, where we look at how AI changes real work. Here, we’ll use this news as a case study to get practical: what “AI agents” usually are, why they’re attractive to organizations under pressure, where they go wrong, and how to adopt agent-style automation without sleepwalking into privacy and compliance disasters.

What this ICE contract tells us about AI agents in real work

AI agents are being bought for outcomes, not features. The contracting story isn’t about a cool model or a flashy demo. It’s about reducing the time it takes to locate people and connect dots across data sources—work that’s traditionally slow, repetitive, and expensive.

The vendor in the report, AI Solutions 87, advertises “skip tracing” AI agents that can rapidly find “persons of interest” and map “services, locations, friends, family, and associates.” Procurement records cited in the coverage show:

  • $636,500 contracted for skip tracing services tied to ICE’s Enforcement and Removal Operations (ERO)
  • “Skip tracing services nationwide” referenced in another record
  • Broader spending: ICE has spent at least $11.6 million on skip tracing services since October (per the reporting)
  • Larger plans reported previously: procurement documents describing a 1.5 million “docket size” and vendor batches of 50,000 last-known addresses

From a technology-and-work standpoint, here’s the signal: agents are being used to compress multi-step workflows into one request—collect data, cross-reference it, score it, and output actions. That’s exactly what many companies want too.

The “work smarter” pattern hiding in plain sight

Skip tracing is basically an operations pipeline (see the sketch after this list):

  1. Start with a record (name, last known address, phone, case ID)
  2. Enrich it with more data (public records, commercial data, social signals)
  3. Find relationships (household members, associates, employers)
  4. Validate (is this current? is it the right person?)
  5. Hand off to action (contact, visit, enforcement, or escalation)
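
To see how transferable that structure is, here is a minimal, hypothetical sketch of the same pipeline in Python. Every function name and field is illustrative, not a real API; the point is that each step is a separate, inspectable unit with an explicit stopping point before anything acts.

```python
# A minimal, hypothetical sketch of the five steps as plain Python.
# All function bodies are placeholders; a real system would call data sources.

def enrich(record: dict) -> dict:
    # Step 2: merge in additional signals (public records, commercial data).
    return {**record, "signals": ["placeholder-signal"]}

def find_relationships(record: dict) -> list[dict]:
    # Step 3: infer related entities (household members, employers).
    return []

def validate(record: dict) -> bool:
    # Step 4: is the data current, and is this the right person?
    return bool(record.get("signals"))

def hand_off(record: dict, related: list[dict]) -> dict:
    # Step 5: package a structured case for a human-owned action queue.
    return {"record": record, "related": related, "status": "ready_for_review"}

def run_pipeline(record: dict) -> dict | None:
    # Step 1 is the input record itself (name, last known address, case ID).
    enriched = enrich(record)           # each step is a separate, auditable unit
    related = find_relationships(enriched)
    if not validate(enriched):
        return None                     # stop early rather than act on a bad match
    return hand_off(enriched, related)
```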

Swap “target” for “lead,” “customer,” “supplier,” or “candidate,” and you’ve got common workflows in go-to-market, fraud, logistics, and HR. That’s why this story belongs in a productivity conversation: the agent concept is transferable, even if the use case is controversial.

How “AI agents” typically work (and why that boosts productivity)

Most AI agents are orchestrators, not magic brains. In practice, an “agent” is a system that uses an LLM plus tools (search, databases, APIs, scraping, internal systems) to complete tasks with minimal step-by-step prompting.

Even when vendors don’t disclose the underlying model, the pattern is familiar (see the sketch after this list):

  • An LLM interprets the task (e.g., “find current location signals”)
  • The system runs tool calls (data lookups, web queries, database joins)
  • The agent iterates (follow-up queries based on what it found)
  • It produces structured outputs (profiles, relationship graphs, confidence scores)
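
Stripped to its skeleton, that loop is small. The sketch below is a hypothetical illustration, not any vendor’s implementation: `call_llm` and the tool registry are stand-ins for a real model call and real integrations.

```python
# A minimal agent loop: interpret -> tool call -> iterate -> structured output.
# call_llm and TOOLS are placeholders, not a real model or API.

MAX_STEPS = 5  # always cap iterations; agents will otherwise keep going

def call_llm(task: str, history: list[dict]) -> dict:
    # Placeholder: a real implementation would send the task plus prior tool
    # results to a model and parse its reply into an action request.
    if not history:
        return {"action": "lookup_record", "input": task}
    return {"action": "finish", "output": {"task": task, "evidence": history}}

TOOLS = {
    "lookup_record": lambda query: {"query": query, "result": "stub-record"},
}

def run_agent(task: str) -> dict:
    history: list[dict] = []
    for _ in range(MAX_STEPS):
        decision = call_llm(task, history)       # the LLM interprets the task
        if decision["action"] == "finish":
            return decision["output"]            # structured output, not prose
        tool = TOOLS[decision["action"]]         # run the requested tool call
        history.append(tool(decision["input"]))  # iterate on what it found
    raise RuntimeError("step budget exhausted")  # fail loudly, don't wander
```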

Why organizations buy agents instead of “regular” AI

Traditional AI features often stop at “suggest.” Agents push toward “do.” That’s attractive when:

  • The workflow is repetitive (same steps, different inputs)
  • Time-to-decision matters (operations queues, backlogs, service levels)
  • Data lives in too many places (CRMs, spreadsheets, vendors, inboxes)
  • Humans are the bottleneck (manual research, copy/paste, handoffs)

If you’ve ever watched a team burn hours reconciling records across five systems, you already understand the appeal: agent-style automation can remove the “tab-switch tax.”

The productivity upside is real—and measurable

Agent automation usually creates value in three ways:

  • Cycle time reduction: the elapsed time from intake to action drops because steps run in parallel.
  • Throughput increase: one analyst can handle more cases per day with the same headcount.
  • Standardization: outputs become more consistent (same fields, same checks), which improves downstream work.

That’s the good news. The catch is that agent systems also scale mistakes and overreach.

The uncomfortable part: efficiency can become surveillance by default

When AI agents optimize for “find more, faster,” privacy becomes a rounding error unless you design against it. The reporting describes systems marketed to map “friends, family, and associates.” That’s not just data enrichment; it’s relationship inference, and it has a habit of pulling in the wrong people.

Here’s what tends to break in agent-driven “find and map” workflows:

1) False matches and identity collisions

Names collide. Families share addresses. Phone numbers get recycled. Public records lag. If your agent produces a neat “network map,” it can look authoritative even when it’s stitched together from stale or mismatched signals.

Operational impact: teams act on the wrong record, contact the wrong person, or escalate incorrectly.
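
A toy example shows how easily this happens. The records below are invented; the failure mode is that a naive join on name and address “confirms” two different people.

```python
# Invented records: a parent and child sharing a name and an address.
records = [
    {"name": "J. Rivera", "address": "14 Elm St", "dob": "1988-03-02"},
    {"name": "J. Rivera", "address": "14 Elm St", "dob": "1961-07-19"},
]

query = {"name": "J. Rivera", "address": "14 Elm St"}
matches = [r for r in records
           if r["name"] == query["name"] and r["address"] == query["address"]]
print(len(matches))  # 2 -> ambiguous; acting on either record is a coin flip
```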

2) Network effects amplify harm

Mapping associates creates spillover: one record turns into ten. That’s a common pattern in fraud and collections too. Without guardrails, the scope expands quietly:

  • “Just verify the address” becomes “pull relatives and workplaces”
  • “Check one source” becomes “check everything we can buy”

A simple rule: if your agent can do ten times the work in the same time, it can also do ten times the damage.

3) Automation changes what people feel is acceptable

I’ve seen this in corporate environments: when a task is manual, teams pause. When it’s automated, teams click.

Agents create a dangerous dynamic: the cost of action drops to near zero, so organizations take actions they wouldn’t have taken if every step required a human.

4) Data provenance gets blurry fast

With multiple vendors, data brokers, and “open web” sources, it becomes hard to answer basic questions:

  • Where did this information come from?
  • Do we have rights to use it this way?
  • How old is it?
  • Can someone correct it?

If you’re in a regulated industry, that’s not theoretical. It’s your next audit finding.

A practical framework for adopting AI agents without losing control

You can get the productivity benefits of AI agents while reducing risk—if you treat agents like junior employees with powerful tools, not like a button. Here’s the framework I recommend for most teams.

Define the job: outputs, limits, and “no-go” zones

Write a one-page “agent job description” that includes:

  • Allowed inputs: what data the agent can access
  • Allowed tools: which systems/APIs it can call
  • Explicit forbidden actions: sensitive attributes, scraping, outreach, certain data sources
  • Required outputs: fields, confidence scores, citations to internal sources

If you can’t describe the job cleanly, the agent will sprawl.
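
One way to keep the job description from being a document nobody reads is to encode it as configuration the agent runtime actually enforces. The sketch below is a hypothetical shape, assuming a custom runtime; all names are illustrative.

```python
# Hypothetical "agent job description" as enforceable config, not prose.
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentJobSpec:
    allowed_inputs: frozenset[str]     # data the agent may read
    allowed_tools: frozenset[str]      # systems/APIs it may call
    forbidden_actions: frozenset[str]  # explicit no-go zones
    required_outputs: tuple[str, ...]  # fields every result must carry

VENDOR_ONBOARDING = AgentJobSpec(
    allowed_inputs=frozenset({"vendor_record", "document_repository"}),
    allowed_tools=frozenset({"doc_fetch", "field_check"}),
    forbidden_actions=frozenset({"external_scrape", "outreach",
                                 "sensitive_attributes"}),
    required_outputs=("fields", "confidence", "sources", "reason_code"),
)

def authorize_tool(spec: AgentJobSpec, tool: str) -> None:
    # Called before every tool invocation; sprawl fails closed.
    if tool in spec.forbidden_actions or tool not in spec.allowed_tools:
        raise PermissionError(f"tool {tool!r} is outside this agent's job")
```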

Add “proof, not prose” requirements

Agents are great at sounding confident. Force them to produce evidence.

For any agent that influences decisions, require:

  • A structured record (JSON-like fields)
  • A source trail (internal system IDs, timestamps)
  • A confidence score and a reason code
  • A “what would change my mind” note (missing data)

This simple move upgrades your technology stack from “chat” to “operations.”
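
In practice, “proof, not prose” means the agent’s answer is a record, not a paragraph. A hypothetical shape, with every field name illustrative:

```python
# Hypothetical "proof, not prose" output: evidence fields, not narrative.
agent_output = {
    "fields": {"entity_id": "ACC-1042", "status": "match_candidate"},
    "sources": [  # source trail: internal system IDs plus timestamps
        {"system": "crm", "record_id": "crm-88231",
         "fetched_at": "2026-01-12T09:14:00Z"},
    ],
    "confidence": 0.72,
    "reason_code": "address_and_phone_agree",
    "what_would_change_my_mind": "no employment record newer than 18 months",
}
```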

Use a tiered human-in-the-loop model

Not every task needs the same oversight. Use tiers:

  1. Auto-run, no impact: drafts, internal summaries, dedupe suggestions
  2. Auto-run, human approve: enrichment, matching, prioritization
  3. Human-run only: actions affecting rights, access, employment, credit, legal status

The core idea: automation should accelerate prep, not automate consequences.
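
Tiering is easy to encode. A minimal sketch, assuming you can classify each downstream action by impact (the categories mirror the list above):

```python
# Hypothetical tier routing based on the impact of the downstream action.
from enum import Enum

class Tier(Enum):
    AUTO_RUN = 1       # drafts, summaries, dedupe suggestions
    HUMAN_APPROVE = 2  # enrichment, matching, prioritization
    HUMAN_ONLY = 3     # rights, access, employment, credit, legal status

def route(action_kind: str) -> Tier:
    if action_kind in {"rights", "access", "employment", "credit",
                       "legal_status"}:
        return Tier.HUMAN_ONLY
    if action_kind in {"enrichment", "matching", "prioritization"}:
        return Tier.HUMAN_APPROVE
    return Tier.AUTO_RUN
```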

Build in rate limits and scope limits

Agents love to keep going. Put boundaries in code and policy:

  • Maximum number of records processed per hour/day
  • Maximum number of “associates” pulled per entity
  • Maximum number of external lookups per case
  • Hard stop on expanding the graph beyond one hop unless approved

Scope limits are productivity tools too—they prevent noisy outputs that waste human review time.
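
Here is one hypothetical way to put those boundaries in code rather than policy alone; the specific numbers are placeholders you would tune per workflow.

```python
# Hypothetical scope guard; limits fail closed instead of expanding quietly.
from dataclasses import dataclass

@dataclass
class ScopeGuard:
    max_records_per_day: int = 500
    max_associates_per_entity: int = 5
    max_lookups_per_case: int = 10
    max_graph_hops: int = 1  # expanding further requires human approval

    def check(self, hops: int, associates: int, lookups: int) -> None:
        if hops > self.max_graph_hops:
            raise PermissionError("graph expansion beyond one hop needs approval")
        if associates > self.max_associates_per_entity:
            raise PermissionError("associate pull exceeds per-entity cap")
        if lookups > self.max_lookups_per_case:
            raise PermissionError("lookup budget exhausted for this case")
```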

Audit like you mean it

If an agent touches sensitive workflows, you need auditability:

  • Full tool-call logs
  • Versioning (prompt + policies + model version)
  • Random sample reviews weekly
  • Drift checks (is match accuracy declining?)

A strong audit loop is what keeps “work smarter” from turning into “move faster and break trust.”
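
Concretely, auditability starts with logging every tool call alongside the versions that produced it. A minimal sketch, assuming a simple append-only JSON-lines file (all names illustrative):

```python
# Hypothetical append-only audit log for agent tool calls.
import json
import time

def log_tool_call(case_id: str, tool: str, args: dict,
                  result_summary: str, versions: dict) -> None:
    entry = {
        "ts": time.time(),
        "case_id": case_id,
        "tool": tool,
        "args": args,
        "result_summary": result_summary,
        # Version everything: drift checks are meaningless without these.
        "prompt_version": versions.get("prompt"),
        "policy_version": versions.get("policy"),
        "model_version": versions.get("model"),
    }
    with open("agent_audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
```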

What you can copy from this case study (without copying the ethics)

The transferable lesson is workflow design: agents excel at assembling a messy picture from scattered data. Many legitimate business functions need that.

Here are safer, high-ROI applications that use the same mechanics:

Operations: vendor onboarding and risk checks

An agent can:

  • Collect documents from internal repositories
  • Verify required fields are present
  • Flag missing certificates or expired dates
  • Route the case to the right owner

Productivity win: fewer back-and-forth emails and less manual chasing.

Sales/RevOps: account research with tight boundaries

Instead of “scrape everything,” constrain it to approved sources:

  • Internal CRM + support tickets + product usage
  • Public company site and press releases (if allowed)

Output: a structured brief for a rep, not a creepy dossier.

Fraud and abuse: triage with explainable signals

Agents can pre-triage cases by clustering behaviors and pointing investigators to the relevant evidence. The key is that the agent recommends; humans decide.

HR: internal mobility matching

Use agents to match employees to internal roles based on skills inventories and performance artifacts you already own. That’s AI + work done right: helping people find opportunity, not surveilling them.

The real question for 2026: who gets to automate “finding”?

AI agents are spreading because they improve productivity in places where people are drowning in repetitive tasks. The ICE contract reporting shows that governments are also buying agent-style systems to accelerate high-stakes operations, including tracking and network mapping.

If you’re adopting AI in your organization, don’t ignore the lesson: agent automation is a force multiplier. It multiplies output, speed, and consistency. It also multiplies whatever values you bake into the workflow—privacy, restraint, accuracy, and accountability, or the absence of them.

For teams trying to work smarter, not harder, the best next step is simple: pick one agent use case and write the guardrails first. Then automate. Then measure.

The question I’m sitting with after reading the procurement details is the same one every operator should ask: when an AI agent can “map an entire network” in minutes, what exactly counts as legitimate work—and who gets to decide?