AI, Facial Recognition, and the Cost of “Smart” Security

AI & Technology · By 3L3C

ICE’s Mobile Fortify app shows how AI can boost productivity while quietly eroding rights. Here’s what that means for public safety and your own AI projects.

Tags: AI ethics, facial recognition, public safety, productivity, privacy, surveillance, workplace AI

Most companies get AI ethics wrong because they treat it as a PR problem, not a product decision.

Here’s what that looks like in the real world: a 23‑year‑old U.S. citizen in Chicago, walking home from the gym, ends up handcuffed in the back of an unmarked SUV. No warrant, no clear probable cause. Agents point a phone at his face, an AI system runs his photo against a 200‑million‑image database, and only then do they decide he can go.

That’s not a hypothetical. That’s how federal agents used ICE’s Mobile Fortify facial recognition app on Jesus Gutiérrez, a U.S. citizen. And it’s a preview of where AI and public safety are headed if we don’t get serious about responsible adoption.

This matters to anyone working with AI and technology—especially leaders who see AI as the next big productivity booster. The same mindset that drives you to automate workflows and streamline work can, if misapplied, create powerful tools that are efficient for the organization and disastrous for people.

In this article, I’ll break down what happened, how Mobile Fortify actually works, why it’s a warning sign for AI adoption everywhere, and what responsible teams can do differently—whether you’re running a startup, a legal practice, or an enterprise AI rollout.


What Mobile Fortify Really Shows About AI in Public Safety

AI facial recognition in public safety isn’t a theoretical debate. It’s already embedded in field operations.

Mobile Fortify is an app deployed on work phones used by ICE and CBP officers. An agent points the phone at someone’s face; the app sends that image to multiple federal and state databases and returns:

  • Name
  • Date of birth
  • “Alien number” (if on file)
  • Immigration and deportation status

Internal documents show it can search around 200 million images pulled from government systems, including border records and law enforcement repositories.

The reality? It takes a system originally meant to verify travelers at borders and turns it inward—toward people on U.S. streets, at bus stops, and in their own neighborhoods. Even DHS documents admit what’s happening: photos “could be that of someone other than an alien, including U.S. citizens or lawful permanent residents.”

So while the official justification is public safety and immigration enforcement, the actual effect is a live, nationwide ID checkpoint that can be pointed at anyone who “looks suspicious.”

That’s fast. It’s efficient. It’s also exactly the kind of “smart” system that can erode rights in the background while everyone else is talking about productivity gains.


A U.S. Citizen in the Back of an SUV: What Happened to Jesus Gutiérrez

Here’s the thing about powerful AI tools: they don’t fail in the abstract. They fail on specific people.

Jesus Gutiérrez, 23, was walking home from a gym in Chicago when he noticed a gray Cadillac SUV with no plates. The car stopped. Four federal immigration officials confronted him.

They asked where he was going, where he came from, and whether he had ID. He didn’t. He’s a U.S. citizen and told them so, trying to pull up proof on his phone. Instead of waiting, they handcuffed him and put him in the SUV.

When there was no physical ID to check, they moved to Plan B: they took a photo of his face and ran it through Mobile Fortify. A short time later, the result came back confirming his status. As he recalled, one agent said, “Oh yeah, he’s right. He’s saying the right thing. He does got papers.”

Then they drove him around for roughly an hour and finally let him go. No apology. No explanation. Just laughter from the agents, according to his account.

For days, he didn’t leave the house.

From a workflow perspective, Mobile Fortify “worked.” It confirmed identity quickly. From a rights and dignity perspective, the process was broken from the start:

  • Questionable basis for the stop (he’s of Mexican descent; there was no obvious violation cited).
  • Immediate use of handcuffs despite his claim of citizenship.
  • Reliance on biometric AI over the person’s own testimony.

This is what AI misuse looks like in practice: a system that’s technically functional but operationally harmful.


When AI Becomes the Boss: Over‑Trusting Biometric “Matches”

The Mobile Fortify story exposes a mistake I see in a lot of AI deployments at work: treating AI outputs as definitive, not inputs to judgment.

According to Rep. Bennie Thompson, ICE officials have described a Mobile Fortify match as a “definitive” determination of a person’s status—so definitive that an officer may ignore evidence of American citizenship, including a birth certificate, if the app says the person is an alien.

Translate that logic into your environment:

  • A hiring AI that rejects a qualified candidate, and HR never double‑checks.
  • A fraud model that flags a customer, and support treats them as guilty by default.
  • A productivity scoring tool that labels an employee “low performing,” and managers stop giving them opportunities.

Different domain, same risk: when AI becomes the boss, human judgment withers.

AI systems, especially biometric ones, aren’t neutral:

  • Facial recognition has a documented history of higher error rates for people of color, particularly for women and for people with darker skin tones.
  • Training data is often skewed toward certain demographics.
  • Edge cases and “unintended uses” (like turning a border‑control tool into a street‑level ID system) rarely get tested thoroughly.

The ACLU calls Mobile Fortify a “glitchy, privacy‑destroying technology.” Even if you think that’s strong language, there’s a core truth there: biometric AI is brittle, and treating its outputs as absolute truth is a design decision, not a technical necessity.

If you’re adopting AI at work, this is the lesson: never design your process so that the AI is allowed to override clear, real‑world evidence without human review.
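
To make that concrete, here’s a minimal sketch (in Python, with made-up names and fields) of what that rule looks like when it’s baked into the process rather than left to the judgment of whoever is holding the phone:

```python
# A minimal sketch of the "AI can't override evidence" rule. All names and
# fields here are hypothetical; the point is the shape of the process.

from dataclasses import dataclass, field


@dataclass
class BiometricResult:
    matched: bool       # did the model return a "hit"?
    confidence: float   # the model's own score, not ground truth


@dataclass
class Evidence:
    subject_statement: str = ""                          # e.g. "I'm a U.S. citizen"
    documents: list[str] = field(default_factory=list)   # e.g. ["birth certificate"]


def next_step(result: BiometricResult, evidence: Evidence) -> str:
    """The model output decides what gets reviewed, never what gets done."""
    if not result.matched:
        return "no action"
    if evidence.documents or evidence.subject_statement:
        # Model output conflicts with real-world evidence: escalate the
        # conflict to a human instead of letting the "match" win by default.
        return "escalate: model conflicts with presented evidence"
    # Even an uncontested match only triggers review before any high-impact action.
    return "human review required before any action"


print(next_step(BiometricResult(matched=True, confidence=0.97),
                Evidence(subject_statement="I'm a U.S. citizen")))
```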


The Productivity Trap: When “Efficient” Means “Unaccountable”

Most teams bring AI into their workflow for one reason: productivity. Faster checks. Fewer manual steps. Less paperwork.

That’s exactly what Mobile Fortify offers to field agents:

  • Instant access to multiple databases
  • No need to radio a dispatcher to run IDs
  • A clean, simple interface on a phone

From a process standpoint, it’s brilliant: all the friction points of traditional checks are smoothed out. But that’s the trap.

When an AI system is:

  • Faster than humans
  • Opaque in how it reaches results
  • Embedded in high‑stakes decisions

…it tends to shift power quietly. It makes it easier to act and harder to question.

For public safety, that means quicker stops, more scans, and less resistance because it’s “just standard procedure.” For workplaces, it means employees and customers subjected to automated judgments with little recourse.

If you’re serious about working smarter, not harder with AI, you need a different definition of productivity. It can’t just be “fewer clicks” or “more decisions per hour.” It has to include:

  • Error cost: What happens when the system is wrong?
  • Trust cost: How does this affect how people feel about your brand, product, or organization?
  • Oversight cost: Who has the power and responsibility to say, “Stop, this output doesn’t make sense”?

A facial recognition app that saves agents time but creates wrongful stops and detentions is efficient on paper and corrosive in reality.
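
A back-of-the-envelope sketch makes the point. Every number below is invented purely to show the shape of the math, but notice how quickly “saves time” flips once error cost is on the ledger:

```python
# Illustrative only: all numbers are made up to show the shape of the
# calculation, not real figures for any system.

decisions_per_hour = 120           # "productivity" as usually measured
time_saved_per_decision = 4 / 60   # hours of manual work avoided per decision

error_rate = 0.02                  # how often the system is simply wrong
cost_per_error = 40                # hours of rework, appeals, and lost trust per error

gross_gain = decisions_per_hour * time_saved_per_decision               # 8.0 hours saved per hour
expected_error_cost = decisions_per_hour * error_rate * cost_per_error  # 96 hours of fallout per hour

net = gross_gain - expected_error_cost
print(f"gross gain: {gross_gain:.1f}h, error cost: {expected_error_cost:.1f}h, net: {net:.1f}h")
```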


How Responsible Teams Should Use AI: Practical Guardrails

There’s a better way to bring AI into serious domains—whether public safety, HR, finance, or operations. The Gutiérrez case highlights exactly where the guardrails need to go.

Here are concrete principles teams can use when deploying AI at work.

1. AI as Assistant, Not Judge

AI systems should propose, not decide.

  • Use AI to surface likely matches, patterns, or anomalies.
  • Require a human to confirm before any high‑impact action (suspension, denial, detention, termination) is taken.
  • Design interfaces that make it clear the AI is one signal among many, not the final authority.

If Mobile Fortify were designed and governed this way, a “match” would trigger more careful vetting, not override someone’s own documentation.
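
Here’s one hedged sketch of what “propose, not decide” can look like in code: the system stages a high-impact action, and nothing runs until a named human signs off. The names are hypothetical, not anyone’s real API:

```python
# A minimal sketch of "propose, don't decide": the system can only *stage*
# an action; nothing executes without a named human confirming it.

from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Proposal:
    action: str                        # e.g. "deny application", "suspend account"
    model_rationale: str               # why the model suggested it
    confirmed_by: str | None = None    # which human signed off
    confirmed_at: datetime | None = None


def execute(proposal: Proposal) -> None:
    if proposal.confirmed_by is None:
        # High-impact actions cannot run on model output alone.
        raise PermissionError("No human confirmation recorded; refusing to act.")
    print(f"Executing '{proposal.action}', confirmed by {proposal.confirmed_by}")


p = Proposal(action="deny application", model_rationale="score below cutoff")
p.confirmed_by, p.confirmed_at = "j.rivera", datetime.now(timezone.utc)
execute(p)
```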

2. Clear Limits on Use Cases

Tools built for one context tend to creep into others. That’s where abuse starts.

  • Define where and when an AI system can be used.
  • Explicitly ban use in sensitive contexts (e.g., scanning random passersby, using productivity scores to make firing decisions without review).
  • Document and train on these limits, and audit them periodically.

In your company, that might mean: AI can help summarize performance feedback, but cannot be the sole input into promotion or termination decisions.
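
In code, that limit can be as blunt as an allowlist that fails closed. A minimal sketch, assuming you define the contexts yourself (these names are illustrative):

```python
# A minimal sketch of a use-case allowlist. Context names are hypothetical;
# the point is that permitted uses are explicit and checked in code, not
# left to individual discretion in the moment.

ALLOWED_CONTEXTS = {
    "summarize_performance_feedback",
    "draft_meeting_notes",
}


def check_use(context: str) -> None:
    """Fail closed: anything not explicitly allowed is refused."""
    if context not in ALLOWED_CONTEXTS:
        raise PermissionError(f"AI use not permitted in context: {context!r}")


check_use("summarize_performance_feedback")   # fine
check_use("termination_decision")             # raises PermissionError
```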

3. Bias and Error Audits as Ongoing Work

Bias checks aren’t a one‑time compliance task.

  • Track misclassifications and complaints.
  • Segment error rates by demographic groups where appropriate and lawful.
  • Pause or restrict use if certain error thresholds are crossed.

This is especially crucial for tools that affect people’s work, freedom, or access to essential services.
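
Here’s a minimal sketch of that kind of audit, assuming you log each reviewed decision with its group and whether it turned out to be correct. The field names and the 5% threshold are illustrative, not a standard:

```python
# A minimal sketch of an ongoing error audit over reviewed decisions.

from collections import defaultdict

MAX_ERROR_RATE = 0.05  # pause or restrict use above this, per group (illustrative)


def audit(decisions: list[dict]) -> dict[str, float]:
    """decisions: [{"group": "...", "correct": bool}, ...] from reviewed cases."""
    totals, errors = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        errors[d["group"]] += 0 if d["correct"] else 1

    rates = {group: errors[group] / totals[group] for group in totals}
    for group, rate in rates.items():
        if rate > MAX_ERROR_RATE:
            print(f"PAUSE: error rate {rate:.1%} for {group} exceeds threshold")
    return rates
```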

4. Build Appeal and Transparency into the Workflow

If someone is affected by an AI‑driven decision, they should:

  • Know that AI was used.
  • Have an understandable explanation of the decision (no buzzword salads).
  • Have a clear path to challenge or appeal it.

For internal productivity tools, this could be as simple as: “Here’s how this AI score was calculated, and here’s how to flag issues if you think it’s wrong.”
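
A hedged sketch of the kind of record that makes this possible. The schema is mine, not a standard, but every AI-assisted decision should leave something like it behind:

```python
# A minimal sketch of what an affected person should be able to see for any
# AI-assisted decision. Field names are illustrative, not a standard schema.

from dataclasses import dataclass


@dataclass
class DecisionRecord:
    decision: str                  # e.g. "expense report held for review"
    ai_was_used: bool              # disclosed, not hidden
    explanation: str               # plain-language reason, no buzzword salad
    inputs_considered: list[str]   # what signals fed the decision
    how_to_appeal: str             # a real channel with a real human behind it
    reviewer: str | None           # who signed off, if anyone


record = DecisionRecord(
    decision="expense report held for review",
    ai_was_used=True,
    explanation="The amount and vendor differed sharply from your past reports.",
    inputs_considered=["amount", "vendor history", "submission time"],
    how_to_appeal="Reply to this notice or contact finance-review@yourcompany.example",
    reviewer=None,
)
```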

5. Align AI Projects With Values, Not Just Metrics

This sounds soft, but it’s not. Values show up in very practical choices:

  • Do you optimize for fewer support tickets, or for more satisfied customers—even if it takes more staff time?
  • Do you optimize for more scans per hour, or for fewer false positives?

If your stated values include fairness, trust, and respect, your AI deployments should visibly protect those, even when it costs a bit of raw efficiency.


What This Means for Anyone Using AI to Work Smarter

The Mobile Fortify story sits at the uncomfortable intersection of AI, technology, work, and productivity.

On one side, you’ve got a powerful real‑time tool that clearly increases operational speed for agents in the field. On the other, you’ve got a U.S. citizen describing the experience as being “kidnapped” and civil liberties advocates calling it incompatible with a free society.

Most businesses will never build something as heavy‑duty as a nationwide facial recognition system. But the logic behind Mobile Fortify—“we can act faster now, so we will”—shows up everywhere AI is being adopted:

  • In hiring screens that silently filter out people.
  • In workplace monitoring tools that score employees by keystrokes instead of outcomes.
  • In recommendation engines that nudge user behavior without their informed consent.

If your goal is to work smarter, not harder with AI, the bar is higher than “it works and saves time.” Smarter means:

  • The system makes your team more capable, not less thoughtful.
  • People affected by your AI—employees, customers, users—retain agency and dignity.
  • You could explain how your AI is used to a skeptical friend and feel comfortable standing behind it.

The question isn’t whether AI will shape public safety, work, and daily life. It already does.

The real question is: Are we willing to trade human judgment and rights for a faster workflow?

The teams that say no—and design AI systems with guardrails, transparency, and respect built in—will be the ones people trust with the next generation of “smart” tools.