ICE's Mobile Fortify app shows how AI can boost productivity while quietly eroding rights. Here's what that means for public safety and your own AI projects.
Most companies get AI ethics wrong because they treat it as a PR problem, not a product decision.
Here's what that looks like in the real world: a 23-year-old U.S. citizen in Chicago, walking home from the gym, ends up handcuffed in the back of an unmarked SUV. No warrant, no clear probable cause. Agents point a phone at his face, an AI system runs his photo against a 200-million-image database, and only then do they decide he can go.
That's not a hypothetical. That's how federal agents used ICE's Mobile Fortify facial recognition app on Jesus Gutiérrez, a U.S. citizen. And it's a preview of where AI and public safety are headed if we don't get serious about responsible adoption.
This matters to anyone working with AI and technology, especially leaders who see AI as the next big productivity booster. The same mindset that drives you to automate workflows and streamline work can, if misapplied, create powerful tools that are efficient for the organization and disastrous for people.
In this article, I'll break down what happened, how Mobile Fortify actually works, why it's a warning sign for AI adoption everywhere, and what responsible teams can do differently, whether you're running a startup, a legal practice, or an enterprise AI rollout.
What Mobile Fortify Really Shows About AI in Public Safety
AI facial recognition in public safety isn't a theoretical debate. It's already embedded in field operations.
Mobile Fortify is an app deployed on work phones used by ICE and CBP officers. An agent points the phone at someone's face; the app sends that image to multiple federal and state databases and returns:
- Name
- Date of birth
- "Alien number" (if on file)
- Immigration and deportation status
Internal documents show it can search around 200 million images pulled from government systems, including border records and law enforcement repositories.
The reality? It takes a system originally meant to verify travelers at borders and turns it inward, toward people on U.S. streets, bus stops, and neighborhoods. Even DHS documents admit what's happening: photos "could be that of someone other than an alien, including U.S. citizens or lawful permanent residents."
So while the official justification is public safety and immigration enforcement, the actual effect is a live, nationwide ID checkpoint that can be pointed at anyone who "looks suspicious."
That's fast. It's efficient. It's also exactly the kind of "smart" system that can erode rights in the background while everyone else is talking about productivity gains.
A U.S. Citizen in the Back of an SUV: What Happened to Jesus Gutiérrez
Here's the thing about powerful AI tools: they don't fail in the abstract. They fail on specific people.
Jesus Gutiérrez, 23, was walking home from a gym in Chicago when he noticed a gray Cadillac SUV with no plates. The car stopped. Four federal immigration officials confronted him.
They asked where he was going, where he came from, and whether he had ID. He didn't. He's a U.S. citizen and told them so, trying to pull up proof on his phone. Instead of waiting, they handcuffed him and put him in the SUV.
When there was no physical ID to check, they moved to Plan B: they took a photo of his face and ran it through Mobile Fortify. A short time later, the result came back confirming his status. As he recalled, one agent said, "Oh yeah, he's right. He's saying the right thing. He does got papers."
Then they drove him around for roughly an hour and finally let him go. No apology. No explanation. Just laughter from the agents, according to his account.
For days, he didnāt leave the house.
From a workflow perspective, Mobile Fortify "worked." It confirmed identity quickly. From a rights and dignity perspective, the process was broken from the start:
- Questionable basis for the stop (he's of Mexican descent; there was no obvious violation cited).
- Immediate use of handcuffs despite his claim of citizenship.
- Reliance on biometric AI over the person's own testimony.
This is what AI misuse looks like in practice: a system that's technically functional but operationally harmful.
When AI Becomes the Boss: Over-Trusting Biometric "Matches"
The Mobile Fortify story exposes a mistake I see in a lot of AI deployments at work: treating AI outputs as definitive, not inputs to judgment.
According to Rep. Bennie Thompson, ICE officials have described a Mobile Fortify match as a "definitive" determination of a person's status, so definitive that an officer may ignore evidence of American citizenship, including a birth certificate, if the app says the person is an alien.
Translate that logic into your environment:
- A hiring AI that rejects a qualified candidate, and HR never double-checks.
- A fraud model that flags a customer, and support treats them as guilty by default.
- A productivity scoring tool that labels an employee "low performing," and managers stop giving them opportunities.
Different domain, same risk: when AI becomes the boss, human judgment withers.
AI systems, especially biometric ones, aren't neutral:
- Facial recognition has a documented history of higher error rates for people of color, particularly women and people with darker skin tones.
- Training data is often skewed toward certain demographics.
- Edge cases and "unintended uses" (like turning a border-control tool into a street-level ID system) rarely get tested thoroughly.
The ACLU calls Mobile Fortify a "glitchy, privacy-destroying technology." Even if you think that's strong language, there's a core truth there: biometric AI is brittle, and treating its outputs as absolute truth is a design decision, not a technical necessity.
If you're adopting AI at work, this is the lesson: never design your process so that the AI is allowed to override clear, real-world evidence without human review.
The Productivity Trap: When "Efficient" Means "Unaccountable"
Most teams bring AI into their workflow for one reason: productivity. Faster checks. Fewer manual steps. Less paperwork.
That's exactly what Mobile Fortify offers to field agents:
- Instant access to multiple databases
- No need to radio a dispatcher to run IDs
- A clean, simple interface on a phone
From a process standpoint, it's brilliant: all the friction points of traditional checks are smoothed out. But that's the trap.
When an AI system is:
- Faster than humans
- Opaque in how it reaches results
- Embedded in highāstakes decisions
...it tends to shift power quietly. It makes it easier to act and harder to question.
For public safety, that means quicker stops, more scans, and less resistance because it's "just standard procedure." For workplaces, it means employees and customers subjected to automated judgments with little recourse.
If you're serious about working smarter, not harder with AI, you need a different definition of productivity. It can't just be "fewer clicks" or "more decisions per hour." It has to include:
- Error cost: What happens when the system is wrong?
- Trust cost: How does this affect how people feel about your brand, product, or organization?
- Oversight cost: Who has the power and responsibility to say, "Stop, this output doesn't make sense"?
A facial recognition app that saves agents time but creates wrongful stops and detentions is efficient on paper and corrosive in reality.
How Responsible Teams Should Use AI: Practical Guardrails
There's a better way to bring AI into serious domains, whether public safety, HR, finance, or operations. The Gutiérrez case highlights exactly where the guardrails need to go.
Here are concrete principles teams can use when deploying AI at work.
1. AI as Assistant, Not Judge
AI systems should propose, not decide.
- Use AI to surface likely matches, patterns, or anomalies.
- Require a human to confirm before any high-impact action (suspension, denial, detention, termination) is taken.
- Design interfaces that make it clear the AI is one signal among many, not the final authority.
If Mobile Fortify were designed and governed this way, a "match" would trigger more careful vetting, not override someone's own documentation.
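Here's a minimal sketch of the "assistant, not judge" pattern in code. The CaseFile and ModelSignal structures and the field names are hypothetical, invented for illustration; this is not how any real enforcement or HR system is built.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class ModelSignal:
    """One AI output among many, never the final authority."""
    source: str   # e.g. "face_match_v2" (hypothetical model name)
    score: float  # model confidence, not ground truth


@dataclass
class CaseFile:
    subject_id: str
    signals: list[ModelSignal] = field(default_factory=list)
    subject_evidence: list[str] = field(default_factory=list)  # documents the person provides
    reviewer: Optional[str] = None
    reviewer_approved: bool = False


def can_take_high_impact_action(case: CaseFile) -> bool:
    """Gate high-impact actions (suspension, denial, termination) behind human sign-off.

    An AI match on its own is never sufficient: a named reviewer must approve,
    and the evidence the affected person supplied stays attached to the case
    so it is always visible alongside the model's score.
    """
    has_signal = bool(case.signals)
    has_human_signoff = case.reviewer is not None and case.reviewer_approved
    return has_signal and has_human_signoff
```

The design choice is the point: the model can only open a case, and nothing downstream can act until a human has looked at both the score and the person's own evidence.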
2. Clear Limits on Use Cases
Tools built for one context tend to creep into others. That's where abuse starts.
- Define where and when an AI system can be used.
- Explicitly ban use in sensitive contexts (e.g., scanning random passersby, using productivity scores to make firing decisions without review).
- Document and train on these limits, and audit them periodically.
In your company, that might mean: AI can help summarize performance feedback, but cannot be the sole input into promotion or termination decisions.
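One way to make those limits enforceable rather than aspirational is to encode them as an explicit allowlist that calling code has to pass through. A small sketch, with hypothetical tool and context names:

```python
# Hypothetical policy table: which AI tools may be used in which contexts.
ALLOWED_USES = {
    "resume_screener": {"shortlist_suggestions"},
    "performance_summarizer": {"feedback_summaries"},
    # Deliberately absent: promotion or termination decisions.
}


class UseNotPermitted(Exception):
    pass


def check_use(tool: str, context: str) -> None:
    """Raise if a tool is invoked outside its documented, approved contexts."""
    allowed = ALLOWED_USES.get(tool, set())
    if context not in allowed:
        raise UseNotPermitted(
            f"{tool!r} is not approved for {context!r}; see the AI use policy."
        )


# Summarizing feedback is allowed; deciding a termination is not.
check_use("performance_summarizer", "feedback_summaries")        # passes
# check_use("performance_summarizer", "termination_decision")    # raises UseNotPermitted
```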
3. Bias and Error Audits as Ongoing Work
Bias checks aren't a one-time compliance task.
- Track misclassifications and complaints.
- Segment error rates by demographic groups where appropriate and lawful.
- Pause or restrict use if certain error thresholds are crossed.
This is especially crucial for tools that affect people's work, freedom, or access to essential services.
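A sketch of the kind of ongoing audit this implies, assuming you already log predictions, actual outcomes, and (where lawful and appropriate) group labels. The threshold values are illustrative, not recommendations:

```python
from collections import defaultdict


def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples from your decision log."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}


def should_pause(rates, max_error=0.05, max_gap=0.02):
    """Pause or restrict use if any group's error rate, or the gap between the
    best- and worst-served groups, crosses the thresholds your team agreed on."""
    worst, best = max(rates.values()), min(rates.values())
    return worst > max_error or (worst - best) > max_gap
```

Run it on a schedule, not once at launch: the whole point of the section above is that drift, complaints, and new use cases keep changing the picture.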
4. Build Appeal and Transparency into the Workflow
If someone is affected by an AI-driven decision, they should:
- Know that AI was used.
- Have an understandable explanation of the decision (no buzzword salads).
- Have a clear path to challenge or appeal it.
For internal productivity tools, this could be as simple as: "Here's how this AI score was calculated, and here's how to flag issues if you think it's wrong."
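In code, that can be as lightweight as attaching an explanation and an appeal route to every AI-assisted decision record. A sketch with hypothetical field names and a made-up contact address:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    subject: str
    decision: str
    ai_was_used: bool
    explanation: str     # plain-language reasons, not model internals
    appeal_contact: str  # where the affected person can challenge the decision
    created_at: str


def record_decision(subject: str, decision: str, explanation: str) -> DecisionRecord:
    return DecisionRecord(
        subject=subject,
        decision=decision,
        ai_was_used=True,
        explanation=explanation,
        appeal_contact="ai-review@yourcompany.example",  # hypothetical mailbox
        created_at=datetime.now(timezone.utc).isoformat(),
    )
```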
5. Align AI Projects With Values, Not Just Metrics
This sounds soft, but it's not. Values show up in very practical choices:
- Do you optimize for fewer support tickets, or for more satisfied customers, even if it takes more staff time?
- Do you optimize for more scans per hour, or for fewer false positives?
If your stated values include fairness, trust, and respect, your AI deployments should visibly protect those, even when it costs a bit of raw efficiency.
What This Means for Anyone Using AI to Work Smarter
The Mobile Fortify story sits at the uncomfortable intersection of AI, technology, work, and productivity.
On one side, you've got a powerful real-time tool that clearly increases operational speed for agents in the field. On the other, you've got a U.S. citizen describing the experience as being "kidnapped" and civil liberties advocates calling it incompatible with a free society.
Most businesses will never build something as heavy-duty as a nationwide facial recognition system. But the logic behind Mobile Fortify ("we can act faster now, so we will") shows up everywhere AI is being adopted:
- In hiring screens that silently filter out people.
- In workplace monitoring tools that score employees by keystrokes instead of outcomes.
- In recommendation engines that nudge user behavior without their informed consent.
If your goal is to work smarter, not harder with AI, the bar is higher than "it works and saves time." Smarter means:
- The system makes your team more capable, not less thoughtful.
- People affected by your AI (employees, customers, users) retain agency and dignity.
- You could explain how your AI is used to a skeptical friend and feel comfortable standing behind it.
The question isn't whether AI will shape public safety, work, and daily life. It already does.
The real question is: Are we willing to trade human judgment and rights for a faster workflow?
The teams that say no, and design AI systems with guardrails, transparency, and respect built in, will be the ones people trust with the next generation of "smart" tools.