AI Fraud at Scale: The $25M “Tutoring” Pipeline

AI in Government & Public Sector · By 3L3C

A $25M “tutoring” network shows how AI branding and ad-tech loopholes scale fraud. Learn the detection patterns public agencies can act on.

Tags: AI in cybersecurity · Fraud detection · Platform integrity · Public sector risk · Ad tech security · Vendor risk

$25 million is a serious revenue line for any online business. It’s also a bright, blinking warning sign when that business sits in a category most ad platforms claim they don’t allow: services that “enable dishonest behavior.”

Brian Krebs’ investigation into the Nerdify/Geekly ecosystem reads like an internet-era crime pattern you’ve probably seen in other contexts: constant rebrands, disposable corporate entities, keyword-driven paid acquisition, and a public-facing “we don’t condone cheating” message that collapses the moment a customer asks for deliverables.

For our AI in Government & Public Sector series, this story matters for a bigger reason than academic integrity. It’s a case study in how modern fraud operations use AI branding, ad-tech loopholes, and cross-border corporate structures to scale deception—then route the profits into adjacent ecosystems that may include influence efforts or even defense-related R&D. If you’re responsible for public-sector security, procurement risk, student services, or platform integrity, the pattern here should feel uncomfortably familiar.

What this $25M network shows about AI-enabled fraud

This operation isn’t “about AI writing essays.” The key lesson is that AI is being used as a narrative shield—a way to repackage contract cheating as “tutoring” or “AI assistance,” keep customer acquisition flowing, and stay one step ahead of enforcement.

The reporting describes a network of “nerd/geek”-branded sites promoted heavily through search ads for high-intent queries like exam help and term paper assistance. When an ad account is shut down, the group reportedly spins up:

  • A new business entity (often with a new front person)
  • A new Google Ads account
  • A fresh domain in the same brand family
  • The same keyword strategy and funnel

That cycle is the operational heartbeat of fraud at scale: treat compliance as a cost of doing business, and treat identities as disposable.

The myth: “AI killed essay mills”

Most people assume generative AI made human-run essay mills obsolete. The reality is simpler: it changed their marketing, not their incentives.

When buyers want “plagiarism-free” and “AI-free” text, a human supply chain still sells. When buyers want speed and volume, a hybrid model sells. Either way, the business survives by reframing itself as legitimate support.

For defenders, the takeaway isn’t academic. It’s a playbook:

Fraud operations increasingly present themselves as “AI companies” because AI is trusted, hard to regulate cleanly, and easy to pitch to investors and ad platforms.

The ad-tech weak point: enforcement that can’t keep up

The most sobering thread in the reporting is how the network appears to have repeatedly bought its way to the top of search results despite policy bans.

Public-sector leaders often equate “AI threats” with malware generation or deepfakes. Those are real. But ad-tech abuse is quietly one of the highest-ROI attack surfaces because it sits upstream of everything:

  • It captures users at the moment of intent
  • It launders legitimacy (“it’s on the first page, it must be real”)
  • It scales globally with minimal infrastructure

Why this matters to government and public-sector institutions

Government touches this problem in more places than people think:

  • Public universities and scholarship programs: Contract cheating undermines outcomes, accreditation confidence, and workforce readiness.
  • Immigration and visa-linked education pathways: The article describes allegations around international student recruitment and visa outcomes—an area where fraud risk spills into public administration.
  • Platform and consumer protection enforcement: Many governments regulate advertising, consumer deception, and online harms, but enforcement lags behind corporate shell games.

If you’re in a public-sector role, the right mental model is: ad platforms are critical infrastructure for fraud prevention, whether they acknowledge it or not.

The pattern behind the names: rebrands, shells, and “clean” faces

The reporting lays out a familiar laundering stack: multiple jurisdictions, shifting company names, and recurring operators.

You don’t have to prove every tie to learn from the structure. The structure is the signal.

Fraud at scale relies on identity churn

The alleged approach—new entities, new domains, new ads accounts—works because many integrity systems still treat badness as a property of a single account or domain.

Modern operations behave more like botnets:

  • A “brand cluster” (Nerdify, Geekly variants)
  • Shared funnels (SMS-based quoting, rapid fulfillment)
  • Shared acquisition (search keywords)
  • Distributed payment and corporate plumbing

Defenders should assume that if one node is removed, the cluster will self-heal.
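
One way to build that assumption into tooling is to link domains into clusters by shared infrastructure before any takedown, so removing one node automatically flags its siblings. A minimal union-find sketch, assuming you already collect per-domain observations (every domain, field name, and value below is a hypothetical stand-in for what crawling, DNS, and ad-transparency data would provide):

```python
from collections import defaultdict

# Hypothetical per-domain observations. In practice these would come from
# crawling, DNS records, TLS certificates, and ad transparency data.
observations = {
    "nerd-tutors-example.com": {"analytics_id": "UA-111", "host_ip": "203.0.113.5"},
    "geek-help-example.com":   {"analytics_id": "UA-111", "host_ip": "198.51.100.9"},
    "study-aid-example.com":   {"analytics_id": "UA-222", "host_ip": "198.51.100.9"},
    "flower-shop-example.com": {"analytics_id": "UA-999", "host_ip": "192.0.2.77"},
}

# Invert the data: which domains share each concrete infrastructure value?
value_to_domains = defaultdict(set)
for domain, attrs in observations.items():
    for key, value in attrs.items():
        value_to_domains[(key, value)].add(domain)

# Union-find over domains: any shared value links two domains.
parent = {d: d for d in observations}

def find(d: str) -> str:
    while parent[d] != d:
        parent[d] = parent[parent[d]]  # path compression
        d = parent[d]
    return d

for linked in value_to_domains.values():
    first, *rest = linked
    for other in rest:
        parent[find(other)] = find(first)

# Removing one member of a cluster should put the rest on a watchlist,
# not close the case.
clusters = defaultdict(list)
for d in observations:
    clusters[find(d)].append(d)
print([sorted(c) for c in clusters.values()])
```

Here the first three sites end up in one cluster (two share an analytics ID, two share a host), while the unrelated site stays alone.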

The public-sector echo: procurement and vendor risk

Here’s where I’ll take a stance: most vendor due diligence is built for stable companies, not for adversarial businesses designed to rotate identities.

If you’re a government agency or a public university buying software, staffing services, “student success” tools, marketing services, or AI writing detection, you should assume some suppliers are:

  • Reincarnations of earlier entities
  • Using nominee directors or front people
  • Minimizing traceable operational history

This isn’t paranoia. It’s basic counter-fraud.

What AI-driven cybersecurity can actually do here

AI won’t fix this on its own. But applied correctly, AI-driven cybersecurity and fraud analytics are well-suited to this kind of problem because the operation leaves patterns—lots of them.

1) Detect the cluster, not the domain

An effective detection strategy looks for shared behaviors across a network, such as:

  • Reused analytics IDs or tag managers
  • Shared hosting, DNS patterns, certificate reuse, or registrar behaviors
  • Copy-pasted page templates, UI flows, and form structures
  • Repeating ad copy structures and keyword sets
  • Common funnel mechanics (e.g., SMS quote flows)

This is where machine learning helps: it can score similarity across many weak signals that humans won’t connect quickly.
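
A minimal sketch of that kind of weak-signal scoring, assuming the per-site signals have already been extracted; the signal names and weights are illustrative assumptions, not tuned values (a production system would learn weights from clusters confirmed by investigators):

```python
# Illustrative weak signals and weights; a shared analytics ID is a much
# stronger tie than a shared registrar.
SIGNAL_WEIGHTS = {
    "analytics_id":       5.0,
    "tls_cert_hash":      4.0,
    "page_template_hash": 3.0,
    "sms_quote_flow":     2.0,
    "registrar":          0.5,
}

def similarity(a: dict, b: dict) -> float:
    """Weighted overlap of signals observed on both sites, normalized to
    0..1 against the weight of everything that could have matched."""
    shared = possible = 0.0
    for signal, weight in SIGNAL_WEIGHTS.items():
        if signal in a and signal in b:
            possible += weight
            if a[signal] == b[signal]:
                shared += weight
    return shared / possible if possible else 0.0

site_a = {"analytics_id": "UA-111", "registrar": "RegistrarOne",
          "page_template_hash": "abc123"}
site_b = {"analytics_id": "UA-111", "registrar": "RegistrarTwo",
          "page_template_hash": "abc123"}
print(similarity(site_a, site_b))  # ~0.94: the strong signals match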

2) Build “rebrand readiness” into enforcement

If enforcement takes weeks and rebrands take hours, the bad actor wins.

For platform integrity teams and regulators, the goal is to shorten the loop:

  • Automated brand-family mapping: “If this domain is removed, watch these 50 likely siblings.” (A toy sketch follows this list.)
  • Risk scoring for new advertisers based on infrastructure similarity and funnel behaviors.
  • Continuous monitoring of high-risk keyword categories (exam help, paper writing, visa guarantees, “guaranteed approval” language).
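
A toy version of the brand-family mapping idea, assuming a feed of newly registered domains to rank against a removed one; the brand tokens and scoring constants are invented for illustration:

```python
from difflib import SequenceMatcher

BRAND_TOKENS = {"nerd", "geek", "study", "essay", "homework"}  # illustrative

def sibling_score(removed: str, candidate: str) -> float:
    """Heuristic: raw string similarity of the domain names, plus a bonus
    when both draw on the same brand-token family."""
    base = SequenceMatcher(None, removed, candidate).ratio()
    removed_tokens = {t for t in BRAND_TOKENS if t in removed}
    candidate_tokens = {t for t in BRAND_TOKENS if t in candidate}
    return base + 0.3 * len(removed_tokens & candidate_tokens)

removed = "nerdify-example.com"
new_registrations = [
    "nerdyhelp-example.net",
    "geekly-example.org",
    "flowershop-example.com",
]
# Review the top of this ranking first when the original domain is removed.
for d in sorted(new_registrations, key=lambda c: sibling_score(removed, c),
                reverse=True):
    print(round(sibling_score(removed, d), 2), d)
```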

3) Treat ad abuse as a security incident

Most organizations treat ad abuse as a policy issue or trust-and-safety work. It belongs in the security program.

A practical public-sector stance:

  • Put ad-driven fraud into your threat model.
  • Include ad platform signals in incident response playbooks.
  • Coordinate with legal and communications early; these cases turn reputational fast.

4) Apply anomaly detection to money movement and customer flows

Even without seeing bank data, defenders can detect commercialization patterns that don’t match legitimate tutoring:

  • Unusually high conversion on “term paper” queries
  • Short time-to-delivery promises at scale
  • Refund and dispute spikes
  • Customer language that references “submit-ready” work

For institutions (universities, scholarship boards, training programs), anomaly detection can also work internally (a sketch follows this list):

  • Sudden grade jumps in specific modules
  • Writing-style variance across a student’s assignments over the semester
  • Multiple students submitting structurally similar arguments with different wording
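
Most of the external and internal signals above reduce to one mechanic: compare the latest observation against the entity’s own history. A minimal z-score sketch, with invented data:

```python
from statistics import mean, stdev

def flag_spike(series: list[float], threshold: float = 3.0) -> bool:
    """True when the latest observation deviates sharply from the entity's
    own history. The same check works for refund counts, dispute rates,
    conversion on high-risk keywords, or per-module grade averages."""
    history, latest = series[:-1], series[-1]
    if len(history) < 3:
        return False          # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu   # any change from a flat baseline is notable
    return abs(latest - mu) / sigma >= threshold

weekly_refunds = [3, 2, 4, 3, 2, 3, 19]  # invented weekly refund counts
print(flag_spike(weekly_refunds))        # True: the final week spikes
```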

The goal isn’t to “catch everyone.” It’s to reduce the ROI of organized cheating.

Public-sector playbook: what to do next week

If you’re responsible for cybersecurity, compliance, student integrity, or digital services, here are concrete steps that don’t require a moonshot project.

Immediate actions (1–2 weeks)

  1. Add “ad-tech abuse” to your risk register for fraud and cyber-enabled deception.
  2. Inventory your exposure:
    • Are your students/staff targeted by “tutoring” ads?
    • Do your programs rely on online assessments vulnerable to contract cheating?
  3. Create a reporting channel so staff and students can forward suspicious ads and domains.

Near-term actions (30–60 days)

  1. Deploy brand and domain monitoring for high-risk keywords tied to your institution name and programs.
  2. Strengthen identity and assessment controls:
    • More oral checks for high-stakes work
    • Randomized question banks
    • Proctoring for specific modules (used carefully, with privacy constraints)
  3. Stand up a cross-functional integrity group (IT security + student services + legal + comms). These cases cross boundaries.

Medium-term actions (quarterly planning)

  1. Adopt AI-driven fraud detection that can correlate infrastructure, content similarity, and behavioral signals across domains.
  2. Update vendor due diligence to explicitly assess “identity churn” risk (shell entities, nominee directors, recent incorporations tied to high-risk categories); a toy scoring sketch follows this list.
  3. Run a tabletop exercise: “A cheating service targets our students via ads; a journalist asks for comment; what do we do?”
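
As a sketch of how “identity churn” could be assessed concretely, here is a toy additive risk score; every field name, weight, and threshold is an assumption to adapt to your own due-diligence criteria:

```python
from datetime import date

AS_OF = date(2026, 1, 15)  # hypothetical assessment date

def churn_risk_score(vendor: dict) -> int:
    """Toy additive score for identity-churn red flags; the fields,
    weights, and escalation threshold are illustrative, not a standard."""
    score = 0
    if (AS_OF - vendor["incorporated"]).days < 365:
        score += 2  # very recent incorporation in a high-risk category
    if vendor.get("director_in_prior_dissolved_entity"):
        score += 3  # recurring operator behind a fresh shell
    if vendor.get("shared_registered_agent_address"):
        score += 1  # mass-registration address
    if vendor.get("prior_brand_for_same_service"):
        score += 3  # apparent rebrand of an earlier entity
    return score    # e.g., escalate to manual review at 4 or above

vendor = {
    "incorporated": date(2025, 6, 1),
    "director_in_prior_dissolved_entity": True,
    "prior_brand_for_same_service": False,
}
print(churn_risk_score(vendor))  # 5 -> escalate
```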

A helpful rule: if your controls only work when the adversary keeps the same name, your controls don’t work.

Where this is headed in 2026

This case sits at the intersection of three trends that will accelerate next year:

  1. AI-washed legitimacy: More services will label themselves “AI tutoring” or “AI study support” while selling prohibited outcomes.
  2. Policy whack-a-mole: Platform bans won’t hold if enforcement is manual and identity is cheap.
  3. Cross-domain spillover: The same growth tactics used for cheating show up in scams, influence ops, and gray-market services that target government and public institutions.

Public-sector AI strategy often focuses on service delivery and efficiency. That’s necessary, but incomplete. AI also needs to be treated as a defensive capability—a way to spot coordinated deception, map networks, and respond faster than fraudsters can rebrand.

What would change if we measured ad platforms and online identity systems the way we measure other critical infrastructure—by how quickly they detect coordinated abuse, not by how good their policies look on paper?