
AI Defense for OT: Stop Hacktivists Before Impact
A lot of critical infrastructure incidents don’t start with exotic zero-days. They start with something boring: an Internet-facing remote access service and a password that never got changed.
That’s why the recent federal warning about pro-Russia hacktivist groups targeting US critical infrastructure is such a useful reality check for security leaders. The playbook described by the FBI/CISA/NSA coalition isn’t complicated—scan for exposed VNC, brute-force passwords, click around an HMI, and change settings. The results, however, can be complicated fast: downtime, physical damage, alarm suppression, and forced “hands-on” operator intervention.
This post sits inside our AI in Defense & National Security series for a reason. National security threats increasingly show up as operational disruption—not just data theft. And in OT environments where patching is slow and uptime rules everything, AI-driven threat detection isn’t a nice-to-have. It’s one of the few approaches that scales against persistent, opportunistic actors.
What the federal advisory is really telling OT leaders
The headline is “hacktivists,” but the operational lesson is simpler: minimally secured, Internet-exposed OT access paths are still everywhere—and they’re being actively hunted.
Federal and international authorities named four groups—Cyber Army of Russia Reborn (CARR), Z-Pentest, NoName057(16), and Sector16—as actively targeting water and wastewater, food and agriculture, and energy organizations in recent weeks. Even when individual operators are unsophisticated, the campaign effects can add up because the technique is repeatable and easy to share.
“Low sophistication” doesn’t mean “low risk”
Most companies get this wrong. They hear "unsophisticated" and mentally downgrade the problem.
But the advisory notes impacts ranging from disruption to physical damage (even if no injuries have been reported). In OT, a basic remote-control foothold can be enough. You don’t need stealthy persistence when you can:
- Change instrument settings
- Disable alarms
- Force a loss-of-view condition
- Restart or shut down devices
This is the part that should worry defense and national security stakeholders: the barrier to causing real-world disruption is often lower than the barrier to stealing data from a hardened enterprise network.
The “hacktivist” label can be camouflage
The advisory and outside analysis highlight another uncomfortable point: don’t take an adversary’s identity claims at face value. Some groups present as fringe collectives, but can have direct or indirect support relationships that complicate attribution and response.
Practically, this means your detection and response plan should assume the operator could be:
- A noisy volunteer trying to make a point, or
- A more capable actor borrowing the volunteer’s “brand” to muddy the waters
Your controls should work either way.
How these OT intrusions work (and why they’re hard to catch)
The core technique described is consistent and scalable: find exposed VNC, brute-force access, then operate the HMI.
Here’s the attack chain in plain terms:
1. Scan for Internet-facing devices with VNC ports open.
2. Spin up a temporary VPS to run password brute-forcing.
3. Use VNC to reach the host and confirm HMI connectivity.
4. Authenticate using default/weak credentials (or no password).
5. Observe and manipulate via the HMI: screenshots/recordings, parameter changes, account changes, device restarts, alarm disabling.
Why this slips past “traditional IT security”
Even mature SOC teams struggle with OT intrusions because:
- Visibility gaps: OT networks may not feed full-fidelity logs to the SOC.
- Protocol blind spots: ICS/SCADA protocols and HMI telemetry don’t always map cleanly to IT detections.
- Change is rare: OT environments can run “unchanged” for months, which is great for baselining—unless nobody is actually doing the baselining.
- Remote access exceptions: Emergency access paths (vendors, integrators, third-party support) quietly become permanent.
This is where AI earns its keep: not by magically “blocking hackers,” but by spotting deviations humans and signature-based tools miss.
Where AI fits: practical detection that works in OT realities
AI is most valuable in OT security when it does two things well: baseline normal operations and flag high-confidence anomalies with minimal operational disruption.
If you’re evaluating AI for OT security (or trying to justify budget), focus on use cases tied directly to the advisory’s playbook.
1) AI for anomalous remote access detection
Start with the easiest win: detecting weirdness in remote access behavior.
AI models can learn patterns like:
- Typical remote access hours (weekday vs. weekend)
- Common source geographies/ASNs for vendors
- Normal session length and frequency
- Which accounts typically access which HMIs
Then alert when something breaks the pattern:
- First-time source network hitting an HMI at 2:13 a.m.
- Repeated failed logins followed by a success
- VNC sessions that immediately trigger configuration views rather than routine monitoring
Even if the attacker uses “valid” credentials (because the password was weak), their behavior is rarely normal.
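To make this concrete, here's a minimal sketch of session-based anomaly scoring. All the names and thresholds (the baseline ASNs, work-hours window, and failed-login cutoff) are hypothetical placeholders; a real deployment would learn these from historical session logs rather than hard-code them.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RemoteSession:
    account: str
    source_asn: int
    start: datetime
    failed_logins_before_success: int

# Hypothetical baseline, stand-ins for values learned from history:
# known vendor source networks and normal access hours.
BASELINE = {
    "known_asns": {64500, 64501},   # example vendor ASNs
    "work_hours": range(6, 19),     # 06:00-18:59 local time
}

def anomaly_score(s: RemoteSession) -> int:
    """Score a remote session; higher means more suspicious."""
    score = 0
    if s.source_asn not in BASELINE["known_asns"]:
        score += 2   # first-time / unknown source network
    if s.start.hour not in BASELINE["work_hours"]:
        score += 1   # off-hours access
    if s.failed_logins_before_success >= 5:
        score += 3   # brute-force pattern preceding a "valid" login
    return score

# A 2:13 a.m. session from an unknown ASN after eight failed logins
s = RemoteSession("operator1", 65000, datetime(2025, 11, 3, 2, 13), 8)
print(anomaly_score(s))  # 6
```

In practice the scoring would be a learned model rather than fixed weights, but even this rule-shaped version illustrates the point: the credentials were "valid," yet the session is obviously abnormal.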
2) AI for HMI behavior and “operator intent” anomalies
Here’s what works in the real world: treat HMI interaction as a signal.
AI-assisted analytics can flag sequences that don’t match legitimate operator workflows, such as:
- Rapid navigation across unrelated control pages
- Abrupt toggling of setpoints outside typical bounds
- Alarm suppression actions that don’t align with maintenance windows
- Loss-of-view events correlated with remote sessions
A strong stance: alarm suppression should be treated as a security event until proven otherwise. It’s one of the clearest “safety impact” precursors in the advisory’s described actions.
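A sketch of that stance in code, assuming an HMI audit log of timestamped actions and a schedule of approved maintenance windows (both hypothetical formats; real HMI audit exports vary by vendor):

```python
from datetime import datetime

# Hypothetical event stream from HMI audit logs: (timestamp, action)
events = [
    (datetime(2025, 11, 3, 2, 14), "view:overview"),
    (datetime(2025, 11, 3, 2, 15), "setpoint_change:dosing_pump"),
    (datetime(2025, 11, 3, 2, 16), "alarm_suppress:high_pressure"),
]

# Approved maintenance windows (start, end); anything outside is suspect.
maintenance_windows = [
    (datetime(2025, 11, 4, 9, 0), datetime(2025, 11, 4, 12, 0)),
]

def in_window(ts, windows):
    return any(start <= ts <= end for start, end in windows)

def flag_events(events, windows):
    """Treat alarm suppression outside a maintenance window as a security event."""
    flags = []
    for ts, action in events:
        if action.startswith("alarm_suppress") and not in_window(ts, windows):
            flags.append((ts, action, "SECURITY: suppression outside maintenance"))
    return flags

for f in flag_events(events, maintenance_windows):
    print(f)
```

The 2:16 a.m. suppression gets flagged because no maintenance window covers it. The same pattern extends to setpoint changes and account modifications.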
3) AI for process anomaly detection (the safety backstop)
The highest-value OT detections often come from the process itself: flows, pressures, temperatures, chemical dosing, pump states.
Even if attackers hide in legitimate remote tooling, they still create physical effects. AI-based anomaly detection can watch for:
- Sensor readings drifting in ways inconsistent with physics or historical patterns
- Control loop instability after a configuration change
- Conflicts between commanded state and observed state
This is where cybersecurity meets defense-grade resilience: if you can detect unsafe state transitions quickly, you can contain impact even when perimeter controls fail.
Snippet-worthy truth: In OT, the process is your most honest telemetry—attackers can fake logs more easily than they can fake physics.
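One of the simplest physics-honest checks is the commanded-vs-observed comparison. A minimal sketch, assuming you can pull the commanded state (e.g., from the historian) and the observed state (e.g., from actual I/O readings) into comparable key-value maps; the device names here are invented:

```python
# Hypothetical snapshot of commanded vs. observed device state.
commanded = {"pump_3": "running", "valve_7": "open", "alarm_hp": "armed"}
observed  = {"pump_3": "stopped", "valve_7": "open", "alarm_hp": "disarmed"}

def state_conflicts(commanded, observed):
    """Return devices whose observed state contradicts the commanded state."""
    return {device: (want, observed.get(device))
            for device, want in commanded.items()
            if observed.get(device) != want}

print(state_conflicts(commanded, observed))
# {'pump_3': ('running', 'stopped'), 'alarm_hp': ('armed', 'disarmed')}
```

An attacker can tamper with one side of this comparison far more easily than both, which is exactly why it belongs in the safety backstop layer.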
4) AI to reduce SOC overload (so the alerts actually get handled)
Many critical infrastructure teams aren’t under-alerted. They’re over-alerted.
AI can help by correlating weak signals into a single, actionable incident:
- VNC exposed + login failures + new source IP + HMI parameter change + alarm disabled

That's not five tickets. That's one incident with a clear story.
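The correlation logic itself doesn't have to be exotic. A minimal sketch, grouping hypothetical weak signals by the asset they touch and escalating only when enough of them stack up (the signal types and threshold are illustrative, not a real product's schema):

```python
from collections import defaultdict

# Hypothetical weak signals, each tagged with the asset it touches.
signals = [
    {"asset": "hmi-12", "type": "vnc_exposed"},
    {"asset": "hmi-12", "type": "login_failures"},
    {"asset": "hmi-12", "type": "new_source_ip"},
    {"asset": "hmi-12", "type": "parameter_change"},
    {"asset": "hmi-12", "type": "alarm_disabled"},
    {"asset": "hmi-30", "type": "login_failures"},
]

def correlate(signals, threshold=3):
    """Group signals by asset; escalate one incident per asset over threshold."""
    by_asset = defaultdict(list)
    for s in signals:
        by_asset[s["asset"]].append(s["type"])
    return {asset: types for asset, types in by_asset.items()
            if len(types) >= threshold}

print(correlate(signals))
# hmi-12 becomes one incident with five correlated signals;
# hmi-30's lone failure stays a weak signal, not a ticket
```

ML-based correlation adds time windows, entity resolution, and learned weights on top, but the outcome the SOC needs is the same: one incident, one story.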
Hardening steps that matter this quarter (not someday)
You don’t need a multi-year modernization program to reduce the exact risk described in the advisory. You need a focused sprint aimed at exposure reduction and control validation.
Exposure reduction: shut the front door
If an OT asset is reachable from the public Internet, treat that as a defect.
Priorities:
- Eliminate Internet-facing VNC for OT networks. If you must use remote access, put it behind a hardened broker with MFA.
- Implement allowlisting for remote support sources (vendor IPs, jump hosts).
- Segment OT from IT with explicit, documented conduits—not ad hoc firewall rules nobody owns.
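Verifying the first priority is something you can script from outside the perimeter. A minimal sketch that checks whether any of your public IPs answers on the default VNC port; the addresses are documentation-range placeholders, and a real check would also cover 5901+ and use your actual external ranges:

```python
import socket

# Hypothetical list of your organization's public-facing IPs to verify.
PUBLIC_IPS = ["198.51.100.10", "198.51.100.11"]
VNC_PORT = 5900  # default VNC port; displays :1, :2, ... use 5901, 5902, ...

def vnc_exposed(ip, port=VNC_PORT, timeout=2.0):
    """Return True if the host accepts TCP connections on the VNC port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

for ip in PUBLIC_IPS:
    if vnc_exposed(ip):
        print(f"DEFECT: {ip} answers on VNC port {VNC_PORT}")
```

Run it from a host outside your network, on a schedule, and treat any hit exactly as the section says: as a defect, not a finding to triage someday.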
Identity and authentication: kill default and weak credentials
This is the unglamorous work that stops real incidents:
- Remove default passwords on HMIs, engineering workstations, and remote access tools
- Enforce MFA on any path that can reach an HMI
- Rotate credentials used by vendors and integrators; don’t let “shared passwords” live forever
If your environment can’t support MFA on certain legacy components, compensate with:
- Time-bound access approvals
- Jump hosts with session recording
- Strict allowlists and geofencing
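The first two compensating controls combine naturally: a grant that expires and is pinned to an approved source. A minimal sketch of that check, with invented user and IP values:

```python
from datetime import datetime, timedelta

# Hypothetical time-bound access grants for legacy components without MFA.
grants = {}

def grant_access(user, source_ip, hours=4):
    """Approve a user for a limited window, from one source address only."""
    grants[user] = {"source_ip": source_ip,
                    "expires": datetime.now() + timedelta(hours=hours)}

def check_access(user, source_ip):
    """Allow only unexpired grants coming from the approved source."""
    g = grants.get(user)
    return bool(g and g["source_ip"] == source_ip
                and datetime.now() < g["expires"])

grant_access("vendor_tech", "203.0.113.7", hours=2)
print(check_access("vendor_tech", "203.0.113.7"))   # True
print(check_access("vendor_tech", "203.0.113.99"))  # False: wrong source
```

The point isn't this particular code; it's that "emergency access" becomes auditable policy instead of a firewall rule nobody remembers approving.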
Safety-aware controls: separate “view” from “control”
The advisory highlights a key mitigation: separate and audit view vs. control functions.
That’s not bureaucracy. It’s safety engineering.
Practical patterns include:
- Role-based access where most remote users are read-only
- Dual-authorization for setpoint changes
- Session recording for any remote control action
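Dual-authorization, in particular, is easy to reason about as a small state machine: a change request is held until a second, distinct person approves it. A minimal sketch under that assumption (identifiers and the setpoint target are illustrative):

```python
# Hypothetical dual-authorization gate: a remote setpoint change only
# executes after a second, distinct operator approves it.
PENDING = {}

def request_change(change_id, requester, target, new_value):
    """Queue a setpoint change instead of applying it immediately."""
    PENDING[change_id] = {"requester": requester, "target": target,
                          "value": new_value, "approved_by": None}

def approve(change_id, approver):
    """Release a pending change; the approver must differ from the requester."""
    change = PENDING[change_id]
    if approver == change["requester"]:
        raise PermissionError("requester cannot self-approve")
    change["approved_by"] = approver
    return change  # in a real system, this is where the write is issued

request_change("c1", "remote_vendor", "dosing_pump.setpoint", 4.2)
result = approve("c1", "shift_supervisor")
print(result["approved_by"])  # shift_supervisor
```

Notice what this buys you against the advisory's playbook: a brute-forced VNC session alone can request a change, but it can't release one.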
Recovery planning: assume disruption, plan for fast restoration
For critical sectors, resilience is part of security.
Your plan should cover:
- Known-good configuration backups for HMIs and controllers
- Offline recovery procedures that operators can execute under pressure
- A rehearsed process for isolating remote access while keeping the plant running
I’ve found tabletop exercises are the fastest way to surface uncomfortable truths—like “we don’t know who owns that remote access server” or “we can’t restore that HMI image anymore.”
“People also ask” answers (for OT and security leaders)
Are VNC attacks really still a thing in 2025?
Yes. VNC remains common in industrial environments because it’s simple, familiar, and often embedded in older operational workflows. That also makes it a recurring target when it’s exposed to the Internet or protected by weak credentials.
If the attackers are “opportunistic,” why invest in AI?
Because opportunistic actors scale. They scan continuously, share tooling, and move to the next target quickly. AI helps by spotting anomalous access and process changes early—before impact becomes downtime or physical damage.
What’s the single highest-impact fix?
Remove public Internet exposure of OT remote access, then enforce strong authentication (MFA where possible). Most of the described playbook collapses if attackers can’t reach VNC and can’t brute-force credentials.
What defense and national security teams should take from this
The through-line for the AI in Defense & National Security series is that modern conflict blurs boundaries: geopolitics shows up in municipal services, food production, and regional energy operations. These campaigns don’t require elite tradecraft to create public anxiety and operational cost.
AI in cybersecurity is most credible when it’s tied to outcomes: fewer blind spots, faster detection of unsafe changes, and response that doesn’t depend on a perfect perimeter. If you operate or support critical infrastructure, the question isn’t whether you’ll see probing—it’s whether you’ll recognize the early signals before someone starts clicking around your HMI.
If you’re pressure-testing your OT security posture for 2026 budget planning, start with a simple exercise: map every remote path that can reach an HMI, then ask which of those paths would still be safe if a password leaked tonight. That answer will tell you exactly where AI-based monitoring and tighter controls should go next.