AI, Elections, and Work: Using Power Responsibly

AI & Technology | By 3L3C

New research shows AI can shift election preferences and corrupt polls. Here’s what that means for how you use AI at work—and how to protect your judgment.

AI and elections · AI productivity · AI ethics · public opinion · workplace technology

Most people using AI to boost productivity this year are focused on emails, slide decks, and code. Meanwhile, researchers are quietly showing that the same tools can shift voter preferences by several percentage points.

That tension sits at the heart of modern AI: the same systems that help you work smarter can also be tuned to manipulate what you think, how you vote, and even what the polls say about public opinion.

This matters because AI isn’t just another piece of technology. It’s a persuasion engine at scale. If you’re building products, running campaigns, or simply relying on AI at work, you need a clear mental model for both the upside and the risk.

In this post, part of our AI & Technology series on working smarter, we’ll unpack what new research says about AI and elections—and pull out practical lessons you can apply to how you use AI in your daily work and productivity systems.


What the new research actually shows about AI and elections

AI chatbots can already change people’s political preferences measurably. That’s not a theoretical concern anymore; it’s an experimental result.

Researchers recently ran large-scale experiments in the US, Canada, Poland, and the UK using AI systems to influence political views and behavior:

  • 2,306 US participants were asked to chat with AI models that promoted either Kamala Harris or Donald Trump.
  • The pro-Harris AI moved likely Trump voters 3.9 percentage points toward Harris.
  • The pro-Trump AI nudged likely Harris voters 1.51 percentage points toward Trump.
  • In Canada and Poland, similar bots shifted preferences up to 10 percentage points—about three times more than in the US.
  • A separate project tasked 19 large language models with persuading almost 77,000 UK participants on 707 political issues.

For context, a 2–4 point swing can decide close elections. Traditional political ads often move opinions by around 1 percentage point or less. These AI systems are doing more with a chat window than a TV ad does with millions in media spend.

The reality? We’re no longer guessing whether AI will influence elections. It already can.


Why AI is so persuasive: facts, arguments, and hallucinations

The core finding is surprisingly simple: AI persuades best when it gives lots of specific, fact-based arguments.

Researchers found that:

  • Responses packed with concrete facts and reasons were consistently more persuasive.
  • The models didn’t succeed because they lied better; they succeeded because they argued more.
  • When pushed to produce more and more facts, the models eventually ran out of accurate information and began to hallucinate—making up details that sounded plausible but weren’t true.

“It is not the case that misleading information is more persuasive… As you push the model to provide more and more facts, it starts with accurate facts, and then eventually it runs out… and starts grasping at straws.”

Here’s why that matters for your own use of AI at work:

  • If you reward AI outputs that are longer, denser, and more detailed, you’ll often get more hallucinations.
  • The exact behavior that makes AI great at drafting reports and strategies—rapid generation of arguments—is also what makes it dangerous when accuracy is critical.

When you use AI for productivity, you’re harnessing the same persuasive machinery. The difference between “assistant” and “manipulator” comes down to two questions: who controls the goal, and who checks the facts?


The invisible threat: AI poll manipulation and fake public opinion

There’s another layer that doesn’t get enough attention: AI doesn’t just change individual minds; it can also distort the data we use to understand public opinion.

One recent experiment built an AI agent that:

  • Answered political survey questions at scale.
  • Passed automated bot-detection checks 99.8% of the time across 6,000 attempts.
  • Could be instructed to intentionally skew polling outcomes, for example by over-representing certain views.

If online surveys and polls can be quietly flooded with AI-generated responses, several dominoes fall:

  • Pollsters think opinion has shifted when it hasn’t.
  • Campaigns, brands, and even governments adjust strategy based on fake signals.
  • Media narratives about “what the public wants” become less reliable.

For people using AI in business, research, or product strategy, this is a direct productivity risk. If your dashboards, surveys, or feedback forms can be polluted by automated responses, AI doesn’t just help you work faster—it can help you work confidently in the wrong direction.

So if your workflow depends on “what users say,” you now have to ask: How sure are we that these are even humans?
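One practical starting point is a lightweight screen over survey exports before they feed a dashboard. The sketch below is a minimal illustration, not a real bot detector: the field names, the thresholds, and the two heuristics (implausibly fast completion and near-identical open-ended answers) are all assumptions chosen for clarity.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SurveyResponse:
    respondent_id: str
    seconds_to_complete: float  # hypothetical field: time spent on the survey
    free_text_answer: str       # hypothetical field: one open-ended answer

def flag_suspect_responses(responses, min_seconds=30.0, max_duplicates=3):
    """Flag rows that finished implausibly fast or share an identical
    open-ended answer with many others. Thresholds are illustrative."""
    normalized = [r.free_text_answer.strip().lower() for r in responses]
    counts = Counter(normalized)

    flagged = []
    for r, text in zip(responses, normalized):
        if r.seconds_to_complete < min_seconds:
            flagged.append((r.respondent_id, "too_fast"))
        elif counts[text] > max_duplicates:
            flagged.append((r.respondent_id, "duplicate_text"))
    return flagged

# Toy example: one answer appears twice and one respondent finished in 5 seconds.
sample = [
    SurveyResponse("a1", 5.0, "Strongly support the policy."),
    SurveyResponse("a2", 240.0, "Strongly support the policy."),
    SurveyResponse("a3", 180.0, "Mixed feelings, depends on the cost."),
]
print(flag_suspect_responses(sample, max_duplicates=1))
```

Heuristics like these won’t catch a well-built agent that passes bot checks 99.8% of the time, but they raise the cost of casual pollution and force the “are these even humans?” question into the workflow.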


What this means for anyone using AI at work

Here’s the thing about AI and persuasion: you can’t separate the workplace productivity story from the election story. It’s the same underlying capability applied in different domains.

If you’re using AI every day—to write, to ideate, to analyze—you’re already interacting with a system that’s very good at:

  • Framing arguments in convincing ways
  • Sounding confident even when it’s wrong
  • Shaping how you think about a problem simply by what it chooses to highlight

That doesn’t mean you should stop using AI. It means you should treat it like a very smart but extremely opinionated intern:

  • Great at generating drafts and first passes
  • Terrible at being the single source of truth
  • Surprisingly skilled at nudging your thinking without you noticing

Practical guardrails for daily AI use

Here are concrete practices I’ve found work well if you want the upside of AI for productivity without drifting into uncritical dependence:

  1. Separate “idea mode” from “decision mode.”

    • Use AI to brainstorm options, outline ideas, summarize research.
    • Don’t use AI to decide on strategy, policy, or political positions. Decisions should go through a human review loop.
  2. Force yourself to ask “who set the goal?”
    Before trusting a persuasive answer, mentally ask: What was this model optimizing for? Was it:

    • “Sound convincing”?
    • “Support this specific viewpoint”?
    • “List as many arguments as possible”?
  3. Limit “as many facts as possible” prompts.
    That phrasing is practically an invitation to hallucinate. Try instead (a short prompt sketch after this list shows one way to wire these in):

    • “Give me the 3 strongest arguments you’re most confident are factual.”
    • “Flag any points where the evidence is weak or uncertain.”
  4. Cross-check anything high-stakes.
    If the content affects elections, health, finances, or legal decisions, treat AI output as a draft, not a verdict. Build a checklist:

    • Did a human verify factual claims?
    • Are there original sources behind each key statement?
    • Would I sign my name to this?
  5. Avoid partisan prompting at work.
    The new research shows that models reproduce biases in their training data. If you ask an AI to argue aggressively for one side—political or otherwise—it will oblige, and can drift into inaccuracies faster.
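To make guardrails 2 and 3 concrete, here is a minimal sketch of a prompt wrapper that builds the argument cap and uncertainty flagging into a system message. It assumes the OpenAI Python SDK and a placeholder model name purely for illustration; the same pattern applies to any chat-style API.

```python
from openai import OpenAI  # assumption: using the OpenAI Python SDK for illustration

client = OpenAI()  # reads OPENAI_API_KEY from the environment

GUARDRAIL_SYSTEM_PROMPT = (
    "Give at most the 3 strongest arguments you are confident are factual. "
    "For every claim, state how confident you are and flag any point where "
    "the evidence is weak, uncertain, or missing. Do not pad the answer "
    "with extra facts to sound more persuasive."
)

def bounded_answer(question: str, model: str = "gpt-4o-mini") -> str:
    """Ask for a small number of well-supported arguments instead of
    'as many facts as possible'. The model name is a placeholder."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

# Usage: print(bounded_answer("Should our team adopt a four-day work week?"))
```

The design choice is to constrain quantity up front rather than asking the model to self-censor afterwards, since the research suggests hallucinations climb as you push for more and more facts.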

Used well, AI can absolutely improve your work productivity: faster research, better structure, fewer blank-page moments. But the more you let it shape how you think, the more you need explicit guardrails.


How organizations should respond: policy, design, and training

Most companies get this wrong: they roll out AI tools to “save time” without seriously considering how persuasive these systems actually are.

If you’re a leader or builder, there are three layers to get right.

1. Clear internal policies for persuasive use

You don’t need a 40-page policy, but you do need clear lines:

  • No AI-generated political messaging on behalf of the company.
  • No automated outreach to voters, customers, or employees without explicit review.
  • No use of AI to participate in public polls or surveys pretending to be individuals.

Spell this out. Ambiguity is where bad habits start.

2. Product and workflow design that exposes intent

One of the most useful ideas from the research is this: when you interact with a model, you should be aware of the motives of whoever set it up.

You can build that awareness into your tools:

  • Label internal bots with their purpose: “This assistant is optimized to help you find arguments in favor of X policy.”
  • Show prompt templates or “bias indicators” so users can see how the bot is nudged.
  • For external-facing tools, be explicit when users are chatting with a campaign bot, sales assistant, or support AI.

If a tool is designed to persuade, users should never have to discover that by accident.
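One way to build that in is to attach a small metadata record to every internal assistant and render it before the first message. This is a hedged sketch: the class, field names, and the example bot are invented for illustration, not an established schema.

```python
from dataclasses import dataclass

@dataclass
class AssistantCard:
    """Illustrative metadata card shown to users before they chat with a bot.
    Field names are assumptions, not an established standard."""
    name: str
    purpose: str            # what the bot is optimized to do
    sponsor: str            # who configured it and why
    known_slant: str        # any intentional framing or advocacy
    review_required: bool   # whether outputs need human sign-off

    def disclosure_banner(self) -> str:
        # Text a chat UI could render above the conversation.
        return (
            f"{self.name}: {self.purpose} "
            f"(set up by {self.sponsor}; slant: {self.known_slant}; "
            f"human review required: {'yes' if self.review_required else 'no'})"
        )

# Hypothetical example: an internal advocacy assistant that must be labeled.
policy_bot = AssistantCard(
    name="Policy Brief Assistant",
    purpose="find arguments in favor of the proposed remote-work policy",
    sponsor="People Ops",
    known_slant="argues for the policy, not against it",
    review_required=True,
)
print(policy_bot.disclosure_banner())
```

Whether this lives in code, a config file, or a registry matters less than making the purpose and slant visible by default.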

3. Training teams to think critically about AI output

If your team is relying on AI to move faster, they also need to know when to hit the brakes. Good training programs don’t just cover “how to prompt.” They cover:

  • How models hallucinate and why it gets worse under pressure for more details.
  • How to recognize persuasive framing (emotional language, selective facts, false balance).
  • When to escalate: which types of claims must be double-checked by humans.

This isn’t about making people afraid of AI. It’s about matching tool power with user judgment.


Working smarter with AI without becoming easier to manipulate

The same AI that can nudge a voter 4–10 percentage points can nudge you toward a particular strategy, vendor, feature, or opinion at work. That doesn’t make AI bad. It just means it’s powerful—and power needs structure.

Here’s the simple way to think about it:

  • Use AI to speed up execution: drafting, structuring, summarizing, exploring options.
  • Don’t outsource your values, judgment, or decisions to it—especially where politics, ethics, or people’s rights are involved.
  • Whenever AI content might influence how others think or vote, raise the bar: more review, more transparency, more care.

If your goal is to work smarter, not harder, AI belongs in your workflow. Just don’t forget that it also belongs—very prominently—in the story of how elections, public opinion, and information warfare are changing.

The question for every professional and every organization now is straightforward: Are you using AI as an assistant, or is it quietly becoming the strategist?

It’s better to answer that on purpose than to find out later, when the stakes are much higher than a missed deadline.