AI and Chatbots in Organizing: Using the Tools We Have to Fight for the Future We Need

[Image: two upturned palms, each holding a light-studded, circuit-covered sphere, against a background of connected networks. Caption: The future is not yet written. Let's use every tool we have to write it ourselves.]

The concerns about artificial intelligence are real and serious. AI systems amplify biases, threaten jobs and livelihoods, concentrate power in the hands of tech monopolies, enable unprecedented surveillance, and raise profound questions about creativity, labor, and what it means to be human. These are not problems to dismiss or minimize—they demand systemic solutions, democratic governance, and a fundamental reimagining of how technology serves society.

But here's the uncomfortable truth we must confront: we cannot address these concerns while an authoritarian movement is dismantling the very institutions and protections that could regulate AI, protect workers, and ensure democratic control over technology. We need a functioning democracy to solve the problems AI creates. And right now, that democracy is under direct threat.

This creates a strategic dilemma for organizers. Should we refuse to use AI tools on principle while our opponents use every available technology to suppress votes, spread disinformation, and consolidate power? Or should we pragmatically deploy these same tools to organize, mobilize, and fight back—while maintaining our commitment to eventually creating a more just and equitable technological future?

I believe we must choose the latter. Not because AI is good, but because the alternative is worse. This article explores how organizers can thoughtfully use AI and chatbot tools while addressing legitimate concerns about surveillance and digital security.

The Case for Using AI in Organizing

Capacity and Scale. Organizing movements are almost always resource-constrained. We have fewer staff, less money, and less time than we need. AI tools can multiply our capacity—drafting social media posts, creating outreach materials, analyzing voter data, generating talking points, and automating routine communications. This isn't about replacing human organizers; it's about freeing them to do the work only humans can do: building relationships, having difficult conversations, and making strategic decisions.

Speed and Responsiveness. Political moments move fast. A news cycle that once lasted days now lasts hours. AI allows us to respond quickly—generating rapid response materials, analyzing breaking developments, and adapting our messaging in real time. When your opponents are using these tools to flood the zone with content, refusing to use them yourself is strategic disarmament.

Accessibility and Democratization. Not everyone has the privilege of formal training in communications, policy analysis, or graphic design. AI tools lower the barriers to entry, allowing passionate volunteers to contribute meaningfully even without specialized skills. This democratizes the work of organizing and allows movements to tap into broader pools of talent and energy.

The Pragmatic Reality. Whether we like it or not, AI is already embedded in the tools we use every day—email platforms, social media algorithms, donor databases, phone banking systems. The question isn't whether we'll use AI, but whether we'll use it consciously and strategically, or simply be subject to its effects without agency or awareness.


Addressing the Concerns

Using AI tools doesn't mean we abandon our values or ignore the harms these systems can cause. It means we use them thoughtfully, with clear-eyed awareness of their limitations and risks.

On Job Displacement: Yes, AI threatens jobs, including in organizing and advocacy. The solution isn't to refuse to use AI—that won't stop the displacement. The solution is political: fighting for worker protections, universal basic income, strong unions, and democratic control over technology deployment. We need a government that regulates how corporations use AI, not one that accelerates corporate power. That's why defeating authoritarianism comes first.

On Bias and Accuracy: AI systems do reproduce biases from their training data. That's why we never outsource judgment to AI. Use AI for drafting, research, and routine tasks—but always have humans review, edit, and make final decisions. Think of AI as an assistant, not an authority. And advocate for transparency, auditing, and accountability in AI systems, which requires democratic governance.

On Corporate Control: Most AI tools are controlled by massive corporations with their own interests and agendas. This is a legitimate concern. But refusing to use these tools doesn't diminish corporate power—it just handicaps our movements. We must use available tools while fighting for public AI infrastructure, open-source alternatives, and regulation that serves the public interest. Again, this requires political power we currently lack.

On Environmental Impact: AI training and deployment consume enormous amounts of energy and resources. This is deeply problematic in a climate crisis. But the same logic applies: individual refusal doesn't solve the problem. We need political power to regulate AI energy use, mandate renewable energy, and transition our economy. We can't get there from a position of political defeat.


Recommended Tools

Not all AI tools are created equal. Some prioritize user privacy and security more than others. Here are recommendations based on current capabilities, with attention to digital security concerns.

For General Purpose Text Work: Claude (Anthropic)

Claude (claude.ai) is currently the strongest choice for organizing work that involves writing, research, and analysis. Key advantages:

  • Strong capabilities for drafting communications, policy analysis, and strategic thinking
  • Better at following complex instructions and maintaining context through long conversations
  • Can work with documents, create spreadsheets, and handle multiple file formats
  • Anthropic has committed to responsible AI development, though all corporate commitments should be viewed skeptically
  • Paid plans offer better privacy protections and higher usage limits

Alternative: ChatGPT (OpenAI)

ChatGPT remains a viable option, especially the GPT-4 models. It's widely used and has a large ecosystem of integrations. However, OpenAI's close relationship with Microsoft and its aggressive commercialization raise some concerns. If you use ChatGPT:

  • Use the Plus or Team plans for better privacy
  • Be aware that conversations may be used for training unless you opt out
  • Don't input sensitive personal information or organizing plans

For Image Generation: Adobe Firefly or Midjourney

For creating graphics, social media images, and visual materials:

  • Adobe Firefly is trained on licensed content and Adobe Stock, reducing copyright concerns. Integrated with Adobe's creative suite.
  • Midjourney produces high-quality images but operates through Discord, which may raise security concerns for sensitive organizing work.

For Data Analysis: Claude or GPT-4 with Code Interpreter

Both Claude and ChatGPT can now analyze spreadsheets, run statistical analyses, and create visualizations. Claude's interface for working with data is generally cleaner, but ChatGPT's Code Interpreter has more advanced capabilities for technical users. Never upload sensitive voter data, donor information, or personal details to any AI system.
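
If you do want AI help with data analysis, one approach is to reduce the data to anonymous aggregates locally, so nothing sensitive ever leaves your machine. Here is a minimal sketch in Python with pandas; the file and column names ("contacts.csv", "name", "zip", "voted_2024", and so on) are hypothetical placeholders for whatever your spreadsheet actually contains, and "voted_2024" is assumed to be a 0/1 column:

    import pandas as pd

    # Load the raw contact list locally; this file never gets uploaded.
    df = pd.read_csv("contacts.csv")

    # 1. Drop direct identifiers entirely.
    df = df.drop(columns=["name", "email", "phone", "address"])

    # 2. Coarsen quasi-identifiers: keep only a 3-digit ZIP prefix.
    df["zip3"] = df["zip"].astype(str).str[:3]
    df = df.drop(columns=["zip"])

    # 3. Replace row-level records with group-level aggregates.
    summary = df.groupby("zip3").agg(
        contacts=("voted_2024", "size"),
        turnout_rate=("voted_2024", "mean"),
    )

    # 4. Suppress small cells that could still identify individuals.
    summary = summary[summary["contacts"] >= 10]

    # Only this aggregate file is ever shared with an AI tool.
    summary.to_csv("aggregates_only.csv")

Even aggregates can identify people when a group is tiny; ten is a common floor for small-cell suppression, but choose a threshold that fits your context.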

Tools to Avoid or Use with Extreme Caution:

  • Grok (X/Twitter): Given the platform's current ownership and political alignment, this is not recommended for organizing work.
  • Free or obscure chatbots: Many have unknown security practices and data handling policies.
  • Any chatbot for sensitive operational details: Never input information about specific individuals, security plans, or tactical organizing details.

Surveillance and Digital Security Concerns

This is perhaps the most serious concern for organizers. We operate in an increasingly hostile surveillance environment, and AI tools create new vulnerabilities. Here's how to think about this:

Understand the Threat Model

Different organizing contexts require different security levels:

  • Low sensitivity: Public communications, general education, mainstream electoral work. AI tools are generally fine here.
  • Medium sensitivity: Coordinating protests, working with vulnerable communities, handling personal stories. Use AI tools but be very careful about what information you input. Never use real names or identifying details.
  • High sensitivity: Direct action planning, working with undocumented people, anything involving legal risk. Do not use commercial AI tools at all. Use encrypted communications and assume you're under surveillance.

Operational Security Best Practices

  • Use separate accounts. Don't use your personal email or accounts for organizing work. Create dedicated accounts for movement work.
  • Never input identifying information. Don't enter real names, addresses, phone numbers, or other PII into AI systems. Use pseudonyms and generic descriptions; a simple redaction pass, sketched after this list, helps catch slips before you paste.
  • Understand data retention. Most AI companies retain your conversations, even on paid plans. Assume anything you input could eventually be accessed by law enforcement through subpoena.
  • Review and sanitize outputs. AI-generated content may inadvertently include information you didn't intend to share. Always review before publishing.
  • Use paid plans when possible. Paid plans generally have better privacy protections and clearer terms of service. Free plans often explicitly use your data for training.
  • Keep sensitive work offline. For highly sensitive organizing, do not use any cloud-based tools. Use encrypted local storage and air-gapped devices where appropriate. (An air-gapped device is a computer that is completely isolated from the internet and other networks, with no wireless connections and no network cables attached.)
  • Know your rights. Understand what legal protections exist for digital organizing in your jurisdiction. Consult with legal support organizations like the Electronic Frontier Foundation or National Lawyers Guild.
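
As referenced above, here is a rough redaction sketch in Python. The regexes catch common patterns (email addresses, US-style phone numbers), but they are no guarantee; names and contextual details still need a human pass before anything is pasted into a chatbot. The pseudonym map is a hypothetical example:

    import re

    # Hypothetical pseudonym map; extend it for your own context.
    NAME_MAP = {"Jane Doe": "Volunteer A", "Sam Roe": "Volunteer B"}

    def redact(text: str) -> str:
        # Replace email addresses.
        text = re.sub(r"[\w.+-]+@[\w-]+(?:\.\w+)+", "[EMAIL]", text)
        # Replace US-style phone numbers, with or without parentheses.
        text = re.sub(r"(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b",
                      "[PHONE]", text)
        # Swap known real names for pseudonyms.
        for real, pseudo in NAME_MAP.items():
            text = text.replace(real, pseudo)
        return text

    print(redact("Call Jane Doe at (555) 123-4567 or jane@example.org."))
    # Prints: Call Volunteer A at [PHONE] or [EMAIL].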

The Broader Digital Security Stack

AI tools are just one piece of digital security. Organizers should also:

  • Use Signal for sensitive one-on-one and small group communications
  • Enable two-factor authentication on all accounts
  • Use a password manager (Bitwarden, Proton Pass, or KeePassXC)
  • Regularly update devices and software
  • Consider using a VPN for sensitive browsing (Mullvad or ProtonVPN recommended)
  • Use full-disk encryption on all devices (FileVault on macOS, BitLocker on Windows, LUKS on Linux); for individual sensitive files, see the sketch after this list
  • Get training from organizations like EFF, Access Now, or local tech solidarity groups
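
For files that must live on a particular machine, file-level encryption adds a second layer on top of full-disk encryption. Below is a minimal sketch using the widely used third-party Python cryptography package; the file names are placeholders, and the key must be stored offline, never next to the files it protects:

    # Requires the third-party package: pip install cryptography
    from cryptography.fernet import Fernet

    # Generate a key once and store it offline (printed and locked
    # away, for example). Never keep it alongside the encrypted files.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # File names are placeholders.
    with open("meeting_notes.txt", "rb") as fh:
        ciphertext = fernet.encrypt(fh.read())

    with open("meeting_notes.txt.enc", "wb") as fh:
        fh.write(ciphertext)

    # Later, with the same key:
    # plaintext = Fernet(key).decrypt(ciphertext)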

Practical Guidelines for AI Use in Organizing

Here's a framework for deciding when and how to use AI tools:

Good Uses of AI:

  • Drafting social media posts, emails, and public communications
  • Creating educational materials and explainers
  • Generating talking points and FAQ responses
  • Researching policy positions and analyzing legislation
  • Brainstorming campaign strategies and messaging approaches
  • Creating graphics and visual content for public campaigns
  • Analyzing aggregated, anonymized data

Bad Uses of AI:

  • Inputting names, addresses, or identifying information about individuals
  • Planning or discussing tactics that could lead to legal jeopardy
  • Storing sensitive voter or donor data
  • Making final decisions without human review
  • Replacing genuine relationship-building and organizing conversations
  • Using AI to make decisions about people without their knowledge or consent

The Human-in-the-Loop Principle: AI is a tool for augmenting human organizers, not replacing them. Every piece of AI-generated content should be reviewed, edited, and approved by a person who understands the context and stakes. AI can help you work faster and smarter, but it cannot replace the judgment, creativity, empathy, and strategic thinking that humans bring to organizing work.

Conclusion: Fighting with What We Have

The rise of AI is one of the defining challenges of our time. It will reshape work, creativity, power, and society in profound ways. We need strong democratic institutions to guide that transformation in service of justice and human flourishing, not corporate profit and authoritarian control.

But we cannot build those institutions from a position of defeat. We cannot regulate AI if we've lost our democracy. We cannot protect workers if labor rights are dismantled. We cannot ensure equitable access to technology if wealth continues to concentrate in the hands of oligarchs. We cannot hold corporations accountable if government becomes their instrument rather than their check.

So we face a choice: We can refuse to use AI tools on principle and watch our movements get outpaced by opponents who have no such scruples. Or we can use these tools strategically and thoughtfully—with clear awareness of their limitations and dangers—to build the power necessary to eventually create a better technological future.

This is not a comfortable position. It requires us to hold two truths simultaneously: that AI is deeply problematic, and that we need to use it anyway. It demands both pragmatism and principle, both tactical flexibility and strategic clarity about our ultimate goals.

But discomfort is where organizers live. We operate in the space between the world as it is and the world as it should be. We use imperfect tools to pursue a more perfect union. We make difficult compromises in service of larger victories.

The question before us is not whether AI is good or bad. It's whether we will use every tool at our disposal to fight for a future where we have the democratic power to decide how AI and other technologies are developed, deployed, and governed. A future where technology serves human needs rather than extracting value from human labor. A future where the gains from automation are shared broadly rather than concentrated narrowly. A future where surveillance is limited, rights are protected, and power is distributed.

That future is still possible. But we can only reach it by winning the immediate battles before us. And winning those battles means using the tools we have—including AI chatbots—with both strategic intelligence and moral clarity about why we're fighting in the first place.

The future is not yet written. Let's use every tool we have to write it ourselves.