Security

Does OpenClaw Have Security Problems?

Does OpenClaw have security problems? We address every concern — data privacy, self-hosting, device pairing, bot tokens, and more. Full security audit with best practices.

12 min read
Feb 21, 2026
Ampere Team

Let's be upfront: any software that connects to your messaging platforms, runs shell commands, and accesses your devices needs serious security scrutiny. OpenClaw is no exception.

But "does it have security problems" and "is it insecure by design" are very different questions. This article addresses every common concern, explains OpenClaw's security architecture, and gives you a complete hardening checklist to run your agent safely.

Short answer: OpenClaw is designed with security as a core principle. It's open source, self-hostable, and gives you more control over your data than any cloud-based AI assistant. But like any powerful tool, it needs to be configured correctly.

Open Source · Self-Hosted · You Own Your Data

The Quick Answer: Is OpenClaw Secure?

Yes — with proper configuration. OpenClaw's security model is actually stronger than most AI tools because:

  • Your data stays on your server — no third-party cloud storing your conversations
  • Open source — anyone can audit the code (MIT license)
  • Self-hosted by default — you control the infrastructure
  • Minimal attack surface — single process, no public API endpoints required
  • Manual device approval — no auto-pairing by default

Compare this to ChatGPT, Gemini, or Copilot — where your conversations are stored on someone else's servers, used for training (unless you opt out), and processed through infrastructure you can't audit.

Addressing Common Security Concerns

Let's go through every concern people raise — one by one — and address each honestly.

Concern 1: "The Agent Can Run Shell Commands"

- The Concern

An AI with shell access could delete files, install malware, or damage my system.

This is the most common worry — and it's valid. Shell access is powerful.

- The Reality

OpenClaw has multiple layers of protection:

Sandboxed execution — commands run in a controlled environment. On Ampere.sh, each agent runs in an isolated container with resource limits.

SOUL.md safety rules — your agent's personality file includes explicit safety guidelines: "Don't run destructive commands without asking," "trash > rm," and "When in doubt, ask."

LLM safety training — the underlying models (Claude, GPT-4) are trained to be cautious with system operations and ask for confirmation on dangerous actions.

You control the tools — you decide which tools your agent has access to. Don't want shell access? Don't enable it.

Concern 2: "My Bot Token Could Be Stolen"

- The Concern

If someone gets my Discord/Telegram bot token, they can impersonate my bot.

Bot tokens are sensitive credentials. This is true for any bot framework, not just OpenClaw.

- The Reality

Standard credential management applies:

Tokens are stored locally — in your openclaw.yaml on your server. They're never transmitted to OpenClaw's servers or any third party.

File permissions — set your config file to 600 (owner-only read/write). OpenClaw doesn't need world-readable configs.

Environment variables — you can store tokens in environment variables instead of config files for additional security.

If compromised — rotate the token immediately in the Discord/Telegram developer portal. The old token is invalidated instantly.

# Secure your config file $ chmod 600 ~/.openclaw/openclaw.yaml # Or use environment variables $ export DISCORD_TOKEN="your-token-here" # Reference in config channels: discord: token: "${DISCORD_TOKEN}"

Concern 3: "Device Pairing Gives Too Much Access"

- The Concern

A paired device can access my camera, location, and run commands. That's a lot of power.

It is. Device pairing is one of OpenClaw's most powerful features — and with power comes responsibility.

- The Reality

Multiple safeguards are built in:

Manual approval requiredautoApprove: false is the default. Every device must be explicitly approved before it can connect.

Capability-based permissions — each node only reports capabilities it has permission for. You control what's enabled on the device side.

On-demand only — cameras and location don't stream continuously. They only activate when the agent explicitly requests them (which you can see in the logs).

Instant unpair — if a device is compromised or lost, unpair it with one command. Access is revoked immediately.

TLS encryption — all communication between nodes and the gateway is encrypted in transit.

Concern 4: "LLM API Calls Send My Data to Cloud Providers"

- The Concern

My conversations are sent to Anthropic/OpenAI/Google for processing. That's not truly private.

This is a fair point — and it applies to every AI tool that uses cloud LLMs.

- The Reality

You have options:

API data policies — Anthropic, OpenAI, and Google's API terms explicitly state that API data is not used for training (unlike free-tier consumer products). Your conversations are processed and discarded.

Local models — OpenClaw supports local LLMs through Ollama, llama.cpp, and any OpenAI-compatible API. Run Llama, Mistral, or other open models on your own hardware for zero data leaving your server.

Choose your provider — you're not locked into any single LLM provider. Pick the one whose privacy policy you trust most.

Memory stays local — your MEMORY.md, daily notes, SOUL.md, and all workspace files never leave your server. Only the current conversation context is sent to the LLM.

# Use a local model for maximum privacy model: ollama/llama3.1:70b # Or use API providers (data not used for training) model: anthropic/claude-sonnet-4-20250514

Concern 5: "The Agent Could Be Prompt-Injected"

- The Concern

Someone in a group chat could trick my agent into doing something harmful via prompt injection.

Prompt injection is a real concern for any AI system that processes untrusted input.

- The Reality

OpenClaw has multiple defenses:

Trusted vs untrusted context — OpenClaw separates system instructions (trusted) from user messages (untrusted). The agent knows the difference between its own config and messages from users.

SOUL.md boundaries — your agent's personality file defines clear rules about what it should and shouldn't do. Well-crafted boundaries make injection much harder.

External content warnings — when the agent fetches web pages or reads external data, it's wrapped in security notices marking it as untrusted external content.

Model-level resistance — modern LLMs like Claude and GPT-4 have built-in resistance to prompt injection attempts. They're trained to follow system instructions over conflicting user messages.

Approval for sensitive actions — configure your agent to ask for confirmation before sending emails, making purchases, or running destructive commands. Even if injected, the attack requires your explicit approval.

Concern 6: "It's Open Source — Doesn't That Mean Attackers Know the Code?"

- The Concern

If the source code is public, attackers can study it for vulnerabilities.

- The Reality

Open source is a security advantage, not a weakness:

Community auditing — thousands of developers can review the code. Vulnerabilities are found and fixed faster than in closed-source software.

No security through obscurity — hiding code doesn't make it secure. The strongest security systems in the world (Linux, OpenSSL, Signal Protocol) are all open source.

Rapid patching — when issues are found, the community can submit fixes immediately. No waiting for a corporate security team to prioritize your bug.

Audit it yourself — don't trust us. Read the code at github.com/openclaw/openclaw.

OpenClaw vs Cloud AI: Security Comparison

How does OpenClaw's security compare to popular cloud AI tools?

OpenClaw (Self-Hosted)

Data stays on your server
Open source, auditable
You control everything
No data used for training
Local model option

ChatGPT / Gemini / Copilot

Data on their servers
Closed source
They control the infra
May use data for training
No local model option

OpenClaw (Ampere.sh)

- Isolated containers
- Open source agent code
- TLS everywhere
- API data not trained on
- Data on Ampere infra

AI Assistants (Siri/Alexa)

- Data on Apple/Amazon servers
Closed source
- Always listening concerns
- Voice data processing
- No transparency

Security Hardening Checklist

Follow this checklist to run OpenClaw as securely as possible:

Server Security

  • Keep OpenClaw updated — run the latest version for security patches
  • Use a firewall — only open ports you actually need
  • SSH key auth only — disable password authentication
  • Run as non-root — create a dedicated user for OpenClaw
  • Enable automatic OS updates — unattended-upgrades on Ubuntu/Debian

OpenClaw Configuration

  • Set config file permissionschmod 600 openclaw.yaml
  • Use environment variables for tokens — don't hardcode secrets
  • Set autoApprove: false for nodes — require manual device approval
  • Restrict channel access — use allowedChannels to limit where the bot responds
  • Configure SOUL.md safety rules — explicit boundaries for sensitive actions

Operational Security

  • Review agent activity periodically — check logs for unexpected behavior
  • Rotate bot tokens regularly — especially if team members change
  • Unpair lost devices immediately — don't leave compromised nodes connected
  • Back up your workspace — memory files and config are your agent's brain
  • Use HTTPS for the gateway — never expose over plain HTTP
# Complete security-hardened config example model: anthropic/claude-sonnet-4-20250514 channels: discord: token: "${DISCORD_TOKEN}" mentionOnly: true allowedChannels: - "ai-chat" - "bot-commands" nodes: enabled: true autoApprove: false heartbeat: enabled: true intervalMs: 1800000

Found a Vulnerability?

OpenClaw takes security seriously. If you discover a vulnerability:

  • Don't disclose publicly — responsible disclosure protects all users
  • Report via GitHub Security Advisories — go to the OpenClaw repo and use the Security tab
  • Or email the team directly — security contact details are in the repo README
  • We respond within 48 hours — critical issues are patched and released ASAP

Frequently Asked Questions

Is OpenClaw safe to use in a business environment?
Yes, with proper configuration. Many teams use OpenClaw for internal bots on Slack and Discord. For business use, we recommend: self-hosting (or Ampere.sh for managed), restricted channel access, mention-only mode, and clear SOUL.md boundaries about handling confidential information.
Can someone hack my agent through Discord messages?
Prompt injection attempts are possible but heavily mitigated. OpenClaw separates trusted system context from untrusted user messages. The agent follows its SOUL.md instructions over conflicting user commands. For maximum safety, configure the agent to require approval for sensitive actions.
Does OpenClaw store my conversations?
Only on your own server, in your workspace. Memory files (MEMORY.md, daily notes) are stored locally. OpenClaw doesn't have any central server that collects user data. On Ampere.sh, data is stored in your isolated container.
Is it safe to give the agent shell access?
For personal use on your own server, yes — with caution. The agent follows safety rules in SOUL.md (preferring trash over rm, asking before destructive actions). For shared or business environments, you can disable shell access entirely or use restricted execution policies.
What happens if my server is compromised?
Same as any other software on that server — the attacker would have access to your OpenClaw config, memory files, and bot tokens. This is why server security (firewall, SSH keys, updates) is crucial. Rotate all tokens immediately if you suspect a breach.
Is Ampere.sh more or less secure than self-hosting?
Different trade-offs. Self-hosting gives you maximum control but requires you to handle server security. Ampere.sh provides isolated containers, automatic TLS, DDoS protection, and managed infrastructure — but your data lives on Ampere's servers. For most users, Ampere is actually more secure because server hardening is handled by professionals.
Has OpenClaw ever had a security breach?
As of February 2026, there have been no known security breaches. The open-source community actively audits the codebase, and vulnerabilities are patched through responsible disclosure. Check the GitHub Security Advisories page for the full history.

Security Is a Feature, Not an Afterthought

Does OpenClaw have security problems? No more than any powerful software — and fewer than most cloud AI tools.

The key difference: with OpenClaw, you're in control. Your data stays on your server. The code is open for anyone to audit. You choose what tools the agent has access to, which devices can pair, and what actions require your approval.

Compare that to sending every conversation to a closed-source cloud service where you have zero visibility into how your data is stored, processed, or used.

Security isn't about being perfect — it's about being transparent, configurable, and honest about the trade-offs. OpenClaw does all three.

Ready for a secure AI agent?

Self-host OpenClaw or deploy on Ampere with enterprise-grade isolation.

Get Started with Ampere →