# How to Run OpenClaw on GPU (NVIDIA RTX, AMD & Local LLM)

> Running OpenClaw on GPU is one of the best ways to unlock faster performance, local AI models, multi-agent workflows, and heavy automation capabilities.

**Published:** Mar 25, 2026 | **Author:** Alex Chen | **Read time:** 8 min

Learn how to run OpenClaw on GPU using NVIDIA RTX or AMD GPUs. A complete setup guide with system requirements and step-by-step installation.

---

Running OpenClaw on GPU improves response speed, lets you host larger local models, and enables smoother automation, so the whole stack runs more efficiently and reliably.

This guide explains system requirements, GPU setup, local model configuration, and best practices.


## System Requirements for OpenClaw on GPU




| Component | Minimum | Recommended |
| --- | --- | --- |
| GPU | 6GB VRAM (GTX 1660 / RTX 2060) | 8GB–16GB VRAM (RTX 3060+) |
| RAM | 8GB | 16GB+ |
| CPU | 4 cores / 4 threads | 8 cores / 8+ threads |
| Storage | 20GB SSD free | 40GB–60GB SSD |
| OS | Ubuntu 22.04+ / Windows 11 (WSL2) | Ubuntu 22.04+ |
| Node.js | v22+ | Latest version |
| Model Backend | Ollama | Ollama (recommended) |
| GPU Drivers | NVIDIA CUDA / AMD ROCm | Latest drivers |

## Recommended Models by VRAM




| VRAM | Model | Pull Command |
| --- | --- | --- |
| 4–6 GB | Llama3.2 3B / Gemma3 4B | `ollama pull llama3.2:3b` |
| 6–8 GB | Qwen2.5 7B / Mistral 7B | `ollama pull qwen2.5:7b` |
| 12–16 GB | Llama3.1 8B / DeepSeek-R1 8B | `ollama pull llama3.1:8b` |
| 20–24 GB | GPT-OSS 20B / Qwen2.5 32B | `ollama pull gpt-oss:20b` |
| 48 GB+ | DeepSeek-R1 70B / Llama3.1 70B | `ollama pull deepseek-r1:70b` |

## How to Install OpenClaw on GPU — Step by Step


### Step 1: Install WSL (Windows Only)

If you are on Windows, open PowerShell as Administrator and run:
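```powershell
# Installs WSL2 with Ubuntu as the default distribution
wsl --install
```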

Restart your PC, then open WSL to confirm it is working:
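```powershell
# Back in PowerShell: your distro should be listed with VERSION 2
wsl -l -v

# Drop into the Ubuntu shell
wsl
```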

Linux users can skip this step entirely.


### Step 2: Install NVIDIA Drivers

Check if your GPU is already detected by running:
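```bash
# Prints driver version, CUDA version, and VRAM if a driver is present
nvidia-smi
```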

If the command is not found, install the NVIDIA drivers and reboot:
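On Ubuntu, the built-in driver tool is the simplest route. Note that on WSL2 you install the Windows NVIDIA driver instead; the WSL kernel picks it up automatically.

```bash
# Ubuntu: install the recommended NVIDIA driver, then reboot
sudo apt update
sudo ubuntu-drivers autoinstall
sudo reboot
```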

After reboot, run `nvidia-smi` again. You should see your GPU name, VRAM, and driver version:

*nvidia-smi confirming GPU detected — RTX 3060, 12GB VRAM, CUDA 12.2*


### Step 3: Install Ollama

Run the official Ollama installer:
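```bash
# Official one-line installer from ollama.com
curl -fsSL https://ollama.com/install.sh | sh
```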

Verify the installation completed successfully:
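```bash
# Should print the installed Ollama version
ollama --version
```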

*Ollama detects your NVIDIA GPU and CUDA version automatically during install*


### Step 4: Pull a Local Model

Choose a model based on your available VRAM:
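For example, on a 6–8GB card (swap in any model from the table above):

```bash
# Downloads the model weights to your local Ollama instance
ollama pull qwen2.5:7b
```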

Run the model to confirm it loads correctly:
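```bash
# Opens an interactive chat with the model; type /bye to exit
ollama run qwen2.5:7b
```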

In a second terminal, confirm your GPU is being used during inference:
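```bash
# Refreshes nvidia-smi every second; look for the ollama process and rising VRAM use
watch -n 1 nvidia-smi
```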

*Model download completes layer by layer — model responds immediately after*


### Step 5: Install OpenClaw

Run the OpenClaw install script:
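Two common routes are sketched below. The script URL is an assumption, so verify it against the official OpenClaw docs before piping anything to bash:

```bash
# Assumed install URL; confirm it in the official OpenClaw docs first
curl -fsSL https://openclaw.ai/install.sh | bash

# Alternative, if you already have Node.js 22+ (npm package name assumed)
npm install -g openclaw@latest
```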

The installer detects your OS, installs Node.js 22, and sets up OpenClaw automatically.


### Step 6: Configure OpenClaw to Use Ollama

Run the onboarding wizard — this is the easiest way to connect Ollama:
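The exact prompts vary by version, but the wizard is interactive and walks you through provider, model, and gateway options:

```bash
# Interactive setup: choose a provider, model, and gateway options
openclaw onboard
```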

When prompted, select **Ollama** as the model provider and choose your mode:

- **Local only** — routes everything through your Ollama models, no cloud API keys needed
- **Cloud + Local** — combines your GPU models with cloud providers

Or configure it manually if you prefer:
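The snippet below is only a hypothetical sketch: the config path and every key name are assumptions that may not match your OpenClaw version's schema, so check the official configuration docs. Ollama's OpenAI-compatible endpoint at `http://localhost:11434/v1` is standard, however.

```bash
# HYPOTHETICAL sketch: file path and key names are assumptions, not the documented schema.
# Ollama's OpenAI-compatible endpoint (http://localhost:11434/v1) is standard.
mkdir -p ~/.openclaw
cat > ~/.openclaw/openclaw.json <<'EOF'
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434/v1",
        "models": ["qwen2.5:7b"]
      }
    }
  }
}
EOF
```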

To see all available models at any time:
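```bash
# Lists every model downloaded to your local Ollama instance
ollama list
```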


### Step 7: Start OpenClaw

Start the OpenClaw gateway:
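Assuming the standard gateway subcommand:

```bash
# Runs the gateway in the foreground (Ctrl+C to stop)
openclaw gateway
```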

Check that it is running:
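```bash
# Health check (subcommand name assumed; see openclaw --help on your version)
openclaw status
```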

Open the dashboard to confirm your local Ollama model is the active provider:
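The port below is an assumption; use whatever address the gateway prints at startup:

```bash
# Opens the local dashboard in your default browser
xdg-open http://localhost:18789
```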


### Connect a Messaging Channel

OpenClaw is designed to be reached through chat apps such as WhatsApp or Telegram. The onboarding wizard from Step 6 walks you through linking a channel, so you can message your local model from your phone.


## Common Issues and Fixes

The most common problem is GPU memory filling up when a model is too large for your card; see the FAQ below for the fix.


## Don't Have a Powerful GPU?

Running OpenClaw on GPU requires sufficient VRAM, working drivers, and some setup. If your system doesn't meet the requirements, run OpenClaw instantly on Ampere.sh without any hardware setup.

[Get Started on Ampere.sh →](https://www.ampere.sh/setup)


## Frequently Asked Questions

### Can I run OpenClaw on GPU?

Yes. OpenClaw integrates with Ollama, which uses CUDA (NVIDIA) or ROCm (AMD) for GPU-accelerated local inference. This lets you run local LLMs faster, reduce API costs, and keep your data private.

### What is the minimum GPU requirement for OpenClaw?

A minimum of 6GB VRAM is needed to run smaller 3B–4B models like Llama3.2 3B or Gemma3 4B. For a smoother experience with larger models, 12GB+ VRAM (RTX 3060 12GB or better) is recommended.

### Can I run OpenClaw on NVIDIA RTX GPUs?

Yes. NVIDIA RTX GPUs provide the best performance for OpenClaw using CUDA and Tensor Cores. RTX 3060 and above offer the most stable and fastest local LLM experience.

### Can I run OpenClaw without local models?

Yes. You can connect OpenClaw to cloud AI providers like OpenAI, Anthropic, or Gemini using API keys. A GPU is only needed if you want to run local models through Ollama.

### Can I use a gaming PC to run OpenClaw?

Yes. A gaming PC with an RTX 3060 or better can act as a full local AI server for OpenClaw. Keep the PC running for 24/7 availability — or use a VPS for always-on uptime.

### Can I run OpenClaw on a cloud GPU?

Yes. OpenClaw works well on cloud GPU servers running Ubuntu. Install Ollama, pull your model, then install and configure OpenClaw using the same steps as a local GPU setup.

### Why is my GPU memory full?

Large models require more VRAM than your GPU has available. Switch to a smaller or quantized model — for example, `ollama pull llama3.2:3b` for 4–6GB VRAM cards.
