
System Requirements

A system with a modern processor and a minimum of 4GB RAM is recommended to run OpenHands.

Prerequisites

macOS

Docker Desktop
  1. Install Docker Desktop on Mac.
  2. Open Docker Desktop, go to Settings > Advanced and ensure Allow the default Docker socket to be used is enabled.

Linux

Tested with Ubuntu 22.04.

Docker Desktop
  1. Install Docker Desktop on Linux.
Windows

WSL
  1. Install WSL.
  2. Run wsl --version in PowerShell and confirm Default Version: 2.
Ubuntu (Linux Distribution)
  1. Install Ubuntu: wsl --install -d Ubuntu in PowerShell as Administrator.
  2. Restart computer when prompted.
  3. Open Ubuntu from Start menu to complete setup.
  4. Verify installation: wsl --list should show Ubuntu.
Docker Desktop
  1. Install Docker Desktop on Windows.
  2. Open Docker Desktop, go to Settings and confirm the following:
  • General: Use the WSL 2 based engine is enabled.
  • Resources > WSL Integration: Enable integration with my default WSL distro is enabled.
Note: The docker command below to start the app must be run inside the WSL terminal. Use wsl -d Ubuntu in PowerShell or search “Ubuntu” in the Start menu to open the Ubuntu terminal.

Alternative: Windows without WSL
If you prefer to run OpenHands on Windows without WSL or Docker, see our Windows Without WSL Guide.
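Once Docker Desktop is installed (and, on Windows, the WSL integration is enabled), a quick sanity check from the terminal can confirm the setup before launching the app. This is a minimal sketch; the check_docker helper name is ours, not part of OpenHands:

```shell
# Sketch of a Docker readiness check (helper name is ours, not OpenHands').
check_docker() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "missing"      # Docker CLI not on PATH (on Windows, run inside WSL)
    elif docker info >/dev/null 2>&1; then
        echo "ready"        # daemon is up; safe to run the commands below
    else
        echo "daemon-down"  # CLI present but Docker Desktop is not running
    fi
}

check_docker
```

If this prints anything other than ready, fix the Docker setup before continuing.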

Start the App

Option 1: Using uv (Recommended)

We recommend using uv for the best OpenHands experience. uv provides better isolation from your current project’s virtual environment and is required for OpenHands’ default MCP servers (like the fetch MCP server).

Install uv (if you haven’t already): See the uv installation guide for the latest installation instructions for your platform.

Launch OpenHands:
# Launch the GUI server
uvx --python 3.12 --from openhands-ai openhands serve

# Or with GPU support (requires nvidia-docker)
uvx --python 3.12 --from openhands-ai openhands serve --gpu

# Or with current directory mounted
uvx --python 3.12 --from openhands-ai openhands serve --mount-cwd
This will automatically handle Docker requirements checking, image pulling, and launching the GUI server. The --gpu flag enables GPU support via nvidia-docker, and --mount-cwd mounts your current directory into the container.
If you prefer to use pip and have Python 3.12+ installed:
# Install OpenHands
pip install openhands-ai

# Launch the GUI server
openhands serve
Note that you’ll still need uv installed for the default MCP servers to work properly.
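Since the default MCP servers depend on uv even with the pip install, a minimal check (our own sketch, not an OpenHands command) can confirm it is on your PATH:

```shell
# Sketch: confirm uv is installed before relying on the default MCP servers.
check_uv() {
    if command -v uv >/dev/null 2>&1; then
        echo "found"    # uv is on PATH; MCP servers such as fetch can start
    else
        echo "missing"  # install uv first (see the uv installation guide)
    fi
}

check_uv
```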

Option 2: Using Docker Directly

You can also pull the runtime image and start the app with Docker directly:

docker pull docker.all-hands.dev/all-hands-ai/runtime:0.58-nikolaik

docker run -it --rm --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:0.58-nikolaik \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v ~/.openhands:/.openhands \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:0.58
Note: If you used OpenHands before version 0.44, you may want to run mv ~/.openhands-state ~/.openhands to migrate your conversation history to the new location.
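The migration note above can be wrapped in a small guard so the directory is only moved when the old location exists and the new one doesn't (a sketch; the function name is ours):

```shell
# Sketch: migrate pre-0.44 conversation state, guarded against overwriting.
migrate_state() {
    home_dir="$1"
    if [ -d "$home_dir/.openhands-state" ] && [ ! -d "$home_dir/.openhands" ]; then
        mv "$home_dir/.openhands-state" "$home_dir/.openhands"
        echo "migrated"
    else
        echo "nothing to do"
    fi
}

migrate_state "$HOME"
```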
You’ll find OpenHands running at http://localhost:3000!
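The container can take a few seconds to come up; a small polling helper (our own sketch, assuming curl is available) avoids refreshing the browser blindly:

```shell
# Sketch: poll until the OpenHands GUI answers on port 3000, or give up.
wait_for_openhands() {
    url="${1:-http://localhost:3000}"
    tries="${2:-30}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if curl -fsS "$url" >/dev/null 2>&1; then
            echo "up"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "timeout"
    return 1
}
```

For example, wait_for_openhands && xdg-open http://localhost:3000 (use open on macOS).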

Setup

After launching OpenHands, you must select an LLM Provider and LLM Model and enter a corresponding API Key. This can be done during the initial settings popup or by selecting the Settings button (gear icon) in the UI. If the required model does not exist in the list, in Settings under the LLM tab, you can toggle Advanced options and manually enter it with the correct prefix in the Custom Model text box. The Advanced options also allow you to specify a Base URL if required.

Getting an API Key

OpenHands requires an API key to access most language models. Here’s how to get an API key from a recommended provider, using Google (Gemini) as an example:
  1. Create a Google account if you don’t already have one.
  2. Generate an API key.
  3. Set up billing.
If you are instead using a local LLM server that isn’t behind an authentication proxy, you can enter any value as the API key (e.g. local-key, test123); it won’t be used.
Consider setting usage limits to control costs.

Using a Local LLM

Effective use of local models for agent tasks requires capable hardware, along with models specifically tuned for instruction-following and agent-style behavior.
To run OpenHands with a locally hosted language model instead of a cloud provider, see the Local LLMs guide for setup instructions.

Setting Up Search Engine

OpenHands can be configured to use a search engine to allow the agent to search the web for information when needed. To enable search functionality in OpenHands:
  1. Get a Tavily API key from tavily.com.
  2. Enter the Tavily API key in the Settings page under LLM tab > Search API Key (Tavily).
For more details, see the Search Engine Setup guide.

Versions

The docker command above pulls the most recent stable release of OpenHands. You have other options as well:
  • For a specific release, replace $VERSION in openhands:$VERSION and runtime:$VERSION with the version number. For example, 0.9 will automatically point to the latest 0.9.x release, and 0 will point to the latest 0.x.x release.
  • For the most up-to-date development version, replace $VERSION in openhands:$VERSION and runtime:$VERSION with main. This version is unstable and is recommended for testing or development purposes only.
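As a sketch of the substitution (image names taken from the docker commands above; the shell variables are ours):

```shell
# Sketch: build pinned image references by substituting $VERSION in both tags.
VERSION=0.58   # or e.g. 0.9 for the latest 0.9.x, or main for development builds
APP_IMAGE="docker.all-hands.dev/all-hands-ai/openhands:${VERSION}"
RUNTIME_IMAGE="docker.all-hands.dev/all-hands-ai/runtime:${VERSION}-nikolaik"

echo "$APP_IMAGE"
echo "$RUNTIME_IMAGE"
```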

Next Steps
