Prerequisites

Launching the GUI Server

Using the CLI Command

You can launch the OpenHands GUI server directly from the command line using the serve command:
openhands serve
This command will:
  • Check that Docker is installed and running
  • Pull the required Docker images
  • Launch the OpenHands GUI server at http://localhost:3000
  • Use the same configuration directory (~/.openhands) as the CLI mode

Mounting Your Current Directory

To mount your current working directory into the GUI server container, use the --mount-cwd flag:
openhands serve --mount-cwd
This is useful when you want to work on files in your current directory through the GUI. The directory will be mounted at /workspace inside the container.
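For comparison, the effect of --mount-cwd corresponds to an ordinary Docker bind mount of the current directory. A minimal illustration of that mechanism (the alpine image here is just a stand-in, not the OpenHands image):

```shell
# Bind-mount the current working directory to /workspace in a container --
# the same kind of mount --mount-cwd sets up for the OpenHands container:
docker run --rm -v "$(pwd)":/workspace -w /workspace alpine:latest ls /workspace
```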

Using GPU Support

If you have NVIDIA GPUs and want to make them available to the OpenHands container, use the --gpu flag:
openhands serve --gpu
This will enable GPU support via nvidia-docker, mounting all available GPUs into the container. You can combine this with other flags:
openhands serve --gpu --mount-cwd
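With plain Docker, the equivalent of --gpu is Docker's --gpus flag, which requires the NVIDIA Container Toolkit on the host. A sketch to verify GPUs are visible inside a container (the CUDA image tag here is illustrative):

```shell
# Expose all NVIDIA GPUs to a container and list them with nvidia-smi:
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```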
Prerequisites for GPU support:
  • NVIDIA drivers installed on the host
  • nvidia-docker (the NVIDIA Container Toolkit) installed and configured

Requirements

Before using the openhands serve command, ensure that:
  • Docker is installed and running on your system
  • You have internet access to pull the required Docker images
  • Port 3000 is available on your system
The CLI will automatically check these requirements and provide helpful error messages if anything is missing.
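These checks can also be reproduced by hand. A rough sketch of the same verification in shell (the exact checks the CLI performs may differ):

```shell
#!/usr/bin/env bash
# Manual version of the pre-flight checks `openhands serve` performs.

check() {  # check <label> <command...>: print PASS or FAIL, never abort
  if "${@:2}" >/dev/null 2>&1; then echo "PASS: $1"; else echo "FAIL: $1"; fi
}

check "Docker installed"      command -v docker
check "Docker daemon running" docker info

# Port 3000 must be free: a successful TCP connect means something is listening.
if (exec 3<>/dev/tcp/127.0.0.1/3000) 2>/dev/null; then
  echo "FAIL: port 3000 is already in use"
else
  echo "PASS: port 3000 is free"
fi
```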

Using Docker Directly

Alternatively, you can run the GUI server using Docker directly. See the local setup guide for detailed Docker instructions.
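As a rough sketch, such an invocation publishes port 3000, reuses the CLI configuration directory, and gives the server access to the Docker socket so it can manage runtime containers. The image name, tag, and in-container config path below are placeholders and assumptions; the local setup guide has the exact, supported command:

```shell
# Placeholder sketch only -- see the local setup guide for the real image name/tag.
docker run -it --rm \
  -p 3000:3000 \
  -v ~/.openhands:/root/.openhands \
  -v /var/run/docker.sock:/var/run/docker.sock \
  <openhands-image>:<tag>
```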

Overview

Initial Setup

  1. Upon first launch, you’ll see a settings popup.
  2. Select an LLM Provider and LLM Model from the dropdown menus. If the required model is not in the list, select see advanced settings, then toggle Advanced options and enter the model with the correct prefix in the Custom Model text box.
  3. Enter the corresponding API Key for your chosen provider.
  4. Click Save Changes to apply the settings.

Settings

You can use the Settings page at any time to:

GitHub Setup

OpenHands automatically exports a GITHUB_TOKEN to the shell environment if provided.
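For example, inside the runtime shell the token behaves like any other environment variable. The value and repository URL below are fake placeholders:

```shell
# In a real session OpenHands sets this automatically from the Settings page;
# here a fake value is set just to illustrate:
export GITHUB_TOKEN="ghp_exampletoken"

# An authenticated clone would then look like (illustrative URL):
#   git clone "https://${GITHUB_TOKEN}@github.com/<owner>/<repo>.git"

echo "GITHUB_TOKEN is set: ${GITHUB_TOKEN:+yes}"   # prints: GITHUB_TOKEN is set: yes
```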

GitLab Setup

OpenHands automatically exports a GITLAB_TOKEN to the shell environment if provided.

Bitbucket Setup

Advanced Settings

The Advanced settings allow configuration of additional LLM options. Inside the Settings page, under the LLM tab, toggle Advanced options to access them.
  • Custom Model: Use the Custom Model text box to manually enter a model. Make sure to use the correct prefix based on litellm docs.
  • Base URL: Specify a Base URL if required by your LLM provider.
  • Memory Condensation: The memory condenser manages the LLM’s context by ensuring only the most important and relevant information is presented.
  • Confirmation Mode: Enabling this mode will cause OpenHands to confirm an action with the user before performing it.
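As an example of the prefix convention, litellm model identifiers take a provider/model form. The specific names below are only illustrative; check the litellm docs for the correct prefix for your provider:

```
openai/gpt-4o
anthropic/claude-3-5-sonnet-20241022
ollama/llama3
```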

Key Features

For an overview of the key features available inside a conversation, please refer to the Key Features section of the documentation.

Status Indicator

The status indicator located in the bottom left of the screen cycles through a number of states as a new conversation is loaded. Typically these include:
  • Disconnected: The frontend is not connected to any conversation.
  • Connecting: The frontend is opening a websocket connection to a conversation.
  • Building Runtime...: The server is building a runtime. This typically happens only in development mode while a Docker image is being built.
  • Starting Runtime...: The server is starting a new runtime instance, usually a new Docker container or remote runtime.
  • Initializing Agent...: The server is starting the agent loop. (This step does not currently appear with nested runtimes.)
  • Setting up workspace...: Usually a git clone ... operation.
  • Setting up git hooks: Setting up the git pre-commit hooks for the workspace.
  • Agent is awaiting user input...: Ready to go!

Tips for Effective Use

  • Be specific in your requests to get the most accurate and helpful responses, as described in the prompting best practices.
  • Use one of the recommended models, as described in the LLMs section.

Other Ways to Run OpenHands