When using a local LLM, OpenHands may have limited functionality. It is highly recommended that you use GPUs to serve local models for an optimal experience.
Start OpenHands with `make run`, then open http://localhost:3000 in your browser.

In the Settings, under the `LLM` tab, set:

- Custom Model: `openai/mistralai/devstral-small-2505` (the Model API identifier from LM Studio, prefixed with `openai/`)
- Base URL: `http://host.docker.internal:1234/v1`
- API Key: `local-llm`
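Before saving these settings, it can help to confirm that the LM Studio server is actually reachable. Below is a minimal sketch using `curl`, assuming LM Studio's local server is running on its default port `1234`; note that `localhost` works from the host machine, while OpenHands running in Docker reaches the same server via `host.docker.internal`. The API key is typically ignored by local servers.

```bash
# Sanity-check the OpenAI-compatible endpoint exposed by LM Studio (default port 1234).
# The model name must match the Model API identifier shown in LM Studio.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer local-llm" \
  -d '{
        "model": "mistralai/devstral-small-2505",
        "messages": [{"role": "user", "content": "Say hello in one word."}]
      }'
```

If this returns a JSON chat completion, the Base URL and model identifier above should work from OpenHands as well.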
If you are serving the model yourself (e.g. with Ollama, SGLang, or vLLM), use instead:

- Custom Model: `openai/<served-model-name>` (e.g. `openai/devstral` if you're using Ollama, or `openai/Devstral-Small-2505` for SGLang or vLLM)
- Base URL: `http://host.docker.internal:<port>/v1` (use port `11434` for Ollama, or `8000` for SGLang and vLLM)
- API Key: any value (e.g. `dummy`, `local-llm`, or `mykey`)
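For reference, the sketch below shows one way these backends are commonly started so that they listen on the ports mentioned above. Exact commands, flags, and model identifiers vary by version and by the model you pull, so treat them as illustrative assumptions rather than the definitive invocation.

```bash
# Ollama: serves an OpenAI-compatible API on port 11434 by default.
# (On desktop installs the server may already be running.)
ollama pull devstral          # model name as published in the Ollama library
ollama serve

# vLLM: serves an OpenAI-compatible API on port 8000 by default.
vllm serve mistralai/Devstral-Small-2505 \
  --served-model-name Devstral-Small-2505

# SGLang: launch its OpenAI-compatible server; pass --port 8000 to match the Base URL above.
python -m sglang.launch_server \
  --model-path mistralai/Devstral-Small-2505 \
  --port 8000
```

Once the server is up, the same `curl` check shown earlier (with the port and model name adjusted) can confirm the endpoint before configuring OpenHands.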