openhands.core.config.llm_config

LLMConfig Objects

@dataclass
class LLMConfig()

Configuration for the LLM model.

Attributes:

  • model - The model to use.
  • api_key - The API key to use.
  • base_url - The base URL for the API. This is necessary for local LLMs. It is also used for Azure embeddings.
  • api_version - The version of the API.
  • embedding_model - The embedding model to use.
  • embedding_base_url - The base URL for the embedding API.
  • embedding_deployment_name - The name of the deployment for the embedding API. This is used for Azure OpenAI.
  • aws_access_key_id - The AWS access key ID.
  • aws_secret_access_key - The AWS secret access key.
  • aws_region_name - The AWS region name.
  • num_retries - The number of retries to attempt.
  • retry_multiplier - The multiplier for the exponential backoff.
  • retry_min_wait - The minimum time to wait between retries, in seconds. This is the exponential backoff minimum. For models with very low rate limits, this can be set to 15-20.
  • retry_max_wait - The maximum time to wait between retries, in seconds. This is the exponential backoff maximum.
  • timeout - The timeout for the API.
  • max_message_chars - The approximate maximum number of characters in the content of an event included in the prompt to the LLM. Larger observations are truncated.
  • temperature - The temperature for the API.
  • top_p - The top p for the API.
  • custom_llm_provider - The custom LLM provider to use. This is undocumented in openhands, and normally not used. It is documented on the litellm side.
  • max_input_tokens - The maximum number of input tokens. Note that this is currently unused, and the value at runtime is actually the total tokens in OpenAI (e.g. 128,000 tokens for GPT-4).
  • max_output_tokens - The maximum number of output tokens. This is sent to the LLM.
  • input_cost_per_token - The cost per input token. This will be available in logs for the user to check.
  • output_cost_per_token - The cost per output token. This will be available in logs for the user to check.
  • ollama_base_url - The base URL for the OLLAMA API.
  • drop_params - Drop any unmapped (unsupported) params without causing an exception.
  • modify_params - Allow litellm to transform params, such as adding a default message when a message is empty.
  • disable_vision - If the model is vision capable, this option allows image processing to be disabled (useful for cost reduction).
  • caching_prompt - Use the prompt caching feature if provided by the LLM and supported by the provider.
  • log_completions - Whether to log LLM completions to the state.
  • log_completions_folder - The folder to log LLM completions to. Required if log_completions is True.
  • draft_editor - A more efficient LLM to use for file editing. Introduced in PR 3985.
  • custom_tokenizer - A custom tokenizer to use for token counting.
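A minimal configuration built from these attributes might look like the sketch below. The stand-in dataclass mirrors only a handful of the fields above so the example runs without OpenHands installed; the field names and defaults are assumptions drawn from the attribute list, not copied from the library.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative stand-in for a few LLMConfig fields (not the real class).
@dataclass
class LLMConfig:
    model: str = 'gpt-4o'
    api_key: Optional[str] = None
    num_retries: int = 8
    retry_min_wait: int = 15   # seconds; raise this for low-rate-limit models
    temperature: float = 0.0

# Local LLM: override the model and slow down the retry backoff.
config = LLMConfig(model='ollama/llama3', retry_min_wait=20)
```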

max_message_chars

The maximum number of characters in an observation's content when sent to the LLM.

defaults_to_dict

def defaults_to_dict() -> dict

Serialize fields to a dict for the frontend, including type hints, defaults, and whether it's optional.
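The shape of this per-field metadata can be sketched with standard dataclass introspection. The exact keys OpenHands emits are an assumption here; the sketch only shows the idea of pairing each field with its default and an "optional" flag.

```python
import typing
from dataclasses import MISSING, dataclass, fields

@dataclass
class LLMConfig:  # illustrative stand-in, not the real class
    model: str = 'gpt-4o'
    api_key: typing.Optional[str] = None

    def defaults_to_dict(self) -> dict:
        # One entry per field: its default value and whether it is Optional.
        out = {}
        for f in fields(self):
            default = None if f.default is MISSING else f.default
            optional = typing.get_origin(f.type) is typing.Union
            out[f.name] = {'default': default, 'optional': optional}
        return out

meta = LLMConfig().defaults_to_dict()
```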

__post_init__

def __post_init__()

Post-initialization hook to assign OpenRouter-related variables to environment variables. This ensures that these values are accessible to litellm at runtime.
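The pattern described here can be sketched as follows: a `__post_init__` hook copies config values into environment variables so a downstream library can pick them up at runtime. `OR_SITE_URL` and `OR_APP_NAME` are the variable names litellm documents for OpenRouter, but treat the field names and values below as illustrative assumptions.

```python
import os
from dataclasses import dataclass

@dataclass
class OpenRouterConfig:  # illustrative stand-in, not the real class
    openrouter_site_url: str = 'https://docs.all-hands.dev/'
    openrouter_app_name: str = 'OpenHands'

    def __post_init__(self):
        # Expose the values via the environment so litellm can read them.
        os.environ['OR_SITE_URL'] = self.openrouter_site_url
        os.environ['OR_APP_NAME'] = self.openrouter_app_name

cfg = OpenRouterConfig()
```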

to_safe_dict

def to_safe_dict()

Return a dict with the sensitive fields replaced with ******.
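The masking idea can be sketched like this: serialize the dataclass, then overwrite any populated sensitive fields with `******`. Which fields count as sensitive is an assumption in this sketch.

```python
from dataclasses import asdict, dataclass
from typing import Optional

@dataclass
class LLMConfig:  # illustrative stand-in, not the real class
    model: str = 'gpt-4o'
    api_key: Optional[str] = None
    aws_secret_access_key: Optional[str] = None

    def to_safe_dict(self) -> dict:
        d = asdict(self)
        # Mask only fields that actually hold a secret.
        for key in ('api_key', 'aws_secret_access_key'):
            if d.get(key) is not None:
                d[key] = '******'
        return d

safe = LLMConfig(api_key='sk-secret').to_safe_dict()
```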

from_dict

@classmethod
def from_dict(cls, llm_config_dict: dict) -> 'LLMConfig'

Create an LLMConfig object from a dictionary.

Keys in the dictionary map directly to LLMConfig fields, with the exception of the 'draft_editor' key, whose value is itself parsed into a nested LLMConfig object.
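The recursive handling of 'draft_editor' can be sketched as below. The simplified stand-in class and the filtering of unknown keys are assumptions, not the actual OpenHands implementation.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class LLMConfig:  # illustrative stand-in, not the real class
    model: str = 'gpt-4o'
    api_key: Optional[str] = None
    draft_editor: Optional['LLMConfig'] = None

    @classmethod
    def from_dict(cls, llm_config_dict: dict) -> 'LLMConfig':
        known = {f.name for f in fields(cls)}
        kwargs = {k: v for k, v in llm_config_dict.items() if k in known}
        # 'draft_editor' is the one nested key: recurse into it.
        if isinstance(kwargs.get('draft_editor'), dict):
            kwargs['draft_editor'] = cls.from_dict(kwargs['draft_editor'])
        return cls(**kwargs)

cfg = LLMConfig.from_dict({
    'model': 'claude-3-5-sonnet',
    'draft_editor': {'model': 'gpt-4o-mini'},
})
```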