
openhands.core.config.llm_config

LLMConfig Objects

class LLMConfig(BaseModel)

Configuration for the LLM model.

Attributes:

  • model - The model to use.
  • api_key - The API key to use.
  • base_url - The base URL for the API. This is necessary for local LLMs. It is also used for Azure embeddings.
  • api_version - The version of the API.
  • embedding_model - The embedding model to use.
  • embedding_base_url - The base URL for the embedding API.
  • embedding_deployment_name - The name of the deployment for the embedding API. This is used for Azure OpenAI.
  • aws_access_key_id - The AWS access key ID.
  • aws_secret_access_key - The AWS secret access key.
  • aws_region_name - The AWS region name.
  • num_retries - The number of retries to attempt.
  • retry_multiplier - The multiplier for the exponential backoff.
  • retry_min_wait - The minimum time to wait between retries, in seconds (the exponential-backoff floor). For models with very low rate limits, this can be set to 15-20.
  • retry_max_wait - The maximum time to wait between retries, in seconds (the exponential-backoff ceiling).
  • timeout - The timeout for the API.
  • max_message_chars - The approximate maximum number of characters in the content of an event included in the prompt to the LLM. Larger observations are truncated.
  • temperature - The temperature for the API.
  • top_p - The top-p value for the API.
  • custom_llm_provider - The custom LLM provider to use. This is undocumented in OpenHands and normally not used; it is documented on the litellm side.
  • max_input_tokens - The maximum number of input tokens. Note that this is currently unused; the value at runtime is actually the model's total token limit (e.g. 128,000 tokens for GPT-4).
  • max_output_tokens - The maximum number of output tokens. This is sent to the LLM.
  • input_cost_per_token - The cost per input token. This is made available in logs for the user to check.
  • output_cost_per_token - The cost per output token. This is made available in logs for the user to check.
  • ollama_base_url - The base URL for the OLLAMA API.
  • drop_params - Drop any unmapped (unsupported) params without raising an exception.
  • modify_params - Allow litellm to apply transformations, such as adding a default message when a message is empty.
  • disable_vision - If the model is vision-capable, this option disables image processing (useful for cost reduction).
  • caching_prompt - Use the prompt-caching feature if provided by the LLM and supported by the provider.
  • log_completions - Whether to log LLM completions to the state.
  • log_completions_folder - The folder to log LLM completions to. Required if log_completions is True.
  • custom_tokenizer - A custom tokenizer to use for token counting.
  • native_tool_calling - Whether to use native tool calling if supported by the model. Can be True, False, or not set.
  • reasoning_effort - The effort to put into reasoning: one of 'low', 'medium', 'high', or 'none'. Only used for o1 models.
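The retry settings above describe a standard exponential-backoff schedule. As an illustrative sketch (assumed semantics, not the library's actual retry code), the wait before each attempt might combine the three values like this:

```python
def backoff_wait(attempt: int,
                 retry_min_wait: float = 15.0,
                 retry_max_wait: float = 120.0,
                 retry_multiplier: float = 2.0) -> float:
    """Wait time in seconds before the given retry attempt (0-based).

    Starts at retry_min_wait, grows by retry_multiplier each attempt,
    and is capped at retry_max_wait.
    """
    return min(retry_max_wait, retry_min_wait * retry_multiplier ** attempt)
```

For example, with the defaults above the waits would be 15, 30, 60, then 120 seconds for every later attempt.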

max_message_chars

The maximum number of characters in an observation's content when sent to the LLM.
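A minimal sketch of how such a limit is typically applied (this is not the actual OpenHands implementation; the helper name and elision marker are illustrative assumptions):

```python
def truncate_content(content: str, max_message_chars: int) -> str:
    """Keep the head and tail of overlong content, marking the elision."""
    if len(content) <= max_message_chars:
        return content
    half = max_message_chars // 2
    return content[:half] + "\n... [content truncated] ...\n" + content[-half:]
```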

model_post_init

def model_post_init(__context: Any)

Post-initialization hook to assign OpenRouter-related variables to environment variables.

This ensures that these values are accessible to litellm at runtime.