openhands.llm.llm

LLM Objects

```python
class LLM(RetryMixin, DebugMixin)
```

The LLM class represents a Language Model instance.

Attributes:

  • config - an LLMConfig object specifying the configuration of the LLM.

__init__

```python
def __init__(config: LLMConfig, metrics: Metrics | None = None)
```

Initializes the LLM. If an LLMConfig is passed, its values are used as fallback defaults; simple parameters passed directly always override the config.

Arguments:

  • config - The LLM configuration.
  • metrics - The Metrics object to use; optional, defaults to None.
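
A minimal construction sketch. The LLMConfig import path and its field names are assumptions here; verify them against your installed version of openhands:

```python
# Hedged sketch: construct an LLM from an LLMConfig.
from openhands.core.config import LLMConfig  # assumed module path
from openhands.llm.llm import LLM

config = LLMConfig(
    model="gpt-4o",    # assumed field: the litellm model identifier
    api_key="sk-...",  # assumed field: the provider API key
)

# metrics defaults to None per the signature above.
llm = LLM(config)
```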

completion

```python
@property
def completion()
```

Decorator for the litellm completion function.

See the complete documentation at https://litellm.vercel.app/docs/completion
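
Since the wrapped function follows litellm's completion interface, a call looks like the sketch below. It assumes the wrapper injects the model from the LLM's config, so only messages need to be supplied:

```python
# Hedged sketch: call the wrapped litellm completion function.
response = llm.completion(
    messages=[{"role": "user", "content": "Hello, world!"}],
)
# litellm returns an OpenAI-style response object.
print(response.choices[0].message.content)
```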

is_caching_prompt_active

```python
def is_caching_prompt_active() -> bool
```

Check whether prompt caching is supported and enabled for the current model.

Returns:

  • boolean - True if prompt caching is supported and enabled for the current model, False otherwise.
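
A sketch of one way to use this check, assuming an Anthropic-style cache_control marker (the block shape follows litellm's prompt-caching docs and is an assumption, not part of this API):

```python
# Hedged sketch: only attach provider-specific cache markers when the
# current model supports prompt caching.
system_message = {"role": "system", "content": "You are a helpful assistant."}

if llm.is_caching_prompt_active():
    # Anthropic-style content block with an ephemeral cache marker.
    system_message = {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": "You are a helpful assistant.",
                "cache_control": {"type": "ephemeral"},
            }
        ],
    }
```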

get_token_count

```python
def get_token_count(messages: list[dict] | list[Message]) -> int
```

Get the number of tokens in a list of messages. Pass dicts rather than Message objects for more accurate token counting.

Arguments:

  • messages list - A list of messages, either as a list of dicts or as a list of Message objects.

Returns:

  • int - The number of tokens.
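
A short usage sketch, passing plain dicts as the docstring recommends:

```python
# Hedged sketch: count prompt tokens before sending a request.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the repository README."},
]
n_tokens = llm.get_token_count(messages)
print(f"prompt tokens: {n_tokens}")
```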