Model

CodeActModelUsage dataclass

CodeActModelUsage(total_tokens: int = 0, input_tokens: int = 0, thinking_tokens: int = 0, output_tokens: int = 0, cache_write_tokens: int = 0, cache_read_tokens: int = 0, cost: float | None = None)

Tracks token usage and costs from interactions with code action models.

cost class-attribute instance-attribute

cost: float | None = None

Cost of code action model usage in USD, based on token counts, or None if cost estimation is not available for the used model.

update

update(other: CodeActModelUsage)

Adds token counts and cost of other to this instance. This is used to accumulate usage across multiple interactions.

Parameters:

Name Type Description Default
other CodeActModelUsage

The usage instance to add to this instance.

required
Source code in freeact/model/base.py
def update(self, other: "CodeActModelUsage"):
    """Adds token counts and cost of `other` to this instance.
    This is used to accumulate usage across multiple interactions.

    Args:
        other: The usage instance to add to this instance.
    """

    self.total_tokens += other.total_tokens
    self.input_tokens += other.input_tokens
    self.thinking_tokens += other.thinking_tokens
    self.output_tokens += other.output_tokens
    self.cache_write_tokens += other.cache_write_tokens
    self.cache_read_tokens += other.cache_read_tokens

    if self.cost is None and other.cost is not None:
        self.cost = other.cost
    elif self.cost is not None and other.cost is not None:
        self.cost += other.cost

CodeActModelResponse dataclass

CodeActModelResponse(text: str, is_error: bool, usage: CodeActModelUsage = CodeActModelUsage())

Bases: ABC

A response from a code action model. If the code property is None, it is a final response to the user; otherwise, it is a code action.

text instance-attribute

text: str

Response text generated by a code action model. Depending on the strategy to generate code actions, this may or may not include the generated code. If it contains code, it is extracted and available in the code property.

is_error instance-attribute

is_error: bool

Whether the response text contains error information. If True, text contains error information that is NOT related to code execution errors. Such errors are handled internally by freeact rather than by applications.

usage class-attribute instance-attribute

usage: CodeActModelUsage = field(default_factory=CodeActModelUsage)

Token usage and costs from the interaction with a code action model.

code abstractmethod property

code: str | None

Executable code generated by a code action model. If None, this response is a final response to the user.

CodeActModelTurn

Bases: ABC

A single interaction with a code action model. This is either initiated by a user query or code execution feedback (code action results or execution errors).

response abstractmethod async

response() -> CodeActModelResponse

Retrieve the complete response from a code action model. Waits until the response is available.

Source code in freeact/model/base.py
@abstractmethod
async def response(self) -> CodeActModelResponse:
    """Retrieve the complete response from a code action model. Waits until
    the response is available.
    """

stream abstractmethod

stream() -> AsyncIterator[str]

Stream the code action model's response as it is generated. Once the stream is consumed, response is immediately available without waiting.

Source code in freeact/model/base.py
@abstractmethod
def stream(self) -> AsyncIterator[str]:
    """Stream the code action model's response as it is generated. Once the
    stream is consumed, [`response`][freeact.model.base.CodeActModelTurn.response]
    is immediately available without waiting.
    """
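The stream/response contract can be sketched with a hypothetical concrete turn that yields pre-canned chunks (`FakeTurn` and its chunks are illustrative, not part of the library):

```python
import asyncio
from typing import AsyncIterator


class FakeTurn:
    """Hypothetical stand-in for a concrete CodeActModelTurn."""

    def __init__(self, chunks: list[str]):
        self._chunks = chunks
        self._text: str | None = None

    def stream(self) -> AsyncIterator[str]:
        async def gen():
            parts: list[str] = []
            for chunk in self._chunks:
                parts.append(chunk)
                yield chunk
            # Once the stream is fully consumed, the response is available.
            self._text = "".join(parts)

        return gen()

    async def response(self) -> str:
        if self._text is None:
            # Stream was not consumed yet: consume it here and wait.
            async for _ in self.stream():
                pass
        assert self._text is not None
        return self._text


async def main() -> str:
    turn = FakeTurn(["Hello, ", "world!"])
    async for chunk in turn.stream():
        print(chunk, end="")
    print()
    return await turn.response()  # immediately available: stream was consumed


result = asyncio.run(main())
```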

CodeActModel

Bases: ABC

A code action model.

A code action model is a model that responds with code if it wants to perform an action. An action is performed by executing the generated code.

A code action model responds to user queries and code execution feedback by returning a CodeActModelTurn object which is used to retrieve the model response.

request abstractmethod

request(user_query: str, **kwargs) -> CodeActModelTurn

Initiates an interaction with this model from a user query.

Parameters:

Name Type Description Default
user_query str

The user query (a question, instruction, etc.)

required
**kwargs

Additional interaction-specific parameters

{}

Returns:

Name Type Description
CodeActModelTurn CodeActModelTurn

An object for retrieving the model's response.

Source code in freeact/model/base.py
@abstractmethod
def request(self, user_query: str, **kwargs) -> CodeActModelTurn:
    """Initiates an interaction with this model from a user query.

    Args:
        user_query: The user query (a question, instruction, etc.)
        **kwargs: Additional interaction-specific parameters

    Returns:
        CodeActModelTurn: An object for retrieving the model's response.
    """

feedback abstractmethod

feedback(feedback: str, is_error: bool, tool_use_id: str | None, tool_use_name: str | None, **kwargs) -> CodeActModelTurn

Initiates an interaction with this model from code execution feedback, allowing the model to refine or correct previous responses or to return a final response to the user. A feedback call must follow a previous request or feedback call.

Parameters:

Name Type Description Default
feedback str

The feedback text from code execution.

required
is_error bool

Whether the feedback text contains error information.

required
tool_use_id str | None

Identifier of the tool call that produced the feedback, or None if the feedback is sent as a regular user message.

required
tool_use_name str | None

Name of the tool that produced the feedback, or None.

required
**kwargs

Additional model-specific parameters for the feedback.

{}

Returns:

Name Type Description
CodeActModelTurn CodeActModelTurn

An object for retrieving the model's response.

Source code in freeact/model/base.py
@abstractmethod
def feedback(
    self,
    feedback: str,
    is_error: bool,
    tool_use_id: str | None,
    tool_use_name: str | None,
    **kwargs,
) -> CodeActModelTurn:
    """Initiates an interaction with this model from code execution feedback,
    allowing the model to refine or correct previous responses, or returning
    a final response to the user. A `feedback` call must follow a previous
    `request` or `feedback` call.

    Args:
        feedback: The feedback text from code execution.
        is_error: Whether the `feedback` text contains error information.
        **kwargs: Additional model-specific parameters for the feedback.

    Returns:
        CodeActModelTurn: An object for retrieving the model's response.
    """

LiteCodeActModel

LiteCodeActModel(model_name: str, skill_sources: str | None = None, system_template: str | None = None, execution_output_template: str | None = None, execution_error_template: str | None = None, use_executor_tool: bool | None = None, use_editor_tool: bool | None = None, **kwargs)

Bases: CodeActModel

A LiteLLM-based code action model.

Code actions are generated differently depending on the use_executor_tool argument:

  • use_executor_tool=False: Code actions are included directly into the model's response text, enclosed in <code-action> ... </code-action> tags. Uses the CODE_TAG_SYSTEM_TEMPLATE by default.

  • use_executor_tool=True: Code actions are generated by calling an internal execute_ipython_cell tool. Uses the TOOL_USE_SYSTEM_TEMPLATE by default.

  • use_executor_tool=None: A sensible default is chosen based on the model name and provider. Currently, a tool use approach is used for Anthropic and OpenAI models, a code tag approach is used for all other models.

A custom system template can be provided with the system_template constructor argument. Its semantics should match the use_executor_tool argument value.
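With the code tag approach, a code action is embedded in the response text and must be extracted. A hedged sketch of such an extraction (the response text and the regex are illustrative; freeact's actual extraction logic may differ):

```python
import re

# Build the triple-backtick fence programmatically to keep this example
# self-contained in documentation.
fence = "`" * 3

# A hypothetical model response using the code tag approach
# (use_executor_tool=False).
response_text = (
    "Let me compute that.\n"
    "<code-action>\n"
    f"{fence}python\n"
    "print(6 * 7)\n"
    f"{fence}\n"
    "</code-action>"
)

pattern = r"<code-action>\s*" + fence + r"python\n(.*?)" + fence + r"\s*</code-action>"
match = re.search(pattern, response_text, re.DOTALL)
code = match.group(1).strip() if match else None
```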

Models created with use_editor_tool=True are also able to create and edit files. This allows them to store and edit code actions on disk (= long-term memory). Stored code actions can be loaded as custom skills via get_sources. If use_editor_tool is None, a sensible default is chosen based on the model name and provider. Currently, Anthropic and OpenAI models are configured to use the editor tool.

Parameters:

Name Type Description Default
model_name str

A model name supported by LiteLLM.

required
skill_sources str | None

Source code of Python modules offered to the model as skills. They are utilized by generated code actions, if useful for the task. Skill sources are usually loaded and formatted with get_sources.

None
system_template str | None

A system template that guides the model to generate code actions. Must define a {python_modules} placeholder for skill_sources.

None
execution_output_template str | None

A prompt template for formatting successful code execution output. Must define an {execution_feedback} placeholder.

None
execution_error_template str | None

A prompt template for formatting code execution errors. Must define an {execution_feedback} placeholder.

None
use_executor_tool bool | None

Whether to use a tool-based approach for generating code actions (True) or a code tag based approach (False). If None, a sensible default is chosen based on model_name.

None
use_editor_tool bool | None

Whether to use a file editor tool for creating and editing code action modules on disk. If None, a sensible default is chosen based on model_name.

None
**kwargs

Default chat completion kwargs for request and feedback calls. These are merged with request and feedback specific completion kwargs where the latter have higher precedence in case of conflicting keys. The following kwargs are set internally and must not be set here: stream, stream_options, messages, and tools.

{}
Source code in freeact/model/litellm.py
def __init__(
    self,
    model_name: str,
    skill_sources: str | None = None,
    system_template: str | None = None,
    execution_output_template: str | None = None,
    execution_error_template: str | None = None,
    use_executor_tool: bool | None = None,
    use_editor_tool: bool | None = None,
    **kwargs,
):
    self.model_name = model_name
    self.completion_kwargs = kwargs

    if "max_tokens" not in self.completion_kwargs:
        self.completion_kwargs["max_tokens"] = 8192

    if use_executor_tool is None:
        use_executor_tool = code_executor_tool_use_default(model_name, self.provider_name)

    if use_editor_tool is None:
        use_editor_tool = code_editor_tool_use_default(model_name, self.provider_name)

    if execution_output_template is None:
        execution_output_template = EXECUTION_OUTPUT_TEMPLATE

    if execution_error_template is None:
        execution_error_template = EXECUTION_ERROR_TEMPLATE

    if system_template is None:
        system_template = TOOL_USE_SYSTEM_TEMPLATE if use_executor_tool else CODE_TAG_SYSTEM_TEMPLATE

    self.execution_output_template = execution_output_template
    self.execution_error_template = execution_error_template

    system_instruction = system_template.format(python_modules=skill_sources or "")

    if self.provider_name == "anthropic":
        if self.completion_kwargs.pop("prompt_caching", True):
            system_instruction = [  # type: ignore
                {
                    "type": "text",
                    "text": system_instruction,
                    "cache_control": {
                        "type": "ephemeral",
                    },
                }
            ]

    self.history: list[dict[str, Any]] = [{"role": "system", "content": system_instruction}]
    self.tools: list[dict[str, Any]] = []

    if use_executor_tool:
        self.tools.append(code_executor_tool(model_name))

    if use_editor_tool:
        self.tools.append(code_editor_tool(model_name))

        if flag := beta_flag(model_name):
            self.completion_kwargs["extra_headers"] = flag
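The kwargs precedence described above amounts to a plain dict merge, with per-call kwargs winning on conflicting keys (a simplified illustration, not the library's internal code):

```python
# Default completion kwargs set at construction time ...
default_kwargs = {"max_tokens": 8192, "temperature": 0.0}

# ... merged with request- or feedback-specific kwargs; the latter win
# on conflicting keys.
call_kwargs = {"temperature": 0.7}

merged = {**default_kwargs, **call_kwargs}
```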

tool_names property

tool_names: list[str]

The names of the tools configured for this model.

provider_name property

provider_name: str

The name of the model's provider.

request

request(user_query: str, **kwargs) -> CodeActModelTurn

Initiates an interaction with this model from a user query.

Parameters:

Name Type Description Default
user_query str

The user query (a question, instruction, etc.)

required
**kwargs

Chat completion arguments. These are merged with the model's default completion kwargs. Default completion kwargs have lower precedence in case of conflicting keys.

{}

Returns:

Name Type Description
CodeActModelTurn CodeActModelTurn

An object for retrieving the model's response.

Source code in freeact/model/litellm.py
def request(
    self,
    user_query: str,
    **kwargs,
) -> CodeActModelTurn:
    """Initiates an interaction with this model from a user query.

    Args:
        user_query: The user query (a question, instruction, etc.)
        **kwargs: [Chat completion](https://docs.litellm.ai/docs/completion) arguments.
            These are merged with the model's default completion `kwargs`. Default
            completion `kwargs` have lower precedence in case of conflicting keys.

    Returns:
        CodeActModelTurn: An object for retrieving the model's response.
    """
    user_message = {"role": "user", "content": user_query}

    span_name = "Model request"
    span_input = {"user_query": user_query, **kwargs}

    return LiteLLMTurn(self._stream(user_message, **kwargs), span_name, span_input)

feedback

feedback(feedback: str, is_error: bool, tool_use_id: str | None, tool_use_name: str | None, **kwargs) -> CodeActModelTurn

Initiates an interaction with this model from code execution feedback, allowing the model to refine or correct previous responses or to return a final response to the user. A feedback call must follow a previous request or feedback call.

Parameters:

Name Type Description Default
feedback str

The feedback text from code execution or other actions.

required
is_error bool

Whether the feedback text contains error information.

required
tool_use_id str | None

Identifier of the tool call being answered. If None, the feedback is sent as a user message; otherwise, it is sent as a tool message referencing this id.

required
tool_use_name str | None

Name of the tool that produced the feedback. Execution output and error templates are applied only when this is the internal code executor tool or None; feedback from other tools is passed through unchanged.

required
**kwargs

Chat completion arguments. These are merged with the model's default completion kwargs. Default completion kwargs have lower precedence in case of conflicting keys.

{}

Returns:

Name Type Description
CodeActModelTurn CodeActModelTurn

An object for retrieving the model's response.

Source code in freeact/model/litellm.py
def feedback(
    self,
    feedback: str,
    is_error: bool,
    tool_use_id: str | None,
    tool_use_name: str | None,
    **kwargs,
) -> CodeActModelTurn:
    """Initiates an interaction with this model from code execution feedback,
    allowing the model to refine or correct previous responses, or returning
    a final response to the user. A `feedback` call must follow a previous
    `request` or `feedback` call.

    Args:
        feedback: The feedback text from code execution or other actions.
        is_error: Whether the `feedback` text contains error information.
        **kwargs: [Chat completion](https://docs.litellm.ai/docs/completion) arguments.
            These are merged with the model's default completion `kwargs`. Default
            completion `kwargs` have lower precedence in case of conflicting keys.

    Returns:
        CodeActModelTurn: An object for retrieving the model's response.
    """
    if tool_use_name == tool_name(CODE_EXECUTOR_TOOL) or tool_use_name is None:
        template = self.execution_error_template if is_error else self.execution_output_template
        content = template.format(execution_feedback=feedback)
    else:  # skip application of execution feedback templates for results of other tools
        content = feedback

    if tool_use_id is None:
        feedback_message = {
            "role": "user",
            "content": content,
        }
    else:
        feedback_message = {
            "role": "tool",
            "tool_call_id": tool_use_id,
            "content": content,
        }

    span_name = "Model feedback"
    span_input = {
        "feedback": feedback,
        "is_error": is_error,
        "tool_use_id": tool_use_id,
        "tool_use_name": tool_use_name,
        **kwargs,
    }

    return LiteLLMTurn(self._stream(feedback_message, **kwargs), span_name, span_input)

CODE_TAG_SYSTEM_TEMPLATE module-attribute

CODE_TAG_SYSTEM_TEMPLATE = "You are Freeact Agent, operating as a CodeAct agent, a powerful AI assistant that solves problems by executing Python code. As described in research literature, CodeAct agents use executable Python code as a unified action space to interact with environments, allowing for dynamic adjustment based on execution results.\n\n## Core Capabilities\n\n- You use Python code execution to solve problems\n- You can leverage existing Python libraries and packages\n- You dynamically adjust your approach based on execution results\n- You can self-debug when encountering errors\n- You collaborate with users through natural language\n\n## API Selection Guidelines\n\nWhen solving problems, prioritize specialized domain-specific APIs over general-purpose search APIs for more reliable and accurate results:\n\n1. **Prefer specialized APIs and libraries**:\n   - First check if required modules are available in the `<python-modules>` section\n   - Use purpose-built libraries for specific domains:\n     * `yfinance` for stock/financial market data\n     * `open-meteo` with geocoding for weather forecasts and historical data\n     * GitHub API for repository information and code analysis\n     * Domain-specific data sources for particular industries or fields\n     * Scientific and statistical packages for their respective domains\n\n2. **Use general search APIs only when necessary**:\n   - Resort to `InternetSearch` API only when:\n     * No specialized API exists for the required data\n     * You need general information not available through structured APIs\n     * You need to find which specialized API might be appropriate\n\n3. 
**Combine approaches when beneficial**:\n   - Use specialized APIs for core data retrieval\n   - Supplement with search results for context or explanation\n   - Cross-validate information from multiple sources when accuracy is critical\n\n## Python Modules and Skills\n\n<python-modules>\n{python_modules}\n</python-modules>\n\n## How to Operate\n\n1. **Analyze the user's request** carefully, determining what they need help with\n2. **Think through your solution approach** before writing code\n3. **Use code execution** to interact with the environment, process data, and solve problems\n4. **Interpret execution results** to refine your approach\n5. **Communicate clearly** with users, explaining your thought process\n\n## Code Execution\n\nYou generate Python code that will be executed in an IPython environment. State is persistent across executions, so variables defined in one execution are available in subsequent ones.\n\nTo provide code:\n1. Write valid, well-structured Python code\n2. Enclose your code in triple backtick blocks with the Python language specifier, and additionally wrap the entire code block in `<code-action>` tags:\n   <code-action>\n   ```python\n   # Your code here\n   ```\n   </code-action>\n3. Stop generating output after providing the code block\n4. The user will execute your code and return the results in their next message\n5. Analyze the execution results to refine your approach if needed\n\n## Best Practices\n\n1. **Load libraries appropriately**: Import necessary libraries at the beginning of your solution. Install missing libraries with `!pip install library_name` as needed.\n\n2. **Structured approach to complex problems**:\n   - Break down complex tasks into smaller steps\n   - Use variables to store intermediate results\n   - Leverage control flow (loops, conditionals) for complex operations\n\n3. 
**Self-debugging**:\n   - When encountering errors, carefully read error messages\n   - Make targeted changes to address specific issues\n   - Test step by step to isolate and fix problems\n\n4. **Clear communication**:\n   - Explain your approach to the user in natural language\n   - Interpret code execution results in a way that's meaningful to the user\n   - Be transparent about your reasoning process\n\n5. **Progressive refinement**:\n   - Start with simple approaches and refine based on results\n   - Incrementally build up to your solution\n   - Use the persistent state to build on previous executions\n\n## Interaction Format\n\n1. **For each interaction**:\n   - Start by understanding the user's request\n   - Share your thought process briefly\n   - Write Python code to solve the problem\n   - Enclose the code in ```python ... ``` blocks within `<code-action>` tags\n   - Stop generating further output after the code block\n   - Wait for the user to execute the code and provide the results\n   - Analyze the results in your next response and continue solving the problem\n\nRemember, you're not just providing code - you're helping users solve problems by leveraging Python's capabilities and your ability to reason about code execution results.\n"

TOOL_USE_SYSTEM_TEMPLATE module-attribute

TOOL_USE_SYSTEM_TEMPLATE = "You are Freeact Agent, operating as a CodeAct agent, a powerful AI assistant that solves problems by executing Python code. As described in research literature, CodeAct agents use executable Python code as a unified action space to interact with environments, allowing for dynamic adjustment based on execution results.\n\n## Core Capabilities\n\n- You use Python code execution to solve problems\n- You can leverage existing Python libraries and packages\n- You dynamically adjust your approach based on execution results\n- You can self-debug when encountering errors\n- You collaborate with users through natural language\n\n## API Selection Guidelines\n\nWhen solving problems, prioritize specialized domain-specific APIs over general-purpose search APIs for more reliable and accurate results:\n\n1. **Prefer specialized APIs and libraries**:\n   - First check if required modules are available in the `<python-modules>` section\n   - Use purpose-built libraries for specific domains:\n     * `yfinance` for stock/financial market data\n     * `open-meteo` with geocoding for weather forecasts and historical data\n     * GitHub API for repository information and code analysis\n     * Domain-specific data sources for particular industries or fields\n     * Scientific and statistical packages for their respective domains\n\n2. **Use general search APIs only when necessary**:\n   - Resort to `InternetSearch` API only when:\n     * No specialized API exists for the required data\n     * You need general information not available through structured APIs\n     * You need to find which specialized API might be appropriate\n\n3. 
**Combine approaches when beneficial**:\n   - Use specialized APIs for core data retrieval\n   - Supplement with search results for context or explanation\n   - Cross-validate information from multiple sources when accuracy is critical\n\n## Python Modules and Skills\n\n<python-modules>\n{python_modules}\n</python-modules>\n\n## How to Operate\n\n1. **Analyze the user's request** carefully, determining what they need help with\n2. **Think through your solution approach** before writing code\n3. **Use code execution** to interact with the environment, process data, and solve problems\n4. **Interpret execution results** to refine your approach\n5. **Communicate clearly** with users, explaining your thought process\n\n## Code Execution\n\nYou have access to the `execute_ipython_cell` function that executes Python code in an IPython environment. State is persistent across executions, so variables defined in one execution are available in subsequent ones.\n\nTo execute code:\n1. Write valid, well-structured Python code\n2. Submit it using the execute_ipython_cell function\n3. Analyze the execution results\n4. If errors occur, debug and refine your approach\n\n## Best Practices\n\n1. **Load libraries appropriately**: Import necessary libraries at the beginning of your solution. Install missing libraries with `!pip install library_name` as needed.\n\n2. **Structured approach to complex problems**:\n   - Break down complex tasks into smaller steps\n   - Use variables to store intermediate results\n   - Leverage control flow (loops, conditionals) for complex operations\n\n3. **Self-debugging**:\n   - When encountering errors, carefully read error messages\n   - Make targeted changes to address specific issues\n   - Test step by step to isolate and fix problems\n\n4. 
**Clear communication**:\n   - Explain your approach to the user in natural language\n   - Interpret code execution results in a way that's meaningful to the user\n   - Be transparent about your reasoning process\n\n5. **Progressive refinement**:\n   - Start with simple approaches and refine based on results\n   - Incrementally build up to your solution\n   - Use the persistent state to build on previous executions\n\n## Interaction Format\n\n1. **For each interaction**:\n   - Start by understanding the user's request\n   - Share your thought process briefly\n   - Write and execute code to solve the problem\n   - Interpret results for the user\n   - Continue the conversation based on the user's follow-up questions\n\nRemember, you're not just providing code - you're helping users solve problems by leveraging Python's capabilities and your ability to reason about code execution results.\n"

EXECUTION_OUTPUT_TEMPLATE module-attribute

EXECUTION_OUTPUT_TEMPLATE = "The code was executed successfully. Here is the output:\n\n<execution-output>\n{execution_feedback}\n</execution-output>\n\nBased on this result, you can now:\n1. Interpret the output for the user\n2. Determine if additional code execution is needed\n3. Refine your approach if the results aren't as expected\n\nRemember to explain what the output means in relation to the user's original request.\n"

EXECUTION_ERROR_TEMPLATE module-attribute

EXECUTION_ERROR_TEMPLATE = "The code execution resulted in an error. Here's the error message:\n\n<error-message>\n{execution_feedback}\n</error-message>\n\nPlease:\n1. Carefully analyze the error message to identify the root cause\n2. Explain the issue to the user in simple terms\n3. Revise your code to address the specific error\n4. Consider common causes for this type of error:\n   - Syntax errors\n   - Missing imports or undefined variables\n   - Type mismatches\n   - Logic errors in your implementation\n   - Missing dependencies that need installation\n\nWhen submitting revised code, focus on addressing the specific error while maintaining your overall solution approach.\n"
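Both templates are plain `str.format` templates with a single `{execution_feedback}` placeholder, as required of custom `execution_output_template` and `execution_error_template` arguments. A minimal stand-in shows the mechanics:

```python
# Stand-in mirroring the shape of EXECUTION_OUTPUT_TEMPLATE (abbreviated).
template = (
    "The code was executed successfully. Here is the output:\n\n"
    "<execution-output>\n{execution_feedback}\n</execution-output>\n"
)

message = template.format(execution_feedback="42")
```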