# freeact

> Freeact code action agent

Freeact is a lightweight agent that acts by executing Python code and shell commands. Code actions are key for an agent to improve its own tool library and codebase. Freeact has a tiny core, a small system prompt, and is extensible with agent skills. It relies on a minimal set of generic tools: read, write, execute, subagent, and tool search. Code and shell command execution runs locally in a stateful, sandboxed environment. Freeact can use MCP servers by generating Python APIs for their tools.

# User Guide

# Overview

Supported models

Freeact supports any model compatible with [Pydantic AI](https://ai.pydantic.dev/), with `gemini-3-flash-preview` as the default. See [Models](https://gradion-ai.github.io/freeact/models/index.md) for provider configuration and examples.

## Usage

| Component | Description |
| --- | --- |
| **[Agent SDK](https://gradion-ai.github.io/freeact/sdk/index.md)** | Agent harness and Python API for building freeact applications. |
| **[CLI tool](https://gradion-ai.github.io/freeact/cli/index.md)** | Terminal interface for interactive conversations with a freeact agent. |

## Capabilities

| Capability | Description |
| --- | --- |
| **Code actions** | Freeact agents act via Python code and shell commands. This enables tool composition and intermediate result processing in a single LLM inference pass. |
| **Local execution** | Freeact executes code and shell commands locally in an IPython kernel provided by [ipybox](https://github.com/gradion-ai/ipybox). Data, configuration, and generated tools live in local workspaces. |
| **Sandbox mode** | IPython kernels optionally run in a sandbox environment based on Anthropic's [sandbox-runtime](https://github.com/anthropic-experimental/sandbox-runtime), which enforces filesystem and network restrictions at the OS level. |
| **MCP code mode** | Freeact calls MCP server tools programmatically[1](#fn:1) via generated Python APIs. This enables composition of tool calls in code actions with much lower latency. |
| **Tool discovery** | Tools are discovered via category browsing or hybrid BM25/vector search. On-demand loading frees the context window and scales to larger tool libraries. |
| **Tool authoring** | Agents can create new tools, enhance existing tools, or save code actions as reusable tools. This captures successful experience as executable knowledge. |
| **Agent skills** | Skills give agents new capabilities and expertise based on [agentskills.io](https://agentskills.io/). They compose naturally with code actions and agent-authored tools. |
| **Subagent delegation** | Tasks can be delegated to subagents, each using its own sandbox. This enables specialization and parallelization without cluttering the main agent's context. |
| **Action approval** | Fine-grained approval of code actions and (programmatic) tool calls from both main agents and subagents. Enables human control over potentially risky actions. |
| **Session persistence** | Freeact persists agent state incrementally. Persisted sessions can be resumed and serve as a record for debugging, evaluation, and improvement. |

______________________________________________________________________

1. Freeact also supports MCP server integration via JSON tool calling, but the recommended approach is programmatic tool calling. [↩](#fnref:1 "Jump back to footnote 1 in the text")

# Installation

## Prerequisites

- Python 3.11+
- [uv](https://docs.astral.sh/uv/) package manager
- Node.js 20+ (for MCP servers)

## Workspace Setup

A workspace is a directory where freeact stores configuration, tools, and other resources. Both setup options below require their own workspace directory.

### Option 1: Minimal

The fastest way to get started is using `uvx`, which keeps the virtual environment separate from the workspace:

```
mkdir my-workspace && cd my-workspace
uvx freeact
```

This is ideal when you don't need to install additional Python packages in the workspace.

### Option 2: With Virtual Environment

To create a workspace with its own virtual environment:

```
mkdir my-workspace && cd my-workspace
uv init --bare --python 3.13
uv add freeact
```

Then run freeact with:

```
uv run freeact
```

This approach lets you install additional packages (e.g., `uv add pandas`) that will be available to the agent.

## API Key

Freeact uses `gemini-3-flash-preview` as the [default model](https://gradion-ai.github.io/freeact/models/index.md).
Set the API key in your environment:

```
export GEMINI_API_KEY="your-api-key"
```

Alternatively, place it in a `.env` file in the workspace directory:

.env

```
GEMINI_API_KEY=your-api-key
```

## Sandbox Mode Prerequisites

To run freeact in sandbox mode, install Anthropic's [sandbox-runtime](https://github.com/anthropic-experimental/sandbox-runtime):

```
npm install -g @anthropic-ai/sandbox-runtime@0.0.21
```

Higher versions should also work, but 0.0.21 is the version used in current tests. The required OS-level packages are:

### macOS

```
brew install ripgrep
```

macOS uses the native `sandbox-exec` for process isolation.

### Linux

```
apt-get install bubblewrap socat ripgrep
```

Work in progress

Sandboxing on Linux is currently work in progress.

# Quickstart

This guide shows how to run a simple task using the freeact [CLI tool](#cli-tool) and the [Agent SDK](#agent-sdk).

## CLI Tool

Freeact provides a [CLI tool](https://gradion-ai.github.io/freeact/cli/index.md) for running the agent in a terminal.

### Starting Freeact

Create a workspace directory, set your API key, and start the agent:

```
mkdir my-workspace && cd my-workspace
echo "GEMINI_API_KEY=your-api-key" > .env
uvx freeact
```

See [Installation](https://gradion-ai.github.io/freeact/installation/index.md) for alternative setup options and sandbox mode prerequisites.

Using a different model

Freeact supports any model compatible with Pydantic AI. To switch providers or configure model settings, see [Models](https://gradion-ai.github.io/freeact/models/index.md).

### Generating MCP Tool APIs

On first start, the CLI tool auto-generates Python APIs for [configured](https://gradion-ai.github.io/freeact/configuration/#ptc-servers) MCP servers. For example, it creates `.freeact/generated/mcptools/google/web_search.py` for the `web_search` tool of the bundled `google` MCP server. With the generated Python API, the agent can import and call this tool programmatically.
Custom MCP servers

To call the tools of your own MCP servers programmatically, add them to the [`ptc-servers`](https://gradion-ai.github.io/freeact/configuration/#ptc-servers) section in `.freeact/agent.json`. Freeact auto-generates a Python API for them when the CLI tool starts.

### Running a Task

With this setup and a question like

> who is F1 world champion 2025?

the CLI tool should generate output similar to the following:

The recorded session demonstrates:

- **Progressive tool loading**: The agent progressively loads tool information: it lists categories, lists tools in the `google` category, then reads the `web_search` API to understand its parameters.
- **Programmatic tool calling**: The agent writes Python code that imports the `web_search` tool from `mcptools.google` and calls it programmatically with the user's query.
- **Action approval**: The code action and the programmatic `web_search` tool call are explicitly approved by the user; other tool calls were [pre-approved](https://gradion-ai.github.io/freeact/configuration/#permissions) for this example.

The code execution output shows the search result with source URLs. The agent's response summarizes it.

## Agent SDK

The CLI tool is built on the [Agent SDK](https://gradion-ai.github.io/freeact/sdk/index.md), which you can use directly in your applications.
The following minimal example shows how to run the same task programmatically, with code actions and tool calls auto-approved:

```
import asyncio

from freeact.agent import (
    Agent,
    ApprovalRequest,
    CodeExecutionOutput,
    Response,
    Thoughts,
    ToolOutput,
)
from freeact.agent.config import Config
from freeact.tools.pytools.apigen import generate_mcp_sources


async def main() -> None:
    # Scaffold .freeact/ config directory if needed
    await Config.init()

    # Load configuration from .freeact/
    config = Config()

    # Generate Python APIs for MCP servers in ptc_servers
    for server_name, params in config.ptc_servers.items():
        if not (config.generated_dir / "mcptools" / server_name).exists():
            await generate_mcp_sources({server_name: params}, config.generated_dir)

    async with Agent(config=config) as agent:
        prompt = "Who is the F1 world champion 2025?"

        async for event in agent.stream(prompt):
            match event:
                case ApprovalRequest(tool_name="ipybox_execute_ipython_cell", tool_args=args) as request:
                    print(f"Code action:\n{args['code']}")
                    request.approve(True)
                case ApprovalRequest(tool_name=name, tool_args=args) as request:
                    print(f"Tool: {name}")
                    print(f"Args: {args}")
                    request.approve(True)
                case Thoughts(content=content):
                    print(f"Thinking: {content}")
                case CodeExecutionOutput(text=text):
                    print(f"Code execution output: {text}")
                case ToolOutput(content=content):
                    print(f"Tool call result: {content}")
                case Response(content=content):
                    print(content)


if __name__ == "__main__":
    asyncio.run(main())
```

# Configuration

Freeact configuration is stored in the `.freeact/` directory. This page describes the directory structure and configuration formats. It also describes the structure of [tool directories](#tool-directories).
## Initialization

The `.freeact/` directory is created and populated from bundled templates through three entry points:

| Entry Point | Description |
| --- | --- |
| `freeact` or `freeact run` | Creates config with the [CLI tool](https://gradion-ai.github.io/freeact/cli/index.md) before starting the agent |
| `freeact init` | Creates config with the [CLI tool](https://gradion-ai.github.io/freeact/cli/index.md) without starting the agent |
| `Config.init()` | Creates config programmatically without starting the agent |

All three entry points share the same behavior:

- **Missing files are created** from [default templates](https://github.com/gradion-ai/freeact/tree/main/freeact/agent/config/templates)
- **Existing files are preserved** and never overwritten
- **User modifications persist** across restarts and updates

This allows safe customization: edit any configuration file, and your changes remain intact. If you delete a file, it is recreated from the default template on next initialization.

## Directory Structure

Freeact stores agent configuration and runtime state in `.freeact/`. Project-level customization uses `AGENTS.md` for [project instructions](#project-instructions) and `.agents/skills/` for [custom skills](#custom-skills).

```
<working-dir>/
├── AGENTS.md                # Project instructions (injected into system prompt)
├── .agents/
│   └── skills/              # Custom skills
│       └── <skill-name>/
│           ├── SKILL.md
│           └── ...
└── .freeact/
    ├── agent.json           # Configuration and MCP server definitions
    ├── skills/              # Bundled skills
    │   └── <skill-name>/
    │       ├── SKILL.md     # Skill metadata and instructions
    │       └── ...          # Further skill resources
    ├── generated/           # Generated tool sources (on PYTHONPATH)
    │   ├── mcptools/        # Generated Python APIs from ptc-servers
    │   └── gentools/        # User-defined tools saved from code actions
    ├── plans/               # Task plan storage
    ├── sessions/            # Session trace storage
    │   └── <session-id>/
    │       ├── main.jsonl
    │       └── sub-xxxx.jsonl
    └── permissions.json     # Persisted approval decisions
```

## Configuration File

The `agent.json` file contains agent settings and MCP server configurations:

```
{
  "model": "google-gla:gemini-3-flash-preview",
  "model-settings": { ... },
  "tool-search": "basic",
  "images-dir": null,
  "execution-timeout": 300,
  "approval-timeout": null,
  "enable-subagents": true,
  "max-subagents": 5,
  "kernel-env": {},
  "mcp-servers": {},
  "ptc-servers": {
    "server-name": { ... }
  }
}
```

### Agent Settings

| Setting | Default | Description |
| --- | --- | --- |
| `model` | `google-gla:gemini-3-flash-preview` | [Model identifier](https://gradion-ai.github.io/freeact/models/#model-identifier) in `provider:model-name` format |
| `model-settings` | `{}` | Provider-specific [model settings](https://gradion-ai.github.io/freeact/models/#model-settings) (e.g., thinking config, temperature) |
| `model-provider` | `null` | Custom API credentials, endpoints, or other [provider-specific options](https://gradion-ai.github.io/freeact/models/#model-provider) |
| `images-dir` | `null` | Directory for saving generated images to disk. `null` defaults to `images` in the working directory. |
| `execution-timeout` | `300` | Maximum time in seconds for [code execution](https://gradion-ai.github.io/freeact/execution/index.md). Approval wait time is excluded. `null` means no timeout. |
| `approval-timeout` | `null` | Timeout in seconds for PTC approval requests. `null` means no timeout. |
| `enable-subagents` | `true` | Whether to enable subagent delegation |
| `max-subagents` | `5` | Maximum number of concurrent subagents |
| `kernel-env` | `{}` | Environment variables passed to the IPython kernel. Supports `${VAR}` placeholders resolved against the host environment. |

### `tool-search`

Controls how the agent discovers Python tools:

| Mode | Description |
| --- | --- |
| `basic` | Category browsing with `pytools_list_categories` and `pytools_list_tools` |
| `hybrid` | BM25/vector search with `pytools_search_tools` for natural language queries |

The `tool-search` setting also selects the matching system prompt template (see [System Prompt](#system-prompt)). For hybrid mode environment variables, see [Hybrid Search](#hybrid-search).

### `mcp-servers`

MCP servers called directly via JSON tool calls. Internal servers (`pytools` for basic or hybrid tool search and `filesystem` for file operations) are provided automatically and do not need to be configured. User-defined servers in this section are merged with the internal defaults. If a user entry uses the same key as an internal server, the user entry takes precedence.

Custom MCP servers

Application-specific MCP servers for JSON tool calls can be added to this section as needed.

### `ptc-servers`

MCP servers called programmatically via generated Python APIs. This is freeact's implementation of *code mode*[1](#fn:1), where the agent calls MCP tools by writing code against generated APIs rather than through JSON tool calls. This allows composing multiple tool calls, processing intermediate results, and using control flow within a single code action.

Python APIs must be generated from `ptc-servers` to `.freeact/generated/mcptools/<server-name>/<tool-name>.py` before the agent can use them. The [CLI tool](https://gradion-ai.github.io/freeact/cli/index.md) handles this automatically.
When using the [Agent SDK](https://gradion-ai.github.io/freeact/sdk/index.md), call `generate_mcp_sources()` explicitly. Code actions can then import and call the generated APIs because `.freeact/generated/` is on the kernel's `PYTHONPATH`.

The default configuration includes the bundled `google` MCP server (web search via Gemini):

```
{
  "ptc-servers": {
    "google": {
      "command": "python",
      "args": ["-m", "freeact.tools.gsearch", "--thinking-level", "medium"],
      "env": {"GEMINI_API_KEY": "${GEMINI_API_KEY}"}
    }
  }
}
```

Custom MCP servers

Application-specific MCP servers can be added as needed to `ptc-servers` for programmatic tool calling.

### Server Formats

Both `mcp-servers` and `ptc-servers` support stdio servers and streamable HTTP servers.

### Environment Variables

Server configurations support environment variable references using `${VAR_NAME}` syntax. `Config()` validates that all referenced variables are set. If a variable is missing, loading fails with an error.

## Hybrid Search

When `tool-search` is set to `"hybrid"` in `agent.json`, the hybrid search server reads additional configuration from environment variables.
Default values are provided for all optional variables:

| Variable | Default | Description |
| --- | --- | --- |
| `GEMINI_API_KEY` | *(required)* | API key for the default embedding model |
| `PYTOOLS_DIR` | `.freeact/generated` | Base directory containing `mcptools/` and `gentools/` |
| `PYTOOLS_DB_PATH` | `.freeact/search.db` | Path to SQLite database for the search index |
| `PYTOOLS_EMBEDDING_MODEL` | `google-gla:gemini-embedding-001` | Embedding model identifier |
| `PYTOOLS_EMBEDDING_DIM` | `3072` | Embedding vector dimensions |
| `PYTOOLS_SYNC` | `true` | Sync index with tool directories on startup |
| `PYTOOLS_WATCH` | `true` | Watch tool directories for changes |
| `PYTOOLS_BM25_WEIGHT` | `1.0` | Weight for BM25 (keyword) results in hybrid fusion |
| `PYTOOLS_VEC_WEIGHT` | `1.0` | Weight for vector (semantic) results in hybrid fusion |

To use a different embedding provider, change `PYTOOLS_EMBEDDING_MODEL` to a supported [pydantic-ai embedder](https://ai.pydantic.dev/embeddings/) identifier.

Testing without an API key

Set `PYTOOLS_EMBEDDING_MODEL=test` to use a test embedder that generates deterministic embeddings. This is useful for development and testing but produces meaningless search results.

## System Prompt

The system prompt is an internal resource bundled with the package.
The template used depends on the `tool-search` setting in `agent.json`:

| Mode | Template | Description |
| --- | --- | --- |
| `basic` | `system-basic.md` | Category browsing with `pytools_list_categories` and `pytools_list_tools` |
| `hybrid` | `system-hybrid.md` | Semantic search with `pytools_search_tools` |

The template supports placeholders:

| Placeholder | Description |
| --- | --- |
| `{working_dir}` | The agent's workspace directory |
| `{generated_rel_dir}` | Relative path to the generated tool sources directory |
| `{project_instructions}` | Content from `AGENTS.md`, wrapped in XML-style tags. Omitted if the file is absent or empty. |
| `{skills}` | Rendered metadata from bundled skills (`.freeact/skills/`) and custom skills (`.agents/skills/`). Omitted if no skills exist. |

See the templates for [basic](https://github.com/gradion-ai/freeact/blob/main/freeact/agent/config/prompts/system-basic.md) and [hybrid](https://github.com/gradion-ai/freeact/blob/main/freeact/agent/config/prompts/system-hybrid.md) modes.

## Project Instructions

The agent loads project-specific instructions from an `AGENTS.md` file in the working directory. If the file exists and is non-empty, its content is injected into the system prompt. If the file is absent or empty, the section is omitted.

`AGENTS.md` provides project context to the agent: domain-specific conventions, workflow preferences, or any instructions relevant to the agent's tasks.

## Skills

Skills are filesystem-based capability packages that specialize agent behavior. A skill is a directory containing a `SKILL.md` file with metadata in YAML frontmatter, and optionally further skill resources. Skills follow the [agentskills.io](https://agentskills.io/specification/) specification.
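As an illustration, a minimal `SKILL.md` might look like the following. The frontmatter fields shown (`name`, `description`) follow the agentskills.io specification; the skill itself is hypothetical:

```
---
name: csv-reporting
description: Generate summary reports from CSV files using pandas.
---

# CSV Reporting

Load the CSV with pandas, compute per-column statistics, and write
the summary to a markdown file in the working directory.
```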
Skills are loaded on demand: only metadata is in context initially; full instructions load when relevant.

### Bundled Skills

Freeact contributes three bundled skills to `.freeact/skills/`:

| Skill | Description |
| --- | --- |
| [output-parsers](https://github.com/gradion-ai/freeact/tree/main/freeact/agent/config/templates/skills/output-parsers) | Generate output parsers for `mcptools/` with unstructured return types |
| [saving-codeacts](https://github.com/gradion-ai/freeact/tree/main/freeact/agent/config/templates/skills/saving-codeacts) | Save generated code actions as reusable tools in `gentools/` |
| [task-planning](https://github.com/gradion-ai/freeact/tree/main/freeact/agent/config/templates/skills/task-planning) | Basic task planning and tracking workflows |

Bundled skills are auto-created from templates on [initialization](#initialization). User modifications persist across restarts.

Tool authoring

The `output-parsers` and `saving-codeacts` skills enable tool authoring. See [Enhancing Tools](https://gradion-ai.github.io/freeact/examples/output-parser/index.md) and [Code Action Reuse](https://gradion-ai.github.io/freeact/examples/saving-codeacts/index.md) for walkthroughs.

### Custom Skills

Custom skills are loaded from `.agents/skills/` in the working directory. Each subdirectory containing a `SKILL.md` file is registered as a skill. Metadata of custom skills appears in the system prompt after bundled skills. The `.agents/skills/` directory is not managed by freeact and is not auto-created.

Example

See [Custom Agent Skills](https://gradion-ai.github.io/freeact/examples/agent-skills/index.md) for a walkthrough of installing and using a custom skill.
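The discovery convention described above (every subdirectory of the skills directory that contains a `SKILL.md` becomes a skill) can be sketched in a few lines. This is an illustration of the convention, not freeact's actual loader:

```python
from pathlib import Path


def discover_skills(skills_dir: Path) -> list[str]:
    """Return names of all subdirectories that contain a SKILL.md file."""
    if not skills_dir.is_dir():
        return []  # e.g. .agents/skills/ is optional and not auto-created
    return sorted(
        p.name
        for p in skills_dir.iterdir()
        if p.is_dir() and (p / "SKILL.md").is_file()
    )


print(discover_skills(Path(".agents/skills")))
```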
## Permissions

[Tool permissions](https://gradion-ai.github.io/freeact/sdk/#permissions-api) are stored in `.freeact/permissions.json`, keyed by tool name:

```
{
  "allowed_tools": [
    "tool_name_1",
    "tool_name_2"
  ]
}
```

Tools in `allowed_tools` are auto-approved by the [CLI tool](https://gradion-ai.github.io/freeact/cli/index.md) without prompting. Selecting `"a"` at the approval prompt adds the tool to this list.

## Tool Directories

The agent discovers tools from two directories under `.freeact/generated/`:

### `mcptools/`

Generated Python APIs from `ptc-servers` schemas:

```
.freeact/generated/mcptools/
└── <server-name>/
    └── <tool-name>.py       # Generated tool module
```

### `gentools/`

User-defined tools saved from successful code actions:

```
.freeact/generated/gentools/
└── <category>/
    └── <tool-name>/
        ├── __init__.py
        ├── api.py           # Public interface
        └── impl.py          # Implementation
```

______________________________________________________________________

1. [Code Mode: the better way to use MCP](https://blog.cloudflare.com/code-mode/) [↩](#fnref:1 "Jump back to footnote 1 in the text")

# Code Execution

Freeact executes Python code and shell commands through an IPython kernel provided by [ipybox](https://github.com/gradion-ai/ipybox). Both are submitted via the same `ipybox_execute_ipython_cell` [internal tool](https://gradion-ai.github.io/freeact/sdk/#internal-tools). Python code runs inside the kernel process, while shell commands (prefixed with `!`, e.g., `!ls`, `!git status`, `!uv pip install`) run in subprocesses spawned by the kernel. The kernel is stateful: variables, imports, and function definitions persist across executions within a session.

The bundled [system prompts](https://github.com/gradion-ai/freeact/tree/main/freeact/agent/config/prompts) provide initial guidance on when to use shell commands versus Python code. More detailed guidance can be given in custom [agent skills](https://gradion-ai.github.io/freeact/configuration/#skills).
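Statefulness means that results computed in one execution remain available to later ones. A plain-Python sketch of the idea (in practice, each snippet below would be a separate execution submitted to the same kernel session):

```python
# Execution 1: define state in the kernel process
import math

radius = 2.0
area = math.pi * radius ** 2

# Execution 2, later in the same session: previously defined
# names are still available without re-importing or recomputing
print(f"area = {area:.2f}")  # → area = 12.57
```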
## Python Code

Given a prompt like *"what is 17 raised to the power of 0.13"*, the agent generates and executes Python code directly:

```
print(17 ** 0.13)
```

```
1.4453011884051326
```

## Shell Commands

Given a prompt like *"which .py files in tests/ contain ipybox"*, the agent uses a shell command with the `!` prefix:

```
!grep -r "ipybox" tests/ --include="*.py" -l
```

```
tests/unit/test_agent.py
tests/conftest.py
tests/integration/test_agent.py
tests/integration/test_subagents.py
```

Each `!` line spawns a separate subprocess. Multi-line shell scripts can use the `%%bash` cell magic, which runs as a single subprocess:

```
%%bash
cd /tmp
echo "Now in $(pwd)"
ls -la
```

Shell state (working directory, variables) does not persist across `!` lines but persists within a `%%bash` block. Neither carries state to the next cell execution.

## Mixing

Both Python and shell commands can be freely combined within a single code action. A common pattern is installing a package and using it immediately:

```
!uv pip install pandas

import pandas as pd
df = pd.read_csv("data.csv")
print(df.describe())
```

Shell output can be captured into Python variables:

```
files = !ls /data/*.csv
print(f"Found {len(files)} CSV files")
```

Python variables can be interpolated into shell commands:

```
filename = "report.pdf"
!cp /tmp/{filename} /output/
```

# Sandbox Mode

Freeact can restrict filesystem and network access for [code execution](https://gradion-ai.github.io/freeact/execution/index.md) and MCP servers using [ipybox sandbox](https://gradion-ai.github.io/ipybox/sandbox/) and Anthropic's [sandbox-runtime](https://github.com/anthropic-experimental/sandbox-runtime).

Prerequisites

Check the installation instructions for [sandbox mode prerequisites](https://gradion-ai.github.io/freeact/installation/#sandbox-mode-prerequisites).
## Code Execution Scope

Sandbox restrictions apply equally to Python code and shell commands, as both [execute](https://gradion-ai.github.io/freeact/execution/index.md) in the same IPython kernel.

### CLI Tool

The `--sandbox` option enables sandboxed [code execution](https://gradion-ai.github.io/freeact/execution/index.md):

```
freeact --sandbox
```

A custom configuration file can override the [default restrictions](#default-restrictions):

```
freeact --sandbox --sandbox-config sandbox-config.json
```

### Agent SDK

The `sandbox` and `sandbox_config` parameters of the `Agent` constructor provide the same functionality:

```
from pathlib import Path

agent = Agent(
    ...
    sandbox=True,
    sandbox_config=Path("sandbox-config.json"),
)
```

### Default Restrictions

Without a custom configuration file, sandbox mode applies these defaults:

- **Filesystem**: Read all files except `.env`; write to the current directory and subdirectories
- **Network**: Internet access blocked; local network access to the tool execution server permitted

### Custom Configuration

sandbox-config.json

```
{
  "network": {
    "allowedDomains": ["example.org"],
    "deniedDomains": [],
    "allowLocalBinding": true
  },
  "filesystem": {
    "denyRead": ["sandbox-config.json"],
    "allowWrite": [".", "~/Library/Jupyter/", "~/.ipython/"],
    "denyWrite": ["sandbox-config.json"]
  }
}
```

This macOS-specific example configuration allows additional network access to `example.org`. The filesystem settings permit writes to `~/Library/Jupyter/` and `~/.ipython/`, which are required for running a sandboxed IPython kernel. The sandbox configuration file itself is protected from reads and writes.

## MCP Servers

MCP servers run as separate processes and are not affected by [code execution sandboxing](#code-execution). Local stdio servers can be sandboxed independently by wrapping the server command with the `srt` tool from sandbox-runtime.
This applies to both [`mcp-servers`](https://gradion-ai.github.io/freeact/configuration/#mcp-servers) and [`ptc-servers`](https://gradion-ai.github.io/freeact/configuration/#ptc-servers) in the [configuration file](https://gradion-ai.github.io/freeact/configuration/#configuration-file).

### Filesystem MCP Server

This example shows a sandboxed [filesystem MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/filesystem) in the `mcp-servers` section:

.freeact/agent.json

```
{
  "mcp-servers": {
    "filesystem": {
      "command": "srt",
      "args": [
        "--settings", "sandbox-filesystem-mcp.json",
        "npx", "-y", "@modelcontextprotocol/server-filesystem", "."
      ]
    }
  }
}
```

The sandbox configuration blocks `.env` reads and allows network access to the npm registry, which is required for `npx` to download the server package:

sandbox-filesystem-mcp.json

```
{
  "filesystem": {
    "denyRead": [".env"],
    "allowWrite": [".", "~/.npm"],
    "denyWrite": []
  },
  "network": {
    "allowedDomains": ["registry.npmjs.org"],
    "deniedDomains": [],
    "allowLocalBinding": true
  }
}
```

### Fetch MCP Server

This example shows a sandboxed [fetch MCP server](https://github.com/modelcontextprotocol/servers/tree/main/src/fetch). First, install it locally with:

```
uv add mcp-server-fetch
uv add "httpx[socks]>=0.28.1"
```

Then add it to the `ptc-servers` section:

.freeact/agent.json

```
{
  "ptc-servers": {
    "fetch": {
      "command": "srt",
      "args": [
        "--settings", "sandbox-fetch-mcp.json",
        "python", "-m", "mcp_server_fetch"
      ]
    }
  }
}
```

The sandbox configuration blocks `.env` reads and restricts the MCP server to fetch only from `example.com`.
Access to the npm registry is required for the server's internal operations:

sandbox-fetch-mcp.json

```
{
  "filesystem": {
    "denyRead": [".env"],
    "allowWrite": [".", "~/.npm", "/tmp/**", "/private/tmp/**"],
    "denyWrite": []
  },
  "network": {
    "allowedDomains": ["registry.npmjs.org", "example.com"],
    "deniedDomains": [],
    "allowLocalBinding": true
  }
}
```

# Models

Freeact supports any model compatible with [Pydantic AI](https://ai.pydantic.dev/models/). The model is configured in [`.freeact/agent.json`](https://gradion-ai.github.io/freeact/configuration/#configuration-file) through three settings:

| Setting | Required | Description |
| --- | --- | --- |
| `model` | yes | Model identifier in `provider:model-name` format |
| `model-settings` | no | Provider-specific settings passed to the model (e.g., thinking config, temperature) |
| `model-provider` | no | Provider constructor kwargs for custom endpoints or credentials |

## Model Identifier

The `model` field uses Pydantic AI's `provider:model-name` format. Common providers:

| Provider | Prefix | Example |
| --- | --- | --- |
| Google (Gemini API) | `google-gla:` | `google-gla:gemini-3-flash-preview` |
| Google (Vertex AI) | `google-vertex:` | `google-vertex:gemini-3-flash-preview` |
| Anthropic | `anthropic:` | `anthropic:claude-sonnet-4-6` |
| OpenAI | `openai:` | `openai:gpt-5.2` |
| OpenRouter | `openrouter:` | `openrouter:anthropic/claude-sonnet-4.6` |

See Pydantic AI's [model documentation](https://ai.pydantic.dev/models/) for the full list of supported providers and model names.
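The identifier format splits at the first colon, so model names may themselves contain slashes or dots (as in the OpenRouter example). A minimal sketch of parsing such an identifier; this is illustrative, not freeact's or Pydantic AI's actual code:

```python
def parse_model_id(model_id: str) -> tuple[str, str]:
    """Split a 'provider:model-name' identifier at the first colon."""
    provider, _, name = model_id.partition(":")
    if not name:
        raise ValueError(f"expected 'provider:model-name', got {model_id!r}")
    return provider, name


print(parse_model_id("openrouter:anthropic/claude-sonnet-4.6"))
# → ('openrouter', 'anthropic/claude-sonnet-4.6')
```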
## Provider Examples

### Google (default)

The default configuration uses Google's Gemini API with dynamic thinking enabled:

```
{
  "model": "google-gla:gemini-3-flash-preview",
  "model-settings": {
    "google_thinking_config": {
      "thinking_level": "high",
      "include_thoughts": true
    }
  }
}
```

Set the `GEMINI_API_KEY` environment variable to authenticate.

### Anthropic

```
{
  "model": "anthropic:claude-sonnet-4-6",
  "model-settings": {
    "anthropic_thinking": { "type": "adaptive" }
  }
}
```

Set the `ANTHROPIC_API_KEY` environment variable to authenticate.

### OpenAI

```
{
  "model": "openai:gpt-5.2",
  "model-settings": {
    "openai_reasoning_effort": "medium"
  }
}
```

Set the `OPENAI_API_KEY` environment variable to authenticate.

### OpenRouter

Providers like OpenRouter require `model-provider` to pass constructor kwargs (API key, app metadata) to the provider:

```
{
  "model": "openrouter:anthropic/claude-sonnet-4.6",
  "model-settings": {
    "anthropic_thinking": { "type": "adaptive" }
  },
  "model-provider": {
    "api_key": "${OPENROUTER_API_KEY}",
    "app_url": "https://my-app.example.com",
    "app_title": "freeact"
  }
}
```

### OpenAI-Compatible Endpoints

Any OpenAI-compatible API can be used by setting `base_url` in `model-provider`:

```
{
  "model": "openai:my-custom-model",
  "model-settings": {
    "temperature": 0.7
  },
  "model-provider": {
    "base_url": "https://my-api.example.com/v1",
    "api_key": "${CUSTOM_API_KEY}"
  }
}
```

## Model Settings

`model-settings` is passed directly to Pydantic AI's model request. Available settings depend on the provider.

### Extended Thinking

Freeact streams thinking content when the model supports it. Thinking is configured through provider-specific settings in `model-settings`.

**Google (Gemini)**:

```
"model-settings": {
  "google_thinking_config": {
    "thinking_level": "high",
    "include_thoughts": true
  }
}
```

`thinking_level` accepts `"low"`, `"medium"`, or `"high"`. Set `include_thoughts` to `true` to stream thinking content.
**Anthropic** (Opus 4.6, Sonnet 4.6): ``` "model-settings": { "anthropic_thinking": { "type": "adaptive" }, "anthropic_effort": "high" } ``` Adaptive thinking lets the model decide when and how much to think. `anthropic_effort` accepts `"low"`, `"medium"`, `"high"`, or `"max"` (Opus only). The default is `"high"`. **OpenAI**: ``` "model-settings": { "openai_reasoning_effort": "medium" } ``` `openai_reasoning_effort` accepts `"low"`, `"medium"`, or `"high"`. ### Common Settings | Setting | Description | | ------------- | --------------------------------- | | `temperature` | Controls randomness (e.g., `0.7`) | | `max_tokens` | Maximum response tokens | See Pydantic AI's [settings documentation](https://ai.pydantic.dev/api/settings/) for the full reference. ## Model Provider `model-provider` configures custom API credentials, endpoints, or other provider-specific options. Provider config supports `${VAR}` placeholders resolved against the host environment. Missing variables cause a startup error. When `model-provider` is omitted, Pydantic AI resolves the provider from the model name prefix and uses its default authentication (typically an environment variable like `GEMINI_API_KEY` or `ANTHROPIC_API_KEY`). # Agent SDK The Agent SDK provides four main APIs: - [Configuration API](https://gradion-ai.github.io/freeact/api/config/index.md) for initializing and loading configuration from `.freeact/` - [Generation API](https://gradion-ai.github.io/freeact/api/generate/index.md) for generating Python APIs for MCP server tools - [Agent API](https://gradion-ai.github.io/freeact/api/agent/index.md) for running the agentic code action loop - [Permissions API](https://gradion-ai.github.io/freeact/api/permissions/index.md) for managing approval decisions ## Configuration API Use Config.init() to scaffold the `.freeact/` directory from default templates. 
The Config() constructor loads all configuration from it:

```
from freeact.agent.config import Config

# Scaffold .freeact/ config directory if needed
await Config.init()

# Load configuration from .freeact/
config = Config()
```

See the [Configuration](https://gradion-ai.github.io/freeact/configuration/index.md) reference for details on the `.freeact/` directory structure.

## Generation API

MCP servers [configured](https://gradion-ai.github.io/freeact/configuration/#ptc-servers) as `ptc-servers` in `agent.json` require Python API generation with generate_mcp_sources() before the agent can call their tools programmatically:

```
from freeact.tools.pytools.apigen import generate_mcp_sources

# Generate Python APIs for MCP servers in ptc_servers
for server_name, params in config.ptc_servers.items():
    if not (config.generated_dir / "mcptools" / server_name).exists():
        await generate_mcp_sources({server_name: params}, config.generated_dir)
```

Generated APIs are stored as `.freeact/generated/mcptools/<server-name>/<tool-name>.py` modules and persist across agent sessions. The `.freeact/generated/` directory is on the kernel's `PYTHONPATH`, so the agent can import them directly:

```
from mcptools.google.web_search import run, Params

result = run(Params(query="python async tutorial"))
```

## Agent API

The Agent class implements the agentic code action loop, handling code action generation, [code execution](https://gradion-ai.github.io/freeact/execution/index.md), tool calls, and the approval workflow. Each stream() call runs a single agent turn, with the agent managing conversation history across calls. Use `stream()` to iterate over [events](#events) and handle them with pattern matching:

```
from freeact.agent import (
    Agent,
    ApprovalRequest,
    CodeExecutionOutput,
    Response,
    Thoughts,
    ToolOutput,
)

async with Agent(config=config) as agent:
    prompt = "Who is the F1 world champion 2025?"
    async for event in agent.stream(prompt):
        match event:
            case ApprovalRequest(tool_name="ipybox_execute_ipython_cell", tool_args=args) as request:
                print(f"Code action:\n{args['code']}")
                request.approve(True)
            case ApprovalRequest(tool_name=name, tool_args=args) as request:
                print(f"Tool: {name}")
                print(f"Args: {args}")
                request.approve(True)
            case Thoughts(content=content):
                print(f"Thinking: {content}")
            case CodeExecutionOutput(text=text):
                print(f"Code execution output: {text}")
            case ToolOutput(content=content):
                print(f"Tool call result: {content}")
            case Response(content=content):
                print(content)
```

For processing output incrementally, match the `*Chunk` event variants listed below.

### Events

The Agent.stream() method yields events as they occur:

| Event | Description |
| ------------------------ | ------------------------------------------------- |
| ThoughtsChunk | Partial model thoughts (content streaming) |
| Thoughts | Complete model thoughts at a given step |
| ResponseChunk | Partial model response (content streaming) |
| Response | Complete model response |
| ApprovalRequest | Pending code action or tool call approval |
| CodeExecutionOutputChunk | Partial code execution output (content streaming) |
| CodeExecutionOutput | Complete code execution output |
| ToolOutput | Tool or built-in operation output |

All yielded events inherit from AgentEvent and carry `agent_id`.
### Internal tools The agent uses a small set of internal tools for reading and writing files, executing code and commands, spawning subagents, and discovering tools: | Tool | Implementation | Description | | ----------- | ------------------------------------------------------- | -------------------------------------------------------------------------------------------------- | | read, write | filesystem MCP server | Reading and writing files via JSON tool calls | | execute | `ipybox_execute_ipython_cell` | Execution of Python code and shell commands (via `!` prefix), delegated to ipybox's `CodeExecutor` | | subagent | [`subagent_task`](#subagents) | Task delegation to child agents | | tool search | `pytools` MCP server for basic search and hybrid search | Tool discovery via category browsing or hybrid search | ### Turn limits Use `max_turns` to limit the number of tool-execution rounds before the stream stops: ``` async for event in agent.stream(prompt, max_turns=50): ... ``` If `max_turns=None` (default), the loop continues until the model produces a final response. ### Subagents The built-in `subagent_task` tool delegates a subtask to a child agent with a fresh IPython kernel and fresh MCP server connections. The child inherits model, system prompt, and sandbox settings from the parent. Its events flow through the parent's stream using the same [approval](#approval) mechanism, with `agent_id` identifying the source: ``` async for event in agent.stream(prompt): match event: case ApprovalRequest(agent_id=agent_id) as request: print(f"[{agent_id}] Approve {request.tool_name}?") request.approve(True) case Response(content=content, agent_id=agent_id): print(f"[{agent_id}] {content}") ``` The main agent's `agent_id` is `main`, subagent IDs use the form `sub-xxxx`. Each delegated task defaults to `max_turns=100`. The [`max-subagents`](https://gradion-ai.github.io/freeact/configuration/#agent-settings) setting in `agent.json` limits concurrent subagents (default 5). 
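Because subagent events are interleaved with the parent's on a single stream, handlers often route output by `agent_id`. A minimal sketch of such routing, using a simplified stand-in event class rather than the real AgentEvent:

```python
from collections import defaultdict
from dataclasses import dataclass


# Simplified stand-in; real freeact events carry agent_id the same way
@dataclass
class Event:
    agent_id: str
    content: str


def route_by_agent(events) -> dict[str, list[str]]:
    """Group an interleaved event stream by originating agent."""
    streams: dict[str, list[str]] = defaultdict(list)
    for event in events:
        streams[event.agent_id].append(event.content)
    return dict(streams)


merged = [
    Event("main", "delegating lookups"),
    Event("sub-a1b2", "partial result A"),
    Event("sub-c3d4", "partial result B"),
    Event("main", "combined answer"),
]
print(route_by_agent(merged))
```

In a UI, such per-agent buckets could back separate panes or prefixed log lines, as the CLI does with `[agent_id]` prefixes.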
### Approval

The agent provides a unified approval mechanism. It yields ApprovalRequest for all code actions, programmatic tool calls, and JSON tool calls. Execution is suspended until `approve()` is called. Calling `approve(True)` executes the code action or tool call; `approve(False)` rejects it and ends the current agent turn.

```
async for event in agent.stream(prompt):
    match event:
        case ApprovalRequest() as request:
            # Inspect the pending action
            print(f"Tool: {request.tool_name}")
            print(f"Args: {request.tool_args}")
            # Approve or reject
            request.approve(True)
        case Response(content=content):
            print(content)
```

**Code action approval**: For code actions, `tool_name` is `ipybox_execute_ipython_cell` and `tool_args` contains the `code` to execute.

### Lifecycle

The agent manages MCP server connections and an IPython kernel via [ipybox](https://gradion-ai.github.io/ipybox/). On entering the async context manager, the IPython kernel starts and MCP servers configured for JSON tool calling connect. MCP servers configured for programmatic tool calling connect lazily on first tool call.

```
config = Config()

async with Agent(config=config) as agent:
    async for event in agent.stream(prompt):
        ...
# Connections closed, kernel stopped
```

Without using the async context manager:

```
config = Config()
agent = Agent(config=config)

await agent.start()
try:
    async for event in agent.stream(prompt):
        ...
finally:
    await agent.stop()
```

### Timeouts

The agent supports two timeout settings in [`agent.json`](https://gradion-ai.github.io/freeact/configuration/#agent-settings):

- **`execution-timeout`**: Maximum time in seconds for each [code execution](https://gradion-ai.github.io/freeact/execution/index.md). Approval wait time is excluded from this budget, so the timeout only counts actual execution time. Defaults to 300 seconds. Set to `null` to disable.
- **`approval-timeout`**: Timeout for approval requests during programmatic tool calls.
If an approval request is not accepted or rejected within this time, the tool call fails. Defaults to `null` (no timeout).

```
{
  "execution-timeout": 60,
  "approval-timeout": 30
}
```

### Persistence

SessionStore persists agent message history to `.freeact/sessions/<session-id>/<agent-id>.jsonl`. Each agent turn appends messages incrementally, so the history is durable even if the process terminates mid-session.

```
import uuid

from freeact.agent.store import SessionStore

# Create a session store with a new session ID
session_id = str(uuid.uuid4())
session_store = SessionStore(config.sessions_dir, session_id)
```

Pass the store to Agent to enable persistence.

```
# Run agent with session persistence
async with Agent(config=config, session_store=session_store) as agent:
    await handle_events(agent, "What is the capital of France?")
    await handle_events(agent, "What about Germany?")
```

To resume a session, create a new `SessionStore` with the same `session_id`. The agent loads the persisted message history on startup and continues from where it left off.

```
# Resume session with the same session ID
session_store = SessionStore(config.sessions_dir, session_id)

async with Agent(config=config, session_store=session_store) as agent:
    # Previous message history is restored automatically
    await handle_events(agent, "And what was the first country we discussed?")
```

Only the main agent's message history (`main.jsonl`) is loaded on resume. Subagent messages are persisted to separate files (`sub-xxxx.jsonl`) for auditing but are not rehydrated.

The [CLI tool](https://gradion-ai.github.io/freeact/cli/index.md) accepts `--session-id` to resume a session from the command line.

## Permissions API

**Work in progress**: Current permission management is preliminary and will be reimplemented in a future release.

The agent requests approval for each code action and tool call but doesn't remember past decisions.
PermissionManager adds memory: `allow_always()` persists to `.freeact/permissions.json`, while `allow_session()` stores in-memory until the session ends:

```
from freeact.permissions import PermissionManager
from ipybox.utils import arun

manager = PermissionManager()
await manager.load()

async for event in agent.stream(prompt):
    match event:
        case ApprovalRequest() as request:
            if manager.is_allowed(request.tool_name, request.tool_args):
                request.approve(True)
            else:
                choice = await arun(input, "Allow? [Y/n/a/s]: ")
                match choice:
                    case "a":
                        await manager.allow_always(request.tool_name)
                        request.approve(True)
                    case "s":
                        manager.allow_session(request.tool_name)
                        request.approve(True)
                    case "n":
                        request.approve(False)
                    case _:
                        request.approve(True)
```

# CLI tool

**Work in progress**: The [terminal interface](#interactive-mode) is preliminary and will be reimplemented in a future release.

The `freeact` or `freeact run` command starts the [interactive mode](#interactive-mode):

```
freeact
```

A `.freeact/` [configuration](https://gradion-ai.github.io/freeact/configuration/index.md) directory is created automatically if it does not exist yet. The `init` subcommand initializes the configuration directory without starting the interactive mode:

```
freeact init
```

## Options

| Option | Description |
| ----------------------- | -------------------------------------------------------------------------------------------- |
| `--sandbox` | Run code execution in [sandbox mode](https://gradion-ai.github.io/freeact/sandbox/index.md). |
| `--sandbox-config PATH` | Path to sandbox configuration file. |
| `--session-id UUID` | Resume a previous session by its UUID. Generates a new UUID if omitted. |
| `--log-level LEVEL` | Set logging level: `debug`, `info` (default), `warning`, `error`, `critical`. |
| `--record` | Record the conversation as SVG and HTML files. |
| `--record-dir PATH` | Output directory for recordings (default: `output`). |
| `--record-title TEXT` | Title for the recording (default: `Conversation`). |

## Examples

Running code execution in [sandbox mode](https://gradion-ai.github.io/freeact/sandbox/index.md):

```
freeact --sandbox
```

Running with a [custom sandbox configuration](https://gradion-ai.github.io/freeact/sandbox/#custom-configuration):

```
freeact --sandbox --sandbox-config sandbox-config.json
```

Resuming a previous [session](https://gradion-ai.github.io/freeact/sdk/#persistence):

```
freeact --session-id 550e8400-e29b-41d4-a716-446655440000
```

Recording a session for documentation:

```
freeact --record --record-dir docs/recordings/demo --record-title "Demo Session"
```

## Interactive Mode

The interactive mode provides a conversation interface with the agent in a terminal window.

### User messages

| Key | Action |
| -------------------------------------------------- | -------------- |
| `Enter` | Send message |
| `Option+Enter` (macOS) `Alt+Enter` (Linux/Windows) | Insert newline |
| `q` + `Enter` | Quit |

### Image Attachments

Reference images using `@path` syntax:

```
@screenshot.png What does this show?
@images/ Describe these images
```

- Single file: `@path/to/image.png`
- Directory: `@path/to/dir/` includes all images in directory, non-recursive
- Supported formats: PNG, JPG, JPEG, GIF, WEBP
- Tab completion available for paths

Images are automatically downscaled if larger than 1024 pixels in either dimension.

### Approval Prompt

Before executing code actions or tool calls, the agent requests approval:

```
Approve? [Y/n/a/s]:
```

| Response | Effect |
| -------------- | -------------------------------------------------------- |
| `Y` or `Enter` | Approve once |
| `n` | Reject once (ends the current agent turn) |
| `a` | Approve always (persists to `.freeact/permissions.json`) |
| `s` | Approve for current session |

See [Permissions API](https://gradion-ai.github.io/freeact/sdk/#permissions-api) for details.
# API Reference

## freeact.agent.Agent

```
Agent(
    config: Config,
    agent_id: str | None = None,
    sandbox: bool = False,
    sandbox_config: Path | None = None,
    session_store: SessionStore | None = None,
)
```

Code action agent that executes Python code and shell commands.

Fulfills user requests by writing code and running it in a stateful IPython kernel provided by ipybox. Variables persist across executions. MCP server tools can be called in two ways:

- JSON tool calls: MCP servers called directly via structured arguments
- Programmatic tool calls (PTC): agent writes Python code that imports and calls tool APIs, auto-generated from MCP schemas (`mcptools/`) or user-defined (`gentools/`)

All code actions and tool calls require approval. The `stream()` method yields ApprovalRequest events that must be resolved before execution proceeds.

Use as an async context manager or call `start()`/`stop()` explicitly.

Initialize the agent.

Parameters:

| Name | Type | Description | Default |
| ---------------- | ---------------------- | -------------------------------------------------------------------------------------------------------------- | ---------- |
| `config` | `Config` | Agent configuration containing model, system prompt, MCP servers, kernel env, timeouts, and subagent settings. | *required* |
| `agent_id` | `str \| None` | Identifier for this agent instance. Defaults to `"main"` when not provided. | `None` |
| `sandbox` | `bool` | Run the kernel in sandbox mode. | `False` |
| `sandbox_config` | `Path \| None` | Path to custom sandbox configuration. | `None` |
| `session_store` | `SessionStore \| None` | Store for persisting message history. If None, history is kept in memory only. | `None` |

### start

```
start() -> None
```

Restore persisted history, start the code executor and MCP servers.

Automatically called when entering the async context manager.

### stop

```
stop() -> None
```

Stop the code executor and MCP servers.

Automatically called when exiting the async context manager.

### stream

```
stream(
    prompt: str | Sequence[UserContent],
    max_turns: int | None = None,
) -> AsyncIterator[AgentEvent]
```

Run a single agent turn, yielding events as they occur.

Loops through model responses and tool executions until the model produces a response without tool calls. All code actions and tool calls yield an ApprovalRequest that must be resolved before execution proceeds.

Parameters:

| Name | Type | Description | Default |
| ----------- | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- |
| `prompt` | `str \| Sequence[UserContent]` | User message as text or multimodal content sequence. | *required* |
| `max_turns` | `int \| None` | Maximum number of tool-execution rounds. Each round consists of a model response followed by tool execution. If None, runs until the model stops calling tools. | `None` |

Returns:

| Type | Description |
| --------------------------- | ------------------------ |
| `AsyncIterator[AgentEvent]` | An async event iterator. |

## freeact.agent.AgentEvent

```
AgentEvent(*, agent_id: str = '')
```

Base class for all agent stream events.

Carries the `agent_id` of the agent that produced the event, allowing callers to distinguish events from a parent agent vs. its subagents.

## freeact.agent.ApprovalRequest

```
ApprovalRequest(
    tool_name: str,
    tool_args: dict[str, Any],
    _future: Future[bool] = Future(),
    *,
    agent_id: str = "",
)
```

Bases: `AgentEvent`

Pending code action or tool call awaiting user approval.

Yielded by Agent.stream() before executing any code action, programmatic tool call, or JSON tool call. The stream is suspended until `approve()` is called.

### approve

```
approve(decision: bool) -> None
```

Resolve this approval request.
Parameters: | Name | Type | Description | Default | | ---------- | ------ | ---------------------------------------------------------------- | ---------- | | `decision` | `bool` | True to execute, False to reject and end the current agent turn. | *required* | ### approved ``` approved() -> bool ``` Await until `approve()` is called and return the decision. ## freeact.agent.Response ``` Response(content: str, *, agent_id: str = '') ``` Bases: `AgentEvent` Complete model response at a given step. ## freeact.agent.ResponseChunk ``` ResponseChunk(content: str, *, agent_id: str = '') ``` Bases: `AgentEvent` Partial model response text (content streaming). ## freeact.agent.Thoughts ``` Thoughts(content: str, *, agent_id: str = '') ``` Bases: `AgentEvent` Complete model thoughts at a given step. ## freeact.agent.ThoughtsChunk ``` ThoughtsChunk(content: str, *, agent_id: str = '') ``` Bases: `AgentEvent` Partial model thinking text (content streaming). ## freeact.agent.CodeExecutionOutput ``` CodeExecutionOutput( text: str | None, images: list[Path], *, agent_id: str = "" ) ``` Bases: `AgentEvent` Complete code execution output. ## freeact.agent.CodeExecutionOutputChunk ``` CodeExecutionOutputChunk(text: str, *, agent_id: str = '') ``` Bases: `AgentEvent` Partial code execution output (content streaming). ## freeact.agent.ToolOutput ``` ToolOutput(content: ToolResult, *, agent_id: str = '') ``` Bases: `AgentEvent` Tool or built-in operation output. ## freeact.agent.store.SessionStore ``` SessionStore( sessions_root: Path, session_id: str, flush_after_append: bool = False, ) ``` Persist and restore per-agent pydantic-ai message history as JSONL. ### append ``` append(agent_id: str, messages: list[ModelMessage]) -> None ``` Append serialized messages to an agent-specific session log. Each message is written as a versioned JSONL envelope with a UTC timestamp. The session file is created on demand. 
Parameters:

| Name | Type | Description | Default |
| ---------- | -------------------- | ----------------------------------------------------------------------------------------------- | ---------- |
| `agent_id` | `str` | Logical agent stream name (for example, "main" or "sub-1234"), used as the JSONL filename stem. | *required* |
| `messages` | `list[ModelMessage]` | Messages to append in order. | *required* |

### load

```
load(agent_id: str) -> list[ModelMessage]
```

Load and validate all persisted messages for an agent.

Returns an empty list when no session file exists. If the final line is truncated (for example from an interrupted write), that line is ignored. Earlier malformed lines raise `ValueError`.

Parameters:

| Name | Type | Description | Default |
| ---------- | ----- | -------------------------------------------------------- | ---------- |
| `agent_id` | `str` | Logical agent stream name used to locate the JSONL file. | *required* |

Returns:

| Type | Description |
| -------------------- | --------------------------------------------- |
| `list[ModelMessage]` | Deserialized message history in append order. |

## freeact.agent.config.Config

```
Config(working_dir: Path | None = None)
```

Configuration loader for the `.freeact/` directory structure.

Loads and parses all configuration on instantiation: skills metadata, system prompts, MCP servers (JSON tool calls), and PTC servers (programmatic tool calling). Internal MCP servers (pytools, filesystem) are defined as constants in this module. User-defined servers from `agent.json` override internal configs when they share the same key.

Attributes:

| Name | Type | Description |
| ------------------- | ---------------- | ----------------------------------------------------------------------- |
| `working_dir` | `Path` | Agent's working directory. |
| `freeact_dir` | `Path` | Path to `.freeact/` configuration directory. |
| `model` | `str` | LLM model name or instance. |
| `model_settings` | `dict` | Model-specific settings (e.g., thinking config). |
| `tool_search` | `str` | Tool discovery mode read from `agent.json`. |
| `images_dir` | `Path \| None` | |
| `execution_timeout` | `float \| None` | Maximum time in seconds for each code execution. |
| `approval_timeout` | `float \| None` | Timeout in seconds for approval requests. |
| `enable_subagents` | `bool` | Whether to enable subagent delegation. |
| `max_subagents` | `int` | Maximum number of concurrent subagents. |
| `kernel_env` | `dict[str, str]` | Environment variables passed to the IPython kernel. |
| `skills_metadata` | | Parsed skill definitions from `.freeact/skills/` and `.agents/skills/`. |
| `system_prompt` | | Rendered system prompt loaded from package resources. |
| `mcp_servers` | | Merged and resolved MCP server configs. |
| `ptc_servers` | | Raw PTC server configs loaded from `agent.json`. |
| `sessions_dir` | `Path` | Session trace storage directory. |

### freeact_dir

```
freeact_dir: Path
```

Path to `.freeact/` configuration directory.

### generated_dir

```
generated_dir: Path
```

Generated MCP tool sources directory.

### plans_dir

```
plans_dir: Path
```

Plan storage directory.

### search_db_file

```
search_db_file: Path
```

Hybrid search database path.

### sessions_dir

```
sessions_dir: Path
```

Session trace storage directory.

### working_dir

```
working_dir: Path
```

Agent's working directory.

### for_subagent

```
for_subagent() -> Config
```

Create a subagent configuration from this config.

Returns a shallow copy with subagent-specific overrides: subagents disabled, mcp_servers deep-copied with pytools sync/watch disabled, and kernel_env shallow-copied for independence.

### init

```
init(working_dir: Path | None = None) -> None
```

Scaffold `.freeact/` directory from bundled templates.

Copies template files that don't already exist, preserving user modifications. Runs blocking I/O in a separate thread.
Parameters:

| Name | Type | Description | Default |
| ------------- | -------------- | ------------------------------------------------------ | ------- |
| `working_dir` | `Path \| None` | Base directory. Defaults to current working directory. | `None` |

## freeact.agent.config.SkillMetadata

```
SkillMetadata(name: str, description: str, path: Path)
```

Metadata parsed from a skill's SKILL.md frontmatter.

## freeact.agent.config.PYTOOLS_BASIC_CONFIG

```
PYTOOLS_BASIC_CONFIG: dict[str, Any] = {
    "command": "python",
    "args": ["-m", "freeact.tools.pytools.search.basic"],
    "env": {"PYTOOLS_DIR": "${PYTOOLS_DIR}"},
}
```

## freeact.agent.config.PYTOOLS_HYBRID_CONFIG

```
PYTOOLS_HYBRID_CONFIG: dict[str, Any] = {
    "command": "python",
    "args": ["-m", "freeact.tools.pytools.search.hybrid"],
    "env": {
        "GEMINI_API_KEY": "${GEMINI_API_KEY}",
        "PYTOOLS_DIR": "${PYTOOLS_DIR}",
        "PYTOOLS_DB_PATH": "${PYTOOLS_DB_PATH}",
        "PYTOOLS_EMBEDDING_MODEL": "${PYTOOLS_EMBEDDING_MODEL}",
        "PYTOOLS_EMBEDDING_DIM": "${PYTOOLS_EMBEDDING_DIM}",
        "PYTOOLS_SYNC": "${PYTOOLS_SYNC}",
        "PYTOOLS_WATCH": "${PYTOOLS_WATCH}",
        "PYTOOLS_BM25_WEIGHT": "${PYTOOLS_BM25_WEIGHT}",
        "PYTOOLS_VEC_WEIGHT": "${PYTOOLS_VEC_WEIGHT}",
    },
}
```

## freeact.agent.config.FILESYSTEM_CONFIG

```
FILESYSTEM_CONFIG: dict[str, Any] = {
    "command": "npx",
    "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        ".",
    ],
    "excluded_tools": [
        "create_directory",
        "list_directory",
        "list_directory_with_sizes",
        "directory_tree",
        "move_file",
        "search_files",
        "list_allowed_directories",
        "read_file",
    ],
}
```

## freeact.tools.pytools.apigen.generate_mcp_sources

```
generate_mcp_sources(
    config: dict[str, dict[str, Any]],
    generated_dir: Path,
) -> None
```

Generate Python API for MCP servers in `config`.

For servers not already in `mcptools/` categories, generates Python API using `ipybox.generate_mcp_sources`.
Parameters: | Name | Type | Description | Default | | --------------- | --------------------------- | --------------------------------------------------------- | ---------- | | `config` | `dict[str, dict[str, Any]]` | Dictionary mapping server names to server configurations. | *required* | | `generated_dir` | `Path` | Directory for generated tool sources. | *required* | ## freeact.permissions.PermissionManager ``` PermissionManager(freeact_dir: Path = Path('.freeact')) ``` Tool permission gating with two-tier approval: always-allowed (persisted) and session-only (in-memory). Filesystem tools targeting paths within `.freeact/` are auto-approved without explicit permission grants. ### allow_always ``` allow_always(tool_name: str) -> None ``` Grant permanent permission for a tool and persist to disk. ### allow_session ``` allow_session(tool_name: str) -> None ``` Grant permission for a tool until the session ends (not persisted). ### is_allowed ``` is_allowed( tool_name: str, tool_args: dict[str, Any] | None = None ) -> bool ``` Check if a tool call is pre-approved. Returns `True` if the tool is in the always-allowed or session-allowed set, or if it's a filesystem tool operating within `.freeact/`. ### load ``` load() -> None ``` Load always-allowed tools from `.freeact/permissions.json`. ### save ``` save() -> None ``` Persist always-allowed tools to `.freeact/permissions.json`.
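The two-tier gating above can be sketched as a minimal, self-contained stand-in. The class below mirrors the documented semantics (persisted always-allowed tier vs. in-memory session tier) but is illustrative only, not the real `PermissionManager` implementation:

```python
import json
import tempfile
from pathlib import Path


class TwoTierPermissions:
    """Illustrative stand-in for PermissionManager's two-tier gating."""

    def __init__(self, store: Path):
        self.store = store
        self.always: set[str] = set()   # persisted across sessions
        self.session: set[str] = set()  # cleared when the process ends

    def load(self) -> None:
        # Restore only the persisted tier from disk
        if self.store.exists():
            self.always = set(json.loads(self.store.read_text()))

    def allow_always(self, tool: str) -> None:
        self.always.add(tool)
        self.store.write_text(json.dumps(sorted(self.always)))

    def allow_session(self, tool: str) -> None:
        self.session.add(tool)  # in-memory only

    def is_allowed(self, tool: str) -> bool:
        return tool in self.always or tool in self.session


store = Path(tempfile.mkdtemp()) / "permissions.json"
perms = TwoTierPermissions(store)
perms.allow_always("ipybox_execute_ipython_cell")
perms.allow_session("web_search")

# A fresh instance (a new "session") restores only the persisted tier
fresh = TwoTierPermissions(store)
fresh.load()
print(fresh.is_allowed("ipybox_execute_ipython_cell"))  # True
print(fresh.is_allowed("web_search"))                   # False
```

This illustrates why `allow_session()` grants do not survive a restart while `allow_always()` grants do.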