
Quickstart

The following example runs a conversational freeact agent in a terminal user interface, configured with anthropic/claude-3-7-sonnet-20250219 as the code action model and with skills for generative Google search with Gemini and PubMed literature search with an MCP server. Both skills are executed in the context of code actions, which the code action model generates, in a sandboxed environment based on IPython and Docker.

The code action model learns about these skills from their source code. It learns how to perform generative Google search with Gemini by inspecting the source of the freeact_skills.search.google.stream.api skill module, which is pre-installed on the ghcr.io/gradion-ai/ipybox:basic code execution container. The source code for invoking PubMed MCP tools is generated automatically by freeact when the MCP server is registered.
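The general idea of exposing a skill's API to a model through its source code can be illustrated with Python's standard inspect module. This is only a generic sketch, not freeact's internal mechanism; textwrap.dedent stands in for a skill function:

```python
import inspect
import textwrap

# The model receives the plain source text of skill functions, e.g.:
skill_source = inspect.getsource(textwrap.dedent)

# The text starts with the function definition and includes its docstring,
# which is what lets a code action model learn how to call the API.
print(skill_source.splitlines()[0])
```

freeact's code provider performs the equivalent step for skill modules and generated MCP tool wrappers, then passes the collected sources to the model.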

Start agent

Place API keys for Anthropic and Gemini in a .env file:

.env
ANTHROPIC_API_KEY=...
GEMINI_API_KEY=...
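The example assumes these keys end up in the process environment (e.g. via your shell or a dotenv loader). A minimal stdlib check, useful before starting the agent, might look like this:

```python
import os

# Collect any required API keys that are not set in the environment.
required = ("ANTHROPIC_API_KEY", "GEMINI_API_KEY")
missing = [key for key in required if not os.environ.get(key)]
print("missing keys:", missing or "none")
```

If `missing` is non-empty, fix the .env file (or your shell environment) before continuing.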

Install the freeact package:

pip install freeact

Then start the agent with this Python script:

import asyncio

from rich.console import Console

from freeact import CodeActAgent, LiteCodeActModel, execution_environment
from freeact.cli.utils import stream_conversation


async def main():
    async with execution_environment(
        ipybox_tag="ghcr.io/gradion-ai/ipybox:basic",
    ) as env:
        async with env.code_provider() as provider:
            mcp_tool_names = await provider.register_mcp_servers(
                {
                    "pubmed": {
                        "command": "uvx",
                        "args": ["--quiet", "pubmedmcp@0.1.3"],
                        "env": {"UV_PYTHON": "3.12"},
                    }
                }
            )
            skill_sources = await provider.get_sources(
                module_names=["freeact_skills.search.google.stream.api"],
                mcp_tool_names=mcp_tool_names,
            )

        async with env.code_executor() as executor:
            model = LiteCodeActModel(
                model_name="anthropic/claude-3-7-sonnet-20250219",
                reasoning_effort="low",
                skill_sources=skill_sources,
            )
            agent = CodeActAgent(model=model, executor=executor)

            # provides a terminal user interface for interacting with the agent
            await stream_conversation(agent, console=Console())


if __name__ == "__main__":
    asyncio.run(main())
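Instead of inlining the MCP server definitions in the script, they can be kept in a JSON file and loaded with the stdlib json module. A sketch, assuming the mcp.json format used by the freeact CLI (here inlined as a string so the snippet is self-contained):

```python
import json

# The same server definitions the freeact CLI reads from mcp.json,
# inlined as a JSON string for illustration.
mcp_json = """
{
    "mcpServers": {
        "pubmed": {
            "command": "uvx",
            "args": ["--quiet", "pubmedmcp@0.1.3"],
            "env": {"UV_PYTHON": "3.12"}
        }
    }
}
"""

servers = json.loads(mcp_json)["mcpServers"]
# `servers` has the dict shape expected by provider.register_mcp_servers(...).
print(sorted(servers))
```

In the script above, the literal dict passed to `provider.register_mcp_servers(...)` could then be replaced by `servers` loaded from the file.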

Alternatively, to start the agent via the freeact CLI, add the MCP server configuration to an mcp.json file:

mcp.json
{
    "mcpServers": {
        "pubmed": {
            "command": "uvx",
            "args": ["--quiet", "pubmedmcp@0.1.3"],
            "env": {"UV_PYTHON": "3.12"}
        }
    }
}

Then start the agent with uvx via the freeact CLI:

uvx freeact \
  --ipybox-tag=ghcr.io/gradion-ai/ipybox:basic \
  --model-name=anthropic/claude-3-7-sonnet-20250219 \
  --reasoning-effort=low \
  --skill-modules=freeact_skills.search.google.stream.api \
  --mcp-servers=mcp.json

Info

The first start of the agent may take a few minutes while the Docker image of the code execution container is downloaded. Subsequent starts are much faster.
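To avoid that first-run delay, the image can be pulled ahead of time with the standard Docker CLI (the tag matches the one used above):

```shell
# Pre-pull the code execution image so the agent's first start is fast.
docker pull ghcr.io/gradion-ai/ipybox:basic
```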

Use agent

Interact with the agent through the terminal user interface, for example by asking questions that require a Google or PubMed search.