MCP integration

To leverage the vast ecosystem of MCP servers and their tools, freeact generates Python client functions from MCP tool metadata and provides them as skills to freeact agents. When freeact agents use these skills in their code actions, they invoke the corresponding MCP server tools. stdio-based MCP servers are executed within the sandboxed environment, while SSE-based MCP servers are expected to run elsewhere.
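To illustrate the two transport types, a server registration mapping may mix both. A minimal sketch, assuming the standard MCP client configuration shape; the remote-tools entry and its URL are hypothetical:

```python
# MCP server configurations, keyed by server name.
# The stdio-based "firecrawl" server is launched inside the sandbox via a
# command, while the SSE-based "remote-tools" entry (hypothetical name and
# URL) only references a server that must already be running elsewhere.
server_configs = {
    "firecrawl": {
        "command": "npx",
        "args": ["-y", "firecrawl-mcp"],
        "env": {"FIRECRAWL_API_KEY": "your-firecrawl-api-key"},
    },
    "remote-tools": {
        "url": "http://localhost:8000/sse",
    },
}
```

A stdio entry is identified by its command key, an SSE entry by its url key.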

MCP servers are first registered with the execution environment via register_mcp_servers. Registration loads the MCP tool metadata from each server and generates Python client functions from it. The sources of these functions (or a subset thereof) can then be loaded with get_sources and provided as skill sources to code action models. This is demonstrated in the following example:

import asyncio
import os

from rich.console import Console

from freeact import CodeActAgent, LiteCodeActModel, execution_environment
from freeact.cli.utils import stream_conversation


async def main():
    async with execution_environment(
        ipybox_tag="ghcr.io/gradion-ai/ipybox:basic",
    ) as env:
        async with env.code_provider() as provider:
            tool_names = await provider.register_mcp_servers(  # (1)!
                {
                    "firecrawl": {
                        "command": "npx",
                        "args": ["-y", "firecrawl-mcp"],
                        "env": {"FIRECRAWL_API_KEY": os.getenv("FIRECRAWL_API_KEY")},
                    }
                }
            )

            assert "firecrawl_scrape" in tool_names["firecrawl"]
            assert "firecrawl_extract" in tool_names["firecrawl"]

            skill_sources = await provider.get_sources(
                mcp_tool_names={
                    "firecrawl": ["firecrawl_scrape", "firecrawl_extract"],  # (2)!
                }
            )

        async with env.code_executor() as executor:
            model = LiteCodeActModel(
                model_name="gpt-4.1",
                skill_sources=skill_sources,
                api_key=os.getenv("OPENAI_API_KEY"),
            )
            agent = CodeActAgent(model=model, executor=executor)
            await stream_conversation(agent, console=Console())


if __name__ == "__main__":
    asyncio.run(main())
  1. Registration generates MCP skill sources and returns the tool names of the registered servers, e.g.

    {
        "firecrawl": [
            "firecrawl_scrape",
            "firecrawl_map",
            "firecrawl_crawl",
            "firecrawl_check_crawl_status",
            "firecrawl_search",
            "firecrawl_extract",
            "firecrawl_deep_research",
            "firecrawl_generate_llmstxt"
        ]
    }
    
  2. Here, we load only a subset of MCP skill sources. To load all skill sources, use mcp_tool_names=tool_names or mcp_tool_names={"firecrawl": None}.
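The selection semantics of the mcp_tool_names argument can be sketched as plain dictionaries (tool names taken from the example output above; this is an illustration of the selector shapes, not freeact's internal logic):

```python
# Tool names as returned by register_mcp_servers for one server.
tool_names = {
    "firecrawl": ["firecrawl_scrape", "firecrawl_map", "firecrawl_extract"],
}

# All tools of all registered servers: pass the returned mapping unchanged.
all_servers = tool_names

# All tools of a single server: None selects every tool of that server.
all_of_one = {"firecrawl": None}

# An explicit subset of a server's tools.
subset = {"firecrawl": ["firecrawl_scrape", "firecrawl_extract"]}
```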

Alternatively, add the MCP server configuration to an mcp.json file under the mcpServers key. An optional mcpTools key selects a subset of MCP tools.

mcp.json
{
    "mcpServers": {
        "firecrawl": {
            "command": "npx",
            "args": ["-y", "firecrawl-mcp"],
            "env": {"FIRECRAWL_API_KEY": "your-firecrawl-api-key"}
        }
    },
    "mcpTools": {
        "firecrawl": ["firecrawl_scrape", "firecrawl_extract"]
    }
}

Then start the agent from the command line with:

uvx freeact \
  --ipybox-tag=ghcr.io/gradion-ai/ipybox:basic \
  --model-name=gpt-4.1 \
  --mcp-servers=mcp.json \
  --api-key=$OPENAI_API_KEY
