Usage

Code examples in the following sections are from the project's examples directory. They use the default gradion-ai/ipybox Docker image, which you need to build yourself with

python -m ipybox build

Tip

Alternatively, you can use one of the prebuilt Docker images, as done in the quickstart, for example.
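
For instance, a prebuilt image can be referenced by its tag (a minimal sketch; the minimal variant shown here is the same one used in the Remote DOCKER_HOST section below):

async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox:minimal") as container:
    ...  # use the container as in the examples below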

Basic usage

Use the ExecutionContainer context manager to create a container from an ipybox Docker image. The container is created on entering the context manager and removed on exit. Use the ExecutionClient context manager to manage the lifecycle of an IPython kernel within the container. A kernel is created on entering the context manager and removed on exit. Call execute on an ExecutionClient instance to execute code in its kernel.

from ipybox import ExecutionClient, ExecutionContainer


async with ExecutionContainer(tag="gradion-ai/ipybox") as container:  # (1)!
    async with ExecutionClient(port=container.executor_port) as client:  # (2)!
        result = await client.execute("print('Hello, world!')")  # (3)!
        print(f"Output: {result.text}")  # (4)!
  1. Create and start a code execution container
  2. Create an IPython kernel in the container
  3. Execute Python code and await the result
  4. Prints: Output: Hello, world!

The execute method accepts an optional timeout argument (defaults to 120 seconds). On timeout, the execution is terminated by interrupting the kernel and a TimeoutError is raised.
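
For example, a long-running execution can be bounded and the timeout handled like this (a minimal sketch; the 5 second timeout and the sleeping code are illustrative):

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        try:
            await client.execute("import time; time.sleep(10)", timeout=5)  # exceeds the timeout
        except TimeoutError:
            print("Execution timed out")  # kernel was interrupted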

Info

Instead of using the ExecutionContainer context manager for lifecycle management, you can also manually run and kill a container.

container = ExecutionContainer()  # (1)!
await container.run()  # (2)!

# do some work ...

await container.kill()  # (3)!
  1. Create an ExecutionContainer instance.
  2. Run the container (detached).
  3. Kill the container.

Stateful code execution

Code executions with the same ExecutionClient instance are stateful. Definitions and variables from previous executions can be used in later executions. Code executions with different ExecutionClient instances run in different kernels and do not share in-memory state.

from ipybox import ExecutionError

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client_1:  # (1)!
        result = await client_1.execute("x = 1")  # (2)!
        assert result.text is None
        result = await client_1.execute("print(x)")  # (3)!
        assert result.text == "1"

    async with ExecutionClient(port=container.executor_port) as client_2:  # (4)!
        try:
            await client_2.execute("print(x)")  # (5)!
        except ExecutionError as e:
            assert e.args[0] == "NameError: name 'x' is not defined"
  1. First client instance
  2. Execute code that defines variable x
  3. Use variable x defined in previous execution
  4. Second client instance
  5. Variable x is not defined in client_2's kernel

Note

While kernels in the same container don't share in-memory state, they can still exchange data by reading and writing files on the shared container filesystem. For full isolation of code executions, you need to run them in different containers.
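
For example, two kernels in the same container can exchange data through a file (a minimal sketch; the file name is illustrative):

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client_1:
        await client_1.execute("with open('data.txt', 'w') as f: f.write('hello')")  # write to the shared filesystem

    async with ExecutionClient(port=container.executor_port) as client_2:
        result = await client_2.execute("print(open('data.txt').read())")  # read the file written by the other kernel
        assert result.text == "hello"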

Execution output streaming

Instead of waiting for code execution to complete, output can also be streamed as it is generated:

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        code = """
        import time
        for i in range(5):
            print(f"Processing step {i}")
            time.sleep(1)
        """  # (1)!

        execution = await client.submit(code)  # (2)!
        print("Streaming output:")
        async for chunk in execution.stream():  # (3)!
            print(f"Received output: {chunk.strip()}")  # (4)!

        result = await execution.result()  # (5)!
        print("\nAggregated output:")
        print(result.text)  # (6)!
  1. Code that produces gradual output every second
  2. Submit the code for execution
  3. Stream the output
  4. Prints one line per second:
    Received output: Processing step 0
    Received output: Processing step 1
    Received output: Processing step 2
    Received output: Processing step 3
    Received output: Processing step 4
    
  5. Get the aggregated output (returns immediately)
  6. Prints the aggregated output:
    Aggregated output:
    Processing step 0
    Processing step 1
    Processing step 2
    Processing step 3
    Processing step 4
    

Install packages at runtime

Python packages can be installed at runtime by executing !pip install <package>:

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        execution = await client.submit("!pip install einops")  # (1)!
        async for chunk in execution.stream():  # (2)!
            print(chunk, end="", flush=True)

        result = await client.execute("""
            import einops
            print(einops.__version__)
        """)  # (3)!
        print(f"Output: {result.text}")  # (4)!
  1. Install the einops package using pip
  2. Stream the installation progress. Something like
    Collecting einops
    Downloading einops-0.8.0-py3-none-any.whl (10.0 kB)
    Installing collected packages: einops
    Successfully installed einops-0.8.0
    
  3. Import and use the installed package
  4. Prints Output: 0.8.0

You can also install and use a package in a single execution step, as shown in the next section.

Generate plots

Plots generated with matplotlib and other visualization libraries are returned as PIL images. Images are not part of the output stream; they can be obtained from the result object's images list.

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        execution = await client.submit("""
            !pip install matplotlib

            import matplotlib.pyplot as plt
            import numpy as np

            x = np.linspace(0, 10, 100)
            plt.figure(figsize=(8, 6))
            plt.plot(x, np.sin(x))
            plt.title('Sine Wave')
            plt.show()

            print("Plot generation complete!")
            """)  # (1)!

        async for chunk in execution.stream():  # (2)!
            print(chunk, end="", flush=True)

        result = await execution.result()
        result.images[0].save("sine.png")  # (3)!
  1. Install matplotlib and generate a plot
  2. Stream output text (installation progress and print statement)
  3. Get attached image from execution result and save it as sine.png

Environment variables

Environment variables for the container can be passed to the ExecutionContainer constructor.

env = {"API_KEY": "secret-key-123", "DEBUG": "1"}

async with ExecutionContainer(env=env) as container:  # (1)!
    async with ExecutionClient(port=container.executor_port) as client:
        result = await client.execute("""
            import os

            api_key = os.environ['API_KEY']
            print(f"Using API key: {api_key}")

            debug = bool(int(os.environ.get('DEBUG', '0')))
            if debug:
                print("Debug mode enabled")
        """)  # (2)!
        print(result.text)  # (3)!
  1. Set environment variables for the container
  2. Access environment variables in executed code
  3. Prints
    Using API key: secret-key-123
    Debug mode enabled
    

Remote DOCKER_HOST

If you want to run a code execution container on a remote host but manage the container locally with ExecutionContainer, set the DOCKER_HOST environment variable to that host. The following example assumes that the remote Docker daemon has been configured to accept TCP connections on port 2375.

import os

HOST = "192.168.94.50"  # (1)!
os.environ["DOCKER_HOST"] = f"tcp://{HOST}:2375"  # (2)!

async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox:minimal") as container:  # (3)!
    async with ExecutionClient(host=HOST, port=container.executor_port) as client:  # (4)!
        result = await client.execute("17 ** 0.13")
        print(f"Output: {result.text}")
  1. Example IP address of the remote Docker host
  2. Remote Docker daemon is accessible via TCP on port 2375
  3. Creates a container on the remote host
  4. Create an IPython kernel in the remote container

MCP integration

ipybox supports the invocation of MCP servers in containers via generated MCP client code. An application first calls generate_mcp_sources to generate a Python function for each tool provided by an MCP server, using the tool's input schema. This needs to be done only once per MCP server. Generated functions are then available on the container's Python path.

The example below generates a fetch function from the input schema of the fetch tool provided by the Fetch MCP server.

from ipybox import ExecutionClient, ExecutionContainer, ResourceClient

server_params = {  # (1)!
    "command": "uvx",
    "args": ["mcp-server-fetch"],
}

async with ExecutionContainer(tag="gradion-ai/ipybox") as container:
    async with ResourceClient(port=container.resource_port) as client:
        tool_names = await client.generate_mcp_sources(  # (2)!
            relpath="mcpgen",
            server_name="fetchurl",
            server_params=server_params,
        )
        assert tool_names == ["fetch"]  # (3)!

    async with ExecutionClient(port=container.executor_port) as client:
        result = await client.execute("""
            from mcpgen.fetchurl.fetch import Params, fetch
            print(fetch(Params(url="https://www.gradion.ai"))[:375])
        """)  # (4)!
        print(result.text)  # (5)!
  1. Configuration of the Fetch MCP server.
  2. Generate MCP client code from an MCP server config. One MCP client function is generated per MCP tool.
  3. List of tool names provided by the MCP server. A single fetch tool in this example.
  4. Execute code that imports and calls the generated MCP client function.
  5. Prints
    ```
                             ___                    _
       ____ __________ _____/ (_)___  ____   ____ _(_)
      / __ `/ ___/ __ `/ __  / / __ \/ __ \ / __ `/ /
     / /_/ / /  / /_/ / /_/ / / /_/ / / / // /_/ / /
     \__, /_/   \__,_/\__,_/_/\____/_/ /_(_)__,_/_/
    /____/
    ```
    

Calling a generated MCP client function executes the corresponding MCP tool. Tools of stdio-based MCP servers are always executed inside the container, while SSE-based MCP servers are expected to run elsewhere. Generated MCP client code can be downloaded from the container with get_mcp_sources, as sketched below.
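
A minimal sketch of downloading the generated sources; the parameter names here mirror the generate_mcp_sources call above and are assumptions, so consult the API reference for the exact signature:

async with ResourceClient(port=container.resource_port) as client:
    # parameter names are assumptions mirroring generate_mcp_sources above
    sources = await client.get_mcp_sources(relpath="mcpgen", server_names=["fetchurl"])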

Application example

freeact agents use the ipybox MCP integration for calling MCP tools in their code actions.