
Usage

Info

Code examples in the following sections are from the project's examples directory. Most of them use a prebuilt Docker image.

Basic usage

Use the ExecutionContainer context manager to create a container from an ipybox Docker image. The container is created on entering the context manager and removed on exit. Use the ExecutionClient context manager to manage the lifecycle of an IPython kernel within the container. A kernel is created on entering the context manager and removed on exit. Call execute on an ExecutionClient instance to execute code in its kernel.

from ipybox import ExecutionClient, ExecutionContainer


async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox") as container:  # (1)!
    async with ExecutionClient(port=container.executor_port) as client:  # (2)!
        result = await client.execute("print('Hello, world!')")  # (3)!
        print(f"Output: {result.text}")  # (4)!
  1. Create and start a code execution container
  2. Create an IPython kernel in the container
  3. Execute Python code and await the result
  4. Prints: Output: Hello, world!

The execute method accepts an optional timeout argument (defaults to 120 seconds). On timeout, the execution is terminated by interrupting the kernel and a TimeoutError is raised.
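For example, a long-running execution can be guarded with a shorter timeout and the resulting TimeoutError handled by the caller. This is a minimal sketch; the 5-second timeout and the sleeping code are illustrative only.

async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox") as container:
    async with ExecutionClient(port=container.executor_port) as client:
        try:
            await client.execute("import time; time.sleep(10)", timeout=5)  # sleeps longer than the timeout
        except TimeoutError:
            print("Execution timed out and the kernel was interrupted")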

Info

Instead of using the ExecutionContainer context manager for lifecycle management, you can also manually run and kill a container.

container = ExecutionContainer()  # (1)!
await container.run()  # (2)!

# do some work ...

await container.kill()  # (3)!
  1. Create an ExecutionContainer instance.
  2. Run the container (detached).
  3. Kill the container.

Stateful code execution

Code executions with the same ExecutionClient instance are stateful. Definitions and variables from previous executions can be used in later executions. Code executions with different ExecutionClient instances run in different kernels and do not share in-memory state.

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client_1:  # (1)!
        result = await client_1.execute("x = 1")  # (2)!
        assert result.text is None
        result = await client_1.execute("print(x)")  # (3)!
        assert result.text == "1"

    async with ExecutionClient(port=container.executor_port) as client_2:  # (4)!
        try:
            await client_2.execute("print(x)")  # (5)!
        except ExecutionError as e:
            assert e.args[0] == "NameError: name 'x' is not defined"
  1. First client instance
  2. Execute code that defines variable x
  3. Use variable x defined in previous execution
  4. Second client instance
  5. Variable x is not defined in client_2's kernel

Note

While kernels in the same container don't share in-memory state, they can still exchange data by writing and reading files on the shared container filesystem. For full isolation of code executions, run them in different containers.
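As a minimal sketch of file-based data exchange, the following writes a file from one kernel and reads it from another kernel in the same container (the file name data.txt is illustrative only):

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client_1:
        await client_1.execute("""
            with open('data.txt', 'w') as f:
                f.write('hello from kernel 1')
        """)  # write to the shared container filesystem

    async with ExecutionClient(port=container.executor_port) as client_2:
        result = await client_2.execute("print(open('data.txt').read())")  # read the file from a different kernel
        assert result.text == "hello from kernel 1"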

Execution output streaming

Instead of waiting for code execution to complete, output can also be streamed as it is generated:

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        code = """
        import time
        for i in range(5):
            print(f"Processing step {i}")
            time.sleep(1)
        """  # (1)!

        execution = await client.submit(code)  # (2)!
        print("Streaming output:")
        async for chunk in execution.stream():  # (3)!
            print(f"Received output: {chunk.strip()}")  # (4)!

        result = await execution.result()  # (5)!
        print("\nAggregated output:")
        print(result.text)  # (6)!
  1. Code that produces gradual output every second
  2. Submit the code for execution
  3. Stream the output
  4. Prints one line per second:
    Received output: Processing step 0
    Received output: Processing step 1
    Received output: Processing step 2
    Received output: Processing step 3
    Received output: Processing step 4
    
  5. Get the aggregated output (returns immediately)
  6. Prints the aggregated output:
    Aggregated output:
    Processing step 0
    Processing step 1
    Processing step 2
    Processing step 3
    Processing step 4
    

Restrict network access

A container allows all outbound internet traffic by default. The init_firewall method restricts outbound traffic to an allowlist of domain names, IP addresses, or CIDR ranges.

Note

For the following example, build the Docker image with python -m ipybox build. The firewall can only be initialized on containers running as a non-root user, i.e. containers of ipybox images that were built without the -r or --root flag. An attempt to initialize the firewall on a container running as root will raise an error.

from ipybox import ExecutionClient, ExecutionContainer


CODE = """
import socket

try:
    with socket.create_connection(("{host}", 80), timeout=1) as s:
        print("connected")
except Exception:
    print("timeout")
"""

async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox") as container:
    async with ExecutionClient(port=container.executor_port) as client:
        result = await client.execute(CODE.format(host="example.com"))  # (1)!
        assert result.text == "connected"

        await container.init_firewall(["gradion.ai"])  # (2)!

        result = await client.execute(CODE.format(host="gradion.ai"))  # (3)!
        assert result.text == "connected"

        result = await client.execute(CODE.format(host="example.com"))  # (4)!
        assert result.text == "timeout"
  1. Internet access is not restricted before firewall initialization
  2. Restrict internet access to domain gradion.ai
  3. Allowed by firewall
  4. Blocked by firewall. This may take longer than the configured 1-second timeout because example.com resolves to multiple IP addresses, all of which are tried before failing.

Install packages at runtime

Python packages can be installed at runtime by executing !pip install <package>:

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        execution = await client.submit("!pip install einops")  # (1)!
        async for chunk in execution.stream():  # (2)!
            print(chunk, end="", flush=True)

        result = await client.execute("""
            import einops
            print(einops.__version__)
        """)  # (3)!
        print(f"Output: {result.text}")  # (4)!
  1. Install the einops package using pip
  2. Stream the installation progress. Something like
    Collecting einops
    Downloading einops-0.8.0-py3-none-any.whl (10.0 kB)
    Installing collected packages: einops
    Successfully installed einops-0.8.0
    
  3. Import and use the installed package
  4. Prints Output: 0.8.0

You can also install and use a package in a single execution step, as shown in the next section.

Generate plots

Plots generated with matplotlib and other visualization libraries are returned as PIL images. Images are not part of the output stream; they can be obtained from the images list of the result object.

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        execution = await client.submit("""
            !pip install matplotlib

            import matplotlib.pyplot as plt
            import numpy as np

            x = np.linspace(0, 10, 100)
            plt.figure(figsize=(8, 6))
            plt.plot(x, np.sin(x))
            plt.title('Sine Wave')
            plt.show()

            print("Plot generation complete!")
            """)  # (1)!

        async for chunk in execution.stream():  # (2)!
            print(chunk, end="", flush=True)

        result = await execution.result()
        result.images[0].save("sine.png")  # (3)!
  1. Install matplotlib and generate a plot
  2. Stream output text (installation progress and print statement)
  3. Get attached image from execution result and save it as sine.png

File operations

Files and directories can be transferred between the host and container using the ResourceClient.

input_dir = Path("examples", "data")

async with ExecutionContainer() as container:
    async with ResourceClient(port=container.resource_port) as res_client:
        async with ExecutionClient(port=container.executor_port) as exec_client:
            await res_client.upload_file("data/example.txt", input_dir / "example.txt")  # (1)!
            await res_client.upload_directory("data/subdir", input_dir / "subdir")  # (2)!

            await exec_client.execute("""
                import os
                import shutil
                os.makedirs('output', exist_ok=True)
                shutil.copy('data/example.txt', 'output/example.txt')
                shutil.copytree('data/subdir', 'output/subdir')
            """)  # (3)!

            output_dir = Path("examples", "output")
            output_dir.mkdir(exist_ok=True, parents=True)
            await res_client.download_file("output/example.txt", output_dir / "example.txt")  # (4)!
            await res_client.download_directory("output/subdir", output_dir / "subdir")  # (5)!
            await res_client.delete_file("data/example.txt")  # (6)!
  1. Upload a single file to the container
  2. Upload an entire directory to the container
  3. Copy files within the container
  4. Download a file from the container
  5. Download a directory from the container
  6. Delete a file in the container

Environment variables

Environment variables for the container can be passed to the ExecutionContainer constructor.

env = {"API_KEY": "secret-key-123", "DEBUG": "1"}

async with ExecutionContainer(env=env) as container:  # (1)!
    async with ExecutionClient(port=container.executor_port) as client:
        result = await client.execute("""
            import os

            api_key = os.environ['API_KEY']
            print(f"Using API key: {api_key}")

            debug = bool(int(os.environ.get('DEBUG', '0')))
            if debug:
                print("Debug mode enabled")
        """)  # (2)!
        print(result.text)  # (3)!
  1. Set environment variables for the container
  2. Access environment variables in executed code
  3. Prints
    Using API key: secret-key-123
    Debug mode enabled
    

Remote DOCKER_HOST

If you want to run a code execution container on a remote host but manage the container locally with ExecutionContainer, set the DOCKER_HOST environment variable to that host. The following example assumes that the remote Docker daemon has been configured to accept TCP connections on port 2375.

HOST = "192.168.94.50"  # (1)!
os.environ["DOCKER_HOST"] = f"tcp://{HOST}:2375"  # (2)!

async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox") as container:  # (3)!
    async with ExecutionClient(host=HOST, port=container.executor_port) as client:  # (4)!
        result = await client.execute("17 ** 0.13")
        print(f"Output: {result.text}")
  1. Example IP address of the remote Docker host
  2. Remote Docker daemon is accessible via tcp at port 2375
  3. Creates a container on the remote host
  4. Create an IPython kernel in the remote container

MCP integration

ipybox exposes an MCP server interface that makes secure Python code execution available to MCP clients. ipybox can also invoke other MCP servers from within the container via automatically generated Python client code.