# Usage
Code examples in the following sections are taken from the project's `examples` directory. They use a default `gradion-ai/ipybox` Docker image that you need to build yourself first.
!!! tip

    Alternatively, you can also use one of the prebuilt Docker images, as done in the quickstart.
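    For example, with the prebuilt `ghcr.io/gradion-ai/ipybox:minimal` image (the same tag used in the remote `DOCKER_HOST` example below):

    ```python
    async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox:minimal") as container:
        ...
    ```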
## Basic usage
Use the `ExecutionContainer` context manager to create a container from an ipybox Docker image. The container is created on entering the context manager and removed on exit.

Use the `ExecutionClient` context manager to manage the lifecycle of an IPython kernel within the container. A kernel is created on entering the context manager and removed on exit.

Call `execute` on an `ExecutionClient` instance to execute code in its kernel.
```python
from ipybox import ExecutionClient, ExecutionContainer

async with ExecutionContainer(tag="gradion-ai/ipybox") as container:  # (1)!
    async with ExecutionClient(port=container.executor_port) as client:  # (2)!
        result = await client.execute("print('Hello, world!')")  # (3)!
        print(f"Output: {result.text}")  # (4)!
```
1. Create and start a code execution container
2. Create an IPython kernel in the container
3. Execute Python code and await the result
4. Prints `Output: Hello, world!`
The `execute` method accepts an optional `timeout` argument (defaults to `120` seconds). On timeout, the execution is terminated by interrupting the kernel, and a `TimeoutError` is raised.
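For example, a minimal sketch of handling a timeout (the sleeping code and the 5-second limit are only illustrative):

```python
async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        try:
            # runs longer than the 5 second limit
            await client.execute("import time; time.sleep(10)", timeout=5)
        except TimeoutError:
            print("Execution timed out")
```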
!!! info

    Instead of using the `ExecutionContainer` context manager for lifecycle management, you can also manually `run` and `kill` a container.

    ```python
    container = ExecutionContainer()  # (1)!
    await container.run()  # (2)!
    # do some work ...
    await container.kill()  # (3)!
    ```

    1. Create an `ExecutionContainer` instance
    2. Run the container (detached)
    3. Kill the container
## Stateful code execution
Code executions with the same `ExecutionClient` instance are stateful: definitions and variables from previous executions can be used in later executions. Code executions with different `ExecutionClient` instances run in different kernels and do not share in-memory state.
```python
from ipybox import ExecutionClient, ExecutionContainer, ExecutionError

async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client_1:  # (1)!
        result = await client_1.execute("x = 1")  # (2)!
        assert result.text is None
        result = await client_1.execute("print(x)")  # (3)!
        assert result.text == "1"

    async with ExecutionClient(port=container.executor_port) as client_2:  # (4)!
        try:
            await client_2.execute("print(x)")  # (5)!
        except ExecutionError as e:
            assert e.args[0] == "NameError: name 'x' is not defined"
```
1. First client instance
2. Execute code that defines variable `x`
3. Use variable `x` defined in the previous execution
4. Second client instance
5. Variable `x` is not defined in `client_2`'s kernel
!!! note

    While kernels in the same container don't share in-memory state, they can still exchange data by reading and writing files on the shared container filesystem, as sketched below. For full isolation of code executions, you need to run them in different containers.
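    A minimal sketch of file-based data exchange between two kernels (the file path `/tmp/shared.txt` is only illustrative):

    ```python
    async with ExecutionContainer() as container:
        async with ExecutionClient(port=container.executor_port) as client_1:
            # write a file from the first kernel
            await client_1.execute("with open('/tmp/shared.txt', 'w') as f: f.write('42')")

        async with ExecutionClient(port=container.executor_port) as client_2:
            # read the file from a second kernel in the same container
            result = await client_2.execute("print(open('/tmp/shared.txt').read())")
            assert result.text == "42"
    ```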
## Execution output streaming
Instead of waiting for code execution to complete, output can also be streamed as it is generated:
```python
async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        code = """
import time
for i in range(5):
    print(f"Processing step {i}")
    time.sleep(1)
"""  # (1)!
        execution = await client.submit(code)  # (2)!

        print("Streaming output:")
        async for chunk in execution.stream():  # (3)!
            print(f"Received output: {chunk.strip()}")  # (4)!

        result = await execution.result()  # (5)!
        print("\nAggregated output:")
        print(result.text)  # (6)!
```
1. Code that produces gradual output every second
2. Submit the code for execution
3. Stream the output
4. Prints one line per second:

    ```
    Received output: Processing step 0
    Received output: Processing step 1
    Received output: Processing step 2
    Received output: Processing step 3
    Received output: Processing step 4
    ```

5. Get the aggregated output (returns immediately)
6. Prints the aggregated output:

    ```
    Processing step 0
    Processing step 1
    Processing step 2
    Processing step 3
    Processing step 4
    ```
## Install packages at runtime
Python packages can be installed at runtime by executing `!pip install <package>`:
```python
async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        execution = await client.submit("!pip install einops")  # (1)!
        async for chunk in execution.stream():  # (2)!
            print(chunk, end="", flush=True)

        result = await client.execute("""
import einops
print(einops.__version__)
""")  # (3)!
        print(f"Output: {result.text}")  # (4)!
```
1. Install the `einops` package using pip
2. Stream the installation progress; something like:

    ```
    Collecting einops
    ...
    Successfully installed einops-0.8.0
    ```

3. Import and use the installed package
4. Prints `Output: 0.8.0`
You can also install and use a package in a single execution step, as shown in the next section.
## Generate plots
Plots generated with `matplotlib` and other visualization libraries are returned as PIL images. Images are not part of the output stream; they can be obtained from the result object's `images` list.
```python
async with ExecutionContainer() as container:
    async with ExecutionClient(port=container.executor_port) as client:
        execution = await client.submit("""
!pip install matplotlib

import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 100)
plt.figure(figsize=(8, 6))
plt.plot(x, np.sin(x))
plt.title('Sine Wave')
plt.show()

print("Plot generation complete!")
""")  # (1)!
        async for chunk in execution.stream():  # (2)!
            print(chunk, end="", flush=True)

        result = await execution.result()
        result.images[0].save("sine.png")  # (3)!
```
1. Install `matplotlib` and generate a plot
2. Stream output text (installation progress and `print` statement)
3. Get the attached image from the execution result and save it as `sine.png`
## Environment variables
Environment variables for the container can be passed to the `ExecutionContainer` constructor.
```python
env = {"API_KEY": "secret-key-123", "DEBUG": "1"}

async with ExecutionContainer(env=env) as container:  # (1)!
    async with ExecutionClient(port=container.executor_port) as client:
        result = await client.execute("""
import os

api_key = os.environ['API_KEY']
print(f"Using API key: {api_key}")

debug = bool(int(os.environ.get('DEBUG', '0')))
if debug:
    print("Debug mode enabled")
""")  # (2)!
        print(result.text)  # (3)!
```
1. Set environment variables for the container
2. Access environment variables in executed code
3. Prints:

    ```
    Using API key: secret-key-123
    Debug mode enabled
    ```
## Remote DOCKER_HOST
If you want to run a code execution container on a remote host but manage the container with `ExecutionContainer` locally, set the `DOCKER_HOST` environment variable to that host. The following example assumes that the remote Docker daemon has been configured to accept `tcp` connections at port `2375`.
HOST = "192.168.94.50" # (1)!
os.environ["DOCKER_HOST"] = f"tcp://{HOST}:2375" # (2)!
async with ExecutionContainer(tag="ghcr.io/gradion-ai/ipybox:minimal") as container: # (3)!
async with ExecutionClient(host=HOST, port=container.executor_port) as client: # (4)!
result = await client.execute("17 ** 0.13")
print(f"Output: {result.text}")
1. Example IP address of the remote Docker host
2. The remote Docker daemon is accessible via `tcp` at port `2375`
3. Creates a container on the remote host
4. Create an IPython kernel in the remote container
## MCP integration
`ipybox` supports the invocation of MCP servers in containers via generated MCP client code. An application first calls `generate_mcp_sources` to generate a Python function for each tool provided by an MCP server, using the tool's input schema. This needs to be done only once per MCP server. Generated functions are then available on the container's Python path.
The example below generates a `fetch` function from the input schema of the `fetch` tool provided by the Fetch MCP server.
```python
from ipybox import ExecutionClient, ExecutionContainer, ResourceClient

server_params = {  # (1)!
    "command": "uvx",
    "args": ["mcp-server-fetch"],
}

async with ExecutionContainer(tag="gradion-ai/ipybox") as container:
    async with ResourceClient(port=container.resource_port) as client:
        tool_names = await client.generate_mcp_sources(  # (2)!
            relpath="mcpgen",
            server_name="fetchurl",
            server_params=server_params,
        )
        assert tool_names == ["fetch"]  # (3)!

    async with ExecutionClient(port=container.executor_port) as client:
        result = await client.execute("""
from mcpgen.fetchurl.fetch import Params, fetch
print(fetch(Params(url="https://www.gradion.ai"))[:375])
""")  # (4)!
        print(result.text)  # (5)!
```
1. Configuration of the Fetch MCP server
2. Generate MCP client code from an MCP server config. One MCP client function is generated per MCP tool.
3. List of tool names provided by the MCP server; a single `fetch` tool in this example
4. Execute code that imports and calls the generated MCP client function
5. Prints the first 375 characters of the fetched page content
Calling a generated MCP client function executes the corresponding MCP tool. Tools of `stdio`-based MCP servers are always executed inside the container, while `sse`-based MCP servers are expected to run elsewhere. Generated MCP client code can be downloaded from the container with `get_mcp_sources` (not shown).
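As a rough sketch, downloading the generated client code might look as follows; the parameters of `get_mcp_sources` are an assumption here, mirroring `generate_mcp_sources`, and may differ from the actual API:

```python
async with ResourceClient(port=container.resource_port) as client:
    # assumed parameters, mirroring generate_mcp_sources
    sources = await client.get_mcp_sources(relpath="mcpgen", server_name="fetchurl")
```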
### Application example

`freeact` agents use the `ipybox` MCP integration for calling MCP tools in their code actions.