# Gemini

## GeminiModelName `module-attribute`

```python
GeminiModelName = Literal['gemini-2.0-flash-exp']
```

## Gemini

```python
Gemini(model_name: GeminiModelName = 'gemini-2.0-flash-exp', skill_sources: str | None = None, temperature: float = 0.0, max_tokens: int = 4096)
```

Bases: `CodeActModel`

A `CodeActModel` implementation based on Google's Gemini 2 chat API.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_name` | `GeminiModelName` | The specific Gemini 2 model to use | `'gemini-2.0-flash-exp'` |
| `skill_sources` | `str \| None` | Skill module sources to include in the system instruction | `None` |
| `temperature` | `float` | Controls randomness in the model's output (0.0 = deterministic) | `0.0` |
| `max_tokens` | `int` | Maximum number of tokens in the model's response | `4096` |
Source code in `freeact/model/gemini/model/chat.py`:

```python
def __init__(
    self,
    model_name: GeminiModelName = "gemini-2.0-flash-exp",
    skill_sources: str | None = None,
    temperature: float = 0.0,
    max_tokens: int = 4096,
):
    self._model_name = model_name
    self._client = genai.Client(http_options={"api_version": "v1alpha"})
    self._chat = self._client.aio.chats.create(
        model=model_name,
        config=GenerateContentConfig(
            temperature=temperature,
            max_output_tokens=max_tokens,
            response_modalities=["TEXT"],
            system_instruction=SYSTEM_TEMPLATE.format(python_modules=skill_sources or ""),
        ),
    )
```

## GeminiLive `async`

```python
GeminiLive(model_name: GeminiModelName = 'gemini-2.0-flash-exp', skill_sources: str | None = None, temperature: float = 0.0, max_tokens: int = 4096)
```

Context manager for a `CodeActModel` implementation based on Google's Gemini 2 live API.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model_name` | `GeminiModelName` | The specific Gemini 2 model to use | `'gemini-2.0-flash-exp'` |
| `skill_sources` | `str \| None` | Skill module sources to include in the system instruction | `None` |
| `temperature` | `float` | Controls randomness in the model's output (0.0 = deterministic) | `0.0` |
| `max_tokens` | `int` | Maximum number of tokens in the model's response | `4096` |
Example:

```python
async with GeminiLive(model_name="gemini-2.0-flash-exp", skill_sources=skill_sources) as model:
    # use model with active session to Gemini 2 live API
    agent = CodeActAgent(model=model, ...)
```
Source code in `freeact/model/gemini/model/live.py`:

````python
@asynccontextmanager
async def GeminiLive(
    model_name: GeminiModelName = "gemini-2.0-flash-exp",
    skill_sources: str | None = None,
    temperature: float = 0.0,
    max_tokens: int = 4096,
):
    """
    Context manager for a `CodeActModel` implementation based on Google's Gemini 2 live API.

    Args:
        model_name: The specific Gemini 2 model to use
        skill_sources: Skill module sources to include in the system instruction
        temperature: Controls randomness in the model's output (0.0 = deterministic)
        max_tokens: Maximum number of tokens in the model's response

    Example:
        ```python
        async with GeminiLive(model_name="gemini-2.0-flash-exp", skill_sources=skill_sources) as model:
            # use model with active session to Gemini 2 live API
            agent = CodeActAgent(model=model, ...)
        ```
    """

    client = genai.Client(http_options={"api_version": "v1alpha"})
    config = {
        "tools": [],
        "generation_config": {
            "temperature": temperature,
            "max_output_tokens": max_tokens,
            "response_modalities": ["TEXT"],
            "system_instruction": SYSTEM_TEMPLATE.format(
                python_modules=skill_sources or "",
            ),
        },
    }

    async with client.aio.live.connect(model=model_name, config=config) as session:
        yield _GeminiLive(session)
````