Gemini
```python
Gemini(model_name: str = 'gemini/gemini-2.0-flash', skill_sources: str | None = None, system_template: str | None = None, execution_output_template: str | None = None, execution_error_template: str | None = None, api_key: str | None = None, **kwargs)
```
Bases: `LiteLLM`
Code action model class for Gemini 2 models.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | The LiteLLM-specific name of the model. | `'gemini/gemini-2.0-flash'` |
| `skill_sources` | `str \| None` | Skill modules source code to be included into the system instruction. | `None` |
| `system_template` | `str \| None` | Prompt template for the system instruction that guides the model to generate code actions. Must define a … | `None` |
| `execution_output_template` | `str \| None` | A template for formatting successful code execution output. Must define an … | `None` |
| `execution_error_template` | `str \| None` | A template for formatting code execution errors. Must define an … | `None` |
| `api_key` | `str \| None` | Provider-specific API key. If not provided, reads from the `GEMINI_API_KEY` environment variable. | `None` |
| `**kwargs` | | | `{}` |
Source code in freeact/model/gemini/model/chat.py
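A minimal construction sketch; the import path is an assumption based on the source file noted above, and passing `api_key` explicitly is optional since the class falls back to the environment:

```python
import os

# Import path assumed from the source file noted above.
from freeact.model.gemini.model.chat import Gemini

# Construct the code action model with its documented parameters;
# omitted templates fall back to the built-in defaults.
model = Gemini(
    model_name="gemini/gemini-2.0-flash",
    skill_sources=None,  # optionally: source code of skill modules to expose to the model
    api_key=os.environ.get("GEMINI_API_KEY"),  # may be omitted; read from the environment by default
)
```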
extract_code
```python
extract_code(response: LiteLLMResponse)
```
Extracts all Python code blocks from `response.text` and joins them by empty lines.
Source code in freeact/model/gemini/model/chat.py
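A behavioral sketch of `extract_code`, assuming the method only reads the response's `text` attribute; the `SimpleNamespace` stand-in below is hypothetical, not a real `LiteLLMResponse`:

````python
from types import SimpleNamespace

# Hypothetical stand-in for a LiteLLMResponse; only `.text` is read here.
response = SimpleNamespace(text=(
    "First, define x:\n"
    "```python\n"
    "x = 1\n"
    "```\n"
    "Then print it:\n"
    "```python\n"
    "print(x)\n"
    "```\n"
))

code = model.extract_code(response)  # `model` is a Gemini instance as above
# Expected result: the two blocks joined by an empty line:
# x = 1
#
# print(x)
````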
GeminiLive
```python
async GeminiLive(model_name: str = 'gemini-2.0-flash', skill_sources: str | None = None, temperature: float = 0.0, max_tokens: int = 4096, **kwargs)
```
Context manager for a `CodeActModel` implementation based on Google's Gemini 2 live API.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | The specific Gemini 2 model to use. | `'gemini-2.0-flash'` |
| `skill_sources` | `str \| None` | Skill module sources to include in the system instruction. | `None` |
| `temperature` | `float` | Controls randomness in the model's output (0.0 = deterministic). | `0.0` |
| `max_tokens` | `int` | Maximum number of tokens in the model's response. | `4096` |
| `**kwargs` | | Additional keyword arguments to pass to the Google Gen AI client. | `{}` |
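A usage sketch, assuming `GeminiLive` is importable from the module named above and that entering the async context yields the live model instance:

```python
import asyncio

# Import path assumed; GeminiLive is documented alongside Gemini.
from freeact.model.gemini.model.chat import GeminiLive


async def main():
    # Entering the context opens a Gemini 2 live API session;
    # leaving it closes the underlying connection.
    async with GeminiLive(model_name="gemini-2.0-flash", temperature=0.0, max_tokens=4096) as model:
        ...  # drive a code action conversation with `model` here


asyncio.run(main())
```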