fenic.api.session.config

Session configuration classes for Fenic.

Classes:

AnthropicModelConfig

Bases: BaseModel

Configuration for Anthropic models.

This class defines the configuration settings for Anthropic language models, including model selection and separate rate limiting parameters for input and output tokens.

Attributes:

  • model_name (ANTHROPIC_AVAILABLE_LANGUAGE_MODELS) –

    The name of the Anthropic model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • input_tpm (int) –

    Input tokens per minute limit; must be greater than 0.

  • output_tpm (int) –

    Output tokens per minute limit; must be greater than 0.

Examples:

Configuring an Anthropic model with separate input/output rate limits:

config = AnthropicModelConfig(
    model_name="claude-3-5-haiku-latest",
    rpm=100,
    input_tpm=100,
    output_tpm=100
)

CloudConfig

Bases: BaseModel

Configuration for cloud-based execution.

This class defines settings for running operations in a cloud environment, allowing for scalable and distributed processing of language model operations.

Attributes:

  • size (Optional[CloudExecutorSize]) –

    Size of the cloud executor instance. If None, the default size is used.
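
Example:

A minimal sketch of a cloud configuration. It uses the CloudExecutorSize enum documented below; omitting size falls back to the default:

```python
# Pick an explicit executor size; leaving `size` unset uses the default.
config = CloudConfig(size=CloudExecutorSize.MEDIUM)
```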

CloudExecutorSize

Bases: str, Enum

Enum defining available cloud executor sizes.

This enum represents the different size options available for cloud-based execution environments.

Attributes:

  • SMALL

    Small instance size.

  • MEDIUM

    Medium instance size.

  • LARGE

    Large instance size.

  • XLARGE

    Extra large instance size.

GoogleGLAModelConfig

Bases: BaseModel

Configuration for Google Generative Language (GLA) models.

This class defines the configuration settings for models available in Google AI Studio, including model selection and rate limiting parameters. These models are accessed via the GEMINI_API_KEY environment variable.
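
Example:

A hypothetical configuration sketch. The attributes for this class are not listed here, so the model name and the rpm/tpm parameters below are assumptions mirroring the other model configs in this module; check the class signature before relying on them:

```python
# Sketch only: "gemini-2.0-flash" and the rpm/tpm parameters are assumptions.
config = GoogleGLAModelConfig(
    model_name="gemini-2.0-flash",
    rpm=100,
    tpm=1000,
)
```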

GoogleVertexModelConfig

Bases: BaseModel

Configuration for Google Vertex models.

This class defines the configuration settings for models available in Google Vertex AI, including model selection and rate limiting parameters. To use these models, you must authenticate with a Google Cloud service account or with the gcloud CLI in your local environment.
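
Example:

A hypothetical configuration sketch. As with the GLA config, the attributes are not listed here, so the model name and rate-limit parameters below are assumptions patterned on the other model configs:

```python
# Sketch only: model name and rpm/tpm parameters are assumptions.
config = GoogleVertexModelConfig(
    model_name="gemini-2.0-flash",
    rpm=100,
    tpm=1000,
)
```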

OpenAIModelConfig

Bases: BaseModel

Configuration for OpenAI models.

This class defines the configuration settings for OpenAI language and embedding models, including model selection and rate limiting parameters.

Attributes:

  • model_name (Union[OPENAI_AVAILABLE_LANGUAGE_MODELS, OPENAI_AVAILABLE_EMBEDDING_MODELS]) –

    The name of the OpenAI model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

Examples:

Configuring an OpenAI Language model with rate limits:

config = OpenAIModelConfig(model_name="gpt-4.1-nano", rpm=100, tpm=100)

Configuring an OpenAI Embedding model with rate limits:

config = OpenAIModelConfig(model_name="text-embedding-3-small", rpm=100, tpm=100)

SemanticConfig

Bases: BaseModel

Configuration for semantic language and embedding models.

This class defines the configuration for both language models and optional embedding models used in semantic operations. It ensures that all configured models are valid and supported by their respective providers.

Attributes:

  • language_models (dict[str, ModelConfig]) –

    Mapping of model aliases to language model configurations.

  • default_language_model (Optional[str]) –

    The alias of the default language model to use for semantic operations. Not required if only one language model is configured.

  • embedding_models (Optional[dict[str, ModelConfig]]) –

    Optional mapping of model aliases to embedding model configurations.

  • default_embedding_model (Optional[str]) –

    The alias of the default embedding model to use for semantic operations.

Note

The embedding model is optional and only required for operations that need semantic search or embedding capabilities.
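
Example:

Putting the attributes together, a two-model configuration might look like this (the aliases are arbitrary keys chosen by the caller; the model names are taken from the examples elsewhere in this module):

```python
config = SemanticConfig(
    language_models={
        "fast": OpenAIModelConfig(model_name="gpt-4.1-nano", rpm=100, tpm=100),
        "careful": AnthropicModelConfig(
            model_name="claude-3-5-haiku-latest",
            rpm=100,
            input_tpm=100,
            output_tpm=100,
        ),
    },
    # Required here because more than one language model is configured.
    default_language_model="fast",
    embedding_models={
        "embed": OpenAIModelConfig(
            model_name="text-embedding-3-small", rpm=100, tpm=100
        )
    },
    # With a single embedding model, the default is inferred automatically.
)
```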

Methods:

model_post_init

model_post_init(__context) -> None

Post initialization hook to set defaults.

This hook runs after the model is initialized and validated. It sets the default language and embedding models if they are not set and there is only one model available.

Source code in src/fenic/api/session/config.py
def model_post_init(self, __context) -> None:
    """Post initialization hook to set defaults.

    This hook runs after the model is initialized and validated.
    It sets the default language and embedding models if they are not set
    and there is only one model available.
    """
    # Set default language model if not set and only one model exists
    if self.default_language_model is None and len(self.language_models) == 1:
        self.default_language_model = list(self.language_models.keys())[0]
    # Set default embedding model if not set and only one model exists
    if self.embedding_models is not None and self.default_embedding_model is None and len(self.embedding_models) == 1:
        self.default_embedding_model = list(self.embedding_models.keys())[0]

validate_models

validate_models() -> SemanticConfig

Validates that the selected models are supported by the system.

This validator checks that both the language model and embedding model (if provided) are valid and supported by their respective providers.

Returns:

  • SemanticConfig – The validated SemanticConfig instance.

Raises:

  • ConfigurationError – If any of the models are not supported.

Source code in src/fenic/api/session/config.py
@model_validator(mode="after")
def validate_models(self) -> SemanticConfig:
    """Validates that the selected models are supported by the system.

    This validator checks that both the language model and embedding model (if provided)
    are valid and supported by their respective providers.

    Returns:
        The validated SemanticConfig instance.

    Raises:
        ConfigurationError: If any of the models are not supported.
    """
    if len(self.language_models) == 0:
        raise ConfigurationError("You must specify at least one language model configuration.")
    available_language_model_aliases = list(self.language_models.keys())
    if self.default_language_model is None and len(self.language_models) > 1:
        raise ConfigurationError(f"default_language_model is not set, and multiple language models are configured. Please specify one of: {available_language_model_aliases} as a default_language_model.")

    if self.default_language_model is not None and self.default_language_model not in self.language_models:
        raise ConfigurationError(f"default_language_model {self.default_language_model} is not in configured map of language models. Available models: {available_language_model_aliases} .")

    for model_alias, language_model in self.language_models.items():
        if isinstance(language_model, OpenAIModelConfig):
            language_model_provider = ModelProvider.OPENAI
            language_model_name = language_model.model_name
        elif isinstance(language_model, AnthropicModelConfig):
            language_model_provider = ModelProvider.ANTHROPIC
            language_model_name = language_model.model_name
        elif isinstance(language_model, GoogleGLAModelConfig):
            language_model_provider = ModelProvider.GOOGLE_GLA
            language_model_name = language_model.model_name
        elif isinstance(language_model, GoogleVertexModelConfig):
            language_model_provider = ModelProvider.GOOGLE_VERTEX
            language_model_name = language_model.model_name
        else:
            raise ConfigurationError(
                f"Invalid language model: {model_alias}: {language_model} unsupported model type.")

        completion_model = model_catalog.get_completion_model_parameters(language_model_provider,
                                                                         language_model_name)
        if completion_model is None:
            raise ConfigurationError(
                model_catalog.generate_unsupported_completion_model_error_message(
                    language_model_provider,
                    language_model_name
                )
            )
    if self.embedding_models is not None:
        if self.default_embedding_model is None and len(self.embedding_models) > 1:
            raise ConfigurationError("embedding_models is set but default_embedding_model is missing (ambiguous).")

        if self.default_embedding_model is not None and self.default_embedding_model not in self.embedding_models:
            raise ConfigurationError(
                f"default_embedding_model {self.default_embedding_model} is not in embedding_models")
        for model_alias, embedding_model in self.embedding_models.items():
            if isinstance(embedding_model, OpenAIModelConfig):
                embedding_model_provider = ModelProvider.OPENAI
                embedding_model_name = embedding_model.model_name
            else:
                raise ConfigurationError(
                    f"Invalid embedding model: {model_alias}: {embedding_model} unsupported model type")
            embedding_model_parameters = model_catalog.get_embedding_model_parameters(
                embedding_model_provider, embedding_model_name)
            if embedding_model_parameters is None:
                raise ConfigurationError(model_catalog.generate_unsupported_embedding_model_error_message(
                    embedding_model_provider,
                    embedding_model_name
                ))

    return self

SessionConfig

Bases: BaseModel

Configuration for a user session.

This class defines the complete configuration for a user session, including application settings, model configurations, and optional cloud settings. It serves as the central configuration object for all language model operations.

Attributes:

  • app_name (str) –

    Name of the application using this session. Defaults to "default_app".

  • db_path (Optional[Path]) –

    Optional path to a local database file for persistent storage.

  • semantic (SemanticConfig) –

    Configuration for semantic models (required).

  • cloud (Optional[CloudConfig]) –

    Optional configuration for cloud execution.

Note

The semantic configuration is required as it defines the language models that will be used for processing. The cloud configuration is optional and only needed for distributed processing.
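
Example:

A full session configuration combining the pieces above. The app name and database path are illustrative, and Path is assumed to come from the standard library's pathlib:

```python
from pathlib import Path

config = SessionConfig(
    app_name="my_app",
    db_path=Path("./fenic.db"),  # optional persistent storage
    semantic=SemanticConfig(
        language_models={
            "default": OpenAIModelConfig(
                model_name="gpt-4.1-nano", rpm=100, tpm=100
            )
        },
        # Single language model, so default_language_model is inferred.
    ),
    cloud=CloudConfig(size=CloudExecutorSize.SMALL),  # optional
)
```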