fenic.api.session.config

Session configuration classes for Fenic.

Classes:

  • AnthropicLanguageModel

    Configuration for Anthropic language models.

  • CloudConfig

    Configuration for cloud-based execution.

  • CloudExecutorSize

    Enum defining available cloud executor sizes.

  • CohereEmbeddingModel

    Configuration for Cohere embedding models.

  • GoogleDeveloperEmbeddingModel

    Configuration for Google Developer embedding models.

  • GoogleDeveloperLanguageModel

    Configuration for Gemini models accessible through Google Developer AI Studio.

  • GoogleVertexEmbeddingModel

    Configuration for Google Vertex AI embedding models.

  • GoogleVertexLanguageModel

    Configuration for Google Vertex AI models.

  • OpenAIEmbeddingModel

    Configuration for OpenAI embedding models.

  • OpenAILanguageModel

    Configuration for OpenAI language models.

  • SemanticConfig

    Configuration for semantic language and embedding models.

  • SessionConfig

    Configuration for a user session.

AnthropicLanguageModel

Bases: BaseModel

Configuration for Anthropic language models.

This class defines the configuration settings for Anthropic language models, including model selection and separate rate limiting parameters for input and output tokens.

Attributes:

  • model_name (AnthropicLanguageModelName) –

    The name of the Anthropic model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • input_tpm (int) –

    Input tokens per minute limit; must be greater than 0.

  • output_tpm (int) –

    Output tokens per minute limit; must be greater than 0.

  • profiles (Optional[dict[str, Profile]]) –

    Optional mapping of profile names to profile configurations.

  • default_profile (Optional[str]) –

    The name of the default profile to use if profiles are configured.

Example

Configuring an Anthropic model with separate input/output rate limits:

config = AnthropicLanguageModel(
    model_name="claude-3-5-haiku-latest",
    rpm=100,
    input_tpm=100,
    output_tpm=100
)

Configuring an Anthropic model with profiles:

config = SessionConfig(
    semantic=SemanticConfig(
        language_models={
            "claude": AnthropicLanguageModel(
                model_name="claude-opus-4-0",
                rpm=100,
                input_tpm=100,
                output_tpm=100,
                profiles={
                    "thinking_disabled": AnthropicLanguageModel.Profile(),
                    "fast": AnthropicLanguageModel.Profile(thinking_token_budget=1024),
                    "thorough": AnthropicLanguageModel.Profile(thinking_token_budget=4096)
                },
                default_profile="fast"
            )
        },
        default_language_model="claude"
    )
)

# Using the default "fast" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="claude")

# Using the "thorough" profile for the "claude" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="claude", profile="thorough"))

Classes:

  • Profile

    Anthropic-specific profile configurations.

Profile

Bases: BaseModel

Anthropic-specific profile configurations.

This class defines profile configurations for Anthropic models, allowing different thinking token budget settings to be applied to the same model.

Attributes:

  • thinking_token_budget (Optional[int]) –

    If configuring a model that supports reasoning, provide a default thinking budget in tokens. If not provided, thinking will be disabled for the profile. The minimum token budget supported by Anthropic is 1024 tokens.

Note

If thinking_token_budget is set, temperature cannot be customized -- any changes to temperature will be ignored.

Example

Configuring a profile with a thinking budget:

profile = AnthropicLanguageModel.Profile(thinking_token_budget=2048)

Configuring a profile with a large thinking budget:

profile = AnthropicLanguageModel.Profile(thinking_token_budget=8192)

CloudConfig

Bases: BaseModel

Configuration for cloud-based execution.

This class defines settings for running operations in a cloud environment, allowing for scalable and distributed processing of language model operations.

Attributes:

  • size (Optional[CloudExecutorSize]) –

    Size of the cloud executor instance. If None, the default size will be used.

Example

Configuring cloud execution with a specific size:

config = CloudConfig(size=CloudExecutorSize.MEDIUM)

Using default cloud configuration:

config = CloudConfig()

CloudExecutorSize

Bases: str, Enum

Enum defining available cloud executor sizes.

This enum represents the different size options available for cloud-based execution environments.

Attributes:

  • SMALL

    Small instance size.

  • MEDIUM

    Medium instance size.

  • LARGE

    Large instance size.

  • XLARGE

    Extra large instance size.

CohereEmbeddingModel

Bases: BaseModel

Configuration for Cohere embedding models.

This class defines the configuration settings for Cohere embedding models, including model selection and rate limiting parameters.

Attributes:

  • model_name (CohereEmbeddingModelName) –

    The name of the Cohere model to use.

  • rpm (int) –

    Requests per minute limit for the model.

  • tpm (int) –

    Tokens per minute limit for the model.

  • profiles (Optional[dict[str, Profile]]) –

    Optional dictionary of profile configurations.

  • default_profile (Optional[str]) –

    Default profile name to use if none specified.

Example

Configuring a Cohere embedding model with profiles:

cohere_config = CohereEmbeddingModel(
    model_name="embed-v4.0",
    rpm=100,
    tpm=50_000,
    profiles={
        "high_dim": CohereEmbeddingModel.Profile(
            embedding_dimensionality=1536,
            embedding_task_type="search_document"
        ),
        "classification": CohereEmbeddingModel.Profile(
            embedding_dimensionality=1024,
            embedding_task_type="classification"
        )
    },
    default_profile="high_dim"
)

Classes:

  • Profile

    Profile configurations for Cohere embedding models.

Profile

Bases: BaseModel

Profile configurations for Cohere embedding models.

This class defines profile configurations for Cohere embedding models, allowing different output dimensionality and task type settings to be applied to the same model.

Attributes:

  • output_dimensionality (Optional[int]) –

    The dimensionality of the embedding created by this model. If not provided, the model will use its default dimensionality.

  • input_type (CohereEmbeddingTaskType) –

    The type of input text (search_query, search_document, classification, or clustering).

Example

Configuring a profile with custom dimensionality:

profile = CohereEmbeddingModel.Profile(output_dimensionality=1536)

Configuring a profile with default settings:

profile = CohereEmbeddingModel.Profile()

GoogleDeveloperEmbeddingModel

Bases: BaseModel

Configuration for Google Developer embedding models.

This class defines the configuration settings for Google embedding models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.

Attributes:

  • model_name (GoogleDeveloperEmbeddingModelName) –

    The name of the Google Developer embedding model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

  • profiles (Optional[dict[str, Profile]]) –

    Optional mapping of profile names to profile configurations.

  • default_profile (Optional[str]) –

    The name of the default profile to use if profiles are configured.

Example

Configuring a Google Developer embedding model with rate limits:

config = GoogleDeveloperEmbeddingModel(
    model_name="gemini-embedding-001",
    rpm=100,
    tpm=1000
)

Configuring a Google Developer embedding model with profiles:

config = GoogleDeveloperEmbeddingModel(
    model_name="gemini-embedding-001",
    rpm=100,
    tpm=1000,
    profiles={
        "default": GoogleDeveloperEmbeddingModelConfig.Profile(),
        "high_dim": GoogleDeveloperEmbeddingModelConfig.Profile(output_dimensionality=3072)
    },
    default_profile="default"
)

Classes:

  • Profile

    Profile configurations for Google Developer embedding models.

Profile

Bases: BaseModel

Profile configurations for Google Developer embedding models.

This class defines profile configurations for Google embedding models, allowing different output dimensionality and task type settings to be applied to the same model.

Attributes:

  • output_dimensionality (Optional[int]) –

    The dimensionality of the embedding created by this model. If not provided, the model will use its default dimensionality.

  • task_type (GoogleEmbeddingTaskType) –

    The type of task for the embedding model.

Example

Configuring a profile with custom dimensionality:

profile = GoogleDeveloperEmbeddingModel.Profile(output_dimensionality=3072)

Configuring a profile with default settings:

profile = GoogleDeveloperEmbeddingModel.Profile()

GoogleDeveloperLanguageModel

Bases: BaseModel

Configuration for Gemini models accessible through Google Developer AI Studio.

This class defines the configuration settings for Google Gemini models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GOOGLE_API_KEY environment variable.

Attributes:

  • model_name (GoogleDeveloperLanguageModelName) –

    The name of the Google Developer model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

  • profiles (Optional[dict[str, Profile]]) –

    Optional mapping of profile names to profile configurations.

  • default_profile (Optional[str]) –

    The name of the default profile to use if profiles are configured.

Example

Configuring a Google Developer model with rate limits:

config = GoogleDeveloperLanguageModel(
    model_name="gemini-2.0-flash",
    rpm=100,
    tpm=1000
)

Configuring a reasoning Google Developer model with profiles:

config = GoogleDeveloperLanguageModel(
    model_name="gemini-2.5-flash",
    rpm=100,
    tpm=1000,
    profiles={
        "thinking_disabled": GoogleDeveloperLanguageModel.Profile(),
        "fast": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=1024),
        "thorough": GoogleDeveloperLanguageModel.Profile(thinking_token_budget=8192)
    },
    default_profile="fast"
)

Classes:

  • Profile

    Profile configurations for Google Developer models.

Profile

Bases: BaseModel

Profile configurations for Google Developer models.

This class defines profile configurations for Google Gemini models, allowing different thinking/reasoning settings to be applied to the same model.

Attributes:

  • thinking_token_budget (Optional[int]) –

    If configuring a reasoning model, provide a thinking budget in tokens. If not provided, or if set to 0, thinking will be disabled for the profile (not supported on gemini-2.5-pro). To have the model automatically determine a thinking budget based on the complexity of the prompt, set this to -1. Note that Gemini models take this as a suggestion -- and not a hard limit. It is very possible for the model to generate far more thinking tokens than the suggested budget, and for the model to generate reasoning tokens even if thinking is disabled.

Example

Configuring a profile with a fixed thinking budget:

profile = GoogleDeveloperLanguageModel.Profile(thinking_token_budget=4096)

Configuring a profile with automatic thinking budget:

profile = GoogleDeveloperLanguageModel.Profile(thinking_token_budget=-1)

Configuring a profile with thinking disabled:

profile = GoogleDeveloperLanguageModel.Profile(thinking_token_budget=0)

GoogleVertexEmbeddingModel

Bases: BaseModel

Configuration for Google Vertex AI embedding models.

This class defines the configuration settings for Google embedding models available in Google Vertex AI, including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.

Attributes:

  • model_name (GoogleVertexEmbeddingModelName) –

    The name of the Google Vertex embedding model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

  • profiles (Optional[dict[str, Profile]]) –

    Optional mapping of profile names to profile configurations.

  • default_profile (Optional[str]) –

    The name of the default profile to use if profiles are configured.

Example

Configuring a Google Vertex embedding model with rate limits:

embedding_model = GoogleVertexEmbeddingModel(
    model_name="gemini-embedding-001",
    rpm=100,
    tpm=1000
)

Configuring a Google Vertex embedding model with profiles:

embedding_model = GoogleVertexEmbeddingModel(
    model_name="gemini-embedding-001",
    rpm=100,
    tpm=1000,
    profiles={
        "default": GoogleVertexEmbeddingModel.Profile(),
        "high_dim": GoogleVertexEmbeddingModel.Profile(output_dimensionality=3072)
    },
    default_profile="default"
)

Classes:

  • Profile

    Profile configurations for Google Vertex embedding models.

Profile

Bases: BaseModel

Profile configurations for Google Vertex embedding models.

This class defines profile configurations for Google embedding models, allowing different output dimensionality and task type settings to be applied to the same model.

Attributes:

  • output_dimensionality (Optional[int]) –

    The dimensionality of the embedding created by this model. If not provided, the model will use its default dimensionality.

  • task_type (GoogleEmbeddingTaskType) –

    The type of task for the embedding model.

Example

Configuring a profile with custom dimensionality:

profile = GoogleVertexEmbeddingModel.Profile(output_dimensionality=3072)

Configuring a profile with default settings:

profile = GoogleVertexEmbeddingModel.Profile()

GoogleVertexLanguageModel

Bases: BaseModel

Configuration for Google Vertex AI models.

This class defines the configuration settings for Google Gemini models available in Google Vertex AI, including model selection and rate limiting parameters. These models are accessible using Google Cloud credentials.

Attributes:

  • model_name (GoogleVertexLanguageModelName) –

    The name of the Google Vertex model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

  • profiles (Optional[dict[str, Profile]]) –

    Optional mapping of profile names to profile configurations.

  • default_profile (Optional[str]) –

    The name of the default profile to use if profiles are configured.

Example

Configuring a Google Vertex model with rate limits:

config = GoogleVertexLanguageModel(
    model_name="gemini-2.0-flash",
    rpm=100,
    tpm=1000
)

Configuring a reasoning Google Vertex model with profiles:

config = GoogleVertexLanguageModel(
    model_name="gemini-2.5-flash",
    rpm=100,
    tpm=1000,
    profiles={
        "thinking_disabled": GoogleVertexLanguageModel.Profile(),
        "fast": GoogleVertexLanguageModel.Profile(thinking_token_budget=1024),
        "thorough": GoogleVertexLanguageModel.Profile(thinking_token_budget=8192)
    },
    default_profile="fast"
)

Classes:

  • Profile

    Profile configurations for Google Vertex models.

Profile

Bases: BaseModel

Profile configurations for Google Vertex models.

This class defines profile configurations for Google Gemini models, allowing different thinking/reasoning settings to be applied to the same underlying model.

Attributes:

  • thinking_token_budget (Optional[int]) –

    If configuring a reasoning model, provide a thinking budget in tokens. If not provided, or if set to 0, thinking will be disabled for the profile (not supported on gemini-2.5-pro). To have the model automatically determine a thinking budget based on the complexity of the prompt, set this to -1. Note that Gemini models take this as a suggestion -- and not a hard limit. It is very possible for the model to generate far more thinking tokens than the suggested budget, and for the model to generate reasoning tokens even if thinking is disabled.

Example

Configuring a profile with a fixed thinking budget:

profile = GoogleVertexLanguageModel.Profile(thinking_token_budget=4096)

Configuring a profile with automatic thinking budget:

profile = GoogleVertexLanguageModel.Profile(thinking_token_budget=-1)

Configuring a profile with thinking disabled:

profile = GoogleVertexLanguageModel.Profile(thinking_token_budget=0)

OpenAIEmbeddingModel

Bases: BaseModel

Configuration for OpenAI embedding models.

This class defines the configuration settings for OpenAI embedding models, including model selection and rate limiting parameters.

Attributes:

  • model_name (OpenAIEmbeddingModelName) –

    The name of the OpenAI embedding model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

Example

Configuring an OpenAI embedding model with rate limits:

config = OpenAIEmbeddingModel(
    model_name="text-embedding-3-small",
    rpm=100,
    tpm=100
)

OpenAILanguageModel

Bases: BaseModel

Configuration for OpenAI language models.

This class defines the configuration settings for OpenAI language models, including model selection and rate limiting parameters.

Attributes:

  • model_name (OpenAILanguageModelName) –

    The name of the OpenAI model to use.

  • rpm (int) –

    Requests per minute limit; must be greater than 0.

  • tpm (int) –

    Tokens per minute limit; must be greater than 0.

  • profiles (Optional[dict[str, Profile]]) –

    Optional mapping of profile names to profile configurations.

  • default_profile (Optional[str]) –

    The name of the default profile to use if profiles are configured.

Example

Configuring an OpenAI language model with rate limits:

config = OpenAILanguageModel(
    model_name="gpt-4.1-nano",
    rpm=100,
    tpm=100
)

Configuring an OpenAI model with profiles:

config = OpenAILanguageModel(
    model_name="o4-mini",
    rpm=100,
    tpm=100,
    profiles={
        "fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
        "thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
    },
    default_profile="fast"
)

Using a profile in a semantic operation:

config = SemanticConfig(
    language_models={
        "o4": OpenAILanguageModel(
            model_name="o4-mini",
            rpm=1_000,
            tpm=1_000_000,
            profiles={
                "fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
                "thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
            },
            default_profile="fast"
        )
    },
    default_language_model="o4"
)

# Will use the default "fast" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias="o4")

# Will use the "thorough" profile for the "o4" model
semantic.map(instruction="Construct a formal proof of the {hypothesis}.", model_alias=ModelAlias(name="o4", profile="thorough"))

Classes:

  • Profile

    OpenAI-specific profile configurations.

Profile

Bases: BaseModel

OpenAI-specific profile configurations.

This class defines profile configurations for OpenAI models, allowing a user to reference the same underlying model in semantic operations with different settings. For now, only the reasoning effort can be customized.

Attributes:

  • reasoning_effort (Optional[ReasoningEffort]) –

    If configuring a reasoning model, provide a reasoning effort. OpenAI has separate o-series reasoning models, for which thinking cannot be disabled. If an o-series model is specified, but no reasoning_effort is provided, the reasoning_effort will be set to low.

Note

When using an o-series reasoning model, the temperature cannot be customized -- any changes to temperature will be ignored.

Example

Configuring a profile with medium reasoning effort:

profile = OpenAILanguageModel.Profile(reasoning_effort="medium")

SemanticConfig

Bases: BaseModel

Configuration for semantic language and embedding models.

This class defines the configuration for both language models and optional embedding models used in semantic operations. It ensures that all configured models are valid and supported by their respective providers.

Attributes:

  • language_models (Optional[dict[str, LanguageModel]]) –

    Mapping of model aliases to language model configurations.

  • default_language_model (Optional[str]) –

    The alias of the default language model to use for semantic operations. Not required if only one language model is configured.

  • embedding_models (Optional[dict[str, EmbeddingModel]]) –

    Optional mapping of model aliases to embedding model configurations.

  • default_embedding_model (Optional[str]) –

    The alias of the default embedding model to use for semantic operations.

Note

The embedding model is optional and only required for operations that need semantic search or embedding capabilities.

Example

Configuring semantic models with a single language model:

config = SemanticConfig(
    language_models={
        "gpt4": OpenAILanguageModel(
            model_name="gpt-4.1-nano",
            rpm=100,
            tpm=100
        )
    }
)

Configuring semantic models with multiple language models and an embedding model:

config = SemanticConfig(
    language_models={
        "gpt4": OpenAILanguageModel(
            model_name="gpt-4.1-nano",
            rpm=100,
            tpm=100
        ),
        "claude": AnthropicLanguageModel(
            model_name="claude-3-5-haiku-latest",
            rpm=100,
            input_tpm=100,
            output_tpm=100
        ),
        "gemini": GoogleDeveloperLanguageModel(
            model_name="gemini-2.0-flash",
            rpm=100,
            tpm=1000
        )
    },
    default_language_model="gpt4",
    embedding_models={
        "openai_embeddings": OpenAIEmbeddingModel(
            model_name="text-embedding-3-small",
            rpm=100,
            tpm=100
        )
    },
    default_embedding_model="openai_embeddings"
)

Configuring models with profiles:

config = SemanticConfig(
    language_models={
        "gpt4": OpenAILanguageModel(
            model_name="gpt-4o-mini",
            rpm=100,
            tpm=100,
            profiles={
                "fast": OpenAILanguageModel.Profile(reasoning_effort="low"),
                "thorough": OpenAILanguageModel.Profile(reasoning_effort="high")
            },
            default_profile="fast"
        ),
        "claude": AnthropicLanguageModel(
            model_name="claude-3-5-haiku-latest",
            rpm=100,
            input_tpm=100,
            output_tpm=100,
            profiles={
                "fast": AnthropicLanguageModel.Profile(thinking_token_budget=1024),
                "thorough": AnthropicLanguageModel.Profile(thinking_token_budget=4096)
            },
            default_profile="fast"
        )
    },
    default_language_model="gpt4"
)
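
For illustration, a sketch of a semantic operation that relies on the configured embedding model (a minimal sketch; assumes a session built from this config, a DataFrame df with a "text" column, and fenic's semantic.embed and col functions):

# Hypothetical usage; the column and alias names are illustrative.
df = df.select(semantic.embed(col("text")).alias("text_embedding"))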

Methods:

model_post_init

model_post_init(__context) -> None

Post initialization hook to set defaults.

This hook runs after the model is initialized and validated. It sets the default language and embedding models if they are not set and there is only one model available.

Source code in src/fenic/api/session/config.py
def model_post_init(self, __context) -> None:
    """Post initialization hook to set defaults.

    This hook runs after the model is initialized and validated.
    It sets the default language and embedding models if they are not set
    and there is only one model available.
    """
    if self.language_models:
        # Set default language model if not set and only one model exists
        if self.default_language_model is None and len(self.language_models) == 1:
            self.default_language_model = list(self.language_models.keys())[0]

        # Set default profile for each model if not set and only one profile exists
        for model_config in self.language_models.values():
            if model_config.profiles is not None:
                profile_names = list(model_config.profiles.keys())
                if model_config.default_profile is None and len(profile_names) == 1:
                    model_config.default_profile = profile_names[0]

    # Set default embedding model if not set and only one model exists
    if self.embedding_models:
        if self.default_embedding_model is None and len(self.embedding_models) == 1:
            self.default_embedding_model = list(self.embedding_models.keys())[0]
        # Set default profile for each model if not set and only one preset exists
        for model_config in self.embedding_models.values():
            if hasattr(model_config, "profiles") and model_config.profiles is not None:
                preset_names = list(model_config.profiles.keys())
                if (
                    model_config.default_profile is None
                    and len(preset_names) == 1
                ):
                    model_config.default_profile = preset_names[0]
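
For example, with exactly one language model and exactly one profile configured, the hook above fills in both defaults (a minimal sketch):

config = SemanticConfig(
    language_models={
        "o4": OpenAILanguageModel(
            model_name="o4-mini",
            rpm=100,
            tpm=100,
            profiles={"fast": OpenAILanguageModel.Profile(reasoning_effort="low")}
        )
    }
)
assert config.default_language_model == "o4"
assert config.language_models["o4"].default_profile == "fast"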

validate_models

validate_models() -> SemanticConfig

Validates that the selected models are supported by the system.

This validator checks that both the language model and embedding model (if provided) are valid and supported by their respective providers.

Returns:

  • SemanticConfig –

    The validated SemanticConfig instance.

Raises:

  • ConfigurationError –

    If any of the models are not supported.

Source code in src/fenic/api/session/config.py
@model_validator(mode="after")
def validate_models(self) -> SemanticConfig:
    """Validates that the selected models are supported by the system.

    This validator checks that both the language model and embedding model (if provided)
    are valid and supported by their respective providers.

    Returns:
        The validated SemanticConfig instance.

    Raises:
        ConfigurationError: If any of the models are not supported.
    """
    # Skip validation if no models configured (embedding-only or empty config)
    if not self.language_models and not self.embedding_models:
        return self

    # Validate language models if provided
    if self.language_models:
        available_language_model_aliases = list(self.language_models.keys())
        if self.default_language_model is None and len(self.language_models) > 1:
            raise ConfigurationError(
                f"default_language_model is not set, and multiple language models are configured. Please specify one of: {available_language_model_aliases} as a default_language_model.")

        if self.default_language_model is not None and self.default_language_model not in self.language_models:
            raise ConfigurationError(
                f"default_language_model {self.default_language_model} is not in configured map of language models. Available models: {available_language_model_aliases} .")

        for model_alias, language_model in self.language_models.items():
            language_model_name = language_model.model_name
            language_model_provider = _get_model_provider_for_model_config(language_model)

            if language_model.profiles is not None:
                profile_names = list(language_model.profiles.keys())
                if language_model.default_profile is None and len(profile_names) > 0:
                    raise ConfigurationError(
                        f"default_profile is not set for model {model_alias}, but multiple profiles are configured. Please specify one of: {profile_names} as a default_profile.")
                if language_model.default_profile is not None and language_model.default_profile not in profile_names:
                    raise ConfigurationError(
                        f"default_profile {language_model.default_profile} is not in configured profiles for model {model_alias}. Available profiles: {profile_names}")

            completion_model = model_catalog.get_completion_model_parameters(language_model_provider,
                                                                             language_model_name)
            if completion_model is None:
                raise ConfigurationError(
                    model_catalog.generate_unsupported_completion_model_error_message(
                        language_model_provider,
                        language_model_name
                    )
                )
    if self.embedding_models is not None:
        available_embedding_model_aliases = list(self.embedding_models.keys())
        if self.default_embedding_model is None and len(self.embedding_models) > 1:
            raise ConfigurationError(
                f"default_embedding_model is not set, and multiple embedding models are configured. Please specify one of: {available_embedding_model_aliases} as a default_embedding_model.")

        if self.default_embedding_model is not None and self.default_embedding_model not in self.embedding_models:
            raise ConfigurationError(
                f"default_embedding_model {self.default_embedding_model} is not in configured map of embedding models. Available models: {available_embedding_model_aliases} .")
        for model_alias, embedding_model in self.embedding_models.items():
            embedding_model_provider = _get_model_provider_for_model_config(embedding_model)
            embedding_model_name = embedding_model.model_name
            embedding_model_parameters = model_catalog.get_embedding_model_parameters(embedding_model_provider,
                                                                                      embedding_model_name)
            if embedding_model_parameters is None:
                raise ConfigurationError(model_catalog.generate_unsupported_embedding_model_error_message(
                    embedding_model_provider,
                    embedding_model_name
                ))
            if hasattr(embedding_model, "profiles") and embedding_model.profiles:
                profile_names = list(embedding_model.profiles.keys())
                if embedding_model.default_profile is None and len(profile_names) > 0:
                    raise ConfigurationError(
                        f"default_profile is not set for model {model_alias}, but multiple profiles are configured. Please specify one of: {profile_names} as a default_profile.")
                if embedding_model.default_profile is not None and embedding_model.default_profile not in profile_names:
                    raise ConfigurationError(
                        f"default_profile {embedding_model.default_profile} is not in configured profiles for model {model_alias}. Available profiles: {profile_names}")

                for profile_alias, profile in embedding_model.profiles.items():
                    _validate_embedding_profile(embedding_model_parameters, profile_alias, profile)


    return self
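
Because this validator runs when the config is constructed, a mistake such as a default alias that is missing from the configured map fails immediately (a minimal sketch based on the checks above; exact exception propagation depends on how Pydantic surfaces the raised ConfigurationError):

# Fails validation: "missing" is not a key of language_models.
SemanticConfig(
    language_models={
        "gpt4": OpenAILanguageModel(model_name="gpt-4.1-nano", rpm=100, tpm=100)
    },
    default_language_model="missing"
)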

SessionConfig

Bases: BaseModel

Configuration for a user session.

This class defines the complete configuration for a user session, including application settings, model configurations, and optional cloud settings. It serves as the central configuration object for all language model operations.

Attributes:

  • app_name (str) –

    Name of the application using this session. Defaults to "default_app".

  • db_path (Optional[Path]) –

    Optional path to a local database file for persistent storage.

  • semantic (Optional[SemanticConfig]) –

    Configuration for semantic models (optional).

  • cloud (Optional[CloudConfig]) –

    Optional configuration for cloud execution.

Note

The semantic configuration is optional. When not provided, only non-semantic operations are available. The cloud configuration is optional and only needed for distributed processing.

Example

Configuring a basic session with a single language model:

config = SessionConfig(
    app_name="my_app",
    semantic=SemanticConfig(
        language_models={
            "gpt4": OpenAILanguageModel(
                model_name="gpt-4.1-nano",
                rpm=100,
                tpm=100
            )
        }
    )
)

Configuring a session with multiple models and cloud execution:

config = SessionConfig(
    app_name="production_app",
    db_path=Path("/path/to/database.db"),
    semantic=SemanticConfig(
        language_models={
            "gpt4": OpenAILanguageModel(
                model_name="gpt-4.1-nano",
                rpm=100,
                tpm=100
            ),
            "claude": AnthropicLanguageModel(
                model_name="claude-3-5-haiku-latest",
                rpm=100,
                input_tpm=100,
                output_tpm=100
            )
        },
        default_language_model="gpt4",
        embedding_models={
            "openai_embeddings": OpenAIEmbeddingModel(
                model_name="text-embedding-3-small",
                rpm=100,
                tpm=100
            )
        },
        default_embedding_model="openai_embeddings"
    ),
    cloud=CloudConfig(size=CloudExecutorSize.MEDIUM)
)
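
A completed SessionConfig is typically passed to the session entry point (a minimal sketch; assumes fenic's top-level Session export and its create_dataframe helper):

from fenic import Session

session = Session.get_or_create(config)

# Hypothetical usage; the data below is illustrative.
df = session.create_dataframe({"hypothesis": ["Every even integer > 2 is a sum of two primes."]})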