fenic.core

Core module for Fenic.

Classes:

  • ArrayType

    A type representing a homogeneous variable-length array (list) of elements.

  • BoundToolParam

    A bound tool parameter.

  • ClassDefinition

    Definition of a classification class with optional description.

  • ClassifyExample

    A single semantic example for classification operations.

  • ClassifyExampleCollection

    Collection of text-to-category examples for classification operations.

  • ColumnField

    Represents a typed column in a DataFrame schema.

  • DataType

    Base class for all data types.

  • DatasetMetadata

    Metadata for a dataset (table or view).

  • DocumentPathType

    Represents a string containing a document's local (file system) or remote (URL) path.

  • EmbeddingType

    A type representing a fixed-length embedding vector.

  • JoinExample

    A single semantic example for semantic join operations.

  • JoinExampleCollection

    Collection of comparison examples for semantic join operations.

  • KeyPoints

    Summary as a concise bulleted list.

  • LMMetrics

    Tracks language model usage metrics including token counts and costs.

  • MapExample

    A single semantic example for semantic mapping operations.

  • MapExampleCollection

    Collection of input-output examples for semantic map operations.

  • OperatorMetrics

    Metrics for a single operator in the query execution plan.

  • Paragraph

    Summary as a cohesive narrative.

  • ParameterizedToolDefinition

    A tool that has been bound to a specific Parameterized View.

  • PredicateExample

    A single semantic example for semantic predicate operations.

  • PredicateExampleCollection

    Collection of input-to-boolean examples for predicate operations.

  • QueryMetrics

    Comprehensive metrics for an executed query.

  • QueryResult

    Container for query execution results and associated metadata.

  • RMMetrics

    Tracks embedding model usage metrics including token counts and costs.

  • Schema

    Represents the schema of a DataFrame.

  • StructField

    A field in a StructType. Fields are nullable.

  • StructType

    A type representing a struct (record) with named fields.

  • ToolParam

    A parameter for a parameterized view tool.

  • TranscriptType

    Represents a string containing a transcript in a specific format.

Attributes:

  • BooleanType

    Represents a boolean value (True/False).

  • BranchSide

    Type alias representing the side of a branch in a lineage graph.

  • DataLike

    Union type representing any supported data format for both input and output operations.

  • DataLikeType

    String literal type for specifying data output formats.

  • DoubleType

    Represents a 64-bit floating-point number.

  • FloatType

    Represents a 32-bit floating-point number.

  • FuzzySimilarityMethod

    Type alias representing the supported fuzzy string similarity algorithms.

  • HtmlType

    Represents a string containing raw HTML markup.

  • IntegerType

    Represents a signed integer value.

  • JsonType

    Represents a string containing JSON data.

  • MarkdownType

    Represents a string containing Markdown-formatted text.

  • SemanticSimilarityMetric

    Type alias representing supported semantic similarity metrics.

  • StringType

    Represents a UTF-8 encoded string value.

BooleanType module-attribute

BooleanType = _BooleanType()

Represents a boolean value (True/False).

BranchSide module-attribute

BranchSide = Literal['left', 'right']

Type alias representing the side of a branch in a lineage graph.

Valid values:

  • "left": The left branch of a join.
  • "right": The right branch of a join.

DataLike module-attribute

DataLike = Union[pl.DataFrame, pd.DataFrame, Dict[str, List[Any]], List[Dict[str, Any]], Table]

Union type representing any supported data format for both input and output operations.

This type encompasses all possible data structures that can be:

  1. Used as input when creating DataFrames
  2. Returned as output from query results

Supported formats
  • pl.DataFrame: Native Polars DataFrame with efficient columnar storage
  • pd.DataFrame: Pandas DataFrame, optionally with PyArrow extension arrays
  • Dict[str, List[Any]]: Column-oriented dictionary where:
    • Keys are column names (str)
    • Values are lists containing all values for that column
  • List[Dict[str, Any]]: Row-oriented list where:
    • Each element is a dictionary representing one row
    • Dictionary keys are column names, values are cell values
  • pa.Table: Apache Arrow Table with columnar memory layout

Usage
  • Input: Used in create_dataframe() to accept data in various formats
  • Output: Used in QueryResult.data to return results in requested format

The specific type returned depends on the DataLikeType format specified when collecting query results.

DataLikeType module-attribute

DataLikeType = Literal['polars', 'pandas', 'pydict', 'pylist', 'arrow']

String literal type for specifying data output formats.

Valid values:

  • "polars": Native Polars DataFrame format
  • "pandas": Pandas DataFrame with PyArrow extension arrays
  • "pydict": Python dictionary with column names as keys, lists as values
  • "pylist": Python list of dictionaries, each representing one row
  • "arrow": Apache Arrow Table format

Used as input parameter for methods that can return data in multiple formats.
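
Request a format at collection time (a minimal sketch; df stands in for any Fenic DataFrame)
result = df.collect("pylist")  # row-oriented output
rows = result.data             # e.g. [{"name": "Alice", "age": 30}]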

DoubleType module-attribute

DoubleType = _DoubleType()

Represents a 64-bit floating-point number.

FloatType module-attribute

FloatType = _FloatType()

Represents a 32-bit floating-point number.

FuzzySimilarityMethod module-attribute

FuzzySimilarityMethod = Literal['indel', 'levenshtein', 'damerau_levenshtein', 'jaro_winkler', 'jaro', 'hamming']

Type alias representing the supported fuzzy string similarity algorithms.

These algorithms quantify the similarity or difference between two strings using various distance or similarity metrics:

  • "indel": Computes the Indel (Insertion-Deletion) distance, which counts only insertions and deletions needed to transform one string into another, excluding substitutions. This is equivalent to the Longest Common Subsequence (LCS) problem. Useful when character substitutions should not be considered as valid operations (e.g., DNA sequence alignment where only insertions/deletions occur).
  • "levenshtein": Computes the Levenshtein distance, which is the minimum number of single-character edits (insertions, deletions, or substitutions) required to transform one string into another. Suitable for general-purpose fuzzy matching where transpositions do not matter.
  • "damerau_levenshtein": An extension of Levenshtein distance that also accounts for transpositions of adjacent characters (e.g., "ab" → "ba"). This metric is more accurate for real-world typos and keyboard errors.
  • "jaro": Measures similarity based on the number and order of common characters between two strings. It is particularly effective for short strings such as names. Returns a normalized score between 0 (no similarity) and 1 (exact match).
  • "jaro_winkler": A variant of the Jaro distance that gives more weight to common prefixes. Designed to improve accuracy on strings with shared beginnings (e.g., first names, surnames).
  • "hamming": Measures the number of differing characters between two strings of equal length. Only valid when both strings are the same length. It does not support insertions or deletions—only substitutions.

Choose the method based on the type of expected variation (e.g., typos, transpositions, or structural changes).
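
Compare the algorithms on small strings (an illustrative sketch using the rapidfuzz library, which implements these same algorithms; this does not assume Fenic uses rapidfuzz internally)
from rapidfuzz.distance import DamerauLevenshtein, Hamming, JaroWinkler, Levenshtein

# "acb" -> "abc" is a single adjacent transposition
Levenshtein.distance("acb", "abc")          # 2: requires two substitutions
DamerauLevenshtein.distance("acb", "abc")   # 1: counts the transposition directly

JaroWinkler.similarity("martha", "marhta")  # ~0.96, boosted by the shared "mar" prefix
Hamming.distance("karolin", "kathrin")      # 3 differing positions (equal-length strings only)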

HtmlType module-attribute

HtmlType = _HtmlType()

Represents a string containing raw HTML markup.

IntegerType module-attribute

IntegerType = _IntegerType()

Represents a signed integer value.

JsonType module-attribute

JsonType = _JsonType()

Represents a string containing JSON data.

MarkdownType module-attribute

MarkdownType = _MarkdownType()

Represents a string containing Markdown-formatted text.

SemanticSimilarityMetric module-attribute

SemanticSimilarityMetric = Literal['cosine', 'l2', 'dot']

Type alias representing supported semantic similarity metrics.

Valid values:

  • "cosine": Cosine similarity, measures the cosine of the angle between two vectors.
  • "l2": Euclidean (L2) distance, measures the straight-line distance between two vectors.
  • "dot": Dot product similarity, the raw inner product of two vectors.

These metrics are commonly used for comparing embedding vectors in semantic search and other similarity-based applications.
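
Compute the three metrics by hand (a minimal NumPy sketch, independent of Fenic's internals)
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])

cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))  # 1.0: identical direction
l2 = np.linalg.norm(a - b)                                       # ~3.74: straight-line distance
dot = np.dot(a, b)                                               # 28.0: raw inner product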

StringType module-attribute

StringType = _StringType()

Represents a UTF-8 encoded string value.

ArrayType

Bases: DataType

A type representing a homogeneous variable-length array (list) of elements.

Attributes:

  • element_type (DataType) –

    The data type of each element in the array.

Create an array of strings
ArrayType(StringType)
ArrayType(element_type=StringType)

BoundToolParam

A bound tool parameter.

A bound tool parameter is a parameter that has been bound to a specific, typed tool_param usage within a DataFrame.

ClassDefinition

Bases: BaseModel

Definition of a classification class with optional description.

Used to define the available classes for semantic classification operations. The description helps the LLM understand what each class represents.

ClassifyExample

Bases: BaseModel

A single semantic example for classification operations.

Classify examples demonstrate the classification of an input string into a specific category string, used in a semantic.classify operation.

ClassifyExampleCollection

ClassifyExampleCollection(examples: List[ExampleType] = None)

Bases: BaseExampleCollection[ClassifyExample]

Collection of text-to-category examples for classification operations.

Stores examples showing which category each input text should be assigned to. Each example contains an input string and its corresponding category label.
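
Build a collection from explicit examples (a minimal sketch; the sentiment labels are illustrative)
collection = ClassifyExampleCollection([
    ClassifyExample(input="The product arrived broken.", output="negative"),
    ClassifyExample(input="Works exactly as advertised!", output="positive"),
])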

Methods:

  • from_polars

    Create collection from a Polars DataFrame. Must have an 'output' column and an 'input' column.

Source code in src/fenic/core/types/semantic_examples.py
def __init__(self, examples: List[ExampleType] = None):
    """Initialize a collection of semantic examples.

    Args:
        examples: Optional list of examples to add to the collection. Each example
            will be processed through create_example() to ensure proper formatting
            and validation.

    Note:
        The examples list is initialized as empty if no examples are provided.
        Each example in the provided list will be processed through create_example()
        to ensure proper formatting and validation.
    """
    self.examples: List[ExampleType] = []
    if examples:
        for example in examples:
            self.create_example(example)

from_polars classmethod

from_polars(df: DataFrame) -> ClassifyExampleCollection

Create collection from a Polars DataFrame. Must have an 'output' column and an 'input' column.

Source code in src/fenic/core/types/semantic_examples.py
@classmethod
def from_polars(cls, df: pl.DataFrame) -> ClassifyExampleCollection:
    """Create collection from a Polars DataFrame. Must have an 'output' column and an 'input' column."""
    collection = cls()

    if EXAMPLE_INPUT_KEY not in df.columns:
        raise InvalidExampleCollectionError(
            f"Classify Examples DataFrame missing required '{EXAMPLE_INPUT_KEY}' column"
        )
    if EXAMPLE_OUTPUT_KEY not in df.columns:
        raise InvalidExampleCollectionError(
            f"Classify Examples DataFrame missing required '{EXAMPLE_OUTPUT_KEY}' column"
        )

    for row in df.iter_rows(named=True):
        if row[EXAMPLE_INPUT_KEY] is None:
            raise InvalidExampleCollectionError(
                f"Classify Examples DataFrame contains null values in '{EXAMPLE_INPUT_KEY}' column"
            )
        if row[EXAMPLE_OUTPUT_KEY] is None:
            raise InvalidExampleCollectionError(
                f"Classify Examples DataFrame contains null values in '{EXAMPLE_OUTPUT_KEY}' column"
            )

        example = ClassifyExample(
            input=row[EXAMPLE_INPUT_KEY],
            output=row[EXAMPLE_OUTPUT_KEY],
        )
        collection.create_example(example)

    return collection

ColumnField

Represents a typed column in a DataFrame schema.

A ColumnField defines the structure of a single column by specifying its name and data type. This is used as a building block for DataFrame schemas.

Attributes:

  • name (str) –

    The name of the column.

  • data_type (DataType) –

    The data type of the column, as a DataType instance.

DataType

Bases: ABC

Base class for all data types.

You won't instantiate this class directly. Instead, use one of the concrete types like StringType, ArrayType, or StructType.

Used for casting, type validation, and schema inference in the DataFrame API.

DatasetMetadata

Metadata for a dataset (table or view).

Attributes:

  • schema (Schema) –

    The schema of the dataset.

  • description (Optional[str]) –

    The natural language description of the dataset's contents.

DocumentPathType

Bases: _LogicalType

Represents a string containing a document's local (file system) or remote (URL) path.

EmbeddingType

Bases: _LogicalType

A type representing a fixed-length embedding vector.

Attributes:

  • dimensions (int) –

    The number of dimensions in the embedding vector.

  • embedding_model (str) –

    Name of the model used to generate the embedding.

Create an embedding type for text-embedding-3-small
EmbeddingType(384, embedding_model="text-embedding-3-small")

JoinExample

Bases: BaseModel

A single semantic example for semantic join operations.

Join examples demonstrate the evaluation of two input variables across different datasets against a specific condition, used in a semantic.join operation.

JoinExampleCollection

JoinExampleCollection(examples: List[JoinExample] = None)

Bases: BaseExampleCollection[JoinExample]

Collection of comparison examples for semantic join operations.

Stores examples showing which pairs of values should be considered matches for joining data. Each example contains a left value, right value, and boolean output indicating whether they match.
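
Build a collection of match/non-match pairs (a minimal sketch; the pairs are illustrative)
collection = JoinExampleCollection([
    JoinExample(left_on="Databricks", right_on="a data lakehouse platform", output=True),
    JoinExample(left_on="Databricks", right_on="a fruit orchard", output=False),
])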

Initialize a collection of semantic join examples.

Parameters:

  • examples (List[JoinExample], default: None ) –

    List of examples to add to the collection. Each example will be processed through create_example() to ensure proper formatting and validation.

Methods:

  • create_example

    Create an example in the collection with type validation.

  • from_polars

    Create collection from a Polars DataFrame. Must have 'left_on', 'right_on', and 'output' columns.

Source code in src/fenic/core/types/semantic_examples.py
def __init__(self, examples: List[JoinExample] = None):
    """Initialize a collection of semantic join examples.

    Args:
        examples: List of examples to add to the collection. Each example
            will be processed through create_example() to ensure proper formatting
            and validation.
    """
    self._type_validator = _ExampleTypeValidator()
    super().__init__(examples)

create_example

create_example(example: JoinExample) -> JoinExampleCollection

Create an example in the collection with type validation.

Validates that left_on and right_on values have consistent types across examples. The first example establishes the types and cannot have None values. Subsequent examples must have matching types but can have None values.

Parameters:

  • example (JoinExample) –

    The JoinExample to add.

Returns:

  • JoinExampleCollection –

    Self for method chaining.

Raises:

  • InvalidExampleCollectionError

    If the example type is wrong, if the first example contains None values, or if subsequent examples have type mismatches.

Source code in src/fenic/core/types/semantic_examples.py
def create_example(self, example: JoinExample) -> JoinExampleCollection:
    """Create an example in the collection with type validation.

    Validates that left_on and right_on values have consistent types across
    examples. The first example establishes the types and cannot have None values.
    Subsequent examples must have matching types but can have None values.

    Args:
        example: The JoinExample to add.

    Returns:
        Self for method chaining.

    Raises:
        InvalidExampleCollectionError: If the example type is wrong, if the
            first example contains None values, or if subsequent examples
            have type mismatches.
    """
    if not isinstance(example, JoinExample):
        raise InvalidExampleCollectionError(
            f"Expected example of type {JoinExample.__name__}, got {type(example).__name__}"
        )

    # Convert to dict format for validation
    example_dict = {
        LEFT_ON_KEY: example.left_on,
        RIGHT_ON_KEY: example.right_on
    }

    example_num = len(self.examples) + 1
    self._type_validator.process_example(example_dict, example_num)

    self.examples.append(example)
    return self

from_polars classmethod

from_polars(df: DataFrame) -> JoinExampleCollection

Create collection from a Polars DataFrame. Must have 'left_on', 'right_on', and 'output' columns.

Source code in src/fenic/core/types/semantic_examples.py
@classmethod
def from_polars(cls, df: pl.DataFrame) -> JoinExampleCollection:
    """Create collection from a Polars DataFrame. Must have 'left_on', 'right_on', and 'output' columns."""
    collection = cls()

    required_columns = [
        LEFT_ON_KEY,
        RIGHT_ON_KEY,
        EXAMPLE_OUTPUT_KEY,
    ]
    for col in required_columns:
        if col not in df.columns:
            raise InvalidExampleCollectionError(
                f"Join Examples DataFrame missing required '{col}' column"
            )

    for row in df.iter_rows(named=True):
        for col in required_columns:
            if row[col] is None:
                raise InvalidExampleCollectionError(
                    f"Join Examples DataFrame contains null values in '{col}' column"
                )

        example = JoinExample(
            left_on=row[LEFT_ON_KEY],
            right_on=row[RIGHT_ON_KEY],
            output=row[EXAMPLE_OUTPUT_KEY],
        )
        collection.create_example(example)

    return collection

KeyPoints

Bases: BaseModel

Summary as a concise bulleted list.

Each bullet should capture a distinct and essential idea, with a maximum number of points specified.

Attributes:

  • max_points (int) –

    The maximum number of key points to include in the summary.

Methods:

  • max_tokens

    Calculate the maximum number of tokens for the summary based on the number of key points.

max_tokens

max_tokens() -> int

Calculate the maximum number of tokens for the summary based on the number of key points.

Source code in src/fenic/core/types/summarize.py
def max_tokens(self) -> int:
    """Calculate the maximum number of tokens for the summary based on the number of key points."""
    return self.max_points * 75
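
Budget a five-point summary (a minimal sketch; each bullet reserves 75 tokens)
summary_format = KeyPoints(max_points=5)
summary_format.max_tokens()  # 375 (5 * 75)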

LMMetrics dataclass

LMMetrics(num_uncached_input_tokens: int = 0, num_cached_input_tokens: int = 0, num_output_tokens: int = 0, cost: float = 0.0, num_requests: int = 0)

Tracks language model usage metrics including token counts and costs.

Attributes:

  • num_uncached_input_tokens (int) –

    Number of uncached tokens in the prompt/input

  • num_cached_input_tokens (int) –

    Number of cached tokens in the prompt/input

  • num_output_tokens (int) –

    Number of tokens in the completion/output

  • cost (float) –

    Total cost in USD for the LM API call

  • num_requests (int) –

    Number of requests made to the language model

MapExample

Bases: BaseModel

A single semantic example for semantic mapping operations.

Map examples demonstrate the transformation of input variables to a specific output string or structured model, used in a semantic.map operation.

MapExampleCollection

MapExampleCollection(examples: List[MapExample] = None)

Bases: BaseExampleCollection[MapExample]

Collection of input-output examples for semantic map operations.

Stores examples that demonstrate how input data should be transformed into output text or structured data. Each example shows the expected output for a given set of input fields.
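
Build a collection of input-to-output examples (a minimal sketch; the field names and labels are illustrative)
collection = MapExampleCollection([
    MapExample(
        input={"title": "Q3 earnings beat expectations", "body": "Revenue rose 12%..."},
        output="finance",
    ),
])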

Initialize a collection of semantic map examples.

Parameters:

  • examples (List[MapExample], default: None ) –

    List of examples to add to the collection. Each example will be processed through create_example() to ensure proper formatting and validation.

Methods:

  • create_example

    Create an example in the collection with output and input type validation.

  • from_polars

    Create collection from a Polars DataFrame. Must have an 'output' column and at least one input column.

Source code in src/fenic/core/types/semantic_examples.py
def __init__(self, examples: List[MapExample] = None):
    """Initialize a collection of semantic map examples.

    Args:
        examples: List of examples to add to the collection. Each example
            will be processed through create_example() to ensure proper formatting
            and validation.
    """
    self._type_validator = _ExampleTypeValidator()
    super().__init__(examples)

create_example

create_example(example: MapExample) -> MapExampleCollection

Create an example in the collection with output and input type validation.

Ensures all examples in the collection have consistent output types (either all strings or all BaseModel instances) and validates that input fields have consistent types across examples.

For input validation:

  • The first example establishes the schema and cannot have None values
  • Subsequent examples must have the same fields but can have None values
  • Non-None values must match the established type for each field

Parameters:

  • example (MapExample) –

    The MapExample to add.

Returns:

  • MapExampleCollection –

    Self for method chaining.

Raises:

  • InvalidExampleCollectionError

    If the example output type doesn't match the existing examples in the collection, if the first example contains None values, or if subsequent examples have type mismatches.

Source code in src/fenic/core/types/semantic_examples.py
def create_example(self, example: MapExample) -> MapExampleCollection:
    """Create an example in the collection with output and input type validation.

    Ensures all examples in the collection have consistent output types
    (either all strings or all BaseModel instances) and validates that input
    fields have consistent types across examples.

    For input validation:
    - The first example establishes the schema and cannot have None values
    - Subsequent examples must have the same fields but can have None values
    - Non-None values must match the established type for each field

    Args:
        example: The MapExample to add.

    Returns:
        Self for method chaining.

    Raises:
        InvalidExampleCollectionError: If the example output type doesn't match
            the existing examples in the collection, if the first example contains
            None values, or if subsequent examples have type mismatches.
    """
    if not isinstance(example, MapExample):
        raise InvalidExampleCollectionError(
            f"Expected example of type {MapExample.__name__}, got {type(example).__name__}"
        )

    # Validate output type consistency
    self._validate_single_example_output_type(example)

    # Validate input types
    example_num = len(self.examples) + 1
    self._type_validator.process_example(example.input, example_num)

    self.examples.append(example)
    return self

from_polars classmethod

from_polars(df: DataFrame) -> MapExampleCollection

Create collection from a Polars DataFrame. Must have an 'output' column and at least one input column.

Source code in src/fenic/core/types/semantic_examples.py
@classmethod
def from_polars(cls, df: pl.DataFrame) -> MapExampleCollection:
    """Create collection from a Polars DataFrame. Must have an 'output' column and at least one input column."""
    collection = cls()

    if EXAMPLE_OUTPUT_KEY not in df.columns:
        raise ValueError(
            f"Map Examples DataFrame missing required '{EXAMPLE_OUTPUT_KEY}' column"
        )

    input_cols = [col for col in df.columns if col != EXAMPLE_OUTPUT_KEY]

    if not input_cols:
        raise ValueError(
            "Map Examples DataFrame must have at least one input column"
        )

    for row in df.iter_rows(named=True):
        input_dict = {col: row[col] for col in input_cols}
        example = MapExample(input=input_dict, output=row[EXAMPLE_OUTPUT_KEY])
        collection.create_example(example)

    return collection

OperatorMetrics dataclass

OperatorMetrics(operator_id: str, num_output_rows: int = 0, execution_time_ms: float = 0.0, lm_metrics: LMMetrics = LMMetrics(), rm_metrics: RMMetrics = RMMetrics(), is_cache_hit: bool = False)

Metrics for a single operator in the query execution plan.

Attributes:

  • operator_id (str) –

    Unique identifier for the operator

  • num_output_rows (int) –

    Number of rows output by this operator

  • execution_time_ms (float) –

    Execution time in milliseconds

  • lm_metrics (LMMetrics) –

    Language model usage metrics for this operator

  • rm_metrics (RMMetrics) –

    Embedding model usage metrics for this operator

  • is_cache_hit (bool) –

    Whether results were retrieved from cache

Paragraph

Bases: BaseModel

Summary as a cohesive narrative.

The summary should flow naturally and not exceed a specified maximum word count.

Attributes:

  • max_words (int) –

    The maximum number of words allowed in the summary.

Methods:

  • max_tokens

    Calculate the maximum number of tokens for the summary based on the number of words.

max_tokens

max_tokens() -> int

Calculate the maximum number of tokens for the summary based on the number of words.

Source code in src/fenic/core/types/summarize.py
def max_tokens(self) -> int:
    """Calculate the maximum number of tokens for the summary based on the number of words."""
    return int(self.max_words * 1.5)
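
Budget a 200-word summary (a minimal sketch; words map to tokens at a 1.5x ratio)
summary_format = Paragraph(max_words=200)
summary_format.max_tokens()  # 300 (int(200 * 1.5))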

ParameterizedToolDefinition

A tool that has been bound to a specific Parameterized View.

PredicateExample

Bases: BaseModel

A single semantic example for semantic predicate operations.

Predicate examples demonstrate the evaluation of input variables against a specific condition, used in a semantic.predicate operation.

PredicateExampleCollection

PredicateExampleCollection(examples: List[PredicateExample] = None)

Bases: BaseExampleCollection[PredicateExample]

Collection of input-to-boolean examples for predicate operations.

Stores examples showing which inputs should evaluate to True or False based on some condition. Each example contains input fields and a boolean output indicating whether the condition holds.
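
Build a collection of true/false examples (a minimal sketch; the reviews are illustrative)
collection = PredicateExampleCollection([
    PredicateExample(input={"review": "Five stars, works perfectly."}, output=True),
    PredicateExample(input={"review": "Still waiting on a refund."}, output=False),
])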

Initialize a collection of semantic predicate examples.

Parameters:

  • examples (List[PredicateExample], default: None ) –

    List of examples to add to the collection. Each example will be processed through create_example() to ensure proper formatting and validation.

Methods:

  • create_example

    Create an example in the collection with input type validation.

  • from_polars

    Create collection from a Polars DataFrame.

Source code in src/fenic/core/types/semantic_examples.py
def __init__(self, examples: List[PredicateExample] = None):
    """Initialize a collection of semantic predicate examples.

    Args:
        examples: List of examples to add to the collection. Each example
            will be processed through create_example() to ensure proper formatting
            and validation.
    """
    self._type_validator = _ExampleTypeValidator()
    super().__init__(examples)

create_example

create_example(example: PredicateExample) -> PredicateExampleCollection

Create an example in the collection with input type validation.

Validates that input fields have consistent types across examples. The first example establishes the schema and cannot have None values. Subsequent examples must have the same fields but can have None values.

Parameters:

  • example (PredicateExample) –

    The PredicateExample to add.

Returns:

  • PredicateExampleCollection –

    Self for method chaining.

Raises:

  • InvalidExampleCollectionError

    If the example type is wrong, if the first example contains None values, or if subsequent examples have type mismatches.

Source code in src/fenic/core/types/semantic_examples.py
def create_example(self, example: PredicateExample) -> PredicateExampleCollection:
    """Create an example in the collection with input type validation.

    Validates that input fields have consistent types across examples.
    The first example establishes the schema and cannot have None values.
    Subsequent examples must have the same fields but can have None values.

    Args:
        example: The PredicateExample to add.

    Returns:
        Self for method chaining.

    Raises:
        InvalidExampleCollectionError: If the example type is wrong, if the
            first example contains None values, or if subsequent examples
            have type mismatches.
    """
    if not isinstance(example, PredicateExample):
        raise InvalidExampleCollectionError(
            f"Expected example of type {PredicateExample.__name__}, got {type(example).__name__}"
        )

    # Validate input types
    example_num = len(self.examples) + 1
    self._type_validator.process_example(example.input, example_num)

    self.examples.append(example)
    return self

from_polars classmethod

from_polars(df: DataFrame) -> PredicateExampleCollection

Create collection from a Polars DataFrame.

Source code in src/fenic/core/types/semantic_examples.py
@classmethod
def from_polars(cls, df: pl.DataFrame) -> PredicateExampleCollection:
    """Create collection from a Polars DataFrame."""
    collection = cls()

    # Validate output column exists
    if EXAMPLE_OUTPUT_KEY not in df.columns:
        raise InvalidExampleCollectionError(
            f"Predicate Examples DataFrame missing required '{EXAMPLE_OUTPUT_KEY}' column"
        )

    input_cols = [col for col in df.columns if col != EXAMPLE_OUTPUT_KEY]

    if not input_cols:
        raise InvalidExampleCollectionError(
            "Predicate Examples DataFrame must have at least one input column"
        )

    for row in df.iter_rows(named=True):
        if row[EXAMPLE_OUTPUT_KEY] is None:
            raise InvalidExampleCollectionError(
                f"Predicate Examples DataFrame contains null values in '{EXAMPLE_OUTPUT_KEY}' column"
            )

        input_dict = {col: row[col] for col in input_cols if row[col] is not None}

        example = PredicateExample(input=input_dict, output=row[EXAMPLE_OUTPUT_KEY])
        collection.create_example(example)

    return collection

QueryMetrics dataclass

QueryMetrics(execution_id: str, session_id: str, execution_time_ms: float = 0.0, num_output_rows: int = 0, total_lm_metrics: LMMetrics = LMMetrics(), total_rm_metrics: RMMetrics = RMMetrics(), end_ts: datetime = datetime.now(), _operator_metrics: Dict[str, OperatorMetrics] = dict(), _plan_repr: PhysicalPlanRepr = lambda: PhysicalPlanRepr(operator_id='empty')())

Comprehensive metrics for an executed query.

Includes overall statistics and detailed metrics for each operator in the execution plan.

Attributes:

  • execution_id (str) –

    Unique identifier for this query execution

  • session_id (str) –

    Identifier for the session this query belongs to

  • execution_time_ms (float) –

    Total query execution time in milliseconds

  • num_output_rows (int) –

    Total number of rows returned by the query

  • total_lm_metrics (LMMetrics) –

    Aggregated language model metrics across all operators

  • total_rm_metrics (RMMetrics) –

    Aggregated embedding model metrics across all operators

  • end_ts (datetime) –

    Timestamp when query execution completed

Methods:

  • get_execution_plan_details

    Generate a formatted execution plan with detailed metrics.

  • get_summary

    Summarize the query metrics in a single line.

  • to_dict

    Convert QueryMetrics to a dictionary for table storage.

start_ts property

start_ts: datetime

Calculate start timestamp from end timestamp and execution time.

get_execution_plan_details

get_execution_plan_details() -> str

Generate a formatted execution plan with detailed metrics.

Produces a hierarchical representation of the query execution plan, including performance metrics and language model usage for each operator.

Returns:

  • str ( str ) –

    A formatted string showing the execution plan with metrics.

Source code in src/fenic/core/metrics.py
def get_execution_plan_details(self) -> str:
    """Generate a formatted execution plan with detailed metrics.

    Produces a hierarchical representation of the query execution plan,
    including performance metrics and language model usage for each operator.

    Returns:
        str: A formatted string showing the execution plan with metrics.
    """

    def _format_node(node: PhysicalPlanRepr, indent: int = 1) -> str:
        op = self._operator_metrics[node.operator_id]
        indent_str = "  " * indent

        details = [
            f"{indent_str}{op.operator_id}",
            f"{indent_str}  Output Rows: {op.num_output_rows:,}",
            f"{indent_str}  Execution Time: {op.execution_time_ms:.2f}ms",
            f"{indent_str}  Cached: {op.is_cache_hit}",
        ]

        if op.lm_metrics.cost > 0:
            details.extend(
                [
                    f"{indent_str}  Language Model Usage: {op.lm_metrics.num_uncached_input_tokens:,} input tokens, {op.lm_metrics.num_cached_input_tokens:,} cached input tokens, {op.lm_metrics.num_output_tokens:,} output tokens",
                    f"{indent_str}  Language Model Cost: ${op.lm_metrics.cost:.6f}",
                ]
            )

        if op.rm_metrics.cost > 0:
            details.extend(
                [
                    f"{indent_str}  Embedding Model Usage: {op.rm_metrics.num_input_tokens:,} input tokens",
                    f"{indent_str}  Embedding Model Cost: ${op.rm_metrics.cost:.6f}",
                ]
            )
        return (
            "\n".join(details)
            + "\n"
            + "".join(_format_node(child, indent + 1) for child in node.children)
        )

    return f"Execution Plan\n{_format_node(self._plan_repr)}"

get_summary

get_summary() -> str

Summarize the query metrics in a single line.

Returns:

  • str ( str ) –

    A concise summary of execution time, row count, and LM cost.

Source code in src/fenic/core/metrics.py
def get_summary(self) -> str:
    """Summarize the query metrics in a single line.

    Returns:
        str: A concise summary of execution time, row count, and LM cost.
    """
    return (
        f"Query executed in {self.execution_time_ms:.2f}ms, "
        f"returned {self.num_output_rows:,} rows, "
        f"language model cost: ${self.total_lm_metrics.cost:.6f}, "
        f"embedding model cost: ${self.total_rm_metrics.cost:.6f}"
    )

to_dict

to_dict() -> Dict[str, Any]

Convert QueryMetrics to a dictionary for table storage.

Returns:

  • Dict[str, Any]

    Dict containing all metrics fields suitable for database storage.

Source code in src/fenic/core/metrics.py
def to_dict(self) -> Dict[str, Any]:
    """Convert QueryMetrics to a dictionary for table storage.

    Returns:
        Dict containing all metrics fields suitable for database storage.
    """
    return {
        "execution_id": self.execution_id,
        "session_id": self.session_id,
        "execution_time_ms": self.execution_time_ms,
        "num_output_rows": self.num_output_rows,
        "start_ts": self.start_ts,
        "end_ts": self.end_ts,
        "total_lm_cost": self.total_lm_metrics.cost,
        "total_lm_uncached_input_tokens": self.total_lm_metrics.num_uncached_input_tokens,
        "total_lm_cached_input_tokens": self.total_lm_metrics.num_cached_input_tokens,
        "total_lm_output_tokens": self.total_lm_metrics.num_output_tokens,
        "total_lm_requests": self.total_lm_metrics.num_requests,
        "total_rm_cost": self.total_rm_metrics.cost,
        "total_rm_input_tokens": self.total_rm_metrics.num_input_tokens,
        "total_rm_requests": self.total_rm_metrics.num_requests,
    }

QueryResult dataclass

QueryResult(data: DataLike, metrics: QueryMetrics)

Container for query execution results and associated metadata.

This dataclass bundles together the materialized data from a query execution along with metrics about the execution process. It provides a unified interface for accessing both the computed results and performance information.

Attributes:

  • data (DataLike) –

    The materialized query results in the requested format. Can be any of the supported data types (Polars/Pandas DataFrame, Arrow Table, or Python dict/list structures).

  • metrics (QueryMetrics) –

    Execution metadata including timing information, rows processed, and model usage costs collected during query execution.

Access query results and metrics
# Execute query and get results with metrics
result = df.filter(col("age") > 25).collect("pandas")
pandas_df = result.data  # Access the Pandas DataFrame
print(result.metrics.execution_time_ms)  # Access execution time in milliseconds
print(result.metrics.num_output_rows)  # Access row count
Work with different data formats
# Get results in different formats
polars_result = df.collect("polars")
arrow_result = df.collect("arrow")
dict_result = df.collect("pydict")

# All contain the same data, different formats
print(type(polars_result.data))  # <class 'polars.DataFrame'>
print(type(arrow_result.data))   # <class 'pyarrow.lib.Table'>
print(type(dict_result.data))    # <class 'dict'>
Note

The actual type of the data attribute depends on the format requested during collection. Use type checking or isinstance() if you need to handle the data differently based on its format.

RMMetrics dataclass

RMMetrics(num_input_tokens: int = 0, num_requests: int = 0, cost: float = 0.0)

Tracks embedding model usage metrics including token counts and costs.

Attributes:

  • num_input_tokens (int) –

    Number of tokens to embed

  • cost (float) –

    Total cost in USD to embed the tokens

  • num_requests (int) –

    Number of requests made to the embedding model

Schema

Represents the schema of a DataFrame.

A Schema defines the structure of a DataFrame by specifying an ordered collection of column fields. Each column field defines the name and data type of a column in the DataFrame.

Attributes:

  • column_fields (List[ColumnField]) –

    An ordered list of ColumnField objects that define the structure of the DataFrame.

Methods:

  • column_names

    Get a list of all column names in the schema.

column_names

column_names() -> List[str]

Get a list of all column names in the schema.

Returns:

  • List[str]

    A list of strings containing the names of all columns in the schema.

Source code in src/fenic/core/types/schema.py
def column_names(self) -> List[str]:
    """Get a list of all column names in the schema.

    Returns:
        A list of strings containing the names of all columns in the schema.
    """
    return [field.name for field in self.column_fields]
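
Describe a two-column DataFrame (a minimal sketch, assuming Schema and ColumnField accept their documented attributes as keyword arguments)
schema = Schema(column_fields=[
    ColumnField(name="name", data_type=StringType),
    ColumnField(name="age", data_type=IntegerType),
])
schema.column_names()  # ['name', 'age']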

StructField

A field in a StructType. Fields are nullable.

Attributes:

  • name (str) –

    The name of the field.

  • data_type (DataType) –

    The data type of the field.

StructType

Bases: DataType

A type representing a struct (record) with named fields.

Attributes:

  • fields (List[StructField]) –

    List of field definitions.

Create a struct with name and age fields
StructType([
    StructField("name", StringType),
    StructField("age", IntegerType),
])

ToolParam

Bases: BaseModel

A parameter for a parameterized view tool.

A parameter is a named value that can be passed to a tool. These are matched to the parameter names of the tool_param UnresolvedLiteralExpr expressions captured in the Logical Plan.

Attributes:

  • name (str) –

    The name of the parameter.

  • description (str) –

    The description of the parameter.

  • allowed_values (Optional[List[ToolParameterType]]) –

    The allowed values for the parameter.

  • has_default (bool) –

    Whether the parameter has a default value.

  • default_value (Optional[ToolParameterType]) –

    The default value for the parameter.

required property

required: bool

Whether the parameter is required.

Returns:

  • bool

    True if the parameter is required, False otherwise.

TranscriptType

Bases: _LogicalType

Represents a string containing a transcript in a specific format.