fenic
Fenic is an opinionated, PySpark-inspired DataFrame framework for building production AI and agentic applications.
Classes:
- AnthropicModelConfig – Configuration for Anthropic models.
- ArrayType – A type representing a homogeneous variable-length array (list) of elements.
- Catalog – Entry point for catalog operations.
- ClassifyExample – A single semantic example for classification operations.
- ClassifyExampleCollection – Collection of examples for semantic classification operations.
- Column – A column expression in a DataFrame.
- ColumnField – Represents a typed column in a DataFrame schema.
- DataFrame – A data collection organized into named columns.
- DataFrameReader – Interface used to load a DataFrame from external storage systems.
- DataFrameWriter – Interface used to write a DataFrame to external storage systems.
- DataType – Base class for all data types.
- DocumentPathType – Represents a string containing a document's local (file system) or remote (URL) path.
- EmbeddingType – A type representing a fixed-length embedding vector.
- ExtractSchema – Represents a structured extraction schema.
- ExtractSchemaField – Represents a field within a structured extraction schema.
- ExtractSchemaList – Represents a list data type for structured extraction schema definitions.
- GoogleGLAModelConfig – Configuration for Google Generative Language (GLA) models.
- GoogleVertexModelConfig – Configuration for Google Vertex models.
- GroupedData – Methods for aggregations on a grouped DataFrame.
- JoinExample – A single semantic example for semantic join operations.
- JoinExampleCollection – Collection of examples for semantic join operations.
- LMMetrics – Tracks language model usage metrics including token counts and costs.
- Lineage – Query interface for tracing data lineage through a query plan.
- MapExample – A single semantic example for semantic mapping operations.
- MapExampleCollection – Collection of examples for semantic mapping operations.
- OpenAIModelConfig – Configuration for OpenAI models.
- OperatorMetrics – Metrics for a single operator in the query execution plan.
- PredicateExample – A single semantic example for semantic predicate operations.
- PredicateExampleCollection – Collection of examples for semantic predicate operations.
- QueryMetrics – Comprehensive metrics for an executed query.
- QueryResult – Container for query execution results and associated metadata.
- RMMetrics – Tracks embedding model usage metrics including token counts and costs.
- Schema – Represents the schema of a DataFrame.
- SemanticConfig – Configuration for semantic language and embedding models.
- SemanticExtensions – A namespace for semantic dataframe operators.
- Session – The entry point to programming with the DataFrame API. Similar to PySpark's SparkSession.
- SessionConfig – Configuration for a user session.
- StructField – A field in a StructType. Fields are nullable.
- StructType – A type representing a struct (record) with named fields.
- TranscriptType – Represents a string containing a transcript in a specific format.
Functions:
- array – Creates a new array column from multiple input columns.
- array_agg – Alias for collect_list().
- array_contains – Checks if array column contains a specific value.
- array_size – Returns the number of elements in an array column.
- asc – Creates a Column expression representing an ascending sort order.
- asc_nulls_first – Creates a Column expression representing an ascending sort order with nulls first.
- asc_nulls_last – Creates a Column expression representing an ascending sort order with nulls last.
- avg – Aggregate function: returns the average (mean) of all values in the specified column.
- coalesce – Returns the first non-null value from the given columns for each row.
- col – Creates a Column expression referencing a column in the DataFrame.
- collect_list – Aggregate function: collects all values from the specified column into a list.
- configure_logging – Configure logging for the library and root logger in interactive environments.
- count – Aggregate function: returns the count of non-null values in the specified column.
- desc – Creates a Column expression representing a descending sort order.
- desc_nulls_first – Creates a Column expression representing a descending sort order with nulls first.
- desc_nulls_last – Creates a Column expression representing a descending sort order with nulls last.
- first – Aggregate function: returns the first non-null value in the specified column.
- lit – Creates a Column expression representing a literal value.
- max – Aggregate function: returns the maximum value in the specified column.
- mean – Aggregate function: returns the mean (average) of all values in the specified column.
- min – Aggregate function: returns the minimum value in the specified column.
- stddev – Aggregate function: returns the sample standard deviation of the specified column.
- struct – Creates a new struct column from multiple input columns.
- sum – Aggregate function: returns the sum of all values in the specified column.
- udf – A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows.
- when – Evaluates a condition and returns a value if true.
Attributes:
- BooleanType – Represents a boolean value (True/False).
- DataLike – Union type representing any supported data format for both input and output operations.
- DataLikeType – String literal type for specifying data output formats.
- DoubleType – Represents a 64-bit floating-point number.
- FloatType – Represents a 32-bit floating-point number.
- HtmlType – Represents a string containing raw HTML markup.
- IntegerType – Represents a signed integer value.
- JsonType – Represents a string containing JSON data.
- MarkdownType – Represents a string containing Markdown-formatted text.
- SemanticSimilarityMetric – Type alias representing supported semantic similarity metrics.
- StringType – Represents a UTF-8 encoded string value.
BooleanType
module-attribute
BooleanType = _BooleanType()
Represents a boolean value (True/False).
DataLike
module-attribute
DataLike = Union[pl.DataFrame, pd.DataFrame, Dict[str, List[Any]], List[Dict[str, Any]], pa.Table]
Union type representing any supported data format for both input and output operations.
This type encompasses all possible data structures that can be:
1. Used as input when creating DataFrames
2. Returned as output from query results
Supported formats
- pl.DataFrame: Native Polars DataFrame with efficient columnar storage
- pd.DataFrame: Pandas DataFrame, optionally with PyArrow extension arrays
- Dict[str, List[Any]]: Column-oriented dictionary where:
  - Keys are column names (str)
  - Values are lists containing all values for that column
- List[Dict[str, Any]]: Row-oriented list where:
  - Each element is a dictionary representing one row
  - Dictionary keys are column names, values are cell values
- pa.Table: Apache Arrow Table with columnar memory layout
Usage
- Input: Used in create_dataframe() to accept data in various formats
- Output: Used in QueryResult.data to return results in requested format
The specific type returned depends on the DataLikeType format specified when collecting query results.
DataLikeType
module-attribute
DataLikeType = Literal['polars', 'pandas', 'pydict', 'pylist', 'arrow']
String literal type for specifying data output formats.
Valid values
- "polars": Native Polars DataFrame format
- "pandas": Pandas DataFrame with PyArrow extension arrays
- "pydict": Python dictionary with column names as keys, lists as values
- "pylist": Python list of dictionaries, each representing one row
- "arrow": Apache Arrow Table format
Used as input parameter for methods that can return data in multiple formats.
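To make the round trip concrete, here is a minimal sketch (the column values are illustrative; create_dataframe and collect are documented later on this page):
# Column-oriented dict in, Polars out (the default)
df = session.create_dataframe({"id": [1, 2], "value": ["a", "b"]})
result = df.collect("polars")    # result.data is a Polars DataFrame
# Row-oriented input, row-oriented output
df = session.create_dataframe([{"id": 1, "value": "a"}, {"id": 2, "value": "b"}])
rows = df.collect("pylist").data    # [{'id': 1, 'value': 'a'}, {'id': 2, 'value': 'b'}]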
DoubleType
module-attribute
DoubleType = _DoubleType()
Represents a 64-bit floating-point number.
FloatType
module-attribute
FloatType = _FloatType()
Represents a 32-bit floating-point number.
HtmlType
module-attribute
HtmlType = _HtmlType()
Represents a string containing raw HTML markup.
IntegerType
module-attribute
IntegerType = _IntegerType()
Represents a signed integer value.
JsonType
module-attribute
JsonType = _JsonType()
Represents a string containing JSON data.
MarkdownType
module-attribute
MarkdownType = _MarkdownType()
Represents a string containing Markdown-formatted text.
SemanticSimilarityMetric
module-attribute
SemanticSimilarityMetric = Literal['cosine', 'l2', 'dot']
Type alias representing supported semantic similarity metrics.
Valid values:
- "cosine": Cosine similarity, measures the cosine of the angle between two vectors.
- "l2": Euclidean (L2) distance, measures the straight-line distance between two vectors.
- "dot": Dot product similarity, the raw inner product of two vectors.
These metrics are commonly used for comparing embedding vectors in semantic search and other similarity-based applications.
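As a quick numerical illustration of how the three metrics differ (a NumPy sketch for intuition only; Fenic APIs take the metric as one of the string values above):
import numpy as np
a = np.array([1.0, 0.0, 1.0])
b = np.array([0.5, 0.5, 1.0])
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))  # ~0.866, angle-based similarity
l2 = np.linalg.norm(a - b)                                # ~0.707, straight-line distance
dot = float(a @ b)                                        # 1.5, unnormalized inner product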
StringType
module-attribute
StringType = _StringType()
Represents a UTF-8 encoded string value.
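Note that the type attributes above are ready-made singleton instances, so they are passed directly rather than instantiated. A minimal sketch (the column name is illustrative):
# Cast an integer column to a 64-bit float using the DoubleType singleton
df.select(col("count").cast(DoubleType))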
AnthropicModelConfig
Bases: BaseModel
Configuration for Anthropic models.
This class defines the configuration settings for Anthropic language models, including model selection and separate rate limiting parameters for input and output tokens.
Attributes:
- model_name (ANTHROPIC_AVAILABLE_LANGUAGE_MODELS) – The name of the Anthropic model to use.
- rpm (int) – Requests per minute limit; must be greater than 0.
- input_tpm (int) – Input tokens per minute limit; must be greater than 0.
- output_tpm (int) – Output tokens per minute limit; must be greater than 0.
Examples:
Configuring an Anthropic model with separate input/output rate limits:
config = AnthropicModelConfig(
model_name="claude-3-5-haiku-latest",
rpm=100,
input_tpm=100,
output_tpm=100
)
ArrayType
A type representing a homogeneous variable-length array (list) of elements.
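For example, mirroring the cast example later on this page, an array-of-strings type is written:
# An ArrayType is parameterized by its element type
ArrayType(element_type=StringType)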
Catalog
Catalog(catalog: BaseCatalog)
Entry point for catalog operations.
The Catalog provides methods to interact with and manage database tables, including listing available tables, describing table schemas, and dropping tables.
Basic usage
# Create a new catalog
session.catalog.create_catalog('my_catalog')
# Returns: True
# Set the current catalog
session.catalog.set_current_catalog('my_catalog')
# Returns: None
# Create a new database
session.catalog.create_database('my_database')
# Returns: True
# Use the new database
session.catalog.set_current_database('my_database')
# Returns: None
# Create a new table
session.catalog.create_table('my_table', Schema([
ColumnField('id', IntegerType),
]))
# Returns: True
Initialize a Catalog instance.
Parameters:
- catalog (BaseCatalog) – The underlying catalog implementation.
Methods:
- create_catalog – Creates a new catalog.
- create_database – Creates a new database.
- create_table – Creates a new table.
- describe_table – Returns the schema of the specified table.
- does_catalog_exist – Checks if a catalog with the specified name exists.
- does_database_exist – Checks if a database with the specified name exists.
- does_table_exist – Checks if a table with the specified name exists.
- drop_catalog – Drops a catalog.
- drop_database – Drops a database.
- drop_table – Drops the specified table.
- get_current_catalog – Returns the name of the current catalog.
- get_current_database – Returns the name of the current database in the current catalog.
- list_catalogs – Returns a list of available catalogs.
- list_databases – Returns a list of databases in the current catalog.
- list_tables – Returns a list of tables stored in the current database.
- set_current_catalog – Sets the current catalog.
- set_current_database – Sets the current database.
Source code in src/fenic/api/catalog.py
create_catalog
create_catalog(catalog_name: str, ignore_if_exists: bool = True) -> bool
Creates a new catalog.
Parameters:
- catalog_name (str) – Name of the catalog to create.
- ignore_if_exists (bool, default: True) – If True, return False when the catalog already exists. If False, raise an error when the catalog already exists. Defaults to True.
Raises:
- CatalogAlreadyExistsError – If the catalog already exists and ignore_if_exists is False.
Returns:
- bool – True if the catalog was created successfully, False if the catalog already exists and ignore_if_exists is True.
Create a new catalog
# Create a new catalog named 'my_catalog'
session.catalog.create_catalog('my_catalog')
# Returns: True
Create an existing catalog with ignore_if_exists
# Try to create an existing catalog with ignore_if_exists=True
session.catalog.create_catalog('my_catalog', ignore_if_exists=True)
# Returns: False
Create an existing catalog without ignore_if_exists
# Try to create an existing catalog with ignore_if_exists=False
session.catalog.create_catalog('my_catalog', ignore_if_exists=False)
# Raises: CatalogAlreadyExistsError
Source code in src/fenic/api/catalog.py
create_database
create_database(database_name: str, ignore_if_exists: bool = True) -> bool
Creates a new database.
Parameters:
- database_name (str) – Fully qualified or relative database name to create.
- ignore_if_exists (bool, default: True) – If True, return False when the database already exists. If False, raise an error when the database already exists. Defaults to True.
Raises:
- DatabaseAlreadyExistsError – If the database already exists and ignore_if_exists is False.
Returns:
- bool – True if the database was created successfully, False if the database already exists and ignore_if_exists is True.
Create a new database
# Create a new database named 'my_database'
session.catalog.create_database('my_database')
# Returns: True
Create an existing database with ignore_if_exists
# Try to create an existing database with ignore_if_exists=True
session.catalog.create_database('my_database', ignore_if_exists=True)
# Returns: False
Create an existing database without ignore_if_exists
# Try to create an existing database with ignore_if_exists=False
session.catalog.create_database('my_database', ignore_if_exists=False)
# Raises: DatabaseAlreadyExistsError
Source code in src/fenic/api/catalog.py
create_table
create_table(table_name: str, schema: Schema, ignore_if_exists: bool = True) -> bool
Creates a new table.
Parameters:
- table_name (str) – Fully qualified or relative table name to create.
- schema (Schema) – Schema of the table to create.
- ignore_if_exists (bool, default: True) – If True, return False when the table already exists. If False, raise an error when the table already exists. Defaults to True.
Returns:
- bool – True if the table was created successfully, False if the table already exists and ignore_if_exists is True.
Raises:
- TableAlreadyExistsError – If the table already exists and ignore_if_exists is False.
Create a new table
# Create a new table with an integer column
session.catalog.create_table('my_table', Schema([
ColumnField('id', IntegerType),
]))
# Returns: True
Create an existing table with ignore_if_exists
# Try to create an existing table with ignore_if_exists=True
session.catalog.create_table('my_table', Schema([
ColumnField('id', IntegerType),
]), ignore_if_exists=True)
# Returns: False
Create an existing table without ignore_if_exists
# Try to create an existing table with ignore_if_exists=False
session.catalog.create_table('my_table', Schema([
ColumnField('id', IntegerType),
]), ignore_if_exists=False)
# Raises: TableAlreadyExistsError
Source code in src/fenic/api/catalog.py
describe_table
describe_table(table_name: str) -> Schema
Returns the schema of the specified table.
Parameters:
- table_name (str) – Fully qualified or relative table name to describe.
Returns:
- Schema – A schema object describing the table's structure with field names and types.
Raises:
- TableNotFoundError – If the table doesn't exist.
Describe a table's schema
# For a table created with: CREATE TABLE t1 (id int)
session.catalog.describe_table('t1')
# Returns: Schema([
# ColumnField('id', IntegerType),
# ])
Source code in src/fenic/api/catalog.py
does_catalog_exist
does_catalog_exist(catalog_name: str) -> bool
Checks if a catalog with the specified name exists.
Parameters:
- catalog_name (str) – Name of the catalog to check.
Returns:
- bool – True if the catalog exists, False otherwise.
Check if a catalog exists
# Check if 'my_catalog' exists
session.catalog.does_catalog_exist('my_catalog')
# Returns: True
Source code in src/fenic/api/catalog.py
does_database_exist
does_database_exist(database_name: str) -> bool
Checks if a database with the specified name exists.
Parameters:
- database_name (str) – Fully qualified or relative database name to check.
Returns:
- bool – True if the database exists, False otherwise.
Check if a database exists
# Check if 'my_database' exists
session.catalog.does_database_exist('my_database')
# Returns: True
Source code in src/fenic/api/catalog.py
does_table_exist
does_table_exist(table_name: str) -> bool
Checks if a table with the specified name exists.
Parameters:
- table_name (str) – Fully qualified or relative table name to check.
Returns:
- bool – True if the table exists, False otherwise.
Check if a table exists
# Check if 'my_table' exists
session.catalog.does_table_exist('my_table')
# Returns: True
Source code in src/fenic/api/catalog.py
drop_catalog
drop_catalog(catalog_name: str, ignore_if_not_exists: bool = True) -> bool
Drops a catalog.
Parameters:
- catalog_name (str) – Name of the catalog to drop.
- ignore_if_not_exists (bool, default: True) – If True, silently return if the catalog doesn't exist. If False, raise an error if the catalog doesn't exist. Defaults to True.
Raises:
- CatalogNotFoundError – If the catalog does not exist and ignore_if_not_exists is False.
Returns:
- bool – True if the catalog was dropped successfully, False if the catalog didn't exist and ignore_if_not_exists is True.
Drop a non-existent catalog
# Try to drop a non-existent catalog
session.catalog.drop_catalog('my_catalog')
# Returns: False
Drop a non-existent catalog without ignore_if_not_exists
# Try to drop a non-existent catalog with ignore_if_not_exists=False
session.catalog.drop_catalog('my_catalog', ignore_if_not_exists=False)
# Raises: CatalogNotFoundError
Source code in src/fenic/api/catalog.py
drop_database
drop_database(database_name: str, cascade: bool = False, ignore_if_not_exists: bool = True) -> bool
Drops a database.
Parameters:
- database_name (str) – Fully qualified or relative database name to drop.
- cascade (bool, default: False) – If True, drop all tables in the database. Defaults to False.
- ignore_if_not_exists (bool, default: True) – If True, silently return if the database doesn't exist. If False, raise an error if the database doesn't exist. Defaults to True.
Raises:
- DatabaseNotFoundError – If the database does not exist and ignore_if_not_exists is False.
- CatalogError – If the current database is being dropped, or if the database is not empty and cascade is False.
Returns:
- bool – True if the database was dropped successfully, False if the database didn't exist and ignore_if_not_exists is True.
Drop a non-existent database
# Try to drop a non-existent database
session.catalog.drop_database('my_database')
# Returns: False
Drop a non-existent database without ignore_if_not_exists
# Try to drop a non-existent database with ignore_if_not_exists=False
session.catalog.drop_database('my_database', ignore_if_not_exists=False)
# Raises: DatabaseNotFoundError
Source code in src/fenic/api/catalog.py
drop_table
drop_table(table_name: str, ignore_if_not_exists: bool = True) -> bool
Drops the specified table.
By default this method will return False if the table doesn't exist.
Parameters:
- table_name (str) – Fully qualified or relative table name to drop.
- ignore_if_not_exists (bool, default: True) – If True, return False when the table doesn't exist. If False, raise an error when the table doesn't exist. Defaults to True.
Returns:
- bool – True if the table was dropped successfully, False if the table didn't exist and ignore_if_not_exists is True.
Raises:
- TableNotFoundError – If the table doesn't exist and ignore_if_not_exists is False.
Drop an existing table
# Drop an existing table 't1'
session.catalog.drop_table('t1')
# Returns: True
Drop a non-existent table with ignore_if_not_exists
# Try to drop a non-existent table with ignore_if_not_exists=True
session.catalog.drop_table('t2', ignore_if_not_exists=True)
# Returns: False
Drop a non-existent table without ignore_if_not_exists
# Try to drop a non-existent table with ignore_if_not_exists=False
session.catalog.drop_table('t2', ignore_if_not_exists=False)
# Raises: TableNotFoundError
Source code in src/fenic/api/catalog.py
get_current_catalog
get_current_catalog() -> str
Returns the name of the current catalog.
Returns:
- str – The name of the current catalog.
Get current catalog name
# Get the name of the current catalog
session.catalog.get_current_catalog()
# Returns: 'default'
Source code in src/fenic/api/catalog.py
get_current_database
get_current_database() -> str
Returns the name of the current database in the current catalog.
Returns:
- str – The name of the current database.
Get current database name
# Get the name of the current database
session.catalog.get_current_database()
# Returns: 'default'
Source code in src/fenic/api/catalog.py
list_catalogs
list_catalogs() -> List[str]
Returns a list of available catalogs.
Returns:
- List[str] – A list of catalog names available in the system. Returns an empty list if no catalogs are found.
List all catalogs
# Get all available catalogs
session.catalog.list_catalogs()
# Returns: ['default', 'my_catalog', 'other_catalog']
Source code in src/fenic/api/catalog.py
list_databases
list_databases() -> List[str]
Returns a list of databases in the current catalog.
Returns:
- List[str] – A list of database names in the current catalog. Returns an empty list if no databases are found.
List all databases
# Get all databases in the current catalog
session.catalog.list_databases()
# Returns: ['default', 'my_database', 'other_database']
Source code in src/fenic/api/catalog.py
list_tables
list_tables() -> List[str]
Returns a list of tables stored in the current database.
This method queries the current database to retrieve all available table names.
Returns:
- List[str] – A list of table names stored in the database. Returns an empty list if no tables are found.
List all tables
# Get all tables in the current database
session.catalog.list_tables()
# Returns: ['table1', 'table2', 'table3']
Source code in src/fenic/api/catalog.py
set_current_catalog
set_current_catalog(catalog_name: str) -> None
Sets the current catalog.
Parameters:
- catalog_name (str) – Name of the catalog to set as current.
Raises:
- ValueError – If the specified catalog doesn't exist.
Set current catalog
# Set 'my_catalog' as the current catalog
session.catalog.set_current_catalog('my_catalog')
Source code in src/fenic/api/catalog.py
set_current_database
set_current_database(database_name: str) -> None
Sets the current database.
Parameters:
- database_name (str) – Fully qualified or relative database name to set as current.
Raises:
- DatabaseNotFoundError – If the specified database doesn't exist.
Set current database
# Set 'my_database' as the current database
session.catalog.set_current_database('my_database')
Source code in src/fenic/api/catalog.py
ClassifyExample
Bases: BaseModel
A single semantic example for classification operations.
Classify examples demonstrate the classification of an input string into a specific category string, used in a semantic.classify operation.
ClassifyExampleCollection
ClassifyExampleCollection(examples: List[ExampleType] = None)
Bases: BaseExampleCollection[ClassifyExample]
Collection of examples for semantic classification operations.
Classification operations categorize input text into predefined classes. This collection manages examples that demonstrate the expected classification results for different inputs.
Examples in this collection have a single input string and an output string representing the classification result.
Methods:
- from_polars – Create collection from a Polars DataFrame. Must have an 'output' column and an 'input' column.
Source code in src/fenic/core/types/semantic_examples.py
from_polars
classmethod
from_polars(df: DataFrame) -> ClassifyExampleCollection
Create collection from a Polars DataFrame. Must have an 'output' column and an 'input' column.
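A minimal sketch of building a collection (the example rows are illustrative):
import polars as pl
examples_df = pl.DataFrame({
    "input": ["Great battery life", "Screen cracked after a week"],
    "output": ["positive", "negative"],
})
collection = ClassifyExampleCollection.from_polars(examples_df)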
Source code in src/fenic/core/types/semantic_examples.py
Column
A column expression in a DataFrame.
This class represents a column expression that can be used in DataFrame operations. It provides methods for accessing, transforming, and combining column data.
Create a column reference
# Reference a column by name using col() function
col("column_name")
Use column in operations
# Perform arithmetic operations
df.select(col("price") * col("quantity"))
Chain column operations
# Chain multiple operations
df.select(col("name").upper().contains("John"))
Methods:
- alias – Create an alias for this column.
- asc – Apply ascending order to this column during a dataframe sort or order_by.
- asc_nulls_first – Apply ascending order putting nulls first to this column during a dataframe sort or order_by.
- asc_nulls_last – Apply ascending order putting nulls last to this column during a dataframe sort or order_by.
- cast – Cast the column to a new data type.
- contains – Check if the column contains a substring.
- contains_any – Check if the column contains any of the specified substrings.
- desc – Apply descending order to this column during a dataframe sort or order_by.
- desc_nulls_first – Apply descending order putting nulls first to this column during a dataframe sort or order_by.
- desc_nulls_last – Apply descending order putting nulls last to this column during a dataframe sort or order_by.
- ends_with – Check if the column ends with a substring.
- get_item – Access an item in a struct or array column.
- ilike – Check if the column matches a SQL LIKE pattern (case-insensitive).
- is_in – Check if the column is in a list of values or a column expression.
- is_not_null – Check if the column contains non-NULL values.
- is_null – Check if the column contains NULL values.
- like – Check if the column matches a SQL LIKE pattern.
- otherwise – Evaluates a list of conditions and returns one of multiple possible result expressions.
- rlike – Check if the column matches a regular expression pattern.
- starts_with – Check if the column starts with a substring.
- when – Evaluates a list of conditions and returns one of multiple possible result expressions.
alias
alias(name: str) -> Column
Create an alias for this column.
This method assigns a new name to the column expression, which is useful for renaming columns or providing names for complex expressions.
Parameters:
- name (str) – The alias name to assign
Returns:
- Column – Column with the specified alias
Rename a column
# Rename a column to a new name
df.select(col("original_name").alias("new_name"))
Name a complex expression
# Give a name to a calculated column
df.select((col("price") * col("quantity")).alias("total_value"))
Source code in src/fenic/api/column.py
asc
asc() -> Column
Apply ascending order to this column during a dataframe sort or order_by.
This method creates an expression that provides a column and sort order to the sort function.
Returns:
- Column – A Column expression that provides a column and sort order to the sort function
Sort by age in ascending order
# Sort a dataframe by age in ascending order
df.sort(col("age").asc()).show()
Sort using column reference
# Sort using column reference with ascending order
df.sort(col("age").asc()).show()
Source code in src/fenic/api/column.py
asc_nulls_first
asc_nulls_first() -> Column
Apply ascending order putting nulls first to this column during a dataframe sort or order_by.
This method creates an expression that provides a column and sort order to the sort function.
Returns:
- Column – A Column expression that provides a column and sort order to the sort function
Sort by age in ascending order with nulls first
# Sort a dataframe by age in ascending order, with nulls appearing first
df.sort(col("age").asc_nulls_first()).show()
Sort using column reference
# Sort using column reference with ascending order and nulls first
df.sort(col("age").asc_nulls_first()).show()
Source code in src/fenic/api/column.py
asc_nulls_last
asc_nulls_last() -> Column
Apply ascending order putting nulls last to this column during a dataframe sort or order_by.
This method creates an expression that provides a column and sort order to the sort function.
Returns:
- Column – A Column expression that provides a column and sort order to the sort function
Sort by age in ascending order with nulls last
# Sort a dataframe by age in ascending order, with nulls appearing last
df.sort(col("age").asc_nulls_last()).show()
Sort using column reference
# Sort using column reference with ascending order and nulls last
df.sort(col("age").asc_nulls_last()).show()
Source code in src/fenic/api/column.py
cast
cast(data_type: DataType) -> Column
Cast the column to a new data type.
This method creates an expression that casts the column to a specified data type. The casting behavior depends on the source and target types:
Primitive type casting:
- Numeric types (IntegerType, FloatType, DoubleType) can be cast between each other
- Numeric types can be cast to/from StringType
- BooleanType can be cast to/from numeric types and StringType
- StringType cannot be directly cast to BooleanType (will raise TypeError)
Complex type casting:
- ArrayType can only be cast to another ArrayType (with castable element types)
- StructType can only be cast to another StructType (with matching/castable fields)
- Primitive types cannot be cast to/from complex types
Parameters:
- data_type (DataType) – The target DataType to cast the column to
Returns:
- Column – A Column representing the casted expression
Cast integer to string
# Convert an integer column to string type
df.select(col("int_col").cast(StringType))
Cast array of integers to array of strings
# Convert an array of integers to an array of strings
df.select(col("int_array").cast(ArrayType(element_type=StringType)))
Cast struct fields to different types
# Convert struct fields to different types
new_type = StructType([
StructField("id", StringType),
StructField("value", FloatType)
])
df.select(col("data_struct").cast(new_type))
Raises:
- TypeError – If the requested cast operation is not supported
Source code in src/fenic/api/column.py
contains
contains(other: Union[str, Column]) -> Column
Check if the column contains a substring.
This method creates a boolean expression that checks if each value in the column contains the specified substring. The substring can be either a literal string or a column expression.
Parameters:
- other (Union[str, Column]) – The substring to search for (can be a string or column expression)
Returns:
- Column – A boolean column indicating whether each value contains the substring
Find rows where name contains "john"
# Filter rows where the name column contains "john"
df.filter(col("name").contains("john"))
Find rows where text contains a dynamic pattern
# Filter rows where text contains a value from another column
df.filter(col("text").contains(col("pattern")))
Source code in src/fenic/api/column.py
contains_any
contains_any(others: List[str], case_insensitive: bool = True) -> Column
Check if the column contains any of the specified substrings.
This method creates a boolean expression that checks if each value in the column contains any of the specified substrings. The matching can be case-sensitive or case-insensitive.
Parameters:
- others (List[str]) – List of substrings to search for
- case_insensitive (bool, default: True) – Whether to perform case-insensitive matching (default: True)
Returns:
- Column – A boolean column indicating whether each value contains any substring
Find rows where name contains "john" or "jane" (case-insensitive)
# Filter rows where name contains either "john" or "jane"
df.filter(col("name").contains_any(["john", "jane"]))
Case-sensitive matching
# Filter rows with case-sensitive matching
df.filter(col("name").contains_any(["John", "Jane"], case_insensitive=False))
Source code in src/fenic/api/column.py
desc
desc() -> Column
Apply descending order to this column during a dataframe sort or order_by.
This method creates an expression that provides a column and sort order to the sort function.
Returns:
- Column – A Column expression that provides a column and sort order to the sort function
Sort by age in descending order
# Sort a dataframe by age in descending order
df.sort(col("age").desc()).show()
Sort using column reference
# Sort using column reference with descending order
df.sort(col("age").desc()).show()
Source code in src/fenic/api/column.py
desc_nulls_first
desc_nulls_first() -> Column
Apply descending order putting nulls first to this column during a dataframe sort or order_by.
This method creates an expression that provides a column and sort order to the sort function.
Returns:
- Column – A Column expression that provides a column and sort order to the sort function
Sort by age in descending order with nulls first
df.sort(col("age").desc_nulls_first()).show()
Sort using column reference
df.sort(col("age").desc_nulls_first()).show()
Source code in src/fenic/api/column.py
desc_nulls_last
desc_nulls_last() -> Column
Apply descending order putting nulls last to this column during a dataframe sort or order_by.
This method creates an expression that provides a column and sort order to the sort function.
Returns:
- Column – A Column expression that provides a column and sort order to the sort function
Sort by age in descending order with nulls last
# Sort a dataframe by age in descending order, with nulls appearing last
df.sort(col("age").desc_nulls_last()).show()
Sort using column reference
# Sort using column reference with descending order and nulls last
df.sort(col("age").desc_nulls_last()).show()
Source code in src/fenic/api/column.py
ends_with
ends_with(other: Union[str, Column]) -> Column
Check if the column ends with a substring.
This method creates a boolean expression that checks if each value in the column ends with the specified substring. The substring can be either a literal string or a column expression.
Parameters:
- other (Union[str, Column]) – The substring to check for at the end (can be a string or column expression)
Returns:
- Column – A boolean column indicating whether each value ends with the substring
Find rows where email ends with "@gmail.com"
df.filter(col("email").ends_with("@gmail.com"))
Find rows where text ends with a dynamic pattern
df.filter(col("text").ends_with(col("suffix")))
Raises:
- ValueError – If the substring ends with a regular expression anchor ($)
Source code in src/fenic/api/column.py
get_item
get_item(key: Union[str, int]) -> Column
Access an item in a struct or array column.
This method allows accessing elements in complex data types:
- For array columns, the key should be an integer index
- For struct columns, the key should be a field name
Parameters:
- key (Union[str, int]) – The index (for arrays) or field name (for structs) to access
Returns:
- Column – A Column representing the accessed item
Access an array element
# Get the first element from an array column
df.select(col("array_column").get_item(0))
Access a struct field
# Get a field from a struct column
df.select(col("struct_column").get_item("field_name"))
Source code in src/fenic/api/column.py
ilike
ilike(other: str) -> Column
Check if the column matches a SQL LIKE pattern (case-insensitive).
This method creates a boolean expression that checks if each value in the column matches the specified SQL LIKE pattern, ignoring case. The pattern must be a literal string and cannot be a column expression.
SQL LIKE pattern syntax:
- % matches any sequence of characters
- _ matches any single character
Parameters:
- other (str) – The SQL LIKE pattern to match against
Returns:
- Column – A boolean column indicating whether each value matches the pattern
Find rows where name starts with "j" and ends with "n" (case-insensitive)
# Filter rows where name matches the pattern "j%n" (case-insensitive)
df.filter(col("name").ilike("j%n"))
Find rows where code matches pattern (case-insensitive)
# Filter rows where code matches the pattern "a_b%" (case-insensitive)
df.filter(col("code").ilike("a_b%"))
Source code in src/fenic/api/column.py
is_in
is_in(other: Union[List[Any], ColumnOrName]) -> Column
Check if the column is in a list of values or a column expression.
Parameters:
- other (Union[List[Any], ColumnOrName]) – A list of values or a Column expression
Returns:
- Column – A Column expression representing whether each element of Column is in the list
Check if name is in a list of values
# Filter rows where name is in a list of values
df.filter(col("name").is_in(["Alice", "Bob"]))
Check if value is in another column
# Filter rows where name is in another column
df.filter(col("name").is_in(col("other_column")))
Source code in src/fenic/api/column.py
is_not_null
is_not_null() -> Column
Check if the column contains non-NULL values.
This method creates an expression that evaluates to TRUE when the column value is not NULL.
Returns:
- Column – A Column representing a boolean expression that is TRUE when this column is not NULL
Filter rows where a column is not NULL
df.filter(col("some_column").is_not_null())
Use in a complex condition
df.filter(col("col1").is_not_null() & (col("col2") <= 50))
Source code in src/fenic/api/column.py
is_null
is_null() -> Column
Check if the column contains NULL values.
This method creates an expression that evaluates to TRUE when the column value is NULL.
Returns:
- Column – A Column representing a boolean expression that is TRUE when this column is NULL
Filter rows where a column is NULL
# Filter rows where some_column is NULL
df.filter(col("some_column").is_null())
Use in a complex condition
# Filter rows where col1 is NULL or col2 is greater than 100
df.filter(col("col1").is_null() | (col("col2") > 100))
Source code in src/fenic/api/column.py
like
like(other: str) -> Column
Check if the column matches a SQL LIKE pattern.
This method creates a boolean expression that checks if each value in the column matches the specified SQL LIKE pattern. The pattern must be a literal string and cannot be a column expression.
SQL LIKE pattern syntax:
- % matches any sequence of characters
- _ matches any single character
Parameters:
- other (str) – The SQL LIKE pattern to match against
Returns:
- Column – A boolean column indicating whether each value matches the pattern
Find rows where name starts with "J" and ends with "n"
# Filter rows where name matches the pattern "J%n"
df.filter(col("name").like("J%n"))
Find rows where code matches specific pattern
# Filter rows where code matches the pattern "A_B%"
df.filter(col("code").like("A_B%"))
Source code in src/fenic/api/column.py
otherwise
otherwise(value: Column) -> Column
Evaluates a list of conditions and returns one of multiple possible result expressions.
If Column.otherwise() is not invoked, None is returned for unmatched conditions; otherwise() also supplies the result for rows with None inputs.
Parameters:
- value (Column) – A literal value or Column expression to return
Returns:
- Column – A Column expression representing whether each element of Column is not matched by any previous conditions
Use when/otherwise for conditional logic
# Create a DataFrame with age and name columns
df = session.create_dataframe({"age": [2, 5], "name": ["Alice", "Bob"]})
# Use when/otherwise to create a case result column
df.select(
col("name"),
when(col("age") > 3, 1).otherwise(0).alias("case_result")
).show()
# Output:
# +-----+-----------+
# | name|case_result|
# +-----+-----------+
# |Alice|          0|
# |  Bob|          1|
# +-----+-----------+
Source code in src/fenic/api/column.py
rlike
rlike(other: str) -> Column
Check if the column matches a regular expression pattern.
This method creates a boolean expression that checks if each value in the column matches the specified regular expression pattern. The pattern must be a literal string and cannot be a column expression.
Parameters:
- other (str) – The regular expression pattern to match against
Returns:
- Column – A boolean column indicating whether each value matches the pattern
Find rows where phone number matches pattern
# Filter rows where phone number matches a specific pattern
df.filter(col("phone").rlike(r"^\d{3}-\d{3}-\d{4}$"))
Find rows where text contains word boundaries
# Filter rows where text contains a word with boundaries
df.filter(col("text").rlike(r"\bhello\b"))
Source code in src/fenic/api/column.py
starts_with
starts_with(other: Union[str, Column]) -> Column
Check if the column starts with a substring.
This method creates a boolean expression that checks if each value in the column starts with the specified substring. The substring can be either a literal string or a column expression.
Parameters:
- other (Union[str, Column]) – The substring to check for at the start (can be a string or column expression)
Returns:
- Column – A boolean column indicating whether each value starts with the substring
Find rows where name starts with "Mr"
# Filter rows where name starts with "Mr"
df.filter(col("name").starts_with("Mr"))
Find rows where text starts with a dynamic pattern
# Filter rows where text starts with a value from another column
df.filter(col("text").starts_with(col("prefix")))
Raises:
- ValueError – If the substring starts with a regular expression anchor (^)
Source code in src/fenic/api/column.py
when
when(condition: Column, value: Column) -> Column
Evaluates a list of conditions and returns one of multiple possible result expressions.
If Column.otherwise() is not invoked, None is returned for unmatched conditions; otherwise() also supplies the result for rows with None inputs.
Parameters:
- condition (Column) – A boolean Column expression
- value (Column) – A literal value or Column expression to return if the condition is true
Returns:
- Column – A Column expression representing whether each element of Column matches the condition
Raises:
- TypeError – If the condition is not a boolean Column expression
Use when/otherwise for conditional logic
# Create a DataFrame with age and name columns
df = session.create_dataframe({"age": [2, 5], "name": ["Alice", "Bob"]})
# Use when/otherwise to create a case result column
df.select(
col("name"),
when(col("age") > 3, 1).otherwise(0).alias("case_result")
).show()
# Output:
# +-----+-----------+
# | name|case_result|
# +-----+-----------+
# |Alice|          0|
# |  Bob|          1|
# +-----+-----------+
Source code in src/fenic/api/column.py
ColumnField
Represents a typed column in a DataFrame schema.
A ColumnField defines the structure of a single column by specifying its name and data type. This is used as a building block for DataFrame schemas.
Attributes:
- name (str) – The name of the column.
- data_type (DataType) – The data type of the column, as a DataType instance.
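For example, the schema used in the catalog examples above is assembled from ColumnField instances (the second field here is illustrative):
schema = Schema([
    ColumnField('id', IntegerType),
    ColumnField('name', StringType),
])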
DataFrame
A data collection organized into named columns.
The DataFrame class represents a lazily evaluated computation on data. Operations on DataFrame build up a logical query plan that is only executed when an action like show(), to_polars(), to_pandas(), to_arrow(), to_pydict(), to_pylist(), or count() is called.
The DataFrame supports method chaining for building complex transformations.
Create and transform a DataFrame
# Create a DataFrame from a dictionary
df = session.create_dataframe({"id": [1, 2, 3], "value": ["a", "b", "c"]})
# Chain transformations
result = df.filter(col("id") > 1).select("id", "value")
# Show results
result.show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# |  2|    b|
# |  3|    c|
# +---+-----+
Methods:
- agg – Aggregate on the entire DataFrame without groups.
- cache – Alias for persist(). Mark DataFrame for caching after first computation.
- collect – Execute the DataFrame computation and return the result as a QueryResult.
- count – Count the number of rows in the DataFrame.
- drop – Remove one or more columns from this DataFrame.
- drop_duplicates – Return a DataFrame with duplicate rows removed.
- explain – Display the logical plan of the DataFrame.
- explode – Create a new row for each element in an array column.
- filter – Filters rows using the given condition.
- group_by – Groups the DataFrame using the specified columns.
- join – Joins this DataFrame with another DataFrame.
- limit – Limits the number of rows to the specified number.
- lineage – Create a Lineage object to trace data through transformations.
- order_by – Sort the DataFrame by the specified columns. Alias for sort().
- persist – Mark this DataFrame to be persisted after first computation.
- select – Projects a set of Column expressions or column names.
- show – Display the DataFrame content in a tabular form.
- sort – Sort the DataFrame by the specified columns.
- to_arrow – Execute the DataFrame computation and return an Apache Arrow Table.
- to_pandas – Execute the DataFrame computation and return a Pandas DataFrame.
- to_polars – Execute the DataFrame computation and return the result as a Polars DataFrame.
- to_pydict – Execute the DataFrame computation and return a dictionary of column arrays.
- to_pylist – Execute the DataFrame computation and return a list of row dictionaries.
- union – Return a new DataFrame containing the union of rows in this and another DataFrame.
- unnest – Unnest the specified struct columns into separate columns.
- where – Filters rows using the given condition (alias for filter()).
- with_column – Add a new column or replace an existing column.
- with_column_renamed – Rename a column. No-op if the column does not exist.
Attributes:
- columns (List[str]) – Get list of column names.
- schema (Schema) – Get the schema of this DataFrame.
- semantic (SemanticExtensions) – Interface for semantic operations on the DataFrame.
- write (DataFrameWriter) – Interface for saving the content of the DataFrame.
columns
property
columns: List[str]
Get list of column names.
Returns:
- List[str] – List of all column names in the DataFrame
Examples:
>>> df.columns
['name', 'age', 'city']
schema
property
schema: Schema
Get the schema of this DataFrame.
Returns:
- Schema – Schema containing field names and data types
Examples:
>>> df.schema
Schema([
ColumnField('name', StringType),
ColumnField('age', IntegerType)
])
semantic
property
semantic: SemanticExtensions
Interface for semantic operations on the DataFrame.
write
property
write: DataFrameWriter
Interface for saving the content of the DataFrame.
Returns:
- DataFrameWriter – Writer interface to write DataFrame.
agg
agg(*exprs: Union[Column, Dict[str, str]]) -> DataFrame
Aggregate on the entire DataFrame without groups.
This is equivalent to group_by() without any grouping columns.
Parameters:
- *exprs (Union[Column, Dict[str, str]], default: ()) – Aggregation expressions or dictionary of aggregations.
Returns:
- DataFrame – Aggregation results.
Multiple aggregations
# Create sample DataFrame
df = session.create_dataframe({
"salary": [80000, 70000, 90000, 75000, 85000],
"age": [25, 30, 35, 28, 32]
})
# Multiple aggregations
df.agg(
count().alias("total_rows"),
avg(col("salary")).alias("avg_salary")
).show()
# Output:
# +----------+----------+
# |total_rows|avg_salary|
# +----------+----------+
# |         5|   80000.0|
# +----------+----------+
Dictionary style
# Dictionary style
df.agg({"salary": "avg", "age": "max"}).show()
# Output:
# +-----------+--------+
# |avg(salary)|max(age)|
# +-----------+--------+
# |    80000.0|      35|
# +-----------+--------+
Source code in src/fenic/api/dataframe/dataframe.py
cache
cache() -> DataFrame
Alias for persist(). Mark DataFrame for caching after first computation.
Returns:
- DataFrame – Same DataFrame, but marked for caching
See Also
persist(): Full documentation of caching behavior
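Cache a DataFrame
# A minimal sketch: the first action computes and caches, later actions reuse the result
df_cached = df.cache()
df_cached.count()   # triggers computation; result is cached
df_cached.show()    # served from the cache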
Source code in src/fenic/api/dataframe/dataframe.py
collect
collect(data_type: DataLikeType = 'polars') -> QueryResult
Execute the DataFrame computation and return the result as a QueryResult.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a QueryResult, which contains both the result data and the query metrics.
Parameters:
- data_type (DataLikeType, default: 'polars') – The type of data to return
Returns:
- QueryResult – A QueryResult with materialized data and query metrics
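Collect results in a chosen format
# A minimal sketch: result.data holds the materialized output (see DataLike above);
# the metrics attribute name is an assumption based on the QueryResult description
result = df.filter(col("id") > 1).collect("pydict")
result.data      # e.g. {'id': [2, 3], 'value': ['b', 'c']}
result.metrics   # query execution metrics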
Source code in src/fenic/api/dataframe/dataframe.py
count
count() -> int
Count the number of rows in the DataFrame.
This is an action that triggers computation of the DataFrame. The output is an integer representing the number of rows.
Returns:
- int – The number of rows in the DataFrame
Source code in src/fenic/api/dataframe/dataframe.py
drop
drop(*col_names: str) -> DataFrame
Remove one or more columns from this DataFrame.
Parameters:
- *col_names (str, default: ()) – Names of columns to drop.
Returns:
- DataFrame – New DataFrame without specified columns.
Raises:
- ValueError – If any specified column doesn't exist in the DataFrame.
- ValueError – If dropping the columns would result in an empty DataFrame.
Drop single column
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Charlie"],
"age": [25, 30, 35]
})
# Drop single column
df.drop("age").show()
# Output:
# +---+-------+
# | id|   name|
# +---+-------+
# |  1|  Alice|
# |  2|    Bob|
# |  3|Charlie|
# +---+-------+
Drop multiple columns
# Drop multiple columns
df.drop("id", "age").show()
# Output:
# +-------+
# |   name|
# +-------+
# |  Alice|
# |    Bob|
# |Charlie|
# +-------+
Error when dropping non-existent column
# This will raise a ValueError
df.drop("non_existent_column")
# ValueError: Column 'non_existent_column' not found in DataFrame
Source code in src/fenic/api/dataframe/dataframe.py
drop_duplicates
drop_duplicates(subset: Optional[List[str]] = None) -> DataFrame
Return a DataFrame with duplicate rows removed.
Parameters:
- subset (Optional[List[str]], default: None) – Column names to consider when identifying duplicates. If not provided, all columns are considered.
Returns:
- DataFrame – A new DataFrame with duplicate rows removed.
Raises:
- ValueError – If a specified column is not present in the current DataFrame schema.
Remove duplicates considering specific columns
# Create sample DataFrame
df = session.create_dataframe({
"c1": [1, 2, 3, 1],
"c2": ["a", "a", "a", "a"],
"c3": ["b", "b", "b", "b"]
})
# Remove duplicates considering all columns
df.drop_duplicates(["c1", "c2", "c3"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# |  1|  a|  b|
# |  2|  a|  b|
# |  3|  a|  b|
# +---+---+---+
# Remove duplicates considering only c1
df.drop_duplicates(["c1"]).show()
# Output:
# +---+---+---+
# | c1| c2| c3|
# +---+---+---+
# |  1|  a|  b|
# |  2|  a|  b|
# |  3|  a|  b|
# +---+---+---+
Source code in src/fenic/api/dataframe/dataframe.py
explain
explain() -> None
Display the logical plan of the DataFrame.
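Explain a query plan
# A minimal sketch: prints the logical plan built up by the transformations
df.filter(col("age") > 25).explain()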
Source code in src/fenic/api/dataframe/dataframe.py
explode
explode(column: ColumnOrName) -> DataFrame
Create a new row for each element in an array column.
This operation is useful for flattening nested data structures. For each row in the input DataFrame that contains an array/list in the specified column, this method will:
1. Create N new rows, where N is the length of the array
2. Each new row will be identical to the original row, except the array column will contain just a single element from the original array
3. Rows with NULL values or empty arrays in the specified column are filtered out
Parameters:
-
column
(ColumnOrName
) –Name of array column to explode (as string) or Column expression.
Returns:
-
DataFrame
(DataFrame
) –New DataFrame with the array column exploded into multiple rows.
Raises:
-
TypeError
–If column argument is not a string or Column.
Explode array column
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3, 4],
"tags": [["red", "blue"], ["green"], [], None],
"name": ["Alice", "Bob", "Carol", "Dave"]
})
# Explode the tags column
df.explode("tags").show()
# Output:
# +---+-----+-----+
# | id| tags| name|
# +---+-----+-----+
# | 1| red|Alice|
# | 1| blue|Alice|
# | 2|green| Bob|
# +---+-----+-----+
Using column expression
# Explode using column expression
df.explode(col("tags")).show()
# Output:
# +---+-----+-----+
# | id| tags| name|
# +---+-----+-----+
# | 1| red|Alice|
# | 1| blue|Alice|
# | 2|green| Bob|
# +---+-----+-----+
Source code in src/fenic/api/dataframe/dataframe.py
filter
filter(condition: Column) -> DataFrame
Filters rows using the given condition.
Parameters:
-
condition
(Column
) –A Column expression that evaluates to a boolean
Returns:
-
DataFrame
(DataFrame
) –Filtered DataFrame
Filter with numeric comparison
# Create a DataFrame
df = session.create_dataframe({"age": [25, 30, 35], "name": ["Alice", "Bob", "Charlie"]})
# Filter with numeric comparison
df.filter(col("age") > 25).show()
# Output:
# +---+-------+
# |age| name|
# +---+-------+
# | 30| Bob|
# | 35|Charlie|
# +---+-------+
Filter with semantic predicate
# Filter with semantic predicate
df.filter((col("age") > 25) & semantic.predicate("This {feedback} mentions problems with the user interface or navigation")).show()
# Output:
# +---+-------+
# |age| name|
# +---+-------+
# | 30| Bob|
# | 35|Charlie|
# +---+-------+
Filter with multiple conditions
# Filter with multiple conditions
df.filter((col("age") > 25) & (col("age") <= 35)).show()
# Output:
# +---+-------+
# |age| name|
# +---+-------+
# | 30| Bob|
# | 35|Charlie|
# +---+-------+
Source code in src/fenic/api/dataframe/dataframe.py
group_by
group_by(*cols: ColumnOrName) -> GroupedData
Groups the DataFrame using the specified columns.
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Columns to group by. Can be column names as strings or Column expressions.
Returns:
-
GroupedData
(GroupedData
) –Object for performing aggregations on the grouped data.
Group by single column
# Create sample DataFrame
df = session.create_dataframe({
"department": ["IT", "HR", "IT", "HR", "IT"],
"salary": [80000, 70000, 90000, 75000, 85000]
})
# Group by single column
df.group_by(col("department")).count().show()
# Output:
# +----------+-----+
# |department|count|
# +----------+-----+
# | IT| 3|
# | HR| 2|
# +----------+-----+
Group by multiple columns
# Group by multiple columns
df.group_by(col("department"), col("location")).agg({"salary": "avg"}).show()
# Output:
# +----------+--------+-----------+
# |department|location|avg(salary)|
# +----------+--------+-----------+
# | IT| NYC| 85000.0|
# | HR| NYC| 72500.0|
# +----------+--------+-----------+
Group by expression
# Group by expression
df.group_by(col("age").cast("int").alias("age_group")).count().show()
# Output:
# +---------+-----+
# |age_group|count|
# +---------+-----+
# | 20| 2|
# | 30| 3|
# | 40| 1|
# +---------+-----+
Source code in src/fenic/api/dataframe/dataframe.py
join
join(other: DataFrame, on: Union[str, List[str]], *, how: JoinType = 'inner') -> DataFrame
join(other: DataFrame, *, left_on: Union[ColumnOrName, List[ColumnOrName]], right_on: Union[ColumnOrName, List[ColumnOrName]], how: JoinType = 'inner') -> DataFrame
join(other: DataFrame, on: Optional[Union[str, List[str]]] = None, *, left_on: Optional[Union[ColumnOrName, List[ColumnOrName]]] = None, right_on: Optional[Union[ColumnOrName, List[ColumnOrName]]] = None, how: JoinType = 'inner') -> DataFrame
Joins this DataFrame with another DataFrame.
The DataFrames must have no duplicate column names between them. This API only supports equi-joins. For non-equi-joins, use session.sql().
Parameters:
-
other
(DataFrame
) –DataFrame to join with.
-
on
(Optional[Union[str, List[str]]]
, default:None
) –Join condition(s). Can be: - A column name (str) - A list of column names (List[str]) - A Column expression (e.g., col('a')) - A list of Column expressions - None for cross joins
-
left_on
(Optional[Union[ColumnOrName, List[ColumnOrName]]]
, default:None
) –Column(s) from the left DataFrame to join on. Can be: - A column name (str) - A Column expression (e.g., col('a'), col('a') + 1) - A list of column names or expressions
-
right_on
(Optional[Union[ColumnOrName, List[ColumnOrName]]]
, default:None
) –Column(s) from the right DataFrame to join on. Can be: - A column name (str) - A Column expression (e.g., col('b'), upper(col('b'))) - A list of column names or expressions
-
how
(JoinType
, default:'inner'
) –Type of join to perform.
Returns:
-
DataFrame
–Joined DataFrame.
Raises:
-
ValidationError
–If cross join is used with an ON clause.
-
ValidationError
–If join condition is invalid.
-
ValidationError
–If both 'on' and 'left_on'/'right_on' parameters are provided.
-
ValidationError
–If only one of 'left_on' or 'right_on' is provided.
-
ValidationError
–If 'left_on' and 'right_on' have different lengths
Inner join on column name
# Create sample DataFrames
df1 = session.create_dataframe({
"id": [1, 2, 3],
"name": ["Alice", "Bob", "Charlie"]
})
df2 = session.create_dataframe({
"id": [1, 2, 4],
"age": [25, 30, 35]
})
# Join on single column
df1.join(df2, on=col("id")).show()
# Output:
# +---+-----+---+
# | id| name|age|
# +---+-----+---+
# | 1|Alice| 25|
# | 2| Bob| 30|
# +---+-----+---+
Join with expression
# Join with Column expressions
df1.join(
df2,
left_on=col("id"),
right_on=col("id"),
).show()
# Output:
# +---+-----+---+
# | id| name|age|
# +---+-----+---+
# | 1|Alice| 25|
# | 2| Bob| 30|
# +---+-----+---+
Cross join
# Cross join (cartesian product)
df1.join(df2, how="cross").show()
# Output:
# +---+-------+---+---+
# | id|   name| id|age|
# +---+-------+---+---+
# |  1|  Alice|  1| 25|
# |  1|  Alice|  2| 30|
# |  1|  Alice|  4| 35|
# |  2|    Bob|  1| 25|
# |  2|    Bob|  2| 30|
# |  2|    Bob|  4| 35|
# |  3|Charlie|  1| 25|
# |  3|Charlie|  2| 30|
# |  3|Charlie|  4| 35|
# +---+-------+---+---+
Source code in src/fenic/api/dataframe/dataframe.py
limit
limit(n: int) -> DataFrame
Limits the number of rows to the specified number.
Parameters:
-
n
(int
) –Maximum number of rows to return.
Returns:
-
DataFrame
(DataFrame
) –DataFrame with at most n rows.
Raises:
-
TypeError
–If n is not an integer.
Limit rows
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2, 3, 4, 5],
"name": ["Alice", "Bob", "Charlie", "Dave", "Eve"]
})
# Get first 3 rows
df.limit(3).show()
# Output:
# +---+-------+
# | id| name|
# +---+-------+
# | 1| Alice|
# | 2| Bob|
# | 3|Charlie|
# +---+-------+
Limit with other operations
# Limit after filtering
df.filter(col("id") > 2).limit(2).show()
# Output:
# +---+-------+
# | id| name|
# +---+-------+
# | 3|Charlie|
# | 4| Dave|
# +---+-------+
Source code in src/fenic/api/dataframe/dataframe.py
lineage
lineage() -> Lineage
Create a Lineage object to trace data through transformations.
The Lineage interface allows you to trace how specific rows are transformed through your DataFrame operations, both forwards and backwards through the computation graph.
Returns:
-
Lineage
(Lineage
) –Interface for querying data lineage
Example
# Create lineage query
lineage = df.lineage()
# Trace specific rows backwards through transformations
source_rows = lineage.backwards(["result_uuid1", "result_uuid2"])
# Or trace forwards to see outputs
result_rows = lineage.forwards(["source_uuid1"])
See Also
LineageQuery: Full documentation of lineage querying capabilities
Source code in src/fenic/api/dataframe/dataframe.py
order_by
order_by(cols: Union[ColumnOrName, List[ColumnOrName], None] = None, ascending: Optional[Union[bool, List[bool]]] = None) -> 'DataFrame'
Sort the DataFrame by the specified columns. Alias for sort().
Returns:
-
DataFrame
('DataFrame'
) –Sorted DataFrame.
See Also
sort(): Full documentation of sorting behavior and parameters.
Source code in src/fenic/api/dataframe/dataframe.py
persist
persist() -> DataFrame
Mark this DataFrame to be persisted after first computation.
The persisted DataFrame will be cached after its first computation, avoiding recomputation in subsequent operations. This is useful for DataFrames that are reused multiple times in your workflow.
Returns:
-
DataFrame
(DataFrame
) –Same DataFrame, but marked for persistence
Example
# Cache intermediate results for reuse
filtered_df = (df
.filter(col("age") > 25)
.persist() # Cache these results
)
# Both operations will use cached results
result1 = filtered_df.group_by("department").count()
result2 = filtered_df.select("name", "salary")
Source code in src/fenic/api/dataframe/dataframe.py
select
select(*cols: ColumnOrName) -> DataFrame
Projects a set of Column expressions or column names.
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Column expressions to select. Can be: - String column names (e.g., "id", "name") - Column objects (e.g., col("id"), col("age") + 1)
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame with selected columns
Select by column names
# Create a DataFrame
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
# Select by column names
df.select(col("name"), col("age")).show()
# Output:
# +-----+---+
# | name|age|
# +-----+---+
# |Alice| 25|
# | Bob| 30|
# +-----+---+
Select with expressions
# Select with expressions
df.select(col("name"), col("age") + 1).show()
# Output:
# +-----+-------+
# | name|age + 1|
# +-----+-------+
# |Alice| 26|
# | Bob| 31|
# +-----+-------+
Mix strings and expressions
# Mix strings and expressions
df.select(col("name"), col("age") * 2).show()
# Output:
# +-----+-------+
# | name|age * 2|
# +-----+-------+
# |Alice| 50|
# | Bob| 60|
# +-----+-------+
Source code in src/fenic/api/dataframe/dataframe.py
show
show(n: int = 10, explain_analyze: bool = False) -> None
Display the DataFrame content in a tabular form.
This is an action that triggers computation of the DataFrame. The output is printed to stdout in a formatted table.
Parameters:
-
n
(int
, default:10
) –Number of rows to display
-
explain_analyze
(bool
, default:False
) –Whether to print the explain analyze plan
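For illustration, a short sketch reusing the sample DataFrame pattern from the other examples on this page:
# Display only the first 2 rows
df = session.create_dataframe({"id": [1, 2, 3], "name": ["Alice", "Bob", "Charlie"]})
df.show(2)
# Also print the analyzed plan along with the output
df.show(explain_analyze=True)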
Source code in src/fenic/api/dataframe/dataframe.py
sort
sort(cols: Union[ColumnOrName, List[ColumnOrName], None] = None, ascending: Optional[Union[bool, List[bool]]] = None) -> DataFrame
Sort the DataFrame by the specified columns.
Parameters:
-
cols
(Union[ColumnOrName, List[ColumnOrName], None]
, default:None
) –Columns to sort by. This can be: - A single column name (str) - A Column expression (e.g., col("name")) - A list of column names or Column expressions - Column expressions may include sorting directives such as asc("col"), desc("col"), asc_nulls_last("col"), etc. - If no columns are provided, the operation is a no-op.
-
ascending
(Optional[Union[bool, List[bool]]]
, default:None
) –A boolean or list of booleans indicating sort order. - If True, sorts in ascending order; if False, descending. - If a list is provided, its length must match the number of columns. - Cannot be used if any of the columns use asc()/desc() expressions. - If not specified and no sort expressions are used, columns will be sorted in ascending order by default.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame sorted by the specified columns.
Raises:
-
ValueError
–If ascending is provided and its length does not match cols, or if both ascending and column expressions like asc()/desc() are used.
-
TypeError
–If cols is not a column name, Column, or list of column names/Columns, or if ascending is not a boolean or list of booleans.
Sort in ascending order
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
# Sort by age in ascending order
df.sort(asc(col("age"))).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 2|Alice|
# | 5| Bob|
# +---+-----+
Sort in descending order
# Sort by age in descending order
df.sort(col("age").desc()).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
Sort with boolean ascending parameter
# Sort by age in descending order using boolean
df.sort(col("age"), ascending=False).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# +---+-----+
Multiple columns with different sort orders
# Create sample DataFrame
df = session.create_dataframe([(2, "Alice"), (2, "Bob"), (5, "Bob")], schema=["age", "name"])
# Sort by age descending, then name ascending
df.sort(desc(col("age")), col("name")).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2|Alice|
# | 2| Bob|
# +---+-----+
Multiple columns with list of ascending strategies
# Sort both columns in descending order
df.sort([col("age"), col("name")], ascending=[False, False]).show()
# Output:
# +---+-----+
# |age| name|
# +---+-----+
# | 5| Bob|
# | 2| Bob|
# | 2|Alice|
# +---+-----+
Source code in src/fenic/api/dataframe/dataframe.py
to_arrow
to_arrow() -> pa.Table
Execute the DataFrame computation and return an Apache Arrow Table.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into an Apache Arrow Table with columnar memory layout optimized for analytics and zero-copy data exchange.
Returns:
-
Table
–pa.Table: An Apache Arrow Table containing the computed results
Source code in src/fenic/api/dataframe/dataframe.py
to_pandas
to_pandas() -> pd.DataFrame
Execute the DataFrame computation and return a Pandas DataFrame.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Pandas DataFrame.
Returns:
-
DataFrame
–pd.DataFrame: A Pandas DataFrame containing the computed results.
Source code in src/fenic/api/dataframe/dataframe.py
to_polars
to_polars() -> pl.DataFrame
Execute the DataFrame computation and return the result as a Polars DataFrame.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Polars DataFrame.
Returns:
-
DataFrame
–pl.DataFrame: A Polars DataFrame with materialized results
Source code in src/fenic/api/dataframe/dataframe.py
to_pydict
to_pydict() -> Dict[str, List[Any]]
Execute the DataFrame computation and return a dictionary of column arrays.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Python dictionary where each column becomes a list of values.
Returns:
-
Dict[str, List[Any]]
–Dict[str, List[Any]]: A dictionary containing the computed results with: - Keys: Column names as strings - Values: Lists containing all values for each column
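For illustration, a minimal sketch of the column-oriented shape:
df = session.create_dataframe({"id": [1, 2], "name": ["Alice", "Bob"]})
df.to_pydict()
# {"id": [1, 2], "name": ["Alice", "Bob"]}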
Source code in src/fenic/api/dataframe/dataframe.py
to_pylist
to_pylist() -> List[Dict[str, Any]]
Execute the DataFrame computation and return a list of row dictionaries.
This is an action that triggers computation of the DataFrame query plan. All transformations and operations are executed, and the results are materialized into a Python list where each element is a dictionary representing one row with column names as keys.
Returns:
-
List[Dict[str, Any]]
–List[Dict[str, Any]]: A list containing the computed results with: - Each element: A dictionary representing one row - Dictionary keys: Column names as strings - Dictionary values: Cell values in Python native types - List length equals number of rows in the result
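For illustration, a minimal sketch of the row-oriented shape (contrast with to_pydict() above):
df = session.create_dataframe({"id": [1, 2], "name": ["Alice", "Bob"]})
df.to_pylist()
# [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]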
Source code in src/fenic/api/dataframe/dataframe.py
union
union(other: DataFrame) -> DataFrame
Return a new DataFrame containing the union of rows in this and another DataFrame.
This is equivalent to UNION ALL in SQL. To remove duplicates, use drop_duplicates() after union().
Parameters:
-
other
(DataFrame
) –Another DataFrame with the same schema.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame containing rows from both DataFrames.
Raises:
-
ValueError
–If the DataFrames have different schemas.
-
TypeError
–If other is not a DataFrame.
Union two DataFrames
# Create two DataFrames
df1 = session.create_dataframe({
"id": [1, 2],
"value": ["a", "b"]
})
df2 = session.create_dataframe({
"id": [3, 4],
"value": ["c", "d"]
})
# Union the DataFrames
df1.union(df2).show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# | 1| a|
# | 2| b|
# | 3| c|
# | 4| d|
# +---+-----+
Union with duplicates
# Create DataFrames with overlapping data
df1 = session.create_dataframe({
"id": [1, 2],
"value": ["a", "b"]
})
df2 = session.create_dataframe({
"id": [2, 3],
"value": ["b", "c"]
})
# Union with duplicates
df1.union(df2).show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# | 1| a|
# | 2| b|
# | 2| b|
# | 3| c|
# +---+-----+
# Remove duplicates after union
df1.union(df2).drop_duplicates().show()
# Output:
# +---+-----+
# | id|value|
# +---+-----+
# | 1| a|
# | 2| b|
# | 3| c|
# +---+-----+
Source code in src/fenic/api/dataframe/dataframe.py
unnest
unnest(*col_names: str) -> DataFrame
Unnest the specified struct columns into separate columns.
This operation flattens nested struct data by expanding each field of a struct into its own top-level column.
For each specified column containing a struct:
1. Each field in the struct becomes a separate column.
2. New columns are named after the corresponding struct fields.
3. The new columns are inserted into the DataFrame in place of the original struct column.
4. The overall column order is preserved.
Parameters:
-
*col_names
(str
, default:()
) –One or more struct columns to unnest. Each can be a string (column name) or a Column expression.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame with the specified struct columns expanded.
Raises:
-
TypeError
–If any argument is not a string or Column.
-
ValueError
–If a specified column does not contain struct data.
Unnest struct column
# Create sample DataFrame
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"name": ["Alice", "Bob"]
})
# Unnest the tags column
df.unnest(col("tags")).show()
# Output:
# +---+---+----+-----+
# | id| red|blue| name|
# +---+---+----+-----+
# | 1| 1| 2|Alice|
# | 2| 3|null| Bob|
# +---+---+----+-----+
Unnest multiple struct columns
# Create sample DataFrame with multiple struct columns
df = session.create_dataframe({
"id": [1, 2],
"tags": [{"red": 1, "blue": 2}, {"red": 3}],
"info": [{"age": 25, "city": "NY"}, {"age": 30, "city": "LA"}],
"name": ["Alice", "Bob"]
})
# Unnest multiple struct columns
df.unnest(col("tags"), col("info")).show()
# Output:
# +---+---+----+---+----+-----+
# | id| red|blue|age|city| name|
# +---+---+----+---+----+-----+
# | 1| 1| 2| 25| NY|Alice|
# | 2| 3|null| 30| LA| Bob|
# +---+---+----+---+----+-----+
Source code in src/fenic/api/dataframe/dataframe.py
where
where(condition: Column) -> DataFrame
Filters rows using the given condition (alias for filter()).
Parameters:
-
condition
(Column
) –A Column expression that evaluates to a boolean
Returns:
-
DataFrame
(DataFrame
) –Filtered DataFrame
See Also
filter(): Full documentation of filtering behavior
Source code in src/fenic/api/dataframe/dataframe.py
with_column
with_column(col_name: str, col: Union[Any, Column]) -> DataFrame
Add a new column or replace an existing column.
Parameters:
-
col_name
(str
) –Name of the new column
-
col
(Union[Any, Column]
) –Column expression or value to assign to the column. If not a Column, it will be treated as a literal value.
Returns:
-
DataFrame
(DataFrame
) –New DataFrame with added/replaced column
Add literal column
# Create a DataFrame
df = session.create_dataframe({"name": ["Alice", "Bob"], "age": [25, 30]})
# Add literal column
df.with_column("constant", lit(1)).show()
# Output:
# +-----+---+--------+
# | name|age|constant|
# +-----+---+--------+
# |Alice| 25| 1|
# | Bob| 30| 1|
# +-----+---+--------+
Add computed column
# Add computed column
df.with_column("double_age", col("age") * 2).show()
# Output:
# +-----+---+----------+
# | name|age|double_age|
# +-----+---+----------+
# |Alice| 25| 50|
# | Bob| 30| 60|
# +-----+---+----------+
Replace existing column
# Replace existing column
df.with_column("age", col("age") + 1).show()
# Output:
# +-----+---+
# | name|age|
# +-----+---+
# |Alice| 26|
# | Bob| 31|
# +-----+---+
Add column with complex expression
# Add column with complex expression
df.with_column(
"age_category",
when(col("age") < 30, "young")
.when(col("age") < 50, "middle")
.otherwise("senior")
).show()
# Output:
# +-----+---+------------+
# | name|age|age_category|
# +-----+---+------------+
# |Alice| 25| young|
# | Bob| 30| middle|
# +-----+---+------------+
Source code in src/fenic/api/dataframe/dataframe.py
with_column_renamed
with_column_renamed(col_name: str, new_col_name: str) -> DataFrame
Rename a column. No-op if the column does not exist.
Parameters:
-
col_name
(str
) –Name of the column to rename.
-
new_col_name
(str
) –New name for the column.
Returns:
-
DataFrame
(DataFrame
) –New DataFrame with the column renamed.
Rename a column
# Create sample DataFrame
df = session.create_dataframe({
"age": [25, 30, 35],
"name": ["Alice", "Bob", "Charlie"]
})
# Rename a column
df.with_column_renamed("age", "age_in_years").show()
# Output:
# +------------+-------+
# |age_in_years|   name|
# +------------+-------+
# |          25|  Alice|
# |          30|    Bob|
# |          35|Charlie|
# +------------+-------+
Rename multiple columns
# Rename multiple columns
df = (df
    .with_column_renamed("age", "age_in_years")
    .with_column_renamed("name", "full_name")
)
df.show()
# Output:
# +------------+---------+
# |age_in_years|full_name|
# +------------+---------+
# |          25|    Alice|
# |          30|      Bob|
# |          35|  Charlie|
# +------------+---------+
Source code in src/fenic/api/dataframe/dataframe.py
DataFrameReader
DataFrameReader(session_state: BaseSessionState)
Interface used to load a DataFrame from external storage systems.
Similar to PySpark's DataFrameReader.
Creates a DataFrameReader.
Parameters:
-
session_state
(BaseSessionState
) –The session state to use for reading
Methods:
-
csv
–Load a DataFrame from one or more CSV files.
-
parquet
–Load a DataFrame from one or more Parquet files.
Source code in src/fenic/api/io/reader.py
csv
csv(paths: Union[str, Path, list[Union[str, Path]]], schema: Optional[Schema] = None, merge_schemas: bool = False) -> DataFrame
Load a DataFrame from one or more CSV files.
Parameters:
-
paths
(Union[str, Path, list[Union[str, Path]]]
) –A single file path, a glob pattern (e.g., "data/*.csv"), or a list of paths.
-
schema
(Optional[Schema]
, default:None
) –(optional) A complete schema definition of column names and their types. Only primitive types are supported. For example: Schema([ColumnField(name="id", data_type=IntegerType), ColumnField(name="name", data_type=StringType)]). If provided, all files must match this schema exactly: all column names must be present, and values must be convertible to the specified types. Partial schemas are not allowed.
-
merge_schemas
(bool
, default:False
) –Whether to merge schemas across all files. - If True: Column names are unified across files. Missing columns are filled with nulls. Column types are inferred and widened as needed. - If False (default): Only accepts columns from the first file. Column types from the first file are inferred and applied across all files. If subsequent files do not have the same column name and order as the first file, an error is raised. - The "first file" is defined as: - The first file in lexicographic order (for glob patterns), or - The first file in the provided list (for lists of paths).
Notes
- The first row in each file is assumed to be a header row.
- Delimiters (e.g., comma, tab) are automatically inferred.
- You may specify either schema or merge_schemas=True, but not both.
- Any date/datetime columns are cast to strings during ingestion.
Raises:
-
ValidationError
–If both schema and merge_schemas=True are provided.
-
ValidationError
–If any path does not end with .csv.
-
PlanError
–If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Read a single CSV file
df = session.read.csv("file.csv")
Read multiple CSV files with schema merging
df = session.read.csv("data/*.csv", merge_schemas=True)
Read CSV files with explicit schema
df = session.read.csv(
["a.csv", "b.csv"],
schema=Schema([
ColumnField(name="id", data_type=IntegerType),
ColumnField(name="value", data_type=FloatType)
])
)
Source code in src/fenic/api/io/reader.py
parquet
parquet(paths: Union[str, Path, list[Union[str, Path]]], merge_schemas: bool = False) -> DataFrame
Load a DataFrame from one or more Parquet files.
Parameters:
-
paths
(Union[str, Path, list[Union[str, Path]]]
) –A single file path, a glob pattern (e.g., "data/*.parquet"), or a list of paths.
-
merge_schemas
(bool
, default:False
) –If True, infers and merges schemas across all files. Missing columns are filled with nulls, and differing types are widened to a common supertype.
Behavior
- If merge_schemas=False (default), all files must match the schema of the first file exactly. Subsequent files must contain all columns from the first file with compatible data types. If any column is missing or has incompatible types, an error is raised.
- If merge_schemas=True, column names are unified across all files, and data types are automatically widened to accommodate all values.
- The "first file" is defined as:
- The first file in lexicographic order (for glob patterns), or
- The first file in the provided list (for lists of paths).
Notes
- Date and datetime columns are cast to strings during ingestion.
Raises:
-
ValidationError
–If any file does not have a .parquet extension.
-
PlanError
–If schemas cannot be merged or if there's a schema mismatch when merge_schemas=False.
Read a single Parquet file
df = session.read.parquet("file.parquet")
Read multiple Parquet files
df = session.read.parquet("data/*.parquet")
Read Parquet files with schema merging
df = session.read.parquet(["a.parquet", "b.parquet"], merge_schemas=True)
Source code in src/fenic/api/io/reader.py
DataFrameWriter
DataFrameWriter(dataframe: DataFrame)
Interface used to write a DataFrame to external storage systems.
Similar to PySpark's DataFrameWriter.
Initialize a DataFrameWriter.
Parameters:
-
dataframe
(DataFrame
) –The DataFrame to write.
Methods:
-
csv
–Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
-
parquet
–Saves the content of the DataFrame as a single Parquet file.
-
save_as_table
–Saves the content of the DataFrame as the specified table.
Source code in src/fenic/api/io/writer.py
csv
csv(file_path: Union[str, Path], mode: Literal['error', 'overwrite', 'ignore'] = 'overwrite') -> QueryMetrics
Saves the content of the DataFrame as a single CSV file with comma as the delimiter and headers in the first row.
Parameters:
-
file_path
(Union[str, Path]
) –Path to save the CSV file to
-
mode
(Literal['error', 'overwrite', 'ignore']
, default:'overwrite'
) –Write mode. Default is "overwrite". - error: Raises an error if file exists - overwrite: Overwrites the file if it exists - ignore: Silently ignores operation if file exists
Returns:
-
QueryMetrics
(QueryMetrics
) –The query metrics
Save with overwrite mode (default)
df.write.csv("output.csv") # Overwrites if exists
Save with error mode
df.write.csv("output.csv", mode="error") # Raises error if exists
Save with ignore mode
df.write.csv("output.csv", mode="ignore") # Skips if exists
Source code in src/fenic/api/io/writer.py
parquet
parquet(file_path: Union[str, Path], mode: Literal['error', 'overwrite', 'ignore'] = 'overwrite') -> QueryMetrics
Saves the content of the DataFrame as a single Parquet file.
Parameters:
-
file_path
(Union[str, Path]
) –Path to save the Parquet file to
-
mode
(Literal['error', 'overwrite', 'ignore']
, default:'overwrite'
) –Write mode. Default is "overwrite". - error: Raises an error if file exists - overwrite: Overwrites the file if it exists - ignore: Silently ignores operation if file exists
Returns:
-
QueryMetrics
(QueryMetrics
) –The query metrics
Save with overwrite mode (default)
df.write.parquet("output.parquet") # Overwrites if exists
Save with error mode
df.write.parquet("output.parquet", mode="error") # Raises error if exists
Save with ignore mode
df.write.parquet("output.parquet", mode="ignore") # Skips if exists
Source code in src/fenic/api/io/writer.py
save_as_table
save_as_table(table_name: str, mode: Literal['error', 'append', 'overwrite', 'ignore'] = 'error') -> QueryMetrics
Saves the content of the DataFrame as the specified table.
Parameters:
-
table_name
(str
) –Name of the table to save to
-
mode
(Literal['error', 'append', 'overwrite', 'ignore']
, default:'error'
) –Write mode. Default is "error". - error: Raises an error if table exists - append: Appends data to table if it exists - overwrite: Overwrites existing table - ignore: Silently ignores operation if table exists
Returns:
-
QueryMetrics
(QueryMetrics
) –The query metrics
Save with error mode (default)
df.write.save_as_table("my_table") # Raises error if table exists
Save with append mode
df.write.save_as_table("my_table", mode="append") # Adds to existing table
Save with overwrite mode
df.write.save_as_table("my_table", mode="overwrite") # Replaces existing table
Source code in src/fenic/api/io/writer.py
DataType
Bases: ABC
Base class for all data types.
You won't instantiate this class directly. Instead, use one of the concrete types like StringType, ArrayType, or StructType.
Used for casting, type validation, and schema inference in the DataFrame API.
DocumentPathType
Bases: _StringBackedType
Represents a string containing a document's local (file system) or remote (URL) path.
EmbeddingType
Bases: DataType
A type representing a fixed-length embedding vector.
Attributes:
-
dimensions
(int
) –The number of dimensions in the embedding vector.
-
embedding_model
(str
) –Name of the model used to generate the embedding.
Create an embedding type for text-embedding-3-small
EmbeddingType(384, embedding_model="text-embedding-3-small")
ExtractSchema
Represents a structured extraction schema.
An extract schema contains a collection of named fields with descriptions that define what information should be extracted into each field.
Methods:
-
field_names
–Get a list of all field names in the schema.
field_names
field_names() -> List[str]
Get a list of all field names in the schema.
Returns:
-
List[str]
–A list of strings containing the names of all fields in the schema.
Source code in src/fenic/core/types/extract_schema.py
ExtractSchemaField
ExtractSchemaField(name: str, data_type: Union[DataType, ExtractSchemaList, ExtractSchema], description: str)
Represents a field within a structured extraction schema.
An extract schema field has a name, a data type, and a required description that explains what information should be extracted into this field.
Initialize an ExtractSchemaField.
Parameters:
-
name
(str
) –The name of the field.
-
data_type
(Union[DataType, ExtractSchemaList, ExtractSchema]
) –The data type of the field. Must be either a primitive DataType, ExtractSchemaList, or ExtractSchema.
-
description
(str
) –A description of what information should be extracted into this field.
Raises:
-
ValueError
–If data_type is a non-primitive DataType.
Source code in src/fenic/core/types/extract_schema.py
ExtractSchemaList
ExtractSchemaList(element_type: Union[DataType, ExtractSchema])
Represents a list data type for structured extraction schema definitions.
A schema list contains elements of a specific data type and is used for defining array-like structures in structured extraction schemas.
Initialize an ExtractSchemaList.
Parameters:
-
element_type
(Union[DataType, ExtractSchema]
) –The data type of elements in the list. Must be either a primitive DataType or another ExtractSchema.
Raises:
-
ValueError
–If element_type is a non-primitive DataType.
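For illustration, a sketch combining the three extraction classes. It assumes ExtractSchema accepts a list of fields, which is not shown in this reference, and the field names are hypothetical:
# Describe what to extract from each document
schema = ExtractSchema([
    ExtractSchemaField(name="title", data_type=StringType, description="The document title"),
    ExtractSchemaField(
        name="authors",
        data_type=ExtractSchemaList(element_type=StringType),
        description="All author names mentioned in the document",
    ),
])
schema.field_names()
# ["title", "authors"]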
Source code in src/fenic/core/types/extract_schema.py
GoogleGLAModelConfig
Bases: BaseModel
Configuration for Google GenerativeLanguage (GLA) models.
This class defines the configuration settings for models available in Google Developer AI Studio, including model selection and rate limiting parameters. These models are accessible using a GEMINI_API_KEY environment variable.
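For illustration only, a hypothetical sketch assuming this config exposes the same model_name/rpm/tpm fields as OpenAIModelConfig below; the model name is illustrative:
config = GoogleGLAModelConfig(model_name="gemini-2.0-flash", rpm=100, tpm=1000)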
GoogleVertexModelConfig
Bases: BaseModel
Configuration for Google Vertex models.
This class defines the configuration settings for models available in Google Vertex AI, including model selection and rate limiting parameters. To use these models, you must have a Google Cloud service account, or use the gcloud CLI tool to authenticate your local environment.
GroupedData
GroupedData(df: DataFrame, by: Optional[List[ColumnOrName]] = None)
Bases: BaseGroupedData
Methods for aggregations on a grouped DataFrame.
Initialize grouped data.
Parameters:
-
df
(DataFrame
) –The DataFrame to group.
-
by
(Optional[List[ColumnOrName]]
, default:None
) –Optional list of columns to group by.
Methods:
-
agg
–Compute aggregations on grouped data and return the result as a DataFrame.
Source code in src/fenic/api/dataframe/grouped_data.py
agg
agg(*exprs: Union[Column, Dict[str, str]]) -> DataFrame
Compute aggregations on grouped data and return the result as a DataFrame.
This method applies aggregate functions to the grouped data.
Parameters:
-
*exprs
(Union[Column, Dict[str, str]]
, default:()
) –Aggregation expressions. Can be: - Column expressions with aggregate functions (e.g., count("*"), sum("amount")) - A dictionary mapping column names to aggregate function names (e.g., {"amount": "sum", "age": "avg"})
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame with one row per group and columns for group keys and aggregated values
Raises:
-
ValueError
–If arguments are not Column expressions or a dictionary
-
ValueError
–If dictionary values are not valid aggregate function names
Count employees by department
# Group by department and count employees
df.group_by("department").agg(count("*").alias("employee_count"))
Multiple aggregations
# Multiple aggregations
df.group_by("department").agg(
count("*").alias("employee_count"),
avg("salary").alias("avg_salary"),
max("age").alias("max_age")
)
Dictionary style aggregations
# Dictionary style for simple aggregations
df.group_by("department", "location").agg({"salary": "avg", "age": "max"})
Source code in src/fenic/api/dataframe/grouped_data.py
JoinExample
Bases: BaseModel
A single semantic example for semantic join operations.
Join examples demonstrate the evaluation of two input strings across different datasets against a specific condition, used in a semantic.join operation.
JoinExampleCollection
JoinExampleCollection(examples: List[ExampleType] = None)
Bases: BaseExampleCollection[JoinExample]
Collection of examples for semantic join operations.
Methods:
-
from_polars
–Create collection from a Polars DataFrame. Must have 'left', 'right', and 'output' columns.
Source code in src/fenic/core/types/semantic_examples.py
from_polars
classmethod
from_polars(df: DataFrame) -> JoinExampleCollection
Create collection from a Polars DataFrame. Must have 'left', 'right', and 'output' columns.
Source code in src/fenic/core/types/semantic_examples.py
LMMetrics
dataclass
LMMetrics(num_uncached_input_tokens: int = 0, num_cached_input_tokens: int = 0, num_output_tokens: int = 0, cost: float = 0.0, num_requests: int = 0)
Tracks language model usage metrics including token counts and costs.
Attributes:
-
num_uncached_input_tokens
(int
) –Number of uncached tokens in the prompt/input
-
num_cached_input_tokens
(int
) –Number of cached tokens in the prompt/input
-
num_output_tokens
(int
) –Number of tokens in the completion/output
-
cost
(float
) –Total cost in USD for the LM API call
Lineage
Lineage(lineage: BaseLineage)
Query interface for tracing data lineage through a query plan.
This class allows you to navigate through the query plan both forwards and backwards, tracing how specific rows are transformed through each operation.
Example
# Create a lineage query starting from the root
query = LineageQuery(lineage, session.execution)
# Or start from a specific source
query.start_from_source("my_table")
# Trace rows backwards through a transformation
result = query.backwards(["uuid1", "uuid2"])
# Trace rows forward to see their outputs
result = query.forwards(["uuid3", "uuid4"])
Initialize a Lineage instance.
Parameters:
-
lineage
(BaseLineage
) –The underlying lineage implementation.
Methods:
-
backwards
–Trace rows backwards to see which input rows produced them.
-
forwards
–Trace rows forward to see how they are transformed by the next operation.
-
get_result_df
–Get the result of the query as a Polars DataFrame.
-
get_source_df
–Get a query source by name as a Polars DataFrame.
-
get_source_names
–Get the names of all sources in the query plan. Used to determine where to start the lineage traversal.
-
show
–Print the operator tree of the query.
-
skip_backwards
–[Not Implemented] Trace rows backwards through multiple operations at once.
-
skip_forwards
–[Not Implemented] Trace rows forward through multiple operations at once.
-
start_from_source
–Set the current position to a specific source in the query plan.
Source code in src/fenic/api/lineage.py
backwards
backwards(ids: List[str], branch_side: Optional[BranchSide] = None) -> pl.DataFrame
Trace rows backwards to see which input rows produced them.
Parameters:
-
ids
(List[str]
) –List of UUIDs identifying the rows to trace back
-
branch_side
(Optional[BranchSide]
, default:None
) –For operators with multiple inputs (like joins), specify which input to trace ("left" or "right"). Not needed for single-input operations.
Returns:
-
DataFrame
–DataFrame containing the source rows that produced the specified outputs
Raises:
-
ValueError
–If invalid ids format or incorrect branch_side specification
Example
# Simple backward trace
source_rows = query.backwards(["result_uuid1"])
# Trace back through a join
left_rows = query.backwards(["join_uuid1"], branch_side="left")
right_rows = query.backwards(["join_uuid1"], branch_side="right")
Source code in src/fenic/api/lineage.py
forwards
forwards(row_ids: List[str]) -> pl.DataFrame
Trace rows forward to see how they are transformed by the next operation.
Parameters:
-
row_ids
(List[str]
) –List of UUIDs identifying the rows to trace
Returns:
-
DataFrame
–DataFrame containing the transformed rows in the next operation
Raises:
-
ValueError
–If at root node or if row_ids format is invalid
Example
# Trace how specific customer rows are transformed
transformed = query.forwards(["customer_uuid1", "customer_uuid2"])
Source code in src/fenic/api/lineage.py
get_result_df
get_result_df() -> pl.DataFrame
Get the result of the query as a Polars DataFrame.
Source code in src/fenic/api/lineage.py
get_source_df
get_source_df(source_name: str) -> pl.DataFrame
Get a query source by name as a Polars DataFrame.
Source code in src/fenic/api/lineage.py
get_source_names
get_source_names() -> List[str]
Get the names of all sources in the query plan. Used to determine where to start the lineage traversal.
Source code in src/fenic/api/lineage.py
show
show() -> None
Print the operator tree of the query.
Source code in src/fenic/api/lineage.py
skip_backwards
skip_backwards(ids: List[str]) -> Dict[str, pl.DataFrame]
[Not Implemented] Trace rows backwards through multiple operations at once.
This method will allow efficient tracing through multiple operations without intermediate results.
Parameters:
-
ids
(List[str]
) –List of UUIDs identifying the rows to trace back
Returns:
-
Dict[str, DataFrame]
–Dictionary mapping operation names to their source DataFrames
Raises:
-
NotImplementedError
–This method is not yet implemented
Source code in src/fenic/api/lineage.py
skip_forwards
skip_forwards(row_ids: List[str]) -> pl.DataFrame
[Not Implemented] Trace rows forward through multiple operations at once.
This method will allow efficient tracing through multiple operations without intermediate results.
Parameters:
-
row_ids
(List[str]
) –List of UUIDs identifying the rows to trace
Returns:
-
DataFrame
–DataFrame containing the final transformed rows
Raises:
-
NotImplementedError
–This method is not yet implemented
Source code in src/fenic/api/lineage.py
start_from_source
start_from_source(source_name: str) -> None
Set the current position to a specific source in the query plan.
Parameters:
-
source_name
(str
) –Name of the source table to start from
Example
query.start_from_source("customers")
# Now you can trace forward from the customers table
Source code in src/fenic/api/lineage.py
MapExample
Bases: BaseModel
A single semantic example for semantic mapping operations.
Map examples demonstrate the transformation of input variables to a specific output string used in a semantic.map operation.
MapExampleCollection
MapExampleCollection(examples: List[ExampleType] = None)
Bases: BaseExampleCollection[MapExample]
Collection of examples for semantic mapping operations.
Map operations transform input variables into a text output according to specified instructions. This collection manages examples that demonstrate the expected transformations for different inputs.
Examples in this collection can have multiple input variables, each mapped to their respective values, with a single output string representing the expected transformation result.
Methods:
-
from_polars
–Create collection from a Polars DataFrame. Must have an 'output' column and at least one input column.
Source code in src/fenic/core/types/semantic_examples.py
from_polars
classmethod
from_polars(df: DataFrame) -> MapExampleCollection
Create collection from a Polars DataFrame. Must have an 'output' column and at least one input column.
Source code in src/fenic/core/types/semantic_examples.py
OpenAIModelConfig
Bases: BaseModel
Configuration for OpenAI models.
This class defines the configuration settings for OpenAI language and embedding models, including model selection and rate limiting parameters.
Attributes:
-
model_name
(Union[OPENAI_AVAILABLE_LANGUAGE_MODELS, OPENAI_AVAILABLE_EMBEDDING_MODELS]
) –The name of the OpenAI model to use.
-
rpm
(int
) –Requests per minute limit; must be greater than 0.
-
tpm
(int
) –Tokens per minute limit; must be greater than 0.
Examples:
Configuring an OpenAI Language model with rate limits:
config = OpenAIModelConfig(model_name="gpt-4.1-nano", rpm=100, tpm=100)
Configuring an OpenAI Embedding model with rate limits:
config = OpenAIModelConfig(model_name="text-embedding-3-small", rpm=100, tpm=100)
OperatorMetrics
dataclass
OperatorMetrics(operator_id: str, num_output_rows: int = 0, execution_time_ms: float = 0.0, lm_metrics: LMMetrics = LMMetrics(), rm_metrics: RMMetrics = RMMetrics(), is_cache_hit: bool = False)
Metrics for a single operator in the query execution plan.
Attributes:
-
operator_id
(str
) –Unique identifier for the operator
-
num_output_rows
(int
) –Number of rows output by this operator
-
execution_time_ms
(float
) –Execution time in milliseconds
-
lm_metrics
(LMMetrics
) –Language model usage metrics for this operator
-
is_cache_hit
(bool
) –Whether results were retrieved from cache
PredicateExample
Bases: BaseModel
A single semantic example for semantic predicate operations.
Predicate examples demonstrate the evaluation of input variables against a specific condition, used in a semantic.predicate operation.
PredicateExampleCollection
PredicateExampleCollection(examples: List[ExampleType] = None)
Bases: BaseExampleCollection[PredicateExample]
Collection of examples for semantic predicate operations.
Predicate operations evaluate conditions on input variables to produce boolean (True/False) results. This collection manages examples that demonstrate the expected boolean outcomes for different inputs.
Examples in this collection can have multiple input variables, each mapped to their respective values, with a single boolean output representing the evaluation result of the predicate.
Methods:
-
from_polars
–Create collection from a Polars DataFrame.
Source code in src/fenic/core/types/semantic_examples.py
from_polars
classmethod
from_polars(df: DataFrame) -> PredicateExampleCollection
Create collection from a Polars DataFrame.
Source code in src/fenic/core/types/semantic_examples.py
QueryMetrics
dataclass
QueryMetrics(execution_time_ms: float = 0.0, num_output_rows: int = 0, total_lm_metrics: LMMetrics = LMMetrics(), total_rm_metrics: RMMetrics = RMMetrics(), _operator_metrics: Dict[str, OperatorMetrics] = dict(), _plan_repr: PhysicalPlanRepr = lambda: PhysicalPlanRepr(operator_id='empty')())
Comprehensive metrics for an executed query.
Includes overall statistics and detailed metrics for each operator in the execution plan.
Attributes:
-
execution_time_ms
(float
) –Total query execution time in milliseconds
-
num_output_rows
(int
) –Total number of rows returned by the query
-
total_lm_metrics
(LMMetrics
) –Aggregated language model metrics across all operators
Methods:
-
get_execution_plan_details
–Generate a formatted execution plan with detailed metrics.
-
get_summary
–Summarize the query metrics in a single line.
get_execution_plan_details
get_execution_plan_details() -> str
Generate a formatted execution plan with detailed metrics.
Produces a hierarchical representation of the query execution plan, including performance metrics and language model usage for each operator.
Returns:
-
str
(str
) –A formatted string showing the execution plan with metrics.
Source code in src/fenic/core/metrics.py
get_summary
get_summary() -> str
Summarize the query metrics in a single line.
Returns:
-
str
(str
) –A concise summary of execution time, row count, and LM cost.
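For illustration, a usage sketch; the exact summary text is backend-dependent. Write actions return QueryMetrics, as documented in the DataFrameWriter section above:
metrics = df.write.parquet("output.parquet")
print(metrics.get_summary())
print(metrics.get_execution_plan_details())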
Source code in src/fenic/core/metrics.py
QueryResult
dataclass
QueryResult(data: DataLike, metrics: QueryMetrics)
Container for query execution results and associated metadata.
This dataclass bundles together the materialized data from a query execution along with metrics about the execution process. It provides a unified interface for accessing both the computed results and performance information.
Attributes:
-
data
(DataLike
) –The materialized query results in the requested format. Can be any of the supported data types (Polars/Pandas DataFrame, Arrow Table, or Python dict/list structures).
-
metrics
(QueryMetrics
) –Execution metadata including timing information, memory usage, rows processed, and other performance metrics collected during query execution.
Access query results and metrics
# Execute query and get results with metrics
result = df.filter(col("age") > 25).collect("pandas")
pandas_df = result.data # Access the Pandas DataFrame
print(result.metrics.execution_time_ms) # Access execution time in ms
print(result.metrics.num_output_rows) # Access row count
Work with different data formats
# Get results in different formats
polars_result = df.collect("polars")
arrow_result = df.collect("arrow")
dict_result = df.collect("pydict")
# All contain the same data, different formats
print(type(polars_result.data)) # <class 'polars.DataFrame'>
print(type(arrow_result.data)) # <class 'pyarrow.lib.Table'>
print(type(dict_result.data)) # <class 'dict'>
Note
The actual type of the data attribute depends on the format requested during collection. Use type checking or isinstance() if you need to handle the data differently based on its format.
RMMetrics
dataclass
RMMetrics(num_input_tokens: int = 0, num_requests: int = 0, cost: float = 0.0)
Tracks embedding model usage metrics including token counts and costs.
Attributes:
-
num_input_tokens
(int
) –Number of tokens to embed
-
cost
(float
) –Total cost in USD to embed the tokens
Schema
Represents the schema of a DataFrame.
A Schema defines the structure of a DataFrame by specifying an ordered collection of column fields. Each column field defines the name and data type of a column in the DataFrame.
Attributes:
-
column_fields
(List[ColumnField]
) –An ordered list of ColumnField objects that define the structure of the DataFrame.
Methods:
-
column_names
–Get a list of all column names in the schema.
column_names
column_names() -> List[str]
Get a list of all column names in the schema.
Returns:
-
List[str]
–A list of strings containing the names of all columns in the schema.
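For illustration, a minimal sketch using the ColumnField and type names shown in the reader examples above:
schema = Schema([
    ColumnField(name="id", data_type=IntegerType),
    ColumnField(name="name", data_type=StringType),
])
schema.column_names()
# ["id", "name"]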
Source code in src/fenic/core/types/schema.py
SemanticConfig
Bases: BaseModel
Configuration for semantic language and embedding models.
This class defines the configuration for both language models and optional embedding models used in semantic operations. It ensures that all configured models are valid and supported by their respective providers.
Attributes:
-
language_models
(dict[str, ModelConfig]
) –Mapping of model aliases to language model configurations.
-
default_language_model
(Optional[str]
) –The alias of the default language model to use for semantic operations. Not required if only one language model is configured.
-
embedding_models
(Optional[dict[str, ModelConfig]]
) –Optional mapping of model aliases to embedding model configurations.
-
default_embedding_model
(Optional[str]
) –The alias of the default embedding model to use for semantic operations.
Note
The embedding model is optional and only required for operations that need semantic search or embedding capabilities.
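Configure a single language model
# Sketch only: OpenAIModelConfig's constructor arguments are illustrative
# and not verified here
semantic = SemanticConfig(
    language_models={"default_lm": OpenAIModelConfig(model_name="gpt-4o-mini")},
)
# With exactly one language model configured, default_language_model is
# filled in automatically by model_post_init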
Methods:
-
model_post_init
–Post initialization hook to set defaults.
-
validate_models
–Validates that the selected models are supported by the system.
model_post_init
model_post_init(__context) -> None
Post initialization hook to set defaults.
This hook runs after the model is initialized and validated. It sets the default language and embedding models if they are not set and there is only one model available.
Source code in src/fenic/api/session/config.py
validate_models
validate_models() -> SemanticConfig
Validates that the selected models are supported by the system.
This validator checks that both the language model and embedding model (if provided) are valid and supported by their respective providers.
Returns:
-
SemanticConfig
–The validated SemanticConfig instance.
Raises:
-
ConfigurationError
–If any of the models are not supported.
Source code in src/fenic/api/session/config.py
SemanticExtensions
SemanticExtensions(df: DataFrame)
A namespace for semantic dataframe operators.
Initialize semantic extensions.
Parameters:
-
df
(DataFrame
) –The DataFrame to extend with semantic operations.
Methods:
-
join
–Performs a semantic join between two DataFrames using a natural language predicate.
-
sim_join
–Performs a semantic similarity join between two DataFrames using embedding expressions.
-
with_cluster_labels
–Cluster rows using K-means and add cluster metadata columns.
Source code in src/fenic/api/dataframe/semantic_extensions.py
join
join(other: DataFrame, join_instruction: str, examples: Optional[JoinExampleCollection] = None, model_alias: Optional[str] = None) -> DataFrame
Performs a semantic join between two DataFrames using a natural language predicate that evaluates to either True or False for each potential row pair.
The join works by:
1. Evaluating the provided join_instruction as a boolean predicate for each possible pair of rows
2. Including ONLY the row pairs where the predicate evaluates to True in the result set
3. Excluding all row pairs where the predicate evaluates to False
The instruction must reference exactly two columns, one from each DataFrame, using the :left and :right suffixes to indicate column origin.
This is useful when row pairing decisions require complex reasoning based on a custom predicate rather than simple equality or similarity matching.
Parameters:
-
other
(DataFrame
) –The DataFrame to join with.
-
join_instruction
(str
) –A natural language description of how to match values.
- Must include one placeholder from the left DataFrame (e.g. {resume_summary:left}) and one from the right (e.g. {job_description:right}).
- This instruction is evaluated as a boolean predicate: pairs where it's True are included, pairs where it's False are excluded.
-
examples
(Optional[JoinExampleCollection]
, default:None
) –Optional JoinExampleCollection containing labeled pairs (left, right, output) to guide the semantic join behavior.
-
model_alias
(Optional[str]
, default:None
) –Optional alias for the language model to use for the join. If None, will use the language model configured as the default.
Returns:
-
DataFrame
(DataFrame
) –A new DataFrame containing only the row pairs where the join_instruction predicate evaluates to True.
Raises:
-
TypeError
–If other is not a DataFrame or join_instruction is not a string.
-
ValueError
–If the instruction format is invalid or references invalid columns.
Basic semantic join
# Match job listings with candidate resumes based on title/skills
# Only includes pairs where the predicate evaluates to True
df_jobs.semantic.join(df_resumes,
join_instruction="Given a candidate's resume_summary: {resume_summary:left} and a job description: {job_description:right}, does the candidate have the appropriate skills for the job?"
)
Semantic join with examples
# Improve join quality with examples
examples = JoinExampleCollection()
examples.create_example(JoinExample(
left="5 years experience building backend services in Python using asyncio, FastAPI, and PostgreSQL",
right="Senior Software Engineer - Backend",
output=True)) # This pair WILL be included in similar cases
examples.create_example(JoinExample(
left="5 years experience with growth strategy, private equity due diligence, and M&A",
right="Product Manager - Hardware",
output=False)) # This pair will NOT be included in similar cases
df_jobs.semantic.join(df_resumes,
join_instruction="Given a candidate's resume_summary: {resume_summary:left} and a job description: {job_description:right}, does the candidate have the appropriate skills for the job?",
examples=examples)
Source code in src/fenic/api/dataframe/semantic_extensions.py
sim_join
sim_join(other: DataFrame, left_on: ColumnOrName, right_on: ColumnOrName, k: int = 1, similarity_metric: SemanticSimilarityMetric = 'cosine', similarity_score_column: Optional[str] = None) -> DataFrame
Performs a semantic similarity join between two DataFrames using embedding expressions.
For each row in the left DataFrame, returns the top k most semantically similar rows from the right DataFrame based on the specified similarity metric.
Parameters:
-
other
(DataFrame
) –The right-hand DataFrame to join with.
-
left_on
(ColumnOrName
) –Expression or column representing embeddings in the left DataFrame.
-
right_on
(ColumnOrName
) –Expression or column representing embeddings in the right DataFrame.
-
k
(int
, default:1
) –Number of most similar matches to return per row.
-
similarity_metric
(SemanticSimilarityMetric
, default:'cosine'
) –Similarity metric to use: "l2", "cosine", or "dot".
-
similarity_score_column
(Optional[str]
, default:None
) –If set, adds a column with this name containing similarity scores. If None, the scores are omitted.
Returns:
-
DataFrame
–A DataFrame containing one row for each of the top-k matches per row in the left DataFrame. The result includes all columns from both DataFrames, optionally augmented with a similarity score column if similarity_score_column is provided.
Raises:
-
ValidationError
–If k is not positive or if the columns are invalid.
-
ValidationError
–If similarity_metric is not one of "l2", "cosine", or "dot".
Match queries to FAQ entries
# Match customer queries to FAQ entries
df_queries.semantic.sim_join(
df_faqs,
left_on=embeddings(col("query_text")),
right_on=embeddings(col("faq_question")),
k=1
)
Link headlines to articles
# Link news headlines to full articles
df_headlines.semantic.sim_join(
df_articles,
left_on=embeddings(col("headline")),
right_on=embeddings(col("content")),
k=3,
similarity_score_column="similarity_score"
)
Find similar job postings
# Find similar job postings across two sources
df_linkedin.semantic.sim_join(
df_indeed,
left_on=embeddings(col("job_title")),
right_on=embeddings(col("job_description")),
k=2
)
Source code in src/fenic/api/dataframe/semantic_extensions.py
with_cluster_labels
with_cluster_labels(by: ColumnOrName, num_clusters: int, label_column: str = 'cluster_label', centroid_column: Optional[str] = None) -> DataFrame
Cluster rows using K-means and add cluster metadata columns.
This method clusters rows based on the given embedding column or expression using K-means. It adds a new column with cluster assignments, and optionally includes the centroid embedding for each assigned cluster.
Parameters:
-
by
(ColumnOrName
) –Column or expression producing embeddings to cluster (e.g., embed(col("text"))).
-
num_clusters
(int
) –Number of clusters to compute (must be > 0).
-
label_column
(str
, default:'cluster_label'
) –Name of the output column for cluster IDs. Default is "cluster_label".
-
centroid_column
(Optional[str]
, default:None
) –If provided, adds a column with this name containing the centroid embedding for each row's assigned cluster.
Returns:
-
DataFrame
–A DataFrame with all original columns plus:
- <label_column>: integer cluster assignment (0 to num_clusters - 1)
- <centroid_column>: cluster centroid embedding, if specified
Raises:
-
ValidationError
–If num_clusters is not a positive integer
-
ValidationError
–If label_column is not a non-empty string
-
ValidationError
–If centroid_column is not a non-empty string
-
TypeMismatchError
–If the column is not an EmbeddingType
Basic clustering
# Cluster customer feedback and add cluster metadata
clustered_df = df.semantic.with_cluster_labels("feedback_embeddings", 5)
# Then use regular operations to analyze clusters
clustered_df.group_by("cluster_label").agg(count("*"), avg("rating"))
Filter outliers using centroids
# Cluster and filter out rows far from their centroid
clustered_df = df.semantic.with_cluster_labels("embeddings", 3, centroid_column="cluster_centroid")
clean_df = clustered_df.filter(
embedding.compute_similarity("embeddings", "cluster_centroid", metric="cosine") > 0.7
)
Source code in src/fenic/api/dataframe/semantic_extensions.py
Session
The entry point to programming with the DataFrame API. Similar to PySpark's SparkSession.
Create a session with default configuration
session = Session.get_or_create(SessionConfig(app_name="my_app"))
Create a session with cloud configuration
config = SessionConfig(
app_name="my_app",
cloud=True,
api_key="your_api_key"
)
session = Session.get_or_create(config)
Methods:
-
create_dataframe
–Create a DataFrame from a variety of Python-native data formats.
-
get_or_create
–Gets an existing Session or creates a new one with the configured settings.
-
sql
–Execute a read-only SQL query against one or more DataFrames using named placeholders.
-
stop
–Stops the session and closes all connections.
-
table
–Returns the specified table as a DataFrame.
Attributes:
-
catalog
(Catalog
) –Interface for catalog operations on the Session.
-
read
(DataFrameReader
) –Returns a DataFrameReader that can be used to read data in as a DataFrame.
catalog
property
catalog: Catalog
Interface for catalog operations on the Session.
read
property
read: DataFrameReader
Returns a DataFrameReader that can be used to read data in as a DataFrame.
Returns:
-
DataFrameReader
(DataFrameReader
) –A reader interface to read data into DataFrame
Raises:
-
RuntimeError
–If the session has been stopped
create_dataframe
create_dataframe(data: DataLike) -> DataFrame
Create a DataFrame from a variety of Python-native data formats.
Parameters:
-
data
(DataLike
) –Input data. Must be one of: - Polars DataFrame - Pandas DataFrame - dict of column_name -> list of values - list of dicts (each dict representing a row) - pyarrow Table
Returns:
-
DataFrame
–A new DataFrame instance
Raises:
-
ValueError
–If the input format is unsupported or inconsistent with provided column names.
Create from Polars DataFrame
import polars as pl
df = pl.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})
session.create_dataframe(df)
Create from Pandas DataFrame
import pandas as pd
df = pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})
session.create_dataframe(df)
Create from dictionary
session.create_dataframe({"col1": [1, 2], "col2": ["a", "b"]})
Create from list of dictionaries
session.create_dataframe([
{"col1": 1, "col2": "a"},
{"col1": 2, "col2": "b"}
])
Create from pyarrow Table
import pyarrow as pa
table = pa.Table.from_pydict({"col1": [1, 2], "col2": ["a", "b"]})
session.create_dataframe(table)
Source code in src/fenic/api/session/session.py
get_or_create
classmethod
get_or_create(config: SessionConfig) -> Session
Gets an existing Session or creates a new one with the configured settings.
Returns:
-
Session
–A Session instance configured with the provided settings
Source code in src/fenic/api/session/session.py
sql
sql(query: str, /, **tables: DataFrame) -> DataFrame
Execute a read-only SQL query against one or more DataFrames using named placeholders.
This allows you to execute ad hoc SQL queries using familiar syntax when it's more convenient than the DataFrame API.
Placeholders in the SQL string (e.g. {df}) should correspond to keyword arguments (e.g. df=my_dataframe).
For supported SQL syntax and functions, refer to the DuckDB SQL documentation: https://duckdb.org/docs/sql/introduction.
Parameters:
-
query
(str
) –A SQL query string with placeholders like
{df}
-
**tables
(DataFrame
, default:{}
) –Keyword arguments mapping placeholder names to DataFrames
Returns:
-
DataFrame
–A lazy DataFrame representing the result of the SQL query
Raises:
-
ValidationError
–If a placeholder is used in the query but not passed as a keyword argument
Simple join between two DataFrames
df1 = session.create_dataframe({"id": [1, 2]})
df2 = session.create_dataframe({"id": [2, 3]})
result = session.sql(
"SELECT * FROM {df1} JOIN {df2} USING (id)",
df1=df1,
df2=df2
)
Complex query with multiple DataFrames
users = session.create_dataframe({"user_id": [1, 2], "name": ["Alice", "Bob"]})
orders = session.create_dataframe({"order_id": [1, 2], "user_id": [1, 2], "product_id": [1, 2]})
products = session.create_dataframe({"product_id": [1, 2], "name": ["Widget", "Gadget"]})
result = session.sql("""
SELECT u.name, p.name as product
FROM {users} u
JOIN {orders} o ON u.user_id = o.user_id
JOIN {products} p ON o.product_id = p.product_id
""", users=users, orders=orders, products=products)
Source code in src/fenic/api/session/session.py
stop
stop()
Stops the session and closes all connections.
Source code in src/fenic/api/session/session.py
table
table(table_name: str) -> DataFrame
Returns the specified table as a DataFrame.
Parameters:
-
table_name
(str
) –Name of the table
Returns:
-
DataFrame
–Table as a DataFrame
Raises:
-
ValueError
–If the table does not exist
Load an existing table
df = session.table("my_table")
Source code in src/fenic/api/session/session.py
SessionConfig
Bases: BaseModel
Configuration for a user session.
This class defines the complete configuration for a user session, including application settings, model configurations, and optional cloud settings. It serves as the central configuration object for all language model operations.
Attributes:
-
app_name
(str
) –Name of the application using this session. Defaults to "default_app".
-
db_path
(Optional[Path]
) –Optional path to a local database file for persistent storage.
-
semantic
(SemanticConfig
) –Configuration for semantic models (required).
-
cloud
(Optional[CloudConfig]
) –Optional configuration for cloud execution.
Note
The semantic configuration is required as it defines the language models that will be used for processing. The cloud configuration is optional and only needed for distributed processing.
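Build a session configuration
# Minimal sketch: assumes `semantic` is a SemanticConfig like the one built above
config = SessionConfig(
    app_name="my_app",
    semantic=semantic,
)
session = Session.get_or_create(config)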
StructField
A field in a StructType. Fields are nullable.
Attributes:
-
name
(str
) –The name of the field.
-
data_type
(DataType
) –The data type of the field.
StructType
Bases: DataType
A type representing a struct (record) with named fields.
Attributes:
-
fields
–List of field definitions.
Create a struct with name and age fields
StructType([
StructField("name", StringType),
StructField("age", IntegerType),
])
TranscriptType
Bases: _StringBackedType
Represents a string containing a transcript in a specific format.
array
array(*args: Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]) -> Column
Creates a new array column from multiple input columns.
Parameters:
-
*args
(Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]
, default:()
) –Columns or column names to combine into an array. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
-
Column
–A Column expression representing an array containing values from the input columns
Raises:
-
TypeError
–If any argument is not a Column, string, or collection of Columns/strings
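Build arrays from columns
# Illustrative column names; columns may be passed individually or as a list
df.select(array("tag1", "tag2"))
df.select(array([col("tag1"), col("tag2")]))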
Source code in src/fenic/api/functions/builtin.py
array_agg
array_agg(column: ColumnOrName) -> Column
Alias for collect_list().
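Collect grouped values into arrays
# Illustrative column names; behaves identically to collect_list()
df.group_by("order_id").agg(array_agg("item"))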
Source code in src/fenic/api/functions/builtin.py
array_contains
array_contains(column: ColumnOrName, value: Union[str, int, float, bool, Column]) -> Column
Checks if array column contains a specific value.
This function returns True if the array in the specified column contains the given value, and False otherwise. Returns False if the array is None.
Parameters:
-
column
(ColumnOrName
) –Column or column name containing the arrays to check.
-
value
(Union[str, int, float, bool, Column]
) –Value to search for in the arrays. Can be: - A literal value (string, number, boolean) - A Column expression
Returns:
-
Column
–A boolean Column expression (True if value is found, False otherwise).
Raises:
-
TypeError
–If value type is incompatible with the array element type.
-
TypeError
–If the column does not contain array data.
Check for values in arrays
# Check if 'python' exists in arrays in the 'tags' column
df.select(array_contains("tags", "python"))
# Check using a value from another column
df.select(array_contains("tags", col("search_term")))
Source code in src/fenic/api/functions/builtin.py
array_size
array_size(column: ColumnOrName) -> Column
Returns the number of elements in an array column.
This function computes the length of arrays stored in the specified column. Returns None if the array itself is None.
Parameters:
-
column
(ColumnOrName
) –Column or column name containing arrays whose length to compute.
Returns:
-
Column
–A Column expression representing the array length.
Raises:
-
TypeError
–If the column does not contain array data.
Get array sizes
# Get the size of arrays in 'tags' column
df.select(array_size("tags"))
# Use with column reference
df.select(array_size(col("tags")))
Source code in src/fenic/api/functions/builtin.py
asc
asc(column: ColumnOrName) -> Column
Creates a Column expression representing an ascending sort order.
Parameters:
-
column
(ColumnOrName
) –The column to apply the ascending ordering to.
Returns:
-
Column
–A Column expression representing the column and the ascending sort order.
Raises:
-
ValueError
–If the type of the column cannot be inferred.
-
Error
–If this expression is passed to a dataframe operation besides sort() and order_by().
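Sort rows in ascending order
# Illustrative column name; the same pattern applies to asc_nulls_first/asc_nulls_last
df.sort(asc("age"))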
Source code in src/fenic/api/functions/builtin.py
asc_nulls_first
asc_nulls_first(column: ColumnOrName) -> Column
Creates a Column expression representing an ascending sort order with nulls first.
Parameters:
-
column
(ColumnOrName
) –The column to apply the ascending ordering to.
Returns:
-
Column
–A Column expression representing the column and the ascending sort order with nulls first.
Raises:
-
ValueError
–If the type of the column cannot be inferred.
-
Error
–If this expression is passed to a dataframe operation besides sort() and order_by().
Source code in src/fenic/api/functions/builtin.py
asc_nulls_last
asc_nulls_last(column: ColumnOrName) -> Column
Creates a Column expression representing an ascending sort order with nulls last.
Parameters:
-
column
(ColumnOrName
) –The column to apply the ascending ordering to.
Returns:
-
Column
–A Column expression representing the column and the ascending sort order with nulls last.
Raises:
-
ValueError
–If the type of the column cannot be inferred.
-
Error
–If this expression is passed to a dataframe operation besides sort() and order_by().
Source code in src/fenic/api/functions/builtin.py
avg
avg(column: ColumnOrName) -> Column
Aggregate function: returns the average (mean) of all values in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the average of
Returns:
-
Column
–A Column expression representing the average aggregation
Raises:
-
TypeError
–If column is not a Column or string
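Average per group
# Illustrative column names
df.group_by("department").agg(avg("salary"))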
Source code in src/fenic/api/functions/builtin.py
coalesce
coalesce(*cols: ColumnOrName) -> Column
Returns the first non-null value from the given columns for each row.
This function mimics the behavior of SQL's COALESCE function. It evaluates the input columns in order and returns the first non-null value encountered. If all values are null, returns null.
Parameters:
-
*cols
(ColumnOrName
, default:()
) –Column expressions or column names to evaluate. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
-
Column
–A Column expression containing the first non-null value from the input columns.
Raises:
-
ValueError
–If no columns are provided.
Basic coalesce usage
# Basic usage
df.select(coalesce("col1", "col2", "col3"))
# With nested collections
df.select(coalesce(["col1", "col2"], "col3"))
Source code in src/fenic/api/functions/builtin.py
col
col(col_name: str) -> Column
Creates a Column expression referencing a column in the DataFrame.
Parameters:
-
col_name
(str
) –Name of the column to reference
Returns:
-
Column
–A Column expression for the specified column
Raises:
-
TypeError
–If col_name is not a string
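Reference columns in expressions
# Mirrors the usage shown elsewhere in these docs
df.select(col("name"))
df.filter(col("age") > 25)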
Source code in src/fenic/api/functions/core.py
collect_list
collect_list(column: ColumnOrName) -> Column
Aggregate function: collects all values from the specified column into a list.
Parameters:
-
column
(ColumnOrName
) –Column or column name to collect values from
Returns:
-
Column
–A Column expression representing the list aggregation
Raises:
-
TypeError
–If column is not a Column or string
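Collect values per group
# Illustrative column names
df.group_by("user_id").agg(collect_list("purchase"))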
Source code in src/fenic/api/functions/builtin.py
configure_logging
configure_logging(log_level: int = logging.INFO, log_format: str = '%(asctime)s [%(name)s] %(levelname)s: %(message)s', log_stream: Optional[TextIO] = None) -> None
Configure logging for the library and root logger in interactive environments.
This function ensures that logs from the library's modules appear in output by setting up a default handler on the root logger only if one does not already exist. This is especially useful in notebooks, scripts, or REPLs where logging is often unset. It configures the root logger and sets the library's top-level logger to propagate logs to the root.
If the root logger has no handlers, this function sets up a default configuration and silences noisy dependencies like 'openai' and 'httpx'.
In more complex applications or when integrating with existing logging configurations, you might prefer to manage logging setup externally. In such cases, you may not need to call this function.
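Enable logging in a notebook
# Minimal sketch using the documented parameters
import logging
configure_logging(log_level=logging.DEBUG)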
Source code in src/fenic/logging.py
count
count(column: ColumnOrName) -> Column
Aggregate function: returns the count of non-null values in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to count values in
Returns:
-
Column
–A Column expression representing the count aggregation
Raises:
-
TypeError
–If column is not a Column or string
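Count non-null values per group
# Illustrative column names
df.group_by("status").agg(count("order_id"))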
Source code in src/fenic/api/functions/builtin.py
desc
desc(column: ColumnOrName) -> Column
Creates a Column expression representing a descending sort order.
Parameters:
-
column
(ColumnOrName
) –The column to apply the descending ordering to.
Returns:
-
Column
–A Column expression representing the column and the descending sort order.
Raises:
-
ValueError
–If the type of the column cannot be inferred.
-
Error
–If this expression is passed to a dataframe operation besides sort() and order_by().
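Sort rows in descending order
# Illustrative column name; the same pattern applies to desc_nulls_first/desc_nulls_last
df.sort(desc("created_at"))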
Source code in src/fenic/api/functions/builtin.py
desc_nulls_first
desc_nulls_first(column: ColumnOrName) -> Column
Creates a Column expression representing a descending sort order with nulls first.
Parameters:
-
column
(ColumnOrName
) –The column to apply the descending ordering to.
Returns:
-
Column
–A Column expression representing the column and the descending sort order with nulls first.
Raises:
-
ValueError
–If the type of the column cannot be inferred.
-
Error
–If this expression is passed to a dataframe operation besides sort() and order_by().
Source code in src/fenic/api/functions/builtin.py
desc_nulls_last
desc_nulls_last(column: ColumnOrName) -> Column
Creates a Column expression representing a descending sort order with nulls last.
Parameters:
-
column
(ColumnOrName
) –The column to apply the descending ordering to.
Returns:
-
Column
–A Column expression representing the column and the descending sort order with nulls last.
Raises:
-
ValueError
–If the type of the column cannot be inferred.
-
Error
–If this expression is passed to a dataframe operation besides sort() and order_by().
Source code in src/fenic/api/functions/builtin.py
first
first(column: ColumnOrName) -> Column
Aggregate function: returns the first non-null value in the specified column.
Typically used in aggregations to select the first observed value per group.
Parameters:
-
column
(ColumnOrName
) –Column or column name.
Returns:
-
Column
–Column expression for the first value.
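First value per group
# Illustrative column names
df.group_by("user_id").agg(first("login_time"))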
Source code in src/fenic/api/functions/builtin.py
lit
lit(value: Any) -> Column
Creates a Column expression representing a literal value.
Parameters:
-
value
(Any
) –The literal value to create a column for
Returns:
-
Column
–A Column expression representing the literal value
Raises:
-
ValueError
–If the type of the value cannot be inferred
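Create literal columns
# Literal values are wrapped into Column expressions
df.select(lit(1), lit("active"))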
Source code in src/fenic/api/functions/core.py
max
max(column: ColumnOrName) -> Column
Aggregate function: returns the maximum value in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the maximum of
Returns:
-
Column
–A Column expression representing the maximum aggregation
Raises:
-
TypeError
–If column is not a Column or string
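Maximum per group
# Illustrative column names
df.group_by("region").agg(max("temperature"))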
Source code in src/fenic/api/functions/builtin.py
mean
mean(column: ColumnOrName) -> Column
Aggregate function: returns the mean (average) of all values in the specified column.
Alias for avg().
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the mean of
Returns:
-
Column
–A Column expression representing the mean aggregation
Raises:
-
TypeError
–If column is not a Column or string
Source code in src/fenic/api/functions/builtin.py
min
min(column: ColumnOrName) -> Column
Aggregate function: returns the minimum value in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the minimum of
Returns:
-
Column
–A Column expression representing the minimum aggregation
Raises:
-
TypeError
–If column is not a Column or string
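Minimum per group
# Illustrative column names
df.group_by("region").agg(min("price"))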
Source code in src/fenic/api/functions/builtin.py
stddev
stddev(column: ColumnOrName) -> Column
Aggregate function: returns the sample standard deviation of the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name.
Returns:
-
Column
–Column expression for sample standard deviation.
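Sample standard deviation per group
# Illustrative column names
df.group_by("sensor_id").agg(stddev("reading"))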
Source code in src/fenic/api/functions/builtin.py
struct
struct(*args: Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]) -> Column
Creates a new struct column from multiple input columns.
Parameters:
-
*args
(Union[ColumnOrName, List[ColumnOrName], Tuple[ColumnOrName, ...]]
, default:()
) –Columns or column names to combine into a struct. Can be:
- Individual arguments
- Lists of columns/column names
- Tuples of columns/column names
Returns:
-
Column
–A Column expression representing a struct containing the input columns
Raises:
-
TypeError
–If any argument is not a Column, string, or collection of Columns/strings
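Combine columns into a struct
# Illustrative column names
df.select(struct("name", "age"))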
Source code in src/fenic/api/functions/builtin.py
sum
sum(column: ColumnOrName) -> Column
Aggregate function: returns the sum of all values in the specified column.
Parameters:
-
column
(ColumnOrName
) –Column or column name to compute the sum of
Returns:
-
Column
–A Column expression representing the sum aggregation
Raises:
-
TypeError
–If column is not a Column or string
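Sum per group
# Illustrative column names
df.group_by("customer_id").agg(sum("amount"))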
Source code in src/fenic/api/functions/builtin.py
udf
udf(f: Optional[Callable] = None, *, return_type: DataType)
A decorator or function for creating user-defined functions (UDFs) that can be applied to DataFrame rows.
When applied, UDFs will:
- Access StructType columns as Python dictionaries (dict[str, Any]).
- Access ArrayType columns as Python lists (list[Any]).
- Access primitive types (e.g., int, float, str) as their respective Python types.
Parameters:
-
f
(Optional[Callable]
, default:None
) –Python function to convert to UDF
-
return_type
(DataType
) –Expected return type of the UDF. Required parameter.
UDF with primitive types
# UDF with primitive types
@udf(return_type=IntegerType)
def add_one(x: int):
return x + 1
# Or
add_one = udf(lambda x: x + 1, return_type=IntegerType)
UDF with nested types
# UDF with nested types
@udf(return_type=StructType([StructField("value1", IntegerType), StructField("value2", IntegerType)]))
def example_udf(x: dict[str, int], y: list[int]):
return {
"value1": x["value1"] + x["value2"] + y[0],
"value2": x["value1"] + x["value2"] + y[1],
}
Source code in src/fenic/api/functions/builtin.py
when
when(condition: Column, value: Column) -> Column
Evaluates a condition and returns a value if true.
This function is used to create conditional expressions. If Column.otherwise() is not invoked, None is returned for unmatched conditions.
Parameters:
-
condition
(Column
) –A boolean Column expression to evaluate.
-
value
(Column
) –A Column expression to return if the condition is true.
Returns:
-
Column
–A Column expression that evaluates the condition and returns the specified value when true,
-
Column
–and None otherwise.
Raises:
-
TypeError
–If the condition is not a boolean Column expression.
Basic conditional expression
# Basic usage
df.select(when(col("age") > 18, lit("adult")))
# With otherwise
df.select(when(col("age") > 18, lit("adult")).otherwise(lit("minor")))
Source code in src/fenic/api/functions/builtin.py